Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k) |
---|---|---|
1,800 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
JUPYTER NOTEBOOK
http
Step1: Observations
Step2: Plotting the least-squares result for polynomials of degree 0 to 9. Which one is a good model? | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pylab as plt
from ipywidgets import *
# Noise variance
var = 0.3
# Training set
train_size = 10
x_train = np.linspace(0,1,train_size)
y_train = np.sin(2*np.pi*x_train) + np.random.normal(0,var,train_size) # signal + noise
# Test set
test_size = 100
x_test= np.linspace(0,1,test_size)
y = np.sin(2*np.pi*x_test)
y_test = y + np.random.normal(0,var,test_size) # signal + noise
# Plot of the noise-free signal and the generated training set
plt.figure()
plt.plot(x_test,y,linewidth = 2.0,label = r'Model: $sin(2 \pi x)$')
plt.scatter(x_train,y_train,color='red',label = "Model + noise")
plt.legend(loc = (0.02, 0.18))
plt.xlabel("x")
plt.ylabel("y")
plt.show()
Explanation: JUPYTER NOTEBOOK
http://jupyter.org/
Project Jupyter is an open source project that was born out of the IPython Project in 2014 as it evolved to support interactive data science and scientific computing across all programming languages. Jupyter will always be 100% open source software, free for all to use and released under the liberal terms of the modified BSD license.
Features
- Browser-based interface
- Notebook sharing (including remotely)
- nbviewer - http://nbviewer.jupyter.org/
- JupyterHub
- GitHub
- Docker
- Support for more than 40 programming languages
- Python
- R
- Julia
- Scala
- etc
- Big data integration
- Apache Spark
- from Python
- R
- Scala
- scikit-learn
- ggplot2
- dplyr
- etc
- Support for $\LaTeX$, videos, and images
- Support documentation - https://jupyter.readthedocs.io/en/latest/index.html
- Interactivity and widgets - http://jupyter.org/widgets.html
- Exports to - https://ipython.org/ipython-doc/3/notebook/nbconvert.html
- latex
- html
- py/ipynb
- PDF
- Imports .py and .ipynb modules
- Tables
Installation
http://jupyter.readthedocs.io/en/latest/install.html
- Linux
- pip
- pip3 install --upgrade pip
- pip3 install jupyter
- Anaconda
- Windows/ macOS
- Anaconda - https://www.continuum.io/downloads
Use Python 2.7 because it is compatible with the vast majority of packages.
If you want to install more than one version of Python, it is better to create multiple environments.
- To be able to export to PDF
- http://pandoc.org/installing.html
USAGE EXAMPLE
Polynomial Curve Fitting
This tutorial aims to explain the concepts of overfitting and regularization through an example of polynomial curve fitting using the least-squares method. Overfitting occurs when the model memorizes the input data, so that it becomes unable to generalize to new data. Regularization is a technique to avoid overfitting.
The tutorial is an adaptation of the example presented in chapter 1 of the book:
"Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA."
End of explanation
# Implementation of the least-squares solution
def polynomial_fit(X,T,M):
A = np.power(X.reshape(-1,1),np.arange(0,M+1).reshape(1,-1))
T = T.reshape(-1,1)
W = np.dot(np.linalg.pinv(A),T)
return W.ravel()
Explanation: Observations: $$\boldsymbol{X} =(x_1,x_2,...,x_N)^T$$
Target: $$\boldsymbol{T} =(t_1,t_2,...,t_N)^T$$
Data
Observations: $$\boldsymbol{X} =(x_1,x_2,...,x_N)^T$$
Target: $$\boldsymbol{T} =(t_1,t_2,...,t_N)^T$$
Model
$$y(x,\boldsymbol{W})= w_0 + w_1x +w_2x^2+...+w_Mx^M = \sum^M_{j=0}w_jx^j$$
Cost function
Quadratic cost function: $$E(\boldsymbol{W})=\frac{1}{2}\sum_{n=1}^N\{y(x_n,\boldsymbol{W})-t_n\}^2$$
Differentiating the cost function and setting the derivative to zero, we obtain the vector $\boldsymbol{W}$ that minimizes the error:
$$ \boldsymbol{W}^{*} = (\boldsymbol{A}^T\boldsymbol{A})^{-1}\boldsymbol{A}^T\boldsymbol{T}$$
where $\boldsymbol{A}$ is defined by:
$$\boldsymbol{A} = \begin{bmatrix}
1 & x_{1} & x_{1}^2 & \dots & x_{1}^M \\
1 & x_{2} & x_{2}^2 & \dots & x_{2}^M \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{N} & x_{N}^2 & \dots & x_{N}^M
\end{bmatrix}$$
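As a quick check (a small illustrative sketch added here, not part of the original notebook), the closed-form normal-equation solution should agree with the pseudo-inverse used in polynomial_fit above; the degree M = 3 is an arbitrary choice:
# Sketch: verify that (A^T A)^{-1} A^T T matches the pseudo-inverse solution
M = 3
A = np.power(x_train.reshape(-1,1), np.arange(0, M+1).reshape(1,-1))
T = y_train.reshape(-1,1)
W_normal = np.linalg.solve(np.dot(A.T, A), np.dot(A.T, T))  # normal equations
W_pinv = np.dot(np.linalg.pinv(A), T)                       # pseudo-inverse, as in polynomial_fit
print(np.allclose(W_normal, W_pinv))                        # expected: True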
End of explanation
def plotmodel(M):
coefs = polynomial_fit(x_train, y_train, M)[::-1]
p = np.poly1d(coefs)
plt.figure()
plt.plot(x_test,y,linewidth = 2.0,label = 'Real')
plt.scatter(x_train,y_train,color='red',label= "Train")
plt.xlabel("x")
plt.ylabel(r'y')
y_fit = p(x_test)
plt.plot(x_test,y_fit,linewidth = 2.0,label ="Estimated")
plt.plot(x_test,y_test,'x',color='black',label = "Test")
plt.legend(loc=(0.02,0.02))
interact(plotmodel,M=(0,9,1))
Explanation: Plotting the least-squares result for polynomials of degree 0 to 9. Which one is a good model?
End of explanation |
1,801 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<figure>
<IMG SRC="../logo/logo.png" WIDTH=250 ALIGN="right">
</figure>
Strip of land with same rise at both sides
@Theo Olsthoorn
2019-12-21
This exercise was done in class on 2019-01-10
Step1: The uniform aquifer of half-infinite extent ($0 \le x \le \infty$)
The solution is valid for a uniform aquifer of half infinite extent, i.e. $x \ge 0$ and $kD$ and $S$ are constant. Further $s(0, x) = 0$. And we have as boundary condition that $s(t, 0) = A$ for $t \ge 0$
Step2: Strip of width $L = 2b$, head $s(t, x=\pm b) = A$ for $t>0$ and $s(t=0, x) = 0$
A strip of limited width requires mirroring to ensure that the head at each of the two boundaries $x = \pm b$ remains at the desired value. In this case, we implement the situation where the head at both ends of the strip is suddenly raised to $A$ at $t=0$ and is kept at that value thereafter.
By starting with $s(0, x) = A$ and subtracting the solution, we get the situation where the head starts at $A$ and is suddenly lowered to $0$ at $t=0$. This allows comparison with the example hereafter.
We show the result (head as a function of $x$) for different times. The times are chosen equal to a multiple of the half-time $T_{50\%} \approx 0.28 \frac {b^2 S} {kD}$, so that the head of each next line should be reduced by 50\% relative to the previous time.
$$ s(x, t) = A \sum _{i=1} ^\infty \left\{
(-1) ^{i-1} \left[
\mathtt{erfc}\left( \left( (2 i -1) b + x \right) \sqrt {\frac S {4 kD t}} \right)
+
\mathtt{erfc}\left( \left( (2 i -1) b - x \right) \sqrt {\frac S {4 kD t}} \right) \right]
\right\} $$
Step3: Symmetrical solution of a draining strip of land
This solution describes the head in a strip of land of width $L = 2b$ where the initial head is everywhere equal to $A$ and where the head at $x = \pm b$ is suddenly lowered to zero at $t=0$. Hence, the groundwater will gradually drain until the head reaches zero everywhere as $t \rightarrow\infty$. Therefore, we should get exactly the same result as in the previous example, although the solution looks completely different mathematically.
$$ s(x, t) = A \frac 4 \pi \sum _{i=1} ^\infty \left\{
\frac {(-1)^{i-1}} {2i - 1} \cos \left[ (2 i - 1) \left( \frac \pi 2\right) \frac x b \right] \exp \left[ -(2 i - 1)^2 \left( \frac \pi 2 \right) ^2
\frac {kD } {b^2 S} t \right] \right\} $$ | Python Code:
# import modules we need
import matplotlib.pyplot as plt
import numpy as np
from scipy.special import erfc
Explanation: <figure>
<IMG SRC="../logo/logo.png" WIDTH=250 ALIGN="right">
</figure>
Strip of land with same rise at both sides
@Theo Olsthoorn
2019-12-21
This exercise was done in class on 2019-01-10
End of explanation
kD = 900 # m2/d
S = 0.2 # [-]
L = 2000 # m
b = L/2 # half width
A = 2. # m
x = np.linspace(0, 0.6 * b, 101)
times = [0.1, 0.25, 0.5, 1, 2, 4, 8] # d
plt.title('Half infinite aquifer, sudden change')
plt.xlabel('x [m]')
plt.ylabel('head [m]')
plt.grid()
for t in times:
s = A * erfc (x * np.sqrt(S / (4 * kD * t)))
plt.plot(x, s, label='t={:.2f} d'.format(t))
plt.legend(loc='center left')
plt.show()
Explanation: The uniform aquifer of half-infinite extent ($0 \le x \le \infty$)
The solution is valid for a uniform aquifer of half infinite extent, i.e. $x \ge 0$ and $kD$ and $S$ are constant. Further $s(0, x) = 0$. And we have as boundary condition that $s(t, 0) = A$ for $t \ge 0$:
$$ s(x, t) = A \, \mathtt{erfc} \left( u \right)$$
where
$$ u = \sqrt {\frac {x^2 S} {4 kD t} } = x \sqrt{ \frac {S} {4 kD t} }, \,\,\, x \ge 0 $$
The erfc function is a well-known function in engineering, which is derived from statistics. It is mathematically defined as
$$ \mathtt{erfc}(z) = \frac 2 {\sqrt{\pi}} \intop _z ^{\infty} e^{-y^2} dy $$
We don't have to implement it, because it is available in scipy.special as erfc.
However, we need the mathematical expression in order to take its derivative, which we need to compute the flow
$$ Q = -kD \frac {\partial s} {\partial x} $$
$$ = -kD \, A \, \frac {\partial \mathtt{erfc} (u)} {\partial x} $$
$$ = kD \, A \,\frac 2 {\sqrt \pi} e^{-u^2} \frac {\partial u} {\partial x} $$
$$ = kD \, A \, \frac 2 {\sqrt \pi} e^{-u^2} \sqrt {\frac S {4 kD t}} $$
$$ Q = A \,\sqrt {\frac {kD S} {\pi t}}e^{-u^2} $$
A half-infinite aquifer first
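As a small illustration (a sketch added here, not part of the original notebook), the inflow at $x = 0$ implied by this last expression, $Q(0, t) = A \sqrt{kD S / (\pi t)}$, can be plotted with the parameters kD, S and A defined in the code above:
# Sketch: inflow at x = 0 using the kD, S and A defined above
t = np.linspace(0.1, 8, 100)            # times in days
Q0 = A * np.sqrt(kD * S / (np.pi * t))  # inflow in m2/d
plt.plot(t, Q0)
plt.xlabel('t [d]')
plt.ylabel('Q(0, t) [m2/d]')
plt.title('Inflow at x = 0 (sketch)')
plt.show()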
End of explanation
plt.title('strip of width {}'.format(L))
plt.xlabel('x [m]')
plt.ylabel('s [m]')
plt.grid()
T50 = 0.28 * b**2 * S / kD # halftime of the head decline
times = np.array([0.1, 1, 2, 3, 4, 5, 6]) * T50 # multiple halftimes
x = np.linspace(-b, b, 101)
for t in times:
s = A + np.zeros_like(x)
for i in range(1, 20):
si = A *(-1)**(i-1) * (
erfc(((2 * i - 1) * b + x) * np.sqrt(S / (4 * kD * t)))
+ erfc(((2 * i - 1) * b - x) * np.sqrt(S / (4 * kD * t)))
)
s -= si
plt.plot(x, s, label='t={:.2f} d'.format(t))
plt.legend()
plt.show()
Explanation: Strip of width $L = 2b$, head $s(t, x=\pm b) = A$ for $t>0$ and $s(t=0, x) = 0$
A strip of limited width requires mirroring to ensure that the head at each of the two boundaries $x = \pm b$ remains at the desired value. In this case, we implement the situation where the head at both ends of the strip is suddenly raised to $A$ at $t=0$ and is kept at that value thereafter.
By starting with $s(0, x) = A$ and subtracting the solution, we get the situation where the head starts at $A$ and is suddenly lowered to $0$ at $t=0$. This allows comparison with the example hereafter.
We show the result (head as a function of $x$) for different times. The times are chosen equal to a multiple of the half-time $T_{50\%} \approx 0.28 \frac {b^2 S} {kD}$, so that the head of each next line should be reduced by 50\% relative to the previous time.
$$ s(x, t) = A \sum _{i=1} ^\infty \left\{
(-1) ^{i-1} \left[
\mathtt{erfc}\left( \left( (2 i -1) b + x \right) \sqrt {\frac S {4 kD t}} \right)
+
\mathtt{erfc}\left( \left( (2 i -1) b - x \right) \sqrt {\frac S {4 kD t}} \right) \right]
\right\} $$
End of explanation
T = b**2 * S / kD
plt.title('strip of width symmetrical {}'.format(L))
plt.xlabel('x [m]')
plt.ylabel('s [m]')
plt.grid()
for t in times:
s = np.zeros_like(x)
for i in range(1, 20):
si = ((-1)**(i - 1) / (2 * i - 1) *
np.cos((2 * i - 1) * (np.pi / 2) * x /b) *
np.exp(-(2 * i - 1)**2 * (np.pi / 2)**2 * t/T))
s += A * 4 / np.pi * si
plt.plot(x, s, label='t = {:.2f} d'.format(t))
plt.legend()
plt.show()
Explanation: Symmetrical solution of a draining strip of land
This solution describes the head in a strip of land of width $L = 2b$ where the initial head is everywhere equal to $A$ and where the head at $x = \pm b$ is suddenly lowered to zero at $t=0$. Hence, the groundwater will gradually drain until the head reaches zero everywhere as $t \rightarrow\infty$. Therefore, we should get exactly the same result as in the previous example, although the solution looks completely different mathematically.
$$ s(x, t) = A \frac 4 \pi \sum _{i=1} ^\infty \left\{
\frac {(-1)^{i-1}} {2i - 1} \cos \left[ (2 i - 1) \left( \frac \pi 2\right) \frac x b \right] \exp \left[ -(2 i - 1)^2 \left( \frac \pi 2 \right) ^2
\frac {kD } {b^2 S} t \right] \right\} $$
End of explanation |
1,802 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Certamen 2A, TI 2, 2017-1
Leo Ferres & Rodrigo Trigo
UDD
Question 1
Create the function fechaValida(fecha) that returns True if the argument is a real date, or False if not. For example, "32 de enero" (January 32nd) is not valid (do not consider leap years). The date will be given in the following format
Step1: Question 2
Given the string of your RUT without the hyphen or the check digit, compute $\sum_{i=1}^{n}d_i*i$, where $n$ is the length of the string, $d$ is each digit, and $d_1$ is the last digit of the RUT.
Step2: Question 3
Create two functions | Python Code:
## write the function here ##
fechaValida('02/06/2017')
Explanation: Certamen 2A, TI 2, 2017-1
Leo Ferres & Rodrigo Trigo
UDD
Question 1
Create the function fechaValida(fecha) that returns True if the argument is a real date, or False otherwise. For example, "32 de enero" (January 32nd) is not valid (do not consider leap years). The date will be given in the following format: dd/mm/yyyy. Hint: you can use the split() function of str. Check that it runs using your date of birth.
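One possible solution sketch (added here for illustration only; it is not the official answer key, and the internal variable names are arbitrary):
def fechaValida(fecha):
    days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]  # no leap years
    parts = fecha.split('/')
    if len(parts) != 3:
        return False
    day, month, year = int(parts[0]), int(parts[1]), int(parts[2])  # any year is accepted
    if month < 1 or month > 12:
        return False
    return 1 <= day <= days_in_month[month - 1]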
End of explanation
rut = input("enter your RUT: ")
## your code goes here ##
Explanation: Question 2
Given the string of your RUT without the hyphen or the check digit, compute $\sum_{i=1}^{n}d_i*i$, where $n$ is the length of the string, $d$ is each digit, and $d_1$ is the last digit of the RUT.
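One possible sketch (illustrative only): weight the digits from the right, starting at 1, exactly as the formula states:
digits = str(rut)  # make sure we work with a string of digits
total = sum((i + 1) * int(d) for i, d in enumerate(reversed(digits)))
print(total)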
End of explanation
import random
random.seed(int(rut))
## your code goes here ##
Explanation: Question 3
Create two functions: 1) tirarDado(), which returns a random number $x$ with $1\leq x \leq 6$, and 2) the function sumar(), which rolls dice and finishes when the sum of the dice is greater than 10000, returning how many dice were rolled.
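One possible sketch (illustrative only, not the official answer key):
def tirarDado():
    return random.randint(1, 6)  # random integer with 1 <= x <= 6
def sumar():
    total, rolls = 0, 0
    while total <= 10000:        # stop once the running sum exceeds 10000
        total += tirarDado()
        rolls += 1
    return rolls
print(sumar())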
End of explanation |
1,803 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This is a live IPython notebook. You can write and test code, annotate it with text (including equations $\hat x = \frac{1}{n}\sum_{i=0}^n x_i$), plot graphs and start external processes. You can even edit this introduction (try double-clicking on this text).
A large number of scientific and data processing libraries are available in this environment; some of these are demoed below.
Code appears on a gray background; text annotations on a white background. The currently selected cell (if any) has an outline drawn around it. Code blocks can be run by clicking on them to select, and then pressing SHIFT-ENTER to run the cell. Any output will be printed below
Step1: You can also get interactive help on functions, objects in IPython -- just put a question mark after the object in question.
sum?
Step3: You can run python scripts using %run
%run myscript.py
and edit scripts with %edit
%edit myscript.py
If you invoke %edit with no argument, it will open an editor with a blank temporary file -- if you save and exit, IPython will load the contents into a cell and execute it
You can get previous cell inputs using _i (the last input) and _ for the last output
Previous values can be accessed via the In[] and Out[] arrays (see the In[n] at the left of each code cell -- you can use this directly)
print In[1]
Basic python
Let's compute the length of a path in the Collatz sequence. That is, we take a number $n$ and halve it if $n \equiv 0 \mod 2$, otherwise replace it with $3n+1$. We count the number of steps until we reach 1.
Remember that to execute the cell, and thus to define the function, select the cell and press SHIFT-ENTER.
Step4: Now try it in the cell below (e.g. entering collatz(331) and pressing SHIFT-ENTER should print 24).
Step5: OK, let's plot the graph of this function for various $n$. We'll use numpy to manipulate vectors of values, and matplotlib to plot the graph. First we must import them. In future workbooks this will already be done at the start, but we do it explicitly here for clarity.
Step6: Now we create an array of integers 1
Step7: If you hit SHIFT-ENTER on the above, you should see a plot.
It looks pretty noisy; perhaps there is some periodic structure. We can use the FFT to look at this. np.fft.fft() computes the http
Step9: That looks pretty unstructured. Let's try more numbers, and make the fft plot a function we can reuse later.
Step10: Some interesting structure, with big spikes, but the frequency approach isn't revealing much. Let's investigate the distribution of the variable. We can get a histogram with plt.hist().
Step11: OK, this is more interesting. Let's show a normal fit to the distribution, using maximum likelihood estimation. scipy.stats has the tools we need to do this.
Step12: Obviously, this distribution is non-Gaussian, but let's test to make sure. scipy.stats provides many statistical tools, including normality testing. scipy.stats.normaltest gives us a combination of D’Agostino and Pearson’s test.
Note
Step13: We can safely assume this distribution is non-Gaussian.
As an additional measure, we can plot a Q-Q plot, showing the quantiles of the collatzed distribution against the quantiles of a normal distribution. If the distribution is normal, the plot would be shown as a straight line.
The Q-Q plot is a very useful way of eyeballing distribution fits.
scipy.stats.probplot() does the job easily | Python Code:
# note that this is a code cell -- you can execute it with Shift-ENTER
%matplotlib inline
Explanation: Introduction
This is a live IPython notebook. You can write and test code, annotate it with text (including equations $\hat x = \frac{1}{n}\sum_{i=0}^n x_i$), plot graphs and start external processes. You can even edit this introduction (try double-clicking on this text).
A large number of scientific and data processing libraries are available in this environment; some of these are demoed below.
Code appears on a gray background; text annotations on a white background. The currently selected cell (if any) has an outline drawn around it. Code blocks can be run by clicking on them to select, and then pressing SHIFT-ENTER to run the cell. Any output will be printed below:
In[1]: print "Hello, world!"
Out[1]: Hello, world!
These notes are available at GitHub
Getting started
There are some warm ups below, covering Python itself, IPython features such as LaTeX markup, numerical computing with Numpy and Scipy, data processing with pandas, plotting with matplotlib and machine learning with sklearn, and even using Cython for high-performance computation.
Markdown
The text inside these boxes is Markdown. You can create your own Markdown cells by creating a new cell (Insert/Below Current) and then selecting Cell/Cell Type/Markdown (or you can hit Shift-Enter (create new cell) and ESC-M (convert to markdown) to get the same effect). This allows formatted rich text, such as italics, bolding,
headers
of
different sizes
bulleted lists (note the space before the * in the source!)
* One
* Two
* Three
And also verbatim code; to enter code, insert a blank line, and then indent the code with TAB:
print "hello world"
Inline verbatim can be created using backquotes around text, such as x = x + 1
IPython will also recognise LaTeX formulae if you surround them with dollar symbols \$ \$:
$ x = x + 1 $
And you can make display equations using double dollar signs:
$$ \sum_{i=0}^{N} i^{\alpha_i} + \beta_i $$
Some IPython
IPython extends Python with "magic commands". These are commands beginning with %: for example changing the current directory:
%cd /some/path
A useful magic command enables figures to be drawn inline in the notebook (rather than opening as separate windows). To do this use
%matplotlib inline
End of explanation
# try running this cell -- note the pop up at the bottom of the screen
sum?
# if that's not enough, you can get full details on an object with ??
%matplotlib??
Explanation: You can also get interactive help on functions, objects in IPython -- just put a question mark after the object in question.
sum?
End of explanation
def collatz(n, steps=0):
    """Compute the number of steps to reach 1 in the Collatz sequence.
    Note the use of triple quotes to specify a docstring.
    Also note the use of a default parameter (steps) to count the number
    of recursive calls.
    """
if n==1:
return steps
if n%2==0:
return collatz(n/2, steps+1)
else:
return collatz(3*n+1, steps+1)
Explanation: You can run python scripts using %run
%run myscript.py
and edit scripts with %edit
%edit myscript.py
If you invoke %edit with no argument, it will open an editor with a blank temporary file -- if you save and exit, IPython will load the contents into a cell and execute it
You can get previous cell inputs using _i (the last input) and _ for the last output
Previous values can be accessed via the In[] and Out[] arrays (see the In[n] at the left of each code cell -- you can use this directly)
print In[1]
Basic python
Let's compute the length of a path in the Collatz sequence. That is, we take a number $n$ and halve it if $n \equiv 0 \mod 2$, otherwise replace it with $3n+1$. We count the number of steps until we reach 1.
Remember that to execute the cell, and thus to define the function, select the cell and press SHIFT-ENTER.
End of explanation
collatz(331)
Explanation: Now try it in the cell below (e.g. entering collatz(331) and pressing SHIFT-ENTER should print 24).
End of explanation
import numpy as np # np is the conventional short name for numpy
import matplotlib.pyplot as plt # and plt is the conventional name for matplotlib
import seaborn # all this does (in this case) is restyle matplotlib to use better layouts
Explanation: OK, let's plot the graph of this function for various $n$. We'll use numpy to manipulate vectors of values, and matplotlib to plot the graph. First we must import them. In future workbooks this will already be done at the start, but we do it explicitly here for clarity.
End of explanation
ns = np.arange(1,500) # generate n = [1,2,3,4,...]
collatzed = np.array([collatz(n) for n in ns]) # apply collatz(n) to each value and put it in a numpy array
plt.plot(ns, collatzed) # plot the result
Explanation: Now we create an array of integers 1:n and plot it. Note the use of arange to create an array of integers, and the list comprehension [collatz(n) for n in ns], which applies the collatz function to each element of ns.
End of explanation
fftd = np.fft.fft(collatzed)
# take absolute value
real_magnitude = np.abs(fftd)
# trim off symmetric part (note the slice syntax)
real_magnitude = real_magnitude[1:len(fftd)/2] # note that we drop the 0th element (DC)
fig = plt.figure() # make a new figure
ax = fig.add_subplot(111) # this just creates a new single blank axis
ax.plot(real_magnitude) # and plots onto it (we can create multi-panel plots using add_subplot)
Explanation: If you hit SHIFT-ENTER on the above, you should see a plot.
It looks pretty noisy; perhaps there is some periodic structure. We can use the FFT to look at this. np.fft.fft() computes the Fourier transform (http://en.wikipedia.org/wiki/Fourier_transform); we can compute the magnitude spectrum $|f(x)|$ by taking the absolute value, and discarding the symmetric half:
End of explanation
ns = np.arange(1,10000)
collatzed = [collatz(n) for n in ns]
def fft_plot(x):
    """Plot the magnitude spectrum of x, showing only the real, positive-frequency
    portion, and excluding component 0 (DC)."""
fftd = np.fft.fft(x)
# get absolute (magnitude spectrum)
real_magnitude = np.abs(fftd)
# chop off symmetric part
real_magnitude = real_magnitude[1:len(fftd)/2]
fig = plt.figure() # make a new figure
ax = fig.add_subplot(111)
ax.plot(real_magnitude)
fft_plot(collatzed)
Explanation: That looks pretty unstructured. Let's try more numbers, and make the fft plot a function we can reuse later.
End of explanation
# normed=True normalizes the histogram to a probability density (the area integrates to 1)
plt.hist(collatzed, bins=50, normed=True);
Explanation: Some interesting structure, with big spikes, but the frequency approach isn't revealing much. Let's investigate the distribution of the variable. We can get a histogram with plt.hist().
End of explanation
mean, std = np.mean(collatzed), np.std(collatzed)
import scipy.stats as stats # we must import scipy.stats, as we've not used it yet
# np.linspace() linearly spaces points on a range: here 200 points spanning the distribution
pdf_range = np.linspace(np.min(collatzed), np.max(collatzed), 200)
# scipy.stats has many distribution functions, including normal (norm)
pdf = stats.norm.pdf(pdf_range, mean, std)
plt.hist(collatzed, bins=50, normed=True)
plt.plot(pdf_range, pdf, 'g', linewidth=3) # plot using thick green line
Explanation: OK, this is more interesting. Let's show a normal fit to the distribution, using maximum likelihood estimation. scipy.stats has the tools we need to do this.
End of explanation
import scipy.stats as stats # we must import scipy.stats as we've not used it yet
k2, p = stats.normaltest(collatzed)
print p # p-value, testing if the distribution differs from the normal. p<0.05 suggests it is
Explanation: Obviously, this distribution is non-Gaussian, but let's test to make sure. scipy.stats provides many statistical tools, including normality testing. scipy.stats.normaltest gives us a combination of D’Agostino and Pearson’s test.
Note: to see the documentation for normaltest, try clicking at the end of normaltest and press SHIFT-TAB to see the tooltip. Hit the ^ symbol to bring up the full help in a pane below. This works for any function.
End of explanation
plt.figure() # don't plot on the same axis as the previous plot
qq = stats.probplot(collatzed, dist="norm", plot=plt) # note the use of "norm" to specify the test distribution
Explanation: We can safely assume this distribution is non-Gaussian.
As an additional measure, we can plot a Q-Q plot, showing the quantiles of the collatzed distribution against the quantiles of a normal distribution. If the distribution is normal, the plot would be shown as a straight line.
The Q-Q plot is a very useful way of eyeballing distribution fits.
scipy.stats.probplot() does the job easily:
End of explanation |
1,804 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cookbook
Step1: Create a raxml Class object
Create a raxml object which has a bunch of default parameters associated with it. The only required argument to initialize the object is a phylip formatted sequence file. In this example I provide a name and working directory as well.
Step2: Additional options
You can also modify many of the other command line arguments to raxml by changing values in the params dictionary of your raxml object. These values could also have been set when you initialized the object.
Step3: Print the command string
It is good practice to always print the command string so that you know exactly what was called for your analysis and it is documented.
Step4: Run the job
This will start the job running. We haven't made a progress bar yet but we will add one soon.
Step5: Access results
One of the reasons it is so convenient to run your raxml jobs this way is that the results files are easily accessible from your raxml objects.
Step6: Plot the results
Here we use toytree to plot the bootstrap results.
Step7: [optional] Submit raxml jobs to run on a cluster
Using the ipyparallel library you can submit raxml jobs to run in parallel on a cluster in a load-balanced fashion. You can then tell the notebook to wait until all jobs are finished before progressing in the notebook to draw trees, etc.
Start an ipyparallel cluster
In a separate terminal start an ipcluster instance and tell it how many engines to start.
Step8: Create a Client connected to the cluster
Step9: Create several raxml objects for different data sets
Step10: Submit jobs to run on the cluster queue.
Step11: Wait for jobs to finish
Step12: Plot trees when jobs are finished
Here we will draw a slightly more complex tree figure that combines two trees onto a single canvas. | Python Code:
## conda install ipyrad -c ipyrad
## conda install toytree -c eaton-lab
## conda install raxml -c bioconda
Explanation: Cookbook: RAxML analyses in a notebook
As part of the ipyrad.analysis toolkit we've created convenience functions for easily running common RAxML commands. This can be useful when you want to run all of your analyes in a clean stream-lined way in a jupyter-notebook to create a completely reproducible study.
Install software
There are many ways to install raxml, the simplest of which is to use conda. This will install several raxml binaries into your conda path. If you want to call a different version of raxml that can easily be done by changing the parameter 'binary'.
End of explanation
import ipyrad.analysis as ipa
import toyplot
import toytree
rax = ipa.raxml(
data="./analysis-ipyrad/aligntest_outfiles/aligntest.phy",
name="aligntest",
workdir="analysis-raxml",
);
Explanation: Create a raxml Class object
Create a raxml object which has a bunch of default parameters associated with it. The only required argument to initialize the object is a phylip formatted sequence file. In this example I provide a name and working directory as well.
End of explanation
## set some other params
rax.params.N = 10
rax.params.T = 2
rax.params.o = None
#rax.params.o = ["32082_przewalskii", "33588_przewalskii"]
Explanation: Additional options
You can also modify many of the other command line arguments to raxml by changing values in the params dictionary of your raxml object. These values could also have been set when you initialized the object.
End of explanation
print rax.command
Explanation: Print the command string
It is good practice to always print the command string so that you know exactly what was called for your analysis and it is documented.
End of explanation
rax.run(force=True)
Explanation: Run the job
This will start the job running. We haven't made a progress bar yet but we will add one soon.
End of explanation
rax.trees
Explanation: Access results
One of the reasons it is so convenient to run your raxml jobs this way is that the results files are easily accessible from your raxml objects.
End of explanation
tre = toytree.tree(rax.trees.bipartitions)
tre.root(wildcard="3")
tre.draw(
height=300,
width=300,
node_labels=tre.get_node_values("support"),
);
Explanation: Plot the results
Here we use toytree to plot the bootstrap results.
End of explanation
##
## ipcluster start --n=20
##
Explanation: [optional] Submit raxml jobs to run on a cluster
Using the ipyparallel library you can submit raxml jobs to run in parallel on a cluster in a load-balanced fashion. You can then tell the notebook to wait until all jobs are finished before progressing in the notebook to draw trees, etc.
Start an ipyparallel cluster
In a separate terminal start an ipcluster instance and tell it how many engines to start.
End of explanation
import ipyparallel as ipp
ipyclient = ipp.Client()
Explanation: Create a Client connected to the cluster
End of explanation
rax1 = ipa.raxml(
data="~/Documents/ipyrad/tests/analysis-ipyrad/pedic_outfiles/pedic.phy",
name="rax1", T=4, N=100)
rax2 = ipa.raxml(
data="~/Documents/ipyrad/tests/analysis-ipyrad/aligntest_outfiles/aligntest.phy",
name="rax2", T=4, N=100)
Explanation: Create several raxml objects for different data sets
End of explanation
rax1.run(ipyclient=ipyclient, force=True)
rax2.run(ipyclient=ipyclient, force=True)
Explanation: Submit jobs to run on the cluster queue.
End of explanation
## you can query each job while it's running
rax1.async.ready()
## or just block until all jobs on ipyclient are finished
ipyclient.wait()
Explanation: Wait for jobs to finish
End of explanation
## load trees and add to axes
tre1 = toytree.tree(rax1.trees.bipartitions)
tre1.root(wildcard="prz")
tre1.draw(width=300);
tre2 = toytree.tree(rax2.trees.bipartitions)
tre2.root(wildcard="3")
tre2.draw(width=300);
Explanation: Plot trees when jobs are finished
Here we will draw a slightly more complex tree figure that combines two trees onto a single canvas.
End of explanation |
1,805 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table align="left">
<td>
<a href="https
Step1: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Otherwise, set your project ID here.
Step4: Set gcloud config to your project ID.
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Vertex AI Workbench, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Import libraries and define constants
Import required libraries.
Step11: Initialize Vertex AI and set an experiment
Define experiment name.
Step12: If EXPERIMENT_NAME is not set, set a default one below
Step13: Initialize the client for Vertex AI.
Step14: Tracking parameters and metrics in Vertex AI custom training jobs
This example uses the Abalone Dataset. For more information about this dataset please visit
Step15: Create a managed tabular dataset from a CSV
A Managed dataset can be used to create an AutoML model or a custom model.
Step16: Write the training script
Run the following cell to create the training script that is used in the sample custom training job.
Step17: Launch a custom training job and track its training parameters on Vertex AI ML Metadata
Step18: Start a new experiment run to track training parameters and start the training job. Note that this operation will take around 10 mins.
Step19: Deploy Model and calculate prediction metrics
Deploy model to Google Cloud. This operation will take 10-20 mins.
Step20: Once model is deployed, perform online prediction using the abalone_test dataset and calculate prediction metrics.
Prepare the prediction dataset.
Step21: Perform online prediction.
Step22: Calculate and track prediction evaluation metrics.
Step23: Extract all parameters and metrics created during this experiment.
Step24: View data in the Cloud Console
Parameters and metrics can also be viewed in the Cloud Console.
Step25: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install -U tensorflow $USER_FLAG
! python3 -m pip install {USER_FLAG} google-cloud-aiplatform --upgrade
! pip3 install scikit-learn {USER_FLAG}
Explanation: <table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
Vertex AI: Track parameters and metrics for custom training jobs
Overview
This notebook demonstrates how to track metrics and parameters for Vertex AI custom training jobs, and how to perform detailed analysis using this data.
Dataset
This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone
Objective
In this notebook, you will learn how to use Vertex AI SDK for Python to:
* Track training parameters and prediction metrics for a custom training job.
* Extract and perform analysis for all parameters and metrics within an Experiment.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Vertex AI Workbench, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
Install additional package dependencies not installed in your notebook environment.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
Explanation: Otherwise, set your project ID here.
End of explanation
!gcloud config set project $PROJECT_ID
Explanation: Set gcloud config to your project ID.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebooks, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Vertex AI Workbench, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_URI = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
! gsutil mb -l $REGION $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import pandas as pd
from google.cloud import aiplatform
from sklearn.metrics import mean_absolute_error, mean_squared_error
from tensorflow.python.keras.utils import data_utils
Explanation: Import libraries and define constants
Import required libraries.
End of explanation
EXPERIMENT_NAME = "" # @param {type:"string"}
Explanation: Initialize Vertex AI and set an experiment
Define experiment name.
End of explanation
if EXPERIMENT_NAME == "" or EXPERIMENT_NAME is None:
EXPERIMENT_NAME = "my-experiment-" + TIMESTAMP
Explanation: If EXPERIMENT_NAME is not set, set a default one below:
End of explanation
aiplatform.init(
project=PROJECT_ID,
location=REGION,
staging_bucket=BUCKET_URI,
experiment=EXPERIMENT_NAME,
)
Explanation: Initialize the client for Vertex AI.
End of explanation
!wget https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv
!gsutil cp abalone_train.csv {BUCKET_URI}/data/
gcs_csv_path = f"{BUCKET_URI}/data/abalone_train.csv"
Explanation: Tracking parameters and metrics in Vertex AI custom training jobs
This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone
End of explanation
ds = aiplatform.TabularDataset.create(display_name="abalone", gcs_source=[gcs_csv_path])
ds.resource_name
Explanation: Create a managed tabular dataset from a CSV
A Managed dataset can be used to create an AutoML model or a custom model.
End of explanation
%%writefile training_script.py
import pandas as pd
import argparse
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
parser = argparse.ArgumentParser()
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--num_units', dest='num_units',
default=64, type=int,
help='Number of unit for first layer.')
args = parser.parse_args()
# uncomment and bump up replica_count for distributed training
# strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# tf.distribute.experimental_set_strategy(strategy)
col_names = ["Length", "Diameter", "Height", "Whole weight", "Shucked weight", "Viscera weight", "Shell weight", "Age"]
target = "Age"
def aip_data_to_dataframe(wild_card_path):
return pd.concat([pd.read_csv(fp.numpy().decode(), names=col_names)
for fp in tf.data.Dataset.list_files([wild_card_path])])
def get_features_and_labels(df):
return df.drop(target, axis=1).values, df[target].values
def data_prep(wild_card_path):
return get_features_and_labels(aip_data_to_dataframe(wild_card_path))
model = tf.keras.Sequential([layers.Dense(args.num_units), layers.Dense(1)])
model.compile(loss='mse', optimizer='adam')
model.fit(*data_prep(os.environ["AIP_TRAINING_DATA_URI"]),
epochs=args.epochs ,
validation_data=data_prep(os.environ["AIP_VALIDATION_DATA_URI"]))
print(model.evaluate(*data_prep(os.environ["AIP_TEST_DATA_URI"])))
# save as Vertex AI Managed model
tf.saved_model.save(model, os.environ["AIP_MODEL_DIR"])
Explanation: Write the training script
Run the following cell to create the training script that is used in the sample custom training job.
End of explanation
job = aiplatform.CustomTrainingJob(
display_name="train-abalone-dist-1-replica",
script_path="training_script.py",
container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",
requirements=["gcsfs==0.7.1"],
model_serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest",
)
Explanation: Launch a custom training job and track its training parameters on Vertex AI ML Metadata
End of explanation
aiplatform.start_run("custom-training-run-1") # Change this to your desired run name
parameters = {"epochs": 10, "num_units": 64}
aiplatform.log_params(parameters)
model = job.run(
ds,
replica_count=1,
model_display_name="abalone-model",
args=[f"--epochs={parameters['epochs']}", f"--num_units={parameters['num_units']}"],
)
Explanation: Start a new experiment run to track training parameters and start the training job. Note that this operation will take around 10 mins.
End of explanation
endpoint = model.deploy(machine_type="n1-standard-4")
Explanation: Deploy Model and calculate prediction metrics
Deploy model to Google Cloud. This operation will take 10-20 mins.
End of explanation
def read_data(uri):
dataset_path = data_utils.get_file("abalone_test.data", uri)
col_names = [
"Length",
"Diameter",
"Height",
"Whole weight",
"Shucked weight",
"Viscera weight",
"Shell weight",
"Age",
]
dataset = pd.read_csv(
dataset_path,
names=col_names,
na_values="?",
comment="\t",
sep=",",
skipinitialspace=True,
)
return dataset
def get_features_and_labels(df):
target = "Age"
return df.drop(target, axis=1).values, df[target].values
test_dataset, test_labels = get_features_and_labels(
read_data(
"https://storage.googleapis.com/download.tensorflow.org/data/abalone_test.csv"
)
)
Explanation: Once model is deployed, perform online prediction using the abalone_test dataset and calculate prediction metrics.
Prepare the prediction dataset.
End of explanation
prediction = endpoint.predict(test_dataset.tolist())
prediction
Explanation: Perform online prediction.
End of explanation
mse = mean_squared_error(test_labels, prediction.predictions)
mae = mean_absolute_error(test_labels, prediction.predictions)
aiplatform.log_metrics({"mse": mse, "mae": mae})
Explanation: Calculate and track prediction evaluation metrics.
End of explanation
aiplatform.get_experiment_df()
Explanation: Extract all parameters and metrics created during this experiment.
End of explanation
print("Vertex AI Experiments:")
print(
f"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}"
)
Explanation: View data in the Cloud Console
Parameters and metrics can also be viewed in the Cloud Console.
End of explanation
# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
# Delete dataset
ds.delete()
# Delete the training job
job.delete()
# Undeploy model from endpoint
endpoint.undeploy_all()
# Delete the endpoint
endpoint.delete()
# Delete the model
model.delete()
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil -m rm -r $BUCKET_URI
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Vertex AI Dataset
Training Job
Model
Endpoint
Cloud Storage Bucket
End of explanation |
1,806 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing a Stack Class
First, we define an empty class Stack.
Step1: Next we define a constructor for this class. The function stack(S) takes an uninitialized, empty object S
and initializes its member variable mStackElements. This member variable is a list containing the data
stored in the stack.
Step2: We add this method to the class Stack. Since we add it under the name __init__, this method will be the
constructor. Furthermore, to keep the environment tidy, we delete the function stack.
Step3: Next, we add the method push to the class Stack. The method $\texttt{push}(S, e)$ pushes $e$ onto the stack $S$.
Step4: We add this method to the class Stack.
Step5: The method pop removes the topmost element from a stack. It is an error to pop an empty stack.
Step6: The method top returns the element that is on top of the stack.
It is an error to call this method if the stack is empty.
Step7: The method S.isEmpty() checks whether the stack S is empty.
Step8: The method S.copy() creates a shallow copy of the given stack, i.e. the copy contains the same objects as
the stack S.
Step9: The method S.toStr() converts a stack S into a string. Note that we assign it to the method __str__. This method is called automatically when an object of class Stack is cast into a string.
Step10: The method convert converts a stack into a string.
Step11: Testing
The method createStack(L) takes a list L and pushes all of its elements on a newly created stack, which is
then returned.
Step12: By defining the function S.__repr__() for stack objects, we can print stacks in Jupyter notebooks without calling the function print. | Python Code:
class Stack:
pass
S = Stack()
S
Explanation: Implementing a Stack Class
First, we define an empty class Stack.
End of explanation
def stack(S):
S.mStackElements = []
Explanation: Next we define a constructor for this class. The function stack(S) takes an uninitialized, empty object S
and initializes its member variable mStackElements. This member variable is a list containing the data
stored in the stack.
End of explanation
Stack.__init__ = stack
del stack
Explanation: We add this method to the class Stack. Since we add it under the name __init__, this method will be the
constructor. Furthermore, to keep the environment tidy, we delete the function stack.
End of explanation
def push(S, e):
S.mStackElements += [e]
Explanation: Next, we add the method push to the class Stack. The method $\texttt{push}(S, e)$ pushes $e$ onto the stack $S$.
End of explanation
Stack.push = push
del push
Explanation: We add this method to the class Stack.
End of explanation
def pop(S):
assert len(S.mStackElements) > 0, "popping empty stack"
S.mStackElements = S.mStackElements[:-1]
Stack.pop = pop
del pop
Explanation: The method pop removes the topmost element from a stack. It is an error to pop an empty stack.
End of explanation
def top(S):
assert len(S.mStackElements) > 0, "top of empty stack"
return S.mStackElements[-1]
Stack.top = top
del top
Explanation: The method top returns the element that is on top of the stack.
It is an error to call this method if the stack is empty.
End of explanation
def isEmpty(S):
return S.mStackElements == []
Stack.isEmpty = isEmpty
del isEmpty
Explanation: The method S.isEmpty() checks whether the stack S is empty.
End of explanation
def copy(S):
C = Stack()
C.mStackElements = S.mStackElements[:]
return C
Stack.copy = copy
del copy
Explanation: The method S.copy() creates a shallow copy of the given stack, i.e. the copy contains the same objects as
the stack S.
End of explanation
def toStr(S):
C = S.copy()
result = C._convert()
dashes = "-" * len(result)
return '\n'.join([dashes, result, dashes])
Stack.__str__ = toStr
del toStr
Explanation: The method S.toStr() converts a stack S into a string. Note that we assign it to the method __str__. This method is called automatically when an object of class Stack is cast into a string.
End of explanation
def convert(S):
if S.isEmpty():
return '|'
top = S.top()
S.pop()
return S._convert() + ' ' + str(top) + ' |'
Stack._convert = convert
del convert
Explanation: The method convert converts a stack into a string.
End of explanation
def createStack(L):
S = Stack()
for x in L:
S.push(x)
print(S)
return S
S = createStack(range(10))
S
Explanation: Testing
The method createStack(L) takes a list L and pushes all of its elements on a newly created stack, which is
then returned.
End of explanation
Stack.__repr__ = Stack.__str__
S
for i in range(10):
print(S.top())
S.pop()
print(S)
Explanation: By defining the function S.__repr__() for stack objects, we can print stacks in Jupyter notebooks without calling the function print.
End of explanation |
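As a small usage sketch (not part of the original notebook), the finished Stack class can be applied to a classic task such as reversing a sequence; it relies only on the methods defined above.
```
def reverse(L):
    # push everything onto a stack, then pop it off again in reverse order
    S = Stack()
    for x in L:
        S.push(x)
    result = []
    while not S.isEmpty():
        result.append(S.top())
        S.pop()
    return result

reverse([1, 2, 3, 4])   # expected result: [4, 3, 2, 1]
```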
1,807 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaia DR2 variability lightcurves
Part III
Step1: Ok, this flat file is just what we want. It contains the flux as a function of time for unique sources, with additional metadata flags.
Step2: It looks like the filename encodes the range of sources housed in each file. Let's extract that metadata without having to read the files.
Step3: Now we can make a mask to find which file we want. Let's say we want the Gaia source
Step4: Not bad! We have a 96 point lightcurve!
Step5: The Gaia photometry is taken over 500 days! The mean starspot coverage fraction is not expected to be coherent over such large timescales. There's a portion of the data that is taken contiguously. Let's highlight those.
Step6: Seems plausible...
Step7: The full K2 postage stamp contains another source, which would have easily been separated in Gaia.
Step8: Gaia has 0.771791, close! | Python Code:
# %load /Users/obsidian/Desktop/defaults.py
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
! du -hs ../data/dr2/Gaia/gdr2/light_curves/csv/
df0 = pd.read_csv('../data/dr2/Gaia/gdr2/light_curves/csv/light_curves_1042504286338226688_1098703830327377408.csv.gz')
df0.shape
df0.head(2)
df0.tail(2)
Explanation: Gaia DR2 variability lightcurves
Part III: What do the Gaia lightcurves look like?
gully
May 2, 2018
End of explanation
import glob
fns = glob.glob('../data/dr2/Gaia/gdr2/light_curves/csv/light_curves_*.csv.gz')
n_files = len(fns)
n_files
Explanation: Ok, this flat file is just what we want. It contains the flux as a function of time for unique sources, with additional metadata flags.
End of explanation
fn_df = pd.DataFrame({'fn':fns})
fn_df.head()
fn_df['basename'] = fn_df.fn.str.split('/').str[-1].str.split('light_curves_').str[-1].str.split('.csv.gz').str[0]
fn_df['low'] = fn_df.basename.str.split('_').str[0].astype(np.int64)
fn_df['high'] = fn_df.basename.str.split('_').str[1].astype(np.int64)
Explanation: It looks like the filename encodes the range of sources housed in each file. Let's extract that metadata without having to read the files.
End of explanation
source = 66511970924353792
k2_source = 211059767
gaia_period = 0.771791
mask = (source > fn_df.low) & (source < fn_df.high)
mask.sum()
path = fn_df[mask].fn.values[0]
df_lc = pd.read_csv(path)
df_lc = df_lc[df_lc.source_id==source]
df_lc.shape
Explanation: Now we can make a mask to find which file we want. Let's say we want the Gaia source: 66511970924353792
End of explanation
df_lc.band.value_counts()
gi = df_lc.band == 'G'
plt.plot(df_lc.time[gi], df_lc.flux[gi], '.')
Explanation: Not bad! We have a 96 point lightcurve!
End of explanation
plt.plot(np.mod(df_lc.time[gi], gaia_period), df_lc.flux[gi], '.')
alt = gi & (df_lc.time >1900) & (df_lc.time<1950)
plt.plot(np.mod(df_lc.time[alt], gaia_period), df_lc.flux[alt], 'o')
Explanation: The Gaia photometry is taken over 500 days! The mean starspot coverage fraction is not expected to be coherent over such large timescales. There's a portion of the data that is taken contiguously. Let's highlight those.
End of explanation
30.0*4000
from lightkurve import KeplerTargetPixelFile
tpf = KeplerTargetPixelFile.from_archive(k2_source)
k2_lc = tpf.to_lightcurve()
k2_lc = k2_lc[(k2_lc.flux == k2_lc.flux) & np.isfinite(k2_lc.flux) & (k2_lc.flux_err == k2_lc.flux_err)]
tpf.interact(lc=k2_lc)
Explanation: Seems plausible...
End of explanation
# %load https://www.astroml.org/gatspy/periodic/lomb_scargle-1.py
from gatspy import periodic
model = periodic.LombScargle()
model.optimizer.period_range = (0.5, 1)
model.fit(k2_lc.time, k2_lc.flux, k2_lc.flux_err)
periods = np.linspace(0.5, 1, 10000)
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
scores = model.score(periods)
# Plot the results
fig, ax = plt.subplots(figsize=(8, 3))
fig.subplots_adjust(bottom=0.2)
ax.plot(periods, scores)
ax.set(xlabel='period (days)', ylabel='Lomb Scargle Power')
model.best_period
Explanation: The full K2 postage stamp contains another source, which would have easily been separated in Gaia.
End of explanation
plt.plot(np.mod(df_lc.time[gi], 0.7779122), df_lc.flux[gi], '.')
Explanation: Gaia has 0.771791, close!
End of explanation |
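As a cross-check (a sketch that is not in the original notebook), the same gatspy machinery can be pointed at the Gaia G-band photometry itself. The flux_error column is assumed to be present in the Gaia light-curve table; adjust the name if it differs.
```
g_lc = df_lc[gi]
model_g = periodic.LombScargle()
model_g.optimizer.period_range = (0.5, 1)
# time, flux and the assumed flux_error column from the Gaia csv
model_g.fit(g_lc.time.values, g_lc.flux.values, g_lc.flux_error.values)
model_g.best_period
```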
1,808 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST handwritten digits recognition
Written by Yujun Lin
Preparation
Follow the instructions in the notebook for training a binary single-layer perceptron and saving the weights and images to a local file.
change image_id for other pictures. There are 10 pictures in total, numbered from 0 to 9.
change filename to the name of file containing weights and images.
Step1: Preview the image of digit
To have a look at the image to be recognized on icestick
Step2: Rewrite script for new image_id and filename
Step3: Compile
we can use the magma binary to compile nn.py for the icestick
Step4: To inspect the generated verilog
Step5: To inspect the generated pcf
Step6: To flash the nn circuit onto the icestick using yosys, arachne-pnr and the icestorm tools.
To see the results of synthesis, remove the -q option from the arachne-pnr command.
Step7: To view the timing analysis | Python Code:
image_id = 9
filename = 'nn_train/BNN.pkl'
Explanation: MNIST handwritten digits recognition
Written by Yujun Lin
Preparation
Follow the instructions in the notebook for training a binary single-layer perceptron and saving the weights and images to a local file.
change image_id for other pictures. There are 10 pictures in total, numbered from 0 to 9.
change filename to the name of file containing weights and images.
End of explanation
import pickle
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
with open(filename, 'rb') as input_file:
checkpoint = pickle.load(input_file)
imgs = checkpoint['imgs']
def display_img(index):
img = imgs[index, :]
img = np.reshape(img, [16, 16])
plt.imshow(img, cmap='gray')
plt.show()
display_img(image_id)
Explanation: Preview the image of digit
To have a look at the image to be recognized on icestick
End of explanation
with open('modules.py', 'r') as m, open('pipeline.py', 'w') as p:
for line in m:
if line.startswith('image_id = '):
p.write('image_id = {}\n'.format(image_id))
elif line.startswith('filename = '):
p.write('filename = \'{}\'\n'.format(filename))
else:
p.write(line)
Explanation: Rewrite script for new image_id and filename
End of explanation
!../../bin/magma -b icestick main.py
Explanation: Compile
we can use the magma binary to compile nn.py for the icestick
End of explanation
with open("build/main.v", "r") as main_verilog:
print(main_verilog.read())
Explanation: To inspect the generated verilog
End of explanation
with open("build/main.pcf", "r") as main_pcf:
print(main_pcf.read())
Explanation: To inspect the generated pcf
End of explanation
%%bash
yosys -q -p 'synth_ice40 -top main -blif build/main.blif' build/main.v
arachne-pnr -d 1k -o build/main.txt -p build/main.pcf build/main.blif
icepack build/main.txt build/main.bin
#iceprog build/main.bin
Explanation: To flash the nn circuit onto the icestick using yosys, arachne-pnr and the icestorm tools.
To see the results of synthesis, remove the -q option from the arachne-pnr command.
End of explanation
!icetime -tmd hx1k build/main.txt
Explanation: To view the timing analysis
End of explanation |
1,809 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Next
I want to write code -- or, find an existing module! -- that reloads
Step1: Add D3 Visualization showing MCMC over two different methods
Add PS (I'm looking for a job in NYC)
WHEREAMI?
Add and test function reloading
rename collector_func and mapper_func to mapper and collector
classes don't work, unless they take no arguments for initialization
General Strategy
Most scraping projects map a set of raw documents to clean ones. It sounds easy, but in practice there are inconsistencies that break code. For a large corpus of documents, this is frustrating. The cycle becomes
Step2: Resuming Iterators
I assume that you have created a collection of documents. Or, more commonly, you created some sort of generator that yields documents. The resuming iterator fits well with the general processor pattern. It wraps the iterable, saving the state. Given an exception, the current point in the iteration persists.
Why is this useful? Remember that the processor saves anything that throws an exception in the failset. Prior to continuing iteration, you update your map_func so that the new failing case -- and all others in the failset -- pass. After which point, you move on to "green" documents. It is possible your alteration introduced a regression in the documents already visited but not in the failset. However, in my experience, it is more probable that the later cases require more refinement than the ones already seen.
Step3: Processing with processor(example)
Step4: Processing with processor.work_through(iter) | Python Code:
%%html
<div align="center"><blockquote class="twitter-tweet" lang="en"><p lang="en" dir="ltr">When I’m trying to fix a tiny bug. <a href="https://t.co/nml6ZS5quW">pic.twitter.com/nml6ZS5quW</a></p>— Mike Bostock (@mbostock) <a href="https://twitter.com/mbostock/status/661650359069208576">November 3, 2015</a></blockquote>
<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script></div>
Explanation: Next
I want to write code -- or, find an existing module! -- that reloads:
functions
functions that reference other functions
classes
classes that reference other functions or classes
modules
Basically, given your map_func and collector_func, reload all the code so the next iteration is clean.
End of explanation
from __future__ import print_function
import sys
sys.path.append('.')
from vaquero import *
Explanation: Add D3 Visualization showing MCMC over two different methods
Add PS (I'm looking for a job in NYC)
WHEREAMI?
Add and test function reloading
rename collector_func and mapper_func to mapper and collector
classes don't work, unless they take no arguments for initialization
General Strategy
Most scraping projects map a set of raw documents to clean ones. It sounds easy, but in practice there are inconsistencies that break code. For a large corpus of documents, this is frustrating. The cycle becomes: Run your code against the data; discover an inconsistency (i.e. your code breaks); alter some code; start running all over again.
Said more simply, cleaning scraped data is an iterative process. But, the tools tend to be bad for iterating. Write -> Compile -> Run is frictionless by comparison, at least for large datasets. Vaquero is a tool for iterative data cleaning.
Also
Use generators. On a failure, you can restart from where you left off. The failset should
be a partial guard. Satisfactory flag.
Examples are your fixtures.
Assertions are self documenting
Grab only what you need now. This is iterative, remember.
Unobtrusive (the function doesn't need to be in Vaquero for production, although the asserts will slow things down. But, you can also use a PrePost wrapper so your production code is faster and the assertions are all in one place)
Simplest Implementation
Failsets
Why would you disable dup checking?
Common data structures -- the kind you will probably convert your document into -- are mutable. As such, they are not hashable, so I can't use proper set data structures. Maintaining set semantics requires comparison over all elements in the collection, an O(n) operation. As the size of the collection grows, this may become time prohibitive. If the cost of running your processor over a fail set document is less than the cost of equality checking over the entire collection, this is useful.
End of explanation
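To make the trade-off above concrete, here is a minimal sketch (not vaquero's actual implementation) of a fail set that keeps set semantics for unhashable documents by falling back to an O(n) equality scan, with a flag to disable duplicate checking.
```
class NaiveFailSet:
    def __init__(self, check_dups=True):
        self.check_dups = check_dups
        self.items = []

    def add(self, doc):
        # O(n) scan over the stored documents; skipped entirely when dup checking is off
        if self.check_dups and any(doc == seen for seen in self.items):
            return
        self.items.append(doc)

failures = NaiveFailSet()
failures.add({"id": 1, "raw": "<td>broken</td>"})
failures.add({"id": 1, "raw": "<td>broken</td>"})  # duplicate, ignored
len(failures.items)  # 1
```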
items = (i for i in range(20) if i % 2 == 1)
mylist = ResumingIterator(items)
for i in mylist:
assert i != 3
print(i)
for i in mylist:
assert i != 2
print(i)
Explanation: Resuming Iterators
I assume that you have created a collection of documents. Or, more commonly, you created some sort of generator that yields documents. The resuming iterator fits well with the general processor pattern. It wraps the iterable, saving the state. Given an exception, the current point in the iteration persists.
Why is this useful? Remember that the processor saves anything that throws an exception in the failset. Prior to continuing iteration, you update your map_func so that the new failing case -- and all others in the failset -- pass. After which point, you move on to "green" documents. It is possible your alteration introduced a regression in the documents already visited but not in the failset. However, in my experience, it is more probable that the later cases require more refinement than the ones already seen.
End of explanation
f = Processor(int, print, PicklingFailSet("int.pickle"))
f("10")
f("10.0")
f("20.0")
f.fail_set.examples()
examples = ['10', 20, 20.0, '20.0', '10.0']
for example in examples:
f(example)
f.failing_examples()
Explanation: Processing with processor(example)
End of explanation
x = [1,2,3]
i = iter(x)
next(i)
next(i)
Explanation: Processing with processor.work_through(iter)
End of explanation |
1,810 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 3
I/O and exceptions
About Files
RAM and volatility.
Files and non-volatility
Writing a file
file handle
mode
Step1: Reading a whole file at once
Step2: A better way to read files
Step4: Fetching from the web
```
import urllib.request
url = "https
Step5: Exercises
Write a program that reads a file and writes out a new file with the lines in reversed order (i.e. the first line in the old file becomes the last one in the new file.)
Write a program that reads a file and prints only those lines that contain the substring snake.
Write a program that reads a text file and produces an output file which is a copy of the file, except the first five columns of each line contain a four digit line number, followed by a space. Start numbering the first line in the output file at 1. Ensure that every line number is formatted to the same width in the output file. Use one of your Python programs as test data for this exercise | Python Code:
myfile = open("test.txt", "w")
myfile.write("My first file written from Python\n")
myfile.write("---------------------------------\n")
myfile.write("Hello, world!\n")
myfile.write("Did it work?\n")
myfile.close()
Explanation: Lecture 3
I/O and exceptions
About Files
RAM and volatility.
Files and non-volatility
Writing a file
file handle
mode
End of explanation
f = open("test.txt")
content = f.read()
f.close()
words = content.split()
print("There are {0} words in the file.".format(len(words)))
words
Explanation: Reading a whole file at once
End of explanation
import urllib.request
with urllib.request.urlopen('http://www.python.org/') as f:
print(f.read(1000))
Explanation: A better way to read files:
with open("somefile.txt" , 'r') as f:
do stuff here...
End of explanation
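A concrete sketch of that pattern, reusing the test.txt file written above; the file is closed automatically when the with block ends.
```
with open("test.txt", "r") as f:
    for line in f:
        print(len(line), line.strip())
```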
import urllib.request
def retrieve_page(url):
    """Retrieve the contents of a web page.
    The contents are converted to a string before returning it.
    """
with urllib.request.urlopen(url) as my_socket:
dta = str(my_socket.read())
return dta
f = open("git.txt", "w")
the_text = retrieve_page("https://reddit.com")
# print(the_text)
Explanation: Fetching from the web
```
import urllib.request
url = "https://api.github.com/"
destination_filename = "rfc793.txt"
urllib.request.urlretrieve(url, destination_filename)
```
We’ll need to get a few things right before this works:
* The resource we’re trying to fetch must exist! Check this using a browser.
* We’ll need permission to write to the destination filename, and the file will be created in the “current directory” - i.e. the same folder that the Python program is saved in.
* If we are behind a proxy server that requires authentication, (as some students are), this may require some more special handling to work around our proxy. Use a local resource for the purpose of this demonstration!
End of explanation
code_class = ['dana', 'cole', 'kevin', 'connor', 'jaydn', 'patrick',
'ransom', 'skip', 'mercy', 'nick']
rand_class = code_class.copy()
random.shuffle(rand_class)
list(zip(rand_class, rand_class[::-1]))
Explanation: Exercises
Write a program that reads a file and writes out a new file with the lines in reversed order (i.e. the first line in the old file becomes the last one in the new file.)
Write a program that reads a file and prints only those lines that contain the substring snake.
Write a program that reads a text file and produces an output file which is a copy of the file, except the first five columns of each line contain a four digit line number, followed by a space. Start numbering the first line in the output file at 1. Ensure that every line number is formatted to the same width in the output file. Use one of your Python programs as test data for this exercise: your output should be a printed and numbered listing of the Python program.
Write a program that undoes the numbering of the previous exercise: it should read a file with numbered lines and produce another file without line numbers.
Extra: Read through the requests tutorial. Requests is a much better tool for working with HTTP requests than the built in urllib.requests IMHO (it's even recommended in the urllib.requests library!).
Pair project: Boggler
Choosing pairs
End of explanation |
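A possible sketch of a solution to the first exercise; reversed.txt is just an assumed name for the output file.
```
with open("test.txt") as infile:
    lines = infile.readlines()

with open("reversed.txt", "w") as outfile:
    outfile.writelines(reversed(lines))
```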
1,811 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Orbital Elements
We can add particles to a simulation by specifying cartesian components
Step1: Any components not passed automatically default to 0. REBOUND can also accept orbital elements.
Reference bodies
As a reminder, there is a one-to-one mapping between (x,y,z,vx,vy,vz) and orbital elements, and one should always specify what the orbital elements are referenced against (e.g., the central star, the system's barycenter, etc.). The differences betwen orbital elements referenced to these centers differ by $\sim$ the mass ratio of the largest body to the central mass. By default, REBOUND always uses Jacobi elements, which for each particle are always referenced to the center of mass of all particles with lower index in the simulation. This is a useful set for theoretical calculations, and gives a logical behavior as the mass ratio increase, e.g., in the case of a circumbinary planet. Let's set up a binary,
Step2: We always have to pass a semimajor axis (to set a length scale), but any other elements are by default set to 0. Notice that our second star has the same vz as the first one due to the default Jacobi elements. Now we could add a distant planet on a circular orbit,
Step3: This planet is set up relative to the binary center of mass (again due to the Jacobi coordinates), which is probably what we want. But imagine we now want to place a test mass in a tight orbit around the second star. If we passed things as above, the orbital elements would be referenced to the binary/outer-planet center of mass. We can override the default by explicitly passing a primary (any instance of the Particle class)
Step4: All simulations are performed in Cartesian elements, so to avoid the overhead, REBOUND does not update particles' orbital elements as the simulation progresses. However, we can always calculate them when required with sim.calculate_orbits(). Note that REBOUND will always output angles in the range $[-\pi,\pi]$, except the inclination which is always in $[0,\pi]$.
Step5: Notice that there is always one less orbit than there are particles, since orbits are only defined between pairs of particles. We see that we got the first two orbits right, but the last one is way off. The reason is that again the REBOUND default is that we always get Jacobi elements. But we initialized the last particle relative to the second star, rather than the center of mass of all the previous particles.
To get orbital elements relative to a specific body, you can manually use the calculate_orbit method of the Particle class
Step6: though we could have simply avoided this problem by adding bodies from the inside out (second star, test mass, first star, circumbinary planet).
Edge cases and orbital element sets
Different orbital elements lose meaning in various limits, e.g., a planar orbit and a circular orbit. REBOUND therefore allows initialization with several different types of variables that are appropriate in different cases. It's important to keep in mind that the procedure to initialize particles from orbital elements is not exactly invertible, so one can expect discrepant results for elements that become ill defined. For example,
Step7: The problem here is that $\omega$ (the angle from the ascending node to pericenter) is ill-defined for a circular orbit, so it's not clear what we mean when we pass it, and we get spurious results (i.e., $\omega = 0$ rather than 0.1, and $f=0.1$ rather than the default 0). Similarly, $f$, the angle from pericenter to the particle's position, is undefined. However, the true longitude $\theta$, the broken angle from the $x$ axis to the ascending node = $\Omega + \omega + f$, and then to the particle's position, is always well defined
Step8: To be clearer and ensure we get the results we expect, we could instead pass theta to specify the longitude we want, e.g.
Step9: Here we have a planar orbit, in which case the line of nodes becomes ill defined, so $\Omega$ is not a good variable, but we pass it anyway! In this case, $\omega$ is also undefined since it is referenced to the ascending node. Here we get that now these two ill-defined variables get flipped. The appropriate variable is pomega ($\varpi = \Omega + \omega$), which is the angle from the $x$ axis to pericenter
Step10: We can specify the pericenter of the orbit with either $\omega$ or $\varpi$
Step11: Note that if the inclination is exactly zero, REBOUND sets $\Omega$ (which is undefined) to 0, so $\omega = \varpi$.
Finally, we can initialize particles using mean, rather than true, longitudes or anomalies (for example, this might be useful for resonances). We can either use the mean anomaly $M$, which is referenced to pericenter (again ill-defined for circular orbits), or its better-defined counterpart the mean longitude l $= \lambda = \Omega + \omega + M$, which is analogous to $\theta$ above,
Step12: Accuracy
As a test of accuracy and demonstration of issues related to the last section, let's test the numerical stability by intializing particles with small eccentricities and true anomalies, computing their orbital elements back, and comparing the relative error. We choose the inclination and node longitude randomly
Step13: We see that the behavior is poor, which is physically due to $f$ becoming poorly defined at low $e$. If instead we initialize the orbits with the true longitude $\theta$ as discussed above, we get much better results
Step14: Hyperbolic & Parabolic Orbits
REBOUND can also handle hyperbolic orbits, which have negative $a$ and $e>1$
Step15: Currently there is no support for exactly parabolic orbits, but we can get a close approximation by passing a nearby hyperbolic orbit where we can specify the pericenter = $|a|(e-1)$ with $a$ and $e$. For example, for a 0.1 AU pericenter,
Step16: Retrograde Orbits
Orbital elements can be counterintuitive for retrograde orbits, but REBOUND tries to sort them out consistently. This can lead to some initially surprising results. For example, | Python Code:
import rebound
sim = rebound.Simulation()
sim.add(m=1., x=1., vz = 2.)
Explanation: Orbital Elements
We can add particles to a simulation by specifying cartesian components:
End of explanation
sim.add(m=1., a=1.)
sim.status()
Explanation: Any components not passed automatically default to 0. REBOUND can also accept orbital elements.
Reference bodies
As a reminder, there is a one-to-one mapping between (x,y,z,vx,vy,vz) and orbital elements, and one should always specify what the orbital elements are referenced against (e.g., the central star, the system's barycenter, etc.). The differences between orbital elements referenced to these centers differ by $\sim$ the mass ratio of the largest body to the central mass. By default, REBOUND always uses Jacobi elements, which for each particle are always referenced to the center of mass of all particles with lower index in the simulation. This is a useful set for theoretical calculations, and gives a logical behavior as the mass ratio increases, e.g., in the case of a circumbinary planet. Let's set up a binary,
End of explanation
sim.add(m=1.e-3, a=100.)
Explanation: We always have to pass a semimajor axis (to set a length scale), but any other elements are by default set to 0. Notice that our second star has the same vz as the first one due to the default Jacobi elements. Now we could add a distant planet on a circular orbit,
End of explanation
sim.add(primary=sim.particles[1], a=0.01)
Explanation: This planet is set up relative to the binary center of mass (again due to the Jacobi coordinates), which is probably what we want. But imagine we now want to place a test mass in a tight orbit around the second star. If we passed things as above, the orbital elements would be referenced to the binary/outer-planet center of mass. We can override the default by explicitly passing a primary (any instance of the Particle class):
End of explanation
orbits = sim.calculate_orbits()
for orbit in orbits:
print(orbit)
Explanation: All simulations are performed in Cartesian elements, so to avoid the overhead, REBOUND does not update particles' orbital elements as the simulation progresses. However, we can always calculate them when required with sim.calculate_orbits(). Note that REBOUND will always output angles in the range $[-\pi,\pi]$, except the inclination which is always in $[0,\pi]$.
End of explanation
print(sim.particles[3].calculate_orbit(sim, primary=sim.particles[1]))
Explanation: Notice that there is always one less orbit than there are particles, since orbits are only defined between pairs of particles. We see that we got the first two orbits right, but the last one is way off. The reason is that again the REBOUND default is that we always get Jacobi elements. But we initialized the last particle relative to the second star, rather than the center of mass of all the previous particles.
To get orbital elements relative to a specific body, you can manually use the calculate_orbit method of the Particle class:
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0., inc=0.1, Omega=0.3, omega=0.1)
orbits = sim.calculate_orbits()
print(orbits[0])
Explanation: though we could have simply avoided this problem by adding bodies from the inside out (second star, test mass, first star, circumbinary planet).
Edge cases and orbital element sets
Different orbital elements lose meaning in various limits, e.g., a planar orbit and a circular orbit. REBOUND therefore allows initialization with several different types of variables that are appropriate in different cases. It's important to keep in mind that the procedure to initialize particles from orbital elements is not exactly invertible, so one can expect discrepant results for elements that become ill defined. For example,
End of explanation
print(orbits[0].theta)
Explanation: The problem here is that $\omega$ (the angle from the ascending node to pericenter) is ill-defined for a circular orbit, so it's not clear what we mean when we pass it, and we get spurious results (i.e., $\omega = 0$ rather than 0.1, and $f=0.1$ rather than the default 0). Similarly, $f$, the angle from pericenter to the particle's position, is undefined. However, the true longitude $\theta$, the broken angle from the $x$ axis to the ascending node = $\Omega + \omega + f$, and then to the particle's position, is always well defined:
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0., inc=0.1, Omega=0.3, theta = 0.4)
orbits = sim.calculate_orbits()
print(orbits[0].theta)
import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.2, Omega=0.1)
orbits = sim.calculate_orbits()
print(orbits[0])
Explanation: To be clearer and ensure we get the results we expect, we could instead pass theta to specify the longitude we want, e.g.
End of explanation
print(orbits[0].pomega)
Explanation: Here we have a planar orbit, in which case the line of nodes becomes ill defined, so $\Omega$ is not a good variable, but we pass it anyway! In this case, $\omega$ is also undefined since it is referenced to the ascending node. Here we get that now these two ill-defined variables get flipped. The appropriate variable is pomega ($\varpi = \Omega + \omega$), which is the angle from the $x$ axis to pericenter:
End of explanation
import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.2, pomega=0.1)
orbits = sim.calculate_orbits()
print(orbits[0])
Explanation: We can specify the pericenter of the orbit with either $\omega$ or $\varpi$:
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.1, Omega=0.3, M = 0.1)
sim.add(a=1., e=0.1, Omega=0.3, l = 0.4)
orbits = sim.calculate_orbits()
print(orbits[0].l)
print(orbits[1].l)
import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.1, omega=1.)
orbits = sim.calculate_orbits()
print(orbits[0])
Explanation: Note that if the inclination is exactly zero, REBOUND sets $\Omega$ (which is undefined) to 0, so $\omega = \varpi$.
Finally, we can initialize particles using mean, rather than true, longitudes or anomalies (for example, this might be useful for resonances). We can either use the mean anomaly $M$, which is referenced to pericenter (again ill-defined for circular orbits), or its better-defined counterpart the mean longitude l $= \lambda = \Omega + \omega + M$, which is analogous to $\theta$ above,
End of explanation
import random
import numpy as np
def simulation(par):
e,f = par
e = 10**e
f = 10**f
sim = rebound.Simulation()
sim.add(m=1.)
a = 1.
inc = random.random()*np.pi
Omega = random.random()*2*np.pi
sim.add(m=0.,a=a,e=e,inc=inc,Omega=Omega, f=f)
o=sim.calculate_orbits()[0]
if o.f < 0: # avoid wrapping issues
o.f += 2*np.pi
err = max(np.fabs(o.e-e)/e, np.fabs(o.f-f)/f)
return err
random.seed(1)
N = 100
es = np.linspace(-16.,-1.,N)
fs = np.linspace(-16.,-1.,N)
params = [(e,f) for e in es for f in fs]
pool=rebound.InterruptiblePool()
res = pool.map(simulation, params)
res = np.array(res).reshape(N,N)
res = np.nan_to_num(res)
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import ticker
from matplotlib.colors import LogNorm
import matplotlib
f,ax = plt.subplots(1,1,figsize=(7,5))
extent=[fs.min(), fs.max(), es.min(), es.max()]
ax.set_xlim(extent[0], extent[1])
ax.set_ylim(extent[2], extent[3])
ax.set_xlabel(r"true anomaly (f)")
ax.set_ylabel(r"eccentricity")
im = ax.imshow(res, norm=LogNorm(), vmax=1., vmin=1.e-16, aspect='auto', origin="lower", interpolation='nearest', cmap="RdYlGn_r", extent=extent)
cb = plt.colorbar(im, ax=ax)
cb.solids.set_rasterized(True)
cb.set_label("Relative Error")
Explanation: Accuracy
As a test of accuracy and demonstration of issues related to the last section, let's test the numerical stability by intializing particles with small eccentricities and true anomalies, computing their orbital elements back, and comparing the relative error. We choose the inclination and node longitude randomly:
End of explanation
def simulation(par):
e,theta = par
e = 10**e
theta = 10**theta
sim = rebound.Simulation()
sim.add(m=1.)
a = 1.
inc = random.random()*np.pi
Omega = random.random()*2*np.pi
omega = random.random()*2*np.pi
sim.add(m=0.,a=a,e=e,inc=inc,Omega=Omega, theta=theta)
o=sim.calculate_orbits()[0]
if o.theta < 0:
o.theta += 2*np.pi
err = max(np.fabs(o.e-e)/e, np.fabs(o.theta-theta)/theta)
return err
random.seed(1)
N = 100
es = np.linspace(-16.,-1.,N)
thetas = np.linspace(-16.,-1.,N)
params = [(e,theta) for e in es for theta in thetas]
pool=rebound.InterruptiblePool()
res = pool.map(simulation, params)
res = np.array(res).reshape(N,N)
res = np.nan_to_num(res)
f,ax = plt.subplots(1,1,figsize=(7,5))
extent=[thetas.min(), thetas.max(), es.min(), es.max()]
ax.set_xlim(extent[0], extent[1])
ax.set_ylim(extent[2], extent[3])
ax.set_xlabel(r"true longitude (\theta)")
ax.set_ylabel(r"eccentricity")
im = ax.imshow(res, norm=LogNorm(), vmax=1., vmin=1.e-16, aspect='auto', origin="lower", interpolation='nearest', cmap="RdYlGn_r", extent=extent)
cb = plt.colorbar(im, ax=ax)
cb.solids.set_rasterized(True)
cb.set_label("Relative Error")
Explanation: We see that the behavior is poor, which is physically due to $f$ becoming poorly defined at low $e$. If instead we initialize the orbits with the true longitude $\theta$ as discussed above, we get much better results:
End of explanation
sim.add(a=-0.2, e=1.4)
sim.status()
Explanation: Hyperbolic & Parabolic Orbits
REBOUND can also handle hyperbolic orbits, which have negative $a$ and $e>1$:
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
q = 0.1
a=-1.e14
e=1.+q/np.fabs(a)
sim.add(a=a, e=e)
print(sim.calculate_orbits()[0])
Explanation: Currently there is no support for exactly parabolic orbits, but we can get a close approximation by passing a nearby hyperbolic orbit where we can specify the pericenter = $|a|(e-1)$ with $a$ and $e$. For example, for a 0.1 AU pericenter,
End of explanation
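As a quick sanity check (a sketch, not in the original notebook), the pericenter $q = |a|(e-1)$ of the orbit we just added can be recomputed from the returned elements.
```
o = sim.calculate_orbits()[0]
print(np.fabs(o.a)*(o.e - 1.))   # should be close to the requested 0.1 AU pericenter
```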
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1.,inc=np.pi,e=0.1, Omega=0., pomega=1.)
print(sim.calculate_orbits()[0])
Explanation: Retrograde Orbits
Orbital elements can be counterintuitive for retrograde orbits, but REBOUND tries to sort them out consistently. This can lead to some initially surprising results. For example,
End of explanation |
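To see exactly how REBOUND assigned the individual angles in this retrograde case, here is a small follow-up sketch printing them separately.
```
o = sim.calculate_orbits()[0]
print(o.inc, o.Omega, o.omega, o.pomega)
```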
1,812 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Variational Autoencoder
Step5: Task
Step6: Visualize reconstruction quality
Step7: Illustrating latent space
Next, we train a VAE with a 2d latent space and illustrate how the encoder (the recognition network) encodes some of the labeled inputs (collapsing the Gaussian distribution in latent space to its mean). This gives us some insights into the structure of the learned manifold (latent space)
Step8: An other way of getting insights into the latent space is to use the generator network to plot reconstrunctions at the positions in the latent space for which they have been generated | Python Code:
import numpy as np
import tensorflow as tf
import tensorflow.contrib.slim as slim
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
import matplotlib.pyplot as plt
%matplotlib inline
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('fashion-mnist/data/fashion', one_hot=True)
n_samples = mnist.train.num_examples
Explanation: Variational Autoencoder
End of explanation
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3)
config=tf.ConfigProto(gpu_options=gpu_options)
class VAE:
def __init__(self, network_architecture, transfer_fct=tf.nn.softplus,
learning_rate=0.001, batch_size=100):
self.network_architecture = network_architecture
self.transfer_fct = transfer_fct
self.learning_rate = learning_rate
self.batch_size = batch_size
self.x = tf.placeholder(tf.float32, [None, network_architecture["n_input"]])
self._create_network()
self._create_loss_optimizer()
init = tf.global_variables_initializer()
self.sess = tf.InteractiveSession()
self.sess.run(init)
def _create_network(self):
# Use recognition network to determine mean and
# (log) variance of Gaussian distribution in latent
# space
self.z_mean, self.z_log_sigma_sq = self._recognition_network()
# Draw one sample z from Gaussian distribution
n_z = self.network_architecture["n_z"]
# tip: use tf.random_normal
eps = tf.random_normal(shape=tf.shape(self.z_log_sigma_sq))
# z = mu + sigma*epsilon
self.z = tf.add(self.z_mean,
tf.multiply(tf.sqrt(tf.exp(self.z_log_sigma_sq)), eps))
# Use generator to determine mean of
# Bernoulli distribution of reconstructed input
self.x_reconstr_mean = self._generator_network()
def _recognition_network(self):
layer_1 = slim.fully_connected(self.x, self.network_architecture['n_hidden_recog_1'])
layer_2 = slim.fully_connected(layer_1, self.network_architecture['n_hidden_recog_2'])
z_mean = slim.fully_connected(layer_2, self.network_architecture['n_z'],
activation_fn=None)
z_log_sigma_sq = slim.fully_connected(layer_2, self.network_architecture['n_z'])
return z_mean, z_log_sigma_sq
def _generator_network(self):
layer_1 = slim.fully_connected(self.z, self.network_architecture['n_hidden_recog_1'])
layer_2 = slim.fully_connected(layer_1, self.network_architecture['n_hidden_recog_2'])
x_reconstr_mean = slim.fully_connected(layer_2, self.network_architecture['n_input'],
activation_fn=None)
return x_reconstr_mean
def _create_loss_optimizer(self):
reconstr_loss = tf.reduce_sum(tf.square(tf.subtract(self.x, self.x_reconstr_mean)), axis=1)
net_normal_distr = tf.distributions.Normal(loc=self.z_mean,
scale=tf.sqrt(tf.exp(self.z_log_sigma_sq)))
ideal_normal_distr = tf.distributions.Normal(loc=0., scale=1.)
latent_loss = tf.reduce_sum(tf.distributions.kl_divergence(net_normal_distr, ideal_normal_distr), axis=1)
self.cost = tf.reduce_mean(reconstr_loss + latent_loss) # average over batch
self.optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate).minimize(self.cost)
def partial_fit(self, X):
        """Train model based on mini-batch of input data.
        Return cost of mini-batch.
        """
opt, cost = self.sess.run((self.optimizer, self.cost),
feed_dict={self.x: X})
return cost
def transform(self, X):
        """Transform data by mapping it into the latent space."""
# Note: This maps to mean of distribution, we could alternatively
# sample from Gaussian distribution
return self.sess.run(self.z_mean, feed_dict={self.x: X})
def generate(self, z_mu=None):
        """Generate data by sampling from latent space.
        If z_mu is not None, data for this point in latent space is
        generated. Otherwise, z_mu is drawn from prior in latent
        space.
        """
if z_mu is None:
            z_mu = np.random.normal(size=(1, self.network_architecture["n_z"]))  # 2-D so it can be fed into self.z
# Note: This maps to mean of distribution, we could alternatively
# sample from Gaussian distribution
return self.sess.run(self.x_reconstr_mean,
feed_dict={self.z: z_mu})
def reconstruct(self, X):
        """Use VAE to reconstruct given data."""
return self.sess.run(self.x_reconstr_mean,
feed_dict={self.x: X})
def train(network_architecture, learning_rate=0.001,
batch_size=1000, training_epochs=10, display_step=5):
vae = VAE(network_architecture,
learning_rate=learning_rate,
batch_size=batch_size)
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(n_samples / batch_size)
# Loop over all batches
for i in range(total_batch):
batch_xs, _ = mnist.train.next_batch(batch_size)
# Fit training using batch data
cost = vae.partial_fit(batch_xs)
# Compute average loss
avg_cost += cost / n_samples * batch_size
# Display logs per epoch step
if epoch % display_step == 0:
print("Epoch:", '%04d' % (epoch+1),
"cost=", "{:.9f}".format(avg_cost))
return vae
Explanation: Task: fill the gaps in VAE
End of explanation
network_architecture = \
dict(n_hidden_recog_1=500, # 1st layer encoder neurons
n_hidden_recog_2=500, # 2nd layer encoder neurons
n_hidden_gener_1=500, # 1st layer decoder neurons
n_hidden_gener_2=500, # 2nd layer decoder neurons
n_input=784, # MNIST data input (img shape: 28*28)
n_z=20) # dimensionality of latent space
vae = train(network_architecture, training_epochs=128)
x_sample = mnist.test.next_batch(1000)[0]
x_reconstruct = vae.reconstruct(x_sample)
plt.figure(figsize=(8, 12))
for i in range(5):
plt.subplot(5, 2, 2*i + 1)
plt.imshow(x_sample[i].reshape(28, 28), vmin=0, vmax=1, cmap="gray")
plt.title("Test input")
plt.colorbar()
plt.subplot(5, 2, 2*i + 2)
plt.imshow(x_reconstruct[i].reshape(28, 28), vmin=0, vmax=1, cmap="gray")
plt.title("Reconstruction")
plt.colorbar()
plt.tight_layout()
Explanation: Visualize reconstruction quality
End of explanation
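We can also sample new images directly from the prior (a sketch, not part of the original notebook); generate feeds the latent tensor, so the random draws are given an explicit batch dimension.
```
z_samples = np.random.normal(size=(5, network_architecture["n_z"]))
x_generated = vae.generate(z_samples)
plt.figure(figsize=(10, 2))
for i in range(5):
    plt.subplot(1, 5, i + 1)
    plt.imshow(x_generated[i].reshape(28, 28), cmap="gray")
    plt.axis('off')
```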
network_architecture = \
dict(n_hidden_recog_1=500, # 1st layer encoder neurons
n_hidden_recog_2=500, # 2nd layer encoder neurons
n_hidden_gener_1=500, # 1st layer decoder neurons
n_hidden_gener_2=500, # 2nd layer decoder neurons
n_input=784, # MNIST data input (img shape: 28*28)
n_z=2) # dimensionality of latent space
vae_2d = train(network_architecture, training_epochs=128)
x_sample, y_sample = mnist.test.next_batch(5000)
z_mu = vae_2d.transform(x_sample)
plt.figure(figsize=(8, 6))
plt.scatter(z_mu[:, 0], z_mu[:, 1], c=np.argmax(y_sample, 1))
plt.colorbar()
plt.grid()
Explanation: Illustrating latent space
Next, we train a VAE with a 2d latent space and illustrate how the encoder (the recognition network) encodes some of the labeled inputs (collapsing the Gaussian distribution in latent space to its mean). This gives us some insights into the structure of the learned manifold (latent space)
End of explanation
nx = ny = 20
x_values = np.linspace(-3, 3, nx)
y_values = np.linspace(-3, 3, ny)
canvas = np.empty((28*ny, 28*nx))
for i, yi in enumerate(x_values):
for j, xi in enumerate(y_values):
z_mu = np.array([[xi, yi]]*vae.batch_size)
x_mean = vae_2d.generate(z_mu)
canvas[(nx-i-1)*28:(nx-i)*28, j*28:(j+1)*28] = x_mean[0].reshape(28, 28)
plt.figure(figsize=(8, 10))
Xi, Yi = np.meshgrid(x_values, y_values)
plt.imshow(canvas, origin="upper", cmap="gray")
plt.tight_layout()
Explanation: Another way of getting insights into the latent space is to use the generator network to plot reconstructions at the positions in the latent space for which they have been generated:
End of explanation |
1,813 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Preprocessing using Dataflow </h1>
This notebook illustrates
Step1: Run the command again if you are getting oauth2client error.
Note
Step2: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this.
Step4: <h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
Step6: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Instead of using Beam/Dataflow, I had three other options
Step7: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step. | Python Code:
pip install --user apache-beam[gcp]
Explanation: <h1> Preprocessing using Dataflow </h1>
This notebook illustrates:
<ol>
<li> Creating datasets for Machine Learning using Dataflow
</ol>
<p>
While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
End of explanation
import apache_beam as beam
print(beam.__version__)
Explanation: Run the command again if you are getting oauth2client error.
Note: You may ignore the following responses in the cell output above:
ERROR (in Red text) related to: witwidget-gpu, fairing
WARNING (in Yellow text) related to: hdfscli, hdfscli-avro, pbr, fastavro, gen_client
<b>Restart</b> the kernel before proceeding further (On the Notebook menu - <b>Kernel</b> - <b>Restart Kernel<b>).
Make sure the Dataflow API is enabled by going to this link. Ensure that you've installed Beam by importing it and printing the version number.
End of explanation
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
Explanation: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this.
End of explanation
# Create SQL query using natality data after the year 2000
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
# Call BigQuery and examine in dataframe
from google.cloud import bigquery
df = bigquery.Client().query(query + " LIMIT 100").to_dataframe()
df.head()
Explanation: <h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
End of explanation
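As a quick sketch (not part of the original notebook), the sampled dataframe already lets us check how the hash-based split used later would divide the data.
```
# roughly 75% of rows should fall into the training split, mirroring ABS(MOD(hashmonth, 4)) < 3
is_train = df['hashmonth'].abs() % 4 < 3
is_train.mean()
```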
import datetime, os
def to_csv(rowdict):
# Pull columns from BQ and create a line
import hashlib
import copy
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks'.split(',')
# Create synthetic data where we assume that no ultrasound has been performed
# and so we don't know sex of the baby. Let's assume that we can tell the difference
# between single and multiple, but that the errors rates in determining exact number
# is difficult in the absence of an ultrasound.
no_ultrasound = copy.deepcopy(rowdict)
w_ultrasound = copy.deepcopy(rowdict)
no_ultrasound['is_male'] = 'Unknown'
if rowdict['plurality'] > 1:
no_ultrasound['plurality'] = 'Multiple(2+)'
else:
no_ultrasound['plurality'] = 'Single(1)'
# Change the plurality column to strings
w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1]
# Write out two rows for each input row, one with ultrasound and one without
for result in [no_ultrasound, w_ultrasound]:
data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS])
key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key
yield str('{},{}'.format(data, key))
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)
try:
subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split())
except:
pass
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'num_workers': 4,
'max_num_workers': 5
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
p = beam.Pipeline(RUNNER, options = opts)
    query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
AND month > 0
    """
if in_test_mode:
query = query + ' LIMIT 100'
for step in ['train', 'eval']:
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
(p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))
| '{}_csv'.format(step) >> beam.FlatMap(to_csv)
| '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))
)
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
Explanation: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Instead of using Beam/Dataflow, I had three other options:
Use Cloud Dataprep to visually author a Dataflow pipeline. Cloud Dataprep also allows me to explore the data, so we could have avoided much of the handcoding of Python/Seaborn calls above as well!
Read from BigQuery directly using TensorFlow.
Use the BigQuery console (http://bigquery.cloud.google.com) to run a Query and save the result as a CSV file. For larger datasets, you may have to select the option to "allow large results" and save the result into a CSV file on Google Cloud Storage.
<p>
However, in this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed. If I didn't need preprocessing, I could have used the web console. Also, I prefer to script it out rather than run queries on the user interface, so I am using Cloud Dataflow for the preprocessing.
Note that after you launch this, the actual processing is happening on the cloud. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 20 minutes for me.
<p>
If you wish to continue without doing this step, you can copy my preprocessed output:
<pre>
gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc gs://your-bucket/
</pre>
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
Explanation: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.
End of explanation |
1,814 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with time series data
Some imports
Step1: Case study
Step2: I downloaded and preprocessed some of the data (python-airbase)
Step3: As you can see, the missing values are indicated by -9999. This can be recognized by read_csv by passing the na_values keyword
Step4: Exploring the data
Some useful methods
Step5: info()
Step6: Getting some basic summary statistics about the data with describe
Step7: Quickly visualizing the data
Step8: This does not say too much ..
We can select part of the data (eg the latest 500 data points)
Step9: Or we can use some more advanced time series features -> next section!
Working with time series data
When we ensure the DataFrame has a DatetimeIndex, time-series related functionality becomes available
Step10: Indexing a time series works with strings
Step11: A nice feature is "partial string" indexing, where we can do implicit slicing by providing a partial datetime string.
E.g. all data of 2012
Step12: Normally you would expect this to access a column named '2012', but for a DatetimeIndex, pandas also tries to interpret it as a datetime slice.
Or all data of January up to March 2012
Step13: Time and date components can be accessed from the index
Step14: <div class="alert alert-success">
<b>EXERCISE</b>
Step15: <div class="alert alert-success">
<b>EXERCISE</b>
Step16: <div class="alert alert-success">
<b>EXERCISE</b>
Step17: <div class="alert alert-success">
<b>EXERCISE</b>
Step18: The power of pandas
Step19: By default, resample takes the mean as aggregation function, but other methods can also be specified
Step20: The string to specify the new time frequency
Step21: <div class="alert alert-success">
<b>QUESTION</b>
Step22: <div class="alert alert-success">
<b>QUESTION</b>
Step23: <div class="alert alert-success">
<b>QUESTION</b>
Step24: <div class="alert alert-success">
<b>QUESTION</b>
Step25: Combination with groupby
resample can actually be seen as a specific kind of groupby. E.g. taking annual means with data.resample('A', 'mean') is equivalent to data.groupby(data.index.year).mean() (only the result of resample still has a DatetimeIndex).
Step26: But, groupby is more flexible and can also do resamples that do not result in a new continuous time series, e.g. by grouping by the hour of the day to get the diurnal cycle.
<div class="alert alert-success">
<b>QUESTION</b>
Step27: 2. Now, we can calculate the mean of each month over the different years
Step28: 3. plot the typical monthly profile of the different stations
Step29: <div class="alert alert-success">
<b>QUESTION</b>
Step30: <div class="alert alert-success">
<b>QUESTION</b>
Step31: <div class="alert alert-success">
<b>QUESTION</b>
Step32: Add a column indicating week/weekend
Step33: <div class="alert alert-success">
<b>QUESTION</b>
Step34: <div class="alert alert-success">
<b>QUESTION</b>
Step35: <div class="alert alert-success">
<b>QUESTION</b>
Step36: <div class="alert alert-success">
<b>QUESTION</b> | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
try:
import seaborn
except:
pass
pd.options.display.max_rows = 8
Explanation: Working with time series data
Some imports:
End of explanation
from IPython.display import HTML
HTML('<iframe src=http://www.eea.europa.eu/data-and-maps/data/airbase-the-european-air-quality-database-8#tab-data-by-country width=900 height=350></iframe>')
Explanation: Case study: air quality data of European monitoring stations (AirBase)
AirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from Europe.
End of explanation
!head -5 data/airbase_data.csv
Explanation: I downloaded and preprocessed some of the data (python-airbase): data/airbase_data.csv. This file includes the hourly concentrations of NO2 for 4 different measurement stations:
FR04037 (PARIS 13eme): urban background site at Square de Choisy
FR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia
BETR802: urban traffic site in Antwerp, Belgium
BETN029: rural background site in Houtem, Belgium
See http://www.eea.europa.eu/themes/air/interactive/no2
Importing the data
Import the csv file:
End of explanation
data = pd.read_csv('data/airbase_data.csv', index_col=0, parse_dates=True, na_values=[-9999])
Explanation: As you can see, the missing values are indicated by -9999. This can be recognized by read_csv by passing the na_values keyword:
End of explanation
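A quick sanity check (sketch) that the -9999 sentinels were indeed parsed as missing values:
```
# number of missing values per station
data.isnull().sum()
```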
data.head(3)
data.tail()
Explanation: Exploring the data
Some useful methods:
head and tail
End of explanation
data.info()
Explanation: info()
End of explanation
data.describe()
Explanation: Getting some basic summary statistics about the data with describe:
End of explanation
data.plot(kind='box', ylim=[0,250])
data['BETR801'].plot(kind='hist', bins=50)
data.plot(figsize=(12,6))
Explanation: Quickly visualizing the data
End of explanation
data[-500:].plot(figsize=(12,6))
Explanation: This does not say too much ..
We can select part of the data (eg the latest 500 data points):
End of explanation
data.index
Explanation: Or we can use some more advanced time series features -> next section!
Working with time series data
When we ensure the DataFrame has a DatetimeIndex, time-series related functionality becomes available:
End of explanation
data["2010-01-01 09:00": "2010-01-01 12:00"]
Explanation: Indexing a time series works with strings:
End of explanation
data['2012']
Explanation: A nice feature is "partial string" indexing, where we can do implicit slicing by providing a partial datetime string.
E.g. all data of 2012:
End of explanation
data['2012-01':'2012-03']
Explanation: Normally you would expect this to access a column named '2012', but for a DatetimeIndex, pandas also tries to interpret it as a datetime slice.
Or all data of January up to March 2012:
End of explanation
data.index.hour
data.index.year
Explanation: Time and date components can be accessed from the index:
End of explanation
data = data['1999':]
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: select all data starting from 1999
</div>
End of explanation
data[data.index.month == 1]
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: select all data in January for all different years
</div>
End of explanation
data['months'] = data.index.month
data[data['months'].isin([1, 2, 3])]
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: select all data in January, February and March for all different years
</div>
End of explanation
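# (Alternative sketch, not in the original notebook.) The same selection without a
# helper column, using the month attribute of the DatetimeIndex directly:
data[np.in1d(data.index.month, [1, 2, 3])]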
data[(data.index.hour >= 8) & (data.index.hour < 20)]
data.between_time('08:00', '20:00')
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: select all 'daytime' data (between 8h and 20h) for all days
</div>
End of explanation
data.resample('D').head()
Explanation: The power of pandas: resample
A very powerful method is resample: converting the frequency of the time series (e.g. from hourly to daily data).
The time series has a frequency of 1 hour. I want to change this to daily:
End of explanation
data.resample('D', how='max').head()
Explanation: By default, resample takes the mean as aggregation function, but other methods can also be specified:
End of explanation
data.resample('M').plot() # 'A'
# data['2012'].resample('D').plot()
Explanation: The string to specify the new time frequency: http://pandas.pydata.org/pandas-docs/dev/timeseries.html#offset-aliases
These strings can also be combined with numbers, e.g. '10D'.
Further exploring the data:
End of explanation
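# (Added illustration of the combined offset strings mentioned above, e.g. a 10-daily mean.)
data.resample('10D', how='mean').head()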
data.loc['2009':, 'FR04037'].resample('M', how='mean').plot()
data.loc['2009':, 'FR04037'].resample('M', how='median').plot()
data.loc['2009':, 'FR04037'].resample('M', how=['mean', 'median']).plot()
Explanation: <div class="alert alert-success">
<b>QUESTION</b>: plot the monthly mean and median concentration of the 'FR04037' station for the years 2009-2012
</div>
End of explanation
daily = data['FR04037'].resample('D')
daily.resample('M', how=['min', 'max']).plot()
Explanation: <div class="alert alert-success">
<b>QUESTION</b>: plot the monthly minimum and maximum daily concentration of the 'BETR801' station
</div>
End of explanation
data['2012'].mean().plot(kind='bar')
Explanation: <div class="alert alert-success">
<b>QUESTION</b>: make a bar plot of the mean of the stations in the year 2012
</div>
End of explanation
data.resample('A').plot()
data.mean(axis=1).resample('A').plot(color='k', linestyle='--', linewidth=4)
Explanation: <div class="alert alert-success">
<b>QUESTION</b>: Plot the evolution of the yearly averages, together with the overall mean of all stations.
</div>
End of explanation
data.groupby(data.index.year).mean().plot()
Explanation: Combination with groupby
resample can actually be seen as a specific kind of groupby. E.g. taking annual means with data.resample('A', 'mean') is equivalent to data.groupby(data.index.year).mean() (only the result of resample still has a DatetimeIndex).
End of explanation
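# (Added check of the equivalence described above: the values match, only the index type differs.)
annual_resample = data.resample('A', how='mean')
annual_groupby = data.groupby(data.index.year).mean()
np.allclose(annual_resample.values, annual_groupby.values, equal_nan=True)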
data['month'] = data.index.month
Explanation: But, groupby is more flexible and can also do resamples that do not result in a new continuous time series, e.g. by grouping by the hour of the day to get the diurnal cycle.
<div class="alert alert-success">
<b>QUESTION</b>: what does the *typical monthly profile* look like for the different stations?
</div>
1. add a column to the dataframe that indicates the month (integer value of 1 to 12):
End of explanation
data.groupby('month').mean()
Explanation: 2. Now, we can calculate the mean of each month over the different years:
End of explanation
data.groupby('month').mean().plot()
Explanation: 3. plot the typical monthly profile of the different stations:
End of explanation
df2011 = data['2011']
df2011.groupby(df2011.index.week)[['BETN029', 'BETR801']].quantile(0.95).plot()
data = data.drop('month', axis=1)
Explanation: <div class="alert alert-success">
<b>QUESTION</b>: plot the weekly 95% percentiles of the concentration in 'BETR801' and 'BETN029' for 2011
</div>
End of explanation
data.groupby(data.index.hour).mean().plot()
Explanation: <div class="alert alert-success">
<b>QUESTION</b>: The typical diurnal profile for the different stations?
</div>
End of explanation
data.index.weekday?
data['weekday'] = data.index.weekday
Explanation: <div class="alert alert-success">
<b>QUESTION</b>: What is the difference in the typical diurnal profile between week and weekend days?
</div>
End of explanation
data['weekend'] = data['weekday'].isin([5, 6])
data_weekend = data.groupby(['weekend', data.index.hour]).mean()
data_weekend.head()
data_weekend_FR04012 = data_weekend['FR04012'].unstack(level=0)
data_weekend_FR04012.head()
data_weekend_FR04012.plot()
data = data.drop(['weekday', 'weekend'], axis=1)
Explanation: Add a column indicating week/weekend
End of explanation
exceedances = data > 200
# group by year and count exceedances (sum of boolean)
exceedances = exceedances.groupby(exceedances.index.year).sum()
exceedances
ax = exceedances.loc[2005:].plot(kind='bar')
ax.axhline(18, color='k', linestyle='--')
Explanation: <div class="alert alert-success">
<b>QUESTION</b>: What is the number of exceedances of hourly values above the European limit of 200 µg/m3?
</div>
End of explanation
yearly = data['2000':].resample('A')
(yearly > 40).sum()
yearly.plot()
plt.axhline(40, linestyle='--', color='k')
Explanation: <div class="alert alert-success">
<b>QUESTION</b>: And are there exceedances of the yearly limit value of 40 µg/m3 since 2000?
</div>
End of explanation
# add a weekday and week column
data['weekday'] = data.index.weekday
data['week'] = data.index.week
data.head()
# pivot table so that the weekdays are the different columns
data_pivoted = data['2012'].pivot_table(columns='weekday', index='week', values='FR04037')
data_pivoted.head()
box = data_pivoted.boxplot()
Explanation: <div class="alert alert-success">
<b>QUESTION</b>: Visualize the typical week profile for the different stations as boxplots.
</div>
Tip: the boxplot method of a DataFrame expects the data for the different boxes in different columns.
End of explanation
data[['BETR801', 'BETN029', 'FR04037', 'FR04012']].corr()
data[['BETR801', 'BETN029', 'FR04037', 'FR04012']].resample('D').corr()
Explanation: <div class="alert alert-success">
<b>QUESTION</b>: Calculate the correlation between the different stations
</div>
End of explanation |
1,815 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a name="pagetop"></a>
<div style="width
Step1: <a name="plotting"></a>
Plotting the Data
To plot our data we'll be using MetPy's new declarative plotting functionality. You can write lots of matplotlib based code, but this interface greatly reduces the number of lines you need to write to get a great starting plot and then lets you customize it. The declarative plotting interface consists of three fundamental objects/concepts
Step2: Let's start out with the smallest element, the plot, and build up to the largest, the panel container.
First, we'll make the ImagePlot
Step3: Next, we'll make the panel that our image will go into, the MapPanel object and add the image to the plots on the panel.
Step4: Finally, we make the PanelContainer and add the panel to its container. Remember that since we can have multiple plots on a panel and multiple panels on a plot, we use lists. In this case is just happens to be a list of length 1.
Step5: Unlike working with matplotlib directly in the notebooks, this figure hasn't actually been rendered yet. Calling the show method of the panel container builds up everything, renders, and shows it to us.
Step6: Exercise
Look at the documentation for the ImagePlot here and figure out how to set the colormap of the image. For this image, let's go with the WVCIMSS_r colormap as this is a mid-level water vapor image. Set the range for the colormap to 195-265 K.
BONUS
Step7: Solution | Python Code:
from siphon.catalog import TDSCatalog
from datetime import datetime
# Create variables for URL generation
image_date = datetime.utcnow().date()
region = 'Mesoscale-1'
channel = 8
# Create the URL to provide to siphon
data_url = ('https://thredds.ucar.edu/thredds/catalog/satellite/goes/east/products/'
f'CloudAndMoistureImagery/{region}/Channel{channel:02d}/'
f'{image_date:%Y%m%d}/catalog.xml')
cat = TDSCatalog(data_url)
dataset = cat.datasets[1]
print(dataset)
ds = dataset.remote_access(use_xarray=True)
print(ds)
Explanation: <a name="pagetop"></a>
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;"><img src="https://pbs.twimg.com/profile_images/1187259618/unidata_logo_rgb_sm_400x400.png" alt="Unidata Logo" style="height: 98px;"></div>
<h1>Declarative Plotting with Satellite Data</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:300 px"><img src="https://unidata.github.io/MetPy/latest/_images/sphx_glr_GINI_Water_Vapor_001.png" alt="Example Satellite Image" style="height: 350px;"></div>
Overview:
Teaching: 20 minutes
Exercises: 15 minutes
Questions
How can satellite data be accessed with siphon?
How can maps of satellite data be made using the declarative plotting interface?
Table of Contents
<a href="#dataaccess">Accessing data with Siphon</a>
<a href="#plotting">Plotting the data</a>
<a name="dataaccess"></a>
Accessing data with Siphon
As we saw with the PlottingSatelliteData notebook, GOES 16/17 data is available via the Unidata THREDDS server and can be accessed with siphon. We make use of fstrings in order to provide date, region, and channel variables to the URL string.
End of explanation
from metpy.plots import ImagePlot, MapPanel, PanelContainer
%matplotlib inline
Explanation: <a name="plotting"></a>
Plotting the Data
To plot our data we'll be using MetPy's new declarative plotting functionality. You can write lots of matplotlib based code, but this interface greatly reduces the number of lines you need to write to get a great starting plot and then lets you customize it. The declarative plotting interface consists of three fundamental objects/concepts:
Plot - This is the actual representation of the data and can be ImagePlot, ContourPlot, or Plot2D.
Panel - This is a single panel (i.e. coordinate system). Panels contain plots. Currently the MapPanel is the only panel type available.
Panel Container - The container can hold multiple panels to make a multi-pane figure. Panel Containers can be thought of as the whole figure object in matplotlib.
So containers have panels which have plots. It takes a second to get that straight in your mind, but it makes setting up complex figures very simple.
For this plot we need a single panel and we want to plot the satellite image, so we'll use the ImagePlot.
End of explanation
img = ImagePlot()
img.data = ds
img.field = 'Sectorized_CMI'
Explanation: Let's start out with the smallest element, the plot, and build up to the largest, the panel container.
First, we'll make the ImagePlot:
End of explanation
panel = MapPanel()
panel.plots = [img]
Explanation: Next, we'll make the panel that our image will go into, the MapPanel object and add the image to the plots on the panel.
End of explanation
pc = PanelContainer()
pc.panels = [panel]
Explanation: Finally, we make the PanelContainer and add the panel to its container. Remember that since we can have multiple plots on a panel and multiple panels on a plot, we use lists. In this case is just happens to be a list of length 1.
End of explanation
pc.show()
Explanation: Unlike working with matplotlib directly in the notebooks, this figure hasn't actually been rendered yet. Calling the show method of the panel container builds up everything, renders, and shows it to us.
End of explanation
# Import for the bonus exercise
from metpy.plots import add_timestamp
# Make the image plot
# YOUR CODE GOES HERE
# Make the map panel and add the image to it
# YOUR CODE GOES HERE
# Make the panel container and add the panel to it
# YOUR CODE GOES HERE
# Show the plot
# YOUR CODE GOES HERE
Explanation: Exercise
Look at the documentation for the ImagePlot here and figure out how to set the colormap of the image. For this image, let's go with the WVCIMSS_r colormap as this is a mid-level water vapor image. Set the range for the colormap to 195-265 K.
BONUS: Use the MetPy add_timestamp method from metpy.plots to add a timestamp to the plot. You can get the axes object to plot on from the ImagePlot. The call will look something like img.ax. This needs to happen after the panels have been added to the PanelContainer.
DAILY DOUBLE: Using the start_date_time attribute on the dataset ds, change the call to add_timestamp to use that date and time and the pretext to say GOES 16 Channel X.
End of explanation
# %load solutions/sat_map.py
Explanation: Solution
End of explanation |
1,816 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
xarray Python library is great for analysing multi-dimensional arrays of data with labelled dimensions, which is a common situation in geosciences.
According to the docs, xarray has two core data structures. Both are fundamentally N-dimensional
Step1: The following examples will be done for ERA-Interim reanalysis data of 2-m temperature and mean sea level pressure. As we will see, these are global daily data for 2014-2015.
Step2: Among other things, xarray prints out metadata in a much more readable format than, say, netCDF4.
Extracting variables
Since a Dataset is a dict-like object, variables can be accessed by keys.
Step3: But often it's even more convenient to access them as attributes of a Dataset.
Step4: Key properties of a DataArray
Step5: etc...
Indexing
1. positional and by integer label, like numpy
python
t2m[456,
Step6: python
t2m.sel(time=slice(time_start, time_end))
Nearest neighbour lookups
The following line would not work, because neither longitude coordinate array contains a value of 10, nor latitude contains 20.
python
t2m.sel(longitude=10, latitude=20) # Results in KeyError
However, we can use a built-in nearest-neighbour lookup method to find an element closest to the given coordinate values.
python
t2m.sel(longitude=10, latitude=15, method='nearest')
Saving data to netCDF
Let's extract a subset of the original data
Step7: Saving data to a netCDF file cannot get any easier
Step8: Convert from pandas.DataFrame
Step9: GroupBy operations and resampling
xarray data structures allow us to perform resampling really easily
Step10: Being pandas's sibling, xarray supports groupby methods. For example, in the following line of code we do averaging of temperature by seasons (just 1 line of code!).
Step11: Note how the time dimension was transformed into a 'season' dimension with the appropriate labels.
Plots - plots - plots
Step12: Basics
You can use OO approach
Step13: Or create a figure and axis first, and then pass that as an argument to a plotting fucntions
Step14: Now to something more interesting...
First, let's wrap the previous example into a small function
Step15: Next, perform a monthly averaging on the original global data
Step16: And finally, plot the result.
Step17: It's always good to close the file you are reading data from.
Step18: References
This notebook was inspired by | Python Code:
import numpy as np
import xarray as xr
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: xarray Python library is great for analysing multi-dimensional arrays of data with labelled dimensions, which is a common situation in geosciences.
According to the docs, xarray has two core data structures. Both are fundamentally N-dimensional:
DataArray is our implementation of a labeled, N-dimensional array. It is an N-D generalization of a pandas.Series.
Dataset is a multi-dimensional, in-memory array database. It serves a similar purpose in xarray to the pandas.DataFrame.
Essentially, xarray adds dimension names and coordinate indexes to numpy.ndarray. This significantly simplifies operations such as indexing, subsetting, broadcasting and even plotting.
Let's have a look.
First, we import some necessary modules, including of course xarray:
End of explanation
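# (Added sketch, not from the original notebook.) A DataArray can be built directly
# from a plain numpy array by naming its dimensions and attaching coordinate labels:
demo = xr.DataArray(np.random.rand(3, 4),
                    dims=('time', 'longitude'),
                    coords={'time': range(3),
                            'longitude': np.linspace(0., 90., 4)},
                    name='demo_field')
demo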
ds = xr.open_dataset('../data/tsurf_slp.nc')
ds
Explanation: The following examples will be done for ERA-Interim reanalysis data of 2-m temperature and mean sea level pressure. As we will see, these are global daily data for 2014-2015.
End of explanation
ds['t2m']
Explanation: Among other things, xarray prints out metadata in a much more readable format than, say, netCDF4.
Extracting variables
Since a Dataset is a dict-like object, variables can be accessed by keys.
End of explanation
t2m = ds.t2m
Explanation: But often it's even more convenient to access them as attributes of a Dataset.
End of explanation
t2m.shape
t2m.ndim
Explanation: Key properties of a DataArray
End of explanation
import datetime
time_start = datetime.datetime.strptime('2014-02-03', '%Y-%m-%d')
time_end = datetime.datetime.strptime('2014-02-05', '%Y-%m-%d')
print('{}\n{}'.format(time_start, time_end))
Explanation: etc...
Indexing
1. positional and by integer label, like numpy
python
t2m[456, :, 123]
2. positional and by coordinate label, like pandas
python
t2m.loc[dict(longitude=2.25)]
3. by dimension name and integer label
python
t2m[:2, :, 0]
python
t2m.isel(longitude=0, time=slice(None, 2))
4. by dimension name and coordinate label
End of explanation
new_data = t2m.sel(longitude=slice(-5, 10), latitude=slice(55, 44))
new_data.shape
Explanation: python
t2m.sel(time=slice(time_start, time_end))
Nearest neighbour lookups
The following line would not work, because neither longitude coordinate array contains a value of 10, nor latitude contains 20.
python
t2m.sel(longitude=10, latitude=20) # Results in KeyError
However, we can use a built-in nearest-neighbour lookup method to find an element closest to the given coordinate values.
python
t2m.sel(longitude=10, latitude=15, method='nearest')
Saving data to netCDF
Let's extract a subset of the original data:
End of explanation
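# (Added illustration.) The nearest-neighbour lookup described above, with the matched
# grid coordinates inspected explicitly (exact values depend on this particular file):
nearest_point = t2m.sel(longitude=10, latitude=15, method='nearest')
nearest_point.longitude.values, nearest_point.latitude.values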
new_data.to_dataframe().head()
new_data.to_series().head()
Explanation: Saving data to a netCDF file cannot get any easier:
new_data.to_dataset('test.nc')
Converting to and from other objects
End of explanation
import pandas as pd
import string
df = pd.DataFrame(np.random.randn(365, 5),
index=pd.date_range(start='2014-1-1', periods=365),
columns=list(string.ascii_letters[:5]))
df.head()
df_ds = xr.Dataset.from_dataframe(df)
df_ds
Explanation: Convert from pandas.DataFrame
End of explanation
ten_daily = t2m.resample('W', dim='time', how='mean')
Explanation: GroupBy operations and resampling
xarray data structures allow us to perform resampling really easily:
End of explanation
seas = t2m.groupby('time.season').mean('time')
print(seas.season)
Explanation: Being pandas's sibling, xarray supports groupby methods. For example, in the following line of code we do averaging of temperature by seasons (just 1 line of code!).
End of explanation
import cartopy.crs as ccrs
from calendar import month_name
Explanation: Note how the time dimension was transformed into a 'season' dimension with the appropriate labels.
Plots - plots - plots
End of explanation
t2m.isel(time=10).plot.contourf()
Explanation: Basics
You can use OO approach:
End of explanation
fig, ax = plt.subplots(figsize=(10, 3),
subplot_kw=dict(projection=ccrs.PlateCarree()))
t2m.isel(time=10).plot.contourf(ax=ax)
ax.coastlines()
Explanation: Or create a figure and axis first, and then pass that as an argument to a plotting function
End of explanation
def plot_field(da, ax=None, title=None):
if ax is None:
fig, ax = plt.subplots(figsize=(10, (da.shape[0] / da.shape[1]) * 10),
subplot_kw=dict(projection=ccrs.PlateCarree()))
da.plot.contourf(ax=ax)
ax.coastlines()
if title is not None:
ax.set_title(title)
Explanation: Now to something more interesting...
First, let's wrap the previous example into a small function:
End of explanation
monthly_t2m = t2m.groupby('time.month').mean('time')
Explanation: Next, perform a monthly averaging on the original global data:
End of explanation
fig, axs = plt.subplots(nrows=4, ncols=3, figsize=(14, 10),
subplot_kw=dict(projection=ccrs.PlateCarree()))
fig.suptitle('Monthly averages of 2-m temperature')
axes = axs.flatten()
for month in range(1, 13):
ax = axes[month-1]
plot_field(monthly_t2m.sel(month=month),
ax=ax, title=month_name[month])
Explanation: And finally, plot the result.
End of explanation
ds.close()
Explanation: It's always good to close the file you are reading data from.
End of explanation
HTML(html)
Explanation: References
This notebook was inspired by:
* xarray documentation and examples
* Nicolas Fauchereau's notebook
End of explanation |
1,817 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Artifact Correction with ICA
ICA finds directions in the feature space
corresponding to projections with high non-Gaussianity. We thus obtain
a decomposition into independent components, and the artifact's contribution
is localized in only a small number of components.
These components have to be correctly identified and removed.
If EOG or ECG recordings are available, they can be used in ICA to
automatically select the corresponding artifact components from the
decomposition. To do so, you have to first build an
Step1: Before applying artifact correction please learn about your actual artifacts
by reading tut_artifacts_detect.
<div class="alert alert-danger"><h4>Warning</h4><p>ICA is sensitive to low-frequency drifts and therefore
requires the data to be high-pass filtered prior to fitting.
Typically, a cutoff frequency of 1 Hz is recommended. Note that
FIR filters prior to MNE 0.15 used the ``'firwin2'`` design
method, which generally produces rather shallow filters that
might not work for ICA processing. Therefore, it is recommended
to use IIR filters for MNE up to 0.14. In MNE 0.15, FIR filters
can be designed with the ``'firwin'`` method, which generally
produces much steeper filters. This method will be the default
FIR design method in MNE 0.16. In MNE 0.15, you need to
explicitly set ``fir_design='firwin'`` to use this method. This
is the recommended filter method for ICA preprocessing.</p></div>
Fit ICA
ICA parameters
Step2: Define the ICA object instance
Step3: we avoid fitting ICA on crazy environmental artifacts that would
dominate the variance and decomposition
Step4: Plot ICA components
Step5: Component properties
Let's take a closer look at properties of first three independent components.
Step6: we can see that the data were filtered so the spectrum plot is not
very informative, let's change that
Step7: we can also take a look at multiple different components at once
Step8: Instead of opening individual figures with component properties, we can
also pass an instance of Raw or Epochs in inst arument to
ica.plot_components. This would allow us to open component properties
interactively by clicking on individual component topomaps. In the notebook
this woks only when running matplotlib in interactive mode (%matplotlib).
Step9: Advanced artifact detection
Let's use a more efficient way to find artefacts
Step10: We can take a look at the properties of that component, now using the
data epoched with respect to EOG events.
We will also use a little bit of smoothing along the trials axis in the
epochs image
Step11: That component is showing a prototypical average vertical EOG time course.
Pay attention to the labels, a customized read-out of the
mne.preprocessing.ICA.labels_
Step12: These labels were used by the plotters and are added automatically
by artifact detection functions. You can also manually edit them to annotate
components.
Now let's see how we would modify our signals if we removed this component
from the data.
Step13: Note that nothing is yet removed from the raw data. To remove the effects of
the rejected components,
Step14: Exercise
Step15: What if we don't have an EOG channel?
We could either
Step16: The idea behind corrmap is that artefact patterns are similar across subjects
and can thus be identified by correlating the different patterns resulting
from each solution with a template. The procedure is therefore
semi-automatic.
Step17: Remember, don't do this at home! Start by reading in a collection of ICA
solutions instead. Something like
Step18: We use our original ICA as reference.
Step19: Investigate our reference ICA
Step20: Which one is the bad EOG component?
Here we rely on our previous detection algorithm. You would need to decide
yourself if no automatic detection was available.
Step21: Indeed it looks like an EOG, also in the average time course.
We construct a list where our reference run is the first element. Then we
can detect similar components from the other runs (the other ICA objects)
using
Step22: Now we can run the CORRMAP algorithm.
Step23: Nice, we have found similar ICs from the other (simulated) runs!
In this way, you can detect a type of artifact semi-automatically for example
for all subjects in a study.
The detected template can also be retrieved as an array and stored; this
array can be used as an alternative template to | Python Code:
import numpy as np
import mne
from mne.datasets import sample
from mne.preprocessing import ICA
from mne.preprocessing import create_eog_epochs, create_ecg_epochs
# getting some data ready
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
# 1Hz high pass is often helpful for fitting ICA
raw.filter(1., 40., n_jobs=2, fir_design='firwin')
picks_meg = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
Explanation: Artifact Correction with ICA
ICA finds directions in the feature space
corresponding to projections with high non-Gaussianity. We thus obtain
a decomposition into independent components, and the artifact's contribution
is localized in only a small number of components.
These components have to be correctly identified and removed.
If EOG or ECG recordings are available, they can be used in ICA to
automatically select the corresponding artifact components from the
decomposition. To do so, you have to first build an :class:mne.Epochs object
around blink or heartbeat events.
ICA is implemented in MNE using the :class:mne.preprocessing.ICA class,
which we will review here.
End of explanation
n_components = 25 # if float, select n_components by explained variance of PCA
method = 'fastica' # for comparison with EEGLAB try "extended-infomax" here
decim = 3 # we need sufficient statistics, not all time points -> saves time
# we will also set state of the random number generator - ICA is a
# non-deterministic algorithm, but we want to have the same decomposition
# and the same order of components each time this tutorial is run
random_state = 23
Explanation: Before applying artifact correction please learn about your actual artifacts
by reading tut_artifacts_detect.
<div class="alert alert-danger"><h4>Warning</h4><p>ICA is sensitive to low-frequency drifts and therefore
requires the data to be high-pass filtered prior to fitting.
Typically, a cutoff frequency of 1 Hz is recommended. Note that
FIR filters prior to MNE 0.15 used the ``'firwin2'`` design
method, which generally produces rather shallow filters that
might not work for ICA processing. Therefore, it is recommended
to use IIR filters for MNE up to 0.14. In MNE 0.15, FIR filters
can be designed with the ``'firwin'`` method, which generally
produces much steeper filters. This method will be the default
FIR design method in MNE 0.16. In MNE 0.15, you need to
explicitly set ``fir_design='firwin'`` to use this method. This
is the recommended filter method for ICA preprocessing.</p></div>
Fit ICA
ICA parameters:
End of explanation
ica = ICA(n_components=n_components, method=method, random_state=random_state)
print(ica)
Explanation: Define the ICA object instance
End of explanation
reject = dict(mag=5e-12, grad=4000e-13)
ica.fit(raw, picks=picks_meg, decim=decim, reject=reject)
print(ica)
Explanation: we avoid fitting ICA on crazy environmental artifacts that would
dominate the variance and decomposition
End of explanation
ica.plot_components() # can you spot some potential bad guys?
Explanation: Plot ICA components
End of explanation
# first, component 0:
ica.plot_properties(raw, picks=0)
Explanation: Component properties
Let's take a closer look at properties of first three independent components.
End of explanation
ica.plot_properties(raw, picks=0, psd_args={'fmax': 35.})
Explanation: we can see that the data were filtered so the spectrum plot is not
very informative, let's change that:
End of explanation
ica.plot_properties(raw, picks=[1, 2], psd_args={'fmax': 35.})
Explanation: we can also take a look at multiple different components at once:
End of explanation
# uncomment the code below to test the interactive mode of plot_components:
# ica.plot_components(picks=range(10), inst=raw)
Explanation: Instead of opening individual figures with component properties, we can
also pass an instance of Raw or Epochs in the inst argument to
ica.plot_components. This would allow us to open component properties
interactively by clicking on individual component topomaps. In the notebook
this works only when running matplotlib in interactive mode (%matplotlib).
End of explanation
eog_average = create_eog_epochs(raw, reject=dict(mag=5e-12, grad=4000e-13),
picks=picks_meg).average()
eog_epochs = create_eog_epochs(raw, reject=reject) # get single EOG trials
eog_inds, scores = ica.find_bads_eog(eog_epochs) # find via correlation
ica.plot_scores(scores, exclude=eog_inds) # look at r scores of components
# we can see that only one component is highly correlated and that this
# component got detected by our correlation analysis (red).
ica.plot_sources(eog_average, exclude=eog_inds) # look at source time course
Explanation: Advanced artifact detection
Let's use a more efficient way to find artefacts
End of explanation
ica.plot_properties(eog_epochs, picks=eog_inds, psd_args={'fmax': 35.},
image_args={'sigma': 1.})
Explanation: We can take a look at the properties of that component, now using the
data epoched with respect to EOG events.
We will also use a little bit of smoothing along the trials axis in the
epochs image:
End of explanation
print(ica.labels_)
Explanation: That component is showing a prototypical average vertical EOG time course.
Pay attention to the labels, a customized read-out of the
mne.preprocessing.ICA.labels_:
End of explanation
ica.plot_overlay(eog_average, exclude=eog_inds, show=False)
# red -> before, black -> after. Yes! We remove quite a lot!
# to definitely register this component as a bad one to be removed
# there is the ``ica.exclude`` attribute, a simple Python list
ica.exclude.extend(eog_inds)
# from now on the ICA will reject this component even if no exclude
# parameter is passed, and this information will be stored to disk
# on saving
# uncomment this for reading and writing
# ica.save('my-ica.fif')
# ica = read_ica('my-ica.fif')
Explanation: These labels were used by the plotters and are added automatically
by artifact detection functions. You can also manually edit them to annotate
components.
Now let's see how we would modify our signals if we removed this component
from the data.
End of explanation
raw_copy = raw.copy().crop(0, 10)
ica.apply(raw_copy)
raw_copy.plot() # check the result
Explanation: Note that nothing is yet removed from the raw data. To remove the effects of
the rejected components,
:meth:the apply method <mne.preprocessing.ICA.apply> must be called.
Here we apply it on the copy of the first ten seconds, so that the rest of
this tutorial still works as intended.
End of explanation
ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5)
ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps')
ica.plot_properties(ecg_epochs, picks=ecg_inds, psd_args={'fmax': 35.})
Explanation: Exercise: find and remove ECG artifacts using ICA!
End of explanation
from mne.preprocessing.ica import corrmap # noqa
Explanation: What if we don't have an EOG channel?
We could either:
make a bipolar reference from frontal EEG sensors and use as virtual EOG
channel. This can be tricky though as you can only hope that the frontal
EEG channels only reflect EOG and not brain dynamics in the prefrontal
cortex.
go for a semi-automated approach, using template matching.
In MNE-Python option 2 is easily achievable and it might give better results,
so let's have a look at it.
End of explanation
# We'll start by simulating a group of subjects or runs from a subject
start, stop = [0, raw.times[-1]]
intervals = np.linspace(start, stop, 4, dtype=np.float)
icas_from_other_data = list()
raw.pick_types(meg=True, eeg=False) # take only MEG channels
for ii, start in enumerate(intervals):
if ii + 1 < len(intervals):
stop = intervals[ii + 1]
print('fitting ICA from {0} to {1} seconds'.format(start, stop))
this_ica = ICA(n_components=n_components, method=method).fit(
raw, start=start, stop=stop, reject=reject)
icas_from_other_data.append(this_ica)
Explanation: The idea behind corrmap is that artefact patterns are similar across subjects
and can thus be identified by correlating the different patterns resulting
from each solution with a template. The procedure is therefore
semi-automatic. :func:mne.preprocessing.corrmap hence takes a list of
ICA solutions and a template, that can be an index or an array.
As we don't have different subjects or runs available today, here we will
simulate ICA solutions from different subjects by fitting ICA models to
different parts of the same recording. Then we will use one of the components
from our original ICA as a template in order to detect sufficiently similar
components in the simulated ICAs.
The following block of code simulates having ICA solutions from different
runs/subjects so it should not be used in real analysis - use independent
data sets instead.
End of explanation
print(icas_from_other_data)
Explanation: Remember, don't do this at home! Start by reading in a collection of ICA
solutions instead. Something like:
icas = [mne.preprocessing.read_ica(fname) for fname in ica_fnames]
End of explanation
reference_ica = ica
Explanation: We use our original ICA as reference.
End of explanation
reference_ica.plot_components()
Explanation: Investigate our reference ICA:
End of explanation
reference_ica.plot_sources(eog_average, exclude=eog_inds)
Explanation: Which one is the bad EOG component?
Here we rely on our previous detection algorithm. You would need to decide
yourself if no automatic detection was available.
End of explanation
icas = [reference_ica] + icas_from_other_data
template = (0, eog_inds[0])
Explanation: Indeed it looks like an EOG, also in the average time course.
We construct a list where our reference run is the first element. Then we
can detect similar components from the other runs (the other ICA objects)
using :func:mne.preprocessing.corrmap. So our template must be a tuple like
(reference_run_index, component_index):
End of explanation
fig_template, fig_detected = corrmap(icas, template=template, label="blinks",
show=True, threshold=.8, ch_type='mag')
Explanation: Now we can run the CORRMAP algorithm.
End of explanation
eog_component = reference_ica.get_components()[:, eog_inds[0]]
Explanation: Nice, we have found similar ICs from the other (simulated) runs!
In this way, you can detect a type of artifact semi-automatically for example
for all subjects in a study.
The detected template can also be retrieved as an array and stored; this
array can be used as an alternative template to
:func:mne.preprocessing.corrmap.
End of explanation |
1,818 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id="Top"></a>
___ ___ ___
_____ /\__\ /\ \ /\__\
/
Step1: <a id="TF"></a>
Build prediction model with tflearn and tensorflow
<a id="CNN"></a>
Multilayer convolutional neural network
Code based on this tflearn example, with CNN architecture modeled after TensorFlow's tutorial Deep MNIST for experts.
Step2: <a id="Digitre"></a>
Classify digit examples from Digitre
<a id="Prep"></a>
Example step-by-step preprocessing
Take example base64-encoded handwritten digit images (generated from html canvas element) and preprocess step-by-step to a format ready for classification model. Compare with MNIST example.
Step3: <a id="Class"></a>
Classify preprocessed images | Python Code:
# Standard library
import datetime
import time
# Third party libraries
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Digitre code
import digitre_preprocessing as prep
import digitre_model
import digitre_classifier
# Reload digitre code in the same session (during development)
import imp
imp.reload(prep)
imp.reload(digitre_model)
imp.reload(digitre_classifier)
# Latest update
str(datetime.datetime.now())
Explanation: <a id="Top"></a>
___ ___ ___
_____ /\__\ /\ \ /\__\
/::\ \ ___ /:/ _/_ ___ ___ /::\ \ /:/ _/_
/:/\:\ \ /\__\ /:/ /\ \ /\__\ /\__\ /:/\:\__\ /:/ /\__\
/:/ \:\__\ /:/__/ /:/ /::\ \ /:/__/ /:/ / /:/ /:/ / /:/ /:/ _/_
/:/__/ \:|__| /::\ \ /:/__\/\:\__\ /::\ \ /:/__/ /:/_/:/__/___ /:/_/:/ /\__\
\:\ \ /:/ / \/\:\ \__ \:\ \ /:/ / \/\:\ \__ /::\ \ \:\/:::::/ / \:\/:/ /:/ /
\:\ /:/ / ~~\:\/\__\ \:\ /:/ / ~~\:\/\__\ /:/\:\ \ \::/~~/~~~~ \::/_/:/ /
\:\/:/ / \::/ / \:\/:/ / \::/ / \/__\:\ \ \:\~~\ \:\/:/ /
\::/ / /:/ / \::/ / /:/ / \:\__\ \:\__\ \::/ /
\/__/ \/__/ \/__/ \/__/ \/__/ \/__/ \/__/
December 2016
Table of contents
Build prediction model with tflearn and tensorflow
Multilayer convolutional neural network
Serialize trained CNN model for serving
Classify digit examples from Digitre
Example step-by-step preprocessing
Classify preprocessed images
End of explanation
# Data loading and preprocessing
X, Y, testX, testY = digitre_model.load_data()
#X = X.reshape([-1, 28, 28, 1])
#testX = testX.reshape([-1, 28, 28, 1])
# Plot functions
def plot_digit(digit, show=True, file_name=None):
plt.imshow(digit, cmap = 'Greys', interpolation = 'none')
plt.tick_params(axis='both', which='both', bottom='off', top='off',
labelbottom='off', right='off', left='off', labelleft='off')
if file_name is not None:
plt.savefig(file_name)
if show:
plt.show()
def plot_digits(digits, rows, columns):
for i, digit in enumerate(digits):
plt.subplot(rows, columns, i+1)
plot_digit(digit, show=False)
plt.show()
# Plot a few training examples
X_eg = X[10:20,:,:,:]
X_eg = [digit.reshape(28, 28) for digit in X_eg]
plot_digits(X_eg, 2, 5)
# Visualization
# Used "tensorboard_verbose=0", meaning Loss & Metric
# Run "$ tensorboard --logdir='/tmp/tflearn_logs'"
### Fit model using all data (merge training and test data)
# Done from command line:
# $ python digitre_model.py -f 'cnn_alldata.tflearn' -a -e 20
# Training Step: 20320 | total loss: 0.642990.9401 | val_loss: 0.052
# | Adam | epoch: 020 | loss: 0.64299 - acc: 0.9401 | val_loss: 0.05263 - val_acc: 0.9866 -- iter: 65000/65000
# --
# -----
# Completed training in
# 3.5 hr.
# -----
# ... Saving trained model as " cnn_alldata.tflearn "
Explanation: <a id="TF"></a>
Build prediction model with tflearn and tensorflow
<a id="CNN"></a>
Multilayer convolutional neural network
Code based on this tflearn example, with CNN architecture modeled after TensorFlow's tutorial Deep MNIST for experts.
End of explanation
with open('b64_2_preprocessing.txt', 'r') as f:
eg_2 = f.read()
# Preview base64 encoded image
print(eg_2[:500])
eg_2 = prep.b64_str_to_np(eg_2)
eg_2.shape
# Plot the example handwritten digit
plot_digit(eg_2, file_name='b64_2_preprocessing_1.png')
eg_2 = prep.crop_img(eg_2)
plot_digit(eg_2, file_name='b64_2_preprocessing_2.png')
eg_2 = prep.center_img(eg_2)
plot_digit(eg_2, file_name='b64_2_preprocessing_3.png')
eg_2 = prep.resize_img(eg_2)
eg_2.shape
plot_digit(eg_2, file_name='b64_2_preprocessing_4.png')
eg_2 = prep.min_max_scaler(eg_2, final_range=(0, 1))
plot_digit(eg_2)
# Plot processed Digitre image together with MNIST example
plot_digits([eg_2, X_eg[6]], 1, 2)
# Save MNIST example too
plot_digit(X_eg[6], file_name='MNIST_2.png')
eg_2.max()
eg_2.shape
Explanation: <a id="Digitre"></a>
Classify digit examples from Digitre
<a id="Prep"></a>
Example step-by-step preprocessing
Take example base64-encoded handwritten digit images (generated from html canvas element) and preprocess step-by-step to a format ready for classification model. Compare with MNIST example.
End of explanation
# Instantiate Classifier (loads the tflearn pre-trained model)
model = digitre_classifier.Classifier(file_name='cnn.tflearn')
# Classify same example digit
with open('b64_2_preprocessing.txt', 'r') as f:
eg_2 = f.read()
eg_2 = model.preprocess(eg_2)
pred = np.around(model.classify(eg_2)[0], 2)
pred
from altair import Chart, Data, X, Y, Axis, Scale
# Plot prediction
def prob_distribution_plot(pred):
prediction = pred.reshape([10])
data = Data(values=[{'x': i, 'y': value} for i, value in enumerate(pred)])
plot = Chart(data).mark_bar(color='#f6755e').encode(
x=X('x:O', axis=Axis(title='Digit', labelAngle=0.5,
tickLabelFontSize=15, titleFontSize=15)),
y=Y('y:Q', axis=Axis(format='%', title='Probability',
tickLabelFontSize=15, titleFontSize=15),
scale=Scale(domain=(0, 1))))
return plot
prob_distribution_plot(pred)
from altair import Chart, Data, X, Y, Axis
# Plot prediction
def prob_distribution_plot(pred):
prediction = pred.reshape([10])
data = Data(values=[{'x': i, 'y': value} for i, value in enumerate(prediction)])
plot = Chart(data).mark_bar(color='#f6755e').encode(
x=X('x:O', axis=Axis(title='Digit', labelAngle=0.5, tickLabelFontSize=15, titleFontSize=15)),
y=Y('y:Q', axis=Axis(format='%', title='Probability', tickLabelFontSize=15, titleFontSize=15)))
return plot.to_json(indent=2)
prob_distribution_plot(pred)
Explanation: <a id="Class"></a>
Classify preprocessed images
End of explanation |
1,819 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Requisitions and Documents
This example shows the Ovation Service Lab (OSL) APIs for sample accessioning and report download. We'll create a simple Requisition with one sample. Next, we'll upload supplemental documents for the requisition (e.g. a face sheet, medication list, etc.). Finally, we'll download the complete report(s) for the requisition.
Setup
Step1: Connection
s is a Session object representing a connection to the Ovation API
Step2: Many OSL APIs require the Organization id.
Step3: Creating a Requisition
Create a container for the sample (in this case, a Tube). The tube identifier and barcode will be generated by Ovation. You can supply them using the identifier and barcode attributes of the container if needed
Step4: We need to know which project this Requisition belongs to. If you already know the Project Id, you can skip this step. If you need to look up the project by name, use a query
Step5: Create a Requisition and the Sample
Step6: Uploading documents to the Requisition
Once a Requisition is created, you can upload Documents to be stored securely with the Requisition. You may use any document tag(s) to which you have write permission. The "Supplemental Documents" label is used specifically for documents associated with the requisition form and supporting materials.
The simplest way to send the document data is as Base64-encoded data within the POST
Step7: Downloading complete report(s)
Once a Requisition has been processed, you can retrieve the completed clinical report(s) from the Requisition's "Complete Reports" label. | Python Code:
import uuid
from pprint import pprint
from datetime import date
from ovation.session import connect
Explanation: Requisitions and Documents
This example shows the Ovation Service Lab (OSL) APIs for sample accessioning and report download. We'll create a simple Requisition with one sample. Next, we'll upload supplemental documents for the requisition (e.g. a face sheet, medication list, etc.). Finally, we'll download the complete report(s) for the requisition.
Setup
End of explanation
s = connect(input("Email: "), api='https://services-staging.ovation.io')
Explanation: Connection
s is a Session object representing a connection to the Ovation API
End of explanation
organization_id = input('Organization id: ')
Explanation: Many OSL APIs require the Organization id.
End of explanation
tube = s.post(s.path('container'),
data={'container': {'type': 'Tube'}},
params={'organization_id': organization_id})
Explanation: Creating a Requisition
Create a container for the sample (in this case, a Tube). The tube identifier and barcode will be generated by Ovation. You can supply them using the identifier and barcode attributes of the container if needed:
End of explanation
project_name = input("Project name: ")
project = s.get(s.path('project'),
params={'q': project_name, # Find project by name
'organization_id': organization_id}).projects[0]
pprint(project)
Explanation: We need to know which project this Requisition belongs to. If you already know the Project Id, you can skip this step. If you need to look up the project by name, use a query:
End of explanation
# See http://lab-services.ovation.io/api/docs#!/requisitions/createRequisition for additional information
# that can be transmitted with the Requisition including patient demographics, diagnosis, medications,
# requested test(s)/panel(s) and billing information
requisition_data = {"identifier": str(uuid.uuid4()), # Any unique (within organization) identifier
"template": "RNA Requisition", # The requisition template, for the selected project
"custom_attributes": {'my-attribute': 1.0}, # Optional; Requisition custom attributes
"samples": [
{"identifier": str(uuid.uuid4()), # Any unique (within organization) identifier
"date_received": date.today().isoformat(),
"custom_attributes": {'my-sample-attribute': 1.0}, # Optional; Sample custom attributes
"sample_states": [
{"container_id": tube.id,
"position": "A01"}
]
}
]
}
req = s.post(s.path('requisition'),
data={'requisition': requisition_data},
params={'organization_id': organization_id,
"project_id": project.id})
pprint(req)
Explanation: Create a Requisition and the Sample:
End of explanation
local_file_path = "example.pdf"
import base64
with open(local_file_path, "rb") as document_file:
document_data = base64.b64encode(document_file.read())
doc_body = {
"document": {
"name": "file1.txt", # Document name
"tags": [
{
"name": "Supplemental Documents" # Special tag for supporting materials
}
],
"file_data": document_data
}
}
doc = s.post(s.path('documents'),
data=doc_body,
params={"requisition_id": req.requisition.id} # Supply the Id of the Requisition that will receive the document
)
Explanation: Uploading documents to the Requisition
Once a Requisition is created, you can upload Documents to be stored securely with the Requisition. You may use any document tag(s) to which you have write permission. The "Supplemental Documents" label is used specifically for documents associated with the requisition form and supporting materials.
The simplest way to send the document data is as Base64-encoded data within the POST:
End of explanation
report_documents = s.get(s.path('document'),
params={"requisition_id": req.requisition.id,
"label": "Complete Reports"})
Explanation: Downloading complete report(s)
Once a Requisition has been processed, you can retrieve the completed clinical report(s) from the Requisition's "Complete Reports" label.
End of explanation |
1,820 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License")
Step1: Human evaluation of visual metrics
This colab explores correlations between the mucped22 dataset and various objective visual metrics.
Unlike many other datasets, these evaluations
Step2: First download the dataset containing all evaluations.
Step3: Then decorate it with whether the crop settings were actually compatible with the image size (a few, ~15, evaluations have this bug), and the worst ELO of both distortions.
Finally filter out all evaluations where the evaluator didn't seem to do a good job (didn't flip between distortions more than 2 times, didn't spend more than 3 seconds on the evaluation).
Step4: To allow a rank correlation, like Spearman, combine the metrics of the worse distortion (lesser), and the better distortion (greater), into one dataframe. To also allow comparing correlation in different regions of quality, sort by ELO score.
Step5: Then compute the correlation matrix for these, using Spearman's rank correlation coeffient.
Step6: Plot the correlation in a rolling window of 5000 evaluations with a step of 1000 evaluations for each metric, to see how they behave across a range of ELO scores. | Python Code:
# Copyright 2022 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License")
End of explanation
import pandas as pd
import functools
import bokeh.io
!pip install pandas_bokeh
import pandas_bokeh
import requests
import json
import numpy as np
bokeh.io.output_notebook()
Explanation: Human evaluation of visual metrics
This colab explores correlations between the mucped22 dataset and various objective visual metrics.
Unlike many other datasets, these evaluations:
* Are made using only compression distortions (since the distortions were created using AVIF, JPEG, and JXL encoders), which will benefit metrics designed for compression artifacts.
* Are made by human evaluators experienced with image quality comparisons, which will benefit smaller distortions, relatively speaking, since unexperienced evaluators often don't notice them.
* Are made using two-alternative-forced-choice with a reference image instead of MOS, which will reduce the noise, since evaluators don't need to calibrate their MOS scores.
For each image, each distortion (method and quality setting) has then been ranked using ELO, to provide an expected human-rated ranking for each distortion.
This ranking will allow a comparison of the various metrics across different levels of distortion, e.g. near just-noticeable-differences vs far from just-noticeable-differences.
End of explanation
!wget --quiet --no-check-certificate https://storage.googleapis.com/gresearch/mucped22/evaluations.json
with open('evaluations.json') as f:
data = pd.DataFrame(json.load(f))
data
Explanation: First download the dataset containing all evaluations.
End of explanation
data['complete_crop'] = data.apply(lambda row: row.crop[0] + row.crop[2] <= row.image_dims[0] and row.crop[1] + row.crop[3] <= row.image_dims[1], axis=1)
data['worst_elo'] = data.apply(lambda row: row.greater_elo if row.greater_elo > row.lesser_elo else row.lesser_elo, axis=1)
data = data[(data.rater_flips > 2) & (data.rater_time_ms > 3000) & (data.complete_crop == True)]
data
def strip(ary, n):
def stripfun(sum, el):
sum[el] = el[n:]
return sum
return functools.reduce(stripfun, ary, {})
greater_metric_cols = list(filter(lambda el: el.startswith('greater_') and not el.endswith('_file'), list(data.columns)))
lesser_metric_cols = list(filter(lambda el: el.startswith('lesser_') and not el.endswith('_file'), list(data.columns)))
greater_metrics = data[greater_metric_cols]
greater_metrics = greater_metrics.rename(columns=strip(greater_metric_cols, 8))
lesser_metrics = data[lesser_metric_cols]
lesser_metrics = lesser_metrics.rename(columns=strip(lesser_metric_cols, 7))
Explanation: Then decorate it with whether the crop settings were actually compatible with the image size (a few, ~15, evaluations have this bug), and the worst ELO of both distortions.
Finally filter out all evaluations where the evaluator didn't seem to do a good job (didn't flip between distortions more than 2 times, didn't spend more than 3 seconds on the evaluation).
End of explanation
metrics = pd.concat([greater_metrics, lesser_metrics])
metrics = metrics.sort_values('elo').reset_index(drop=True)
metrics
Explanation: To allow a rank correlation, like Spearman, combine the metrics of the worse distortion (lesser), and the better distortion (greater), into one dataframe. To also allow comparing correlation in different regions of quality, sort by ELO score.
End of explanation
corrs = metrics.corr(method='spearman')
corrs
metric_cols = list(map(lambda name: name[7:], lesser_metric_cols))
metric_cols.remove('elo')
def rollingcorr(df, method, window_size, step_size):
res = []
for start in range(0, df.shape[0] - window_size, step_size):
window = df[start:start+window_size]
row = [window.iloc[-1]['elo']]
for metric_name in metric_cols:
row.append(np.abs(window[metric_name].corr(window['elo'], method=method)))
res.append(row)
return pd.DataFrame(res, dtype=np.float, columns=['elo'] + list(map(lambda name: f"{name}", metric_cols)))
Explanation: Then compute the correlation matrix for these, using Spearman's rank correlation coefficient.
End of explanation
rollingcorr(metrics, 'spearman', 5000, 1000).plot_bokeh(x='elo', figsize=(1400, 400))
Explanation: Plot the correlation in a rolling window of 5000 evaluations with a step of 1000 evaluations for each metric, to see how they behave across a range of ELO scores.
End of explanation |
1,821 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: P1. Encode and Decode Strings
Design an algorithm to encode a list of strings to a string. The encoded string is then sent over the network and is decoded back to the original list of strings.
Machine 1 (sender) has the function
Step3: P2. Maximal Square Given a 2D binary matrix filled with 0's and 1's, find the largest square containing only 1's and return its area.
```
Example
Step5: P3.
We have two integer sequences A and B of the same non-zero length.
We are allowed to swap elements A[i] and B[i]. Note that both elements are in the same index position in their respective sequences.
At the end of some number of swaps, A and B are both strictly increasing. (A sequence is strictly increasing if and only if A[0] < A[1] < A[2] < ... < A[A.length - 1].)
Given A and B, return the minimum number of swaps to make both sequences strictly increasing. It is guaranteed that the given input always makes it possible.
Example | Python Code:
# NOTE: We assume <EOS> is a special token that shows that the string has ended
# and the next string has started.
class Codec:
def encode(self, strs):
Encodes a list of strings to a single string.
:type strs: List[str]
:rtype: str
if strs == []:
return None
return '<EOS>'.join(strs)
def decode(self, s):
Decodes a single string to a list of strings.
:type s: str
:rtype: List[str]
if s == None:
return []
elif s == '':
return [""]
return s.split("<EOS>")
# Your Codec object will be instantiated and called as such:
strs = ["Hello", "World"]
codec = Codec()
codec.decode(codec.encode(strs))
Explanation: P1. Encode and Decode Strings
Design an algorithm to encode a list of strings to a string. The encoded string is then sent over the network and is decoded back to the original list of strings.
Machine 1 (sender) has the function:
string encode(vector<string> strs) {
// ... your code
return encoded_string;
}
Machine 2 (receiver) has the function:
vector<string> decode(string s) {
//... your code
return strs;
}
So Machine 1 does:
string encoded_string = encode(strs);
and Machine 2 does:
vector<string> strs2 = decode(encoded_string);
strs2 in Machine 2 should be the same as strs in Machine 1.
Implement the encode and decode methods.
Note:
The string may contain any possible characters out of 256 valid ascii characters. Your algorithm should be generalized enough to work on any possible characters.
Do not use class member/global/static variables to store states. Your encode and decode algorithms should be stateless.
Do not rely on any library method such as eval or serialize methods. You should implement your own encode/decode algorithm.
End of explanation
def find_largest_square(matrix) -> int:
if matrix == []:
return 0
size = (len(matrix), len(matrix[0]))
max_sq_size = size[0] if size[0] < size[1] else size[1]
def get_square_size(pos):
sq_size = 1
while sq_size <= max_sq_size and (pos[0] + sq_size) <= size[0] and (pos[1] + sq_size) <= size[1]:
for i in range(pos[0], pos[0] + sq_size):
for j in range(pos[1], pos[1] + sq_size):
if matrix[i][j] == "0":
return (sq_size - 1) * (sq_size - 1)
sq_size += 1
return (sq_size - 1) * (sq_size - 1)
max_size = 0
for i in range(0, size[0]):
for j in range(0, size[1]):
if matrix[i][j] == "1":
curr_size = get_square_size((i, j))
if curr_size > max_size:
max_size = curr_size
return max_size
print(find_largest_square([["1","0","1","0","0"],["1","0","1","1","1"],["1","1","1","1","1"],["1","0","0","1","0"]]))
print(find_largest_square([]))
print(find_largest_square([["1","0"],["1","0"]]))
print(find_largest_square([["1"]]))
Explanation: P2. Maximal Square Given a 2D binary matrix filled with 0's and 1's, find the largest square containing only 1's and return its area.
```
Example:
Input:
1 0 1 0 0
1 0 1 1 1
1 1 1 1 1
1 0 0 1 0
Output: 4
```
End of explanation
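The brute-force search above re-examines overlapping squares many times. As a rough alternative sketch (not part of the original solution), the same answer can be computed in O(rows × cols) with dynamic programming, where dp[i][j] is the side length of the largest all-ones square whose bottom-right corner is (i, j):
```
def find_largest_square_dp(matrix) -> int:
    if not matrix or not matrix[0]:
        return 0
    rows, cols = len(matrix), len(matrix[0])
    dp = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(rows):
        for j in range(cols):
            if matrix[i][j] == "1":
                if i == 0 or j == 0:
                    dp[i][j] = 1
                else:
                    dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
                best = max(best, dp[i][j])
    return best * best

print(find_largest_square_dp([["1", "0", "1", "0", "0"], ["1", "0", "1", "1", "1"],
                              ["1", "1", "1", "1", "1"], ["1", "0", "0", "1", "0"]]))  # 4
```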
class Solution(object):
def minSwap(self, A, B):
        """
        :type A: List[int]
        :type B: List[int]
        :rtype: int
        """
        # Dynamic programming: a greedy "swap whenever the pair is not increasing"
        # does not give the minimum, so track two states per position instead.
        # keep = min swaps with element i left in place, swap = min swaps with element i swapped.
        keep, swap = 0, 1
        for i in range(1, len(A)):
            new_keep = new_swap = float('inf')
            if A[i - 1] < A[i] and B[i - 1] < B[i]:
                new_keep = min(new_keep, keep)       # keep both i-1 and i
                new_swap = min(new_swap, swap + 1)   # swap both i-1 and i
            if A[i - 1] < B[i] and B[i - 1] < A[i]:
                new_keep = min(new_keep, swap)       # swap i-1 only
                new_swap = min(new_swap, keep + 1)   # swap i only
            keep, swap = new_keep, new_swap
        return min(keep, swap)

print(Solution().minSwap([1, 3, 5, 4], [1, 2, 3, 7]))        # expected 1 (swap index 3)
print(Solution().minSwap([2, 3, 2, 5, 6], [0, 1, 4, 4, 5]))  # expected 1 (swap index 2)
Explanation: P3.
We have two integer sequences A and B of the same non-zero length.
We are allowed to swap elements A[i] and B[i]. Note that both elements are in the same index position in their respective sequences.
At the end of some number of swaps, A and B are both strictly increasing. (A sequence is strictly increasing if and only if A[0] < A[1] < A[2] < ... < A[A.length - 1].)
Given A and B, return the minimum number of swaps to make both sequences strictly increasing. It is guaranteed that the given input always makes it possible.
Example:
Input: A = [1,3,5,4], B = [1,2,3,7]
Output: 1
Explanation:
Swap A[3] and B[3]. Then the sequences are:
A = [1, 3, 5, 7] and B = [1, 2, 3, 4]
which are both strictly increasing.
Note:
A, B are arrays with the same length, and that length will be in the range [1, 1000].
A[i], B[i] are integer values in the range [0, 2000].
End of explanation |
1,822 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
There are many different ways to approach solving a problem. Depending on your ingenuity, your algorithmic, mathematical and statistical knowledge, and your common sense, you can devise increasingly refined strategies.
Today, however, we are lazy. Suppose we are, without question, the laziest student who has ever existed, and suppose the problem we have to solve is learning a fragment of Shakespeare's Hamlet. The fragment in question is a simple line of Hamlet's
Step1: And then propose our sentence to the teacher, who simply
Step2: We soon realize how much patience the monkeys must have: after 100000 attempts (we may be lazy, but we are also very tenacious) we have guessed barely 8 letters out of 28
Step3: With this simple change we are able to learn the correct sentence in just a few thousand attempts.
About three thousand attempts is definitely a much better result than before! Still, trying them one at a time is a bit boring, so why not propose several at the same time? Let's try proposing 100 in one go
Step4: Wow, now I've managed to get down to around a thousand, but have I really gained anything in terms of the number of attempts needed? After all, I now make 100 attempts at a time, so why doesn't it take 100 times less? Actually it's unfair to count the number of times I hand my attempts to the teacher; I should count how many times the teacher evaluates my attempts instead!
Let's have a look
Step5: Darn! Yes, it's true that it now takes me about a third of the iterations, but my poor teacher has to evaluate an absurd number of extra attempts! About 130000! Whereas before, when I proposed one sentence at a time, about 3000 were enough. And since I have to wait for him to finish before I can try new sentences, I'm shooting myself in the foot!
Besides, what is this mess in my initial attempts?
How can they change so much from one to the next? I only modify one letter at a time! Wait… oh right, since I have many sentences at once, some of them may have found some correct letters but then been outpaced by others a few iterations later. In the end I only care about who the best candidate is: if by chance one of its rivals manages to find even a single extra letter, my poor champion is tossed into oblivion. But then why do things look much more "uniform", as I expected, towards the end?
Hmmm… it could be that at the beginning it was easy to improve one's result, since most of the letters were wrong. Towards the end, instead, only one or two letters are wrong, so it's hard to make progress: you need the luck of hitting both the right position and the missing letter!
Maybe it's better if I take a look at what happens to my sentences, not just to the champion
Step6: Let's see what happens to our candidates when I consider 10 at a time
Step7: Yep, it's just as I thought: my former champion failed to find an improvement, so one of its rivals took the lead!
But why do they have to compete with each other! I just want to find the sentence the teacher wants; if they collaborated it would be much simpler...
Can I make them share the correct parts that each of them has found? Darn, if only I knew which ones they are! Cursed teacher who only tells me how many letters I get right.
Hmmm, what if I mixed the various sentences together, hoping they combine the correct parts they have found? I could take two sentences at random and build a new one by taking letters from one or the other. Yes, it seems like a good idea: after all, if both sentences have found a correct letter it doesn't matter which one I draw from, the new sentence will certainly have that letter, so at least I can rest assured I won't ruin the solutions they find!
Step8: Ok, but which ones do I mix together? Argh…
Ugh, let's think: I certainly want the best sentence, after all it's the one with the most correct letters, but who could I mix it with? The second best? What if less 'good' sentences had nonetheless found parts that the best one is missing? Let's do this: I pick them at random, but give priority to the ones with higher scores. Yes, that seems sensible, but how do I do it? T_T
Hmmm.... it's as if I wanted to spin a roulette wheel where whoever has a higher fitness gets more 'slices'; let's try sketching it out | Python Code:
import random
import string
def random_char():
return random.choice(string.ascii_lowercase + ' ')
def genera_frase():
return [random_char() for n in range(0,len(amleto))]
amleto = list('parmi somigli ad una donnola')
print("target= '"+''.join(amleto)+"'")
frase = genera_frase()
print(str(frase)+" = '"+''.join(frase)+"'")
Explanation: There are many different ways to approach solving a problem. Depending on your ingenuity, your algorithmic, mathematical and statistical knowledge, and your common sense, you can devise increasingly refined strategies.
Today, however, we are lazy. Suppose we are, without question, the laziest student who has ever existed, and suppose the problem we have to solve is learning a fragment of Shakespeare's Hamlet. The fragment in question is a simple line of Hamlet's:
Parmi somigli ad una donnola
Since we are monstrously lazy, we have been paired with a teacher, handsomely paid to help us carry out our task but unfortunately just as lazy as we are.
Making an effort to learn the assigned sentence is out of the question. Why not start by throwing out random sentences and seeing how it goes? After all, if monkeys hitting random keys on a typewriter can rewrite all of Shakespeare's works, why can't we manage a single sentence?
A normal person would probably put together words the author might have used, but we don't want to give our monkey rivals any handicap, so we start saying random letters.
Specifically, we pick a number of random letters equal to the length of the sentence we want to learn and propose it to our teacher, who, as lazy as we are, merely tells us how many letters we got right.
What we do in practice is:
End of explanation
def valuta( candidato ):
azzeccate = 0
for (lettera1, lettera2) in zip(candidato, amleto):
if lettera1 == lettera2:
azzeccate = azzeccate + 1
return azzeccate
risposta = valuta(frase)
print(risposta)
Explanation: And then propose our sentence to the teacher, who simply:
End of explanation
def altera(vecchia_frase):
posizione_da_cambiare = random.choice(range(0,len(vecchia_frase)))
lettera_da_cambiare = vecchia_frase[posizione_da_cambiare]
alternative = (string.ascii_lowercase + ' ').replace(lettera_da_cambiare,'')
nuova_frase = list(vecchia_frase)
nuova_frase[posizione_da_cambiare] = random.choice(alternative)
return nuova_frase
i=0
miglior_frase = [random_char() for n in range(0,len(amleto))]
miglior_risultato = valuta(miglior_frase)
while(miglior_risultato < len(amleto)):
frase = altera(miglior_frase)
risposta = valuta(frase)
i = i+1
if risposta > miglior_risultato:
miglior_risultato = risposta
miglior_frase = frase
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
Explanation: We soon realize how much patience the monkeys must have: after 100000 attempts (we may be lazy, but we are also very tenacious) we have guessed barely 8 letters out of 28:
"vfgoyflcmorpiisd untmsonqcji"
"parmi somigli ad una donnola"
Not exactly a stellar result.
We have 28 letters to guess, and for each we have 27 choices ('abcdefghijklmnopqrstuvwxyz '), which means our probability of guessing the whole sentence at random is one in $27^{28}$, i.e. roughly:
0.00000000000000000000000000000000000000008
While we propose the 1000001st sentence, we think to ourselves that it would be nice not to have to start over every time; now that we have a sentence with 8 correct letters, it would be nice to improve it instead of throwing it away and starting again. If only our teacher were kind enough to tell us which letters we got right, and not just how many, we would finish in no time.
Oh well, let's try anyway: instead of throwing away our sentence each time, let's keep it and change one letter, trying to get ever better results:
End of explanation
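That figure is easy to double-check numerically; a one-off sanity check, not part of the learning loop:
```
# One chance in 27**28 of guessing the whole sentence blindly.
print(1 / 27 ** 28)   # ~8.4e-41
```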
def migliore(candidati):
ordinati = sorted(candidati,key=lambda tup: tup[1], reverse=True)
return ordinati[0]
def genera_candidati(num_candidati):
candidati = []
for i in range(0,num_candidati):
tmp_frase = genera_frase()
tmp_risposta = valuta(tmp_frase)
candidati.append((tmp_frase,tmp_risposta))
return candidati
candidati = genera_candidati(100)
i=0
miglior_frase, miglior_risultato = migliore(candidati)
while(miglior_risultato < len(amleto)):
i = i+1
for n in range(0,len(candidati)):
frase,risposta = candidati[n]
nuova_frase = altera(frase)
nuova_risposta = valuta(nuova_frase)
if nuova_risposta > risposta:
candidati[n] = (nuova_frase,nuova_risposta)
if nuova_risposta > miglior_risultato:
miglior_risultato = nuova_risposta
miglior_frase = nuova_frase
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
Explanation: With this simple change we are able to learn the correct sentence in just a few thousand attempts.
About three thousand attempts is definitely a much better result than before! Still, trying them one at a time is a bit boring, so why not propose several at the same time? Let's try proposing 100 in one go:
End of explanation
def valuta( candidato ):
global valutazioni
valutazioni = valutazioni + 1
azzeccate = 0
for (lettera1, lettera2) in zip(candidato, amleto):
if lettera1 == lettera2:
azzeccate = azzeccate + 1
return azzeccate
def prova_piu_frasi_insieme(num_frasi):
global valutazioni
valutazioni = 0
i=0
candidati = genera_candidati(num_frasi)
miglior_frase, miglior_risultato = migliore(candidati)
while(miglior_risultato < len(amleto)):
i = i+1
for n in range(0,len(candidati)):
frase,risposta = candidati[n]
nuova_frase = altera(frase)
nuova_risposta = valuta(nuova_frase)
if nuova_risposta > risposta:
candidati[n] = (nuova_frase,nuova_risposta)
if nuova_risposta > miglior_risultato:
miglior_risultato = nuova_risposta
miglior_frase = nuova_frase
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
print('Valutazioni totali: '+str(valutazioni))
prova_piu_frasi_insieme(100)
Explanation: Wow, now I've managed to get down to around a thousand, but have I really gained anything in terms of the number of attempts needed? After all, I now make 100 attempts at a time, so why doesn't it take 100 times less? Actually it's unfair to count the number of times I hand my attempts to the teacher; I should count how many times the teacher evaluates my attempts instead!
Let's have a look:
End of explanation
import pprint
pp = pprint.PrettyPrinter()
def stampa_candidati(candidati):
    # candidati -> arrays of chars; turn them into strings with ''.join(...)
# [' ', 'x', 'p', 'l', 'f', … ,'d', 'z', 'h', 'f'] -> ' xplfrvvjjvnmzkovohltroudzhf'
stringhe_e_valori = list(map(lambda x : (''.join(x[0]),x[1]), candidati))
    # for convenience, sort the strings by number of correct letters, descending
stringhe_ordinate = sorted(stringhe_e_valori,key=lambda tup: tup[1], reverse=True)
pp.pprint(stringhe_ordinate)
stampa_candidati(genera_candidati(10))
Explanation: Darn! Yes, it's true that it now takes me about a third of the iterations, but my poor teacher has to evaluate an absurd number of extra attempts! About 130000! Whereas before, when I proposed one sentence at a time, about 3000 were enough. And since I have to wait for him to finish before I can try new sentences, I'm shooting myself in the foot!
Besides, what is this mess in my initial attempts?
How can they change so much from one to the next? I only modify one letter at a time! Wait… oh right, since I have many sentences at once, some of them may have found some correct letters but then been outpaced by others a few iterations later. In the end I only care about who the best candidate is: if by chance one of its rivals manages to find even a single extra letter, my poor champion is tossed into oblivion. But then why do things look much more "uniform", as I expected, towards the end?
Hmmm… it could be that at the beginning it was easy to improve one's result, since most of the letters were wrong. Towards the end, instead, only one or two letters are wrong, so it's hard to make progress: you need the luck of hitting both the right position and the missing letter!
Maybe it's better if I take a look at what happens to my sentences, not just to the champion:
End of explanation
def prova_piu_frasi_insieme(num_frasi):
global valutazioni
valutazioni = 0
i=0
candidati = genera_candidati(num_frasi)
miglior_frase, miglior_risultato = migliore(candidati)
while(miglior_risultato < len(amleto)):
i = i+1
for n in range(0,len(candidati)):
frase,risposta = candidati[n]
nuova_frase = altera(frase)
nuova_risposta = valuta(nuova_frase)
if nuova_risposta > risposta:
candidati[n] = (nuova_frase,nuova_risposta)
if nuova_risposta > miglior_risultato:
miglior_risultato = nuova_risposta
miglior_frase = nuova_frase
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
stampa_candidati(candidati)
print('Valutazioni totali: '+str(valutazioni))
prova_piu_frasi_insieme(10)
Explanation: Let's see what happens to our candidates when I consider 10 at a time:
End of explanation
def mescola(frase1, frase2):
nuova_frase = []
for i in range(0,len(frase1[0])):
if random.random() > 0.5:
nuova_frase.append(frase1[0][i])
else:
nuova_frase.append(frase2[0][i])
return (nuova_frase,valuta(nuova_frase))
test_frase1 , test_frase2 = genera_frase(), genera_frase()
print('frase1: "'+''.join(test_frase1)+'"')
print('frase2: "'+''.join(test_frase2)+'"')
print('mix: "'+''.join(mescola((test_frase1,1),(test_frase2,1))[0])+'"')
Explanation: Yep, it's just as I thought: my former champion failed to find an improvement, so one of its rivals took the lead!
But why do they have to compete with each other! I just want to find the sentence the teacher wants; if they collaborated it would be much simpler...
Can I make them share the correct parts that each of them has found? Darn, if only I knew which ones they are! Cursed teacher who only tells me how many letters I get right.
Hmmm, what if I mixed the various sentences together, hoping they combine the correct parts they have found? I could take two sentences at random and build a new one by taking letters from one or the other. Yes, it seems like a good idea: after all, if both sentences have found a correct letter it doesn't matter which one I draw from, the new sentence will certainly have that letter, so at least I can rest assured I won't ruin the solutions they find!
End of explanation
def genera_ruota(candidati):
totale = 0
ruota = []
for frase,valore in candidati:
totale = totale + valore
ruota.append((totale,frase,valore))
return ruota
def gira_ruota(wheel):
totale = wheel[-1][0]
pick = totale * random.random()
for (parziale,candidato,valore) in wheel:
if parziale >= pick:
return (candidato,valore)
return wheel[-1][1:]
candidati = genera_candidati(10)
wheel = genera_ruota(candidati)
pretty_wheel = list(map(lambda x:(x[0],''.join(x[1]),x[2]),wheel))
pp.pprint(pretty_wheel)
print("migliore='"+''.join(migliore(candidati)[0])+"'")
def prova_piu_frasi_e_mescola(num_frasi):
global valutazioni
valutazioni = 0
i=0
candidati = genera_candidati(num_frasi)
miglior_frase, miglior_risultato = migliore(candidati)
while(miglior_risultato < len(amleto)):
i = i+1
ruota = genera_ruota(candidati)
nuovi_candidati = []
for n in range(0,len(candidati)):
minitorneo = [gira_ruota(ruota),gira_ruota(ruota)]
nuova_frase = altera(mescola(minitorneo[0],minitorneo[1])[0])
nuova_risposta = valuta(nuova_frase)
minitorneo.append((nuova_frase,nuova_risposta))
vincitore,valore_vincitore = migliore(minitorneo)
nuovi_candidati.append((vincitore,valore_vincitore))
if valore_vincitore > miglior_risultato:
miglior_risultato = valore_vincitore
miglior_frase = vincitore
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
stampa_candidati(candidati)
candidati = nuovi_candidati
print('valutazioni: '+str(valutazioni))
prova_piu_frasi_e_mescola(10)
Explanation: Ok, but which ones do I mix together? Argh…
Ugh, let's think: I certainly want the best sentence, after all it's the one with the most correct letters, but who could I mix it with? The second best? What if less 'good' sentences had nonetheless found parts that the best one is missing? Let's do this: I pick them at random, but give priority to the ones with higher scores. Yes, that seems sensible, but how do I do it? T_T
Hmmm.... it's as if I wanted to spin a roulette wheel where whoever has a higher fitness gets more 'slices'; let's try sketching it out:
<img src="roulette.png">
Hmm, a sentence worth 24 deserves 24 slices, while one worth 20 gets only 20, so the total number of slices is the sum of all the scores.
End of explanation |
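For what it's worth, the same fitness-proportionate pick can also be sketched with the standard library's random.choices, which accepts per-item weights. This is only an alternative illustration of the roulette idea, not a replacement for the gira_ruota used above, and it assumes at least one candidate has a positive score:
```
candidati_demo = genera_candidati(10)
pesi = [valore for _, valore in candidati_demo]   # fitness values act as wheel slices
scelto = random.choices(candidati_demo, weights=pesi, k=1)[0]
print(''.join(scelto[0]), scelto[1])
```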
1,823 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Evoked data structure
Step1: Creating Evoked objects from Epochs
Step2: Basic visualization of Evoked objects
We can visualize the average evoked response for left-auditory stimuli using
the
Step3: Like the plot() methods for
Step4: To select based on time in seconds, the
Step5: Similarities among the core data structures
Step6: Notice that
Step7: If you want to load only some of the conditions present in a .fif file,
Step8: Above, when we created an
Step9: This can be remedied by either passing a baseline parameter to
Step10: Notice that
Step11: This approach will weight each epoch equally and create a single
Step12: However, this may not always be the case; if for statistical reasons it is
important to average the same number of epochs from different conditions,
you can use
Step13: Note that the nave attribute of the resulting ~mne.Evoked object will
reflect the effective number of averages, and depends on both the nave
attributes of the contributing ~mne.Evoked objects and the weights at
which they are combined. Keeping track of effective nave is important for
inverse imaging, because nave is used to scale the noise covariance
estimate (which in turn affects the magnitude of estimated source activity).
See minimum_norm_estimates for more information (especially the
whitening_and_scaling section). Note that mne.grand_average does
not adjust nave to reflect effective number of averaged epochs; rather
it simply sets nave to the number of evokeds that were averaged
together. For this reason, it is best to use mne.combine_evoked rather than
mne.grand_average if you intend to perform inverse imaging on the resulting | Python Code:
import os
import mne
Explanation: The Evoked data structure: evoked/averaged data
This tutorial covers the basics of creating and working with :term:evoked
data. It introduces the :class:~mne.Evoked data structure in detail,
including how to load, query, subselect, export, and plot data from an
:class:~mne.Evoked object. For info on creating an :class:~mne.Evoked
object from (possibly simulated) data in a :class:NumPy array
<numpy.ndarray>, see tut_creating_data_structures.
:depth: 2
As usual we'll start by importing the modules we need:
End of explanation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
events = mne.find_events(raw, stim_channel='STI 014')
# we'll skip the "face" and "buttonpress" conditions, to save memory:
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4}
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,
preload=True)
evoked = epochs['auditory/left'].average()
del raw # reduce memory usage
Explanation: Creating Evoked objects from Epochs
:class:~mne.Evoked objects typically store an EEG or MEG signal that has
been averaged over multiple :term:epochs, which is a common technique for
estimating stimulus-evoked activity. The data in an :class:~mne.Evoked
object are stored in an :class:array <numpy.ndarray> of shape
(n_channels, n_times) (in contrast to an :class:~mne.Epochs object,
which stores data of shape (n_epochs, n_channels, n_times)). Thus to
create an :class:~mne.Evoked object, we'll start by epoching some raw data,
and then averaging together all the epochs from one condition:
End of explanation
evoked.plot()
Explanation: Basic visualization of Evoked objects
We can visualize the average evoked response for left-auditory stimuli using
the :meth:~mne.Evoked.plot method, which yields a butterfly plot of each
channel type:
End of explanation
print(evoked.data[:2, :3]) # first 2 channels, first 3 timepoints
Explanation: Like the plot() methods for :meth:Raw <mne.io.Raw.plot> and
:meth:Epochs <mne.Epochs.plot> objects,
:meth:evoked.plot() <mne.Evoked.plot> has many parameters for customizing
the plot output, such as color-coding channel traces by scalp location, or
plotting the :term:global field power <GFP> alongside the channel traces.
See tut-visualize-evoked for more information about visualizing
:class:~mne.Evoked objects.
Subselecting Evoked data
.. sidebar:: Evokeds are not memory-mapped
:class:~mne.Evoked objects use a :attr:~mne.Evoked.data attribute
rather than a :meth:~mne.Epochs.get_data method; this reflects the fact
that the data in :class:~mne.Evoked objects are always loaded into
memory, never memory-mapped_ from their location on disk (because they
are typically much smaller than :class:~mne.io.Raw or
:class:~mne.Epochs objects).
Unlike :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects do not support selection by square-bracket
indexing. Instead, data can be subselected by indexing the
:attr:~mne.Evoked.data attribute:
End of explanation
evoked_eeg = evoked.copy().pick_types(meg=False, eeg=True)
print(evoked_eeg.ch_names)
new_order = ['EEG 002', 'MEG 2521', 'EEG 003']
evoked_subset = evoked.copy().reorder_channels(new_order)
print(evoked_subset.ch_names)
Explanation: To select based on time in seconds, the :meth:~mne.Evoked.time_as_index
method can be useful, although beware that depending on the sampling
frequency, the number of samples in a span of given duration may not always
be the same (see the time-as-index section of the
tutorial about Raw data <tut-raw-class> for details).
Selecting, dropping, and reordering channels
By default, when creating :class:~mne.Evoked data from an
:class:~mne.Epochs object, only the "data" channels will be retained:
eog, ecg, stim, and misc channel types will be dropped. You
can control which channel types are retained via the picks parameter of
:meth:epochs.average() <mne.Epochs.average>, by passing 'all' to
retain all channels, or by passing a list of integers, channel names, or
channel types. See the documentation of :meth:~mne.Epochs.average for
details.
If you've already created the :class:~mne.Evoked object, you can use the
:meth:~mne.Evoked.pick, :meth:~mne.Evoked.pick_channels,
:meth:~mne.Evoked.pick_types, and :meth:~mne.Evoked.drop_channels methods
to modify which channels are included in an :class:~mne.Evoked object.
You can also use :meth:~mne.Evoked.reorder_channels for this purpose; any
channel names not provided to :meth:~mne.Evoked.reorder_channels will be
dropped. Note that channel selection methods modify the object in-place, so
in interactive/exploratory sessions you may want to create a
:meth:~mne.Evoked.copy first.
End of explanation
sample_data_evk_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-ave.fif')
evokeds_list = mne.read_evokeds(sample_data_evk_file, verbose=False)
print(evokeds_list)
print(type(evokeds_list))
Explanation: Similarities among the core data structures
:class:~mne.Evoked objects have many similarities with :class:~mne.io.Raw
and :class:~mne.Epochs objects, including:
They can be loaded from and saved to disk in .fif format, and their
data can be exported to a :class:NumPy array <numpy.ndarray> (but through
the :attr:~mne.Evoked.data attribute, not through a get_data()
method). :class:Pandas DataFrame <pandas.DataFrame> export is also
available through the :meth:~mne.Evoked.to_data_frame method.
You can change the name or type of a channel using
:meth:evoked.rename_channels() <mne.Evoked.rename_channels> or
:meth:evoked.set_channel_types() <mne.Evoked.set_channel_types>.
Both methods take :class:dictionaries <dict> where the keys are existing
channel names, and the values are the new name (or type) for that channel.
Existing channels that are not in the dictionary will be unchanged.
:term:SSP projector <projector> manipulation is possible through
:meth:~mne.Evoked.add_proj, :meth:~mne.Evoked.del_proj, and
:meth:~mne.Evoked.plot_projs_topomap methods, and the
:attr:~mne.Evoked.proj attribute. See tut-artifact-ssp for more
information on SSP.
Like :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects have :meth:~mne.Evoked.copy,
:meth:~mne.Evoked.crop, :meth:~mne.Evoked.time_as_index,
:meth:~mne.Evoked.filter, and :meth:~mne.Evoked.resample methods.
Like :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects have evoked.times,
:attr:evoked.ch_names <mne.Evoked.ch_names>, and :class:info <mne.Info>
attributes.
Loading and saving Evoked data
Single :class:~mne.Evoked objects can be saved to disk with the
:meth:evoked.save() <mne.Evoked.save> method. One difference between
:class:~mne.Evoked objects and the other data structures is that multiple
:class:~mne.Evoked objects can be saved into a single .fif file, using
:func:mne.write_evokeds. The example data <sample-dataset>
includes just such a .fif file: the data have already been epoched and
averaged, and the file contains separate :class:~mne.Evoked objects for
each experimental condition:
End of explanation
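As a small illustration of the renaming API mentioned above (the new names are arbitrary, chosen only for this example):
```
mapping = {'EEG 002': 'left-frontal', 'EEG 003': 'right-frontal'}  # arbitrary example names
evoked_renamed = evoked.copy().rename_channels(mapping)
print([name for name in evoked_renamed.ch_names if 'frontal' in name])
```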
for evok in evokeds_list:
print(evok.comment)
Explanation: Notice that :func:mne.read_evokeds returned a :class:list of
:class:~mne.Evoked objects, and each one has an evoked.comment
attribute describing the experimental condition that was averaged to
generate the estimate:
End of explanation
right_vis = mne.read_evokeds(sample_data_evk_file, condition='Right visual')
print(right_vis)
print(type(right_vis))
Explanation: If you want to load only some of the conditions present in a .fif file,
:func:~mne.read_evokeds has a condition parameter, which takes either a
string (matched against the comment attribute of the evoked objects on disk),
or an integer selecting the :class:~mne.Evoked object based on the order
it's stored in the file. Passing lists of integers or strings is also
possible. If only one object is selected, the :class:~mne.Evoked object
will be returned directly (rather than a length-one list containing it):
End of explanation
evokeds_list[0].plot(picks='eeg')
Explanation: Above, when we created an :class:~mne.Evoked object by averaging epochs,
baseline correction was applied by default when we extracted epochs from the
:class:~mne.io.Raw object (the default baseline period is (None, 0),
which assured zero mean for times before the stimulus event). In contrast, if
we plot the first :class:~mne.Evoked object in the list that was loaded
from disk, we'll see that the data have not been baseline-corrected:
End of explanation
evokeds_list[0].apply_baseline((None, 0))
evokeds_list[0].plot(picks='eeg')
Explanation: This can be remedied by either passing a baseline parameter to
:func:mne.read_evokeds, or by applying baseline correction after loading,
as shown here:
End of explanation
left_right_aud = epochs['auditory'].average()
print(left_right_aud)
Explanation: Notice that :meth:~mne.Evoked.apply_baseline operated in-place. Similarly,
:class:~mne.Evoked objects may have been saved to disk with or without
:term:projectors <projector> applied; you can pass proj=True to the
:func:~mne.read_evokeds function, or use the :meth:~mne.Evoked.apply_proj
method after loading.
Combining Evoked objects
One way to pool data across multiple conditions when estimating evoked
responses is to do so prior to averaging (recall that MNE-Python can select
based on partial matching of /-separated epoch labels; see
tut-section-subselect-epochs for more info):
End of explanation
left_aud = epochs['auditory/left'].average()
right_aud = epochs['auditory/right'].average()
print([evok.nave for evok in (left_aud, right_aud)])
Explanation: This approach will weight each epoch equally and create a single
:class:~mne.Evoked object. Notice that the printed representation includes
(average, N=145), indicating that the :class:~mne.Evoked object was
created by averaging across 145 epochs. In this case, the event types were
fairly close in number:
End of explanation
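A brief sketch of the equalization step mentioned above, operating on a copy so the original epochs object is left untouched (the method returns the modified object together with the dropped indices):
```
epochs_eq, dropped = epochs.copy().equalize_event_counts(['auditory/left', 'auditory/right'])
print(len(epochs_eq['auditory/left']), len(epochs_eq['auditory/right']))  # now equal
```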
left_right_aud = mne.combine_evoked([left_aud, right_aud], weights='nave')
assert left_right_aud.nave == left_aud.nave + right_aud.nave
Explanation: However, this may not always be the case; if for statistical reasons it is
important to average the same number of epochs from different conditions,
you can use :meth:~mne.Epochs.equalize_event_counts prior to averaging.
Another approach to pooling across conditions is to create separate
:class:~mne.Evoked objects for each condition, and combine them afterward.
This can be accomplished by the function :func:mne.combine_evoked, which
computes a weighted sum of the :class:~mne.Evoked objects given to it. The
weights can be manually specified as a list or array of float values, or can
be specified using the keyword 'equal' (weight each ~mne.Evoked object
by $\frac{1}{N}$, where $N$ is the number of ~mne.Evoked
objects given) or the keyword 'nave' (weight each ~mne.Evoked object
proportional to the number of epochs averaged together to create it):
End of explanation
for ix, trial in enumerate(epochs[:3].iter_evoked()):
channel, latency, value = trial.get_peak(ch_type='eeg',
return_amplitude=True)
latency = int(round(latency * 1e3)) # convert to milliseconds
value = int(round(value * 1e6)) # convert to µV
print('Trial {}: peak of {} µV at {} ms in channel {}'
.format(ix, value, latency, channel))
Explanation: Note that the nave attribute of the resulting ~mne.Evoked object will
reflect the effective number of averages, and depends on both the nave
attributes of the contributing ~mne.Evoked objects and the weights at
which they are combined. Keeping track of effective nave is important for
inverse imaging, because nave is used to scale the noise covariance
estimate (which in turn affects the magnitude of estimated source activity).
See minimum_norm_estimates for more information (especially the
whitening_and_scaling section). Note that mne.grand_average does
not adjust nave to reflect effective number of averaged epochs; rather
it simply sets nave to the number of evokeds that were averaged
together. For this reason, it is best to use mne.combine_evoked rather than
mne.grand_average if you intend to perform inverse imaging on the resulting
:class:~mne.Evoked object.
Other uses of Evoked objects
Although the most common use of :class:~mne.Evoked objects is to store
averages of epoched data, there are a couple other uses worth noting here.
First, the method :meth:epochs.standard_error() <mne.Epochs.standard_error>
will create an :class:~mne.Evoked object (just like
:meth:epochs.average() <mne.Epochs.average> does), but the data in the
:class:~mne.Evoked object will be the standard error across epochs instead
of the average. To indicate this difference, :class:~mne.Evoked objects
have a :attr:~mne.Evoked.kind attribute that takes values 'average' or
'standard error' as appropriate.
Another use of :class:~mne.Evoked objects is to represent a single trial
or epoch of data, usually when looping through epochs. This can be easily
accomplished with the :meth:epochs.iter_evoked() <mne.Epochs.iter_evoked>
method, and can be useful for applications where you want to do something
that is only possible for :class:~mne.Evoked objects. For example, here
we use the :meth:~mne.Evoked.get_peak method (which isn't available for
:class:~mne.Epochs objects) to get the peak response in each trial:
End of explanation |
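And a tiny sketch of the standard-error variant mentioned above; the result is still an Evoked object, only its kind differs from that of an average:
```
std_err = epochs['auditory/left'].standard_error()
print(std_err.kind, '|', evoked.kind)
```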
1,824 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
🔪 JAX - The Sharp Bits 🔪
levskaya@ mattjj@
When walking about the countryside of Italy, the people will not hesitate to tell you that JAX has "una anima di pura programmazione funzionale".
JAX is a language for expressing and composing transformations of numerical programs. JAX is also able to compile numerical programs for CPU or accelerators (GPU/TPU).
JAX works great for many numerical and scientific programs, but only if they are written with certain constraints that we describe below.
Step1: 🔪 Pure functions
JAX transformation and compilation are designed to work only on Python functions that are functionally pure
Step2: A Python function can be functionally pure even if it actually uses stateful objects internally, as long as it does not read or write external state
Step3: It is not recommended to use iterators in any JAX function you want to jit or in any control-flow primitive. The reason is that an iterator is a python object which introduces state to retrieve the next element. Therefore, it is incompatible with JAX functional programming model. In the code below, there are some examples of incorrect attempts to use iterators with JAX. Most of them return an error, but some give unexpected results.
Step4: 🔪 In-Place Updates
In Numpy you're used to doing this
Step5: If we try to update a JAX device array in-place, however, we get an error! (☉_☉)
Step6: Allowing mutation of variables in-place makes program analysis and transformation difficult. JAX requires that programs are pure functions.
Instead, JAX offers a functional array update using the .at property on JAX arrays.
️⚠️ inside jit'd code and lax.while_loop or lax.fori_loop the size of slices can't be functions of argument values but only functions of argument shapes -- the slice start indices have no such restriction. See the below Control Flow Section for more information on this limitation.
Array updates
Step7: JAX's array update functions, unlike their NumPy versions, operate out-of-place. That is, the updated array is returned as a new array and the original array is not modified by the update.
Step8: However, inside jit-compiled code, if the input value x of x.at[idx].set(y) is not reused, the compiler will optimize the array update to occur in-place.
Array updates with other operations
Indexed array updates are not limited simply to overwriting values. For example, we can perform indexed addition as follows
Step9: For more details on indexed array updates, see the documentation for the .at property.
🔪 Out-of-Bounds Indexing
In Numpy, you are used to errors being thrown when you index an array outside of its bounds, like this
Step10: However, raising an error from code running on an accelerator can be difficult or impossible. Therefore, JAX must choose some non-error behavior for out of bounds indexing (akin to how invalid floating point arithmetic results in NaN). When the indexing operation is an array index update (e.g. index_add or scatter-like primitives), updates at out-of-bounds indices will be skipped; when the operation is an array index retrieval (e.g. NumPy indexing or gather-like primitives) the index is clamped to the bounds of the array since something must be returned. For example, the last value of the array will be returned from this indexing operation
Step11: Note that due to this behavior for index retrieval, functions like jnp.nanargmin and jnp.nanargmax return -1 for slices consisting of NaNs whereas Numpy would throw an error.
Note also that, as the two behaviors described above are not inverses of each other, reverse-mode automatic differentiation (which turns index updates into index retrievals and vice versa) will not preserve the semantics of out of bounds indexing. Thus it may be a good idea to think of out-of-bounds indexing in JAX as a case of undefined behavior.
🔪 Non-array inputs
Step12: JAX departs from this, generally returning a helpful error
Step13: This is a deliberate design choice, because passing lists or tuples to traced functions can lead to silent performance degradation that might otherwise be difficult to detect.
For example, consider the following permissive version of jnp.sum that allows list inputs
Step14: The output is what we would expect, but this hides potential performance issues under the hood. In JAX's tracing and JIT compilation model, each element in a Python list or tuple is treated as a separate JAX variable, and individually processed and pushed to device. This can be seen in the jaxpr for the permissive_sum function above
Step15: Each entry of the list is handled as a separate input, resulting in a tracing & compilation overhead that grows linearly with the size of the list. To prevent surprises like this, JAX avoids implicit conversions of lists and tuples to arrays.
If you would like to pass a tuple or list to a JAX function, you can do so by first explicitly converting it to an array
Step16: 🔪 Random Numbers
If all scientific papers whose results are in doubt because of bad
rand()s were to disappear from library shelves, there would be a
gap on each shelf about as big as your fist. - Numerical Recipes
RNGs and State
You're used to stateful pseudorandom number generators (PRNGs) from numpy and other libraries, which helpfully hide a lot of details under the hood to give you a ready fountain of pseudorandomness
Step17: Underneath the hood, numpy uses the Mersenne Twister PRNG to power its pseudorandom functions. The PRNG has a period of $2^{19937}-1$ and at any point can be described by 624 32bit unsigned ints and a position indicating how much of this "entropy" has been used up.
Step18: This pseudorandom state vector is automagically updated behind the scenes every time a random number is needed, "consuming" 2 of the uint32s in the Mersenne twister state vector
Step19: The problem with magic PRNG state is that it's hard to reason about how it's being used and updated across different threads, processes, and devices, and it's very easy to screw up when the details of entropy production and consumption are hidden from the end user.
The Mersenne Twister PRNG is also known to have a number of problems, it has a large 2.5Kb state size, which leads to problematic initialization issues. It fails modern BigCrush tests, and is generally slow.
JAX PRNG
JAX instead implements an explicit PRNG where entropy production and consumption are handled by explicitly passing and iterating PRNG state. JAX uses a modern Threefry counter-based PRNG that's splittable. That is, its design allows us to fork the PRNG state into new PRNGs for use with parallel stochastic generation.
The random state is described by two unsigned-int32s that we call a key
Step20: JAX's random functions produce pseudorandom numbers from the PRNG state, but do not change the state!
Reusing the same state will cause sadness and monotony, depriving the end user of lifegiving chaos
Step21: Instead, we split the PRNG to get usable subkeys every time we need a new pseudorandom number
Step22: We propagate the key and make new subkeys whenever we need a new random number
Step23: We can generate more than one subkey at a time
Step24: 🔪 Control Flow
✔ python control_flow + autodiff ✔
If you just want to apply grad to your python functions, you can use regular python control-flow constructs with no problems, as if you were using Autograd (or Pytorch or TF Eager).
Step25: python control flow + JIT
Using control flow with jit is more complicated, and by default it has more constraints.
This works
Step26: So does this
Step27: But this doesn't, at least by default
Step28: What gives!?
When we jit-compile a function, we usually want to compile a version of the function that works for many different argument values, so that we can cache and reuse the compiled code. That way we don't have to re-compile on each function evaluation.
For example, if we evaluate an @jit function on the array jnp.array([1., 2., 3.], jnp.float32), we might want to compile code that we can reuse to evaluate the function on jnp.array([4., 5., 6.], jnp.float32) to save on compile time.
To get a view of your Python code that is valid for many different argument values, JAX traces it on abstract values that represent sets of possible inputs. There are multiple different levels of abstraction, and different transformations use different abstraction levels.
By default, jit traces your code on the ShapedArray abstraction level, where each abstract value represents the set of all array values with a fixed shape and dtype. For example, if we trace using the abstract value ShapedArray((3,), jnp.float32), we get a view of the function that can be reused for any concrete value in the corresponding set of arrays. That means we can save on compile time.
But there's a tradeoff here
Step29: Here's another example, this time involving a loop
Step30: In effect, the loop gets statically unrolled. JAX can also trace at higher levels of abstraction, like Unshaped, but that's not currently the default for any transformation
️⚠️ functions with argument-value dependent shapes
These control-flow issues also come up in a more subtle way
Step31: static_argnums can be handy if length in our example rarely changes, but it would be disastrous if it changed a lot!
Lastly, if your function has global side-effects, JAX's tracer can cause weird things to happen. A common gotcha is trying to print arrays inside jit'd functions
Step32: Structured control flow primitives
There are more options for control flow in JAX. Say you want to avoid re-compilations but still want to use control flow that's traceable, and that avoids un-rolling large loops. Then you can use these 4 structured control flow primitives
Step33: while_loop
python equivalent
Step34: fori_loop
python equivalent
Step35: Summary
$$
\begin{array}{r|rr}
\hline
\textrm{construct}
& \textrm{jit}
& \textrm{grad} \\
\hline
\textrm{if} & ❌ & ✔ \\
\textrm{for} & ✔\ast & ✔ \\
\textrm{while} & ✔\ast & ✔ \\
\textrm{lax.cond} & ✔ & ✔ \\
\textrm{lax.while\_loop} & ✔ & \textrm{fwd} \\
\textrm{lax.fori\_loop} & ✔ & \textrm{fwd} \\
\textrm{lax.scan} & ✔ & ✔ \\
\hline
\end{array}
$$
<center>
$\ast$ = argument-<b>value</b>-independent loop condition - unrolls the loop
</center>
🔪 NaNs
Debugging NaNs
If you want to trace where NaNs are occurring in your functions or gradients, you can turn on the NaN-checker by
Step38: To use double-precision numbers, you need to set the jax_enable_x64 configuration variable at startup.
There are a few ways to do this | Python Code:
import numpy as np
from jax import grad, jit
from jax import lax
from jax import random
import jax
import jax.numpy as jnp
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib import rcParams
rcParams['image.interpolation'] = 'nearest'
rcParams['image.cmap'] = 'viridis'
rcParams['axes.grid'] = False
Explanation: 🔪 JAX - The Sharp Bits 🔪
levskaya@ mattjj@
When walking about the countryside of Italy, the people will not hesitate to tell you that JAX has "una anima di pura programmazione funzionale".
JAX is a language for expressing and composing transformations of numerical programs. JAX is also able to compile numerical programs for CPU or accelerators (GPU/TPU).
JAX works great for many numerical and scientific programs, but only if they are written with certain constraints that we describe below.
End of explanation
def impure_print_side_effect(x):
print("Executing function") # This is a side-effect
return x
# The side-effects appear during the first run
print ("First call: ", jit(impure_print_side_effect)(4.))
# Subsequent runs with parameters of same type and shape may not show the side-effect
# This is because JAX now invokes a cached compilation of the function
print ("Second call: ", jit(impure_print_side_effect)(5.))
# JAX re-runs the Python function when the type or shape of the argument changes
print ("Third call, different type: ", jit(impure_print_side_effect)(jnp.array([5.])))
g = 0.
def impure_uses_globals(x):
return x + g
# JAX captures the value of the global during the first run
print ("First call: ", jit(impure_uses_globals)(4.))
g = 10. # Update the global
# Subsequent runs may silently use the cached value of the globals
print ("Second call: ", jit(impure_uses_globals)(5.))
# JAX re-runs the Python function when the type or shape of the argument changes
# This will end up reading the latest value of the global
print ("Third call, different type: ", jit(impure_uses_globals)(jnp.array([4.])))
g = 0.
def impure_saves_global(x):
global g
g = x
return x
# JAX runs once the transformed function with special Traced values for arguments
print ("First call: ", jit(impure_saves_global)(4.))
print ("Saved global: ", g) # Saved global has an internal JAX value
Explanation: 🔪 Pure functions
JAX transformation and compilation are designed to work only on Python functions that are functionally pure: all the input data is passed through the function parameters, all the results are output through the function results. A pure function will always return the same result if invoked with the same inputs.
Here are some examples of functions that are not functionally pure for which JAX behaves differently than the Python interpreter. Note that these behaviors are not guaranteed by the JAX system; the proper way to use JAX is to use it only on functionally pure Python functions.
End of explanation
def pure_uses_internal_state(x):
state = dict(even=0, odd=0)
for i in range(10):
state['even' if i % 2 == 0 else 'odd'] += x
return state['even'] + state['odd']
print(jit(pure_uses_internal_state)(5.))
Explanation: A Python function can be functionally pure even if it actually uses stateful objects internally, as long as it does not read or write external state:
End of explanation
import jax.numpy as jnp
import jax.lax as lax
from jax import make_jaxpr
# lax.fori_loop
array = jnp.arange(10)
print(lax.fori_loop(0, 10, lambda i,x: x+array[i], 0)) # expected result 45
iterator = iter(range(10))
print(lax.fori_loop(0, 10, lambda i,x: x+next(iterator), 0)) # unexpected result 0
# lax.scan
def func11(arr, extra):
ones = jnp.ones(arr.shape)
def body(carry, aelems):
ae1, ae2 = aelems
return (carry + ae1 * ae2 + extra, carry)
return lax.scan(body, 0., (arr, ones))
make_jaxpr(func11)(jnp.arange(16), 5.)
# make_jaxpr(func11)(iter(range(16)), 5.) # throws error
# lax.cond
array_operand = jnp.array([0.])
lax.cond(True, lambda x: x+1, lambda x: x-1, array_operand)
iter_operand = iter(range(10))
# lax.cond(True, lambda x: next(x)+1, lambda x: next(x)-1, iter_operand) # throws error
Explanation: It is not recommended to use iterators in any JAX function you want to jit or in any control-flow primitive. The reason is that an iterator is a python object which introduces state to retrieve the next element. Therefore, it is incompatible with JAX functional programming model. In the code below, there are some examples of incorrect attempts to use iterators with JAX. Most of them return an error, but some give unexpected results.
End of explanation
numpy_array = np.zeros((3,3), dtype=np.float32)
print("original array:")
print(numpy_array)
# In place, mutating update
numpy_array[1, :] = 1.0
print("updated array:")
print(numpy_array)
Explanation: 🔪 In-Place Updates
In Numpy you're used to doing this:
End of explanation
jax_array = jnp.zeros((3,3), dtype=jnp.float32)
# In place update of JAX's array will yield an error!
try:
jax_array[1, :] = 1.0
except Exception as e:
print("Exception {}".format(e))
Explanation: If we try to update a JAX device array in-place, however, we get an error! (☉_☉)
End of explanation
updated_array = jax_array.at[1, :].set(1.0)
print("updated array:\n", updated_array)
Explanation: Allowing mutation of variables in-place makes program analysis and transformation difficult. JAX requires that programs are pure functions.
Instead, JAX offers a functional array update using the .at property on JAX arrays.
️⚠️ inside jit'd code and lax.while_loop or lax.fori_loop the size of slices can't be functions of argument values but only functions of argument shapes -- the slice start indices have no such restriction. See the below Control Flow Section for more information on this limitation.
Array updates: x.at[idx].set(y)
For example, the update above can be written as:
End of explanation
print("original array unchanged:\n", jax_array)
Explanation: JAX's array update functions, unlike their NumPy versions, operate out-of-place. That is, the updated array is returned as a new array and the original array is not modified by the update.
End of explanation
print("original array:")
jax_array = jnp.ones((5, 6))
print(jax_array)
new_jax_array = jax_array.at[::2, 3:].add(7.)
print("new array post-addition:")
print(new_jax_array)
Explanation: However, inside jit-compiled code, if the input value x of x.at[idx].set(y) is not reused, the compiler will optimize the array update to occur in-place.
Array updates with other operations
Indexed array updates are not limited simply to overwriting values. For example, we can perform indexed addition as follows:
End of explanation
try:
np.arange(10)[11]
except Exception as e:
print("Exception {}".format(e))
Explanation: For more details on indexed array updates, see the documentation for the .at property.
🔪 Out-of-Bounds Indexing
In Numpy, you are used to errors being thrown when you index an array outside of its bounds, like this:
End of explanation
jnp.arange(10)[11]
Explanation: However, raising an error from code running on an accelerator can be difficult or impossible. Therefore, JAX must choose some non-error behavior for out of bounds indexing (akin to how invalid floating point arithmetic results in NaN). When the indexing operation is an array index update (e.g. index_add or scatter-like primitives), updates at out-of-bounds indices will be skipped; when the operation is an array index retrieval (e.g. NumPy indexing or gather-like primitives) the index is clamped to the bounds of the array since something must be returned. For example, the last value of the array will be returned from this indexing operation:
End of explanation
np.sum([1, 2, 3])
Explanation: Note that due to this behavior for index retrieval, functions like jnp.nanargmin and jnp.nanargmax return -1 for slices consisting of NaNs whereas Numpy would throw an error.
Note also that, as the two behaviors described above are not inverses of each other, reverse-mode automatic differentiation (which turns index updates into index retrievals and vice versa) will not preserve the semantics of out of bounds indexing. Thus it may be a good idea to think of out-of-bounds indexing in JAX as a case of undefined behavior.
🔪 Non-array inputs: NumPy vs. JAX
NumPy is generally happy accepting Python lists or tuples as inputs to its API functions:
End of explanation
try:
jnp.sum([1, 2, 3])
except TypeError as e:
print(f"TypeError: {e}")
Explanation: JAX departs from this, generally returning a helpful error:
End of explanation
def permissive_sum(x):
return jnp.sum(jnp.array(x))
x = list(range(10))
permissive_sum(x)
Explanation: This is a deliberate design choice, because passing lists or tuples to traced functions can lead to silent performance degradation that might otherwise be difficult to detect.
For example, consider the following permissive version of jnp.sum that allows list inputs:
End of explanation
make_jaxpr(permissive_sum)(x)
Explanation: The output is what we would expect, but this hides potential performance issues under the hood. In JAX's tracing and JIT compilation model, each element in a Python list or tuple is treated as a separate JAX variable, and individually processed and pushed to device. This can be seen in the jaxpr for the permissive_sum function above:
End of explanation
jnp.sum(jnp.array(x))
Explanation: Each entry of the list is handled as a separate input, resulting in a tracing & compilation overhead that grows linearly with the size of the list. To prevent surprises like this, JAX avoids implicit conversions of lists and tuples to arrays.
If you would like to pass a tuple or list to a JAX function, you can do so by first explicitly converting it to an array:
End of explanation
print(np.random.random())
print(np.random.random())
print(np.random.random())
Explanation: 🔪 Random Numbers
If all scientific papers whose results are in doubt because of bad
rand()s were to disappear from library shelves, there would be a
gap on each shelf about as big as your fist. - Numerical Recipes
RNGs and State
You're used to stateful pseudorandom number generators (PRNGs) from numpy and other libraries, which helpfully hide a lot of details under the hood to give you a ready fountain of pseudorandomness:
End of explanation
np.random.seed(0)
rng_state = np.random.get_state()
#print(rng_state)
# --> ('MT19937', array([0, 1, 1812433255, 1900727105, 1208447044,
# 2481403966, 4042607538, 337614300, ... 614 more numbers...,
# 3048484911, 1796872496], dtype=uint32), 624, 0, 0.0)
Explanation: Underneath the hood, numpy uses the Mersenne Twister PRNG to power its pseudorandom functions. The PRNG has a period of $2^{19937}-1$ and at any point can be described by 624 32bit unsigned ints and a position indicating how much of this "entropy" has been used up.
End of explanation
_ = np.random.uniform()
rng_state = np.random.get_state()
#print(rng_state)
# --> ('MT19937', array([2443250962, 1093594115, 1878467924,
# ..., 2648828502, 1678096082], dtype=uint32), 2, 0, 0.0)
# Let's exhaust the entropy in this PRNG statevector
for i in range(311):
_ = np.random.uniform()
rng_state = np.random.get_state()
#print(rng_state)
# --> ('MT19937', array([2443250962, 1093594115, 1878467924,
# ..., 2648828502, 1678096082], dtype=uint32), 624, 0, 0.0)
# Next call iterates the RNG state for a new batch of fake "entropy".
_ = np.random.uniform()
rng_state = np.random.get_state()
# print(rng_state)
# --> ('MT19937', array([1499117434, 2949980591, 2242547484,
# 4162027047, 3277342478], dtype=uint32), 2, 0, 0.0)
Explanation: This pseudorandom state vector is automagically updated behind the scenes every time a random number is needed, "consuming" 2 of the uint32s in the Mersenne twister state vector:
End of explanation
from jax import random
key = random.PRNGKey(0)
key
Explanation: The problem with magic PRNG state is that it's hard to reason about how it's being used and updated across different threads, processes, and devices, and it's very easy to screw up when the details of entropy production and consumption are hidden from the end user.
The Mersenne Twister PRNG is also known to have a number of problems: it has a large 2.5 kB state size, which leads to problematic initialization issues; it fails modern BigCrush tests; and it is generally slow.
JAX PRNG
JAX instead implements an explicit PRNG where entropy production and consumption are handled by explicitly passing and iterating PRNG state. JAX uses a modern Threefry counter-based PRNG that's splittable. That is, its design allows us to fork the PRNG state into new PRNGs for use with parallel stochastic generation.
The random state is described by two unsigned-int32s that we call a key:
End of explanation
print(random.normal(key, shape=(1,)))
print(key)
# No no no!
print(random.normal(key, shape=(1,)))
print(key)
Explanation: JAX's random functions produce pseudorandom numbers from the PRNG state, but do not change the state!
Reusing the same state will cause sadness and monotony, depriving the end user of lifegiving chaos:
End of explanation
print("old key", key)
key, subkey = random.split(key)
normal_pseudorandom = random.normal(subkey, shape=(1,))
print(" \---SPLIT --> new key ", key)
print(" \--> new subkey", subkey, "--> normal", normal_pseudorandom)
Explanation: Instead, we split the PRNG to get usable subkeys every time we need a new pseudorandom number:
End of explanation
print("old key", key)
key, subkey = random.split(key)
normal_pseudorandom = random.normal(subkey, shape=(1,))
print(" \---SPLIT --> new key ", key)
print(" \--> new subkey", subkey, "--> normal", normal_pseudorandom)
Explanation: We propagate the key and make new subkeys whenever we need a new random number:
End of explanation
key, *subkeys = random.split(key, 4)
for subkey in subkeys:
print(random.normal(subkey, shape=(1,)))
Explanation: We can generate more than one subkey at a time:
End of explanation
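A minimal sketch of how this is typically used in iterative code (the loop body and shapes below are illustrative, not part of the original example): split off a fresh subkey at every step and keep threading the updated key forward so that no key is ever reused.
key = random.PRNGKey(42)
samples = []
for step in range(3):
    key, subkey = random.split(key)  # consume entropy explicitly each iteration
    samples.append(random.normal(subkey, shape=(2,)))  # each subkey is used exactly once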
def f(x):
if x < 3:
return 3. * x ** 2
else:
return -4 * x
print(grad(f)(2.)) # ok!
print(grad(f)(4.)) # ok!
Explanation: 🔪 Control Flow
✔ python control_flow + autodiff ✔
If you just want to apply grad to your python functions, you can use regular python control-flow constructs with no problems, as if you were using Autograd (or Pytorch or TF Eager).
End of explanation
@jit
def f(x):
for i in range(3):
x = 2 * x
return x
print(f(3))
Explanation: python control flow + JIT
Using control flow with jit is more complicated, and by default it has more constraints.
This works:
End of explanation
@jit
def g(x):
y = 0.
for i in range(x.shape[0]):
y = y + x[i]
return y
print(g(jnp.array([1., 2., 3.])))
Explanation: So does this:
End of explanation
@jit
def f(x):
if x < 3:
return 3. * x ** 2
else:
return -4 * x
# This will fail!
try:
f(2)
except Exception as e:
print("Exception {}".format(e))
Explanation: But this doesn't, at least by default:
End of explanation
def f(x):
if x < 3:
return 3. * x ** 2
else:
return -4 * x
f = jit(f, static_argnums=(0,))
print(f(2.))
Explanation: What gives!?
When we jit-compile a function, we usually want to compile a version of the function that works for many different argument values, so that we can cache and reuse the compiled code. That way we don't have to re-compile on each function evaluation.
For example, if we evaluate an @jit function on the array jnp.array([1., 2., 3.], jnp.float32), we might want to compile code that we can reuse to evaluate the function on jnp.array([4., 5., 6.], jnp.float32) to save on compile time.
To get a view of your Python code that is valid for many different argument values, JAX traces it on abstract values that represent sets of possible inputs. There are multiple different levels of abstraction, and different transformations use different abstraction levels.
By default, jit traces your code on the ShapedArray abstraction level, where each abstract value represents the set of all array values with a fixed shape and dtype. For example, if we trace using the abstract value ShapedArray((3,), jnp.float32), we get a view of the function that can be reused for any concrete value in the corresponding set of arrays. That means we can save on compile time.
But there's a tradeoff here: if we trace a Python function on a ShapedArray((), jnp.float32) that isn't committed to a specific concrete value, when we hit a line like if x < 3, the expression x < 3 evaluates to an abstract ShapedArray((), jnp.bool_) that represents the set {True, False}. When Python attempts to coerce that to a concrete True or False, we get an error: we don't know which branch to take, and can't continue tracing! The tradeoff is that with higher levels of abstraction we gain a more general view of the Python code (and thus save on re-compilations), but we require more constraints on the Python code to complete the trace.
The good news is that you can control this tradeoff yourself. By having jit trace on more refined abstract values, you can relax the traceability constraints. For example, using the static_argnums argument to jit, we can specify to trace on concrete values of some arguments. Here's that example function again:
End of explanation
def f(x, n):
y = 0.
for i in range(n):
y = y + x[i]
return y
f = jit(f, static_argnums=(1,))
f(jnp.array([2., 3., 4.]), 2)
Explanation: Here's another example, this time involving a loop:
End of explanation
def example_fun(length, val):
return jnp.ones((length,)) * val
# un-jit'd works fine
print(example_fun(5, 4))
bad_example_jit = jit(example_fun)
# this will fail:
try:
print(bad_example_jit(10, 4))
except Exception as e:
print("Exception {}".format(e))
# static_argnums tells JAX to recompile on changes at these argument positions:
good_example_jit = jit(example_fun, static_argnums=(0,))
# first compile
print(good_example_jit(10, 4))
# recompiles
print(good_example_jit(5, 4))
Explanation: In effect, the loop gets statically unrolled. JAX can also trace at higher levels of abstraction, like Unshaped, but that's not currently the default for any transformation.
️⚠️ functions with argument-value dependent shapes
These control-flow issues also come up in a more subtle way: numerical functions we want to jit can't specialize the shapes of internal arrays on argument values (specializing on argument shapes is ok). As a trivial example, let's make a function whose output happens to depend on the input variable length.
End of explanation
@jit
def f(x):
print(x)
y = 2 * x
print(y)
return y
f(2)
Explanation: static_argnums can be handy if length in our example rarely changes, but it would be disastrous if it changed a lot!
Lastly, if your function has global side-effects, JAX's tracer can cause weird things to happen. A common gotcha is trying to print arrays inside jit'd functions:
End of explanation
from jax import lax
operand = jnp.array([0.])
lax.cond(True, lambda x: x+1, lambda x: x-1, operand)
# --> array([1.], dtype=float32)
lax.cond(False, lambda x: x+1, lambda x: x-1, operand)
# --> array([-1.], dtype=float32)
Explanation: Structured control flow primitives
There are more options for control flow in JAX. Say you want to avoid re-compilations but still want to use control flow that's traceable, and that avoids un-rolling large loops. Then you can use these 4 structured control flow primitives:
lax.cond differentiable
lax.while_loop fwd-mode-differentiable
lax.fori_loop fwd-mode-differentiable in general; fwd and rev-mode differentiable if endpoints are static.
lax.scan differentiable
cond
python equivalent:
python
def cond(pred, true_fun, false_fun, operand):
if pred:
return true_fun(operand)
else:
return false_fun(operand)
End of explanation
init_val = 0
cond_fun = lambda x: x<10
body_fun = lambda x: x+1
lax.while_loop(cond_fun, body_fun, init_val)
# --> array(10, dtype=int32)
Explanation: while_loop
python equivalent:
def while_loop(cond_fun, body_fun, init_val):
val = init_val
while cond_fun(val):
val = body_fun(val)
return val
End of explanation
init_val = 0
start = 0
stop = 10
body_fun = lambda i,x: x+i
lax.fori_loop(start, stop, body_fun, init_val)
# --> array(45, dtype=int32)
Explanation: fori_loop
python equivalent:
def fori_loop(start, stop, body_fun, init_val):
val = init_val
for i in range(start, stop):
val = body_fun(i, val)
return val
End of explanation
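lax.scan is listed with the other structured control-flow primitives above but is not demonstrated in this notebook; a minimal sketch (the cumulative-sum carry function is an illustrative choice, not from the original) might look like:
def cumsum_step(carry, x):
    new_carry = carry + x
    return new_carry, new_carry  # (carry passed to the next step, per-step output)
final_carry, cumulative = lax.scan(cumsum_step, 0., jnp.arange(4.))
# final_carry --> 6.0, cumulative --> [0., 1., 3., 6.]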
x = random.uniform(random.PRNGKey(0), (1000,), dtype=jnp.float64)
x.dtype
Explanation: Summary
$$
\begin{array}{r|rr}
\hline
\textrm{construct}
& \textrm{jit}
& \textrm{grad} \\
\hline
\textrm{if} & ❌ & ✔ \\
\textrm{for} & ✔* & ✔ \\
\textrm{while} & ✔* & ✔ \\
\textrm{lax.cond} & ✔ & ✔ \\
\textrm{lax.while\_loop} & ✔ & \textrm{fwd} \\
\textrm{lax.fori\_loop} & ✔ & \textrm{fwd} \\
\textrm{lax.scan} & ✔ & ✔ \\
\hline
\end{array}
$$
<center>
$\ast$ = argument-<b>value</b>-independent loop condition - unrolls the loop
</center>
🔪 NaNs
Debugging NaNs
If you want to trace where NaNs are occurring in your functions or gradients, you can turn on the NaN-checker by:
setting the JAX_DEBUG_NANS=True environment variable;
adding from jax.config import config and config.update("jax_debug_nans", True) near the top of your main file;
adding from jax.config import config and config.parse_flags_with_absl() to your main file, then set the option using a command-line flag like --jax_debug_nans=True;
This will cause computations to error-out immediately on production of a NaN. Switching this option on adds a nan check to every floating point type value produced by XLA. That means values are pulled back to the host and checked as ndarrays for every primitive operation not under an @jit. For code under an @jit, the output of every @jit function is checked and if a nan is present it will re-run the function in de-optimized op-by-op mode, effectively removing one level of @jit at a time.
There could be tricky situations that arise, like nans that only occur under a @jit but don't get produced in de-optimized mode. In that case you'll see a warning message print out but your code will continue to execute.
If the nans are being produced in the backward pass of a gradient evaluation, when an exception is raised several frames up in the stack trace you will be in the backward_pass function, which is essentially a simple jaxpr interpreter that walks the sequence of primitive operations in reverse. In the example below, we started an ipython repl with the command line env JAX_DEBUG_NANS=True ipython, then ran this:
```
In [1]: import jax.numpy as jnp
In [2]: jnp.divide(0., 0.)
FloatingPointError Traceback (most recent call last)
<ipython-input-2-f2e2c413b437> in <module>()
----> 1 jnp.divide(0., 0.)
.../jax/jax/numpy/lax_numpy.pyc in divide(x1, x2)
343 return floor_divide(x1, x2)
344 else:
--> 345 return true_divide(x1, x2)
346
347
.../jax/jax/numpy/lax_numpy.pyc in true_divide(x1, x2)
332 x1, x2 = _promote_shapes(x1, x2)
333 return lax.div(lax.convert_element_type(x1, result_dtype),
--> 334 lax.convert_element_type(x2, result_dtype))
335
336
.../jax/jax/lax.pyc in div(x, y)
244 def div(x, y):
245 rElementwise division: :math:x \over y.
--> 246 return div_p.bind(x, y)
247
248 def rem(x, y):
... stack trace ...
.../jax/jax/interpreters/xla.pyc in handle_result(device_buffer)
103 py_val = device_buffer.to_py()
104 if np.any(np.isnan(py_val)):
--> 105 raise FloatingPointError("invalid value")
106 else:
107 return DeviceArray(device_buffer, *result_shape)
FloatingPointError: invalid value
```
The nan generated was caught. By running %debug, we can get a post-mortem debugger. This also works with functions under @jit, as the example below shows.
```
In [4]: from jax import jit
In [5]: @jit
...: def f(x, y):
...: a = x * y
...: b = (x + y) / (x - y)
...: c = a + 2
...: return a + b * c
...:
In [6]: x = jnp.array([2., 0.])
In [7]: y = jnp.array([3., 0.])
In [8]: f(x, y)
Invalid value encountered in the output of a jit function. Calling the de-optimized version.
FloatingPointError Traceback (most recent call last)
<ipython-input-8-811b7ddb3300> in <module>()
----> 1 f(x, y)
... stack trace ...
<ipython-input-5-619b39acbaac> in f(x, y)
2 def f(x, y):
3 a = x * y
----> 4 b = (x + y) / (x - y)
5 c = a + 2
6 return a + b * c
.../jax/jax/numpy/lax_numpy.pyc in divide(x1, x2)
343 return floor_divide(x1, x2)
344 else:
--> 345 return true_divide(x1, x2)
346
347
.../jax/jax/numpy/lax_numpy.pyc in true_divide(x1, x2)
332 x1, x2 = _promote_shapes(x1, x2)
333 return lax.div(lax.convert_element_type(x1, result_dtype),
--> 334 lax.convert_element_type(x2, result_dtype))
335
336
.../jax/jax/lax.pyc in div(x, y)
244 def div(x, y):
245 rElementwise division: :math:x \over y.
--> 246 return div_p.bind(x, y)
247
248 def rem(x, y):
... stack trace ...
```
When this code sees a nan in the output of an @jit function, it calls into the de-optimized code, so we still get a clear stack trace. And we can run a post-mortem debugger with %debug to inspect all the values to figure out the error.
⚠️ You shouldn't have the NaN-checker on if you're not debugging, as it can introduce lots of device-host round-trips and performance regressions!
⚠️ The NaN-checker doesn't work with pmap. To debug nans in pmap code, one thing to try is replacing pmap with vmap.
🔪 Double (64bit) precision
At the moment, JAX by default enforces single-precision numbers to mitigate the Numpy API's tendency to aggressively promote operands to double. This is the desired behavior for many machine-learning applications, but it may catch you by surprise!
End of explanation
import jax.numpy as jnp
from jax import random
x = random.uniform(random.PRNGKey(0), (1000,), dtype=jnp.float64)
x.dtype # --> dtype('float64')
Explanation: To use double-precision numbers, you need to set the jax_enable_x64 configuration variable at startup.
There are a few ways to do this:
You can enable 64bit mode by setting the environment variable JAX_ENABLE_X64=True.
You can manually set the jax_enable_x64 configuration flag at startup:
python
# again, this only works on startup!
from jax.config import config
config.update("jax_enable_x64", True)
You can parse command-line flags with absl.app.run(main)
python
from jax.config import config
config.config_with_absl()
If you want JAX to run absl parsing for you, i.e. you don't want to do absl.app.run(main), you can instead use
python
from jax.config import config
if __name__ == '__main__':
# calls config.config_with_absl() *and* runs absl parsing
config.parse_flags_with_absl()
Note that #2-#4 work for any of JAX's configuration options.
We can then confirm that x64 mode is enabled:
End of explanation |
1,825 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example Usage of HDFWriter
If properties of a class need to be saved in an HDF file, then the class should inherit from HDFWriterMixin as demonstrated below.
hdf_properties (list)
Step1: You can now save properties using to_hdf method.
Parameters
file_path
Step2: You can now read the HDF file using pd.HDFStore or pd.read_hdf
Step3: Saving nested class objects.
Just extend hdf_properties list to include that class object. <br>
Step4: Modified Usage
In the BasePlasma class, the properties of the object are collected differently: it does not use the hdf_properties attribute.<br>
That's why PlasmaWriterMixin (which extends HDFWriterMixin) changes how the properties of the BasePlasma class are collected, by overriding the get_properties function.<br>
Here is a quick demonstration of how the behaviour of the default get_properties function inside HDFWriterMixin can be changed, by subclassing it to create a new mixin.
Step5: A demo class , using this modified mixin. | Python Code:
from tardis.io.util import HDFWriterMixin
class ExampleClass(HDFWriterMixin):
hdf_properties = ['property1', 'property2']
hdf_name = 'mock_setup'
def __init__(self, property1, property2):
self.property1 = property1
self.property2 = property2
import numpy as np
import pandas as pd
#Instantiating Object
property1 = np.array([4.0e14, 2, 2e14, 27.5])
property2 = pd.DataFrame({'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])})
obj = ExampleClass(property1, property2)
Explanation: Example Usage of HDFWriter
If properties of a class need to be saved in an HDF file, then the class should inherit from HDFWriterMixin as demonstrated below.
hdf_properties (list) : Contains the names of all the properties that need to be saved.<br>
hdf_name (str) : Specifies the default name of the group under which the properties will be saved.
End of explanation
obj.to_hdf(file_path='test.hdf', path='test')
#obj.to_hdf(file_path='test.hdf', path='test', name='hdf')
Explanation: You can now save properties using to_hdf method.
Parameters
file_path : Path where the HDF file will be saved<br>
path : Path inside the HDF store to store the elements<br>
name : Name of the group inside HDF store, under which properties will be saved.<br>
If not specified, then it uses the value specified in the hdf_name attribute.<br>
If hdf_name is also not defined, then it converts the class name into snake case and uses that value.<br>
For example, if name is not passed as an argument and hdf_name is also not defined for ExampleClass above, then it will save the properties under the example_class group.
End of explanation
#Read HDF file
with pd.HDFStore('test.hdf','r') as data:
    print(data)
    #print(data['/test/mock_setup/property1'])
Explanation: You can now read the HDF file using pd.HDFStore or pd.read_hdf
End of explanation
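The explanation above also mentions pd.read_hdf; a minimal sketch of that alternative (the key below assumes the default mock_setup group name used earlier) could be:
# Read a single stored object back directly by its key
property2_roundtrip = pd.read_hdf('test.hdf', key='/test/mock_setup/property2')
print(property2_roundtrip)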
class NestedExampleClass(HDFWriterMixin):
hdf_properties = ['property1', 'nested_object']
def __init__(self, property1, nested_obj):
self.property1 = property1
self.nested_object = nested_obj
obj2 = NestedExampleClass(property1, obj)
obj2.to_hdf(file_path='nested_test.hdf')
#Read HDF file
with pd.HDFStore('nested_test.hdf','r') as data:
    print(data)
Explanation: Saving nested class objects.
Just extend hdf_properties list to include that class object. <br>
End of explanation
class ModifiedWriterMixin(HDFWriterMixin):
def get_properties(self):
#Change behaviour here, how properties will be collected from Class
data = {name: getattr(self, name) for name in self.outputs}
return data
Explanation: Modified Usage
In the BasePlasma class, the properties of the object are collected differently: it does not use the hdf_properties attribute.<br>
That's why PlasmaWriterMixin (which extends HDFWriterMixin) changes how the properties of the BasePlasma class are collected, by overriding the get_properties function.<br>
Here is a quick demonstration of how the behaviour of the default get_properties function inside HDFWriterMixin can be changed, by subclassing it to create a new mixin.
End of explanation
class DemoClass(ModifiedWriterMixin):
outputs = ['property1']
hdf_name = 'demo'
def __init__(self, property1):
self.property1 = property1
obj3 = DemoClass('random_string')
obj3.to_hdf('demo_class.hdf')
with pd.HDFStore('demo_class.hdf','r') as data:
    print(data)
Explanation: A demo class using this modified mixin.
End of explanation |
1,826 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Tokenize and sequence a bigger corpus of text
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Get the corpus of text
The combined dataset of reviews has been saved in a Google drive belonging to Udacity. You can download it from there.
Step3: Get the dataset
Each row in the csv file is a separate review.
The csv file has 2 columns
Step4: Get the reviews from the csv file
Step5: Tokenize the text
Create the tokenizer, specify the OOV token, tokenize the text, then inspect the word index.
Step6: Generate sequences for the reviews
Generate a sequence for each review. Set the max length to match the longest review. Add the padding zeros at the end of the review for reviews that are not as long as the longest one. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
# Import Tokenizer and pad_sequences
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
# Import numpy and pandas
import numpy as np
import pandas as pd
Explanation: Tokenize and sequence a bigger corpus of text
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l09c03_nlp_prepare_larger_text_corpus.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l09c03_nlp_prepare_larger_text_corpus.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
So far, you have written some test sentences and generated a word index and then created sequences for the sentences.
Now you will tokenize and sequence a larger body of text, specifically reviews from Amazon and Yelp.
About the dataset
You will use a dataset containing Amazon and Yelp reviews of products and restaurants. This dataset was originally extracted from Kaggle.
The dataset includes reviews, and each review is labelled as 0 (bad) or 1 (good). However, in this exercise, you will only work with the reviews, not the labels, to practice tokenizing and sequencing the text.
Example good reviews:
This is hands down the best phone I've ever had.
Four stars for the food & the guy in the blue shirt for his great vibe & still letting us in to eat !
Example bad reviews:
A lady at the table next to us found a live green caterpillar In her salad
If you plan to use this in a car forget about it.
See more reviews
Feel free to download the dataset from a drive folder belonging to Udacity and open it on your local machine to see more reviews.
End of explanation
path = tf.keras.utils.get_file('reviews.csv',
'https://drive.google.com/uc?id=13ySLC_ue6Umt9RJYSeM2t-V0kCv-4C-P')
print (path)
Explanation: Get the corpus of text
The combined dataset of reviews has been saved in a Google drive belonging to Udacity. You can download it from there.
End of explanation
# Read the csv file
dataset = pd.read_csv(path)
# Review the first few entries in the dataset
dataset.head()
Explanation: Get the dataset
Each row in the csv file is a separate review.
The csv file has 2 columns:
text (the review)
sentiment (0 or 1 indicating a bad or good review)
End of explanation
# Get the reviews from the text column
reviews = dataset['text'].tolist()
Explanation: Get the reviews from the csv file
End of explanation
tokenizer = Tokenizer(oov_token="<OOV>")
tokenizer.fit_on_texts(reviews)
word_index = tokenizer.word_index
print(len(word_index))
print(word_index)
Explanation: Tokenize the text
Create the tokenizer, specify the OOV token, tokenize the text, then inspect the word index.
End of explanation
sequences = tokenizer.texts_to_sequences(reviews)
padded_sequences = pad_sequences(sequences, padding='post')
# What is the shape of the vector containing the padded sequences?
# The shape shows the number of sequences and the length of each one.
print(padded_sequences.shape)
# What is the first review?
print (reviews[0])
# Show the sequence for the first review
print(padded_sequences[0])
# Try printing the review and padded sequence for other elements.
Explanation: Generate sequences for the reviews
Generate a sequence for each review. Set the max length to match the longest review. Add the padding zeros at the end of the review for reviews that are not as long as the longest one.
End of explanation |
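As a quick sanity check (not part of the original exercise), you can invert the word index and decode a padded sequence back into lowercased, OOV-substituted text:
# Map indices back to words; index 0 is reserved for padding
reverse_word_index = {index: word for word, index in word_index.items()}
def decode_review(sequence):
    return " ".join(reverse_word_index.get(i, "?") for i in sequence if i != 0)
print(decode_review(padded_sequences[0]))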
1,827 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 2-1
Step1: Write a query
Step2: Write a query that returns the unique names of the companies that make Gizmo products
Step3: Task #2
%sql select * from product;
Explanation: Lab 2-1:
Simple table queries
Task #1
Try writing a query that returns all products with "Touch" in their name. Show their name and price, and sort the results alphabetically by manufacturer.
End of explanation
%%sql
PRAGMA case_sensitive_like=ON;
select pname, price from product
where pname like '%Touch%'
order by manufacturer;
Explanation: Write the query:
End of explanation
%%sql
select distinct manufacturer
from product
where pname = 'Gizmo';
Explanation: Write a query that returns the unique names of the companies that make Gizmo products:
End of explanation
%sql SELECT DISTINCT category FROM product ORDER BY category;
%sql SELECT category FROM product ORDER BY pname;
%sql SELECT DISTINCT category FROM product ORDER BY pname;
Explanation: Task #2:
ORDER BY
Try running these queries, but first predict what each of them should return.
End of explanation |
1,828 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
mpl_toolkits
In addition to the core library of Matplotlib, there are a few additional utilities that are set apart from Matplotlib proper for some reason or another, but are often shipped with Matplotlib.
Basemap - shipped separately from matplotlib due to size of mapping data that are included.
mplot3d - shipped with matplotlib to provide very simple, rudimentary 3D plots in the same style as matplotlib's 2D plots.
axes_grid1 - An enhanced SubplotAxes. Very Enhanced...
mplot3d
By taking advantage of Matplotlib's z-order layering engine, mplot3d emulates 3D plotting by projecting 3D data into 2D space, layer by layer. While it isn't going to replace any of the true 3D plotting libraries anytime soon, its goal is to allow for Matplotlib users to produce 3D plots with the same amount of simplicity as 2D plots.
Step1: axes_grid1
This module was originally intended as a collection of helper classes to ease the displaying of (possibly multiple) images with Matplotlib. Some of the functionality has come to be useful for non-image plotting as well. Some classes deals with the sizing and positioning of multiple Axes relative to each other (ImageGrid, RGB Axes, and AxesDivider). The ParasiteAxes allow for the plotting of multiple datasets in the same axes, but with each their own x or y scale. Also, there is the AnchoredArtist that can be used to anchor particular artist objects in place.
One can get a sense of the neat things that can be done with this toolkit by browsing through its user guide linked above. There is one particular feature that is an absolute must-have for me -- automatic allocation of space for colorbars.
Step3: This next feature is commonly requested on the mailing lists. The problem is that most people who request it don't quite know how to describe it. We call it "Parasite Axes".
Step8: And finally, as a nice teaser of what else axes_grid1 can do... | Python Code:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D, axes3d
fig, ax = plt.subplots(1, 1, subplot_kw={'projection': '3d'})
X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
plt.show()
Explanation: mpl_toolkits
In addition to the core library of Matplotlib, there are a few additional utilities that are set apart from Matplotlib proper for some reason or another, but are often shipped with Matplotlib.
Basemap - shipped separately from matplotlib due to size of mapping data that are included.
mplot3d - shipped with matplotlib to provide very simple, rudimentary 3D plots in the same style as matplotlib's 2D plots.
axes_grid1 - An enhanced SubplotAxes. Very Enhanced...
mplot3d
By taking advantage of Matplotlib's z-order layering engine, mplot3d emulates 3D plotting by projecting 3D data into 2D space, layer by layer. While it isn't going to replace any of the true 3D plotting libraries anytime soon, its goal is to allow for Matplotlib users to produce 3D plots with the same amount of simplicity as 2D plots.
End of explanation
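A surface rendering of the same test data is equally terse; plot_surface and the cm colormap module are standard matplotlib APIs, and the styling choices here are only illustrative:
from matplotlib import cm
fig, ax = plt.subplots(subplot_kw={'projection': '3d'})
surf = ax.plot_surface(X, Y, Z, cmap=cm.viridis, linewidth=0)  # reuses X, Y, Z from the cell above
fig.colorbar(surf, shrink=0.5)
plt.show()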
from mpl_toolkits.axes_grid1 import AxesGrid
fig = plt.figure()
grid = AxesGrid(fig, 111, # similar to subplot(111)
nrows_ncols = (2, 2),
axes_pad = 0.2,
share_all=True,
label_mode = "L", # similar to "label_outer"
cbar_location = "right",
cbar_mode="single",
)
extent = (-3,4,-4,3)
for i in range(4):
im = grid[i].imshow(Z, extent=extent, interpolation="nearest")
grid.cbar_axes[0].colorbar(im)
plt.show()
Explanation: axes_grid1
This module was originally intended as a collection of helper classes to ease the displaying of (possibly multiple) images with Matplotlib. Some of the functionality has come to be useful for non-image plotting as well. Some classes deals with the sizing and positioning of multiple Axes relative to each other (ImageGrid, RGB Axes, and AxesDivider). The ParasiteAxes allow for the plotting of multiple datasets in the same axes, but with each their own x or y scale. Also, there is the AnchoredArtist that can be used to anchor particular artist objects in place.
One can get a sense of the neat things that can be done with this toolkit by browsing through its user guide linked above. There is one particular feature that is an absolute must-have for me -- automatic allocation of space for colorbars.
End of explanation
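The colorbar-space feature praised above is most often reached through make_axes_locatable, which carves a colorbar axes out of the parent axes instead of squeezing the plot; a minimal sketch (reusing Z from the earlier cell) might be:
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig, ax = plt.subplots()
im = ax.imshow(Z, interpolation="nearest")
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)  # new axes carved off the right edge of `ax`
fig.colorbar(im, cax=cax)
plt.show()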
# %load http://matplotlib.org/mpl_examples/axes_grid/demo_parasite_axes2.py
Parasite axis demo
The following code is an example of a parasite axis. It aims to show a user how
to plot multiple different values onto one single plot. Notice how in this
example, par1 and par2 are both calling twinx meaning both are tied directly to
the x-axis. From there, each of those two axis can behave separately from the
each other, meaning they can take on separate values from themselves as well as
the x-axis.
from mpl_toolkits.axes_grid1 import host_subplot
import mpl_toolkits.axisartist as AA
import matplotlib.pyplot as plt
host = host_subplot(111, axes_class=AA.Axes)
plt.subplots_adjust(right=0.75)
par1 = host.twinx()
par2 = host.twinx()
offset = 60
new_fixed_axis = par2.get_grid_helper().new_fixed_axis
par2.axis["right"] = new_fixed_axis(loc="right",
axes=par2,
offset=(offset, 0))
par2.axis["right"].toggle(all=True)
host.set_xlim(0, 2)
host.set_ylim(0, 2)
host.set_xlabel("Distance")
host.set_ylabel("Density")
par1.set_ylabel("Temperature")
par2.set_ylabel("Velocity")
p1, = host.plot([0, 1, 2], [0, 1, 2], label="Density")
p2, = par1.plot([0, 1, 2], [0, 3, 2], label="Temperature")
p3, = par2.plot([0, 1, 2], [50, 30, 15], label="Velocity")
par1.set_ylim(0, 4)
par2.set_ylim(1, 65)
host.legend()
host.axis["left"].label.set_color(p1.get_color())
par1.axis["right"].label.set_color(p2.get_color())
par2.axis["right"].label.set_color(p3.get_color())
plt.show()
Explanation: This next feature is commonly requested on the mailing lists. The problem is that most people who request it don't quite know how to describe it. We call it "Parasite Axes".
End of explanation
# %load http://matplotlib.org/mpl_toolkits/axes_grid/examples/demo_floating_axes.py
Demo of the floating axes.
This demo shows features of functions in floating_axes:
* Using scatter function and bar function with changing the
shape of the plot.
* Using GridHelperCurveLinear to rotate the plot and set the
boundary of the plot.
* Using FloatingSubplot to create a subplot using the return
value from GridHelperCurveLinear.
* Making sector plot by adding more features to GridHelperCurveLinear.
from matplotlib.transforms import Affine2D
import mpl_toolkits.axisartist.floating_axes as floating_axes
import numpy as np
import mpl_toolkits.axisartist.angle_helper as angle_helper
from matplotlib.projections import PolarAxes
from mpl_toolkits.axisartist.grid_finder import (FixedLocator, MaxNLocator,
DictFormatter)
import matplotlib.pyplot as plt
def setup_axes1(fig, rect):
A simple one.
tr = Affine2D().scale(2, 1).rotate_deg(30)
grid_helper = floating_axes.GridHelperCurveLinear(
tr, extremes=(-0.5, 3.5, 0, 4))
ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)
fig.add_subplot(ax1)
aux_ax = ax1.get_aux_axes(tr)
grid_helper.grid_finder.grid_locator1._nbins = 4
grid_helper.grid_finder.grid_locator2._nbins = 4
return ax1, aux_ax
def setup_axes2(fig, rect):
With custom locator and formatter.
Note that the extreme values are swapped.
tr = PolarAxes.PolarTransform()
pi = np.pi
angle_ticks = [(0, r"$0$"),
(.25*pi, r"$\frac{1}{4}\pi$"),
(.5*pi, r"$\frac{1}{2}\pi$")]
grid_locator1 = FixedLocator([v for v, s in angle_ticks])
tick_formatter1 = DictFormatter(dict(angle_ticks))
grid_locator2 = MaxNLocator(2)
grid_helper = floating_axes.GridHelperCurveLinear(
tr, extremes=(.5*pi, 0, 2, 1),
grid_locator1=grid_locator1,
grid_locator2=grid_locator2,
tick_formatter1=tick_formatter1,
tick_formatter2=None)
ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)
fig.add_subplot(ax1)
# create a parasite axes whose transData in RA, cz
aux_ax = ax1.get_aux_axes(tr)
aux_ax.patch = ax1.patch # for aux_ax to have a clip path as in ax
ax1.patch.zorder = 0.9 # but this has a side effect that the patch is
# drawn twice, and possibly over some other
# artists. So, we decrease the zorder a bit to
# prevent this.
return ax1, aux_ax
def setup_axes3(fig, rect):
Sometimes, things like axis_direction need to be adjusted.
# rotate a bit for better orientation
tr_rotate = Affine2D().translate(-95, 0)
# scale degree to radians
tr_scale = Affine2D().scale(np.pi/180., 1.)
tr = tr_rotate + tr_scale + PolarAxes.PolarTransform()
grid_locator1 = angle_helper.LocatorHMS(4)
tick_formatter1 = angle_helper.FormatterHMS()
grid_locator2 = MaxNLocator(3)
ra0, ra1 = 8.*15, 14.*15
cz0, cz1 = 0, 14000
grid_helper = floating_axes.GridHelperCurveLinear(
tr, extremes=(ra0, ra1, cz0, cz1),
grid_locator1=grid_locator1,
grid_locator2=grid_locator2,
tick_formatter1=tick_formatter1,
tick_formatter2=None)
ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)
fig.add_subplot(ax1)
# adjust axis
ax1.axis["left"].set_axis_direction("bottom")
ax1.axis["right"].set_axis_direction("top")
ax1.axis["bottom"].set_visible(False)
ax1.axis["top"].set_axis_direction("bottom")
ax1.axis["top"].toggle(ticklabels=True, label=True)
ax1.axis["top"].major_ticklabels.set_axis_direction("top")
ax1.axis["top"].label.set_axis_direction("top")
ax1.axis["left"].label.set_text(r"cz [km$^{-1}$]")
ax1.axis["top"].label.set_text(r"$\alpha_{1950}$")
# create a parasite axes whose transData in RA, cz
aux_ax = ax1.get_aux_axes(tr)
aux_ax.patch = ax1.patch # for aux_ax to have a clip path as in ax
ax1.patch.zorder = 0.9 # but this has a side effect that the patch is
# drawn twice, and possibly over some other
# artists. So, we decrease the zorder a bit to
# prevent this.
return ax1, aux_ax
##########################################################
fig = plt.figure(figsize=(8, 4))
fig.subplots_adjust(wspace=0.3, left=0.05, right=0.95)
ax1, aux_ax1 = setup_axes1(fig, 131)
aux_ax1.bar([0, 1, 2, 3], [3, 2, 1, 3])
ax2, aux_ax2 = setup_axes2(fig, 132)
theta = np.random.rand(10)*.5*np.pi
radius = np.random.rand(10) + 1.
aux_ax2.scatter(theta, radius)
ax3, aux_ax3 = setup_axes3(fig, 133)
theta = (8 + np.random.rand(10)*(14 - 8))*15. # in degrees
radius = np.random.rand(10)*14000.
aux_ax3.scatter(theta, radius)
plt.show()
Explanation: And finally, as a nice teaser of what else axes_grid1 can do...
End of explanation |
1,829 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decision trees and Random forest
Step1: Data preprocessing
Step2: Regression tree
Step3: Randomly defined train and test set
Step4: Now, we want to find the max_depth value that minimizes the out-of-sample error using cross-validation.
Step5: Plotting the tree
Step6: Random Forest
Step7: The random forest model has a lot of parameters we can optimize to get a better fit in our model.
- n_estimators = number of trees in the forest
- max_depth = max number of levels in each decision tree
- min_samples_split = min number of data points placed in a node before the node is split
- min_samples_leaf = min number of data points allowed in a leaf node
Step8: Even though with the random forest model we lose much of the interpretability that a single decision tree gave us, we can still get the importance of each feature
Step9: GBM
Some information about GBM's parameters
Step10: XGBOOST
Some information on XGBoost
Step11: Classification problem
To see how we can apply decision trees to a classification problem, we are going to use the following data to build models whose objective is to predict whether a person is diabetic.
Step12: Optimize max_depth in classification problem
Step13: Random forest for classification problems | Python Code:
import pandas as pd
import numpy as np
import graphviz
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error as mse
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from tqdm import tqdm
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestRegressor
from sklearn import tree
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier, XGBRegressor
df = pd.read_csv('co_properties.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
Explanation: Decision trees and Random forest
End of explanation
def categoricas(df,lista):
dummies= pd.get_dummies(df[lista])
df=df.drop(columns=lista)
data = pd.concat([df, dummies], axis=1)
return data
df=df[df['operation_type']=='Venta']
df=df[df['property_type']=='Casa']
df=df[df['currency']=='COP']
df=df[df['l3']=='Bogotá D.C']
df=df.drop(columns=['l6','id','ad_type','start_date','created_on','l1','l2','l3','end_date','title','price_period','title','description','property_type','operation_type','currency'])
df=df.dropna(subset=['l4','l5','price'])
df=categoricas(df,['l4','l5'])
variables=['lat', 'lon', 'rooms', 'bedrooms', 'bathrooms', 'surface_total',
'surface_covered']
for i in variables:
df.loc[df[i].isnull()==True,i+'null']=1
df.loc[df[i].isnull()==False,i+'null']=0
df.loc[df[i].isnull()==True,i]=-1
Explanation: Data preprocessing
End of explanation
data=df
df.columns
Explanation: Regression tree
End of explanation
x_train, x_test, y_train, y_test = train_test_split(data.drop(columns=['price']),data['price'], test_size=0.30,
random_state=200,
shuffle=True)
tree2=DecisionTreeRegressor().fit(x_train,y_train)
mse(y_test, tree2.predict(x_test))
Explanation: Randomly defined train and test set
End of explanation
model = DecisionTreeRegressor()
gs = GridSearchCV(model,
param_grid = {'max_depth': range(1, 30)},
cv=10,
n_jobs=10,
scoring='neg_mean_squared_error')
cv_tree1=gs.fit(x_train, y_train)
gs.best_estimator_
mse(y_test, cv_tree1.predict(x_test))
Explanation: Now, we want to find the max_depth value that minimizes the out-of-sample error using cross-validation.
End of explanation
from sklearn import tree
model=DecisionTreeRegressor(max_depth=3).fit(x_train,y_train)
data.columns
fn=data.columns
fig, axes = plt.subplots(nrows = 1,ncols = 1,figsize = (4,4), dpi=300)
tree.plot_tree(model,
feature_names = fn,
filled = True)
fig.savefig('regression_tree.png')
Explanation: Plotting the tree
End of explanation
rf = RandomForestRegressor(random_state = 42).fit(x_train,y_train)
mse(y_test, rf.predict(x_test))
Explanation: Random Forest
End of explanation
n_estimators = [50,100,150]
max_depth = [10,20,30,40]
min_samples_split = [2, 5, 10]
min_samples_leaf = [1, 2, 4]
param_grid = {'n_estimators': n_estimators,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
}
rf = RandomForestRegressor()
grid_search = GridSearchCV(estimator = rf, param_grid = param_grid,
cv = 5, n_jobs = 10, verbose = 2)
rf_cv=grid_search.fit(x_train, y_train)
grid_search.best_estimator_
Explanation: The random forest model has a lot of parameters we can optimize to get a better fit in our model.
- n_estimators = number of trees in the forest
- max_depth = max number of levels in each decision tree
- min_samples_split = min number of data points placed in a node before the node is split
- min_samples_leaf = min number of data points allowed in a leaf node
End of explanation
importances1 =rf_cv.best_estimator_.feature_importances_
importances_df1=pd.DataFrame({'importances':importances1,'feauture':data.drop(columns=['price']).columns})
importances_df1=importances_df1.sort_values(by=['importances'],ascending=False)
importances_df1
fig = plt.figure(figsize = (10, 5))
plt.bar(importances_df1.feauture[:15],importances_df1.importances[:15], color ='maroon',
width = 0.4)
plt.xticks( rotation='vertical')
Explanation: Even though with the random forest model we lose much of the interpretability that a single decision tree gave us, we can still get the importance of each feature
End of explanation
n_estimators = [700]
max_depth = [10,20,30,40]
min_samples_split = [2, 5, 10]
learning_rate=[0.15,0.05,0.01,0.005]
param_grid = {'n_estimators': n_estimators,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'learning_rate':learning_rate
}
gbm= GradientBoostingRegressor()
grid_search = GridSearchCV(estimator = gbm, param_grid = param_grid,
cv = 5, n_jobs = 7, verbose = 2,scoring='neg_mean_squared_error')
gbm_cv=grid_search.fit(x_train, y_train)
grid_search.best_estimator_
mse(y_test, gbm_cv.predict(x_test))
Explanation: GBM
Gradient boosting builds trees sequentially, each one fitting the residual error of the ensemble so far. The grid search above tunes its key parameters: learning_rate (the shrinkage applied to each tree's contribution), n_estimators (the number of boosting stages), and the tree-complexity controls max_depth and min_samples_split. Smaller learning rates generally need more estimators to reach the same fit.
End of explanation
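One practical way to see the learning_rate/n_estimators trade-off described above (this cell is an addition, not from the original notebook) is staged_predict, which yields predictions after every boosting stage of the fitted model:
# Track test MSE as boosting stages accumulate for the best cross-validated model
best_gbm = gbm_cv.best_estimator_
staged_mse = [mse(y_test, y_stage) for y_stage in best_gbm.staged_predict(x_test)]
plt.plot(range(1, len(staged_mse) + 1), staged_mse)
plt.xlabel('Boosting stages')
plt.ylabel('Test MSE')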
grid = {
'learning_rate': [0.01, 0.1,0.2,0.3,0.5],
'max_depth': [10,20,30,40,50],
'objective': ['reg:squarederror']
}
xgb_model = XGBRegressor()
gsearch = GridSearchCV(estimator = xgb_model,
param_grid = grid,
scoring = 'neg_mean_squared_error',
cv = 5,
n_jobs = 7,
verbose = 1)
xg_cv=gsearch.fit(x_train,y_train)
mse(y_test, xg_cv.predict(x_test))
xg_cv.best_estimator_
df2= pd.Series(xg_cv.best_estimator_.feature_importances_, list(data.drop(columns=['price']))).sort_values(ascending=False)
df2[:15].plot(kind='bar', title='Importance of Features')
plt.ylabel('Feature Importance Score')
Explanation: XGBoost
XGBoost is a regularized, highly optimized implementation of gradient boosting. The grid search above tunes learning_rate and max_depth with a squared-error objective, and the fitted model's feature_importances_ attribute is used to rank the predictors.
End of explanation
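To monitor how the boosted ensemble improves round by round, the xgboost scikit-learn wrapper accepts an eval_set in fit; this sketch refits with the tuned parameters and reads back the per-round RMSE (exact early-stopping options vary between xgboost versions, so treat the details as an assumption):
# Illustrative only: refit the tuned settings while recording test-set RMSE per boosting round
xgb_best = XGBRegressor(objective='reg:squarederror',
                        learning_rate=xg_cv.best_params_['learning_rate'],
                        max_depth=xg_cv.best_params_['max_depth'])
xgb_best.fit(x_train, y_train, eval_set=[(x_test, y_test)], verbose=False)
history = xgb_best.evals_result()  # nested dict: {'validation_0': {'rmse': [...]}}
print(history['validation_0']['rmse'][-5:])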
data2=pd.read_csv('diabetes.csv')
data2
x1_train, x1_test, y1_train, y1_test = train_test_split(data2.drop(columns=['Outcome']),data2['Outcome'], test_size=0.10,
random_state=200,
shuffle=True)
data2.isnull().sum()
clf1=DecisionTreeClassifier(max_depth=3).fit(x1_train,y1_train)
y_pred=clf1.predict(x1_test)
accuracy_score(y1_test, y_pred)
Explanation: Classification problem
To see how we can apply decision trees to a classification problem, we are going to use the following data to build models whose objective is to predict whether a person is diabetic.
End of explanation
model = DecisionTreeClassifier()
gs = GridSearchCV(model,
param_grid = {'max_depth': range(1, 30)},
cv=10,
n_jobs=10,
scoring='accuracy')
clf2=gs.fit(x1_train, y1_train)
gs.best_estimator_
y_pred1=clf2.predict(x1_test)
accuracy_score(y1_test, y_pred1)
Explanation: Optimize max_depth in classification problem
End of explanation
rf_clf=RandomForestClassifier().fit(x1_train,y1_train)
y_pred2=rf_clf.predict(x1_test)
accuracy_score(y1_test, y_pred2)
rfcl = RandomForestClassifier()
n_estimators = [50,100,150]
max_depth = [10,20,30,40]
min_samples_split = [2, 5, 10]
min_samples_leaf = [1, 2, 4]
param_grid = {'n_estimators': n_estimators,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
}
grid_search = GridSearchCV(estimator = rfcl, param_grid = param_grid,
cv = 5, n_jobs = 10, verbose = 2)
rfcl_cv=grid_search.fit(x1_train, y1_train)
y_pred3=rfcl_cv.predict(x1_test)
accuracy_score(y1_test, y_pred3)
importances =rfcl_cv.best_estimator_.feature_importances_
importances_df=pd.DataFrame({'importances':importances,'feauture':data2.drop(columns=['Outcome']).columns})
importances_df
fig = plt.figure(figsize = (10, 5))
plt.bar(importances_df.feauture,importances_df.importances, color ='maroon',
width = 0.4)
plt.xticks( rotation='vertical')
Explanation: Random forest for classification problems
End of explanation |
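Accuracy alone can hide class imbalance in a medical dataset like this one; a quick follow-up (an addition, not in the original notebook) is to inspect the confusion matrix and per-class metrics of the tuned forest:
from sklearn.metrics import confusion_matrix, classification_report
print(confusion_matrix(y1_test, y_pred3))
print(classification_report(y1_test, y_pred3))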
1,830 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: 3 topographic grids
For this tutorial we will consider three different topographic surfaces that highlight the difference between each of the flow direction algorithms.
Step2: Comparing the different methods for each grid
We can illustrate the receiver node FlowDirectionSteepest has assigned to each donor node using a plotting function in Landlab called drainage_plot. We will see many of these plots in this tutorial so let's take a moment to walk through the plot and what it contains.
The background image (white to black) shows the values of topographic elevation of the underlying surface.
The color of the dots inside of each pixel show the locations of the nodes and the type of node.
The arrows show the direction of flow, and the color shows the proportion of flow that travels along that link.
An X on top of a node indicates that node is a local sink and flows to itself.
Note that in Landlab boundary nodes, or nodes that are on the edge of a grid do not have area and do not contribute flow to nodes. These nodes can either be Fixed Gradient Nodes, Fixed Value Nodes, or Closed Nodes. With the exception of Closed Nodes the boundary nodes can receive flow.
An important step in all flow direction and accumulation is setting the proper boundary condition. Refer to the boundary condition tutorial for more information.
Grid 1
Step3: Reassuringly we can see that the flow is being sent from high elevations at the top of the grid to low elevations at the bottom of the grid. We can also see that all of the arrows are yellow, and thus all of the flow is traveling on these links.
Now let's see how the other FlowDirectors direct the flow on this simple grid. We don't need to specify the surface so long as it is the field 'topographic__elevation'.
Step4: For this ramp, the steepest slope is down a link, and not a diagonal, so FlowDirectorD8 gives the same result as FlowDirectorSteepest.
Step5: Similarly, while there is more than one node below each core node, there is only one node that is connected by a link and not a diagonal. Thus FlowDirectorMFD with the keyword diagonals set to True provides the same results as FlowDirectorSteepest and FlowDirectorD8
Step6: When we permit flow along diagonal connections between nodes and flow to all downhill nodes, we see a difference in the directing pattern on this simple ramp. The flow is partitioned between the three downhill nodes, and there is more flow being sent to along the link as compared with the diagonals (the links are a lighter color blue than the diagonals).
One issue we might have with the results from FlowDirectorMFD in this case is that the flow on the diagonals crosses. This is one of the problems with using diagonal connections between nodes.
Step7: In FlowDirectorDINF flow is partitioned to two nodes based on steepness of the eight triangular facets surrounding each node. The partitioning is based on the relation between the link and diagonal slope that form the edge of the facet and the slope of the facet itself. When one of the facet edges has the same slope as the facet, as is the case in this ramp example, all of the flow is partitioned along that edge.
Grid 2
Step8: Flow is directed down parallel to to the the Y-axis of the plane. This makes sense in the context of the FlowDirectorSteepest algorithm; it only sends flow to one node, so it an idealized geometry such as the plane in this example, it provides flow direction that is non-realistic.
As we will discuss throughout this tutorial, there are benefits and drawbacks to each FlowDirector algorithm.
Step9: FlowDirectorD8 consideres the diagonal connections between nodes. As the plane is inclined to the southwest the flow direction looks better here, though as we will see later, sometimes FlowDirectorD8 does non-realistic directing too.
Step10: As FlowDirectorMFD can send flow to all the nodes downhill it doesn't have the same problem that FlowDirectorSteepest had. Because the plane is tilted down more steeply to the south than to the east, it sends more flow on the steeper link.
Step11: When FlowDirectorMFD considers diagonals in addition to links, we see that it sends the flow to four nodes instead of three. While all of the receiver nodes are downhill from their donor nodes, we see again that using diagonals permits flow to cross itself. We also see that the most flow is routed to the south and the south east, which makes sense based on how the plane is tilted.
Step12: Here FlowDirectorDINF routes flow in two directions, to the south and southeast. The plane is steeper to from north to south than from east to west and so more flow is directed on the diagonal to the southeast.
Grid 3
Step13: Flow on this surface using FlowDirectorSteepest looks realistic, as flow is routed down into the bottom of the curved surface.
Step14: Near the bottom left of the grid, the steepest descent is on a diagonal, so using FlowDirectorD8 gives a different drainage pattern.
Step15: Permitting multiple receivers with and without diagonals give an additional two different drainage patterns.
Step16: Again we see flow paths crossing when we permit consideration of flow along the diagonals.
Step17: Finally we see yet a different drainage pattern when we use FlowDirectorDINF and flow is routed along an adjacent diagonal-link pair.
Comparison of Accumulated Area
Before concluding, let's examine the accumulated drainage area using each of the FlowDirector methods and the third grid. For an introduction to creating and running a FlowAccumulator see the tutorial "Introduction to Flow Accumulators".
Often we do flow routing and accumulation because we want to use the accumulated area as a proxy for the water discharge. So the details of how the flow is routed are important because they influence how the drainage area pattern evolves.
Lets begain with FlowDirectorSteepest.
Step18: Here we see that flow has accumulated into one channel in the bottom of the curved surface.
Step19: When diagonals are considered, as in FlowDirectorD8, the drainage patter looks very diferent. Instead of one channel we have two smaller channels.
Step20: Flow is distributed much more when we use FlowDirectorMFD.
Step21: Adding diagonals to FlowDirectorMFD gives a channel somewhat similar to the one created by FlowDirectorSteepest but much more distributed. | Python Code:
%matplotlib inline
# import plotting tools
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib as mpl
# import numpy
import numpy as np
# import necessary landlab components
from landlab import RasterModelGrid, HexModelGrid
from landlab.components import FlowAccumulator
from landlab.components import (
FlowDirectorD8,
FlowDirectorDINF,
FlowDirectorMFD,
FlowDirectorSteepest,
)
# import landlab plotting functionality
from landlab.plot.drainage_plot import drainage_plot
# create a plotting routine to make a 3d plot of our surface.
def surf_plot(mg, surface="topographic__elevation", title="Surface plot of topography"):
fig = plt.figure()
ax = fig.gca(projection="3d")
# Plot the surface.
Z = mg.at_node[surface].reshape(mg.shape)
color = cm.gray((Z - Z.min()) / (Z.max() - Z.min()))
surf = ax.plot_surface(
mg.x_of_node.reshape(mg.shape),
mg.y_of_node.reshape(mg.shape),
Z,
rstride=1,
cstride=1,
facecolors=color,
linewidth=0.0,
antialiased=False,
)
ax.view_init(elev=35, azim=-120)
ax.set_xlabel("X axis")
ax.set_ylabel("Y axis")
ax.set_zlabel("Elevation")
plt.title(title)
plt.show()
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Comparison of FlowDirectors
Introduction
Landlab's topographic flow-routing capability directs flow and accumulates it using two types of components:
FlowDirectors use the topography to determine how flow moves between adjacent nodes. For every node in the grid it determines the node(s) to receive flow, and the proportion of flow to send from one node to its receiver(s).
The FlowAccumulator uses the direction and proportion of flow moving between each node and (optionally) water runoff to calculate drainage area and discharge.
The FlowDirectors are method-specific. Presently landlab supports four different methods for determining flow direction:
FlowDirectorSteepest Flow is routed to only one node. The algorithm considers the link slopes leaving from each node and chooses the steepest downhill link to route flow along. In the case of a raster grid, only the links are considered (Landlab differentiates between links, which never cross and are located at North, South, East, and West on a raster grid, and diagonals which cross and are located at North East, North West, South East, and South West). For raster grids, this method is also known as D4 flow routing. In the case of irregular grids, all links originating from a node are consideded.
FlowDirectorD8 (raster only) Flow is only routed to one node but diagonals are also considered.
FlowDirectorMFD Flow is directed to all nodes that are located downhill of the source node. In the case of a raster grid, diagonals can be included using the keyword diagonals=True. Flow is partitioned between receiver nodes based on the relative slope along the links leading to the receiver nodes. The default method for partitioning is based on the sum of receiver slopes (partition_method='slope'). Partitioning can also be done on the basis of the square root of slope, which gives the result of a steady kinematic wave(partition_method='square_root_of_slope').
FlowDirectorDINF (raster only) Flow is directed to two cells based on the slope of the triangular facets that can be defined between a node and its neighbors. The steepest downhill facet is chosen and then flow is partitioned between the receiver nodes at the bottom of that facet based on the relative slopes along the facet-bounding links. (The method, known as "D-infinity", is described by Tarboton (1997, Water Resources Research, 33(2), 309-319)).
In this tutorial we will go over more detailed examples that contrast the differences between each flow-direction algorithm. For information about how to initialize and run a FlowDirector or the FlowAccumulator, refer to the other tutorials in this section.
First, we import the necessary python modules and make a small plotting routine.
End of explanation
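Since the FlowAccumulator is introduced above alongside the FlowDirectors, here is a minimal sketch of how the two are typically combined (the string accepted by the flow_director keyword and the output field name follow standard Landlab usage, but treat the exact spelling as an assumption and see the FlowAccumulator tutorial for details):
# Direct flow and accumulate drainage area in a single component
mg_demo = RasterModelGrid((10, 10))
_ = mg_demo.add_field("topographic__elevation", mg_demo.y_of_node, at="node")
fa_demo = FlowAccumulator(mg_demo, flow_director="FlowDirectorSteepest")
fa_demo.run_one_step()
# accumulated area is now available in mg_demo.at_node["drainage_area"]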
mg1 = RasterModelGrid((10, 10))
_ = mg1.add_field("topographic__elevation", mg1.y_of_node, at="node")
surf_plot(mg1, title="Grid 1: A basic ramp")
mg2 = RasterModelGrid((10, 10))
_ = mg2.add_field(
"topographic__elevation", mg2.x_of_node + 2.0 * mg2.y_of_node, at="node"
)
surf_plot(mg2, title="Grid 2: A ramp inclined in X and in Y")
mg3 = RasterModelGrid((10, 10))
_ = mg3.add_field(
"topographic__elevation",
mg3.x_of_node ** 2 + mg3.y_of_node ** 2 + mg3.y_of_node,
at="node",
)
surf_plot(mg3, title="Grid 3: A more complicated surface")
Explanation: 3 topographic grids
For this tutorial we will consider three different topographic surfaces that highlight the difference between each of the flow direction algorithms.
End of explanation
mg1a = RasterModelGrid((10, 10))
_ = mg1a.add_field("topographic__elevation", mg1a.y_of_node, at="node")
fd1a = FlowDirectorSteepest(mg1a, "topographic__elevation")
fd1a.run_one_step()
plt.figure()
drainage_plot(mg1a, title="Basic Ramp using FlowDirectorSteepest")
Explanation: Comparing the different methods for each grid
We can illustrate the receiver node FlowDirectorSteepest has assigned to each donor node using a plotting function in Landlab called drainage_plot. We will see many of these plots in this tutorial, so let's take a moment to walk through the plot and what it contains.
The background image (white to black) shows the values of topographic elevation of the underlying surface.
The color of the dots inside of each pixel shows the location of the nodes and the type of node.
The arrows show the direction of flow, and the color shows the proportion of flow that travels along that link.
An X on top of a node indicates that node is a local sink and flows to itself.
Note that in Landlab, boundary nodes (nodes on the edge of a grid) do not have area and do not contribute flow to other nodes. These nodes can be Fixed Gradient Nodes, Fixed Value Nodes, or Closed Nodes. With the exception of Closed Nodes, boundary nodes can receive flow.
An important step in all flow direction and accumulation is setting the proper boundary conditions. Refer to the boundary condition tutorial for more information; a minimal sketch of closing grid edges follows this cell.
Grid 1: Basic Ramp
As with the Introduction to Flow Director tutorial, let's start with the basic ramp.
End of explanation
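A minimal sketch of the boundary-condition step mentioned above, assuming the standard RasterModelGrid call for closing grid edges; see the boundary condition tutorial for the full set of options.
mg_bc = RasterModelGrid((10, 10))
_ = mg_bc.add_field("topographic__elevation", mg_bc.y_of_node, at="node")
# close the right, top and left edges so flow can only leave through the bottom edge
mg_bc.set_closed_boundaries_at_grid_edges(True, True, True, False)
fd_bc = FlowDirectorSteepest(mg_bc, "topographic__elevation")
fd_bc.run_one_step()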
mg1b = RasterModelGrid((10, 10))
_ = mg1b.add_field("topographic__elevation", mg1b.y_of_node, at="node")
fd1b = FlowDirectorD8(mg1b)
fd1b.run_one_step()
plt.figure()
drainage_plot(mg1b, title="Basic Ramp using FlowDirectorD8")
Explanation: Reassuringly we can see that the flow is being sent from high elevations at the top of the grid to low elevations at the bottom of the grid. We can also see that all of the arrows are yellow, and thus all of the flow is traveling on these links.
Now let's see how the other FlowDirectors direct the flow on this simple grid. We don't need to specify the surface so long as it is the field 'topographic__elevation'.
End of explanation
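Before moving on, a quick way to inspect the director's raw output on the grid from the cell above; the field names below are assumed to be the standard landlab ones and can be checked with mg1a.at_node.keys().
receivers = mg1a.at_node["flow__receiver_node"]         # id of the node each node drains to
steepest = mg1a.at_node["topographic__steepest_slope"]  # slope toward that receiver
print(receivers.reshape(mg1a.shape))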
mg1c = RasterModelGrid((10, 10))
_ = mg1c.add_field("topographic__elevation", mg1c.y_of_node, at="node")
fd1c = FlowDirectorMFD(mg1c, diagonals=False) # diagonals=False is the default option
fd1c.run_one_step()
plt.figure()
drainage_plot(mg1c, title="Basic Ramp using FlowDirectorMFD without diagonals")
Explanation: For this ramp, the steepest slope is down a link, and not a diagonal, so FlowDirectorD8 gives the same result as FlowDirectorSteepest.
End of explanation
mg1d = RasterModelGrid((10, 10))
_ = mg1d.add_field("topographic__elevation", mg1d.y_of_node, at="node")
fd1d = FlowDirectorMFD(mg1d, diagonals=True)
fd1d.run_one_step()
plt.figure()
drainage_plot(mg1d, title="Basic Ramp using FlowDirectorMFD with diagonals")
Explanation: Similarly, while there is more than one node below each core node, there is only one downhill node that is connected by a link rather than a diagonal. Thus FlowDirectorMFD with the keyword diagonals left at its default of False provides the same results as FlowDirectorSteepest and FlowDirectorD8.
End of explanation
mg1e = RasterModelGrid((10, 10))
_ = mg1e.add_field("topographic__elevation", mg1e.y_of_node, at="node")
fd1e = FlowDirectorDINF(mg1e)
fd1e.run_one_step()
plt.figure()
drainage_plot(mg1e, title="Basic Ramp using FlowDirectorDINF")
Explanation: When we permit flow along diagonal connections between nodes and flow to all downhill nodes, we see a difference in the directing pattern on this simple ramp. The flow is partitioned between the three downhill nodes, and more flow is sent along the link than along the diagonals (the link arrows are a lighter blue than the diagonal arrows).
One issue we might have with the results from FlowDirectorMFD in this case is that the flow on the diagonals crosses. This is one of the problems with using diagonal connections between nodes.
End of explanation
mg2a = RasterModelGrid((10, 10))
_ = mg2a.add_field(
"topographic__elevation", mg2a.x_of_node + 2.0 * mg2a.y_of_node, at="node"
)
fd2a = FlowDirectorSteepest(mg2a, "topographic__elevation")
fd2a.run_one_step()
plt.figure()
drainage_plot(mg2a, title="Grid 2 using FlowDirectorSteepest")
Explanation: In FlowDirectorDINF, flow is partitioned between two nodes based on the steepness of the eight triangular facets surrounding each node. The partitioning is based on the relation between the slopes of the link and diagonal that bound each facet and the slope of the facet itself. When one of the facet edges has the same slope as the facet, as is the case in this ramp example, all of the flow is partitioned along that edge.
Grid 2: Inclined plane in two dimensions
Next let's look at all the flow directors, but with the inclined plane. Recall that this plane is tilted in both the X and Y axes, and is tilted more steeply in the Y direction.
End of explanation
mg2b = RasterModelGrid((10, 10))
_ = mg2b.add_field(
"topographic__elevation", mg2b.x_of_node + 2.0 * mg2b.y_of_node, at="node"
)
fd2b = FlowDirectorD8(mg2b)
fd2b.run_one_step()
plt.figure()
drainage_plot(mg2b, title="Grid 2 using FlowDirectorD8")
Explanation: Flow is directed down parallel to the Y-axis of the plane. This makes sense in the context of the FlowDirectorSteepest algorithm; it only sends flow to one node, so in an idealized geometry such as the plane in this example it produces a flow direction that is non-realistic.
As we will discuss throughout this tutorial, there are benefits and drawbacks to each FlowDirector algorithm.
End of explanation
mg2c = RasterModelGrid((10, 10))
_ = mg2c.add_field(
"topographic__elevation", mg2c.x_of_node + 2.0 * mg2c.y_of_node, at="node"
)
fd2c = FlowDirectorMFD(mg2c, diagonals=False) # diagonals=False is the default option
fd2c.run_one_step()
plt.figure()
drainage_plot(mg2c, title="Grid 2 using FlowDirectorMFD without diagonals")
Explanation: FlowDirectorD8 considers the diagonal connections between nodes. As the plane is inclined down to the southwest, the flow direction looks better here, though as we will see later, sometimes FlowDirectorD8 also produces non-realistic directing.
End of explanation
mg2d = RasterModelGrid((10, 10))
_ = mg2d.add_field(
"topographic__elevation", mg2d.x_of_node + 2.0 * mg2d.y_of_node, at="node"
)
fd2d = FlowDirectorMFD(mg2d, diagonals=True)
fd2d.run_one_step()
plt.figure()
drainage_plot(mg2d, title="Grid 2 using FlowDirectorMFD with diagonals")
Explanation: As FlowDirectorMFD can send flow to all the nodes downhill it doesn't have the same problem that FlowDirectorSteepest had. Because the plane is tilted down more steeply to the south than to the east, it sends more flow on the steeper link.
End of explanation
mg2e = RasterModelGrid((10, 10))
_ = mg2e.add_field(
"topographic__elevation", mg2e.x_of_node + 2.0 * mg2e.y_of_node, at="node"
)
fd2e = FlowDirectorDINF(mg2e)
fd2e.run_one_step()
plt.figure()
drainage_plot(mg2e, title="Basic Ramp using FlowDirectorDINF")
Explanation: When FlowDirectorMFD considers diagonals in addition to links, we see that it sends the flow to four nodes instead of three. While all of the receiver nodes are downhill from their donor nodes, we see again that using diagonals permits flow to cross itself. We also see that most of the flow is routed to the south and the southeast, which makes sense based on how the plane is tilted.
End of explanation
mg3a = RasterModelGrid((10, 10))
_ = mg3a.add_field(
"topographic__elevation",
mg3a.x_of_node ** 2 + mg3a.y_of_node ** 2 + mg3a.y_of_node,
at="node",
)
fd3a = FlowDirectorSteepest(mg3a, "topographic__elevation")
fd3a.run_one_step()
plt.figure()
drainage_plot(mg3a, title="Grid 3 using FlowDirectorSteepest")
Explanation: Here FlowDirectorDINF routes flow in two directions, to the south and southeast. The plane is steeper from north to south than from east to west, and so more flow is directed on the diagonal to the southeast.
Grid 3: Curved surface
Finally, let's consider our curved surface.
End of explanation
mg3b = RasterModelGrid((10, 10))
_ = mg3b.add_field(
"topographic__elevation",
mg3b.x_of_node ** 2 + mg3b.y_of_node ** 2 + mg3b.y_of_node,
at="node",
)
fd3b = FlowDirectorD8(mg3b)
fd3b.run_one_step()
plt.figure()
drainage_plot(mg3b, title="Grid 3 using FlowDirectorD8")
Explanation: Flow on this surface using FlowDirectorSteepest looks realistic, as flow is routed down into the bottom of the curved surface.
End of explanation
mg3c = RasterModelGrid((10, 10))
_ = mg3c.add_field(
"topographic__elevation",
mg3c.x_of_node ** 2 + mg3c.y_of_node ** 2 + mg3c.y_of_node,
at="node",
)
fd3c = FlowDirectorMFD(mg3c, diagonals=False) # diagonals=False is the default option
fd3c.run_one_step()
plt.figure()
drainage_plot(mg3c, title="Grid 3 using FlowDirectorMFD without diagonals")
Explanation: Near the bottom left of the grid, the steepest descent is on a diagonal, so using FlowDirectorD8 gives a different drainage pattern.
End of explanation
mg3d = RasterModelGrid((10, 10))
_ = mg3d.add_field(
"topographic__elevation",
mg3d.x_of_node ** 2 + mg3d.y_of_node ** 2 + mg3d.y_of_node,
at="node",
)
fd3d = FlowDirectorMFD(mg3d, diagonals=True)
fd3d.run_one_step()
plt.figure()
drainage_plot(mg3d, title="Grid 3 using FlowDirectorMFD with diagonals")
Explanation: Permitting multiple receivers, with and without diagonals, gives two additional, different drainage patterns.
End of explanation
mg3e = RasterModelGrid((10, 10))
_ = mg3e.add_field(
"topographic__elevation",
mg3e.x_of_node ** 2 + mg3e.y_of_node ** 2 + mg3e.y_of_node,
at="node",
)
fd3e = FlowDirectorDINF(mg3e)
fd3e.run_one_step()
plt.figure()
drainage_plot(mg3e, title="Grid 3 using FlowDirectorDINF")
Explanation: Again we see flow paths crossing when we permit consideration of flow along the diagonals.
End of explanation
from landlab.components import FlowAccumulator
mg3 = RasterModelGrid((10, 10))
_ = mg3.add_field(
"topographic__elevation",
mg3.x_of_node ** 2 + mg3.y_of_node ** 2 + mg3.y_of_node,
at="node",
)
fa = FlowAccumulator(mg3, "topographic__elevation", flow_director="Steepest")
fa.run_one_step()
plt.figure()
drainage_plot(
mg3, "drainage_area", title="Flow Accumulation using FlowDirectorSteepest"
)
Explanation: Finally we see yet a different drainage pattern when we use FlowDirectorDINF and flow is routed along an adjacent diagonal-link pair.
Comparison of Accumulated Area
Before concluding, let's examine the accumulated drainage area using each of the FlowDirector methods and the third grid. For an introduction to creating and running a FlowAccumulator see the tutorial "Introduction to Flow Accumulators".
Often we do flow routing and accumulation because we want to use the accumulated area as a proxy for the water discharge. So the details of how the flow is routed are important because they influence how the drainage area pattern evolves. (A short sketch of passing a runoff rate to the FlowAccumulator to obtain a discharge field follows this cell.)
Let's begin with FlowDirectorSteepest.
End of explanation
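A minimal sketch of the optional runoff input mentioned earlier; the runoff_rate keyword and the 'surface_water__discharge' field name are assumptions to verify against the FlowAccumulator tutorial.
mg_q = RasterModelGrid((10, 10))
_ = mg_q.add_field(
    "topographic__elevation",
    mg_q.x_of_node ** 2 + mg_q.y_of_node ** 2 + mg_q.y_of_node,
    at="node",
)
fa_q = FlowAccumulator(
    mg_q, "topographic__elevation", flow_director="Steepest", runoff_rate=1.0
)
fa_q.run_one_step()
# with a uniform runoff rate, discharge is simply drainage area times runoff
area = mg_q.at_node["drainage_area"]
discharge = mg_q.at_node["surface_water__discharge"]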
fa = FlowAccumulator(mg3, "topographic__elevation", flow_director="D8")
fa.run_one_step()
plt.figure()
drainage_plot(mg3, "drainage_area", title="Flow Accumulation using FlowDirectorD8")
Explanation: Here we see that flow has accumulated into one channel in the bottom of the curved surface.
End of explanation
mg3 = RasterModelGrid((10, 10))
_ = mg3.add_field(
"topographic__elevation",
mg3.x_of_node ** 2 + mg3.y_of_node ** 2 + mg3.y_of_node,
at="node",
)
fa = FlowAccumulator(mg3, "topographic__elevation", flow_director="MFD")
fa.run_one_step()
plt.figure()
drainage_plot(
mg3,
"drainage_area",
title="Flow Accumulation using FlowDirectorMFD without diagonals",
)
Explanation: When diagonals are considered, as in FlowDirectorD8, the drainage pattern looks very different. Instead of one channel we have two smaller channels.
End of explanation
mg3 = RasterModelGrid((10, 10))
_ = mg3.add_field(
"topographic__elevation",
mg3.x_of_node ** 2 + mg3.y_of_node ** 2 + mg3.y_of_node,
at="node",
)
fa = FlowAccumulator(mg3, "topographic__elevation", flow_director="MFD", diagonals=True)
fa.run_one_step()
plt.figure()
drainage_plot(
mg3, "drainage_area", title="Flow Accumulation using FlowDirectorMFD with diagonals"
)
Explanation: Flow is distributed much more when we use FlowDirectorMFD.
End of explanation
mg3 = RasterModelGrid((10, 10))
_ = mg3.add_field(
"topographic__elevation",
mg3.x_of_node ** 2 + mg3.y_of_node ** 2 + mg3.y_of_node,
at="node",
)
fa = FlowAccumulator(mg3, "topographic__elevation", flow_director="DINF")
fa.run_one_step()
plt.figure()
drainage_plot(mg3, "drainage_area", title="Flow Accumulation using FlowDirectorDINF")
Explanation: Adding diagonals to FlowDirectorMFD gives a channel somewhat similar to the one created by FlowDirectorSteepest but much more distributed.
End of explanation |
1,831 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Hit Processor</h1>
<hr style="border
Step1: <span>
Let's parse
</span>
Step2: <span>
Parse a Hit with Plain Processor
</span>
Step3: <span>
Compute diffs
Step4: <span>
Parse a Hit with Matrix Processor
</span>
Step5: <span>
Compute diffs | Python Code:
import sys
#sys.path.insert(0, '/home/asanso/workspace/att-spyder/att/src/python/')
sys.path.insert(0, 'i:/dev/workspaces/python/att-workspace/att/src/python/')
Explanation: <h1>Hit Processor</h1>
<hr style="border: 1px solid #000;">
<span>
<h2>ATT raw Hit processor.</h2>
</span>
<br>
<span>
This notebook shows how the hit processor works.<br>
The Hit processor's aim is to parse the raw hit readings from the serial port.
</span>
<span>
Set modules path first:
</span>
End of explanation
from hit.process.processor import ATTMatrixHitProcessor
from hit.process.processor import ATTPlainHitProcessor
plainProcessor = ATTPlainHitProcessor()
matProcessor = ATTMatrixHitProcessor()
Explanation: <span>
Let's parse
</span>
End of explanation
plainHit = plainProcessor.parse_hit("hit: {0:25 1549:4 2757:4 1392:4 2264:7 1764:7 1942:5 2984:5 r}")
print plainHit
Explanation: <span>
Parse a Hit with Plain Processor
</span>
End of explanation
plainDiffs = plainProcessor.hit_diffs(plainHit["sensor_timings"])
print plainDiffs
Explanation: <span>
Compute diffs:
</span>
End of explanation
matHit = matProcessor.parse_hit("hit: {0:25 1549:4 2757:4 1392:4 2264:7 1764:7 1942:5 2984:5 r}")
print matHit
Explanation: <span>
Parse a Hit with Matrix Processor
</span>
End of explanation
matDiffs = matProcessor.hit_diffs((matHit["sensor_timings"]))
print matDiffs
matDiffs
Explanation: <span>
Compute diffs:
</span>
End of explanation |
1,832 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Brief look at Cartopy
Cartopy is a Python package that provides easy creation of maps with matplotlib.
Cartopy vs Basemap
Cartopy is better integrated with matplotlib and in a more active development state
Proper handling of datelines in cartopy - one of the bugs in basemap (example
Step1: Then let's import cartopy itself
Step2: In addition, we import cartopy's coordinate reference system submodule
Step3: Creating GeoAxes
Cartopy-matplotlib interface is set up via the projection keyword when constructing Axes / SubAxes
The resulting instance (cartopy.mpl.geoaxes.GeoAxesSubplot) has new methods specific to drawing cartographic data, e.g. coastlines
Step4: Here we are using a Plate Carrée projection, which is one of equidistant cylindrical projections.
A full list of Cartopy projections is available at http
Step5: Notice that unless we specify a map extent (we did so via the set_global method in this case) the map will zoom into the range of the plotted data.
Decorating the map
We can add grid lines and tick labels to the map using the gridlines() method
Step6: Unfortunately, gridline labels work only in PlateCarree and Mercator projections.
We can control the specific tick values by using matplotlib's locator object, and the formatting can be controlled with matplotlib formatters
Step7: Plotting layers directly from Web Map Service (WMS) and Web Map Tile Service (WMTS)
Step8: Exercise
Step9: Idea 1
Use data in a rotated pole coordinate system
Create a proper CRS, plot it
Plot the data on a different map with a different projection (e.g., Plate Carree) | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Brief look at Cartopy
Cartopy is a Python package that provides easy creation of maps with matplotlib.
Cartopy vs Basemap
Cartopy is better integrated with matplotlib and in a more active development state
Proper handling of datelines in cartopy - one of the bugs in basemap (example: Challenger circumnavigation)
Cartopy offers powerful vector data handling by integrating shapefile reading with Shapely capabilities
Basemap: gridline labels for any projection; limited to a few in cartopy (workaround for Lambert Conic)
Basemap has a map scale bar feature (can be buggy); still not implemented in cartopy, but there are some messy workarounds
As for the standard matplotlib plots, we first need to import pyplot submodule and make the graphical output appear in the notebook:
In order to create a map with cartopy and matplotlib, we typically need to import pyplot from matplotlib and cartopy's crs (coordinate reference system) submodule. These are typically imported as follows:
End of explanation
import cartopy
Explanation: Then let's import cartopy itself:
End of explanation
import cartopy.crs as ccrs
Explanation: In addition, we import cartopy's coordinate reference system submodule:
End of explanation
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
print('axes type:', type(ax))
Explanation: Creating GeoAxes
Cartopy-matplotlib interface is set up via the projection keyword when constructing Axes / SubAxes
The resulting instance (cartopy.mpl.geoaxes.GeoAxesSubplot) has new methods specific to drawing cartographic data, e.g. coastlines:
End of explanation
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
ax.set_global()
plt.plot([-100, 50], [25, 25], linewidth=4, color='r', transform=ccrs.PlateCarree())
plt.plot([-100, 50], [25, 25], linewidth=4, color='b', transform=ccrs.Geodetic())
Explanation: Here we are using a Plate Carrée projection, which is one of equidistant cylindrical projections.
A full list of Cartopy projections is available at http://scitools.org.uk/cartopy/docs/latest/crs/projections.html.
Putting georeferenced data on a map
Use the standard matplotlib plotting routines with an additional transform keyword.
The value of the transform argument should be the cartopy coordinate reference system of the data being plotted
End of explanation
ax = plt.axes(projection=ccrs.Mercator())
ax.coastlines()
gl = ax.gridlines(draw_labels=True)
Explanation: Notice that unless we specify a map extent (we did so via the set_global method in this case) the map will zoom into the range of the plotted data.
Decorating the map
We can add grid lines and tick labels to the map using the gridlines() method:
End of explanation
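A small aside (not in the original notebook): instead of set_global or the automatic zoom, an explicit extent can be set on a GeoAxes with set_extent; the list is [lon_min, lon_max, lat_min, lat_max] interpreted in the given crs.
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
ax.set_extent([-20, 40, 30, 70], crs=ccrs.PlateCarree())  # roughly Europe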
import matplotlib.ticker as mticker
from cartopy.mpl.gridliner import LATITUDE_FORMATTER
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
gl = ax.gridlines(draw_labels=True)
gl.xlocator = mticker.FixedLocator([-180, -45, 0, 45, 180])
gl.yformatter = LATITUDE_FORMATTER
fig = plt.figure(figsize=(9, 6))
ax = fig.add_subplot(111, projection=ccrs.PlateCarree())
ax.set_global()
lons = -75, 77.2, 151.2, -75
lats = 43, 28.6, -33.9, 43
ax.plot(lons, lats,
color='green', linewidth=2, marker='o', ms=10,
transform=ccrs.Geodetic())
# feature = cartopy.feature.LAND
feature = cartopy.feature.NaturalEarthFeature(name='land', category='physical',
scale='110m',
edgecolor='red', facecolor='black')
ax.add_feature(feature)
_ = ax.add_feature(cartopy.feature.LAKES, facecolor='b')
states = cartopy.feature.NaturalEarthFeature(category='cultural', scale='50m', facecolor='none',
name='admin_1_states_provinces_lines')
_ = ax.add_feature(states, edgecolor='gray')
Explanation: Unfortunately, gridline labels work only in PlateCarree and Mercator projections.
We can control the specific tick values by using matplotlib's locator object, and the formatting can be controlled with matplotlib formatters:
End of explanation
url = 'http://map1c.vis.earthdata.nasa.gov/wmts-geo/wmts.cgi'
ax = plt.axes(projection=ccrs.PlateCarree())
ax.add_wmts(url, 'VIIRS_CityLights_2012')
Explanation: Plotting layers directly from Web Map Service (WMS) and Web Map Tile Service (WMTS)
End of explanation
import numpy as np
x = np.linspace(310, 390, 25)
y = np.linspace(-24, 25, 35)
x2d, y2d = np.meshgrid(x, y)
data = np.cos(np.deg2rad(y2d) * 4) + np.sin(np.deg2rad(x2d) * 4)
Explanation: Exercise
End of explanation
rot_crs = ccrs.RotatedPole(177.5, 37.5)
ax = plt.axes(projection=rot_crs)
ax.coastlines()
fig = plt.figure()
ax = fig.add_subplot(111, projection=ccrs.PlateCarree())
ax.contourf(x2d, y2d, data, transform=rot_crs)
ax.coastlines()
Explanation: Idea 1
Use data in a rotated pole coordinate system
Create a proper CRS, plot it
Plot the data on a different map with a different projection (e.g., Plate Carree)
End of explanation |
1,833 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing the 2016 General Election Polls
Step1: Hover on the map to visualize the poll data for that state.
Step2: Visualizing the County Results of the 2008 Elections
Step3: Hover on the map to visualize the voting percentage for each candidate in that county | Python Code:
import pandas as pd
import numpy as np
from __future__ import print_function
from ipywidgets import VBox, HBox
import os
codes = pd.read_csv(os.path.abspath('../data_files/state_codes.csv'))
try:
from pollster import Pollster
except ImportError:
print('Pollster not found. Installing Pollster..')
import pip
try:
pip.main(['install', 'pollster==0.1.6'])
except:
print("The pip installation failed. Please manually install Pollster and re-run this notebook.")
def get_candidate_data(question):
clinton, trump, undecided, other = 0., 0., 0., 0.
for candidate in question['subpopulations'][0]['responses']:
if candidate['last_name'] == 'Clinton':
clinton = candidate['value']
elif candidate['last_name'] == 'Trump':
trump = candidate['value']
elif candidate['choice'] == 'Undecided':
undecided = candidate['value']
else:
other = candidate['value']
return clinton, trump, other, undecided
def get_row(question, partisan='Nonpartisan', end_date='2016-06-21'):
# if question['topic'] != '2016-president':
if ('2016' in question['topic']) and ('Presidential' in question['topic']):
hillary, donald, other, undecided = get_candidate_data(question)
return [{'Name': question['name'], 'Partisan': partisan, 'State': question['state'],
'Date': np.datetime64(end_date), 'Trump': donald, 'Clinton': hillary, 'Other': other,
'Undecided': undecided}]
else:
return
def analyze_polls(polls):
global data
for poll in polls:
for question in poll.questions:
resp = get_row(question, partisan=poll.partisan, end_date=poll.end_date)
if resp is not None:
data = data.append(resp)
return
try:
from pollster import Pollster
pollster = Pollster()
# Getting data from Pollster. This might take a second.
raw_data = pollster.charts(topic='2016-president')
data = pd.DataFrame(columns=['Name', 'Partisan', 'State', 'Date', 'Trump', 'Clinton', 'Other',
'Undecided'])
for i in raw_data:
analyze_polls(i.polls())
except:
raise ValueError('Please install Pollster and run the functions above')
def get_state_party(code):
state = codes[codes['FIPS']==code]['USPS'].values[0]
if data[data['State']==state].shape[0] == 0:
return None
polls = data[(data['State']==state) & (data['Trump'] > 0.) & (data['Clinton'] > 0.)].sort_values(by='Date')
if polls.shape[0] == 0:
return None
if (polls.tail(1)['Trump'] > polls.tail(1)['Clinton']).values[0]:
return 'Republican'
else:
return 'Democrat'
def get_color_data():
color_data = {}
for i in codes['FIPS']:
color_data[i] = get_state_party(i)
return color_data
def get_state_data(code):
state = codes[codes['FIPS']==code]['USPS'].values[0]
if data[data['State']==state].shape[0] == 0:
return None
polls = data[(data['State']==state) & (data['Trump'] > 0.) & (data['Clinton'] > 0.)].sort_values(by='Date')
return polls
from bqplot import *
from ipywidgets import Layout
dt_x = DateScale()
sc_y = LinearScale()
time_series = Lines(scales={'x': dt_x, 'y': sc_y}, colors=['#E91D0E', '#2aa1ec'], marker='circle')
ax_x = Axis(scale=dt_x, label='Date')
ax_y = Axis(scale=sc_y, orientation='vertical', label='Percentage')
ts_fig = Figure(marks=[time_series], axes=[ax_x, ax_y], title='General Election - State Polls',
layout=Layout(min_width='650px', min_height='400px'))
sc_geo = AlbersUSA()
sc_c1 = OrdinalColorScale(domain=['Democrat', 'Republican'], colors=['#2aa1ec', '#E91D0E'])
color_data = get_color_data()
map_styles = {'color': color_data,
'scales': {'projection': sc_geo, 'color': sc_c1}, 'colors': {'default_color': 'Grey'}}
axis = ColorAxis(scale=sc_c1)
states_map = Map(map_data=topo_load('map_data/USStatesMap.json'), tooltip=ts_fig, **map_styles)
map_fig = Figure(marks=[states_map], axes=[axis],title='General Election Polls - State Wise')
def hover_callback(name, value):
polls = get_state_data(value['data']['id'])
if polls is None or polls.shape[0] == 0:
time_series.y = [0.]
return
time_series.x, time_series.y = polls['Date'].values.astype(np.datetime64), [polls['Trump'].values, polls['Clinton'].values]
ts_fig.title = str(codes[codes['FIPS']==value['data']['id']]['Name'].values[0]) + ' Polls - Presidential Election'
states_map.on_hover(hover_callback)
national = data[(data['State']=='US') & (data['Trump'] > 0.) & (data['Clinton'] > 0.)].sort_values(by='Date')
dt_x = DateScale()
sc_y = LinearScale()
clinton_scatter = Scatter(x=national['Date'].values.astype(np.datetime64), y=national['Clinton'],
scales={'x': dt_x, 'y': sc_y},
colors=['#2aa1ec'])
trump_scatter = Scatter(x=national['Date'].values.astype(np.datetime64), y=national['Trump'],
scales={'x': dt_x, 'y': sc_y},
colors=['#E91D0E'])
ax_x = Axis(scale=dt_x, label='Date', tick_format='%b-%Y', num_ticks=8)
ax_y = Axis(scale=sc_y, orientation='vertical', label='Percentage')
scat_fig = Figure(marks=[clinton_scatter, trump_scatter], axes=[ax_x, ax_y], title='General Election - National Polls')
Explanation: Visualizing the 2016 General Election Polls
End of explanation
VBox([map_fig, scat_fig])
Explanation: Hover on the map to visualize the poll data for that state.
End of explanation
county_data = pd.read_csv(os.path.abspath('../data_files/2008-election-results.csv'))
winner = np.array(['McCain'] * county_data.shape[0])
winner[(county_data['Obama'] > county_data['McCain']).values] = 'Obama'
sc_geo_county = AlbersUSA()
sc_c1_county = OrdinalColorScale(domain=['McCain', 'Obama'], colors=['Red', 'DeepSkyBlue'])
color_data_county = dict(zip(county_data['FIPS'].values.astype(int), list(winner)))
map_styles_county = {'color': color_data_county,
'scales': {'projection': sc_geo_county, 'color': sc_c1_county}, 'colors': {'default_color': 'Grey'}}
axis_county = ColorAxis(scale=sc_c1_county)
county_map = Map(map_data=topo_load('map_data/USCountiesMap.json'), **map_styles_county)
county_fig = Figure(marks=[county_map], axes=[axis_county],title='US Elections 2008 - Example',
layout=Layout(min_width='800px', min_height='550px'))
names_sc = OrdinalScale(domain=['Obama', 'McCain'])
vote_sc_y = LinearScale(min=0, max=100.)
names_ax = Axis(scale=names_sc, label='Candidate')
vote_ax = Axis(scale=vote_sc_y, orientation='vertical', label='Percentage')
vote_bars = Bars(scales={'x': names_sc, 'y': vote_sc_y}, colors=['#2aa1ec', '#E91D0E'])
bar_fig = Figure(marks=[vote_bars], axes=[names_ax, vote_ax], title='Vote Margin',
layout=Layout(min_width='600px', min_height='400px'))
def county_hover(name, value):
if (county_data['FIPS'] == value['data']['id']).sum() == 0:
bar_fig.title = ''
vote_bars.y = [0., 0.]
return
votes = county_data[county_data['FIPS'] == value['data']['id']]
dem_vote = float(votes['Obama %'].values[0])
rep_vote = float(votes['McCain %'].values[0])
vote_bars.x, vote_bars.y = ['Obama', 'McCain'], [dem_vote, rep_vote]
bar_fig.title = 'Vote % - ' + value['data']['name']
county_map.on_hover(county_hover)
county_map.tooltip = bar_fig
Explanation: Visualizing the County Results of the 2008 Elections
End of explanation
county_fig
Explanation: Hover on the map to visualize the voting percentage for each candidate in that county
End of explanation |
1,834 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Jim's MetaD convergence script; it also shows fast file read-in via streaming compared with the slow read-in via np.genfromtxt
Step1: Graph the final FES and plot the two squares on top of it
Step2: The two functions below calculate the average free energy of a region by integrating over whichever boxes you defined above. Since the FES is discrete and points are equally spaced, this is trivially taken as a summation
Step3: Below this is all testing of different read-in options
Step4: Profiling speed of different read in options | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import glob
import os
from matplotlib.patches import Rectangle
# define all variables for convergence script
# these will pass to the bash magic below used to call plumed sum_hills
dir="MetaD_converge" #where the intermediate fes will be stored
hills="MetaD/HILLS" #your HILLS file from the simulation
finalfes='MetaD/fes.dat' #the final fes.dat file
stride=1000
kT=8.314e-3*300 #throughout we convert to kcal, but the HILLS are assumed to be in GROMACS units (kJ)
## here is where you set the boxes to define convergence regions
C1=[-1.5,1.0] #center of box 1
C2=[1.0,-.5]
edge1=1.0 #edge of box1
edge2=1.0
%%bash -s "$dir" "$hills" "$stride" "$kT"
# calling sum hills and output to devnul
HILLSFILE=HILLS
rm -rf $1
mkdir $1
cp $2 $1
cd $1
plumed sum_hills --hills $HILLSFILE --kt $4 --stride $3 >& /dev/null
Explanation: Jim's MetaD convergence script; it also shows fast file read-in via streaming compared with the slow read-in via np.genfromtxt
End of explanation
%matplotlib inline
#read the data in from a text file
fesdata = np.genfromtxt(finalfes,comments='#');
fesdata = fesdata[:,0:3]
#what was your grid size? this calculates it
dim=int(np.sqrt(np.size(fesdata)/3))
#some post-processing to be compatible with contourf
X=np.reshape(fesdata[:,0],[dim,dim],order="F") #order F was 20% faster than A/C
Y=np.reshape(fesdata[:,1],[dim,dim],order="F")
Z=np.reshape((fesdata[:,2]-np.min(fesdata[:,2]))/4.184,[dim,dim],order="F") #convert to kcal/mol
#what spacing do you want? Z was converted to kcal/mol above
spacer=1 #this means 1kcal/mol spacing
lines=20
levels=np.linspace(0,lines*spacer,num=(lines+1),endpoint=True)
fig=plt.figure(figsize=(8,6))
axes = fig.add_subplot(111)
xlabel='$\Phi$'
ylabel='$\Psi$'
plt.contourf(X, Y, Z, levels, cmap=plt.cm.bone,)
plt.colorbar()
axes.set_xlabel(xlabel, fontsize=20)
axes.set_ylabel(ylabel, fontsize=20)
currentAxis = plt.gca()
currentAxis.add_patch(Rectangle((C1[0]-edge1/2, C1[1]-edge1/2), edge1, edge1,facecolor='none',edgecolor='yellow',linewidth='3'))
currentAxis.add_patch(Rectangle((C2[0]-edge2/2, C2[1]-edge2/2), edge2, edge2,facecolor='none',edgecolor='yellow',linewidth='3'))
plt.show()
Explanation: Graph the final FES and plot the two squares on top of it
End of explanation
def diffNP(file):
#read the data in from a text file
# note - this is very slow
fesdata = np.genfromtxt(file,comments='#');
A=0.0
B=0.0
dim=np.shape(fesdata)[0]
for i in range(0, dim):
x=fesdata[i][0]
y=fesdata[i][1]
z=fesdata[i][2]
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184 #output in kcal
return diff
def diff(file):
kT=8.314e-3*300
A=0.0
B=0.0
f = open(file, 'r')
for line in f:
if line[:1] != '#':
line=line.strip()
if line:
columns = line.split()
x=float(columns[0])
y=float(columns[1])
z=float(columns[2])
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
f.close
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184
return diff
diffvec=None
rootdir = '/Users/jpfaendt/Learning/Python/ALA2_MetaD/MetaD_converge'
i=0
diffvec=np.zeros((1,2))
#the variable func defines which function you are going to call to read in your data files fes_*.dat
#func=diffNP uses the numpy read in (SLOW)
#func=diff streams in data from a text file
#to experience the difference, uncomment the print statements and run each way
func=diff
for infile in glob.glob( os.path.join(rootdir, 'fes_?.dat') ):
if i >= i:
diffvec.resize((i+1,2))
#print "current file is: " + infile
diffvec[i][0]=i*1.0
diffvec[i][1]=func(infile)
i+=1
for infile in glob.glob( os.path.join(rootdir, 'fes_??.dat') ):
if i >= i:
diffvec.resize((i+1,2))
#print "current file is: " + infile
diffvec[i][0]=i*1.0
diffvec[i][1]=func(infile)
i+=1
for infile in glob.glob( os.path.join(rootdir, 'fes_???.dat') ):
if i >= i:
diffvec.resize((i+1,2))
#print "current file is: " + infile
diffvec[i][0]=i*1.0
diffvec[i][1]=func(infile)
i+=1
fig = plt.figure(figsize=(6,6))
axes = fig.add_subplot(111)
xlabel='time (generic)'
ylabel='diff (A-B) (kcal/mol)'
axes.plot(diffvec[:,0],diffvec[:,1])
axes.set_xlabel(xlabel, fontsize=20)
axes.set_ylabel(ylabel, fontsize=20)
plt.show()
Explanation: The two functions below calculate the average free energy of a region by integrating over whichever boxes you defined above. Since the FES is discrete and points are equally spaced, this is trivially taken as a summation:
$F_A = -k_BT \ln \sum_{i \in A} \exp\left(-F_{Ai}/k_BT\right)$ (and similarly for region $B$)
Don't forget that this is formally a free-energy plus some trivial constant but that the constant is equal for both regions $A$ and $B$ so that you will obtain the same free-energy difference irrespective of the reference point.
On the other hand, it doesn't make much sense to just use the arbitrary numbers coming from sum_hills, which are related only to the amount of aggregate bias produced in your simulation. This is why we reference the lowest point to zero on the contour plots.
I left both functions in as a teaching tool to show how slow np.genfromtxt is.
End of explanation
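For reference, a more compact NumPy version of the same box integration (a sketch, not part of the original script): it still reads the file with np.genfromtxt, so it is about conciseness rather than read-in speed, and it assumes the C1/C2/edge1/edge2/kT variables defined above.
def diff_vectorized(file):
    fes = np.genfromtxt(file, comments='#')
    x, y, z = fes[:, 0], fes[:, 1], fes[:, 2]
    inA = (np.abs(x - C1[0]) < edge1 / 2) & (np.abs(y - C1[1]) < edge1 / 2)
    inB = (np.abs(x - C2[0]) < edge2 / 2) & (np.abs(y - C2[1]) < edge2 / 2)
    A = -kT * np.log(np.sum(np.exp(-z[inA] / kT)))
    B = -kT * np.log(np.sum(np.exp(-z[inB] / kT)))
    return (A - B) / 4.184  # kcal/mol, matching diff() and diffNP()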
##
#read the data in from a text file using genfrom txt
fesdata = np.genfromtxt('MetaD_converge/fes_1.dat',comments='#');
kT=8.314e-3*300
A=0.0
B=0.0
dim=np.shape(fesdata)[0]
for i in range(0, dim):
x=fesdata[i][0]
y=fesdata[i][1]
z=fesdata[i][2]
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184
diff
##
#read the data in from a text file using read in commands
kT=8.314e-3*300
A=0.0
B=0.0
f = open('MetaD_converge/fes_1.dat', 'r')
for line in f:
if line[:1] != '#':
line=line.strip()
if line:
columns = line.split()
x=float(columns[0])
y=float(columns[1])
z=float(columns[2])
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
f.close
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184
diff
Explanation: Below this is all testing of different read-in options:
End of explanation
file='MetaD/fes.dat'
%timeit diffNP(file)
%timeit diff(file)
Explanation: Profiling speed of different read in options:
End of explanation |
1,835 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'vresm-1-0', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: VRESM-1-0
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnotic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
1,836 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preprocessing
Step1: Hide all GPUs from TensorFlow to not automatically occupy any GPU RAM.
Step2: Config
Automatically discover the paths to various data folders and compose the project structure.
Step3: The maximum allowed size of the embedding matrix and the maximum length our sequences will be padded/trimmed to.
Step4: Load data
Preprocessed and tokenized questions. Stopwords should be kept for neural models.
Step5: Word embedding database queried from the trained FastText model.
Step6: Build features
Collect all texts
Step7: Create question sequences
Step8: Create embedding lookup matrix
Step9: Allocate an embedding matrix. Include the NULL word.
Step10: Fill the matrix using the vectors for individual words.
Step11: Save features
Word embedding lookup matrix.
Step12: Padded word index sequences. | Python Code:
from pygoose import *
from gensim.models.wrappers.fasttext import FastText
Explanation: Preprocessing: FastText Sequences & Embeddings
Based on the tokenized questions and a pre-built word embedding database, build fixed-length (padded) sequences of word indices for each question, as well as a lookup matrix that maps word indices to word vectors.
Imports
This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.
End of explanation
kg.gpu.cuda_disable_gpus()
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
Explanation: Hide all GPUs from TensorFlow to not automatically occupy any GPU RAM.
End of explanation
project = kg.Project.discover()
Explanation: Config
Automatically discover the paths to various data folders and compose the project structure.
End of explanation
MAX_VOCAB_SIZE = 125000
MAX_SEQUENCE_LENGTH = 30
Explanation: The maximum allowed size of the embedding matrix and the maximum length our sequences will be padded/trimmed to.
End of explanation
tokens_train = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_train.pickle')
tokens_test = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_test.pickle')
Explanation: Load data
Preprocessed and tokenized questions. Stopwords should be kept for neural models.
End of explanation
embedding_model = FastText.load_word2vec_format(project.aux_dir + 'fasttext_vocab.vec')
EMBEDDING_DIM = len(embedding_model['apple'])
Explanation: Word embedding database queried from the trained FastText model.
End of explanation
texts_q1_train = [' '.join(pair[0]) for pair in tokens_train]
texts_q2_train = [' '.join(pair[1]) for pair in tokens_train]
texts_q1_test = [' '.join(pair[0]) for pair in tokens_test]
texts_q2_test = [' '.join(pair[1]) for pair in tokens_test]
unique_question_texts = list(set(texts_q1_train + texts_q2_train + texts_q1_test + texts_q2_test))
Explanation: Build features
Collect all texts
End of explanation
tokenizer = Tokenizer(
num_words=MAX_VOCAB_SIZE,
split=' ',
lower=True,
char_level=False,
)
tokenizer.fit_on_texts(unique_question_texts)
sequences_q1_train = tokenizer.texts_to_sequences(texts_q1_train)
sequences_q2_train = tokenizer.texts_to_sequences(texts_q2_train)
sequences_q1_test = tokenizer.texts_to_sequences(texts_q1_test)
sequences_q2_test = tokenizer.texts_to_sequences(texts_q2_test)
Explanation: Create question sequences
End of explanation
num_words = min(MAX_VOCAB_SIZE, len(tokenizer.word_index))
Explanation: Create embedding lookup matrix
End of explanation
embedding_matrix = np.zeros((num_words + 1, EMBEDDING_DIM))
Explanation: Allocate an embedding matrix. Include the NULL word.
End of explanation
for word, index in progressbar(tokenizer.word_index.items()):
if word in embedding_model.vocab:
embedding_matrix[index] = embedding_model[word]
Explanation: Fill the matrix using the vectors for individual words.
End of explanation
kg.io.save(embedding_matrix, project.aux_dir + 'fasttext_vocab_embedding_matrix.pickle')
Explanation: Save features
Word embedding lookup matrix.
End of explanation
sequences_q1_padded_train = pad_sequences(sequences_q1_train, maxlen=MAX_SEQUENCE_LENGTH)
sequences_q2_padded_train = pad_sequences(sequences_q2_train, maxlen=MAX_SEQUENCE_LENGTH)
sequences_q1_padded_test = pad_sequences(sequences_q1_test, maxlen=MAX_SEQUENCE_LENGTH)
sequences_q2_padded_test = pad_sequences(sequences_q2_test, maxlen=MAX_SEQUENCE_LENGTH)
kg.io.save(sequences_q1_padded_train, project.preprocessed_data_dir + 'sequences_q1_fasttext_train.pickle')
kg.io.save(sequences_q2_padded_train, project.preprocessed_data_dir + 'sequences_q2_fasttext_train.pickle')
kg.io.save(sequences_q1_padded_test, project.preprocessed_data_dir + 'sequences_q1_fasttext_test.pickle')
kg.io.save(sequences_q2_padded_test, project.preprocessed_data_dir + 'sequences_q2_fasttext_test.pickle')
Explanation: Padded word index sequences.
End of explanation |
1,837 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting Yelp Star Ratings
In this little exercise, I am going to have a look at the distribution of Yelp ratings (1 to 5 stars) and their correlations to business and user attributes. Eventually I am testing several ML algorithms to predict a rating from business / user attributes and the review text.
Import statements
Step1: User Settings
Step2: Verify the source directory
Step3: Data Wrangling
Load and format business data
Step4: Load and format user data
Step5: Load and format reviews data
Step6: Merge reviews, users and business tables (left joins)
Step7: Exploratory Data Analysis
Plot number of votes per review
Step8: Investigate the distribution of ratings
Step9: Investigate correlations with the ratings column
Step10: Investigate the relation between review word count and rating
Step11: Predictive Data Analysis
Predict a rating from business/user attributes and the review text
Split the available data into training and test set
Step12: Vectorize the review texts
Step13: Prepare features and labels
Step14: Multinomial Naive Bayes (using review texts only)
Step15: Multinomial Naive Bayes (using business / user attributes only)
Step16: Multinomial Naive Bayes (using review texts and additional attributes)
Step17: Stochastic Gradient Descent (using review texts only)
Step18: Decision Tree
Step19: Random Forest Classifier | Python Code:
import os, sys
import numpy as np
import scipy as sp
import pandas as pd
import random
import re
import matplotlib
import matplotlib.pyplot as plt
#matplotlib.style.use('ggplot')
matplotlib.style.use('fivethirtyeight')
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
%matplotlib inline
%config InlineBackend.figure_formats=['svg']
def format_column_names(dataFrame):
dataFrame.columns = dataFrame.columns.str.replace('[^\w.]+','_')
dataFrame.columns = dataFrame.columns.str.lower()
pd.options.display.max_seq_items = 500
Explanation: Predicting Yelp Star Ratings
In this little exercise, I am going to have a look at the distribution of Yelp ratings (1 to 5 stars) and their correlations to business and user attributes. Eventually I am testing several ML algorithms to predict a rating from business / user attributes and the review text.
Import statements
End of explanation
# The fraction (random sample) of the review dataset, which is to be parsed
# Table joining and text vectorization is demanding in terms of memory and cpu load,
# so I recommend a value <= 0.1.
REVIEW_FRAC = 0.05
Explanation: User Settings
End of explanation
# Set the source directory for the input csv files (business.csv, user.csv, review.csv)
source_dir = os.path.join( os.getcwd(), 'scratch' )
required_files = ['business.csv', 'user.csv', 'review.csv']
nfiles_found = sum( os.path.isfile( os.path.join(source_dir, f) ) for f in required_files)
if nfiles_found < len(required_files):
    source_dir = input('Specify CSV source directory: ')
    nfiles_found = sum( os.path.isfile( os.path.join(source_dir, f) ) for f in required_files)  # re-check once the user has supplied a directory
if nfiles_found < len(required_files):
print('Source files not found.')
sys.exit(1)
print('Source directory: {0}'.format(source_dir))
Explanation: Verify the source directory
End of explanation
businesses_file = os.path.join(source_dir, 'business.csv')
# Load business data
businesses = pd.read_csv( businesses_file,
parse_dates=True,
low_memory=False,
index_col='business_id'
)
format_column_names(businesses)
# Drop irrelevant columns
irrel_cols = [col for col in list(businesses) if col.startswith('attributes.hair_types')]
businesses.drop(irrel_cols, axis=1, inplace=True)
# Identify column starting with 'attribute'
attr_cols = [col for col in list(businesses) if col.startswith('attributes.')]
# Convert attribute columns to numeric values (no/undefined/yes -> 0.0/0.5/1.0)
businesses[attr_cols] = businesses[attr_cols].replace(
to_replace=[True, 'yes', 'full_bar', 'free', 'yes_free', 'quiet', 'yes_corkage', 'beer_and_wine'], value=1.0 )
businesses[attr_cols] = businesses[attr_cols].replace(
to_replace=[False, 'no', 'none', 'very_loud'], value=0.0 )
businesses[attr_cols] = businesses[attr_cols].apply(pd.to_numeric, errors='coerce')
businesses[attr_cols] = businesses[attr_cols].fillna(value=0.5)
# Convert categorical data
#businesses['city'] = pd.Categorical(businesses['city']).codes
businesses['city'] = pd.factorize(businesses['city'])[0]
#businesses.columns
businesses.info()
# Plot the business mean ratings
#star_counts = businesses.stars.value_counts(sort=False, normalize=True).sort_index()
#star_counts.plot(kind="bar", title="Business Mean Ratings", rot='0').set_xlabel('Rating')
Explanation: Data Wrangling
Load and format business data
End of explanation
users_file = os.path.join(source_dir, 'user.csv')
# Load user data
users = pd.read_csv( users_file, parse_dates=True, index_col='user_id' )
format_column_names(users)
compl_cols = [col for col in list(users) if col.startswith('compliments.')]
users['compliments'] = users[compl_cols].sum(axis=1)
vote_cols = [col for col in list(users) if col.startswith('votes.')]
users['votes'] = users[vote_cols].sum(axis=1)
#users.columns
users.info()
Explanation: Load and format user data
End of explanation
reviews_file = os.path.join(source_dir, 'review.csv')
# count lines
#num_lines = sum(1 for _ in open(reviews_file))
num_lines = 10000000
# configure random line indices to skip
random.seed(123)
skip_idx = random.sample(range(1, num_lines), num_lines - int(REVIEW_FRAC*num_lines))
# only load a random fraction of the reviews dataset, specified by REVIEW_FRAC
reviews = pd.read_csv( reviews_file,
parse_dates=True,
index_col='review_id',
skiprows=skip_idx
)
format_column_names(reviews)
reviews['text_length'] = reviews['text'].str.len()
reviews['text_wc'] = reviews['text'].str.split().apply(len)
vote_cols = [col for col in list(reviews) if col.startswith('votes.')]
reviews['votes'] = reviews[vote_cols].sum(axis=1)
times = pd.DatetimeIndex(reviews.date)
reviews['year'] = times.year
#reviews.columns
reviews.info()
Explanation: Load and format reviews data
End of explanation
%time rb = pd.merge(reviews, businesses, how='left', left_on='business_id', right_index=True, suffixes=('@reviews', '@businesses'))
%time rbu = pd.merge(rb, users, how='left', left_on='user_id', right_index=True, suffixes=('@reviews', '@users'))
#rbu['stars@reviews'].loc[rbu['votes@reviews'] >= 1].size
del businesses
del users
del reviews
#rbu.columns
rbu.info()
Explanation: Merge reviews, users and business tables (left joins)
End of explanation
rbu['votes@reviews'].value_counts(normalize=True).ix[:20] \
.plot.bar(rot=90, title='Distribution of votes per review')
Explanation: Exploratory Data Analysis
Plot number of votes per review
End of explanation
star_counts = rbu['stars@reviews'].value_counts(normalize=True).sort_index()
star_counts_min1 = rbu['stars@reviews'].loc[rbu['votes@reviews'] >= 1].value_counts(normalize=True).sort_index()
star_counts_min5 = rbu['stars@reviews'].loc[rbu['votes@reviews'] >= 5].value_counts(normalize=True).sort_index()
star_counts_comb = pd.concat([star_counts, star_counts_min1, star_counts_min5], axis=1)
star_counts_comb.columns = ['all', 'minimum of 1 vote', 'minimum of 5 votes']
star_counts_comb.plot.bar(title="Distribution of ratings", stacked=False, rot=0).set_xlabel('Rating')
MIN_VOTES = 5
rbu_min_votes = rbu.loc[rbu['votes@reviews'] >= MIN_VOTES]
star_counts_per_year = rbu_min_votes.groupby(['year'])['stars@reviews'].value_counts(normalize=True).unstack().transpose()
star_counts_per_year[star_counts_per_year.columns[-5:]].plot.bar(title="Distribution of ratings per year (min. of {0} votes)".format(MIN_VOTES), stacked=False, rot=0).set_xlabel('Rating')
Explanation: Investigate the distribution of ratings
End of explanation
cols = ['attributes.accepts_credit_cards',
'attributes.alcohol',
'attributes.by_appointment_only',
'attributes.caters',
'attributes.coat_check',
'attributes.corkage',
'attributes.delivery',
'attributes.dogs_allowed',
'attributes.drive_thru'] \
+ [col for col in list(rbu) if col.startswith('attributes.good_for')] \
+ ['attributes.happy_hour',
'attributes.has_tv',
'attributes.noise_level',
'attributes.open_24_hours',
'attributes.order_at_counter',
'attributes.outdoor_seating',
'attributes.price_range',
'attributes.smoking',
'attributes.take_out',
'attributes.takes_reservations',
'attributes.waiter_service',
'attributes.wheelchair_accessible',
'attributes.wi_fi',
'review_count@users',
'compliments']
correls = rbu[cols].corrwith(rbu['stars@reviews'], drop=True).sort_values()
correls.plot.bar(figsize=(10,4), title='Correlation of business attributes (#1) with avg. rating')
cols = [col for col in list(rbu) if re.search('(\.ambience|\.music|\.parking)', col)]
correls = rbu[cols].corrwith(rbu['stars@businesses'], drop=True).sort_values()
correls.plot.bar(figsize=(10,4), title='Correlation of business attributes (#2) with avg. rating')
cols = ['compliments', 'votes@reviews', 'votes@users', 'review_count@users', 'fans']
correls = rbu[cols].corrwith(rbu['stars@reviews'], drop=True).sort_values()
correls.plot.bar(title='Correlation of user attributes & review votes with ratings')
# rbu.loc[rbu['votes'] >= 5].groupby(['stars_review'])['text_length'].mean().plot.bar(title="Mean review length (characters) vs. rating (min. of 5 votes)", stacked=False, rot=0).set_xlabel('Rating')
Explanation: Investigate correlations with the ratings column
End of explanation
rbu.groupby(['stars@reviews'])['text_wc'].mean().plot.bar(title="Mean review word counts vs. rating (min. of 5 votes)", stacked=False, rot=0).set_xlabel('Rating')
Explanation: Investigate the relation between review word count and rating
End of explanation
data_train, data_test = train_test_split(
rbu,
test_size = 0.2,
random_state=1
)
# column containing the review texts
text_col = 'text'
# column containing the label
label_col = 'stars@reviews'
# columns containing attribute features
attr_cols = [col for col in list(rbu) if re.match('(votes\..+@users|votes\..+@reviews|attributes\.|city)', col)]
#attr_cols
Explanation: Predictive Data Analysis
Predict a rating from business/user attributes and the review text
Split the available data into training and test set
End of explanation
# train the vectorizer from training data (review texts)
vect = CountVectorizer(
stop_words='english',
ngram_range=(1, 2),
strip_accents='unicode',
max_df=0.9,
min_df=3,
max_features=100000
)
vect.fit(data_train[text_col])
Explanation: Vectorize the review texts
End of explanation
# collect features
#X_train_attr = data_train[attr_cols]
#X_test_attr = data_test[attr_cols]
X_train_attr = pd.get_dummies( data_train[attr_cols] )
X_test_attr = pd.get_dummies( data_test[attr_cols] )
X_train_dtm = vect.transform(data_train[text_col])
X_test_dtm = vect.transform(data_test[text_col])
# collect labels
y_train = data_train[label_col]
y_test = data_test[label_col]
# combine attribute matrices and sparse document-term matrices
X_train = sp.sparse.hstack((X_train_dtm, X_train_attr))
X_test = sp.sparse.hstack((X_test_dtm, X_test_attr))
Explanation: Prepare features and labels
End of explanation
# use a Multinomial Naive Bayes model
nb = MultinomialNB(
alpha=0.1
)
nb.fit(X_train_dtm, y_train)
# make class predictions for X_test_dtm
y_pred = nb.predict(X_test_dtm)
# calculate accuracy of class predictions
metrics.accuracy_score(y_test, y_pred)
# calculate the mean error (more meaningful in our case)
metrics.mean_absolute_error(y_test, y_pred)
# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred)
# examine how often tokens of our dictionary appear in each rating category
tokens = pd.DataFrame(
{'token': vect.get_feature_names(),
'1 star' : nb.feature_count_[0, :] / nb.class_count_[0],
'2 stars' : nb.feature_count_[1, :] / nb.class_count_[1],
'3 stars' : nb.feature_count_[2, :] / nb.class_count_[2] ,
'4 stars' : nb.feature_count_[3, :] / nb.class_count_[3] ,
'5 stars' : nb.feature_count_[4, :] / nb.class_count_[4]}
).set_index('token')
tokens.sample(10, random_state=3)
Explanation: Multinomial Naive Bayes (using review texts only)
End of explanation
# use a Multinomial Naive Bayes model
nb = MultinomialNB(
alpha=0.1
)
nb.fit(X_train_attr, y_train)
# make class predictions for X_test_dtm
y_pred = nb.predict(X_test_attr)
# calculate accuracy of class predictions
metrics.accuracy_score(y_test, y_pred)
# calculate the mean error (more meaningful in our case)
metrics.mean_absolute_error(y_test, y_pred)
# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred)
Explanation: Multinomial Naive Bayes (using business / user attributes only)
End of explanation
# use a Multinomial Naive Bayes model
nb = MultinomialNB(
alpha=0.1
)
nb.fit(X_train, y_train)
# make class predictions for X_test_dtm
y_pred = nb.predict(X_test)
# calculate accuracy of class predictions
metrics.accuracy_score(y_test, y_pred)
# calculate the mean error (more meaningful in our case)
metrics.mean_absolute_error(y_test, y_pred)
# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred)
Explanation: Multinomial Naive Bayes (using review texts and additional attributes)
End of explanation
# use a linear model with stochastic gradient descent (SGD)
sgd = SGDClassifier(
loss='modified_huber',
penalty='l2',
alpha=1e-3,
n_iter=20,
n_jobs=1,
random_state=0)
sgd.fit(X_train_dtm, y_train)
# make class predictions for X_test_dtm
y_pred = sgd.predict(X_test_dtm)
# calculate accuracy of class predictions
metrics.accuracy_score(y_test, y_pred)
# calculate the mean error (more meaningful in our case)
metrics.mean_absolute_error(y_test, y_pred)
# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred)
Explanation: Stochastic Gradient Descent (using review texts only)
End of explanation
dtree = DecisionTreeClassifier(
criterion='entropy',
random_state=0,
min_samples_leaf=10,
max_depth=None
)
dtree.fit(X_train, y_train)
# make class predictions for X_test_dtm
y_pred = dtree.predict(X_test)
# calculate accuracy of class predictions
metrics.accuracy_score(y_test, y_pred)
# calculate the mean error
metrics.mean_absolute_error(y_test, y_pred)
# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred)
Explanation: Decision Tree
End of explanation
rf = RandomForestClassifier(
random_state=0,
n_estimators=20,
criterion='entropy',
n_jobs=3,
max_depth=150
)
rf.fit(X_train, y_train)
# make class predictions for X_test_dtm
y_pred = rf.predict(X_test)
# calculate accuracy of class predictions
metrics.accuracy_score(y_test, y_pred)
# calculate the mean error
metrics.mean_absolute_error(y_test, y_pred)
# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred)
Explanation: Random Forest Classifier
End of explanation |
1,838 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goals of this Lesson
Gradient Descent for PCA
Nonlinear Dimensionality Reduction
Autoencoder
Step1: Again we need functions for shuffling the data and calculating classification errrors.
Step2: 0.1 Load the dataset of handwritten digits
We are going to use the MNIST dataset throughout this session. Let's load the data...
Step3: 1 Gradient Descent for PCA
Recall the Principal Component Analysis model we covered in the last session. Again, the goal of PCA is for a given datapoint $\mathbf{x}{i}$, find a lower-dimensional representation $\mathbf{h}{i}$ such that $\mathbf{x}{i}$ can be 'predicted' from $\mathbf{h}{i}$ using a linear transformation. Again, the loss function can be written as
Step4: Let's visualize a reconstruction...
Step5: We can again visualize the transformation matrix $\mathbf{W}^{T}$. It's rows act as 'filters' or 'feature detectors'. However, without the orthogonality constraint, we've loss the identifiably of the components...
Step6: 2. Nonlinear Dimensionality Reduction with Autoencoders
In the last session (and section) we learned about Principal Component Analysis, a technique that finds some linear projection that reduces the dimensionality of the data while preserving its variance. We looked at it as a form of unsupervised linear regression, where we predict the data itself instead of some associated value (i.e. a label). In this section, we will move on to a nonlinear dimensionality reduction technique called an Autoencoder and derive it's optimization procedure.
2.1 Defining the Autoencoder Model
Recall that PCA is comprised of a linear projection step followed by application of the inverse projection. An Autoencoder is the same model but with a non-linear transformation placed on the hidden representation. To reiterate, our goal is
Step7: Should print
array([[ 4.70101821, 2.26494039],
[ 2.86585042, 0.0731302 ],
[ 0.79869215, 0.15570277]])
Autoencoder (AE) Overview
Data
We observe $\mathbf{x}_{i}$ where
\begin{eqnarray}
\mathbf{x}{i} = (x{i,1}, \dots, x_{i,D}) &
Step8: 2.3 SciKit Learn Version
We can hack the Scikit-Learn Regression neural network into an Autoencoder by feeding it the data back as the labels...
Step9: 2.4 Denoising Autoencoder (DAE)
Lastly, we are going to examine an extension to the Autoencoder called a Denoising Autoencoder (DAE). It has the following loss fuction
Step10: When training larger autoencoders, you'll see filters that look like these...
Regular Autoencoder
Step11: This dataset contains 400 64x64 pixel images of 40 people each exhibiting 10 facial expressions. The images are in gray-scale, not color, and therefore flattened vectors contain 4096 dimensions.
<span style="color
Step12: <span style="color
Step13: <span style="color
Step14: <span style="color
Step15: <span style="color | Python Code:
from IPython.display import Image
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import time
%matplotlib inline
Explanation: Goals of this Lesson
Gradient Descent for PCA
Nonlinear Dimensionality Reduction
Autoencoder: Model and Learning
Autoencoding Images
Denoising Autoencoder
End of explanation
### function for shuffling the data and labels
def shuffle_in_unison(features, labels):
rng_state = np.random.get_state()
np.random.shuffle(features)
np.random.set_state(rng_state)
np.random.shuffle(labels)
### calculate classification errors
# return a percentage: (number misclassified)/(total number of datapoints)
def calc_classification_error(predictions, class_labels):
n = predictions.size
num_of_errors = 0.
for idx in xrange(n):
if (predictions[idx] >= 0.5 and class_labels[idx]==0) or (predictions[idx] < 0.5 and class_labels[idx]==1):
num_of_errors += 1
return num_of_errors/n
Explanation: Again we need functions for shuffling the data and calculating classification errors.
End of explanation
mnist = pd.read_csv('../data/mnist_train_100.csv', header=None)
# load the 70,000 x 784 matrix
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original').data
# reduce to 5k instances
np.random.shuffle(mnist)
#mnist = mnist[:5000,:]/255.
print "Dataset size: %d x %d"%(mnist.shape)
# subplot containing first image
ax1 = plt.subplot(1,2,1)
digit = mnist[1,:]
ax1.imshow(np.reshape(digit, (28, 28)), cmap='Greys_r')
# subplot containing second image
ax2 = plt.subplot(1,2,2)
digit = mnist[2,:]
ax2.imshow(np.reshape(digit, (28, 28)), cmap='Greys_r')
plt.show()
Explanation: 0.1 Load the dataset of handwritten digits
We are going to use the MNIST dataset throughout this session. Let's load the data...
End of explanation
# set the random number generator for reproducability
np.random.seed(49)
# define the dimensionality of the hidden rep.
n_components = 200
# Randomly initialize the Weight matrix
W = np.random.uniform(low=-4 * np.sqrt(6. / (n_components + mnist.shape[1])),\
high=4 * np.sqrt(6. / (n_components + mnist.shape[1])), size=(mnist.shape[1], n_components))
# Initialize the step-size
alpha = 1e-3
# Initialize the gradient
grad = np.infty
# Set the tolerance
tol = 1e-8
# Initialize error
old_error = 0
error = [np.infty]
batch_size = 250
### train with stochastic gradients
start_time = time.time()
iter_idx = 1
# loop until gradient updates become small
while (alpha*np.linalg.norm(grad) > tol) and (iter_idx < 300):
for batch_idx in xrange(mnist.shape[0]/batch_size):
x = mnist[batch_idx*batch_size:(batch_idx+1)*batch_size, :]
h = np.dot(x, W)
x_recon = np.dot(h, W.T)
# compute gradient
diff = x - x_recon
grad = (-4./batch_size)*np.dot(diff.T, h)
# update parameters
W = W - alpha*grad
# track the error
if iter_idx % 25 == 0:
old_error = error[-1]
diff = mnist - np.dot(np.dot(mnist, W), W.T)
recon_error = np.mean( np.sum(diff**2, 1) )
error.append(recon_error)
print "Epoch %d, Reconstruction Error: %.3f" %(iter_idx, recon_error)
iter_idx += 1
end_time = time.time()
print
print "Training ended after %i iterations, taking a total of %.2f seconds." %(iter_idx, end_time-start_time)
print "Final Reconstruction Error: %.2f" %(error[-1])
reduced_mnist = np.dot(mnist, W)
print "Dataset is now of size: %d x %d"%(reduced_mnist.shape)
Explanation: 1 Gradient Descent for PCA
Recall the Principal Component Analysis model we covered in the last session. Again, the goal of PCA is, for a given datapoint $\mathbf{x}_{i}$, to find a lower-dimensional representation $\mathbf{h}_{i}$ such that $\mathbf{x}_{i}$ can be 'predicted' from $\mathbf{h}_{i}$ using a linear transformation. Again, the loss function can be written as: $$ \mathcal{L}_{\text{PCA}} = \sum_{i=1}^{N} (\mathbf{x}_{i} - \mathbf{x}_{i}\mathbf{W}\mathbf{W}^{T})^{2}.$$
Instead of using the closed-form solution we discussed in the previous session, here we'll use gradient descent. The reason for doing this will become clear later in the session, as we move on to cover a non-linear version of PCA. To run gradient descent, we of course need the derivative of the loss w.r.t. the parameters, which are in this case, the transformation matrix $\mathbf{W}$:
$$ \nabla_{\mathbf{W}} \mathcal{L}_{\text{PCA}} = -4\sum_{i=1}^{N} (\mathbf{x}_{i} - \mathbf{\tilde x}_{i})^{T}\mathbf{h}_{i} $$
Now let's run our stochastic gradient PCA on the MNIST dataset...
<span style="color:red">Caution: Running the following PCA code could take several minutes or more, depending on your computer's processing power.</span>
End of explanation
img_idx = 2
reconstructed_img = np.dot(reduced_mnist[img_idx,:], W.T)
original_img = mnist[img_idx,:]
# subplot for original image
ax1 = plt.subplot(1,2,1)
ax1.imshow(np.reshape(original_img, (28, 28)), cmap='Greys_r')
ax1.set_title("Original Painting")
# subplot for reconstruction
ax2 = plt.subplot(1,2,2)
ax2.imshow(np.reshape(reconstructed_img, (28, 28)), cmap='Greys_r')
ax2.set_title("Reconstruction")
plt.show()
Explanation: Let's visualize a reconstruction...
End of explanation
# two components to show
comp1 = 0
comp2 = 150
# subplot
ax1 = plt.subplot(1,2,1)
filter1 = W[:, comp1]
ax1.imshow(np.reshape(filter1, (28, 28)), cmap='Greys_r')
# subplot
ax2 = plt.subplot(1,2,2)
filter2 = W[:, comp2]
ax2.imshow(np.reshape(filter2, (28, 28)), cmap='Greys_r')
plt.show()
Explanation: We can again visualize the transformation matrix $\mathbf{W}^{T}$. Its rows act as 'filters' or 'feature detectors'. However, without the orthogonality constraint, we've lost the identifiability of the components...
End of explanation
def logistic(x):
return 1./(1+np.exp(-x))
def logistic_derivative(x):
z = logistic(x)
return np.multiply(z, 1-z)
def compute_gradient(x, x_recon, h, a):
# parameters:
# x: the original data
# x_recon: the reconstruction of x
# h: the hidden units (after application of f)
# a: the pre-activations (before the application of f)
return #TODO
np.random.seed(39)
# dummy variables for testing
x = np.random.normal(size=(5,3))
x_recon = x + np.random.normal(size=x.shape)
W = np.random.normal(size=(x.shape[1], 2))
a = np.dot(x, W)
h = logistic(a)
compute_gradient(x, x_recon, h, a)
Explanation: 2. Nonlinear Dimensionality Reduction with Autoencoders
In the last session (and section) we learned about Principal Component Analysis, a technique that finds some linear projection that reduces the dimensionality of the data while preserving its variance. We looked at it as a form of unsupervised linear regression, where we predict the data itself instead of some associated value (i.e. a label). In this section, we will move on to a nonlinear dimensionality reduction technique called an Autoencoder and derive its optimization procedure.
2.1 Defining the Autoencoder Model
Recall that PCA is comprised of a linear projection step followed by application of the inverse projection. An Autoencoder is the same model but with a non-linear transformation placed on the hidden representation. To reiterate, our goal is: for a datapoint $\mathbf{x}_{i}$, find a lower-dimensional representation $\mathbf{h}_{i}$ such that $\mathbf{x}_{i}$ can be 'predicted' from $\mathbf{h}_{i}$---but this time, not necessarily with a linear transformation. In math, this statement can be written as $$\mathbf{\tilde x}_{i} = \mathbf{h}_{i} \mathbf{W}^{T} \text{ where } \mathbf{h}_{i} = f(\mathbf{x}_{i} \mathbf{W}). $$ $\mathbf{W}$ is a $D \times K$ matrix of parameters that need to be learned--much like the $\beta$ vector in regression models. $D$ is the dimensionality of the original data, and $K$ is the dimensionality of the compressed representation $\mathbf{h}_{i}$. Lastly, we have the new component, the transformation function $f$. There are many possible functions to choose for $f$; yet we'll use a familiar one, the logistic function $$f(z) = \frac{1}{1+\exp(-z)}.$$ The graphic below depicts the autoencoder's computation path:
Optimization
Having defined the Autoencoder model, we look to write learning as an optimization process. Recall that we wish to make a reconstruction of the data, denoted $\mathbf{\tilde x}_{i}$, as close as possible to the original input: $$\mathcal{L}_{\text{AE}} = \sum_{i=1}^{N} (\mathbf{x}_{i} - \mathbf{\tilde x}_{i})^{2}.$$ We can make a substitution for $\mathbf{\tilde x}_{i}$ from the equation above: $$ = \sum_{i=1}^{N} (\mathbf{x}_{i} - \mathbf{h}_{i}\mathbf{W}^{T})^{2}.$$ And we can make another substitution for $\mathbf{h}_{i}$, bringing us to the final form of the loss function: $$ = \sum_{i=1}^{N} (\mathbf{x}_{i} - f(\mathbf{x}_{i}\mathbf{W})\mathbf{W}^{T})^{2}.$$
<span style="color:red">STUDENT ACTIVITY (15 mins)</span>
Derive an expression for the gradient: $$ \nabla_{W}\mathcal{L}_{\text{AE}} = ? $$
Take $f$ to be the logistic function, which has a derivative of $f'(z) = f(z)(1-f(z))$. Those functions are provided for you below.
End of explanation
# set the random number generator for reproducability
np.random.seed(39)
# define the dimensionality of the hidden rep.
n_components = 200
# Randomly initialize the transformation matrix
W = np.random.uniform(low=-4 * np.sqrt(6. / (n_components + mnist.shape[1])),\
high=4 * np.sqrt(6. / (n_components + mnist.shape[1])), size=(mnist.shape[1], n_components))
# Initialize the step-size
alpha = .01
# Initialize the gradient
grad = np.infty
# Initialize error
old_error = 0
error = [np.infty]
batch_size = 250
### train with stochastic gradients
start_time = time.time()
iter_idx = 1
# loop until gradient updates become small
while (alpha*np.linalg.norm(grad) > tol) and (iter_idx < 300):
for batch_idx in xrange(mnist.shape[0]/batch_size):
x = mnist[batch_idx*batch_size:(batch_idx+1)*batch_size, :]
pre_act = np.dot(x, W)
h = logistic(pre_act)
x_recon = np.dot(h, W.T)
# compute gradient
grad = compute_gradient(x, x_recon, h, pre_act)
# update parameters
W = W - alpha/batch_size * grad
# track the error
if iter_idx % 25 == 0:
old_error = error[-1]
diff = mnist - np.dot(logistic(np.dot(mnist, W)), W.T)
recon_error = np.mean( np.sum(diff**2, 1) )
error.append(recon_error)
print "Epoch %d, Reconstruction Error: %.3f" %(iter_idx, recon_error)
iter_idx += 1
end_time = time.time()
print
print "Training ended after %i iterations, taking a total of %.2f seconds." %(iter_idx, end_time-start_time)
print "Final Reconstruction Error: %.2f" %(error[-1])
reduced_mnist = np.dot(mnist, W)
print "Dataset is now of size: %d x %d"%(reduced_mnist.shape)
img_idx = 2
reconstructed_img = np.dot(logistic(reduced_mnist[img_idx,:]), W.T)
original_img = mnist[img_idx,:]
# subplot for original image
ax1 = plt.subplot(1,2,1)
ax1.imshow(np.reshape(original_img, (28, 28)), cmap='Greys_r')
ax1.set_title("Original Digit")
# subplot for reconstruction
ax2 = plt.subplot(1,2,2)
ax2.imshow(np.reshape(reconstructed_img, (28, 28)), cmap='Greys_r')
ax2.set_title("Reconstruction")
plt.show()
# two components to show
comp1 = 0
comp2 = 150
# subplot
ax1 = plt.subplot(1,2,1)
filter1 = W[:, comp1]
ax1.imshow(np.reshape(filter1, (28, 28)), cmap='Greys_r')
# subplot
ax2 = plt.subplot(1,2,2)
filter2 = W[:, comp2]
ax2.imshow(np.reshape(filter2, (28, 28)), cmap='Greys_r')
plt.show()
Explanation: Should print
array([[ 4.70101821, 2.26494039],
[ 2.86585042, 0.0731302 ],
[ 0.79869215, 0.15570277]])
Autoencoder (AE) Overview
Data
We observe $\mathbf{x}_{i}$ where
\begin{eqnarray}
\mathbf{x}_{i} = (x_{i,1}, \dots, x_{i,D}) &:& \mbox{set of $D$ explanatory variables (aka features). No labels.}
\end{eqnarray}
Parameters
$\mathbf{W}$: Matrix with dimensionality $D \times K$, where $D$ is the dimensionality of the original data and $K$ the dimensionality of the new features. The matrix encodes the transformation between the original and new feature spaces.
Error Function
\begin{eqnarray}
\mathcal{L} = \sum_{i=1}^{N} ( \mathbf{x}_{i} - f(\mathbf{x}_{i} \mathbf{W}) \mathbf{W}^{T})^{2}
\end{eqnarray}
2.2 Autoencoder Implementation
Now let's train an Autoencoder...
End of explanation
from sklearn.neural_network import MLPRegressor
# set the random number generator for reproducability
np.random.seed(39)
# define the dimensionality of the hidden rep.
n_components = 200
# define model
ae = MLPRegressor(hidden_layer_sizes=(n_components,), activation='logistic')
### train Autoencoder
start_time = time.time()
ae.fit(mnist, mnist)
end_time = time.time()
recon_error = np.mean(np.sum((mnist - ae.predict(mnist))**2, 1))
W = ae.coefs_[0]
b = ae.intercepts_[0]
reduced_mnist = logistic(np.dot(mnist, W) + b)
print
print "Training ended after a total of %.2f seconds." %(end_time-start_time)
print "Final Reconstruction Error: %.2f" %(recon_error)
print "Dataset is now of size: %d x %d"%(reduced_mnist.shape)
img_idx = 5
reconstructed_img = np.dot(reduced_mnist[img_idx,:], ae.coefs_[1]) + ae.intercepts_[1]
original_img = mnist[img_idx,:]
# subplot for original image
ax1 = plt.subplot(1,2,1)
ax1.imshow(np.reshape(original_img, (28, 28)), cmap='Greys_r')
ax1.set_title("Original Digit")
# subplot for reconstruction
ax2 = plt.subplot(1,2,2)
ax2.imshow(np.reshape(reconstructed_img, (28, 28)), cmap='Greys_r')
ax2.set_title("Reconstruction")
plt.show()
# two components to show
comp1 = 0
comp2 = 150
# subplot
ax1 = plt.subplot(1,2,1)
filter1 = W[:, comp1]
ax1.imshow(np.reshape(filter1, (28, 28)), cmap='Greys_r')
# subplot
ax2 = plt.subplot(1,2,2)
filter2 = W[:, comp2]
ax2.imshow(np.reshape(filter2, (28, 28)), cmap='Greys_r')
plt.show()
Explanation: 2.3 SciKit Learn Version
We can hack the Scikit-Learn Regression neural network into an Autoencoder by feeding it the data back as the labels...
End of explanation
# set the random number generator for reproducability
np.random.seed(39)
# define the dimensionality of the hidden rep.
n_components = 200
# Randomly initialize the Beta vector
W = np.random.uniform(low=-4 * np.sqrt(6. / (n_components + mnist.shape[1])),\
high=4 * np.sqrt(6. / (n_components + mnist.shape[1])), size=(mnist.shape[1], n_components))
# Initialize the step-size
alpha = .01
# Initialize the gradient
grad = np.infty
# Set the tolerance
tol = 1e-8
# Initialize error
old_error = 0
error = [np.infty]
batch_size = 250
### train with stochastic gradients
start_time = time.time()
iter_idx = 1
# loop until gradient updates become small
while (alpha*np.linalg.norm(grad) > tol) and (iter_idx < 300):
for batch_idx in xrange(mnist.shape[0]/batch_size):
x = mnist[batch_idx*batch_size:(batch_idx+1)*batch_size, :]
# add noise to features
x_corrupt = np.multiply(x, np.random.binomial(n=1, p=.8, size=x.shape))
pre_act = np.dot(x_corrupt, W)
h = logistic(pre_act)
x_recon = np.dot(h, W.T)
# compute gradient
diff = x - x_recon
grad = -2.*(np.dot(diff.T, h) + np.dot(np.multiply(np.dot(diff, W), logistic_derivative(pre_act)).T, x_corrupt).T)
# NOTICE: during the 'backward pass', use the uncorrupted features
# update parameters
W = W - alpha/batch_size * grad
# track the error
if iter_idx % 25 == 0:
old_error = error[-1]
diff = mnist - np.dot(logistic(np.dot(mnist, W)), W.T)
recon_error = np.mean( np.sum(diff**2, 1) )
error.append(recon_error)
print "Epoch %d, Reconstruction Error: %.3f" %(iter_idx, recon_error)
iter_idx += 1
end_time = time.time()
print
print "Training ended after %i iterations, taking a total of %.2f seconds." %(iter_idx, end_time-start_time)
print "Final Reconstruction Error: %.2f" %(error[-1])
reduced_mnist = np.dot(mnist, W)
print "Dataset is now of size: %d x %d"%(reduced_mnist.shape)
img_idx = 5
reconstructed_img = np.dot(logistic(reduced_mnist[img_idx,:]), W.T)
original_img = mnist[img_idx,:]
# subplot for original image
ax1 = plt.subplot(1,2,1)
ax1.imshow(np.reshape(original_img, (28, 28)), cmap='Greys_r')
ax1.set_title("Original Painting")
# subplot for reconstruction
ax2 = plt.subplot(1,2,2)
ax2.imshow(np.reshape(reconstructed_img, (28, 28)), cmap='Greys_r')
ax2.set_title("Reconstruction")
plt.show()
# two components to show
comp1 = 0
comp2 = 150
# subplot
ax1 = plt.subplot(1,2,1)
filter1 = W[:, comp1]
ax1.imshow(np.reshape(filter1, (28, 28)), cmap='Greys_r')
# subplot
ax2 = plt.subplot(1,2,2)
filter2 = W[:, comp2]
ax2.imshow(np.reshape(filter2, (28, 28)), cmap='Greys_r')
plt.show()
Explanation: 2.4 Denoising Autoencoder (DAE)
Lastly, we are going to examine an extension to the Autoencoder called a Denoising Autoencoder (DAE). It has the following loss function: $$\mathcal{L}_{\text{DAE}} = \sum_{i=1}^{N} (\mathbf{x}_{i} - f((\hat{\boldsymbol{\zeta}} \odot \mathbf{x}_{i})\mathbf{W})\mathbf{W}^{T})^{2} \ \text{ where } \hat{\boldsymbol{\zeta}} \sim \text{Bernoulli}(p).$$ In words, what we're doing is drawing a Bernoulli (i.e. binary) matrix the same size as the input features, and feeding in a corrupted version of $\mathbf{x}_{i}$. The Autoencoder, then, must try to recreate the original data from a lossy representation. This has the effect of forcing the Autoencoder to use features that better generalize.
Let's make the simple change that implements a DAE below...
End of explanation
from sklearn.datasets import fetch_olivetti_faces
faces_dataset = fetch_olivetti_faces(shuffle=True)
faces = faces_dataset.data # 400 flattened 64x64 images
person_ids = faces_dataset.target # denotes the identity of person (40 total)
print "Dataset size: %d x %d" %(faces.shape)
print "And the images look like this..."
plt.imshow(np.reshape(faces[200,:], (64, 64)), cmap='Greys_r')
plt.show()
Explanation: When training larger autoencoders, you'll see filters that look like these...
Regular Autoencoder:
Denoising Autoencoder:
<span style="color:red">STUDENT ACTIVITY (until end of session)</span>
Your task is to reproduce the faces experiment from the previous session but using an Autoencoder instead of PCA
End of explanation
### Your code goes here ###
# train Autoencoder model on 'faces'
###########################
print "Training took a total of %.2f seconds." %(end_time-start_time)
print "Final reconstruction error: %.2f%%" %(recon_error)
print "Dataset is now of size: %d x %d"%(faces_reduced.shape)
Explanation: This dataset contains 400 64x64 pixel images of 40 people each exhibiting 10 facial expressions. The images are in gray-scale, not color, and therefore flattened vectors contain 4096 dimensions.
<span style="color:red">Subtask 1: Run (Regular) Autoencoder</span>
End of explanation
### Your code goes here ###
# Use learned transformation matrix to project back to the original 4096-dimensional space
# Remember you need to use np.reshape()
###########################
Explanation: <span style="color:red">Subtask 2: Reconstruct an image</span>
End of explanation
### Your code goes here ###
###########################
Explanation: <span style="color:red">Subtask 3: Train a Denoising Autoencoder</span>
End of explanation
### Your code goes here ###
# Run AE for 2 components
# Generate plot
# Bonus: color the scatter plot according to the person_ids to see if any structure can be seen
###########################
Explanation: <span style="color:red">Subtask 4: Generate a 2D scatter plot from both models</span>
End of explanation
### Your code goes here ###
# Run PCA but add noise to the input first
###########################
Explanation: <span style="color:red">Subtask 5: Train a denoising version of PCA and test its performance</span>
End of explanation |
1,839 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem
Step2: Unit Test
The following unit test is expected to fail until you solve the challenge. | Python Code:
def list_of_chars(list_chars):
Returns a list of characters in reverse order.
Takes a list of characters, if the list is not None, returns the list in
reverse order
Parameters
--------------
Input:
list_chars: list
a list of single character strings
Output:
list_chars, reversed in place
a list of single character strings in reverse order from the input
    if list_chars is None:
        return list_chars
    # reverse in place with two pointers (the constraints rule out the slice operator)
    left, right = 0, len(list_chars) - 1
    while left < right:
        list_chars[left], list_chars[right] = list_chars[right], list_chars[left]
        left, right = left + 1, right - 1
    return list_chars
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement a function to reverse a string (a list of characters), in-place.
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
Can I assume the string is ASCII?
Yes
Note: Unicode strings could require special handling depending on your language
Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function?
Correct
Since Python string are immutable, can I use a list of characters instead?
Yes
Test Cases
None -> None
[''] -> ['']
['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f']
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
# %load test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self):
assert_equal(list_of_chars(None), None)
assert_equal(list_of_chars(['']), [''])
assert_equal(list_of_chars(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def main():
test = TestReverse()
test.test_reverse()
if __name__ == '__main__':
main()
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation |
1,840 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
automaton.accessible
Create a new automaton from the accessible part of the input, i.e., the subautomaton whose states can be reached from an initial state.
Preconditions
Step1: The following automaton has one unreachable state
Step2: Calling accessible returns a copy of the automaton without non-accessible states | Python Code:
import vcsn
Explanation: automaton.accessible
Create a new automaton from the accessible part of the input, i.e., the subautomaton whose states can be reached from an initial state.
Preconditions:
- None
Postconditions:
- Result.is_accessible()
See also:
- automaton.is_accessible
- automaton.trim
Examples
End of explanation
%%automaton a
context = "lal_char(abc), b"
$ -> 0
0 -> 1 a
1 -> $
2 -> 0 a
1 -> 3 a
a.is_accessible()
Explanation: The following automaton has one unreachable state:
End of explanation
a.accessible()
a.accessible().is_accessible()
Explanation: Calling accessible returns a copy of the automaton without non-accessible states:
End of explanation |
1,841 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Please find torch implementation of this notebook here
Step11: Data
As data, we use the book "The Time Machine" by H G Wells,
preprocessed using the code in this colab.
Step13: Model
We fit an unconditional RNN for language modeling (i.e., not vec2seq or seq2seq). Following the D2L notation, the model has the form
$$\begin{align}
H_t &= \phi(X_t W_{xh} + H_{t-1} W_{hh} + b_h) \
O_t &= H_t W_{hq} + b_q
\end{align}
$$
where $X_t$ is the $(n,d)$ matrix of (one-hot) inputs
(for batch size $n$ and vocabulary size $d$),
$H_t$ is the $(n,h)$ matrix of hidden states
(for $h$ hidden states),
and $O_t$ is the $(n,q)$ matrix of output logits
(for $q$ output labels, often $q=d$).
Step15: Prediction (generation)
We pass in an initial prefix string that is not generated; it is only used to "warm-up" the hidden state. Specifically, we update the hidden state given the observed prefix, but don't generate anything. After that, for each of the T steps, we compute the (1,V) output tensor, pick the argmax index, and append it to the output. Finally, we convert the indices to a readable token sequence of size (1,T). (Note that this is a greedy, deterministic procedure.)
Step17: Training
To ensure the gradient doesn't blow up when doing backpropagation through many layers, we use gradient clipping, which corresponds to the update
$$
g := \min\left(1, \frac{\theta}{\lVert g \rVert}\right) g
$$
Step28: The training step is fairly standard, except for the use of gradient clipping, and the issue of the hidden state.
If the data iterator uses random ordering of the sequences, we need to initialize the hidden state for each minibatch. However, if the data iterator uses sequential ordering, we only initialize the hidden state at the very beginning of the process. In the latter case, the hidden state will depend on the value at the previous minibatch.
The state vector may be a tensor or a tuple, depending on what kind of RNN we are using. In addition, the parameter updater can be an optax optimizer, or a simpler custom sgd optimizer.
Step29: The main training function is fairly standard.
The loss function is per-symbol cross-entropy, $-\log q(x_t)$, where $q$ is the model prediction from the RNN. Since we compute the average loss across time steps within a batch, we are computing $-\frac{1}{T} \sum_{t=1}^T \log p(x_t|x_{1:t-1})$, the exponential of which is the perplexity.
Step33: Creating a Flax module
We now show how to create an RNN as a module, which is faster than our pure Python implementation.
While Flax has cells for more advanced recurrent models, it does not have a basic RNNCell. Therefore, we create an RNNCell similar to those defined in flax.linen.recurrent here.
Step34: Now, we create an RNN module to call the RNNCell for each step.
Step35: Now we update the state with a random one-hot array of inputs.
Step37: Now we define our model. It consists of an RNN Layer followed by a dense layer.
Step38: Test the untrained model.
Step39: Train it. The results are similar to the 'from scratch' implementation, but much faster. | Python Code:
import jax.numpy as jnp
import matplotlib.pyplot as plt
import math
from IPython import display
import jax
try:
import flax.linen as nn
except ModuleNotFoundError:
%pip install -qq flax
import flax.linen as nn
from flax import jax_utils
try:
import optax
except ModuleNotFoundError:
%pip install -qq optax
import optax
import collections
import re
import random
import os
import requests
import hashlib
import time
import functools
random.seed(0)
rng = jax.random.PRNGKey(0)
!mkdir figures # for saving plots
Explanation: Please find torch implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/15/rnn_torch.ipynb
<a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks-d2l/rnn_jax.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Recurrent neural networks
We show how to implement RNNs from scratch.
Based on sec 8.5 of http://d2l.ai/chapter_recurrent-neural-networks/rnn-scratch.html.
End of explanation
class SeqDataLoader:
    """An iterator to load sequence data."""
def __init__(self, batch_size, num_steps, use_random_iter, max_tokens):
if use_random_iter:
self.data_iter_fn = seq_data_iter_random
else:
self.data_iter_fn = seq_data_iter_sequential
self.corpus, self.vocab = load_corpus_time_machine(max_tokens)
self.batch_size, self.num_steps = batch_size, num_steps
def __iter__(self):
return self.data_iter_fn(self.corpus, self.batch_size, self.num_steps)
class Vocab:
    """Vocabulary for text."""
def __init__(self, tokens=None, min_freq=0, reserved_tokens=None):
if tokens is None:
tokens = []
if reserved_tokens is None:
reserved_tokens = []
# Sort according to frequencies
counter = count_corpus(tokens)
self.token_freqs = sorted(counter.items(), key=lambda x: x[1], reverse=True)
# The index for the unknown token is 0
self.unk, uniq_tokens = 0, ["<unk>"] + reserved_tokens
uniq_tokens += [token for token, freq in self.token_freqs if freq >= min_freq and token not in uniq_tokens]
self.idx_to_token, self.token_to_idx = [], dict()
for token in uniq_tokens:
self.idx_to_token.append(token)
self.token_to_idx[token] = len(self.idx_to_token) - 1
def __len__(self):
return len(self.idx_to_token)
def __getitem__(self, tokens):
if not isinstance(tokens, (list, tuple)):
return self.token_to_idx.get(tokens, self.unk)
return [self.__getitem__(token) for token in tokens]
def to_tokens(self, indices):
if not isinstance(indices, (list, tuple)):
return self.idx_to_token[indices]
return [self.idx_to_token[index] for index in indices]
def tokenize(lines, token="word"):
    """Split text lines into word or character tokens."""
if token == "word":
return [line.split() for line in lines]
elif token == "char":
return [list(line) for line in lines]
else:
print("ERROR: unknown token type: " + token)
def count_corpus(tokens):
    """Count token frequencies."""
# Here `tokens` is a 1D list or 2D list
if len(tokens) == 0 or isinstance(tokens[0], list):
# Flatten a list of token lists into a list of tokens
tokens = [token for line in tokens for token in line]
return collections.Counter(tokens)
def seq_data_iter_random(corpus, batch_size, num_steps):
    """Generate a minibatch of subsequences using random sampling."""
# Start with a random offset (inclusive of `num_steps - 1`) to partition a
# sequence
corpus = corpus[random.randint(0, num_steps - 1) :]
# Subtract 1 since we need to account for labels
num_subseqs = (len(corpus) - 1) // num_steps
# The starting indices for subsequences of length `num_steps`
initial_indices = list(range(0, num_subseqs * num_steps, num_steps))
# In random sampling, the subsequences from two adjacent random
# minibatches during iteration are not necessarily adjacent on the
# original sequence
random.shuffle(initial_indices)
def data(pos):
# Return a sequence of length `num_steps` starting from `pos`
return corpus[pos : pos + num_steps]
num_batches = num_subseqs // batch_size
for i in range(0, batch_size * num_batches, batch_size):
# Here, `initial_indices` contains randomized starting indices for
# subsequences
initial_indices_per_batch = initial_indices[i : i + batch_size]
X = [data(j) for j in initial_indices_per_batch]
Y = [data(j + 1) for j in initial_indices_per_batch]
yield jnp.array(X), jnp.array(Y)
def seq_data_iter_sequential(corpus, batch_size, num_steps):
    """Generate a minibatch of subsequences using sequential partitioning."""
# Start with a random offset to partition a sequence
offset = random.randint(0, num_steps)
num_tokens = ((len(corpus) - offset - 1) // batch_size) * batch_size
Xs = jnp.array(corpus[offset : offset + num_tokens])
Ys = jnp.array(corpus[offset + 1 : offset + 1 + num_tokens])
Xs, Ys = Xs.reshape(batch_size, -1), Ys.reshape(batch_size, -1)
num_batches = Xs.shape[1] // num_steps
for i in range(0, num_steps * num_batches, num_steps):
X = Xs[:, i : i + num_steps]
Y = Ys[:, i : i + num_steps]
yield X, Y
def download(name, cache_dir=os.path.join("..", "data")):
    """Download a file inserted into DATA_HUB, return the local filename."""
assert name in DATA_HUB, f"{name} does not exist in {DATA_HUB}."
url, sha1_hash = DATA_HUB[name]
os.makedirs(cache_dir, exist_ok=True)
fname = os.path.join(cache_dir, url.split("/")[-1])
if os.path.exists(fname):
sha1 = hashlib.sha1()
with open(fname, "rb") as f:
while True:
data = f.read(1048576)
if not data:
break
sha1.update(data)
if sha1.hexdigest() == sha1_hash:
return fname # Hit cache
print(f"Downloading {fname} from {url}...")
r = requests.get(url, stream=True, verify=True)
with open(fname, "wb") as f:
f.write(r.content)
return fname
def read_time_machine():
    """Load the time machine dataset into a list of text lines."""
with open(download("time_machine"), "r") as f:
lines = f.readlines()
return [re.sub("[^A-Za-z]+", " ", line).strip().lower() for line in lines]
def load_corpus_time_machine(max_tokens=-1):
    """Return token indices and the vocabulary of the time machine dataset."""
lines = read_time_machine()
tokens = tokenize(lines, "char")
vocab = Vocab(tokens)
# Since each text line in the time machine dataset is not necessarily a
# sentence or a paragraph, flatten all the text lines into a single list
corpus = [vocab[token] for line in tokens for token in line]
if max_tokens > 0:
corpus = corpus[:max_tokens]
return corpus, vocab
def load_data_time_machine(batch_size, num_steps, use_random_iter=False, max_tokens=10000):
    """Return the iterator and the vocabulary of the time machine dataset."""
data_iter = SeqDataLoader(batch_size, num_steps, use_random_iter, max_tokens)
return data_iter, data_iter.vocab
DATA_HUB = dict()
DATA_URL = "http://d2l-data.s3-accelerate.amazonaws.com/"
DATA_HUB["time_machine"] = (DATA_URL + "timemachine.txt", "090b5e7e70c295757f55df93cb0a180b9691891a")
batch_size, num_steps = 32, 35
train_iter, vocab = load_data_time_machine(batch_size, num_steps)
Explanation: Data
As data, we use the book "The Time Machine" by H G Wells,
preprocessed using the code in this colab.
End of explanation
# Create the initial parameters
def get_params(vocab_size, num_hiddens, init_rng):
num_inputs = num_outputs = vocab_size
def normal(shape, rng):
return jax.random.normal(rng, shape=shape) * 0.01
hidden_rng, out_rng = jax.random.split(init_rng)
# Hidden layer parameters
W_xh = normal((num_inputs, num_hiddens), hidden_rng)
W_hh = normal((num_hiddens, num_hiddens), hidden_rng)
b_h = jnp.zeros(num_hiddens)
# Output layer parameters
W_hq = normal((num_hiddens, num_outputs), out_rng)
b_q = jnp.zeros(num_outputs)
params = [W_xh, W_hh, b_h, W_hq, b_q]
return params
# Create the initial state
# We assume this is a tuple of one element (later we will use longer tuples)
def init_rnn_state(batch_size, num_hiddens):
return (jnp.zeros((batch_size, num_hiddens)),)
# Forward function.
# Input sequence is (T,B,V), where T is length of the sequence, B is batch size, V is vocab size.
# We iterate over each time step, and process the batch (for that timestep in parallel).
# Output sequence is (T*B, V), since we concatenate all the time steps.
# We also return the final state, so we can process the next subsequence.
@jax.jit
def rnn(params, state, inputs):
# Here `inputs` shape: (`num_steps`, `batch_size`, `vocab_size`)
W_xh, W_hh, b_h, W_hq, b_q = params
(H,) = state
outputs = []
# Shape of `X`: (`batch_size`, `vocab_size`)
for X in inputs:
H = jnp.tanh((X @ W_xh) + (H @ W_hh) + b_h)
Y = H @ W_hq + b_q
outputs.append(Y)
return jnp.concatenate(outputs, axis=0), (H,)
# Make the model class
# Input X to apply is (B,T) matrix of integers (from vocab encoding).
# We transpose this to (T,B) then one-hot encode to (T,B,V), where V is vocab.
# The result is passed to the forward function.
# (We define the forward function as an argument, so we can change it later.)
class RNNModelScratch:
    """An RNN model implemented from scratch."""
def __init__(self, vocab_size, num_hiddens, get_params, init_state, forward_fn):
self.vocab_size, self.num_hiddens = vocab_size, num_hiddens
self.init_state, self.get_params = init_state, get_params
self.forward_fn = forward_fn
def apply(self, params, state, X):
X = jax.nn.one_hot(X.T, num_classes=self.vocab_size)
return self.forward_fn(params, state, X)
def begin_state(self, batch_size):
return self.init_state(batch_size, self.num_hiddens)
def init_params(self, rng):
return self.get_params(self.vocab_size, self.num_hiddens, rng)
num_hiddens = 512
net = RNNModelScratch(len(vocab), num_hiddens, get_params, init_rnn_state, rnn)
X = jnp.arange(10).reshape((2, 5)) # batch 2, sequence length is 5
params = net.init_params(rng)
state = net.begin_state(X.shape[0])
print(len(state)) # length 1
print(state[0].shape) # (2,512)
Y, new_state = net.apply(params, state, X)
print(len(vocab)) # 28
print(Y.shape) # (2x5, 28)
Explanation: Model
We fit an unconditional RNN for language modeling (i.e., not vec2seq or seq2seq). Following the D2L notation, the model has the form
$$\begin{align}
H_t &= \phi(X_t W_{xh} + H_{t-1} W_{hh} + b_h) \
O_t &= H_t W_{hq} + b_q
\end{align}
$$
where $X_t$ is the $(n,d)$ matrix of (one-hot) inputs
(for batch size $n$ and vocabulary size $d$),
$H_t$ is the $(n,h)$ matrix of hidden states
(for $h$ hidden states),
and $O_t$ is the $(n,q)$ matrix of output logits
(for $q$ output labels, often $q=d$).
End of explanation
def predict(prefix, num_preds, net, params, vocab):
    """Generate new characters following the `prefix`."""
state = net.begin_state(batch_size=1)
outputs = [vocab[prefix[0]]]
get_input = lambda: jnp.array([outputs[-1]]).reshape((1, 1))
for y in prefix[1:]: # Warm-up period
_, state = net.apply(params, state, get_input())
outputs.append(vocab[y])
for _ in range(num_preds): # Predict `num_preds` steps
y, state = net.apply(params, state, get_input())
y = y.reshape(-1, y.shape[-1])
outputs.append(int(y.argmax(axis=1).reshape(1)))
return "".join([vocab.idx_to_token[i] for i in outputs])
# sample 10 characters after the prefix.
# since the model is untrained, the results will be garbage.
predict("time traveller ", 10, net, params, vocab)
Explanation: Prediction (generation)
We pass in an initial prefix string that is not generated; it is only used to "warm-up" the hidden state. Specifically, we update the hidden state given the observed prefix, but don't generate anything. After that, for each of the T steps, we compute the (1,V) output tensor, pick the argmax index, and append it to the output. Finally, we convert the indices to a readable token sequence of size (1,T). (Note that this is a greedy, deterministic procedure.)
End of explanation
@jax.jit
def grad_clipping(grads, theta):
    """Clip the gradient."""
def grad_update(grads):
return jax.tree_map(lambda g: g * theta / norm, grads)
norm = jnp.sqrt(sum(jax.tree_util.tree_leaves(jax.tree_map(lambda x: jnp.sum(x**2), grads))))
# Update gradient if norm > theta
# This is jax.jit compatible
grads = jax.lax.cond(norm > theta, grad_update, lambda g: g, grads)
return grads
Explanation: Training
To ensure the gradient doesn't blow up when doing backpropagation through many layers, we use gradient clipping, which corresponds to the update
$$
g := \min(1, \theta /||g||) g
$$
where $\theta$ is the scaling parameter, and $g$ is the gradient vector.
End of explanation
class Animator:
    """For plotting data in animation."""
def __init__(
self,
xlabel=None,
ylabel=None,
legend=None,
xlim=None,
ylim=None,
xscale="linear",
yscale="linear",
fmts=("-", "m--", "g-.", "r:"),
nrows=1,
ncols=1,
figsize=(3.5, 2.5),
):
# Incrementally plot multiple lines
if legend is None:
legend = []
display.set_matplotlib_formats("svg")
self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize)
if nrows * ncols == 1:
self.axes = [
self.axes,
]
# Use a lambda function to capture arguments
self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
self.X, self.Y, self.fmts = None, None, fmts
def add(self, x, y):
# Add multiple data points into the figure
if not hasattr(y, "__len__"):
y = [y]
n = len(y)
if not hasattr(x, "__len__"):
x = [x] * n
if not self.X:
self.X = [[] for _ in range(n)]
if not self.Y:
self.Y = [[] for _ in range(n)]
for i, (a, b) in enumerate(zip(x, y)):
if a is not None and b is not None:
self.X[i].append(a)
self.Y[i].append(b)
self.axes[0].cla()
for x, y, fmt in zip(self.X, self.Y, self.fmts):
self.axes[0].plot(x, y, fmt)
self.config_axes()
display.display(self.fig)
display.clear_output(wait=True)
class Timer:
    """Record multiple running times."""
def __init__(self):
self.times = []
self.start()
def start(self):
        """Start the timer."""
self.tik = time.time()
def stop(self):
        """Stop the timer and record the time in a list."""
self.times.append(time.time() - self.tik)
return self.times[-1]
def avg(self):
        """Return the average time."""
return sum(self.times) / len(self.times)
def sum(self):
        """Return the sum of time."""
return sum(self.times)
def cumsum(self):
        """Return the accumulated time."""
return jnp.array(self.times).cumsum().tolist()
class Accumulator:
    """For accumulating sums over `n` variables."""
def __init__(self, n):
self.data = [0.0] * n
def add(self, *args):
self.data = [a + float(b) for a, b in zip(self.data, args)]
def reset(self):
self.data = [0.0] * len(self.data)
def __getitem__(self, idx):
return self.data[idx]
def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):
    """Set the axes for matplotlib."""
axes.set_xlabel(xlabel)
axes.set_ylabel(ylabel)
axes.set_xscale(xscale)
axes.set_yscale(yscale)
axes.set_xlim(xlim)
axes.set_ylim(ylim)
if legend:
axes.legend(legend)
axes.grid()
@jax.jit
def sgd(params, grads, lr, batch_size):
    """Minibatch stochastic gradient descent."""
params = jax.tree_map(lambda p, g: p - lr * g / batch_size, params, grads)
return params
@jax.jit
def train_step(apply_fn, loss_fn, params, state, X, Y):
def loss(params, state, X, Y):
y = Y.T.reshape(-1) # (B,T) -> (T,B)
y_hat, state = apply_fn(params, state, X)
y_hat = y_hat.reshape(-1, y_hat.shape[-1])
y_one_hot = jax.nn.one_hot(y, num_classes=y_hat.shape[-1])
return loss_fn(y_hat, y_one_hot).mean(), state
grad_fn = jax.value_and_grad(loss, has_aux=True)
(l, state), grads = grad_fn(params, state, X, Y)
grads = grad_clipping(grads, 1)
return l, state, grads
def train_epoch(net, params, train_iter, loss, updater, use_random_iter):
state, timer = None, Timer()
metric = Accumulator(2) # Sum of training loss, no. of tokens
if isinstance(updater, optax.GradientTransformation):
updater_state = updater.init(params)
# Convert to jax Partial functions for jax.jit compatibility
apply_fn = jax.tree_util.Partial(net.apply)
loss_fn = jax.tree_util.Partial(loss)
for X, Y in train_iter:
if state is None or use_random_iter:
# Initialize `state` when either it is the first iteration or
# using random sampling
state = net.begin_state(batch_size=X.shape[0])
l, state, grads = train_step(apply_fn, loss_fn, params, state, X, Y)
if isinstance(updater, optax.GradientTransformation):
updates, updater_state = updater.update(grads, updater_state)
params = optax.apply_updates(params, updates)
else:
# batch_size=1 since the `mean` function has been invoked
params = updater(params, grads, batch_size=1)
metric.add(l * Y.size, Y.size)
return params, math.exp(metric[0] / metric[1]), metric[1] / timer.stop()
Explanation: The training step is fairly standard, except for the use of gradient clipping, and the issue of the hidden state.
If the data iterator uses random ordering of the sequences, we need to initialize the hidden state for each minibatch. However, if the data iterator uses sequential ordering, we only initialize the hidden state at the very beginning of the process. In the latter case, the hidden state will depend on the value at the previous minibatch.
The state vector may be a tensor or a tuple, depending on what kind of RNN we are using. In addition, the parameter updater can be an optax optimizer, or a simpler custom sgd optimizer.
End of explanation
def train(net, params, train_iter, vocab, lr, num_epochs, use_random_iter=False):
loss = optax.softmax_cross_entropy
animator = Animator(xlabel="epoch", ylabel="perplexity", legend=["train"], xlim=[10, num_epochs])
# Initialize
if isinstance(net, nn.Module):
updater = optax.sgd(lr)
else:
updater = lambda params, grads, batch_size: sgd(params, grads, lr, batch_size)
num_preds = 50
predict_ = lambda prefix: predict(prefix, num_preds, net, params, vocab)
# Train and predict
for epoch in range(num_epochs):
params, ppl, speed = train_epoch(net, params, train_iter, loss, updater, use_random_iter)
if (epoch + 1) % 10 == 0:
# Prediction takes time on the flax model
# print(predict_('time traveller'))
animator.add(epoch + 1, [ppl])
device = jax.default_backend()
print(f"perplexity {ppl:.1f}, {speed:.1f} tokens/sec on {device}")
print(predict_("time traveller"))
print(predict_("traveller"))
return params
random.seed(0)
num_epochs, lr = 500, 1
num_hiddens = 512
net = RNNModelScratch(len(vocab), num_hiddens, get_params, init_rnn_state, rnn)
params = net.init_params(rng)
params = train(net, params, train_iter, vocab, lr, num_epochs)
num_preds = 100
predict_ = lambda prefix: predict(prefix, num_preds, net, params, vocab)
print(predict_("time traveller"))
print(predict_("the"))
num_preds = 500
predict_ = lambda prefix: predict(prefix, num_preds, net, params, vocab)
print(predict_("the"))
Explanation: The main training function is fairly standard.
The loss function is per-symbol cross-entropy, $-\log q(x_t)$, where $q$ is the model prediction from the RNN. Since we compute the average loss across time steps within a batch, we are computing $-\frac{1}{T} \sum_{t=1}^T \log p(x_t|x_{1:t-1})$. The exponential of this is the perplexity (ppl). We plot this metric during training, since it is independent of document length. In addition, we print the MAP sequence prediction following the suffix 'time traveller', to get a sense of what the model is doing.
End of explanation
from typing import Any, Callable, Tuple
PRNGKey = Any
Shape = Tuple[int]
Dtype = Any
Array = Any
class RNNCell(nn.recurrent.RNNCellBase):
    """RNN Cell."""
activation_fn: Callable[..., Any] = nn.activation.tanh
kernel_init: Callable[[PRNGKey, Shape, Dtype], Array] = nn.linear.default_kernel_init
recurrent_kernel_init: Callable[[PRNGKey, Shape, Dtype], Array] = nn.initializers.orthogonal()
bias_init: Callable[[PRNGKey, Shape, Dtype], Array] = nn.initializers.zeros
@nn.compact
def __call__(self, carry, inputs):
        """RNN Cell.
        Args:
          carry: the hidden state of the RNN cell,
            initialized using `RNNCell.initialize_carry`.
          inputs: an ndarray with the input for the current time step.
            All dimensions except the final are considered batch dimensions.
        Returns:
          A tuple with the new carry and the output.
        """
h = carry
hidden_features = h.shape[-1]
# Dense layer applied to the previous state
dense_h = functools.partial(
nn.Dense,
features=hidden_features,
use_bias=False,
kernel_init=self.recurrent_kernel_init,
bias_init=self.bias_init,
)
# Dense layer applied to the input, i
dense_i = functools.partial(
nn.Dense, features=hidden_features, use_bias=True, kernel_init=self.kernel_init, bias_init=self.bias_init
)
new_h = self.activation_fn(dense_i()(inputs) + dense_h()(h))
return new_h, new_h
@staticmethod
def initialize_carry(rng, batch_dims, size, init_fn=nn.initializers.zeros):
        """Initialize the RNN cell carry.
        Args:
          rng: random number generator passed to the init_fn.
          batch_dims: a tuple providing the shape of the batch dimensions.
          size: the size or number of features of the memory.
          init_fn: initializer function for the carry.
        Returns:
          An initialized carry for the given RNN cell.
        """
mem_shape = batch_dims + (size,)
return init_fn(rng, mem_shape)
Explanation: Creating a Flax module
We now show how to create an RNN as a module, which is faster than our pure Python implementation.
While Flax has cells for more advanced recurrent models, it does not have a basic RNNCell. Therefore, we create an RNNCell similar to those defined in flax.linen.recurrent here.
End of explanation
class RNN(nn.Module):
@functools.partial(
nn.transforms.scan, variable_broadcast="params", in_axes=0, out_axes=0, split_rngs={"params": False}
)
@nn.compact
def __call__(self, state, x):
return RNNCell()(state, x)
@staticmethod
def initialize_carry(rng, batch_dims, size):
return RNNCell.initialize_carry(rng, batch_dims, size)
num_hiddens = 256
rnn_layer = RNN()
batch_size, num_steps = 32, 35
state = rnn_layer.initialize_carry(rng, (batch_size,), num_hiddens)
state.shape
Explanation: Now, we create an RNN module to call the RNNCell for each step.
End of explanation
X = jax.random.normal(rng, shape=(num_steps, batch_size, len(vocab)))
params = rnn_layer.init(rng, state, X)
state_new, Y = rnn_layer.apply(params, state, X)
Y.shape, state_new.shape
Explanation: Now we update the state with a random one-hot array of inputs.
End of explanation
class RNNModel(nn.Module):
    """The RNN model."""
rnn: nn.Module
vocab_size: int
num_hiddens: int
bidirectional: bool = False
def setup(self):
# If the RNN is bidirectional (to be introduced later),
# `num_directions` should be 2, else it should be 1.
if not self.bidirectional:
self.num_directions = 1
else:
self.num_directions = 2
@nn.compact
def __call__(self, state, inputs):
X = jax.nn.one_hot(inputs.T, num_classes=self.vocab_size)
state, Y = self.rnn(state, X)
output = nn.Dense(self.vocab_size)(Y)
return output, state
def begin_state(self, batch_size=1):
# Use fixed random key since default state init fn is just `zeros`.
return self.rnn.initialize_carry(jax.random.PRNGKey(0), (batch_size,), num_hiddens)
Explanation: Now we define our model. It consists of an RNN Layer followed by a dense layer.
End of explanation
net = RNNModel(rnn=rnn_layer, vocab_size=len(vocab), num_hiddens=num_hiddens)
params = net.init(rng, state, jnp.ones([batch_size, num_steps]))
predict("time traveller", 50, net, params, vocab)
Explanation: Test the untrained model.
End of explanation
num_epochs, lr = 500, 1
params = train(net, params, train_iter, vocab, lr, num_epochs)
Explanation: Train it. The results are similar to the 'from scratch' implementation, but much faster.
End of explanation |
1,842 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kevitsa DC Forward Simulation
Step1: Setup
We have stored the data and simulation mesh so that they can just be downloaded and used here
Step2: Model
This is a synthetic model based on geologic surfaces interpreted from seismic data over the Kevitsa deposit in Finland. A synthetic 3D conductivity model is generated, and the figure below shows a conductivity section across the mineralized zone of interest. The near-surface conductor on the left-hand side corresponds to a sedimentary unit, and the embedded conductor on the right-hand side indicates the conductive mineralized zone.
Step4: Survey
Direct current (DC) resistivity and IP surveys were performed with the Titan24 system using a pole-dipole array. We use the same survey setup on line 12150N, which has 61 current sources (poles). The largest offset between the current pole and the potential electrodes is around 2 km. We read in the field data using the script below and form a DC survey object that we can pass to our DC problem.
Step5: Problem
This is the physics behind the DC resistivity survey. Here we solve Poisson's equation and compute the electric potential in our discretized domain. The survey information is required to run the simulation.
Step6: Forward Simulation
Everything is set. Now we can run the simulation by passing the conductivity model to the DC problem.
Step7: Plot the Data
We are going to plot the simulated data for each current pole. By moving the slider bar below, you can explore the data at different current pole locations. We provide both voltage and apparent resistivity.
Step8: Plot the currents
Did you understand the simulated data and why they change? Here we show how currents flow in the earth. Similarly, you can move the slider bar to see how the current pattern changes depending on the source location.
Step9: Plot Pseudo section
We are going to plot the simulated data for each current pole. By moving the slider bar below, you can explore the data at different current pole locations.
Step10: Plot Field data (pseudo-section)
Let's see what the field data look like on this line (12150N). Are they similar to our simulated data? | Python Code:
import cPickle as pickle
from SimPEG import EM, Mesh, Utils, Maps
from SimPEG.Survey import Data
%pylab inline
import numpy as np
from pymatsolver import PardisoSolver
from matplotlib.colors import LogNorm
from ipywidgets import interact, IntSlider
Explanation: Kevitsa DC Forward Simulation
End of explanation
url = "https://storage.googleapis.com/simpeg/kevitsa_synthetic/"
files = ['dcipdata_12150N.txt', 'dc_mesh.txt', 'dc_sigma.txt', 'dc_topo.txt']
keys = ['data', 'mesh', 'sigma', 'topo']
downloads = Utils.download([url + f for f in files], folder='./KevitsaDC', overwrite=True)
downloads = dict(zip(keys, downloads))
mesh = Mesh.TensorMesh.readUBC(downloads["mesh"])
sigma = mesh.readModelUBC(downloads["sigma"])
topo = np.loadtxt(downloads["topo"])
dcipdata = np.loadtxt(downloads["data"])
actind = ~np.isnan(sigma)
mesh.plotGrid()
Explanation: Setup
We have stored the data and simulation mesh so that they can just be downloaded and used here
End of explanation
figsize(8, 4)
indy = 6
temp = 1./sigma.copy()
temp[~actind] = np.nan
out = mesh.plotSlice(temp, normal="Y", ind=indy, pcolorOpts={"norm": LogNorm(), "cmap":"jet_r"}, clim=(1e0, 1e3))
plt.ylim(-800, 250)
plt.xlim(5000, 11000)
plt.gca().set_aspect(2.)
plt.title(("y= %d m")%(mesh.vectorCCy[indy]))
cb = plt.colorbar(out[0], orientation="horizontal")
cb.set_label("Resistivity (Ohm-m)")
Explanation: Model
This is a synthetic model based on geologic surfaces interpreted from seismic data over the Kevitsa deposit in Finland. A synthetic 3D conductivity model is generated, and the figure below shows a conductivity section across the mineralized zone of interest. The near-surface conductor on the left-hand side corresponds to a sedimentary unit, and the embedded conductor on the right-hand side indicates the conductive mineralized zone.
End of explanation
def getGeometricFactor(locA, locB, locsM, locsN, eps = 0.01):
    """Geometric factor for a pole-dipole survey."""
MA = np.abs(locA[0] - locsM[:, 0])
MB = np.abs(locB[0] - locsM[:, 0])
NA = np.abs(locA[0] - locsN[:, 0])
NB = np.abs(locB[0] - locsN[:, 0])
geometric = 1./(2*np.pi) * (1/MA - 1/NA)
return geometric
A = dcipdata[:,:2]
B = dcipdata[:,2:4]
M = dcipdata[:,4:6]
N = dcipdata[:,6:8]
Elec_locs = np.vstack((A, B, M, N))
uniqElec = Utils.uniqueRows(Elec_locs)
nElec = len(uniqElec[1])
pts = np.c_[uniqElec[0][:,0], uniqElec[0][:,1]]
elec_topo = EM.Static.Utils.drapeTopotoLoc(mesh, pts[:,:2], actind=actind)
Elec_locsz = np.ones(Elec_locs.shape[0]) * np.nan
for iElec in range (nElec):
inds = np.argwhere(uniqElec[2] == iElec)
Elec_locsz[inds] = elec_topo[iElec,2]
Elec_locs = np.c_[Elec_locs, Elec_locsz]
nloc = int(Elec_locs.shape[0]/4)
A = Elec_locs[:nloc]
B = Elec_locs[nloc:2*nloc]
M = Elec_locs[2*nloc:3*nloc]
N = Elec_locs[3*nloc:4*nloc]
uniq = Utils.uniqueRows(np.c_[A, B])
nSrc = len(uniq[1])
mid_AB = A[:,0]
mid_MN = (M[:,0] + N[:,0]) * 0.5
mid_z = -abs(mid_AB - mid_MN) * 0.4
mid_x = abs(mid_AB + mid_MN) * 0.5
srcLists = []
appres = []
geometric = []
voltage = []
inds_data = []
for iSrc in range (nSrc):
inds = uniq[2] == iSrc
# TODO: y-location should be assigned ...
locsM = M[inds,:]
locsN = N[inds,:]
inds_data.append(np.arange(len(inds))[inds])
rx = EM.Static.DC.Rx.Dipole(locsM, locsN)
locA = uniq[0][iSrc,:3]
locB = uniq[0][iSrc,3:]
src = EM.Static.DC.Src.Pole([rx], locA)
# src = EM.Static.DC.Src.Dipole([rx], locA, locB)
geometric.append(getGeometricFactor(locA, locB, locsM, locsN))
appres.append(dcipdata[:,8][inds])
voltage.append(dcipdata[:,9][inds])
srcLists.append(src)
inds_data = np.hstack(inds_data)
geometric = np.hstack(geometric)
dobs_appres = np.hstack(appres)
dobs_voltage = np.hstack(voltage) * 1e-3
DCsurvey = EM.Static.DC.Survey(srcLists)
DCsurvey.dobs = dobs_voltage
Explanation: Survey
Direct current (DC) resistivity and IP surveys were performed with the Titan24 system using a pole-dipole array. We use the same survey setup on line 12150N, which has 61 current sources (poles). The largest offset between the current pole and the potential electrodes is around 2 km. We read in the field data using the script below and form a DC survey object that we can pass to our DC problem.
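For reference, the pole-dipole geometric factor implemented in getGeometricFactor above reduces to (assuming a unit source current, which is what the conversion to apparent resistivity in this notebook implies)
$$
G = \frac{1}{2\pi}\left(\frac{1}{r_{AM}} - \frac{1}{r_{AN}}\right), \qquad \rho_a = \frac{\Delta V}{G},
$$
where $r_{AM}$ and $r_{AN}$ are the distances from the current pole A to the potential electrodes M and N.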
End of explanation
m0 = np.ones(actind.sum())*np.log(1e-3)
actMap = Maps.InjectActiveCells(mesh, actind, np.log(1e-8))
mapping = Maps.ExpMap(mesh) * actMap
problem = EM.Static.DC.Problem3D_N(mesh, sigmaMap=mapping)
problem.Solver = PardisoSolver
if DCsurvey.ispaired:
DCsurvey.unpair()
problem.pair(DCsurvey)
Explanation: Problem
This is the physics behind the DC resistivity survey. Here we solve Poisson's equation and compute the electric potential in our discretized domain. The survey information is required to run the simulation.
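Concretely, for a point current source of strength $I$ at $\mathbf{r}_s$ in a medium of conductivity $\sigma$, the potential $\phi$ satisfies the standard DC resistivity equation (written here as a sketch of the continuous problem that the discretized Problem3D_N solves)
$$
\nabla \cdot \left( \sigma \nabla \phi \right) = -I\, \delta(\mathbf{r} - \mathbf{r}_s),
$$
subject to appropriate boundary conditions on the mesh.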
End of explanation
f = problem.fields(np.log(sigma)[actind])
dpred = DCsurvey.dpred(np.log(sigma)[actind], f=f)
appres = dpred / geometric
dcdata = Data(DCsurvey, v=dpred)
appresdata = Data(DCsurvey, v=appres)
Explanation: Forward Simulation
Everything is set. Now we can run the simulation by passing the conductivity model to the DC problem.
End of explanation
def vizdata(isrc):
fig = plt.figure(figsize = (7, 2))
src = srcLists[isrc]
rx = src.rxList[0]
data_temp = dcdata[src, rx]
appres_temp = appresdata[src, rx]
midx = (rx.locs[0][:,0] + rx.locs[1][:,0]) * 0.5
midz = (rx.locs[0][:,2] + rx.locs[1][:,2]) * 0.5
ax = plt.subplot(111)
ax_1 = ax.twinx()
ax.plot(midx, data_temp, 'k.-')
ax_1.plot(midx, appres_temp, 'r.-')
ax.set_xlim(5000, 11000)
ax.set_ylabel("Voltage")
ax_1.set_ylabel("$\\rho_a$ (Ohm-m)")
ax.grid(True)
plt.show()
interact(vizdata, isrc=(0, DCsurvey.nSrc-1, 1))
Explanation: Plot the Data
We are going to plot the simulated data for each current pole. By moving the slider bar below, you can explore the data at different current pole locations. We provide both voltage and apparent resistivity.
End of explanation
fig = plt.figure(figsize = (7, 1.5))
def vizJ(isrc):
indy = 6
src = srcLists[isrc]
rx = src.rxList[0]
out = mesh.plotSlice(f[src, 'j'], vType="E", normal="Y", view="vec", ind=indy, streamOpts={"color":"k"}, pcolorOpts={"norm": LogNorm(), "cmap":"viridis"}, clim=(1e-10, 1e-4))
plt.plot(src.loc[0], src.loc[1], 'ro')
plt.ylim(-800, 250)
plt.xlim(5000, 11000)
plt.gca().set_aspect(2.)
# plt.title(("y= %d m")%(mesh.vectorCCy[indy]))
plt.title("")
cb = plt.colorbar(out[0], orientation="horizontal")
cb.set_label("Current density (A/m$^2$)")
midx = (rx.locs[0][:,0] + rx.locs[1][:,0]) * 0.5
midz = (rx.locs[0][:,2] + rx.locs[1][:,2]) * 0.5
plt.plot(midx, midz, 'g.', ms=4)
plt.gca().get_xlim()
plt.show()
interact(vizJ, isrc=(0, DCsurvey.nSrc-1, 1))
Explanation: Plot the currents
Did you understand the simulated data and why they change? Here we show how currents flow in the earth. Similarly, you can move the slider bar to see how the current pattern changes depending on the source location.
End of explanation
vmin, vmax = 1, 1e4
appres = dpred/geometric
temp = appres.copy()
Utils.plot2Ddata(np.c_[mid_x[inds_data], mid_z[inds_data]], temp, ncontour=100, dataloc=True, scale="log", contourOpts={"vmin":np.log10(vmin), "vmax":np.log10(vmax)})
cb = plt.colorbar(out[0], orientation="horizontal", format="1e%.0f", ticks=np.linspace(np.log10(vmin), np.log10(vmax), 3))
cb.set_label("Resistivity (Ohm-m)")
# plt.title("Line 12150N")
Explanation: Plot Pseudo section
We are going to plot the simulated data for each current pole. By moving the slider bar below, you can explore the data at different current pole locations.
End of explanation
vmin, vmax = 1, 1e4
temp = dcipdata[:,8].copy()
temp[dcipdata[:,8]<vmin] = vmin
temp[dcipdata[:,8]>vmax] = vmax
out = Utils.plot2Ddata(np.c_[mid_x[inds_data], mid_z[inds_data]], temp[inds_data], ncontour=100, dataloc=True, scale="log")
cb = plt.colorbar(out[0], orientation="horizontal", format="1e%.0f", ticks=np.linspace(np.log10(vmin), np.log10(vmax), 3))
cb.set_label("Resistivity (Ohm-m)")
# plt.title("Line 12150N")
Explanation: Plot Field data (pseudo-section)
Let's see what the field data look like on this line (12150N). Are they similar to our simulated data?
End of explanation |
1,843 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DSGRN Python Interface Tutorial
This notebook shows the basics of manipulating DSGRN with the python interface.
Step2: Network
The starting point of the DSGRN analysis is a network specification.
We write each node name, a colon, and then a formula specifying how it reacts to its inputs.
Step3: ParameterGraph
Given a network, there is an associated "Parameter Graph", which is a combinatorial representation of parameter space.
Step4: Parameter
The ParameterGraph class may be regarded as a factory which produces parameter nodes. In the DSGRN code, parameter nodes are referred to simply as "parameters" and are represented as "Parameter" objects.
Step5: DomainGraph
Let's compute the dynamics corresponding to this parameter node. In particular, we can instruct DSGRN to create a "domaingraph" object.
Step6: MorseDecomposition
Let's compute the partially ordered set of recurrent components (strongly connected components with an edge) of the domain graph.
Step7: MorseGraph
The final step in our analysis is the production of an annotated Morse graph.
Step8: Drawing Tables
Step9: ParameterSampler
We can sample real-valued parameter values from combinatorial parameter regions using the ParameterSampler class. This class provides a method sample which takes an integer parameter index and returns a string describing a randomly sampled instance within the associated parameter region. | Python Code:
import DSGRN
Explanation: DSGRN Python Interface Tutorial
This notebook shows the basics of manipulating DSGRN with the python interface.
End of explanation
network = DSGRN.Network("""X1 : (X1+X2)(~X3)
X2 : (X1)
X3 : (X1)(~X2)""")
DSGRN.DrawGraph(network)
Explanation: Network
The starting point of the DSGRN analysis is a network specification.
We write each node name, a colon, and then a formula specifying how it reacts to its inputs.
End of explanation
parametergraph = DSGRN.ParameterGraph(network)
print("There are " + str(parametergraph.size()) + " nodes in the parameter graph.")
Explanation: ParameterGraph
Given a network, there is an associated "Parameter Graph", which is a combinatorial representation of parameter space.
End of explanation
parameterindex = 34892 # An arbitrarily selected integer in [0,326592)
parameter = parametergraph.parameter(parameterindex)
print(parameter)
Explanation: Parameter
The ParameterGraph class may be regarded as a factory which produces parameter nodes. In the DSGRN code, parameter nodes are referred to simply as "parameters" and are represented as "Parameter" objects.
End of explanation
domaingraph = DSGRN.DomainGraph(parameter)
DSGRN.DrawGraph(domaingraph)
print(domaingraph.coordinates(5)) # ... I wonder what region in phase space domain 5 corresponds to.
Explanation: DomainGraph
Let's compute the dynamics corresponding to this parameter node. In particular, we can instruct DSGRN to create a "domaingraph" object.
End of explanation
morsedecomposition = DSGRN.MorseDecomposition(domaingraph.digraph())
DSGRN.DrawGraph(morsedecomposition)
Explanation: MorseDecomposition
Let's compute the partially ordered set of recurrent components (strongly connected components with an edge) of the domain graph.
End of explanation
morsegraph = DSGRN.MorseGraph(domaingraph, morsedecomposition)
DSGRN.DrawGraph(morsegraph)
Explanation: MorseGraph
The final step in our analysis is the production of an annotated Morse graph.
End of explanation
from DSGRN import *
pg = ParameterGraph(Network("X : ~Y\nY : ~X\n"))
Table(["Parameter Index", "Morse Graph"],
[ [ i, DrawGraph(MorseGraph(DomainGraph(pg.parameter(i))))] for i in range(0,pg.size())])
Explanation: Drawing Tables
End of explanation
sampler = DSGRN.ParameterSampler(parametergraph)
sampler.sample(parameterindex)
Explanation: ParameterSampler
We can sample real-valued parameter values from combinatorial parameter regions using the ParameterSampler class. This class provides a method sample which takes an integer parameter index and returns a string describing a randomly sampled instance within the associated parameter region.
End of explanation |
1,844 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text Classification TFX Pipeline Starter
Objective
Step1: Note
Step2: If the versions above do not match, update your packages in the current Jupyter kernel below. The default %pip package installation location is not on your system installation PATH; use the command below to append the local installation path to pick up the latest package versions. Note that you may also need to restart your notebook kernel to pick up the specified package versions and re-run the imports cell above before proceeding with the lab.
Step3: Configure lab settings
Set constants, location paths and other environment settings.
Step4: Preparing the dataset
Step5: Interactive Context
TFX Interactive Context allows you to create and run TFX Components in an interactive mode. It is designed to support experimentation and development in a Jupyter Notebook environment. It is an experimental feature and major changes to interface and functionality are expected. When creating the interactive context you can specify the following parameters
Step6: Ingesting data using ExampleGen
In any ML development process the first step is to ingest the training and test datasets. The ExampleGen component ingests data into a TFX pipeline. It consumes external files/services to generate a set of files in the TFRecord format, which will be used by other TFX components. It can also shuffle the data and split it into an arbitrary number of partitions.
<img src=../../images/ExampleGen.png width="300">
Configure and run CsvExampleGen
Step7: Examine the ingested data
Step8: Generating statistics using StatisticsGen
The StatisticsGen component generates data statistics that can be used by other TFX components. StatisticsGen uses TensorFlow Data Validation. StatisticsGen generates statistics for each split in the ExampleGen component's output. In our case there are two splits
Step9: Visualize statistics
The generated statistics can be visualized using the tfdv.visualize_statistics() function from the TensorFlow Data Validation library or using a utility method of the InteractiveContext object. In fact, most of the artifacts generated by the TFX components can be visualized using InteractiveContext.
Step10: Inferring data schema using SchemaGen
Some TFX components use a description of the input data called a schema. The schema is an instance of schema.proto. It can specify data types for feature values, whether a feature has to be present in all examples, allowed value ranges, and other properties. SchemaGen automatically generates the schema by inferring types, categories, and ranges from data statistics. The auto-generated schema is best-effort and only tries to infer basic properties of the data. It is expected that developers review and modify it as needed. SchemaGen uses TensorFlow Data Validation.
The SchemaGen component generates the schema using the statistics for the train split. The statistics for other splits are ignored.
<img src=../../images/SchemaGen.png width="200">
Configure and run the SchemaGen components
Step11: Visualize the inferred schema
Step12: Updating the auto-generated schema
In most cases the auto-generated schemas must be fine-tuned manually using insights from data exploration and/or domain knowledge about the data. For example, you know that in the covertype dataset there are seven types of forest cover (coded using 1-7 range) and that the value of the Slope feature should be in the 0-90 range. You can manually add these constraints to the auto-generated schema by setting the feature domain.
Load the auto-generated schema proto file
Step13: Modify the schema
You can use the protocol buffer APIs to modify the schema using tfdv.set_domain.
Review the TFDV library API documentation on setting a feature's domain. You can use the protocol buffer APIs to modify the schema. Review the Tensorflow Metadata proto definition for configuration options.
Save the updated schema
Step14: Importing the updated schema using ImporterNode
The ImporterNode component allows you to import an external artifact, including the schema file, so it can be used by other TFX components in your workflow.
Configure and run the ImporterNode component
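As an illustration only (the instance name is arbitrary and schema_dir is assumed to be the directory where the updated schema.pbtxt was saved), the wiring typically looks like:
from tfx.types import standard_artifacts
schema_importer = ImporterNode(
    instance_name='import_user_schema',
    source_uri=schema_dir,
    artifact_type=standard_artifacts.Schema,
)
context.run(schema_importer)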
Step15: Visualize the imported schema
Step16: Validating data with ExampleValidator
The ExampleValidator component identifies anomalies in data. It identifies anomalies by comparing data statistics computed by the StatisticsGen component against a schema generated by SchemaGen or imported by ImporterNode.
ExampleValidator can detect different classes of anomalies. For example it can
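A minimal sketch of the component wiring (assuming the statistics_gen and schema_importer handles created earlier; not necessarily the lab's exact cell):
example_validator = ExampleValidator(
    statistics=statistics_gen.outputs['statistics'],
    schema=schema_importer.outputs['result'],
)
context.run(example_validator)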
Step17: Examine the output of ExampleValidator
The output artifact of the ExampleValidator is the anomalies.pbtxt file describing an anomalies_pb2.Anomalies protobuf.
Step18: Visualize validation results
The file anomalies.pbtxt can be visualized using context.show.
Step19: In our case no anomalies were detected in the eval split.
For a detailed deep dive into data validation and schema generation refer to the lab-31-tfdv-structured-data lab.
Preprocessing data with Transform
The Transform component performs data transformation and feature engineering. The Transform component consumes tf.Examples emitted from the ExampleGen component and emits the transformed feature data and the SavedModel graph that was used to process the data. The emitted SavedModel can then be used by serving components to make sure that the same data pre-processing logic is applied at training and serving.
The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with. It requires code files to be available which define the processing needed.
<img src=../../images/Transform.png width="400">
Define the pre-processing module
To configure Transform, you need to encapsulate your pre-processing code in the Python preprocessing_fn function and save it to a python module that is then provided to the Transform component as an input. This module will be loaded by transform and the preprocessing_fn function will be called when the Transform component runs.
In most cases, your implementation of the preprocessing_fn makes extensive use of TensorFlow Transform for performing feature engineering on your dataset.
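Purely as a hedged sketch (the raw column names 'title' and 'source', the transformed feature names, and the TRANSFORM_MODULE_FILE path are assumptions for illustration), the module and the component wiring might look like:
# --- contents of the module saved to TRANSFORM_MODULE_FILE ---
import tensorflow as tf
def preprocessing_fn(inputs):
    # Minimal pass-through example; real feature engineering (e.g. TFT vocabularies) would go here.
    return {
        'title_xf': inputs['title'],
        'source_xf': tf.cast(inputs['source'], tf.int64),
    }
# --- back in the notebook ---
transform = Transform(
    examples=example_gen.outputs['examples'],
    schema=schema_importer.outputs['result'],
    module_file=TRANSFORM_MODULE_FILE,
)
context.run(transform)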
Step20: Configure and run the Transform component.
Step21: Examine the Transform component's outputs
The Transform component has 2 outputs
Step22: And the transform.examples artifact
Step24: Train your TensorFlow model with the Trainer component
The Trainer component trains a model using TensorFlow.
Trainer takes
Step25: Create and run the Trainer component
As of the 0.25.0 release of TFX, the Trainer component only supports passing a single field - num_steps - through the train_args and eval_args arguments.
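A hedged sketch of the Trainer wiring implied by this lab (the TRAINER_MODULE_FILE path, the step counts, and the schema_importer/transform handles are assumptions; the GenericExecutor and proto names are already in the notebook's import cell):
trainer = Trainer(
    custom_executor_spec=executor_spec.ExecutorClassSpec(trainer_executor.GenericExecutor),
    module_file=TRAINER_MODULE_FILE,
    transformed_examples=transform.outputs['transformed_examples'],
    schema=schema_importer.outputs['result'],
    transform_graph=transform.outputs['transform_graph'],
    train_args=trainer_pb2.TrainArgs(num_steps=1000),
    eval_args=trainer_pb2.EvalArgs(num_steps=100),
)
context.run(trainer)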
Step26: Analyzing training runs with TensorBoard
In this step you will analyze the training run with TensorBoard.dev. TensorBoard.dev is a managed service that enables you to easily host, track and share your ML experiments.
Retrieve the location of TensorBoard logs
Each model run's train and eval metric logs are written to the model_run directory by the Tensorboard callback defined in model.py.
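For example, the log directory can be read off the Trainer's model_run output artifact with the same .get()[0].uri pattern used for ExampleGen earlier (a sketch, assuming the trainer handle above):
logs_path = trainer.outputs['model_run'].get()[0].uri
print(logs_path)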
Step27: Upload the logs and start TensorBoard.dev
Open a new JupyterLab terminal window
From the terminal window, execute the following command
tensorboard dev upload --logdir [YOUR_LOGDIR]
Where [YOUR_LOGDIR] is a URI retrieved by the previous cell.
You will be asked to authorize TensorBoard.dev using your Google account. If you don't have a Google account or you don't want to authorize TensorBoard.dev you can skip this exercise.
After the authorization process completes, follow the link provided to view your experiment.
Evaluating trained models with Evaluator
The Evaluator component analyzes model performance using the TensorFlow Model Analysis library. It runs inference requests on particular subsets of the test dataset, based on which slices are defined by the developer. Knowing which slices should be analyzed requires domain knowledge of what is important in this particular use case or domain.
The Evaluator can also optionally validate a newly trained model against a previous model. In this lab, you only train one model, so the Evaluator automatically will label the model as "blessed".
<img src=../../images/Evaluator.png width="400">
Configure and run the Evaluator component
Use the ResolverNode to pick the previous model to compare against. The model resolver is only required if performing model validation in addition to evaluation. In this case we validate against the latest blessed model. If no model has been blessed before (as in this case) the evaluator will make our candidate the first blessed model.
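A hedged sketch of that wiring (the metric choice and label_key='source' follow the label column prepared earlier; the exact specs may differ from the lab's solution):
model_resolver = ResolverNode(
    instance_name='latest_blessed_model_resolver',
    resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
    model=Channel(type=Model),
    model_blessing=Channel(type=ModelBlessing),
)
context.run(model_resolver)
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='source')],
    metrics_specs=[tfma.MetricsSpec(metrics=[tfma.MetricConfig(class_name='SparseCategoricalAccuracy')])],
    slicing_specs=[tfma.SlicingSpec()],
)
evaluator = Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer.outputs['model'],
    baseline_model=model_resolver.outputs['model'],
    eval_config=eval_config,
)
context.run(evaluator)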
Step28: Configure evaluation metrics and slices.
Step29: Check the model performance validation status
Step30: Deploying models with Pusher
The Pusher component checks whether a model has been "blessed", and if so, deploys it by pushing the model to a well known file destination.
<img src=../../images/Pusher.png width="400">
Configure and run the Pusher component
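A minimal sketch of the Pusher configuration (SERVING_MODEL_DIR is the constant defined in the settings cell; the blessing comes from the Evaluator):
pusher = Pusher(
    model=trainer.outputs['model'],
    model_blessing=evaluator.outputs['blessing'],
    push_destination=pusher_pb2.PushDestination(
        filesystem=pusher_pb2.PushDestination.Filesystem(
            base_directory=SERVING_MODEL_DIR)),
)
context.run(pusher)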
Step31: Examine the output of Pusher | Python Code:
import os
import tempfile
import time
from pprint import pprint
import absl
import pandas as pd
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
import tfx
from tensorflow_metadata.proto.v0 import (
anomalies_pb2,
schema_pb2,
statistics_pb2,
)
from tensorflow_transform.tf_metadata import schema_utils
from tfx.components import (
CsvExampleGen,
Evaluator,
ExampleValidator,
InfraValidator,
Pusher,
ResolverNode,
SchemaGen,
StatisticsGen,
Trainer,
Transform,
Tuner,
)
from tfx.components.common_nodes.importer_node import ImporterNode
from tfx.components.trainer import executor as trainer_executor
from tfx.dsl.components.base import executor_spec
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import metadata, pipeline
from tfx.orchestration.experimental.interactive.interactive_context import (
InteractiveContext,
)
from tfx.proto import (
evaluator_pb2,
example_gen_pb2,
infra_validator_pb2,
pusher_pb2,
trainer_pb2,
)
from tfx.proto.evaluator_pb2 import SingleSlicingSpec
from tfx.types import Channel
from tfx.types.standard_artifacts import (
HyperParameters,
InfraBlessing,
Model,
ModelBlessing,
)
Explanation: Text Classification TFX Pipeline Starter
Objective: In this notebook, we show you how to put a text classification model implemented in model.py, preprocessing.py, and config.py into an interactive TFX pipeline. Using these files and the code snippets in this notebook, you'll configure a TFX pipeline generated by the tfx template tool as in the previous guided project so that the text classification can be run on a CAIP Pipelines Kubeflow cluster. The dataset itself consists of article titles along with their source, and the goal is to predict the source from the title. (This dataset can be re-generated by running either the keras_for_text_classification.ipynb notebook or the reusable_embeddings.ipynb notebook, which contain different models to solve this problem.) The solution we propose here is fairly simple and you can build on it by inspecting these notebooks.
End of explanation
print("Tensorflow Version:", tf.__version__)
print("TFX Version:", tfx.__version__)
print("TFDV Version:", tfdv.__version__)
print("TFMA Version:", tfma.VERSION_STRING)
absl.logging.set_verbosity(absl.logging.INFO)
Explanation: Note: this lab was developed and tested with the following TF ecosystem package versions:
Tensorflow Version: 2.3.1
TFX Version: 0.25.0
TFDV Version: 0.25.0
TFMA Version: 0.25.0
If you encounter errors with the above imports (e.g. TFX component not found), check your package versions in the cell below.
End of explanation
os.environ["PATH"] += os.pathsep + "/home/jupyter/.local/bin"
Explanation: If the versions above do not match, update your packages in the current Jupyter kernel below. The default %pip package installation location is not on your system installation PATH; use the command below to append the local installation path to pick up the latest package versions. Note that you may also need to restart your notebook kernel to pick up the specified package versions and re-run the imports cell above before proceeding with the lab.
End of explanation
ARTIFACT_STORE = os.path.join(os.sep, "home", "jupyter", "artifact-store")
SERVING_MODEL_DIR = os.path.join(os.sep, "home", "jupyter", "serving_model")
DATA_ROOT = "./data"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
!mkdir -p $DATA_ROOT
Explanation: Configure lab settings
Set constants, location paths and other environment settings.
End of explanation
data = pd.read_csv("./data/titles_sample.csv")
data.head()
LABEL_MAPPING = {"github": 0, "nytimes": 1, "techcrunch": 2}
data["source"] = data["source"].apply(lambda label: LABEL_MAPPING[label])
data.head()
data.to_csv(f"{DATA_ROOT}/dataset.csv", index=None)
!head $DATA_ROOT/*.csv
Explanation: Preparing the dataset
End of explanation
PIPELINE_NAME = "tfx-title-classifier"
PIPELINE_ROOT = os.path.join(
ARTIFACT_STORE, PIPELINE_NAME, time.strftime("%Y%m%d_%H%M%S")
)
os.makedirs(PIPELINE_ROOT, exist_ok=True)
context = InteractiveContext(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
metadata_connection_config=None,
)
Explanation: Interactive Context
TFX Interactive Context allows you to create and run TFX Components in an interactive mode. It is designed to support experimentation and development in a Jupyter Notebook environment. It is an experimental feature and major changes to interface and functionality are expected. When creating the interactive context you can specify the following parameters:
- pipeline_name - Optional name of the pipeline for ML Metadata tracking purposes. If not specified, a name will be generated for you.
- pipeline_root - Optional path to the root of the pipeline's outputs. If not specified, an ephemeral temporary directory will be created and used.
- metadata_connection_config - Optional metadata_store_pb2.ConnectionConfig instance used to configure connection to a ML Metadata connection. If not specified, an ephemeral SQLite MLMD connection contained in the pipeline_root directory with file name "metadata.sqlite" will be used.
End of explanation
output_config = example_gen_pb2.Output(
split_config=example_gen_pb2.SplitConfig(
splits=[
example_gen_pb2.SplitConfig.Split(name="train", hash_buckets=4),
example_gen_pb2.SplitConfig.Split(name="eval", hash_buckets=1),
]
)
)
example_gen = tfx.components.CsvExampleGen(
input_base=DATA_ROOT, output_config=output_config
)
context.run(example_gen)
Explanation: Ingesting data using ExampleGen
In any ML development process the first step is to ingest the training and test datasets. The ExampleGen component ingests data into a TFX pipeline. It consumes external files/services to generate a set of files in the TFRecord format, which will be used by other TFX components. It can also shuffle the data and split it into an arbitrary number of partitions.
<img src=../../images/ExampleGen.png width="300">
Configure and run CsvExampleGen
End of explanation
examples_uri = example_gen.outputs["examples"].get()[0].uri
tfrecord_filenames = [
os.path.join(examples_uri, "train", name)
for name in os.listdir(os.path.join(examples_uri, "train"))
]
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
for tfrecord in dataset.take(2):
example = tf.train.Example()
example.ParseFromString(tfrecord.numpy())
for name, feature in example.features.feature.items():
if feature.HasField("bytes_list"):
value = feature.bytes_list.value
if feature.HasField("float_list"):
value = feature.float_list.value
if feature.HasField("int64_list"):
value = feature.int64_list.value
print(f"{name}: {value}")
print("******")
Explanation: Examine the ingested data
End of explanation
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs["examples"]
)
context.run(statistics_gen)
Explanation: Generating statistics using StatisticsGen
The StatisticsGen component generates data statistics that can be used by other TFX components. StatisticsGen uses TensorFlow Data Validation. StatisticsGen generates statistics for each split in the ExampleGen component's output. In our case there are two splits: train and eval.
<img src=../../images/StatisticsGen.png width="200">
Configure and run the StatisticsGen component
End of explanation
context.show(statistics_gen.outputs["statistics"])
Explanation: Visualize statistics
The generated statistics can be visualized using the tfdv.visualize_statistics() function from the TensorFlow Data Validation library or using a utility method of the InteractiveContext object. In fact, most of the artifacts generated by the TFX components can be visualized using InteractiveContext.
End of explanation
schema_gen = SchemaGen(
statistics=statistics_gen.outputs["statistics"], infer_feature_shape=False
)
context.run(schema_gen)
Explanation: Infering data schema using SchemaGen
Some TFX components use a description of the input data called a schema. The schema is an instance of schema.proto. It can specify data types for feature values, whether a feature has to be present in all examples, allowed value ranges, and other properties. SchemaGen automatically generates the schema by inferring types, categories, and ranges from the data statistics. The auto-generated schema is best-effort and only tries to infer basic properties of the data. It is expected that developers review and modify it as needed. SchemaGen uses TensorFlow Data Validation.
The SchemaGen component generates the schema using the statistics for the train split. The statistics for other splits are ignored.
<img src=../../images/SchemaGen.png width="200">
Configure and run the SchemaGen components
End of explanation
context.show(schema_gen.outputs["schema"])
Explanation: Visualize the inferred schema
End of explanation
schema_proto_path = "{}/{}".format(
schema_gen.outputs["schema"].get()[0].uri, "schema.pbtxt"
)
schema = tfdv.load_schema_text(schema_proto_path)
Explanation: Updating the auto-generated schema
In most cases the auto-generated schema must be fine-tuned manually using insights from data exploration and/or domain knowledge about the data. For example, you know that in this lab's dataset the source label can only take the integer values 0, 1, or 2 (see LABEL_MAPPING above). You can manually add such constraints to the auto-generated schema by setting the feature domain.
Load the auto-generated schema proto file
End of explanation
schema_dir = os.path.join(ARTIFACT_STORE, "schema")
tf.io.gfile.makedirs(schema_dir)
schema_file = os.path.join(schema_dir, "schema.pbtxt")
tfdv.write_schema_text(schema, schema_file)
!cat {schema_file}
Explanation: Modify the schema
You can modify the schema using the protocol buffer APIs and the tfdv.set_domain helper; a minimal sketch follows this section.
Review the TFDV library API documentation on setting a feature's domain, and the TensorFlow Metadata proto definition for configuration options.
Save the updated schema
End of explanation
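As a concrete illustration, here is a minimal sketch of such a manual edit. It assumes the integer-coded label column is named source and should stay in the 0-2 range (per LABEL_MAPPING above); adjust the feature name and bounds to your own schema.
from tensorflow_metadata.proto.v0 import schema_pb2

# Constrain the integer-coded label to the known range 0-2.
# (Assumption: the column is named "source", as in LABEL_MAPPING above.)
tfdv.set_domain(
    schema,
    "source",
    schema_pb2.IntDomain(name="source", min=0, max=2, is_categorical=True),
)
tfdv.display_schema(schema=schema)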
schema_importer = ImporterNode(
instance_name="Schema_Importer",
source_uri=schema_dir,
artifact_type=tfx.types.standard_artifacts.Schema,
reimport=False,
)
context.run(schema_importer)
Explanation: Importing the updated schema using ImporterNode
The ImporterNode component allows you to import an external artifact, including the schema file, so it can be used by other TFX components in your workflow.
Configure and run the ImporterNode component
End of explanation
context.show(schema_importer.outputs["result"])
Explanation: Visualize the imported schema
End of explanation
example_validator = ExampleValidator(
instance_name="Data_Validation",
statistics=statistics_gen.outputs["statistics"],
schema=schema_importer.outputs["result"],
)
context.run(example_validator)
Explanation: Validating data with ExampleValidator
The ExampleValidator component identifies anomalies in data. It identifies anomalies by comparing data statistics computed by the StatisticsGen component against a schema generated by SchemaGen or imported by ImporterNode.
ExampleValidator can detect different classes of anomalies. For example it can:
perform validity checks by comparing data statistics against a schema
detect training-serving skew by comparing training and serving data.
detect data drift by looking at a series of data.
The ExampleValidator component validates the data in the eval split only. Other splits are ignored.
<img src=../../images/ExampleValidator.png width="350">
Configure and run the ExampleValidator component
End of explanation
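Under the hood, ExampleValidator delegates to TensorFlow Data Validation. A rough library-level sketch of the same check follows; it assumes eval_stats has already been loaded from the StatisticsGen eval-split output (e.g. via tfdv.load_statistics) and reuses the schema loaded earlier.
# Library-level equivalent of what the ExampleValidator component runs.
# `eval_stats` is assumed to be a DatasetFeatureStatisticsList loaded from
# the StatisticsGen eval-split output; `schema` is the schema loaded earlier.
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)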
train_uri = example_validator.outputs["anomalies"].get()[0].uri
train_anomalies_filename = os.path.join(train_uri, "train/anomalies.pbtxt")
!cat $train_anomalies_filename
Explanation: Examine the output of ExampleValidator
The output artifact of the ExampleValidator is the anomalies.pbtxt file describing an anomalies_pb2.Anomalies protobuf.
End of explanation
context.show(example_validator.outputs["output"])
Explanation: Visualize validation results
The file anomalies.pbtxt can be visualized using context.show.
End of explanation
%%writefile config.py
FEATURE_KEY = "title"
LABEL_KEY = "source"
N_CLASSES = 3
HUB_URL = "https://tfhub.dev/google/nnlm-en-dim50/2"
HUB_DIM = 50
N_NEURONS = 16
TRAIN_BATCH_SIZE = 5
EVAL_BATCH_SIZE = 5
MODEL_NAME = "tfx_title_classifier"
def transformed_name(key):
return key + "_xf"
%%writefile preprocessing.py
import tensorflow as tf
from config import FEATURE_KEY, LABEL_KEY, N_CLASSES, transformed_name
def _fill_in_missing(x):
default_value = "" if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value,
),
axis=1,
)
def preprocessing_fn(inputs):
features = _fill_in_missing(inputs[FEATURE_KEY])
labels = _fill_in_missing(inputs[LABEL_KEY])
return {
transformed_name(FEATURE_KEY): features,
transformed_name(LABEL_KEY): labels,
}
TRANSFORM_MODULE = "preprocessing.py"
Explanation: In our case no anomalies were detected in the eval split.
For a detailed deep dive into data validation and schema generation refer to the lab-31-tfdv-structured-data lab.
Preprocessing data with Transform
The Transform component performs data transformation and feature engineering. The Transform component consumes tf.Examples emitted from the ExampleGen component and emits the transformed feature data and the SavedModel graph that was used to process the data. The emitted SavedModel can then be used by serving components to make sure that the same data pre-processing logic is applied at training and serving.
The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with. It requires code files to be available which define the processing needed.
<img src=../../images/Transform.png width="400">
Define the pre-processing module
To configure Transform, you need to encapsulate your pre-processing code in the Python preprocessing_fn function and save it to a python module that is then provided to the Transform component as an input. This module will be loaded by transform and the preprocessing_fn function will be called when the Transform component runs.
In most cases, your implementation of the preprocessing_fn makes extensive use of TensorFlow Transform for performing feature engineering on your dataset.
End of explanation
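For orientation, a typical preprocessing_fn leans on TensorFlow Transform helpers for the heavy lifting. The snippet below is only an illustrative sketch; the feature names are placeholders, not columns in this lab's dataset.
import tensorflow_transform as tft

def preprocessing_fn_sketch(inputs):
    # Placeholder feature names, for illustration only.
    return {
        "numeric_feature_xf": tft.scale_to_z_score(inputs["numeric_feature"]),
        "categorical_feature_xf": tft.compute_and_apply_vocabulary(
            inputs["categorical_feature"]
        ),
    }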
transform = Transform(
examples=example_gen.outputs["examples"],
schema=schema_importer.outputs["result"],
module_file=TRANSFORM_MODULE,
)
context.run(transform)
Explanation: Configure and run the Transform component.
End of explanation
os.listdir(transform.outputs["transform_graph"].get()[0].uri)
Explanation: Examine the Transform component's outputs
The Transform component has 2 outputs:
transform_graph - contains the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).
transformed_examples - contains the preprocessed training and evaluation data.
Take a peek at the transform_graph artifact: it points to a directory containing 3 subdirectories:
End of explanation
os.listdir(transform.outputs["transformed_examples"].get()[0].uri)
transform_uri = transform.outputs["transformed_examples"].get()[0].uri
tfrecord_filenames = [
os.path.join(transform_uri, "train", name)
for name in os.listdir(os.path.join(transform_uri, "train"))
]
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
for tfrecord in dataset.take(4):
example = tf.train.Example()
example.ParseFromString(tfrecord.numpy())
for name, feature in example.features.feature.items():
if feature.HasField("bytes_list"):
value = feature.bytes_list.value
if feature.HasField("float_list"):
value = feature.float_list.value
if feature.HasField("int64_list"):
value = feature.int64_list.value
print(f"{name}: {value}")
print("******")
Explanation: And the transform.examples artifact
End of explanation
%%writefile model.py
import tensorflow as tf
import tensorflow_transform as tft
from config import (
EVAL_BATCH_SIZE,
HUB_DIM,
HUB_URL,
LABEL_KEY,
MODEL_NAME,
N_CLASSES,
N_NEURONS,
TRAIN_BATCH_SIZE,
transformed_name,
)
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow_hub import KerasLayer
from tfx_bsl.tfxio import dataset_options
def _get_serve_tf_examples_fn(model, tf_transform_output):
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
feature_spec = tf_transform_output.raw_feature_spec()
feature_spec.pop(LABEL_KEY)
parsed_features = tf.io.parse_example(
serialized_tf_examples, feature_spec
)
transformed_features = model.tft_layer(parsed_features)
return model(transformed_features)
return serve_tf_examples_fn
def _input_fn(file_pattern, data_accessor, tf_transform_output, batch_size=200):
return data_accessor.tf_dataset_factory(
file_pattern,
dataset_options.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=transformed_name(LABEL_KEY)
),
tf_transform_output.transformed_metadata.schema,
)
def _load_hub_module_layer():
hub_module = KerasLayer(
HUB_URL,
output_shape=[HUB_DIM],
input_shape=[],
dtype=tf.string,
trainable=True,
)
return hub_module
def _build_keras_model():
hub_module = _load_hub_module_layer()
model = Sequential(
[
hub_module,
Dense(N_NEURONS, activation="relu"),
Dense(N_CLASSES, activation="softmax"),
]
)
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)
return model
def run_fn(fn_args):
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
tf_transform_output,
TRAIN_BATCH_SIZE,
)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
tf_transform_output,
EVAL_BATCH_SIZE,
)
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = _build_keras_model()
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=fn_args.model_run_dir, update_freq="batch"
)
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
callbacks=[tensorboard_callback],
)
signatures = {
"serving_default": _get_serve_tf_examples_fn(
model, tf_transform_output
).get_concrete_function(
tf.TensorSpec(shape=[None], dtype=tf.string, name="examples")
),
}
model.save(
fn_args.serving_model_dir, save_format="tf", signatures=signatures
)
TRAINER_MODULE_FILE = "model.py"
Explanation: Train your TensorFlow model with the Trainer component
The Trainer component trains a model using TensorFlow.
Trainer takes:
tf.Examples used for training and eval.
A user provided module file that defines the trainer logic.
A data schema created by SchemaGen or imported by ImporterNode.
A proto definition of train args and eval args.
An optional transform graph produced by upstream Transform component.
An optional base models used for scenarios such as warmstarting training.
<img src=../../images/Trainer.png width="400">
Define the trainer module
To configure Trainer, you need to encapsulate your training code in a Python module that is then provided to the Trainer as an input.
End of explanation
trainer = Trainer(
custom_executor_spec=executor_spec.ExecutorClassSpec(
trainer_executor.GenericExecutor
),
module_file=TRAINER_MODULE_FILE,
transformed_examples=transform.outputs.transformed_examples,
schema=schema_importer.outputs.result,
transform_graph=transform.outputs.transform_graph,
train_args=trainer_pb2.TrainArgs(splits=["train"], num_steps=20),
eval_args=trainer_pb2.EvalArgs(splits=["eval"], num_steps=5),
)
context.run(trainer)
Explanation: Create and run the Trainer component
As of the 0.25.0 release of TFX, the Trainer component only supports passing a single field - num_steps - through the train_args and eval_args arguments.
End of explanation
logs_path = trainer.outputs["model_run"].get()[0].uri
print(logs_path)
Explanation: Analyzing training runs with TensorBoard
In this step you will analyze the training run with TensorBoard.dev. TensorBoard.dev is a managed service that enables you to easily host, track and share your ML experiments.
Retrieve the location of TensorBoard logs
Each model run's train and eval metric logs are written to the model_run directory by the Tensorboard callback defined in model.py.
End of explanation
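As an alternative (or precursor) to uploading to TensorBoard.dev as described next, you can inspect the same logs inline with the standard TensorBoard notebook extension. A quick sketch (the {logs_path} substitution assumes IPython variable expansion; otherwise paste the printed path directly):
# Load the TensorBoard notebook extension and point it at the run logs.
%load_ext tensorboard
%tensorboard --logdir {logs_path}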
model_resolver = ResolverNode(
instance_name="latest_blessed_model_resolver",
resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
model=Channel(type=Model),
model_blessing=Channel(type=ModelBlessing),
)
context.run(model_resolver)
Explanation: Upload the logs and start TensorBoard.dev
Open a new JupyterLab terminal window
From the terminal window, execute the following command
tensorboard dev upload --logdir [YOUR_LOGDIR]
Where [YOUR_LOGDIR] is an URI retrieved by the previous cell.
You will be asked to authorize TensorBoard.dev using your Google account. If you don't have a Google account or you don't want to authorize TensorBoard.dev you can skip this exercise.
After the authorization process completes, follow the link provided to view your experiment.
Evaluating trained models with Evaluator
The Evaluator component analyzes model performance using the TensorFlow Model Analysis library. It runs inference requests on particular subsets of the test dataset, based on which slices are defined by the developer. Knowing which slices should be analyzed requires domain knowledge of what is important in this particular use case or domain.
The Evaluator can also optionally validate a newly trained model against a previous model. In this lab, you only train one model, so the Evaluator will automatically label the model as "blessed".
<img src=../../images/Evaluator.png width="400">
Configure and run the Evaluator component
Use the ResolverNode to pick the previous model to compare against. The model resolver is only required if performing model validation in addition to evaluation. In this case we validate against the latest blessed model. If no model has been blessed before (as in this case) the evaluator will make our candidate the first blessed model.
End of explanation
accuracy_threshold = tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={"value": 0.30}, upper_bound={"value": 0.99}
)
)
metrics_specs = tfma.MetricsSpec(
metrics=[
tfma.MetricConfig(
class_name="SparseCategoricalAccuracy", threshold=accuracy_threshold
),
tfma.MetricConfig(class_name="ExampleCount"),
]
)
eval_config = tfma.EvalConfig(
model_specs=[tfma.ModelSpec(label_key="source")],
metrics_specs=[metrics_specs],
)
eval_config
model_analyzer = Evaluator(
examples=example_gen.outputs.examples,
model=trainer.outputs.model,
baseline_model=model_resolver.outputs.model,
eval_config=eval_config,
)
context.run(model_analyzer, enable_cache=False)
Explanation: Configure evaluation metrics and slices.
End of explanation
model_blessing_uri = model_analyzer.outputs.blessing.get()[0].uri
!ls -l {model_blessing_uri}
Explanation: Check the model performance validation status
End of explanation
trainer.outputs["model"]
pusher = Pusher(
model=trainer.outputs["model"],
model_blessing=model_analyzer.outputs["blessing"],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=SERVING_MODEL_DIR
)
),
)
context.run(pusher)
Explanation: Deploying models with Pusher
The Pusher component checks whether a model has been "blessed", and if so, deploys it by pushing the model to a well known file destination.
<img src=../../images/Pusher.png width="400">
Configure and run the Pusher component
End of explanation
pusher.outputs
# Set `PATH` to include a directory containing `saved_model_cli`.
PATH = get_ipython().run_line_magic("env", "PATH")
%env PATH=/opt/conda/envs/tfx/bin:{PATH}
latest_pushed_model = os.path.join(
SERVING_MODEL_DIR, max(os.listdir(SERVING_MODEL_DIR))
)
!saved_model_cli show --dir {latest_pushed_model} --all
Explanation: Examine the output of Pusher
End of explanation |
1,845 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A explorar los datos del LHC
Hoy vamos a combinar dos conceptos que vimos ayer
Step1: Datos LHC
Hemos preparado una version mini de los datos, que funcionara bastante bien.
Los datos estan en formato CSV (Que siginifica CSV?)
Step2: Utilidades
Step3: Columnas
Step4: Y si quiero imprimir columnas, una por una?
Usamos un for!
Step5: Recuerda
Step6: Dividir datos
Finalmente vamos dividir los datos entre los que son bosones (s) y los que no.
Cada uno sera una base de datos seprada
Step7: Preguntas
Step8: Histogramas
Usando la funcion sns.distplot, esta combina la funcionalidad de un histograma y ademas trata de ajustar una curva a los datos.
Step9: Scatter plots
Con scatter plots podemos ver la relacion de dos variables, graficando puntos en una dimension (X) y luego en otra (Y).
En este caso escojemos una variable extra PRI_tau_pt que segun la documentacion representa
PRI_tau_pt The transverse momentum $\sqrt{p^2_x + p^2_y}$ of the hadronic tau.
Es decir el momento transversal del tau hadronico..algo loco
Probemoslo
Step10: Do you see any problem?
In this case it makes sense to visualize the data separately... that way we get a better picture | Python Code:
import pandas as pd
import numpy as np # numerical computing module
import matplotlib.pyplot as plt # plotting module
# this line makes the plots appear inline in the notebook
import seaborn as sns
%matplotlib inline
Explanation: Exploring the LHC data
Today we will combine two concepts we saw yesterday:
Opening datasets with pandas
Visualizing data with histograms
We will also see how to:
Use 2D scatter plots
Use box plots
Our goal: <br> Think about how to classify an event as a boson (s) or noise (b)
First, the libraries
We will install a library (seaborn) for advanced visualization using the following command in your terminal (Anaconda prompt or terminal):
shell
conda install seaborn
End of explanation
df = pd.read_csv('files/mini-LHC.csv')
df.head()
Explanation: LHC data
We have prepared a mini version of the data, which will work quite well.
The data is in CSV format (what does CSV stand for?)
End of explanation
print(df.shape)
print(len(df))
Explanation: Utilities:
We can access information about the dataset (DataFrame) in the following ways:
Size
For that we can use len() (length) and .shape (shape).
End of explanation
print(df.columns)
Explanation: Columns
End of explanation
for col in df.columns:
print(col)
Explanation: What if I want to print the columns one by one?
We use a for loop!
End of explanation
df['PRI_met']
Explanation: Remember: <br> To access a column we use its name
End of explanation
boson_df = df[df['Label']=='s']
ruido_df = df[df['Label']=='b']
Explanation: Splitting the data
Finally, we will split the data into the events that are bosons (s) and those that are not.
Each one will be a separate DataFrame
End of explanation
sns.boxplot(x="Label", y="DER_mass_MMC",data=df)
plt.show()
Explanation: Questions:
How many bosons do we have?
And how many noise events? (A quick count is sketched right after this explanation.)
Visualize!
Now that we know how to access the data, let's visualize it.
As an example we will use the physical property DER_mass_MMC, which according to the data documentation reads:
DER_mass_MMC: The estimated mass mH of the Higgs boson candidate, obtained through a probabilistic phase space integration
That is, the estimated mass of the particle.
BoxPlot
With boxplots we see the minimum, the maximum, the median, and a box covering the middle of the data (the 25th to 75th percentiles).
With boxplots we use the full, unsplit data (df): we tell it what goes on the x axis (Label) and what goes on the y axis, which can be any physical property.
End of explanation
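A quick way to answer the two questions above is simply to count the rows in each of the two DataFrames created earlier (a small sketch):
# How many boson events and how many noise events do we have?
print("bosons:", len(boson_df))
print("noise:", len(ruido_df))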
sns.distplot(boson_df["DER_mass_MMC"],label='boson')
sns.distplot(ruido_df["DER_mass_MMC"],label='ruido')
plt.ylabel('Frecuencia')
plt.legend()
plt.title("Distribucion de DER_mass_MMC")
plt.show()
Explanation: Histograms
We use the sns.distplot function, which combines the functionality of a histogram and additionally tries to fit a curve to the data.
End of explanation
ejeX = "DER_mass_MMC"
ejeY = "PRI_tau_pt"
plt.scatter(df[ejeX],df[ejeY],alpha=0.5)
plt.xlabel(ejeX)
plt.ylabel(ejeY)
plt.show()
Explanation: Scatter plots
With scatter plots we can see the relationship between two variables by plotting points along one dimension (X) and then another (Y).
In this case we pick an extra variable, PRI_tau_pt, which according to the documentation represents
PRI_tau_pt The transverse momentum $\sqrt{p^2_x + p^2_y}$ of the hadronic tau.
That is, the transverse momentum of the hadronic tau... something wild.
Let's try it:
End of explanation
ejeX = "DER_mass_MMC"
ejeY = "PRI_tau_pt"
plt.scatter(boson_df[ejeX],boson_df[ejeY],c='r',alpha=0.5,label='boson')
plt.scatter(ruido_df[ejeX],ruido_df[ejeY],c='g',alpha=0.5,label='ruido')
plt.xlabel(ejeX)
plt.ylabel(ejeY)
plt.legend()
plt.show()
Explanation: Do you see any problem?
In this case it makes sense to visualize the two classes separately... that way we get a much better picture
End of explanation |
1,846 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CS228 Python Tutorial
Adapted by Volodymyr Kuleshov and Isaac Caswell from the CS231n Python tutorial by Justin Johnson
<a href="http
Step1: Python versions
This version of the notebook has been adapted to work with Python 3.6.
You can check your Python version at the command line by running python --version.
Step2: Basic data types
Numbers
Integers and floats work as you would expect from other languages
Step3: Note that unlike certain other languages like <tt>C</tt>, <tt>C++</tt>, <em>Java</em>, or <tt>C#</tt>, Python does not have unary increment (x++) or decrement (x--) operators.
Python also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.
Booleans
Python implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.)
Step4: Now we let's look at the operations
Step5: Strings
Step6: String objects have a bunch of useful methods; for example
Step7: You can find a list of all string methods in the documentation.
Containers
Python includes several built-in container types
Step8: As usual, you can find all the gory details about lists in the documentation.
Slicing
In addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing
Step9: Loops
You can loop over the elements of a list like this
Step10: If you want access to the index of each element within the body of a loop, use the built-in enumerate function
Step11: List comprehensions
Step12: You can make this code simpler using a list comprehension
Step13: List comprehensions can also contain conditions
Step14: Dictionaries
A dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this
Step15: You can find all you need to know about dictionaries in the documentation.
It is easy to iterate over the keys in a dictionary
Step16: If you want access to keys and their corresponding values, use the <tt>items</tt> method
Step17: Dictionary comprehensions
Step18: Sets
A set is an unordered collection of distinct elements. As a simple example, consider the following
Step19: Loops
Step20: Set comprehensions
Step21: Tuples
A tuple is an (immutable) ordered list of values. A tuple is in many ways similar to a list; one of the most important differences is that tuples can be used as keys in dictionaries and as elements of sets, while lists cannot. Here is a trivial example
Step22: Functions
Python functions are defined using the def keyword. For example
Step23: We will often define functions to take optional keyword arguments, like this
Step24: Classes
The syntax for defining classes in Python is straightforward
Step25: Numpy
Numpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays.
To use Numpy, we first need to import the numpy package
Step26: Arrays
A numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is called the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.
We can initialize numpy arrays from nested Python lists, and access elements using square brackets
Step27: Numpy also provides many functions to create arrays
Step28: Array indexing
Numpy offers several ways to index into arrays.
Slicing
Step29: A slice of an array is a view into the same data, so modifying it will modify the original array.
Step30: You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array.
Step31: Two ways of accessing the data in the middle row of the array.
Mixing integer indexing with slices yields an array of lower rank,
while using only slices yields an array of the same rank as the
original array
Step32: Integer array indexing
Step33: The following expression will return an array containing the elements a[0,1]and a[2,3].
Step34: One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix
Step35: same as a[0,0], a[1,2], a[2,0], a[3,1],
Step36: Boolean array indexing
Step37: For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.
Datatypes
Every numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. Here is an example
Step38: You can read all about numpy datatypes in the documentation.
Array math
Basic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module
Step39: Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects
Step40: Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum
Step41: You can find the full list of mathematical functions provided by numpy in the documentation.
Apart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object
Step42: Broadcasting
Broadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array.
For example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this
Step43: Create an empty matrix with the same shape as x. The elements of this matrix are initialized arbitrarily.
Step44: This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this
Step45: Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting
Step46: The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.
Broadcasting two arrays together follows these rules
Step47: Add a vector to each column of a matrix
x has shape (2, 3) and w has shape (2,).
If we transpose x then it has shape (3, 2) and can be broadcast
against w to yield a result of shape (3, 2); transposing this result
yields the final result of shape (2, 3) which is the matrix x with
the vector w added to each column. Gives the following matrix
Step48: Another solution is to reshape w to be a row vector of shape (2, 1);
we can then broadcast it directly against x to produce the same
output.
Step49: Multiply a matrix by a constant
Step50: Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.
This brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more about numpy.
Matplotlib
Matplotlib is a plotting library. In this section give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.
Step51: By running this special iPython command, we will be displaying plots inline
Step52: Plotting
The most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example
Step53: With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels
Step54: Subplots
You can plot different things in the same figure using the subplot function. Here is an example | Python Code:
def quicksort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quicksort(left) + middle + quicksort(right)
quicksort([3,6,8,10,1,2,1])
Explanation: CS228 Python Tutorial
Adapted by Volodymyr Kuleshov and Isaac Caswell from the CS231n Python tutorial by Justin Johnson
<a href="http://cs231n.github.io/python-numpy-tutorial/">Python Numpy Tutorial</a>.
Introduction
Python is a great general-purpose programming language on its own, but with the help of a few popular libraries (numpy, scipy, matplotlib) it becomes a powerful environment for scientific computing.
We expect that many of you will have some experience with Python and numpy; for the rest of you, this section will serve as a quick crash course both on the Python programming language and on the use of Python for scientific computing.
Some of you may have previous knowledge in Matlab, in which case we also recommend the numpy for Matlab users page (https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html).
In this tutorial, we will cover:
Basic Python: Basic data types (Containers, Lists, Dictionaries, Sets, Tuples), Functions, Classes
Numpy: Arrays, Array indexing, Datatypes, Array math, Broadcasting
Matplotlib: Plotting, Subplots, Images
IPython: Creating notebooks, Typical workflows
Basics of Python
Python is a high-level, dynamically typed multiparadigm programming language. Python code is often said to be almost like pseudocode, since it allows you to express very powerful ideas in very few lines of code while being very readable. As an example, here is an implementation of the classic quicksort algorithm in Python:
End of explanation
!python --version
Explanation: Python versions
This version of the notebook has been adapted to work with Python 3.6.
You can check your Python version at the command line by running python --version.
End of explanation
x = 3
x, type(x)
print(x + 1) # Addition;
print(x - 1) # Subtraction;
print(x * 2) # Multiplication;
print(x ** 2) # Exponentiation;
x += 1
print(x) # Prints "4"
x *= 2
print(x) # Prints "8"
y = 2.5
print(type(y))
print(y, y + 1, y * 2, y ** 2)
Explanation: Basic data types
Numbers
Integers and floats work as you would expect from other languages:
End of explanation
t, f = True, False
type(t)
print(type(t))
Explanation: Note that unlike certain other languages like <tt>C</tt>, <tt>C++</tt>, <em>Java</em>, or <tt>C#</tt>, Python does not have unary increment (x++) or decrement (x--) operators.
In Python 3, integers have arbitrary precision, and there is also a built-in complex number type; you can find all of the details in the documentation.
Booleans
Python implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.):
End of explanation
print(t and f) # Logical AND
print(t or f) # Logical OR
print( not t) # Logical NOT
print(t != f) # Logical XOR
Explanation: Now we let's look at the operations:
End of explanation
hello = 'hello' # String literals can use single quotes
world = "world" # or double quotes; it does not matter.
print(hello, len(hello))
hw = hello + ' ' + world # String concatenation
hw
hw12 = '%s, %s! %d' % (hello, world, 12) # sprintf style string formatting
hw12
Explanation: Strings
End of explanation
s = "hello"
print(s.capitalize()) # Capitalize a string
print(s.upper()) # Convert a string to uppercase
print(s.rjust(7)) # Right-justify a string, padding with spaces
print(s.center(7)) # Center a string, padding with spaces
print(s.replace('l', '\N{greek small letter lamda}')) # Replace all instances of one substring with another
print(' world '.strip()) # Strip leading and trailing whitespace
Explanation: String objects have a bunch of useful methods; for example:
End of explanation
xs = [3, 1, 2] # Create a list
print(xs, xs[2]) # Indexing starts at 0
print(xs[-1]) # Negative indices count from the end of the list; prints "2"
xs[2] = 'foo' # Lists can contain elements of different types
xs
xs.append('bar') # Add a new element to the end of the list
xs
x = xs.pop() # Remove and return the last element of the list
x, xs
Explanation: You can find a list of all string methods in the documentation.
Containers
Python includes several built-in container types: lists, dictionaries, sets, and tuples.
Lists
A list is the Python equivalent of an array, but is resizeable and can contain elements of different types:
End of explanation
nums = list(range(5))
print(nums)
print(nums[2:4]) # Get a slice from index 2 to 4 (exclusive)
print(nums[2:]) # Get a slice from index 2 to the end
print(nums[:2]) # Get a slice from the start to index 2 (exclusive)
print(nums[:]) # Get a slice of the whole list, creates a shallow copy
print(nums[:-1]) # Slice indices can be negative
nums[2:4] = [8, 9, 10] # Assign a new sublist to a slice
nums
Explanation: As usual, you can find all the gory details about lists in the documentation.
Slicing
In addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing:
End of explanation
animals = ['cat', 'dog', 'monkey']
for animal in animals:
print(animal)
Explanation: Loops
You can loop over the elements of a list like this:
End of explanation
animals = ['cat', 'dog', 'monkey']
for idx, animal in enumerate(animals):
print('#%d: %s' % (idx + 1, animal))
Explanation: If you want access to the index of each element within the body of a loop, use the built-in enumerate function:
End of explanation
nums = [0, 1, 2, 3, 4]
squares = []
for x in nums:
squares.append(x ** 2)
squares
Explanation: List comprehensions:
When programming, frequently we want to transform one type of data into another. As a simple example, consider the following code that computes square numbers:
End of explanation
nums = [0, 1, 2, 3, 4]
squares = [x ** 2 for x in nums]
squares
Explanation: You can make this code simpler using a list comprehension:
End of explanation
nums = [0, 1, 2, 3, 4]
even_squares = [x ** 2 for x in nums if x % 2 == 0]
even_squares
Explanation: List comprehensions can also contain conditions:
End of explanation
d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data
print(d['cat']) # Get an entry from a dictionary
print('cat' in d) # Check if a dictionary has a given key
d['fish'] = 'wet' # Set an entry in a dictionary
d['fish']
d['monkey'] # KeyError: 'monkey' not a key of d
print(d.get('monkey', 'N/A')) # Get an element with a default
print(d.get('fish', 'N/A')) # Get an element with a default
del d['fish'] # Remove an element from a dictionary
d.get('fish', 'N/A') # "fish" is no longer a key
Explanation: Dictionaries
A dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this:
End of explanation
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal in d:
legs = d[animal]
print('A %s has %d legs.' % (animal.ljust(6), legs))
Explanation: You can find all you need to know about dictionaries in the documentation.
It is easy to iterate over the keys in a dictionary:
End of explanation
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal, legs in d.items():
print('A %s has %d legs.' % (animal.ljust(6), legs))
Explanation: If you want access to keys and their corresponding values, use the <tt>items</tt> method:
End of explanation
nums = [0, 1, 2, 3, 4, 5, 6]
even_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}
even_num_to_square
Explanation: Dictionary comprehensions: These are similar to list comprehensions, but allow you to easily construct dictionaries. For example:
End of explanation
animals = {'cat', 'dog'}
print('cat' in animals) # Check if an element is in a set
print('fish' in animals)
animals.add('fish') # Add an element to a set
print('fish' in animals)
print(len(animals)) # Number of elements in a set
animals.add('cat') # Adding an element that is already in the set does nothing
print(len(animals))
animals.remove('cat') # Remove an element from a set
print(len(animals))
animals
Explanation: Sets
A set is an unordered collection of distinct elements. As a simple example, consider the following:
End of explanation
animals = {'cat', 'dog', 'fish'}
for idx, animal in enumerate(animals):
print('#%d: %s' % (idx + 1, animal))
Explanation: Loops: Iterating over a set has the same syntax as iterating over a list; however since sets are unordered, you cannot make assumptions about the order in which you visit the elements of the set:
End of explanation
from math import sqrt
{ int(sqrt(x)) for x in range(30) }
Explanation: Set comprehensions: Like lists and dictionaries, we can easily construct sets using set comprehensions:
End of explanation
d = { (x, x + 1): x for x in range(10) } # Create a dictionary with tuple keys
t = (5, 6) # Create a tuple
print(type(t))
print(d[t])
print(d[(1, 2)])
d
t[0] = 1  # Raises a TypeError: tuples are immutable
Explanation: Tuples
A tuple is an (immutable) ordered list of values. A tuple is in many ways similar to a list; one of the most important differences is that tuples can be used as keys in dictionaries and as elements of sets, while lists cannot. Here is a trivial example:
End of explanation
def sign(x):
if x > 0:
return 'positive'
elif x < 0:
return 'negative'
else:
return 'zero'
for x in [-1, 0, 1]:
print(sign(x))
Explanation: Functions
Python functions are defined using the def keyword. For example:
End of explanation
def hello(name, loud=False):
if loud:
print('HELLO, %s' % name.upper())
else:
print('Hello, %s!' % name)
hello('Bob')
hello('Fred', loud=True)
Explanation: We will often define functions to take optional keyword arguments, like this:
End of explanation
class Greeter:
# Constructor
def __init__(self, name):
self.name = name # Create an instance variable
# Instance method
def greet(self, loud=False):
if loud:
print('HELLO, %s!' % self.name.upper())
else:
print('Hello, %s' % self.name)
g = Greeter('Fred') # Construct an instance of the Greeter class
g.greet() # Call an instance method
g.greet(loud=True)
Explanation: Classes
The syntax for defining classes in Python is straightforward:
End of explanation
import numpy as np
Explanation: Numpy
Numpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays.
To use Numpy, we first need to import the numpy package:
End of explanation
a = np.array([1, 2, 3]) # Create a rank 1 array
print(type(a), a.shape, a[0], a[1], a[2])
a[0] = 5 # Change an element of the array
a
b = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array
b
print(b.shape)
print(b[0, 0], b[0, 1], b[1, 0])
Explanation: Arrays
A numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is called the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.
We can initialize numpy arrays from nested Python lists, and access elements using square brackets:
End of explanation
np.zeros((2,2)) # Create an array of all zeros
np.ones((1,2)) # Create an array of all ones
np.full((2,2), 7) # Create a constant array
np.eye(2) # Create a 2x2 identity matrix
np.random.random((2,2)) # Create an array filled with random values
Explanation: Numpy also provides many functions to create arrays:
End of explanation
import numpy as np
# Create the following rank 2 array with shape (3, 4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
print(a)
# Use slicing to pull out the subarray consisting of the first 2 rows
# and columns 1 and 2; b is the following array of shape (2, 2):
# [[2 3]
# [6 7]]
b = a[:2, 1:3]
print(b)
Explanation: Array indexing
Numpy offers several ways to index into arrays.
Slicing: Similar to Python lists, numpy arrays can be sliced. Since arrays may be multidimensional, you must specify a slice for each dimension of the array:
End of explanation
print(a[0, 1])
b[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]
print(a[0, 1])
Explanation: A slice of an array is a view into the same data, so modifying it will modify the original array.
End of explanation
# Create the following rank 2 array with shape (3, 4)
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
a
Explanation: You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array.
End of explanation
row_r1 = a[1, :] # Rank 1 view of the second row of a
row_r2 = a[1:2, :] # Rank 2 view of the second row of a
row_r3 = a[[1], :] # Rank 2 view of the second row of a
print(row_r1, row_r1.shape)
print(row_r2, row_r2.shape)
print(row_r3, row_r3.shape)
# We can make the same distinction when accessing columns of an array:
col_r1 = a[:, 1]
col_r2 = a[:, 1:2]
print(col_r1, col_r1.shape)
print()
print(col_r2, col_r2.shape)
Explanation: Two ways of accessing the data in the middle row of the array.
Mixing integer indexing with slices yields an array of lower rank,
while using only slices yields an array of the same rank as the
original array:
End of explanation
a
np.array([a[0, 0], a[1, 1], a[2, 0]])
Explanation: Integer array indexing: When you index into numpy arrays using slicing, the resulting array view will always be a subarray of the original array. In contrast, integer array indexing allows you to construct arbitrary arrays using the data from another array. Here is an example:
End of explanation
# When using integer array indexing, you can reuse the same
# element from the source array:
a[[0, 2], [1, 3]]
a[0,1], a[2,3]
# Equivalent to the previous integer array indexing example
np.array([a[0, 1], a[2, 3]])
Explanation: The following expression will return an array containing the elements a[0,1] and a[2,3].
End of explanation
# Create a new array from which we will select elements
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
a
# Create an array of indices
b = np.array([0, 2, 0, 1])
b
# Select one element from each row of a using the indices in b
a[[0, 1, 2, 3], b]
Explanation: One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix:
End of explanation
a[0,0], a[1,2], a[2,0], a[3,1]
# Mutate one element from each row of a using the indices in b
a[[0, 1, 2, 3], b] += 100
a
Explanation: same as a[0,0], a[1,2], a[2,0], a[3,1],
End of explanation
import numpy as np
a = np.array([[1,2], [3, 4], [5, 6]])
print('a = \n', a, sep='')
bool_idx = (a > 2) # Find the elements of a that are bigger than 2;
# this returns a numpy array of Booleans of the same
# shape as a, where each slot of bool_idx tells
# whether that element of a is > 2.
bool_idx
# We use boolean array indexing to construct a rank 1 array
# consisting of the elements of a corresponding to the True values
# of bool_idx
a[bool_idx]
# We can do all of the above in a single concise statement:
a[a > 2]
Explanation: Boolean array indexing: Boolean array indexing lets you pick out arbitrary elements of an array. Frequently this type of indexing is used to select the elements of an array that satisfy some condition. Here is an example:
End of explanation
x = np.array([1, 2]) # Let numpy choose the datatype
y = np.array([1.0, 2.0]) # Let numpy choose the datatype
z = np.array([1, 2], dtype=np.int64) # Force a particular datatype
x.dtype, y.dtype, z.dtype
Explanation: For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.
Datatypes
Every numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. Here is an example:
End of explanation
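One closely related operation not shown above is casting an existing array to a different dtype with astype; a quick sketch:
arr = np.array([1, 2, 3], dtype=np.int64)
arr_float = arr.astype(np.float32)  # Cast to a different datatype; returns a new array
print(arr.dtype, arr_float.dtype)   # Prints "int64 float32"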
x = np.array([[1,2],[3,4]], dtype=np.float64)
y = np.array([[5,6],[7,8]], dtype=np.float64)
# Elementwise sum
x + y
np.add(x, y)
# Elementwise difference
x - y
np.subtract(x, y)
# Elementwise product
x * y
np.multiply(x, y)
# Elementwise division
x / y
np.divide(x, y)
# Elementwise square root
np.sqrt(x)
Explanation: You can read all about numpy datatypes in the documentation.
Array math
Basic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module:
End of explanation
x = np.array([[1,2],[3,4]])
x
y = np.array([[5,6],[7,8]])
y
v = np.array([9,10])
v
w = np.array([11, 12])
w
# Inner product of vectors
print(v.dot(w))
print(np.dot(v, w))
# Matrix / vector product
print(x.dot(v))
print(np.dot(x, v))
# Matrix / matrix product; both produce the rank 2 array
print(x.dot(y))
print(np.dot(x, y))
Explanation: Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects:
End of explanation
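Since Python 3.5, the @ operator is an equivalent, often more readable, way to spell dot; a quick sketch using the arrays defined above:
print(v @ w)  # Inner product of vectors; same as v.dot(w)
print(x @ y)  # Matrix / matrix product; same as x.dot(y)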
x
np.sum(x) # Compute sum of all elements
np.sum(x, axis=0) # Compute sum of each column
np.sum(x, axis=1) # Compute sum of each row
Explanation: Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum:
End of explanation
x
x.T
v = np.array([[1,2,3]])
print(v)
print(v.T)
Explanation: You can find the full list of mathematical functions provided by numpy in the documentation.
Apart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object:
End of explanation
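Another common manipulation mentioned above is reshaping; a quick sketch:
m = np.arange(6)      # array([0, 1, 2, 3, 4, 5])
m2 = m.reshape(2, 3)  # Same data viewed with a new (2, 3) shape
print(m2.shape)       # Prints "(2, 3)"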
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
x
v = np.array([1, 0, 1])
v
Explanation: Broadcasting
Broadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array.
For example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this:
End of explanation
y = np.empty_like(x)
y
# Add the vector v to each row of the matrix x with an explicit loop
for i in range(4):
y[i, :] = x[i, :] + v
y
Explanation: Create an empty matrix with the same shape as x. The elements of this matrix are initialized arbitrarily.
End of explanation
vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other
vv
y = x + vv # Add x and vv elementwise
y
Explanation: This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this:
End of explanation
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
y = x + v # Add v to each row of x using broadcasting
y
Explanation: Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting:
End of explanation
# Compute outer product of vectors
v = np.array([1,2,3]) # v has shape (3,)
w = np.array([4,5]) # w has shape (2,)
# To compute an outer product, we first reshape v to be a column
# vector of shape (3, 1); we can then broadcast it against w to yield
# an output of shape (3, 2), which is the outer product of v and w:
np.reshape(v, (3, 1)) * w
# Add a vector to each row of a matrix
x = np.array([[1,2,3], [4,5,6]])
# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),
# giving the following matrix:
x + v
Explanation: The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.
Broadcasting two arrays together follows these rules:
If the arrays do not have the same rank, prepend the shape of the lower rank array with 1s until both shapes have the same length.
The two arrays are said to be compatible in a dimension if they have the same size in the dimension, or if one of the arrays has size 1 in that dimension.
The arrays can be broadcast together if they are compatible in all dimensions.
After broadcasting, each array behaves as if it had shape equal to the elementwise maximum of shapes of the two input arrays.
In any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves as if it were copied along that dimension
If this explanation does not make sense, try reading the explanation from the documentation or this explanation.
Functions that support broadcasting are known as universal functions. You can find the list of all universal functions in the documentation.
Here are some applications of broadcasting:
End of explanation
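Before looking at those applications, note that you can check whether two shapes are compatible, and what the broadcast shape would be, without doing any arithmetic; a quick sketch:
# np.broadcast applies the rules above without computing a result
print(np.broadcast(np.empty((4, 3)), np.empty(3)).shape)       # (4, 3)
print(np.broadcast(np.empty((3, 1)), np.empty((1, 2))).shape)  # (3, 2)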
(x.T + w).T
Explanation: Add a vector to each column of a matrix
x has shape (2, 3) and w has shape (2,).
If we transpose x then it has shape (3, 2) and can be broadcast
against w to yield a result of shape (3, 2); transposing this result
yields the final result of shape (2, 3) which is the matrix x with
the vector w added to each column. Gives the following matrix:
End of explanation
x + np.reshape(w, (2, 1))
Explanation: Another solution is to reshape w to be a column vector of shape (2, 1);
we can then broadcast it directly against x to produce the same
output.
End of explanation
x * 2
Explanation: Multiply a matrix by a constant:
x has shape (2, 3). Numpy treats scalars as arrays of shape ();
these can be broadcast together to shape (2, 3), producing the
following array:
End of explanation
import matplotlib.pyplot as plt
Explanation: Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.
This brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more about numpy.
Matplotlib
Matplotlib is a plotting library. In this section give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.
End of explanation
%matplotlib inline
Explanation: By running this special iPython command, we will be displaying plots inline:
End of explanation
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
# Plot the points using matplotlib
plt.plot(x, y)
Explanation: Plotting
The most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example:
End of explanation
y_sin = np.sin(x)
y_cos = np.cos(x)
# Plot the points using matplotlib
plt.plot(x, y_sin)
plt.plot(x, y_cos)
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Sine and Cosine')
plt.legend(['Sine', 'Cosine'])
Explanation: With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels:
End of explanation
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
# Show the figure.
plt.show()
Explanation: Subplots
You can plot different things in the same figure using the subplot function. Here is an example:
End of explanation |
1,847 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Utility-Fairness Tradeoff
In this post, I'll be taking a dive into the capabilities of themis_ml as a tool to measure and mitigate discriminatory patterns in training data and the predictions made by machine learning algorithms trained for the purposes of socially sensitive decision processes.
The overall goal of this research is to come up with a reasonable way to
think about how to make machine learning algorithms more fair. While the
mathematical formalization of fairness is not sufficient to solve the
problem of discrimination, our ability to understand and articulate
what it means for an algorithm to be fair is a step in the right direction.
Since the "discrimination" is an value-laden term in this context, I'll
refer to the opposite of fairness as potential discrimination (PD)
since the any socially biased patterns we'll be measuring in the
training data did not necessarily arise from discriminatory processes.
I'll be using the German Credit data, which consists of ~1000 loan
applications containing roughly 20 input variables (including foreign_worker, housing, and credit_history) and 1 binary target variable credit_risk, which is either good or bad.
In the context of a good/bad credit_risk binary prediction task and an
explicit definition of fairness, our objectives will be to
Step1: Measure Social Bias
target variable
Step2: protected class
Step3: protected class
Step4: protected class
Step6: These mean differences and confidence interval bounds suggest that
on average
Step7: It appears that the variance of normalized_mean_difference
across the 10 cross-validation folds is higher than mean_difference,
likely because the normalization factor d_max depends on the
rate of positive labels in the data.
Step8: Naive Fairness-aware Approach
Step9: Fairness-aware Method
Step10: Validation Curve
Step11: Fairness-aware Method
Step12: Fairness-aware Method
Step13: Comparison of Fairness-aware Techniques
Step14: We can make some interesting observations when comparing the results from different fairness-aware techniques. | Python Code:
from themis_ml import datasets
from themis_ml.datasets.german_credit_data_map import \
preprocess_german_credit_data
from themis_ml.metrics import mean_difference, normalized_mean_difference, \
mean_confidence_interval
german_credit = datasets.german_credit()
german_credit[
["credit_risk", "purpose", "age_in_years", "foreign_worker"]].head()
german_credit_preprocessed = (
preprocess_german_credit_data(german_credit)
# the following binary variable indicates whether someone is female or
# not since the unique values in `personal_status` are:
# 'personal_status_and_sex_female_divorced/separated/married'
# 'personal_status_and_sex_male_divorced/separated'
# 'personal_status_and_sex_male_married/widowed'
# 'personal_status_and_sex_male_single'
.assign(female=lambda df:
df["personal_status_and_sex_female_divorced/separated/married"])
# we're going to hypothesize here that young people, aged below 25,
# might be considered to have bad credit risk moreso than other groups
.assign(age_below_25=lambda df: df["age_in_years"] <= 25)
)
Explanation: The Utility-Fairness Tradeoff
In this post, I'll be taking a dive into the capabilities of themis_ml as a tool to measure and mitigate discriminatory patterns in training data and the predictions made by machine learning algorithms trained for the purposes of socially sensitive decision processes.
The overall goal of this research is to come up with a reasonable way to
think about how to make machine learning algorithms more fair. While the
mathematical formalization of fairness is not sufficient to solve the
problem of discrimination, our ability to understand and articulate
what it means for an algorithm to be fair is a step in the right direction.
Since "discrimination" is a value-laden term in this context, I'll
refer to the opposite of fairness as potential discrimination (PD),
since any socially biased patterns we'll be measuring in the
training data did not necessarily arise from discriminatory processes.
I'll be using the German Credit data, which consists of ~1000 loan
applications containing roughly 20 input variables (including foreign_worker, housing, and credit_history) and 1 binary target variable credit_risk, which is either good or bad.
In the context of a good/bad credit_risk binary prediction task and an
explicit definition of fairness, our objectives will be to:
Measure the degree of discrimination in the dataset with respect to some
discrimination metric and protected class.
Establish a baseline performance level with respect to utility and fairness
metrics with models trained on a fairness-unaware machine learning pipeline.
Measure and compare the baseline metrics with fairness aware models.
Load Data
End of explanation
credit_risk = german_credit_preprocessed.credit_risk
credit_risk.value_counts()
Explanation: Measure Social Bias
target variable: credit_risk
1 = low risk (good)
0 = high risk (bad)
End of explanation
is_female = german_credit_preprocessed.female
is_female.value_counts()
def report_metric(metric, mean_diff, lower, upper):
print("{metric}: {md:0.02f} - 95% CI [{lower:0.02f}, {upper:0.02f}]"
.format(metric=metric, md=mean_diff, lower=lower, upper=upper))
report_metric(
"mean difference",
*map(lambda x: x * 100, mean_difference(credit_risk, is_female)))
report_metric(
"normalized mean difference",
*map(lambda x: x * 100, normalized_mean_difference(credit_risk, is_female)))
Explanation: protected class: sex
advantaged group: men
disadvantaged group: women
End of explanation
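For intuition about the metric reported above: mean difference is simply the gap in positive-outcome rates between the non-protected group (s = 0) and the protected group (s = 1). Below is a minimal sketch of that point estimate; themis_ml additionally returns a confidence interval, so this is an illustration rather than its exact implementation.
import numpy as np
def mean_difference_sketch(y, s):
    # positive-outcome rate of the advantaged group (s == 0)
    # minus that of the disadvantaged group (s == 1)
    y, s = np.asarray(y, dtype=float), np.asarray(s, dtype=int)
    return y[s == 0].mean() - y[s == 1].mean()
# should come out to roughly 0.07 for (credit_risk, is_female) on this dataset
print(mean_difference_sketch(credit_risk, is_female))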
is_foreign = german_credit_preprocessed.foreign_worker
is_foreign.value_counts()
report_metric(
"mean difference",
*map(lambda x: x * 100, mean_difference(credit_risk, is_foreign)))
report_metric(
"normalized mean difference",
*map(lambda x: x * 100, normalized_mean_difference(credit_risk, is_foreign)))
Explanation: protected class: immigration status
advantaged group: citizen worker
disadvantaged group: foreign worker
End of explanation
age_below_25 = german_credit_preprocessed.age_below_25
age_below_25.value_counts()
report_metric(
"mean difference",
*map(lambda x: x * 100, mean_difference(credit_risk, age_below_25)))
report_metric(
"normalized mean difference",
*map(lambda x: x * 100, normalized_mean_difference(credit_risk, age_below_25)))
Explanation: protected class: age
advantaged group: age above 25
disadvantaged group: age below 25
End of explanation
import itertools
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold, RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import (
accuracy_score, roc_auc_score, f1_score)
# specify feature set. Note that we're excluding the `is_female`
# and `age_below_25` columns that we created above.
feature_set_1 = [
'duration_in_month',
'credit_amount',
'installment_rate_in_percentage_of_disposable_income',
'present_residence_since',
'age_in_years',
'number_of_existing_credits_at_this_bank',
'number_of_people_being_liable_to_provide_maintenance_for',
'status_of_existing_checking_account',
'savings_account/bonds',
'present_employment_since',
'job',
'telephone',
'foreign_worker',
'credit_history_all_credits_at_this_bank_paid_back_duly',
'credit_history_critical_account/other_credits_existing_not_at_this_bank',
'credit_history_delay_in_paying_off_in_the_past',
'credit_history_existing_credits_paid_back_duly_till_now',
'credit_history_no_credits_taken/all_credits_paid_back_duly',
'purpose_business',
'purpose_car_(new)',
'purpose_car_(used)',
'purpose_domestic_appliances',
'purpose_education',
'purpose_furniture/equipment',
'purpose_others',
'purpose_radio/television',
'purpose_repairs',
'purpose_retraining',
'personal_status_and_sex_female_divorced/separated/married',
'personal_status_and_sex_male_divorced/separated',
'personal_status_and_sex_male_married/widowed',
'personal_status_and_sex_male_single',
'other_debtors/guarantors_co-applicant',
'other_debtors/guarantors_guarantor',
'other_debtors/guarantors_none',
'property_building_society_savings_agreement/life_insurance',
'property_car_or_other',
'property_real_estate',
'property_unknown/no_property',
'other_installment_plans_bank',
'other_installment_plans_none',
'other_installment_plans_stores',
'housing_for free',
'housing_own',
'housing_rent',
]
N_SPLITS = 10
N_REPEATS = 5
RANDOM_STATE = 1000
def get_estimator_name(e):
return "".join([x for x in str(type(e)).split(".")[-1]
if x.isalpha()])
def get_grid_params(grid_params_dict):
    """Get outer product of grid search parameters."""
return [
dict(params) for params in itertools.product(
*[[(k, v_i) for v_i in v] for
k, v in grid_params_dict.items()])]
def fit_with_s(estimator):
has_relabeller = getattr(estimator, "relabeller", None) is not None
child_estimator = getattr(estimator, "estimator", None)
estimator_fit_with_s = getattr(estimator, "S_ON_FIT", False)
child_estimator_fit_with_s = getattr(child_estimator, "S_ON_FIT", False)
return has_relabeller or estimator_fit_with_s or\
child_estimator_fit_with_s
def predict_with_s(estimator):
estimator_pred_with_s = getattr(estimator, "S_ON_PREDICT", False)
child_estimator = getattr(estimator, "estimator", None)
return estimator_pred_with_s or \
getattr(child_estimator, "S_ON_PREDICT", False)
def cross_validation_experiment(estimators, X, y, s, s_name, verbose=True):
msg = "Training models: protected_class = %s" % s_name
if verbose:
print(msg)
print("-" * len(msg))
performance_scores = []
# stratified groups tries to balance out y and s
groups = [i + j for i, j in
              zip(y.astype(str), s.astype(str))]
cv = RepeatedStratifiedKFold(
n_splits=N_SPLITS,
n_repeats=N_REPEATS,
random_state=RANDOM_STATE)
for e_name, e in estimators:
if verbose:
print("%s, fold:" % e_name),
for i, (train, test) in enumerate(cv.split(X, y, groups=groups)):
if verbose:
print(i),
# create train and validation fold partitions
X_train, X_test = X[train], X[test]
y_train, y_test = y[train], y[test]
s_train, s_test = s[train], s[test]
# fit model and generate train and test predictions
if fit_with_s(e):
e.fit(X_train, y_train, s_train)
else:
e.fit(X_train, y_train)
train_pred_args = (X_train, s_train) if predict_with_s(e) \
else (X_train, )
test_pred_args = (X_test, s_test) if predict_with_s(e) \
else (X_test, )
train_pred_prob = e.predict_proba(*train_pred_args)[:, 1]
train_pred = e.predict(*train_pred_args)
test_pred_prob = e.predict_proba(*test_pred_args)[:, 1]
test_pred = e.predict(*test_pred_args)
# train scores
performance_scores.append([
s_name, e_name, i, "train",
# regular metrics
roc_auc_score(y_train, train_pred_prob),
# fairness metrics
mean_difference(train_pred, s_train)[0],
])
# test scores
performance_scores.append([
s_name, e_name, i, "test",
# regular metrics
roc_auc_score(y_test, test_pred_prob),
# fairness metrics
mean_difference(test_pred, s_test)[0]
])
if verbose:
print("")
if verbose:
print("")
return pd.DataFrame(
performance_scores,
columns=[
"protected_class", "estimator", "cv_fold", "fold_type",
"auc", "mean_diff"])
# training and target data
X = german_credit_preprocessed[feature_set_1].values
y = german_credit_preprocessed["credit_risk"].values
s_female = german_credit_preprocessed["female"].values
s_foreign = german_credit_preprocessed["foreign_worker"].values
s_age_below_25 = german_credit_preprocessed["age_below_25"].values
LOGISTIC_REGRESSION = LogisticRegression(
penalty="l2", C=0.001, class_weight="balanced")
DECISION_TREE_CLF = DecisionTreeClassifier(
criterion="entropy", max_depth=10, min_samples_leaf=10, max_features=10,
class_weight="balanced")
RANDOM_FOREST_CLF = RandomForestClassifier(
criterion="entropy", n_estimators=50, max_depth=10, max_features=10,
min_samples_leaf=10, class_weight="balanced")
estimators = [
("LogisticRegression", LOGISTIC_REGRESSION),
("DecisionTree", DECISION_TREE_CLF),
("RandomForest", RANDOM_FOREST_CLF)
]
experiment_baseline_female = cross_validation_experiment(
estimators, X, y, s_female, "female")
experiment_baseline_foreign = cross_validation_experiment(
estimators, X, y, s_foreign, "foreign_worker")
experiment_baseline_age_below_25 = cross_validation_experiment(
estimators, X, y, s_age_below_25, "age_below_25")
import seaborn as sns
import matplotlib.pyplot as plt
% matplotlib inline
UTILITY_METRICS = ["auc"]
FAIRNESS_METRICS = ["mean_diff"]
def summarize_experiment_results(experiment_df):
return (
experiment_df
.drop("cv_fold", axis=1)
.groupby(["protected_class", "estimator", "fold_type"])
.mean())
experiment_baseline = pd.concat([
experiment_baseline_female,
experiment_baseline_foreign,
experiment_baseline_age_below_25
])
experiment_baseline_summary = summarize_experiment_results(
experiment_baseline)
experiment_baseline_summary.query("fold_type == 'test'")
baseline_df = (
experiment_baseline
.query("fold_type == 'test' and estimator == 'LogisticRegression'")
)
sns.factorplot(y="protected_class", x="mean_diff", orient="h", data=baseline_df,
size=4, aspect=2, join=False)
protected_classes = ["female", "foreign_worker", "age_below_25"]
for s in protected_classes:
mean_ci = mean_confidence_interval(
        baseline_df.query("protected_class == @s").mean_diff.dropna())
print(
"grand_mean(mean_diff) for %s - mean: %0.03f, 95%% CI(%0.03f, %0.03f)" %
(s, mean_ci[0], mean_ci[1], mean_ci[2]))
def plot_experiment_results(experiment_results):
return (
experiment_results
.query("fold_type == 'test'")
.drop(["fold_type", "cv_fold"], axis=1)
.pipe(pd.melt, id_vars=["protected_class", "estimator"],
var_name="metric", value_name="score")
.pipe((sns.factorplot, "data"), y="metric",
x="score", hue="estimator", col="protected_class", col_wrap=3,
size=3.5, aspect=1.2, join=False, dodge=0.4))
plot_experiment_results(experiment_baseline);
Explanation: These mean differences and confidence interval bounds suggest that
on average:
men have "good" credit risk at a 7.48% higher rate than women,
with a lower bound of 1.35% and upper bound of 13.61%.
citizen workers have "good" credit risk at a 19.93% higher rate
than foreign workers, with a lower bound of 4.91% and upper
bound of 34.94%.
people above the age of 25 have "good" credit risk at a
14.94% higher rate than those below 25 with a lower bound of 8.97%
and upper bound of 25.61%.
Establish Baseline Metrics
Suppose that Unjust Bank wants to use these data to train a
machine learning algorithm to classify new observations into the
"good credit risk"/"bad credit risk" buckets.
In scenario 1, let's also suppose that the data scientists at
Unjust Bank are using typical, fairness-unaware modeling techniques.
Furthermore, they give absolutely no thought into what inputs
go into the learning process. Using this kitchen sink approach, they
plan on using variables like sex, age_below_25, and foreign_worker
to learn the classifier.
However, a rogue element in the data science team is interested
in at least measuring the potentially discriminatory (PD) patterns
in the learned algorithms, so in addition to measure performance
with metrics like accuracy or ROC area under the curve, also
measures the degree to which the algorithm generates PD predictions
that favor one social group over another.
Procedure
Specify model hyperparameter settings for training models.
Partition the training data into 10 validation folds.
For each of the validation folds, train model on the rest of the
data on each of the hyperparameter settings.
Evaluate the performance of the model on the validation fold.
Pick model with the best average performance to deploy to
production.
Below we use RepeatedStratifiedKFold so that we can partition our
data according to the protected class of interest and train the
following models:
LogisticRegression
DecisionTreeClassifier
RandomForest
End of explanation
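One small note on the helpers above: get_grid_params is defined but never exercised in this notebook. A quick usage sketch with made-up hyperparameter values shows what it returns (the ordering of the dicts may vary):
get_grid_params({"C": [0.1, 1.0], "class_weight": [None, "balanced"]})
# -> a list with the four combinations, e.g.
# [{'C': 0.1, 'class_weight': None}, {'C': 0.1, 'class_weight': 'balanced'},
#  {'C': 1.0, 'class_weight': None}, {'C': 1.0, 'class_weight': 'balanced'}]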
from IPython.display import Markdown, display
def print_best_metrics(experiment_results, protected_classes):
for pclass in protected_classes:
msg = "#### protected class = %s:" % pclass
display(Markdown(msg))
exp_df = experiment_results[
(experiment_results["protected_class"] == pclass) &
(experiment_results["fold_type"] == "test")]
msg = ""
for m in UTILITY_METRICS:
utility_msg = \
"- best utility measured by %s (higher is better)" % m
best_model = (
exp_df
.sort_values(m, ascending=False)
.drop(["fold_type"], axis=1)
.iloc[0][[m, "estimator"]])
msg += utility_msg + " = %0.03f: %s\n" % \
(best_model[0], best_model[1])
for m in FAIRNESS_METRICS:
fairness_msg = \
"- best fairness measured by %s (lower is better)" % m
best_model = (
exp_df
# score closer to zero is better
.assign(abs_measure=lambda df: df[m].abs())
.sort_values("abs_measure")
.drop(["abs_measure", "fold_type"], axis=1)
.iloc[0][[m, "estimator"]])
msg += fairness_msg + " = %0.03f: %s\n" % \
(best_model[0], best_model[1])
display(Markdown(msg))
print_best_metrics(
experiment_baseline_summary.reset_index(),
["female", "foreign_worker", "age_below_25"])
Explanation: It appears that the variance of normalized_mean_difference
across the 10 cross-validation folds is higher than mean_difference,
likely because the normalization factor d_max depends on the
rate of positive labels in the data.
End of explanation
# create feature sets that remove variables with protected class information
feature_set_no_sex = [
f for f in feature_set_1 if
f not in [
'personal_status_and_sex_female_divorced/separated/married',
'personal_status_and_sex_male_divorced/separated',
'personal_status_and_sex_male_married/widowed',
'personal_status_and_sex_male_single']]
feature_set_no_foreign = [f for f in feature_set_1 if f != "foreign_worker"]
feature_set_no_age = [f for f in feature_set_1 if f != "age_in_years"]
# training and target data
X_no_sex = german_credit_preprocessed[feature_set_no_sex].values
X_no_foreign = german_credit_preprocessed[feature_set_no_foreign].values
X_no_age = german_credit_preprocessed[feature_set_no_age].values
experiment_naive_female = cross_validation_experiment(
estimators, X_no_sex, y, s_female, "female")
experiment_naive_foreign = cross_validation_experiment(
estimators, X_no_foreign, y, s_foreign, "foreign_worker")
experiment_naive_age_below_25 = cross_validation_experiment(
estimators, X_no_age, y, s_age_below_25, "age_below_25")
experiment_naive = pd.concat([
experiment_naive_female,
experiment_naive_foreign,
experiment_naive_age_below_25
])
experiment_naive_summary = summarize_experiment_results(experiment_naive)
experiment_naive_summary.query("fold_type == 'test'")
plot_experiment_results(experiment_naive);
print_best_metrics(
experiment_naive_summary.reset_index(),
["female", "foreign_worker", "age_below_25"])
Explanation: Naive Fairness-aware Approach: Remove Protected Class
The naive approach to training fairness-aware models is to remove the
protected class variables from the input data. While at face value this
approach might seem like a good measure to prevent the model from learning
the discriminatory patterns in the raw data, it doesn't preclude the
possibility that other non-protected class variables highly correlate with
protected class variables.
A well-known example of this is how zipcode correlates with race, so
zipcode essentially serves as a proxy for race in the training data
even if race is excluded from the input data (see the small simulated sketch below).
End of explanation
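The simulated sketch referenced above (purely illustrative, not derived from the German Credit data): a single feature that merely correlates with the protected attribute is enough for a classifier to recover that attribute, even though the attribute itself was never provided.
import numpy as np
from sklearn.linear_model import LogisticRegression
rng = np.random.RandomState(0)
n = 5000
s_sim = rng.binomial(1, 0.5, size=n)            # simulated protected attribute
proxy = s_sim + rng.normal(0, 0.5, size=n)      # correlated "non-protected" feature
proxy_clf = LogisticRegression().fit(proxy.reshape(-1, 1), s_sim)
# the proxy alone recovers the protected attribute well above chance
print(proxy_clf.score(proxy.reshape(-1, 1), s_sim))   # roughly 0.84 on this simulation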
from sklearn.base import clone
from themis_ml.preprocessing.relabelling import Relabeller
from themis_ml.meta_estimators import FairnessAwareMetaEstimator
# here we use the relabeller class to create new y vectors for each of the
# protected class contexts.
# we also use the FairnessAwareMetaEstimator as a convenience class to
# compose together different fairness-aware methods. This wraps around the
# estimators that we defined in the previous section.
relabeller = Relabeller()
relabelling_estimators = [
(name, FairnessAwareMetaEstimator(e, relabeller=relabeller))
for name, e in estimators]
experiment_relabel_female = cross_validation_experiment(
relabelling_estimators, X_no_sex, y, s_female, "female")
experiment_relabel_foreign = cross_validation_experiment(
relabelling_estimators, X_no_foreign, y, s_foreign, "foreign_worker")
experiment_relabel_age_below_25 = cross_validation_experiment(
relabelling_estimators, X_no_age, y, s_age_below_25, "age_below_25")
experiment_relabel = pd.concat([
experiment_relabel_female,
experiment_relabel_foreign,
experiment_relabel_age_below_25
])
experiment_relabel_summary = summarize_experiment_results(experiment_relabel)
experiment_relabel_summary.query("fold_type == 'test'")
plot_experiment_results(experiment_relabel);
print_best_metrics(
experiment_relabel_summary.reset_index(),
["female", "foreign_worker", "age_below_25"])
Explanation: Fairness-aware Method: Relabelling
In this and the following fairness-aware modeling runs, we exclude the
protected class variables as in the Naive Fairness-aware Approach
section in addition to the explicit fairness-aware technique.
End of explanation
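For context on the Relabeller used above, the relabelling ("massaging") idea from the fairness literature works roughly as sketched below: rank observations with a preliminary classifier, then flip just enough labels to equalize the positive rates of the two groups before fitting the downstream model. This is a conceptual illustration under my own simplifications, not themis_ml's exact Relabeller implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
def massage_labels_sketch(X, y, s):
    # promote the most promising disadvantaged negatives and demote the least
    # promising advantaged positives until the groups' positive rates match
    y, s = np.asarray(y).copy(), np.asarray(s).astype(bool)
    scores = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    n0, n1 = float((~s).sum()), float(s.sum())
    disc = y[~s].mean() - y[s].mean()              # label-level mean difference
    n_flip = int(round(disc * n0 * n1 / len(y)))   # flips needed per group
    if n_flip <= 0:
        return y
    promote = np.where(s & (y == 0))[0]            # disadvantaged negatives,
    promote = promote[np.argsort(-scores[promote])][:n_flip]   # highest-scoring first
    demote = np.where(~s & (y == 1))[0]            # advantaged positives,
    demote = demote[np.argsort(scores[demote])][:n_flip]       # lowest-scoring first
    y[promote], y[demote] = 1, 0
    return y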
LOGREG_L2_PARAM = [
3, 1, 3e-1, 1e-1, 3e-2, 1e-2, 3e-3, 1e-3,
3e-4, 1e-4, 3e-5, 1e-5, 3e-6, 1e-6, 3e-7, 1e-7, 3e-8, 1e-8]
def validation_curve_experiment(
estimator_name, estimator, param_name, param_list, update_func):
validaton_curve_experiment = []
for param in param_list:
e = clone(estimator)
e = update_func(e, param_name, param)
estimators = [(estimator_name, e)]
experiment_relabel_female = cross_validation_experiment(
estimators, X_no_sex, y, s_female, "female",
verbose=False)
experiment_relabel_foreign = cross_validation_experiment(
estimators, X_no_foreign, y, s_foreign, "foreign_worker",
verbose=False)
experiment_relabel_age_below_25 = cross_validation_experiment(
estimators, X_no_age, y, s_age_below_25, "age_below_25",
verbose=False)
validaton_curve_experiment.extend(
[experiment_relabel_female.assign(**{param_name: param}),
experiment_relabel_foreign.assign(**{param_name: param}),
experiment_relabel_age_below_25.assign(**{param_name: param})])
return pd.concat(validaton_curve_experiment)
def update_relabeller(e, param_name, param):
e = clone(e)
child_estimator = clone(e.estimator)
child_estimator.set_params(**{param_name: param})
e.set_params(estimator=child_estimator)
return e
relabel_validaton_curve_experiment = validation_curve_experiment(
"LogisticRegression", FairnessAwareMetaEstimator(
LOGISTIC_REGRESSION, relabeller=Relabeller()),
"C", LOGREG_L2_PARAM, update_relabeller)
def validation_curve_plot(x, y, **kwargs):
ax = plt.gca()
lw = 2.5
data = kwargs.pop("data")
train_data = data.query("fold_type == 'train'")
test_data = data.query("fold_type == 'test'")
grp_data_train = train_data.groupby(x)
grp_data_test = test_data.groupby(x)
mean_data_train = grp_data_train[y].mean()
mean_data_test = grp_data_test[y].mean()
std_data_train = grp_data_train[y].std()
std_data_test = grp_data_test[y].std()
ax.semilogx(mean_data_train.index, mean_data_train,
label="train", color="#848484", lw=lw)
ax.semilogx(mean_data_test.index, mean_data_test,
label="test", color="#ae33bf", lw=lw)
# # Add error region
ax.fill_between(mean_data_train.index, mean_data_train - std_data_train,
mean_data_train + std_data_train, alpha=0.2,
color="darkorange", lw=lw)
ax.fill_between(mean_data_test.index, mean_data_test - std_data_test,
mean_data_test + std_data_test, alpha=0.1,
color="navy", lw=lw)
relabel_validaton_curve_experiment_df = (
relabel_validaton_curve_experiment
.pipe(pd.melt,
id_vars=["protected_class", "estimator", "cv_fold", "fold_type",
"C"],
value_vars=["auc", "mean_diff"],
var_name="metric", value_name="score")
.assign(
protected_class=lambda df: df.protected_class.str.replace("_", " "),
metric=lambda df: df.metric.str.replace("_", " "))
.rename(columns={"score": "mean score"})
)
# relabel_validaton_curve_experiment_df
g = sns.FacetGrid(
relabel_validaton_curve_experiment_df,
row="protected_class",
col="metric", size=2.5, aspect=1.1, sharey=False,
margin_titles=False)
g = g.map_dataframe(validation_curve_plot, "C", "mean score")
g.set_titles(template="{row_name}, {col_name}")
# g.add_legend()
# g.add_legend(bbox_to_anchor=(0.275, 0.91))
g.add_legend(bbox_to_anchor=(0.28, 0.9))
g.fig.tight_layout()
g.savefig("IMG/logistic_regression_validation_curve.png");
Explanation: Validation Curve: Logistic Regression
End of explanation
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from themis_ml.linear_model.counterfactually_fair_models import \
LinearACFClassifier
LINEAR_REG = LinearRegression()
DECISION_TREE_REG = DecisionTreeRegressor(max_depth=10, min_samples_leaf=10)
RANDOM_FOREST_REG = RandomForestRegressor(
n_estimators=50, max_depth=10, min_samples_leaf=10)
# use the estimators defined above to define the linear additive
# counterfactually fair models
linear_acf_estimators = [
(name, LinearACFClassifier(
target_estimator=e,
binary_residual_type="absolute"))
for name, e in estimators]
experiment_acf_female = cross_validation_experiment(
linear_acf_estimators, X_no_sex, y, s_female, "female")
experiment_acf_foreign = cross_validation_experiment(
linear_acf_estimators, X_no_foreign, y, s_foreign, "foreign_worker")
experiment_acf_age_below_25 = cross_validation_experiment(
linear_acf_estimators, X_no_age, y, s_age_below_25, "age_below_25")
experiment_acf = pd.concat([
experiment_acf_female,
experiment_acf_foreign,
experiment_acf_age_below_25
])
experiment_acf_summary = summarize_experiment_results(experiment_acf)
experiment_acf_summary.query("fold_type == 'test'")
plot_experiment_results(experiment_acf);
print_best_metrics(
experiment_acf_summary.reset_index(),
["female", "foreign_worker", "age_below_25"])
Explanation: Fairness-aware Method: Additive Counterfactually Fair Model
End of explanation
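As background on the LinearACFClassifier used above, the additive counterfactually fair idea (as I understand it) is to strip from each input feature the part that is linearly explained by the protected attribute and train the classifier on the residuals. A minimal sketch, not the library's exact implementation:
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
def linear_acf_sketch(X, y, s):
    # regress every feature on s and keep only the residuals, so the
    # downstream classifier cannot use the linearly s-dependent part of X
    s_col = np.asarray(s, dtype=float).reshape(-1, 1)
    residual_model = LinearRegression().fit(s_col, X)
    residuals = X - residual_model.predict(s_col)
    clf = LogisticRegression().fit(residuals, y)
    return residual_model, clf
# e.g. residual_model, clf = linear_acf_sketch(X_no_sex, y, s_female)
# note that new inputs would have to be residualized with the same fitted regression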
from themis_ml.postprocessing.reject_option_classification import \
SingleROClassifier
# use the estimators defined above to define the single
# reject-option classifiers
single_roc_clf_estimators = [
(name, SingleROClassifier(estimator=e))
for name, e in estimators]
experiment_single_roc_female = cross_validation_experiment(
single_roc_clf_estimators, X_no_sex, y, s_female, "female")
experiment_single_roc_foreign = cross_validation_experiment(
single_roc_clf_estimators, X_no_foreign, y, s_foreign, "foreign_worker")
experiment_single_roc_age_below_25 = cross_validation_experiment(
single_roc_clf_estimators, X_no_age, y, s_age_below_25, "age_below_25")
experiment_single_roc = pd.concat([
experiment_single_roc_female,
experiment_single_roc_foreign,
experiment_single_roc_age_below_25
])
experiment_single_roc_summary = summarize_experiment_results(
experiment_single_roc)
experiment_single_roc_summary.query("fold_type == 'test'")
plot_experiment_results(experiment_single_roc);
print_best_metrics(
    experiment_single_roc_summary.reset_index(),
    ["female", "foreign_worker", "age_below_25"])
Explanation: Fairness-aware Method: Reject-option Classification
End of explanation
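For intuition about the SingleROClassifier used above, reject-option classification flips only the low-confidence predictions near the decision boundary in favor of the disadvantaged group. A rough sketch follows; the critical-region width theta and the exact rule in themis_ml may differ.
import numpy as np
def reject_option_sketch(pred_proba, s, theta=0.1):
    # outside the critical region, keep the ordinary 0.5-threshold prediction;
    # inside it, favor the disadvantaged group (s == 1) and disfavor the rest
    pred_proba, s = np.asarray(pred_proba), np.asarray(s).astype(bool)
    pred = (pred_proba > 0.5).astype(int)
    critical = np.abs(pred_proba - 0.5) <= theta
    pred[critical & s] = 1
    pred[critical & ~s] = 0
    return pred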
compare_experiments = (
pd.concat([
experiment_baseline.assign(experiment="B"),
experiment_naive.assign(experiment="RPA"),
experiment_relabel.assign(experiment="RTV"),
experiment_acf.assign(experiment="CFM"),
experiment_single_roc.assign(experiment="ROC")
])
.assign(
protected_class=lambda df: df.protected_class.str.replace("_", " "),
)
)
compare_experiments.head()
comparison_palette = sns.color_palette("Dark2", n_colors=8)
def compare_experiment_results_multiple_model(experiment_results):
g = (
experiment_results
.query("fold_type == 'test'")
.drop(["cv_fold"], axis=1)
.pipe(pd.melt, id_vars=["experiment", "protected_class", "estimator",
"fold_type"],
var_name="metric", value_name="score")
.assign(
metric=lambda df: df.metric.str.replace("_", " "))
.pipe((sns.factorplot, "data"), y="experiment",
x="score", hue="metric",
col="protected_class", row="estimator",
join=False, size=3, aspect=1.1, dodge=0.3,
palette=comparison_palette, margin_titles=True, legend=False))
g.set_axis_labels("mean score (95% CI)")
for ax in g.axes.ravel():
ax.set_ylabel("")
plt.setp(ax.texts, text="")
g.set_titles(row_template="{row_name}", col_template="{col_name}")
plt.legend(title="metric", loc=9, bbox_to_anchor=(-0.65, -0.4))
g.fig.legend(loc=9, bbox_to_anchor=(0.5, -0.3))
g.fig.tight_layout()
g.savefig("IMG/fairness_aware_comparison.png", dpi=500);
compare_experiment_results_multiple_model(
compare_experiments.query("estimator == 'LogisticRegression'"));
Explanation: Comparison of Fairness-aware Techniques
End of explanation
from scipy import stats
def compute_corr_pearson(x, y, ci=0.95):
corr = stats.pearsonr(x, y)
z = np.arctanh(corr[0])
sigma = (1 / ((len(x) - 3) ** 0.5))
cint = z + np.array([-1, 1]) * sigma * stats.norm.ppf((1 + ci ) / 2)
return corr, np.tanh(cint)
black_palette = sns.color_palette(["#222222"])
def plot_utility_fairness_tradeoff(x, y, **kwargs):
ax = plt.gca()
data = kwargs.pop("data")
sns_ax = sns.regplot(x=x, y=y, data=data, scatter_kws={'alpha':0.5},
**kwargs)
(corr, p_val), ci = compute_corr_pearson(data[x], data[y])
r_text = 'r = %0.02f (%0.02f, %0.02f)' % \
(corr, ci[0], ci[1])
sns_ax.annotate(
r_text, xy=(0.7, 0),
xytext=(0.07, 0.91),
textcoords='axes fraction',
fontweight="bold",
fontsize=9,
color="gray"
)
bottom_padding = 0.05
top_padding = 0.5
ylim = (data[y].min() - bottom_padding, data[y].max() + top_padding)
sns_ax.set_ylim(*ylim)
g = sns.FacetGrid(
(
compare_experiments
.drop("cv_fold", axis=1)
.reset_index()
.query("fold_type == 'test'")
.rename(
columns={"mean_diff": "mean diff"})
),
col="protected_class",
row="experiment",
hue="experiment",
size=2.0, aspect=1.3, sharey=True,
palette=black_palette)
g.map_dataframe(plot_utility_fairness_tradeoff, "auc", "mean diff")
g.set_titles(template="{row_name}, {col_name}")
g.fig.tight_layout()
g.savefig("IMG/fairness_utility_tradeoff.png", dpi=500);
g = sns.FacetGrid(
(
compare_experiments
.drop("cv_fold", axis=1)
.reset_index()
.query("fold_type == 'test'")
.rename(
columns={"mean_diff": "mean diff"})
),
col="protected_class",
row="estimator",
hue="estimator",
size=3.5, aspect=1,
sharey=True, sharex=False,
palette=black_palette)
g.map_dataframe(plot_utility_fairness_tradeoff, "auc", "mean diff")
g.set_titles(template="{row_name}, {col_name}")
g.fig.tight_layout()
Explanation: We can make some interesting observations when comparing the results from different fairness-aware techniques.
End of explanation |
1,848 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Serialization - correction
Step1: Exercise 1
Step2: Etape 2
Step3: Step 3
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Serialization - correction
End of explanation
import random
values = [ [random.random() for i in range(0,20)] for _ in range(0,100000) ]
col = [ "col%d" % i for i in range(0,20) ]
import pandas
df = pandas.DataFrame( values, columns = col )
Explanation: Exercise 1: serializing a large dataframe
Step 1: build a large dataframe of random numbers
End of explanation
df.to_csv("df_text.txt", sep="\t")
df.to_pickle("df_text.bin")
Explanation: Step 2: save this dataframe in two formats, text and serialized (binary)
End of explanation
%timeit pandas.read_csv("df_text.txt", sep="\t")
%timeit pandas.read_pickle("df_text.bin")
Explanation: Step 3: measure the loading time
End of explanation |
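One might also compare the two files' sizes on disk, which is part of the usual text-versus-binary trade-off; a possible addition to the exercise (not part of the original):
import os
for name in ["df_text.txt", "df_text.bin"]:
    print("%s: %d bytes" % (name, os.path.getsize(name)))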
1,849 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'sandbox-3', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MPI-M
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
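# Purely as an illustration (hypothetical entry, not taken from any real model), a model that
# prescribes CO2 as concentrations could complete the TODO cell above along these lines:
# DOC.set_value("C")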
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
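# Again only as an illustrative sketch (hypothetical model): a configuration driven by
# prescribed total solar irradiance would record
# DOC.set_value("irradiance")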
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
1,850 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to start a Pod
In this notebook, we show you how to create a single container Pod.
Start by importing the Kubernetes module
Step1: If you are using a proxy, you can use the client Configuration to set up the host that the client should use. Otherwise read the kubeconfig file.
Step2: Pods are a stable resource in the V1 API group. Instantiate a client for that API group endpoint.
Step3: In this example, we only start one container in the Pod. The container is an instance of the V1Container class.
Step4: The specification of the Pod is made of a single container in its list.
Step5: Get existing list of Pods, before the creation of the new Pod.
Step6: You are now ready to create the Pod.
Step7: Get list of Pods, after the creation of the new Pod. Note the newly created pod with name "busybox"
Step8: Delete the Pod
You refer to the Pod by name, you need to add its namespace and pass some delete options. | Python Code:
from kubernetes import client, config
Explanation: How to start a Pod
In this notebook, we show you how to create a single container Pod.
Start by importing the Kubernetes module
End of explanation
config.load_incluster_config()
Explanation: If you are using a proxy, you can use the client Configuration to set up the host that the client should use. Otherwise read the kubeconfig file.
End of explanation
v1=client.CoreV1Api()
pod=client.V1Pod()
spec=client.V1PodSpec()
pod.metadata=client.V1ObjectMeta(name="busybox")
Explanation: Pods are a stable resource in the V1 API group. Instantiate a client for that API group endpoint.
End of explanation
container=client.V1Container()
container.image="busybox"
container.args=["sleep", "3600"]
container.name="busybox"
Explanation: In this example, we only start one container in the Pod. The container is an instance of the V1Container class.
End of explanation
spec.containers = [container]
pod.spec = spec
Explanation: The specification of the Pod is made of a single container in its list.
End of explanation
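# Side note (sketch, assuming a reasonably recent kubernetes Python client): the same Pod
# object can also be assembled in a single constructor call instead of attribute by attribute.
pod_alt = client.V1Pod(
    metadata=client.V1ObjectMeta(name="busybox"),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="busybox", image="busybox", args=["sleep", "3600"])]))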
ret = v1.list_namespaced_pod(namespace="default")
for i in ret.items:
print("%s %s %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
Explanation: Get existing list of Pods, before the creation of the new Pod.
End of explanation
v1.create_namespaced_pod(namespace="default",body=pod)
Explanation: You are now ready to create the Pod.
End of explanation
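# Optional check (sketch; read_namespaced_pod is part of the same CoreV1Api): poll the new
# Pod until it reports the Running phase, for at most ~60 seconds.
import time
for _ in range(30):
    phase = v1.read_namespaced_pod(name="busybox", namespace="default").status.phase
    print(phase)
    if phase == "Running":
        break
    time.sleep(2)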
ret = v1.list_namespaced_pod(namespace="default")
for i in ret.items:
print("%s %s %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
Explanation: Get list of Pods, after the creation of the new Pod. Note the newly created pod with name "busybox"
End of explanation
v1.delete_namespaced_pod(name="busybox", namespace="default", body=client.V1DeleteOptions())
Explanation: Delete the Pod
You refer to the Pod by name, you need to add its namespace and pass some delete options.
End of explanation |
1,851 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Final cube analysis
Step1: VISUALIZE POTENTIAL DIFFERENCE PART
Step2: Coordinates
Step3: NACS ANALYSIS
Step4: NACS visualization
Step5: Here we try to make interpolated "unified" NAC values for S_2 - S_1
Step6: DIPOLES visualization
Step7: Minimum geometry is found by getting the minimum on the ground state potential
Step8: CI geometry by taking the maximum NAC value between 0 and 1
Step9: Product/reactant catcher
Here I want to generate the cubes of 1 and 0 to catch different regions of my cube.
So, now I want to do this. I want to create several cubes with different regions (basically the cubes are of 1 and 0). A is FC region, B is PRODUCT region and C is REACTANT
Step10: HERE THE REGIONS FOR ADVANCED MASKS
Step11: Here I check the direction of the permanent dipoles.
Step12: temporary cells for last correction sign
Step13: sign flipper on extrapolated SMO cube
you used the cells below to correct NAC on the main plane... it was still flipping
Step14: Things regarding writing down the Pickle file
Step15: those cells here are used to visualize in 3d space the dipoles/nac
Step16: those to make the wall on extrapolated gamma values | Python Code:
import quantumpropagator as qp
import matplotlib.pyplot as plt
%matplotlib ipympl
plt.rcParams.update({'font.size': 8})
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from ipywidgets import interact,fixed #, interactive, fixed, interact_manual
import ipywidgets as widgets
from matplotlib import cm
import pickle
# name_data_file = '/home/alessio/n-Propagation/newExtrapolated_allCorrection.pickle'
name_data_file = '/home/alessio/n-Propagation/newExtrapolated_gammaExtrExag.pickle'
# dataDict = np.load('/home/alessio/n-Propagation/datanewoneWithNACnow.npy')[()]
# # name_data_file = '/home/alessio/n-Propagation/NAC_2_1_little_exagerated.pickle'
with open(name_data_file, "rb") as input_file:
data = pickle.load(input_file)
%load_ext Cython
# data.keys()
# name_data_file2 = 'NAC_2_1_little_exagerated.pickle'
# with open(name_data_file2, "rb") as input_file:
# data2 = pickle.load(input_file)
# name_data_file3 = 'newExtrapolated_gammaExtrExag.pickle'
# with open(name_data_file3, "rb") as input_file:
# data3 = pickle.load(input_file)
# pot = data['potCube']
# pot2= data2['potCube']
# pot3 = data3['potCube']
# np.all(pot == pot3)
Explanation: Final cube analysis
End of explanation
pot = data['potCube']
data['potCube'].shape
print(pot.shape)
qp.find_numpy_index_minumum(pot), pot[29, 28, 55, 0]
%matplotlib ipympl
pot_difference_AU = pot[15:-15,15:-15,30:-30,2] - pot[15:-15,15:-15,30:-30,3]
#phiC, gamC, theC, phiD, gamD, theD = (27,25,85, 4, 7, 24)
#pot_difference_AU = pot[phiC-phiD:phiC+phiD,gamC-gamD:gamC+gamD,theC-theD:theC+theD,2] - pot[phiC-phiD:phiC+phiD,gamC-gamD:gamC+gamD,theC-theD:theC+theD,3]
pot_difference = qp.fromHartoEv(pot_difference_AU)
print(qp.find_numpy_index_minumum(pot_difference))
b = pd.Series(pot_difference.flatten())
b.describe()
b.hist(bins=100)
plt.close('all')
phiC, gamC, theC, phiD, gamD, theD = (27,26,85, 2, 2, 24)
pot_difference_AU = pot[phiC-phiD:phiC+phiD,gamC-gamD:gamC+gamD,theC-theD:theC+theD,2] - pot[phiC-phiD:phiC+phiD,gamC-gamD:gamC+gamD,theC-theD:theC+theD,3]
pot_difference = qp.fromHartoEv(pot_difference_AU)
print(qp.find_numpy_index_minumum(pot_difference))
b = pd.Series(pot_difference.flatten())
b.describe()
b.hist(bins=100)
%matplotlib ipympl
dp = 7
dg = 7
dt = 20
mask = pot_difference[22-dp:22+dp,22-dg:22+dg,110-dt:110+dt]
c = pd.Series(mask.flatten())
c.describe()
#c.hist()
diff_0_1_all = pot[:,:,:,1]-pot[:,:,:,0]
diff_2_3_all = pot[:,:,:,3]-pot[:,:,:,2]
diff_0_1 = np.zeros_like(diff_0_1_all) + 999
diff_0_1[15:-15,15:-15,30:-30] = diff_0_1_all[15:-15,15:-15,30:-30]
diff_2_3 = np.zeros_like(diff_2_3_all) + 999
diff_2_3[15:-15,15:-15,30:-30] = diff_2_3_all[15:-15,15:-15,30:-30]
save_pot_diff = True
dictio = {}
a = 0
if save_pot_diff:
filename = '/home/alessio/IMPORTANTS/VISUALIZE_ENERGY_DIFFERENCE/PotDiff{:04}.h5'.format(a)
# I reput the zeros out.
dictio['diff'] = qp.fromHartoEv(diff_0_1)
dictio['lab'] = 'Diff 0 1'
qp.writeH5fileDict(filename, dictio)
a = a + 1
if save_pot_diff:
filename = '/home/alessio/IMPORTANTS/VISUALIZE_ENERGY_DIFFERENCE/PotDiff{:04}.h5'.format(a)
# I reput the zeros out.
dictio['diff'] = qp.fromHartoEv(diff_2_3)
dictio['lab'] = 'Diff 2 3'
qp.writeH5fileDict(filename, dictio)
a = 0
for i in range(8):
a = a + 1
thisState = np.zeros_like(diff_0_1_all) + 999
thisState[15:-15,15:-15,30:-30] = pot[15:-15,15:-15,30:-30,i]
dictio = {}
if save_pot_diff:
filename = '/home/alessio/IMPORTANTS/VISUALIZE_ENERGY_DIFFERENCE/Energy{:04}.h5'.format(a)
dictio['diff'] = qp.fromHartoEv(thisState)
dictio['lab'] = i
qp.writeH5fileDict(filename, dictio)
Explanation: VISUALIZE POTENTIAL DIFFERENCE PART
End of explanation
from quantumpropagator import fromLabelsToFloats, labTranformA
phis_ext = labTranformA(data['phis'])
gams_ext = labTranformA(data['gams'])
thes_ext = labTranformA(data['thes'])
phiV_ext, gamV_ext, theV_ext = fromLabelsToFloats(data)
# take step
dphi = phis_ext[0] - phis_ext[1]
dgam = gams_ext[0] - gams_ext[1]
dthe = thes_ext[0] - thes_ext[1]
# take range
range_phi = phis_ext[-1] - phis_ext[0]
range_gam = gams_ext[-1] - gams_ext[0]
range_the = thes_ext[-1] - thes_ext[0]
phis = phis_ext[15:-15]
gams = gams_ext[15:-15]
thes = thes_ext[30:-30]
phiV = phiV_ext[15:-15]
gamV = gamV_ext[15:-15]
theV = theV_ext[30:-30]
header = ' Labels extr. internal extr. dq range\n'
string = 'Phi -> {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f}\nGam -> {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f}\nThe -> {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f}'
out = (header + string).format(phiV_ext[-1],phiV_ext[0],phis_ext[-1],phis_ext[0],dphi,range_phi,
gamV_ext[-1],gamV_ext[0],gams_ext[-1],gams_ext[0],dgam,range_gam,
theV_ext[-1],theV_ext[0],thes_ext[-1],thes_ext[0],dthe,range_the)
print(out)
Explanation: Coordinates
End of explanation
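# Quick sanity check on the grid sizes (sketch; assumes the 15/15/30 slicing above and should
# match the 25 x 26 x 100 inner cube hard-coded later in the Cython cell).
print(len(phis_ext), len(gams_ext), len(thes_ext))   # full extrapolated grid
print(len(phis), len(gams), len(thes))               # inner grid after slicing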
nacs = data['smoCube']
# take out zeros
NACS = nacs[15:-15,15:-15,30:-30]
# select the two states
print(NACS.shape, nacs.shape)
pL, gL, tL, sL, dL, coorL = NACS.shape
#%%time
n=10
makeGraph = True
states_to_consider = 2
if makeGraph:
for s1 in range(states_to_consider):
for s2 in range(s1):
a = np.abs(NACS[:,:,:,s1,s2,0].flatten())
binZ = [0.0000000000000001, 0.0000001, 0.000001, 0.00001,0.0001,0.001,0.01,0.1]
# thing here is the integer where I plot the bar (x position)
thing = np.arange(len(binZ)-1)
label_names = [ '{}'.format(x) for x in binZ ]
counts, bins = np.histogram(a,bins=binZ)
fig, ax0 = plt.subplots(1,1)
ax0.bar(thing,counts)
plt.xticks(thing,label_names)
plt.title('Nacs values between states {} {}'.format(s1,s2))
for xy in zip(thing, counts):
ax0.annotate('{}'.format(xy[1]), xy=xy)
cart = 0
s1 = 5
s2 = 4
p = 22
g=5
t=77
elem = np.abs(NACS[p,g,t,s1,s2,cart])
neighbors = np.abs(np.array([NACS[p+1,g,t,s1,s2,cart],
NACS[p-1,g,t,s1,s2,cart],
NACS[p,g+1,t,s1,s2,cart],
NACS[p,g-1,t,s1,s2,cart],
NACS[p,g,t+1,s1,s2,cart],
NACS[p,g,t-1,s1,s2,cart]]))
lol = neighbors - elem
differences = np.amin(lol)
print('{} {} {} {}'.format(elem, neighbors, lol, differences))
print('States({},{}) -> Cube({:2},{:2},{:2}): {:5.3e}'.format(s1,s2,p,g,t,differences))
NACS
cart = 0
for s1 in range(sL):
for s2 in range(s1):
#for p in qp.log_progress(range(pL),every=1,size=(pL)):
for p in range(1,pL-1):
for g in range(1,gL-1):
for t in range(1,tL-1):
elem = np.abs(NACS[p,g,t,s1,s2,cart])
neighbors = np.abs(np.array([NACS[p+1,g,t,s1,s2,cart],
NACS[p-1,g,t,s1,s2,cart],
NACS[p,g+1,t,s1,s2,cart],
NACS[p,g-1,t,s1,s2,cart],
NACS[p,g,t+1,s1,s2,cart],
NACS[p,g,t-1,s1,s2,cart]]))
differences = neighbors - elem
#print('{} {} {}'.format(elem, neighbors, differences))
if np.all(differences > 0.0001):
print('States({},{}) -> Cube({:2},{:2},{:2}): {:5.3e}'.format(s1,s2,p,g,t,differences))
Explanation: NACS ANALYSIS
End of explanation
# AAA is the plane at which I want to study the "only" point
AAA = NACS[10,:,:,1,2,2]
gam_That, the_That = np.unravel_index(AAA.argmin(), AAA.shape)
10, gam_That, the_That
phis[10],gams[gam_That],thes[the_That]
Explanation: NACS visualization
End of explanation
%%cython --annotate --compile-args=-fopenmp --link-args=-fopenmp --force
### #%%cython
### #%%cython --annotate
import numpy as np
cimport numpy as np
cimport cython
from cython.parallel import prange
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
@cython.nonecheck(False)
cdef void neigh(double [:,:,:,:] nacs_2_1, double [:,:,:,::1] bigger_2_1_nacs):
cdef:
int pL = 25, gL = 26, tL = 100,coorL=3
int p1,g1,t1,p2,g2,t2,coor,tuplL,pg
double thresh
thresh = 0.0001
tuplL = pL*gL
for coor in range(coorL):
#for pg in prange(tuplL, nogil=True, schedule='dynamic',num_threads=16):
for pg in range(tuplL):
p1 = pg // gL
g1 = pg % gL
for t1 in range(tL):
# for p2 in range(pL):
# for g2 in range(gL):
# for t2 in range(tL):
# if abs(nacs_2_1[p1,g1,t1,coor]) < 0.000001:
# bigger_2_1_nacs[p1,g1,t1,coor] = nacs_2_1[p1,g1,t1,coor]*100
# elif abs(nacs_2_1[p1,g1,t1,coor]) < 0.00001:
bigger_2_1_nacs[p1,g1,t1,coor] = nacs_2_1[p1,g1,t1,coor]*100
# return(bigger_2_1_nacs)
def neighbor(nacs_2_1,bigger_2_1_nacs):
return np.asarray(neigh(nacs_2_1,bigger_2_1_nacs))
print('done')
%%time
state1 = 2
state2 = 1
nacs_2_1 = NACS[:,:,:,state1,state2,:]
nacs_other = NACS[:,:,:,state2,state1,:]
print(np.all(nacs_2_1 == -nacs_other))
print(nacs_2_1.shape)
bigger_2_1_nacs = np.zeros_like(nacs_2_1)
neighbor(nacs_2_1,bigger_2_1_nacs)
saveFile = False
dictio = {}
a=0
if saveFile:
for coord in range(3):
filename = '/home/alessio/k-nokick/IMPORTANTS/VISUALIZE_NACS/newNacSmoother/Nac{:04}.h5'.format(a)
# I reput the zeros out.
external = np.pad(vector[:,:,:,coord], ((15,15),(15,15),(30,30)), 'constant')
dictio['NACS'] = external
dictio['state1'] = state1
dictio['state2'] = state2
dictio['cart'] = coord
qp.writeH5fileDict(filename, dictio)
a += 1
filename = '/home/alessio/k-nokick/IMPORTANTS/VISUALIZE_NACS/newNacSmoother/Nac{:04}.h5'.format(a)
dictio['NACS'] = np.pad(nacs_2_1[:,:,:,coord], ((15,15),(15,15),(30,30)), 'constant')
dictio['state1'] = state1
dictio['state2'] = state2
dictio['cart'] = coord
qp.writeH5fileDict(filename, dictio)
a += 1
# PUT TRUE IF YOU WANT TO EXAGERATE AND CHANGE THE NACS
do_it = False
nacs_2_1 = nacs[:,:,:,1,2,:]
bigger_2_1_nacs = np.empty_like(nacs_2_1)
#print(bigger_2_1_nacs.shape)
pL,gL,tL,coorL = bigger_2_1_nacs.shape
for p in qp.log_progress(range(pL),every=1,size=(pL)):
for g in range(gL):
for t in range(tL):
for coor in range(coorL):
elem = nacs_2_1[p,g,t,coor]
if np.abs(elem) > 0.0001:
first = 2
secon = 4
# proximity(6)
bigger_2_1_nacs[p+1,g,t,coor] = elem/first
bigger_2_1_nacs[p-1,g,t,coor] = elem/first
bigger_2_1_nacs[p,g+1,t,coor] = elem/first
bigger_2_1_nacs[p,g-1,t,coor] = elem/first
bigger_2_1_nacs[p,g,t+1,coor] = elem/first
bigger_2_1_nacs[p,g,t-1,coor] = elem/first
# Corners (8)
bigger_2_1_nacs[p+1,g+1,t+1,coor] = elem/secon # 000
bigger_2_1_nacs[p+1,g+1,t-1,coor] = elem/secon
bigger_2_1_nacs[p+1,g-1,t+1,coor] = elem/secon
bigger_2_1_nacs[p+1,g-1,t-1,coor] = elem/secon # 011
bigger_2_1_nacs[p-1,g+1,t+1,coor] = elem/secon # 000
bigger_2_1_nacs[p-1,g+1,t-1,coor] = elem/secon
bigger_2_1_nacs[p-1,g-1,t+1,coor] = elem/secon
bigger_2_1_nacs[p-1,g-1,t-1,coor] = elem/secon # 011
# Half sides (12)
bigger_2_1_nacs[p+1,g,t+1,coor] = elem/secon
bigger_2_1_nacs[p+1,g,t-1,coor] = elem/secon
bigger_2_1_nacs[p-1,g,t+1,coor] = elem/secon
bigger_2_1_nacs[p-1,g,t-1,coor] = elem/secon
bigger_2_1_nacs[p+1,g+1,t,coor] = elem/secon
bigger_2_1_nacs[p+1,g-1,t,coor] = elem/secon
bigger_2_1_nacs[p-1,g+1,t,coor] = elem/secon
bigger_2_1_nacs[p-1,g-1,t,coor] = elem/secon
bigger_2_1_nacs[p,g+1,t+1,coor] = elem/secon
bigger_2_1_nacs[p,g+1,t-1,coor] = elem/secon
bigger_2_1_nacs[p,g-1,t+1,coor] = elem/secon
bigger_2_1_nacs[p,g-1,t-1,coor] = elem/secon
# 2 distant (6)
bigger_2_1_nacs[p+2,g,t,coor] = elem/secon
bigger_2_1_nacs[p-2,g,t,coor] = elem/secon
bigger_2_1_nacs[p,g+2,t,coor] = elem/secon
bigger_2_1_nacs[p,g-2,t,coor] = elem/secon
bigger_2_1_nacs[p,g,t+2,coor] = elem/secon
bigger_2_1_nacs[p,g,t-2,coor] = elem/secon
#print('{} {} {} {} {}'.format(p,g,t,coor,elem))
else:
bigger_2_1_nacs[p,g,t,coor] = elem
if do_it:
data_new = data
name_data_file_new = 'NAC_2_1_little_exagerated.pickle'
print(data_new.keys())
nacs[:,:,:,1,2,:] = bigger_2_1_nacs
nacs[:,:,:,2,1,:] = -bigger_2_1_nacs
data_new['smoCube'] = nacs
pickle.dump( data_new, open( name_data_file_new, "wb" ) )
Explanation: Here we try to make interpolated "unified" NAC values for S_2 - S_1
End of explanation
dipo = data['dipCUBE']
DIPO = dipo[15:-15,15:-15,30:-30]
dipo.shape, DIPO.shape
plt.close('all')
def do3dplot(xs,ys,zss):
'with mesh function'
fig = plt.figure(figsize=(9,9))
ax = fig.add_subplot(111, projection='3d')
X,Y = np.meshgrid(ys,xs)
#ax.set_zlim(-1, 1)
#ax.scatter(X, Y, zss)
ax.plot_surface(X, Y, zss,cmap=cm.coolwarm, linewidth=1, antialiased=False)
fig.canvas.layout.height = '800px'
fig.tight_layout()
def visualize_this_thing(thing,state1,state2,cart,kind,dim):
along = ['X','Y','Z']
print('DIPOLE between state ({},{}) along {} - Doing cut in {} with value ({:8.4f},{:8.4f}) - shape: {}'.format(state1,
state2,
along[cart],
kind,
dimV[kind][dim],
dims[kind][dim],
thing.shape))
if kind == 'Phi':
pot = thing[dim,:,:,cart,state1,state2]
print('Looking at DIPOLE with indexes [{},:,:,{},{},{}]'.format(dim,cart,state1,state2))
do3dplot(gams,thes,pot)
elif kind == 'Gam':
print('Looking at DIPOLE with indexes [:,{},:,{},{},{}]'.format(dim,cart,state1,state2))
pot = thing[:,dim,:,cart,state1,state2]
do3dplot(phis,thes,pot)
elif kind == 'The':
print('Looking at DIPOLE with indexes [:,:,{},{},{},{}]'.format(dim,cart,state1,state2))
pot = thing[:,:,dim,cart,state1,state2]
do3dplot(phis,gams,pot)
dimV = { 'Phi': phiV, 'Gam': gamV, 'The': theV } # real values
dims = { 'Phi': phis, 'Gam': gams, 'The': thes } # for labels
kinds = ['Phi','Gam','The']
def fun_pot2D(kind,state1, state2, cart,dim):
visualize_this_thing(DIPO, state1, state2, cart, kind, dim)
def nested(kinds):
dimensionV = dimV[kinds]
interact(fun_pot2D, kind=fixed(kinds),
state1 = widgets.IntSlider(min=0,max=7,step=1,value=0,continuous_update=False),
state2 = widgets.IntSlider(min=0,max=7,step=1,value=1,continuous_update=False),
cart = widgets.IntSlider(min=0,max=2,step=1,value=2,continuous_update=False),
dim = widgets.IntSlider(min=0,max=(len(dimensionV)-1),step=1,value=0,continuous_update=False))
interact(nested, kinds = ['Gam','Phi','The']);
import ipyvolume as ipv
def do3dplot2(xs,ys,zss):
X,Y = np.meshgrid(ys,xs)
ipv.figure()
ipv.plot_surface(X, zss, Y, color="orange")
ipv.plot_wireframe(X, zss, Y, color="red")
ipv.show()
Explanation: DIPOLES visualization
End of explanation
pot = data['potCube'] - data['potCube'].min()
A = pot
# find the minimum index having the shape
phi_min, gam_min, the_min, state_min = np.unravel_index(A.argmin(), A.shape)
phi_min, gam_min, the_min, state_min
Explanation: Minimum geometry is found by getting the minimum on the ground state potential
End of explanation
nacs.shape
B = nacs[:,:,:,:,1,0]
# this should be absolute value
phi_ci, gam_ci, the_ci, cart_ci = np.unravel_index(B.argmax(), B.shape)
np.unravel_index(B.argmax(), B.shape)
phis_ext[16],gams_ext[15],thes_ext[112]
phi_ci, gam_ci, the_ci = [16,15,112]
Explanation: CI geometry by taking the maximum NAC value between 0 and 1
End of explanation
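# Sketch of the absolute-value variant that the comment above asks for (the sign of the NAC
# is irrelevant for locating the CI).
B_abs = np.abs(nacs[:,:,:,:,1,0])
print(np.unravel_index(B_abs.argmax(), B_abs.shape))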
# I start by making a cube of 0.
# boolean that creates (and overwrite) the file
create_mask_file = False
ZERO = np.zeros_like(pot[:,:,:,0])
print(ZERO.shape)
region_a = np.zeros_like(ZERO)
region_b = np.zeros_like(ZERO)
region_c = np.zeros_like(ZERO)
# for pi, p in qp.log_progress(enumerate(phis_n),every=1,size=(len(phis_n))):
# p_linea_a = 18
# g_linea_S = 30
# t_linea_b = 112
p_linea_a, g_linea_S, t_linea_b = 18, 30, 112
m_coeff,q_coeff = 1.7,75
for p,phi in qp.log_progress(enumerate(phis_ext),every=1,size=(len(phis_ext))):
for g,gam in enumerate(gams_ext):
lineValue_theta = m_coeff * g + q_coeff
for t,the in enumerate(thes_ext):
if p > p_linea_a:
region_a[p,g,t] = 1
if p <= p_linea_a:
if t > lineValue_theta:
region_c[p,g,t] = 1
else:
region_b[p,g,t] = 1
# if t > t_linea_b and g < g_linea_S:
# region_c[p,g,t] = 1
# else:
# region_b[p,g,t] = 1
# to paint cubes on the verge I make zero the sides values
if p==0 or g == 0 or t == 0 or p == len(phis_ext)-1 or g == len(gams_ext)-1 or t == len(thes_ext)-1:
region_a[p,g,t] = 0
region_b[p,g,t] = 0
region_c[p,g,t] = 0
regions = [{'label' : 'FC', 'cube': region_a},{'label' : 'reactants', 'cube': region_b},{'label' : 'products', 'cube': region_c}]
if create_mask_file:
print('I created the regions pickle file')
pickle.dump(regions, open('regions.pickle', "wb" ) )
else:
qp.warning("file region NOT written, check the variable 'create_mask_file' if you want to write region file")
Explanation: Product/reactant catcher
Here I want to generate the cubes of 1 and 0 to catch different regions of my cube.
So, now I want to do this. I want to create several cubes with different regions (basically the cubes are of 1 and 0). A is FC region, B is PRODUCT region and C is REACTANT
End of explanation
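# Quick sanity check (sketch): the three regions should be mutually exclusive and only
# vanish together on the zeroed border faces.
total = region_a + region_b + region_c
print(total.max())                                                    # expected 1 away from the borders
print(int(region_a.sum()), int(region_b.sum()), int(region_c.sum()))  # grid points per region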
# I start by making a cube of 0.
# boolean that creates (and overwrite) the file
create_adv_mask_files = False
ZERO = np.zeros_like(pot[:,:,:,0])
ones = np.ones_like(pot[:,:,:,0])
print(ZERO.shape)
mask_CI = np.zeros_like(ZERO)
mask_AFC = np.zeros_like(ZERO)
# 18, 30, 112
#p_cube1, p_cube2, g_cube1, g_cube2, t_cube1, t_cube2 = 12, 29, 15, 32, 82, 142
p_CI_cube1, p_CI_cube2, g_CI_cube1, g_CI_cube2, t_CI_cube1, t_CI_cube2 = 12, 29, 15, 32, 82, 142
p_AFC_cube1, p_AFC_cube2, g_AFC_cube1, g_AFC_cube2, t_AFC_cube1, t_AFC_cube2 = 0, 54, 0, 55, 85, 159
for p,phi in qp.log_progress(enumerate(phis_ext),every=1,size=(len(phis_ext))):
for g,gam in enumerate(gams_ext):
for t,the in enumerate(thes_ext):
if p > p_CI_cube1 and p < p_CI_cube2 and g > g_CI_cube1 and g < g_CI_cube2 and t > t_CI_cube1 and t < t_CI_cube2:
mask_CI[p,g,t] = 1
if p > p_AFC_cube1 and p < p_AFC_cube2 and g > g_AFC_cube1 and g < g_AFC_cube2 and t > t_AFC_cube1 and t < t_AFC_cube2:
mask_AFC[p,g,t] = 1
# to paint cubes on the verge I make zero the sides values
if p==0 or g == 0 or t == 0 or p == len(phis_ext)-1 or g == len(gams_ext)-1 or t == len(thes_ext)-1:
ones[p,g,t] = 0
mask_AFC[p,g,t] = 0
masks_only_one = [{'label' : 'All', 'cube': ones, 'states':[0,1,2,3,4,5,6,7], 'show' : False}]
masks = [
{'label' : 'All', 'cube': ones, 'states':[0,1,2,3,4,5,6,7], 'show' : False},
{'label' : 'AfterFC', 'cube': mask_AFC, 'states':[0,1,2,3,4,5,6,7], 'show' : True},
{'label' : 'CI', 'cube': mask_CI, 'states':[0,1,2,3,4,5,6,7], 'show' : True},
]
# {'label' : 'Mask CI', 'cube': mask_a, 'states':[0,1]}]
#masks = [{'label' : 'Mask', 'cube': ones, 'states':[0,1,2,3,4,5,6,7]}]
if create_adv_mask_files:
print('I created the regions pickle file')
pickle.dump(masks_only_one, open('advanced_masks_onlyONE.pickle', "wb" ))
pickle.dump(masks, open('advanced_masks.pickle', "wb" ))
else:
qp.warning("file advanced_masks NOT written, check the variable 'create_adv_mask_files' if you want to write region file")
Explanation: HERE THE REGIONS FOR ADVANCED MASKS
End of explanation
# dipo_min = dipo[phi_min, gam_min, the_min]
# dipo_ci = dipo[phi_ci, gam_ci, the_ci]
# difference_dipo = dipo_ci - dipo_min
# for i in range(8):
# permanent = difference_dipo[:,i,i]
# print('S_{} -> {}'.format(i,permanent))
# dipo_min[:,1,2],dipo_min[:,0,1],dipo_min[:,0,6],dipo_min[:,0,3],dipo_min[:,0,2],dipo_min[:,0,7]
Explanation: Here I check the direction of the permanent dipoles.
End of explanation
# npy = '/home/alessio/Desktop/NAC_CORRECTION_NOVEMBER2018/dataprova.npy'
# dictio = np.load(npy)[()]
# dictio.keys()
# NACS2 = dictio['nacCUBE']
# NACS2.shape
# def do3dplot(xs,ys,zss):
# 'with mesh function'
# fig = plt.figure(figsize=(9,9))
# ax = fig.add_subplot(111, projection='3d')
# X,Y = np.meshgrid(ys,xs)
# #ax.set_zlim(-1, 1)
# #ax.scatter(X, Y, zss)
# ax.plot_wireframe(X, Y, zss)
# fig.tight_layout()
# def visualize_this_thing(thing,state1,state2,cart,kind,dim):
# print(thing.shape)
# print('\nWARNING, this is not fully correct!!! Not SMO and not really what you think\n')
# along = ['Phi','Gam','The']
# print('NAC between state ({},{}) along {}\nDoing cut in {} with value ({:8.4f},{:8.4f})'.format(state1,
# state2,
# along[cart],
# kind,
# dimV[kind][dim],
# dims[kind][dim]))
# if kind == 'Phi':
# pot = thing[dim,:,:,state1,state2,0,cart]
# print('\nLooking at SMO with indexes [{},:,:,{},{},{}]'.format(dim, state1,state2,cart))
# do3dplot(gams,thes,pot)
# elif kind == 'Gam':
# print('\nLooking at SMO with indexes [:,{},:,{},{},{}]'.format(dim, state1,state2,cart))
# pot = thing[:,dim,:,state1,state2,0,cart]
# do3dplot(phis,thes,pot)
# elif kind == 'The':
# print('\nLooking at SMO with indexes [:,:,{},{},{},{}]'.format(dim, state1,state2,cart))
# pot = thing[:,:,dim,state1,state2,0,cart]
# do3dplot(phis,gams,pot)
# dimV = { 'Phi': phiV, 'Gam': gamV, 'The': theV } # real values
# dims = { 'Phi': phis, 'Gam': gams, 'The': thes } # for labels
# kinds = ['Phi','Gam','The']
# def fun_pot2D(kind,state1, state2, cart,dim):
# visualize_this_thing(NACS2, state1, state2, cart, kind, dim)
# def nested(kinds):
# dimensionV = dimV[kinds]
# interact(fun_pot2D, kind=fixed(kinds), state1 = widgets.IntSlider(min=0,max=7,step=1,value=0), state2 = widgets.IntSlider(min=0,max=7,step=1,value=0), cart = widgets.IntSlider(min=0,max=2,step=1,value=0), dim = widgets.IntSlider(min=0,max=(len(dimensionV)-1),step=1,value=0))
# interact(nested, kinds = ['Phi','Gam','The']);
Explanation: temporary cells for last correction sign
End of explanation
# print(data.keys())
# data_new = data
# nacs_new = data['smoCube']
# NACS_new = nacs_new[15:-15,15:-15,30:-30]
# print(NACS_new.shape,nacs_new.shape)
# phi_ext_000_000 = 29
# phi_prev = 28
# phi_next = 30
# new_nacs = np.copy(nacs)
# for g in range(56):
# for t in range(160):
# not_correct = nacs[phi_ext_000_000,g,t]
# correct_prev = nacs[phi_prev ,g,t]
# correct_next = nacs[phi_next ,g,t]
# #if np.linalg.norm(not_correct) > 0.001:
# # print('{} {}\nThis {} \nMiddle {}\n After {}'.format(g,t,correct_prev[:,:,1], not_correct[:,:,1],correct_next[:,:,1]))
# for state1 in range(8):
# for state2 in range(8):
# for cart in range(3):
# value_prev = correct_prev[state1,state2,cart]
# value_this = not_correct [state1,state2,cart]
# value_next = correct_next[state1,state2,cart]
# average = (value_prev + value_next)/2
# if np.sign(average) == np.sign(value_this):
# new_value = value_this
# else:
# new_value = -value_this
# new_nacs[phi_ext_000_000,g,t,state1,state2,cart] = new_value
# def do3dplot(xs,ys,zss):
# 'with mesh function'
# fig = plt.figure(figsize=(9,9))
# ax = fig.add_subplot(111, projection='3d')
# X,Y = np.meshgrid(ys,xs)
# #ax.set_zlim(-1, 1)
# #ax.scatter(X, Y, zss)
# ax.plot_wireframe(X, Y, zss)
# fig.tight_layout()
# def visualize_this_thing(thing,state1,state2,cart,kind,dim):
# print(thing.shape)
# along = ['Phi','Gam','The']
# print('NAC between state ({},{}) along {}\nDoing cut in {} with value ({:8.4f},{:8.4f})'.format(state1,
# state2,
# along[cart],
# kind,
# dimV[kind][dim],
# dims[kind][dim]))
# if kind == 'Phi':
# pot = thing[dim,:,:,state1,state2,cart]
# print('\nLooking at SMO with indexes [{},:,:,{},{},{}]'.format(dim, state1,state2,cart))
# do3dplot(gams_ext,thes_ext,pot)
# elif kind == 'Gam':
# print('\nLooking at SMO with indexes [:,{},:,{},{},{}]'.format(dim, state1,state2,cart))
# pot = thing[:,dim,:,state1,state2,cart]
# do3dplot(phis_ext,thes_ext,pot)
# elif kind == 'The':
# print('\nLooking at SMO with indexes [:,:,{},{},{},{}]'.format(dim, state1,state2,cart))
# pot = thing[:,:,dim,state1,state2,cart]
# do3dplot(phis_ext,gams_ext,pot)
# dimV = { 'Phi': phiV_ext, 'Gam': gamV_ext, 'The': theV_ext } # real values
# dims = { 'Phi': phis_ext, 'Gam': gams_ext, 'The': thes_ext } # for labels
# kinds = ['Phi','Gam','The']
# def fun_pot2D(kind,state1, state2, cart,dim):
# visualize_this_thing(new_nacs, state1, state2, cart, kind, dim)
# def nested(kinds):
# dimensionV = dimV[kinds]
# interact(fun_pot2D, kind=fixed(kinds), state1 = widgets.IntSlider(min=0,max=7,step=1,value=0), state2 = widgets.IntSlider(min=0,max=7,step=1,value=0), cart = widgets.IntSlider(min=0,max=2,step=1,value=0), dim = widgets.IntSlider(min=0,max=(len(dimensionV)-1),step=1,value=0))
# interact(nested, kinds = ['Phi','Gam','The']);
Explanation: sign flipper on extrapolated SMO cube
you used the cells below to correct NAC on the main plane... it was still flipping
End of explanation
#name_data_file_new = 'newExtrapolated_allCorrectionSECOND.pickle'
#data_new.keys()
# data_new['smoCube'] = new_nacs
# pickle.dump( data_new, open( name_data_file_new, "wb" ) )
Explanation: Things regarding writing down the Pickle file
End of explanation
folder = '.'
a=0
saveFile = False
for state1 in range(8):
for state2 in range(state1):
for cart in range(3):
dictio = {}
cartL = ['X','Y','Z']
print('Nacs ({},{}) along {} -> {:04}'.format(state1,state2,cartL[cart],a))
a+=1
if saveFile:
filename = 'Nac{:04}.h5'.format(a)
dictio['NACS'] = nacs[:,:,:,state1,state2,cart]
dictio['state1'] = state1
dictio['state2'] = state2
dictio['cart'] = cart
qp.writeH5fileDict(filename, dictio)
Explanation: those cells here are used to visualize in 3d space the dipoles/nac
End of explanation
# lol2 is the new function to be added
phi_index = 16
theta_index = 81
state_index = 0
lol = pot[phi_index,:,theta_index,state_index]
num = 15
constant = 0.001
lol2 = np.zeros_like(lol)
for i in range(num):
lol2[i] = constant * (i-num)**2
#print('{} {} {}'.format(i,num,i-num))
fig = plt.figure()
plt.title('Gamma wall')
plt.xlabel('Gamma')
plt.ylabel('Energy')
plt.plot(lol)
plt.plot(lol2+lol);
newpot = np.zeros_like(pot)
for p in range(55):
for t in range(160):
for s in range(8):
newpot[p,:,t,s] = pot[p,:,t,s] + lol2
do_it = False
if do_it:
data_new = data
name_data_file_new = 'newExtrapolated_gammaExtrExag.pickle'
data_new.keys()
data_new['potCube'] = newpot
pickle.dump( data_new, open( name_data_file_new, "wb" ) )
else:
qp.warning('Here it is set to false, new file is NOT created')
fig = plt.figure()
phi = 20
the = 100
plt.plot(pot[phi,:,the,1])
plt.plot(newpot[phi,:,the,1]);
Explanation: those to make the wall on extrapolated gamma values
End of explanation |
1,852 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Tensorflow with H2O
This notebook shows how to use the tensorflow backend to tackle a simple image classification problem.
We start by connecting to our h2o cluster
Step1: Image Classification Task
H2O DeepWater allows you to specify a list of URIs (file paths) or URLs (links) to images, together with a response column (either a class membership (enum) or regression target (numeric)).
For this example, we use a small dataset that has a few hundred images, and three classes
Step2: To build a LeNet image classification model in H2O, simply specify network = "lenet" and backend="tensorflow" to use our pre-built TensorFlow lenet implementation
Step3: DeepFeatures
We can also compute the output of any hidden layer, if we know its name.
Step4: Custom models
If you'd like to build your own Tensorflow network architecture, then this is easy as well.
In this example script, we are using the Tensorflow backend.
Models can easily be imported/exported between H2O and Tensorflow since H2O uses Tensorflow's format for model definition.
Step5: Custom models with Keras
It is also possible to use libraries/APIs such as Keras to define the network architecture. | Python Code:
import sys, os
import h2o
from h2o.estimators.deepwater import H2ODeepWaterEstimator
import os.path
from IPython.display import Image, display, HTML
import pandas as pd
import numpy as np
import random
PATH=os.path.expanduser("~/h2o-3")
h2o.init(port=54321, nthreads=-1)
if not H2ODeepWaterEstimator.available(): exit
!nvidia-smi
%matplotlib inline
from IPython.display import Image, display, HTML
import matplotlib.pyplot as plt
Explanation: Using Tensorflow with H2O
This notebook shows how to use the tensorflow backend to tackle a simple image classification problem.
We start by connecting to our h2o cluster:
End of explanation
frame = h2o.import_file(PATH + "/bigdata/laptop/deepwater/imagenet/cat_dog_mouse.csv")
print(frame.dim)
print(frame.head(5))
Explanation: Image Classification Task
H2O DeepWater allows you to specify a list of URIs (file paths) or URLs (links) to images, together with a response column (either a class membership (enum) or regression target (numeric)).
For this example, we use a small dataset that has a few hundred images, and three classes: cat, dog and mouse.
End of explanation
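# Optional: a quick look at the class balance (sketch; assumes column 1 holds the labels,
# as in the train(y=1) calls below, and that it was parsed as categorical).
frame[1].table()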
model = H2ODeepWaterEstimator(epochs=500, network = "lenet", backend="tensorflow")
model.train(x=[0],y=1, training_frame=frame)
model.show()
model = H2ODeepWaterEstimator(epochs=100, backend="tensorflow",
image_shape=[28,28],
network="user",
network_definition_file=PATH + "/examples/deeplearning/notebooks/pretrained/lenet_28x28x3_3.meta",
network_parameters_file=PATH + "/examples/deeplearning/notebooks/pretrained/lenet-100epochs")
model.train(x=[0],y=1, training_frame=frame)
model.show()
Explanation: To build a LeNet image classification model in H2O, simply specify network = "lenet" and backend="tensorflow" to use our pre-built TensorFlow lenet implementation:
End of explanation
model.deepfeatures(frame, "fc1/Relu")
Explanation: DeepFeatures
We can also compute the output of any hidden layer, if we know its name.
End of explanation
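# A small follow-up sketch: deepfeatures() returns the hidden-layer activations as an
# H2OFrame, so they can be pulled into pandas (or fed to another H2O estimator) for reuse.
# as_data_frame() is the standard h2o-py conversion; the layer name comes from the cell above.
features = model.deepfeatures(frame, "fc1/Relu")
features_df = features.as_data_frame(use_pandas=True)
print(features_df.shape)
print(features_df.head())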
def simple_model(w, h, channels, classes):
import json
import tensorflow as tf
from tensorflow.python.framework import ops
# always create a new graph inside ipython or
# the default one will be used and can lead to
# unexpected behavior
graph = tf.Graph()
with graph.as_default():
size = w * h * channels
x = tf.placeholder(tf.float32, [None, size])
W = tf.Variable(tf.zeros([size, classes]))
b = tf.Variable(tf.zeros([classes]))
y = tf.matmul(x, W) + b
predictions = tf.nn.softmax(y)
# labels
y_ = tf.placeholder(tf.float32, [None, classes])
# train
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y, labels=y_))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
tf.add_to_collection(ops.GraphKeys.TRAIN_OP, train_step)
tf.add_to_collection("predictions", predictions)
# this is required by the h2o tensorflow backend
global_step = tf.Variable(0, name="global_step", trainable=False)
init = tf.global_variables_initializer()
tf.add_to_collection(ops.GraphKeys.INIT_OP, init.name)
tf.add_to_collection("logits", y)
saver = tf.train.Saver()
meta = json.dumps({
"inputs": {"batch_image_input": x.name, "categorical_labels": y_.name},
"outputs": {"categorical_logits": y.name},
"parameters": {"global_step": global_step.name},
})
print(meta)
tf.add_to_collection("meta", meta)
filename = "/tmp/lenet_tensorflow.meta"
tf.train.export_meta_graph(filename, saver_def=saver.as_saver_def())
return filename
filename = simple_model(28, 28, 3, classes=3)
model = H2ODeepWaterEstimator(epochs=500,
network_definition_file=filename, ## specify the model
image_shape=[28,28], ## provide expected (or matching) image size
channels=3,
backend="tensorflow",
)
model.train(x=[0], y=1, training_frame=frame)
model.show()
Explanation: Custom models
If you'd like to build your own Tensorflow network architecture, then this is easy as well.
In this example script, we are using the Tensorflow backend.
Models can easily be imported/exported between H2O and Tensorflow since H2O uses Tensorflow's format for model definition.
End of explanation
import tensorflow as tf
import json
from keras.layers.core import Dense, Flatten, Reshape
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
from keras import backend as K
from keras.objectives import categorical_crossentropy
from tensorflow.python.framework import ops
def keras_model(w, h, channels, classes):
# always create a new graph inside ipython or
# the default one will be used and can lead to
# unexpected behavior
graph = tf.Graph()
with graph.as_default():
size = w * h * channels
# Input images fed via H2O
inp = tf.placeholder(tf.float32, [None, size])
# Actual labels used for training fed via H2O
labels = tf.placeholder(tf.float32, [None, classes])
# Keras network
x = Reshape((w, h, channels))(inp)
x = Conv2D(20, (5, 5), padding='same', activation='relu')(x)
x = MaxPooling2D((2,2))(x)
x = Conv2D(50, (5, 5), padding='same', activation='relu')(x)
x = MaxPooling2D((2,2))(x)
x = Flatten()(x)
x = Dense(500, activation='relu')(x)
out = Dense(classes)(x)
predictions = tf.nn.softmax(out)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels,logits=out))
train_step = tf.train.AdamOptimizer(1e-3).minimize(loss)
init_op = tf.global_variables_initializer()
# Metadata required by H2O
tf.add_to_collection(ops.GraphKeys.INIT_OP, init_op.name)
tf.add_to_collection(ops.GraphKeys.TRAIN_OP, train_step)
tf.add_to_collection("logits", out)
tf.add_to_collection("predictions", predictions)
meta = json.dumps({
"inputs": {"batch_image_input": inp.name,
"categorical_labels": labels.name},
"outputs": {"categorical_logits": out.name,
"layers": ','.join([m.name for m in tf.get_default_graph().get_operations()])},
"parameters": {}
})
tf.add_to_collection("meta", meta)
# Save the meta file with the graph
saver = tf.train.Saver()
filename = "/tmp/keras_tensorflow.meta"
tf.train.export_meta_graph(filename, saver_def=saver.as_saver_def())
return filename
filename = keras_model(28, 28, 3, classes=3)
model = H2ODeepWaterEstimator(epochs=50,
network_definition_file=filename, ## specify the model
image_shape=[28,28], ## provide expected (or matching) image size
channels=3,
backend="tensorflow",
)
model.train(x=[0], y=1, training_frame=frame)
model.show()
Explanation: Custom models with Keras
It is also possible to use libraries/APIs such as Keras to define the network architecture.
End of explanation |
1,853 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Source alignment and coordinate frames
This tutorial shows how to visually assess the spatial alignment of MEG sensor
locations, digitized scalp landmark and sensor locations, and MRI volumes. This
alignment process is crucial for computing the forward solution, as is
understanding the different coordinate frames involved in this process.
Step1: .. raw
Step2: Coordinate frame definitions
Neuromag/Elekta/MEGIN head coordinate frame ("head",
Step3: A good example
Here is the same plot, this time with the trans properly defined
(using a precomputed transformation matrix).
Step4: Visualizing the transformations
Let's visualize these coordinate frames using just the scalp surface; this
will make it easier to see their relative orientations. To do this we'll
first load the Freesurfer scalp surface, then apply a few different
transforms to it. In addition to the three coordinate frames discussed above,
we'll also show the "mri_voxel" coordinate frame. Unlike MRI Surface RAS,
"mri_voxel" has its origin in the corner of the volume (the left-most,
posterior-most coordinate on the inferior-most MRI slice) instead of at the
center of the volume. "mri_voxel" is also not an RAS coordinate system
Step5: Now that we've transformed all the points, let's plot them. We'll use the
same colors used by ~mne.viz.plot_alignment and use
Step6: The relative orientations of the coordinate frames can be inferred by
observing the direction of the subject's nose. Notice also how the origin of
the mri_voxel coordinate frame is in the corner of the volume, whereas the other three coordinate frames have their origin roughly in the center of the head.
Step7: Defining the head↔MRI trans using the GUI
You can try creating the head↔MRI transform yourself using
Step8: Alignment without MRI
The surface alignments above are possible if you have the surfaces available
from Freesurfer. | Python Code:
import os.path as op
import numpy as np
import nibabel as nib
from scipy import linalg
import mne
from mne.io.constants import FIFF
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
trans_fname = op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
raw = mne.io.read_raw_fif(raw_fname)
trans = mne.read_trans(trans_fname)
src = mne.read_source_spaces(op.join(subjects_dir, 'sample', 'bem',
'sample-oct-6-src.fif'))
# Load the T1 file and change the header information to the correct units
t1w = nib.load(op.join(data_path, 'subjects', 'sample', 'mri', 'T1.mgz'))
t1w = nib.Nifti1Image(t1w.dataobj, t1w.affine)
t1w.header['xyzt_units'] = np.array(10, dtype='uint8')
t1_mgh = nib.MGHImage(t1w.dataobj, t1w.affine)
Explanation: Source alignment and coordinate frames
This tutorial shows how to visually assess the spatial alignment of MEG sensor
locations, digitized scalp landmark and sensor locations, and MRI volumes. This
alignment process is crucial for computing the forward solution, as is
understanding the different coordinate frames involved in this process.
:depth: 2
Let's start out by loading some data.
End of explanation
fig = mne.viz.plot_alignment(raw.info, trans=trans, subject='sample',
subjects_dir=subjects_dir, surfaces='head-dense',
show_axes=True, dig=True, eeg=[], meg='sensors',
coord_frame='meg', mri_fiducials='estimated')
mne.viz.set_3d_view(fig, 45, 90, distance=0.6, focalpoint=(0., 0., 0.))
print('Distance from head origin to MEG origin: %0.1f mm'
% (1000 * np.linalg.norm(raw.info['dev_head_t']['trans'][:3, 3])))
print('Distance from head origin to MRI origin: %0.1f mm'
% (1000 * np.linalg.norm(trans['trans'][:3, 3])))
dists = mne.dig_mri_distances(raw.info, trans, 'sample',
subjects_dir=subjects_dir)
print('Distance from %s digitized points to head surface: %0.1f mm'
% (len(dists), 1000 * np.mean(dists)))
Explanation: .. raw:: html
<style>
.pink {color:DarkSalmon; font-weight:bold}
.blue {color:DeepSkyBlue; font-weight:bold}
.gray {color:Gray; font-weight:bold}
.magenta {color:Magenta; font-weight:bold}
.purple {color:Indigo; font-weight:bold}
.green {color:LimeGreen; font-weight:bold}
.red {color:Red; font-weight:bold}
</style>
.. role:: pink
.. role:: blue
.. role:: gray
.. role:: magenta
.. role:: purple
.. role:: green
.. role:: red
Understanding coordinate frames
For M/EEG source imaging, there are three coordinate frames that must be
brought into alignment using two 3D transformation matrices <wiki_xform_>_
that define how to rotate and translate points in one coordinate frame
to their equivalent locations in another. The three main coordinate frames
are:
:blue:"meg": the coordinate frame for the physical locations of MEG
sensors
:gray:"mri": the coordinate frame for MRI images, and scalp/skull/brain
surfaces derived from the MRI images
:pink:"head": the coordinate frame for digitized sensor locations and
scalp landmarks ("fiducials")
Each of these are described in more detail in the next section.
A good way to start visualizing these coordinate frames is to use the
mne.viz.plot_alignment function, which is used for creating or inspecting
the transformations that bring these coordinate frames into alignment, and
displaying the resulting alignment of EEG sensors, MEG sensors, brain
sources, and conductor models. If you provide subjects_dir and
subject parameters, the function automatically loads the subject's
Freesurfer MRI surfaces. Important for our purposes, passing
show_axes=True to ~mne.viz.plot_alignment will draw the origin of each
coordinate frame in a different color, with axes indicated by different sized
arrows:
shortest arrow: (R)ight / X
medium arrow: forward / (A)nterior / Y
longest arrow: up / (S)uperior / Z
Note that all three coordinate systems are RAS coordinate frames and
hence are also right-handed_ coordinate systems. Finally, note that the
coord_frame parameter sets which coordinate frame the camera
should initially be aligned with. Let's take a look:
End of explanation
mne.viz.plot_alignment(raw.info, trans=None, subject='sample', src=src,
subjects_dir=subjects_dir, dig=True,
surfaces=['head-dense', 'white'], coord_frame='meg')
Explanation: Coordinate frame definitions
Neuromag/Elekta/MEGIN head coordinate frame ("head", :pink:pink axes)
The head coordinate frame is defined through the coordinates of
anatomical landmarks on the subject's head: usually the Nasion (NAS),
and the left and right preauricular points (LPA and RPA).
Different MEG manufacturers may have different definitions of the head
coordinate frame. A good overview can be seen in the
FieldTrip FAQ on coordinate systems.
For Neuromag/Elekta/MEGIN, the head coordinate frame is defined by the
intersection of
the line between the LPA (:red:red sphere) and RPA
(:purple:purple sphere), and
the line perpendicular to this LPA-RPA line one that goes through
the Nasion (:green:green sphere).
The axes are oriented as X origin→RPA, Y origin→NAS,
Z origin→upward (orthogonal to X and Y).
.. note:: The required 3D coordinates for defining the head coordinate
frame (NAS, LPA, RPA) are measured at a stage separate from
the MEG data recording. There exist numerous devices to
perform such measurements, usually called "digitizers". For
example, see the devices by the company Polhemus_.
MEG device coordinate frame ("meg", :blue:blue axes)
The MEG device coordinate frame is defined by the respective MEG
manufacturers. All MEG data is acquired with respect to this coordinate
frame. To account for the anatomy and position of the subject's head, we
use so-called head position indicator (HPI) coils. The HPI coils are
placed at known locations on the scalp of the subject and emit
high-frequency magnetic fields used to coregister the head coordinate
frame with the device coordinate frame.
From the Neuromag/Elekta/MEGIN user manual:
The origin of the device coordinate system is located at the center
of the posterior spherical section of the helmet with X axis going
from left to right and Y axis pointing front. The Z axis is, again
normal to the plane with positive direction up.
.. note:: The HPI coils are shown as :magenta:magenta spheres.
Coregistration happens at the beginning of the recording and
the head↔meg transformation matrix is stored in
raw.info['dev_head_t'].
MRI coordinate frame ("mri", :gray:gray axes)
Defined by Freesurfer, the "MRI surface RAS" coordinate frame has its
origin at the center of a 256×256×256 1mm anisotropic volume (though the
center may not correspond to the anatomical center of the subject's
head).
.. note:: We typically align the MRI coordinate frame to the head
coordinate frame through a
rotation and translation matrix <wiki_xform_>_,
that we refer to in MNE as trans.
A bad example
Let's try using ~mne.viz.plot_alignment with trans=None, which
(incorrectly!) equates the MRI and head coordinate frames.
End of explanation
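# A short illustration (not in the original tutorial): print the two transforms discussed
# above and chain them into a single meg -> mri transform. combine_transforms is part of the
# public MNE API; dev_head_t maps meg -> head and trans maps head -> mri, as noted above.
print(raw.info['dev_head_t'])
print(trans)
meg_to_mri = mne.transforms.combine_transforms(
    raw.info['dev_head_t'], trans, fro='meg', to='mri')
print(meg_to_mri)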
mne.viz.plot_alignment(raw.info, trans=trans, subject='sample',
src=src, subjects_dir=subjects_dir, dig=True,
surfaces=['head-dense', 'white'], coord_frame='meg')
Explanation: A good example
Here is the same plot, this time with the trans properly defined
(using a precomputed transformation matrix).
End of explanation
# The head surface is stored in "mri" coordinate frame
# (origin at center of volume, units=mm)
seghead_rr, seghead_tri = mne.read_surface(
op.join(subjects_dir, 'sample', 'surf', 'lh.seghead'))
# To put the scalp in the "head" coordinate frame, we apply the inverse of
# the precomputed `trans` (which maps head → mri)
mri_to_head = linalg.inv(trans['trans'])
scalp_pts_in_head_coord = mne.transforms.apply_trans(
mri_to_head, seghead_rr, move=True)
# To put the scalp in the "meg" coordinate frame, we use the inverse of
# raw.info['dev_head_t']
head_to_meg = linalg.inv(raw.info['dev_head_t']['trans'])
scalp_pts_in_meg_coord = mne.transforms.apply_trans(
head_to_meg, scalp_pts_in_head_coord, move=True)
# The "mri_voxel"→"mri" transform is embedded in the header of the T1 image
# file. We'll invert it and then apply it to the original `seghead_rr` points.
# No unit conversion necessary: this transform expects mm and the scalp surface
# is defined in mm.
vox_to_mri = t1_mgh.header.get_vox2ras_tkr()
mri_to_vox = linalg.inv(vox_to_mri)
scalp_points_in_vox = mne.transforms.apply_trans(
mri_to_vox, seghead_rr, move=True)
Explanation: Visualizing the transformations
Let's visualize these coordinate frames using just the scalp surface; this
will make it easier to see their relative orientations. To do this we'll
first load the Freesurfer scalp surface, then apply a few different
transforms to it. In addition to the three coordinate frames discussed above,
we'll also show the "mri_voxel" coordinate frame. Unlike MRI Surface RAS,
"mri_voxel" has its origin in the corner of the volume (the left-most,
posterior-most coordinate on the inferior-most MRI slice) instead of at the
center of the volume. "mri_voxel" is also not an RAS coordinate system:
rather, its XYZ directions are based on the acquisition order of the T1 image
slices.
End of explanation
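# Equivalent route (a sketch): MNE can invert Transform objects directly, so the manual
# linalg.inv calls above can also be written with mne.transforms.invert_transform, and
# apply_trans accepts a Transform as well as a plain 4x4 matrix.
mri_to_head_t = mne.transforms.invert_transform(trans)  # trans is head -> mri, so this is mri -> head
scalp_head_alt = mne.transforms.apply_trans(mri_to_head_t, seghead_rr, move=True)
print(np.allclose(scalp_head_alt, scalp_pts_in_head_coord))  # should print True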
def add_head(renderer, points, color, opacity=0.95):
renderer.mesh(*points.T, triangles=seghead_tri, color=color,
opacity=opacity)
renderer = mne.viz.backends.renderer.create_3d_figure(
size=(600, 600), bgcolor='w', scene=False)
add_head(renderer, seghead_rr, 'gray')
add_head(renderer, scalp_pts_in_meg_coord, 'blue')
add_head(renderer, scalp_pts_in_head_coord, 'pink')
add_head(renderer, scalp_points_in_vox, 'green')
mne.viz.set_3d_view(figure=renderer.figure, distance=800,
focalpoint=(0., 30., 30.), elevation=105, azimuth=180)
renderer.show()
Explanation: Now that we've transformed all the points, let's plot them. We'll use the
same colors used by ~mne.viz.plot_alignment and use :green:green for the
"mri_voxel" coordinate frame:
End of explanation
# Get the nasion
nasion = [p for p in raw.info['dig'] if
p['kind'] == FIFF.FIFFV_POINT_CARDINAL and
p['ident'] == FIFF.FIFFV_POINT_NASION][0]
assert nasion['coord_frame'] == FIFF.FIFFV_COORD_HEAD
nasion = nasion['r'] # get just the XYZ values
# Transform it from head to MRI space (recall that `trans` is head → mri)
nasion_mri = mne.transforms.apply_trans(trans, nasion, move=True)
# Then transform to voxel space, after converting from meters to millimeters
nasion_vox = mne.transforms.apply_trans(
mri_to_vox, nasion_mri * 1e3, move=True)
# Plot it to make sure the transforms worked
renderer = mne.viz.backends.renderer.create_3d_figure(
size=(400, 400), bgcolor='w', scene=False)
add_head(renderer, scalp_points_in_vox, 'green', opacity=1)
renderer.sphere(center=nasion_vox, color='orange', scale=10)
mne.viz.set_3d_view(figure=renderer.figure, distance=600.,
focalpoint=(0., 125., 250.), elevation=45, azimuth=180)
renderer.show()
Explanation: The relative orientations of the coordinate frames can be inferred by
observing the direction of the subject's nose. Notice also how the origin of
the :green:mri_voxel coordinate frame is in the corner of the volume
(above, behind, and to the left of the subject), whereas the other three
coordinate frames have their origin roughly in the center of the head.
Example: MRI defacing
For a real-world example of using these transforms, consider the task of
defacing the MRI to preserve subject anonymity. If you know the points in
the "head" coordinate frame (as you might if you're basing the defacing on
digitized points) you would need to transform them into "mri" or "mri_voxel"
in order to apply the blurring or smoothing operations to the MRI surfaces or
images. Here's what that would look like (we'll use the nasion landmark as a
representative example):
End of explanation
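# Extending the nasion example to every digitized point (a sketch): collect all head-frame
# digitization points from raw.info['dig'], then map head -> mri -> voxel with the same
# transforms used above. These voxel coordinates are what a defacing mask would operate on.
dig_head = np.array([p['r'] for p in raw.info['dig']
                     if p['coord_frame'] == FIFF.FIFFV_COORD_HEAD])
dig_mri = mne.transforms.apply_trans(trans, dig_head, move=True)            # head -> mri (meters)
dig_vox = mne.transforms.apply_trans(mri_to_vox, dig_mri * 1e3, move=True)  # mm -> voxel indices
print(dig_vox.shape, dig_vox.min(axis=0), dig_vox.max(axis=0))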
# mne.gui.coregistration(subject='sample', subjects_dir=subjects_dir)
Explanation: Defining the head↔MRI trans using the GUI
You can try creating the head↔MRI transform yourself using
:func:mne.gui.coregistration.
First you must load the digitization data from the raw file
(Head Shape Source). The MRI data is already loaded if you provide the
subject and subjects_dir. Toggle Always Show Head Points to see
the digitization points.
To set the landmarks, toggle Edit radio button in MRI Fiducials.
Set the landmarks by clicking the radio button (LPA, Nasion, RPA) and then
clicking the corresponding point in the image.
After doing this for all the landmarks, toggle Lock radio button. You
can omit outlier points, so that they don't interfere with the finetuning.
.. note:: You can save the fiducials to a file and pass
mri_fiducials=True to plot them in
:func:mne.viz.plot_alignment. The fiducials are saved to the
subject's bem folder by default.
* Click Fit Head Shape. This will align the digitization points to the
head surface. Sometimes the fitting algorithm doesn't find the correct
alignment immediately. You can try first fitting using LPA/RPA or fiducials
and then align according to the digitization. You can also finetune
manually with the controls on the right side of the panel.
* Click Save As... (lower right corner of the panel), set the filename
and read it with :func:mne.read_trans.
For more information, see step by step instructions
in these slides
<https://www.slideshare.net/mne-python/mnepython-coregistration>_.
Uncomment the following line to align the data yourself.
End of explanation
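# After saving your own transform from the GUI you would read it back with mne.read_trans,
# exactly as was done for the precomputed file at the top of this tutorial (a sketch):
trans_check = mne.read_trans(trans_fname)
print(trans_check)
# Fiducials saved from the GUI can be read back too; the path below is hypothetical --
# adjust it to wherever you saved them (mne.io.read_fiducials is the public reader).
# fids, fid_frame = mne.io.read_fiducials(
#     op.join(subjects_dir, 'sample', 'bem', 'sample-fiducials.fif'))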
sphere = mne.make_sphere_model(info=raw.info, r0='auto', head_radius='auto')
src = mne.setup_volume_source_space(sphere=sphere, pos=10.)
mne.viz.plot_alignment(
raw.info, eeg='projected', bem=sphere, src=src, dig=True,
surfaces=['brain', 'outer_skin'], coord_frame='meg', show_axes=True)
Explanation: Alignment without MRI
The surface alignments above are possible if you have the surfaces available
from Freesurfer. :func:mne.viz.plot_alignment automatically searches for
the correct surfaces from the provided subjects_dir. Another option is
to use a spherical conductor model <eeg_sphere_model>. It is
passed through bem parameter.
End of explanation |
1,854 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create entry points to spark
Step1: load iris data
Step2: Merge features to create a features column
Step3: Index label column with StringIndexer
Import libraries
Step4: Build pipeline
Try to use a pipeline whenever you can to get used to this format.
Step5: Transform data
Step6: Check the data one more time
Step7: Naive Bayes classification
Split data into training and test sets
Step8: Build cross-validation model
Estimator
Step9: Parameter grid
Step10: Evaluator
There are three categories in the label column. Therefore, we use MulticlassClassificationEvaluator
Step11: Build cross-validation model
Step12: Fit cross-validation model
Step13: Prediction on training and test sets
Step14: Best model from cross validation
Step15: Prediction accuracy
Four accuracy metrics are available for this evaluator.
* f1
* weightedPrecision
* weightedRecall
* accuracy
Prediction accuracy on training data
Step16: Prediction accuracy on test data
Step17: Confusion matrix
Confusion matrix on training data
Step18: Confusion matrix on test data | Python Code:
from pyspark import SparkContext
sc = SparkContext(master = 'local')
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.appName("Python Spark SQL basic example") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
Explanation: Create entry points to spark
End of explanation
iris = spark.read.csv('data/iris.csv', header=True, inferSchema=True)
iris.show(5)
iris.dtypes
iris.describe().show()
Explanation: load iris data
End of explanation
from pyspark.ml.linalg import Vectors
from pyspark.sql import Row
iris2 = iris.rdd.map(lambda x: Row(features=Vectors.dense(x[:-1]), species=x[-1])).toDF()
iris2.show(5)
Explanation: Merge features to create a features column
End of explanation
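# An equivalent, more idiomatic route (a sketch): VectorAssembler builds the same 'features'
# column without dropping down to the RDD API; the input column names are taken from the
# DataFrame itself, so nothing is hard-coded here.
from pyspark.ml.feature import VectorAssembler
assembler = VectorAssembler(inputCols=iris.columns[:-1], outputCol='features')
iris2_alt = assembler.transform(iris).select('features', iris.columns[-1])
iris2_alt.show(5)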
from pyspark.ml.feature import StringIndexer
from pyspark.ml import Pipeline
Explanation: Index label column with StringIndexer
Import libraries
End of explanation
stringindexer = StringIndexer(inputCol='species', outputCol='label')
stages = [stringindexer]
pipeline = Pipeline(stages=stages)
Explanation: Build pipeline
Try to use a pipeline whenever you can to get used to this format.
End of explanation
iris_df = pipeline.fit(iris2).transform(iris2)
iris_df.show(5)
Explanation: Transform data
End of explanation
iris_df.describe().show(5)
iris_df.dtypes
Explanation: Check the data one more time
End of explanation
train, test = iris_df.randomSplit([0.8, 0.2], seed=1234)
Explanation: Naive Bayes classification
Split data into training and test sets
End of explanation
from pyspark.ml.classification import NaiveBayes
naivebayes = NaiveBayes(featuresCol="features", labelCol="label")
Explanation: Build cross-validation model
Estimator
End of explanation
from pyspark.ml.tuning import ParamGridBuilder
param_grid = ParamGridBuilder().\
addGrid(naivebayes.smoothing, [0, 1, 2, 4, 8]).\
build()
Explanation: Parameter grid
End of explanation
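# A quick note in code (a sketch): the grid is just a list of parameter maps, and the
# cross-validator will fit each one numFolds times. Additional hyperparameters such as
# modelType (a real NaiveBayes param) can be added with further addGrid calls.
print(len(param_grid), 'parameter combinations')
wider_grid = ParamGridBuilder() \
    .addGrid(naivebayes.smoothing, [0, 1, 2, 4, 8]) \
    .addGrid(naivebayes.modelType, ['multinomial']) \
    .build()
print(len(wider_grid), 'combinations in the wider grid')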
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
evaluator = MulticlassClassificationEvaluator()
Explanation: Evaluator
There are three categories in the label column. Therefore, we use MulticlassClassificationEvaluator
End of explanation
from pyspark.ml.tuning import CrossValidator
crossvalidator = CrossValidator(estimator=naivebayes, estimatorParamMaps=param_grid, evaluator=evaluator)
Explanation: Build cross-validation model
End of explanation
crossvalidation_mode = crossvalidator.fit(train)
Explanation: Fit cross-validation model
End of explanation
pred_train = crossvalidation_mode.transform(train)
pred_train.show(5)
pred_test = crossvalidation_mode.transform(test)
pred_test.show(5)
Explanation: Prediction on training and test sets
End of explanation
print("The parameter smoothing has best value:",
crossvalidation_mode.bestModel._java_obj.getSmoothing())
Explanation: Best model from cross validation
End of explanation
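# The fitted cross-validator also stores the average metric for every grid point (a sketch;
# avgMetrics is a standard CrossValidatorModel attribute), which shows how sensitive the
# result is to the smoothing value.
for params, metric in zip(param_grid, crossvalidation_mode.avgMetrics):
    print({p.name: v for p, v in params.items()}, '->', metric)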
print('training data (f1):', evaluator.setMetricName('f1').evaluate(pred_train), "\n",
'training data (weightedPrecision): ', evaluator.setMetricName('weightedPrecision').evaluate(pred_train),"\n",
'training data (weightedRecall): ', evaluator.setMetricName('weightedRecall').evaluate(pred_train),"\n",
'training data (accuracy): ', evaluator.setMetricName('accuracy').evaluate(pred_train))
Explanation: Prediction accuracy
Four accuracy metrics are available for this evaluator.
* f1
* weightedPrecision
* weightedRecall
* accuracy
Prediction accuracy on training data
End of explanation
print('test data (f1):', evaluator.setMetricName('f1').evaluate(pred_test), "\n",
'test data (weightedPrecision): ', evaluator.setMetricName('weightedPrecision').evaluate(pred_test),"\n",
'test data (weightedRecall): ', evaluator.setMetricName('weightedRecall').evaluate(pred_test),"\n",
'test data (accuracy): ', evaluator.setMetricName('accuracy').evaluate(pred_test))
Explanation: Prediction accuracy on test data
End of explanation
train_conf_mat = pred_train.select('label', 'prediction')
train_conf_mat.rdd.zipWithIndex().countByKey()
Explanation: Confusion matrix
Confusion matrix on training data
End of explanation
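# A more readable alternative (a sketch): DataFrame.crosstab returns the same counts laid
# out as a small contingency table instead of a dict of (label, prediction) pairs.
train_conf_mat.crosstab('label', 'prediction').show()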
test_conf_mat = pred_test.select('label', 'prediction')
test_conf_mat.rdd.zipWithIndex().countByKey()
Explanation: Confusion matrix on test data
End of explanation |
1,855 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
One we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise
Step11: Optimizer
Here we build the optimizer. Normal RNNs have have issues gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise
Step13: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
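# Quick round-trip check (a sketch): decoding the encoded integers should reproduce the
# original text exactly, which confirms the two lookup dictionaries are consistent.
decoded = ''.join(int_to_vocab[i] for i in encoded[:100])
print(decoded == text[:100])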
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches*characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs,-1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:,n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself.
End of explanation
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
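# One more programmatic check (a sketch): apart from the final column, every target row
# should simply be the input row shifted left by one character, and both arrays should have
# shape (batch_size, num_steps) = (10, 50) here.
print(np.array_equal(y[:, :-1], x[:, 1:]))
print(x.shape, y.shape)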
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32,shape=(batch_size,num_steps),name='inputs')
targets = tf.placeholder(tf.int32,shape=(batch_size,num_steps),name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32,name='keep_prob')
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
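# Tiny sanity check (a sketch): build the placeholders in a throwaway graph and confirm the
# static shapes match (batch_size, num_steps); keep_prob should be a 0-D scalar.
with tf.Graph().as_default():
    _inputs, _targets, _keep_prob = build_inputs(10, 50)
    print(_inputs.get_shape(), _targets.get_shape(), _keep_prob.get_shape())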
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell outputs
drop = tf.contrib.rnn.DropoutWrapper(lstm,output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop]*num_layers)
initial_state = cell.zero_state(batch_size,tf.float32)
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise: Below, implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output = tf.concat(lstm_output,axis=1)
# Reshape seq_output to a 2D tensor with lstm_size columns
x = tf.reshape(seq_output,shape=(-1,in_size))
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w = tf.Variable(tf.truncated_normal((in_size,out_size),mean=0.0,stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x,softmax_w)+softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits,name="predictions")
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
One we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
End of explanation
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot = tf.one_hot(targets,num_classes)
y_reshaped = tf.reshape(y_one_hot,logits.get_shape())
# Softmax cross entropy loss
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise: Implement the loss calculation in the function below.
End of explanation
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have have issues gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs,num_classes)
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state = tf.nn.dynamic_rnn(cell,x_one_hot,initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs,lstm_size,num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits,self.targets,lstm_size,num_classes)
        self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
End of explanation
batch_size = 10 # Sequences per batch
num_steps = 50 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.01 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
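# The advice above suggests comparing the parameter count of the model with the size of the
# dataset. A sketch of how to get both numbers with standard TF 1.x calls -- run it after the
# next cell has built the model graph:
# n_params = sum(np.prod(v.get_shape().as_list()) for v in tf.trainable_variables())
# print('parameters:', int(n_params), ' characters in dataset:', len(encoded))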
epochs = 20
epochs = 2  # note: this second assignment overrides the 20 above; it was left at 2 for a quick test run
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
1,856 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load a cell
We only need a part of it, so we use only the first 29 cycles (splitting on cycle 30).
Step1: OCV rlx curves
Step2: Extract OCV points
Using the select_ocv_points function from cellpy.utils.ocv_rlx.
```python
def select_ocv_points(
cellpydata,
cycles=None,
selection_method="martin",
number_of_points=5,
interval=10,
relative_voltage=False,
report_times=False,
direction=None,
)
Step4: Delta V
Step5: raw | Python Code:
dd = cellreader.get(filename, logging_mode="INFO")
d, _ = helpers.split_experiment(dd, 90)
Explanation: Load a cell
We only need a part of it, so we use only the first 29 cycles (splitting on cycle 30).
End of explanation
ocv_cycles = d.get_ocv(
interpolated=True, number_of_points=40, direction="down"
).reset_index(drop=True)
ocv_cycles.head()
%%opts Curve [width=600] (alpha=0.9, color=hv.Palette('Magma'))
single_curves = hv.Curve(ocv_cycles, kdims=["Step_Time", "Cycle_Index"], vdims=["Voltage"]).groupby("Cycle_Index").overlay()
single_curves.opts(tools=["hover"])
Explanation: OCV rlx curves
End of explanation
p_fixed_time, i1 = ocv_rlx.select_ocv_points(
d, selection_method="fixed_times", direction="both", return_times=True
)
p_martin, i2 = ocv_rlx.select_ocv_points(d, direction="both", return_times=True)
i1.head()
i2.head()
p_martin.head()
p_fx_down = p_fixed_time.loc[p_fixed_time.type == "ocvrlx_down"]
p_m_down = p_martin.loc[p_martin.type == "ocvrlx_down"]
p_fx_down_fast = p_fx_down.loc[p_fx_down.step == 15]
p_m_down_fast = p_m_down.loc[p_m_down.step == 15]
p_fx_down_slow = p_fx_down.loc[p_fx_down.step == 8]
p_m_down_slow = p_m_down.loc[p_m_down.step == 8]
p_m_down.plot(x="cycle", y=p_m_down.columns.drop("cycle").drop("step"))
p_fx_down.plot(x="cycle", y=[c for c in p_fx_down.columns if c.startswith("point_")])
p_fx_down_fast.plot(
x="cycle", y=[c for c in p_fx_down.columns if c.startswith("point_")]
)
p_m_down_fast.plot(
x="cycle", y=[c for c in p_fx_down.columns if c.startswith("point_")]
)
p_m_down_slow.plot(
x="cycle", y=[c for c in p_m_down_slow.columns if c.startswith("point_")]
)
ax1 = p_fx_down_slow.plot(
x="cycle", y=[c for c in p_fx_down_slow.columns if c.startswith("point_")]
)
ycols = [c for c in p_fx_down_slow.columns if c.startswith("point_")]
xcol = "cycle"
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.suptitle("Fixed dt")
ax1.set_ylabel("voltage (vs. Li/Li+)")
p_fx_down_slow.plot(x=xcol, y=ycols, ax=ax1, title="slow cycles")
p_fx_down_fast.plot(x=xcol, y=ycols, ax=ax2, title="fast cycles", legend=False)
plt.tight_layout()
ycols = [c for c in p_m_down_slow.columns if c.startswith("point_")]
xcol = "cycle"
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.suptitle("Martin")
ax1.set_ylabel("voltage (vs. Li/Li+)")
p_m_down_slow.plot(x=xcol, y=ycols, ax=ax1, title="slow cycles")
p_m_down_fast.plot(x=xcol, y=ycols, ax=ax2, title="fast cycles", legend=False)
plt.tight_layout()
Explanation: Extract OCV points
Using the select_ocv_points function from cellpy.utils.ocv_rlx.
```python
def select_ocv_points(
cellpydata,
cycles=None,
selection_method="martin",
number_of_points=5,
interval=10,
relative_voltage=False,
report_times=False,
direction=None,
):
Select points from the ocvrlx steps.
Args:
cellpydata: CellpyData-object
cycles: list of cycle numbers to process (optional)
selection_method: criteria for selecting points
martin: select first and last, and then last/2, last/2/2 etc.
until you have reached the wanted number of points.
fixed_times: select the first point, and then points spaced by a fixed time interval (see the interval argument).
number_of_points: number of points you want.
interval: interval between each point (in use only for methods
where interval makes sense). If it is a list, then
number_of_points will be calculated as len(interval) + 1 (and
override the set number_of_points).
relative_voltage: set to True if you would like the voltage to be
relative to the voltage before starting the ocv rlx step.
Defaults to False. Remark that the initial rlx step (when
you have just put your cell on the tester) does not have any
prior voltage. The relative voltage will then be versus the
first measurement point.
report_times: also report the ocv rlx total time if True (defaults
to False)
direction ("up", "down" or "both"): select "up" if you would like
to process only the ocv rlx steps where the voltage is relaxing
upwards, and vice versa. Defaults to "both".
Returns:
pandas.DataFrame
```
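To make the "martin" scheme concrete, here is a small standalone sketch (my own illustration, not cellpy code) of how such relaxation-time points could be generated under that description: keep the first and last time stamps and repeatedly halve the remaining span.
```python
import numpy as np

def martin_time_points(t_end, number_of_points=5):
    # e.g. t_end=1000 s and 5 points -> [0, 125, 250, 500, 1000]
    points = [float(t_end)]
    for _ in range(number_of_points - 2):
        points.append(points[-1] / 2)
    points.append(0.0)
    return np.array(sorted(points))

print(martin_time_points(1000, number_of_points=5))
```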
End of explanation
p_m_down_slow["delta"] = p_m_down_slow["point_04"] - p_m_down_slow["point_00"]
p_m_down_fast["delta"] = p_m_down_fast["point_04"] - p_m_down_fast["point_00"]
ycols = "delta"
xcol = "cycle"
fig, ax = plt.subplots()
fig.suptitle("Martin")
ax.set_ylabel("delta V (point_04 - point_00)")
p_m_down_slow.plot(x=xcol, y=ycols, ax=ax, label="slow cycles")
p_m_down_fast.plot(x=xcol, y=ycols, ax=ax, label="fast cycles");
# plt.tight_layout()
p_fx_down_slow["delta"] = p_fx_down_slow["point_04"] - p_fx_down_slow["point_00"]
p_fx_down_fast["delta"] = p_fx_down_fast["point_04"] - p_fx_down_fast["point_00"]
ycols = "delta"
xcol = "cycle"
fig, ax = plt.subplots()
fig.suptitle("fixed")
ax.set_ylabel("delta V (point_04 - point_00)")
p_fx_down_slow.plot(x=xcol, y=ycols, ax=ax, label="slow cycles")
p_fx_down_fast.plot(x=xcol, y=ycols, ax=ax, label="fast cycles");
# plt.tight_layout()
Explanation: Delta V
End of explanation
plotutils.cycle_info_plot(d)
Explanation: raw
End of explanation |
1,857 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
source_id_text = [[source_vocab_to_int.get(letter, source_vocab_to_int['<UNK>'])
for letter in line.split(' ')] for line in source_text.split('\n')]
target_id_text = [[target_vocab_to_int.get(letter, target_vocab_to_int['<UNK>'])
for letter in line.split(' ')] + [target_vocab_to_int['<EOS>']]
for line in target_text.split('\n')]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
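A quick sanity check with toy, made-up vocabularies (the ids below are arbitrary) could look like this; the important detail is the <EOS> id appended to the target output:
```python
src_vocab = {'hello': 4, 'world': 5, '<UNK>': 2}
tgt_vocab = {'bonjour': 7, 'monde': 8, '<EOS>': 1, '<UNK>': 2}
print(text_to_ids('hello world', 'bonjour monde', src_vocab, tgt_vocab))
# -> ([[4, 5]], [[7, 8, 1]])
```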
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
inputs = tf.placeholder(tf.int32, [None, None], name = 'input')
targets = tf.placeholder(tf.int32, [None, None], name = 'targets')
learning_rate = tf.placeholder(tf.float32, shape = None, name = 'learning_rate')
keep_prob = tf.placeholder(tf.float32, shape = None, name = 'keep_prob')
return inputs, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
GO_ID = target_vocab_to_int['<GO>']
target_data = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
concat_data = tf.fill([batch_size, 1], GO_ID)
target_data = tf.concat([concat_data, target_data], 1)
return target_data
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
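For intuition, a NumPy-only toy version of this transformation (with made-up ids, not the TensorFlow implementation) drops the last id of every target row and prepends the <GO> id:
```python
import numpy as np

GO_ID = 1
targets = np.array([[11, 12, 13, 3],    # 3 stands in for <EOS>
                    [21, 22, 23, 3]])
dec_input = np.concatenate([np.full((targets.shape[0], 1), GO_ID, dtype=targets.dtype),
                            targets[:, :-1]], axis=1)
print(dec_input)  # [[ 1 11 12 13]
                  #  [ 1 21 22 23]]
```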
End of explanation
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
encoding_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)]
* num_layers)
encoding_cell = tf.contrib.rnn.DropoutWrapper(encoding_cell, keep_prob)
_, rnn_state = tf.nn.dynamic_rnn(encoding_cell, rnn_inputs, dtype = tf.float32)
return rnn_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell,
train_decoder_fn,
dec_embed_input,
sequence_length,
scope = decoding_scope)
train_logits = output_fn(train_pred)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings,
start_of_sequence_id, end_of_sequence_id,
maximum_length - 1, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, infer_decoder_fn, scope = decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)
with tf.variable_scope("decoding") as decoding_scope:
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size,
None, scope = decoding_scope)
train_logits = decoding_layer_train(encoder_state, dec_cell,
dec_embed_input, sequence_length,
decoding_scope, output_fn, keep_prob)
decoding_scope.reuse_variables()
inference_logits = decoding_layer_infer(encoder_state, dec_cell,
dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'], sequence_length - 1,
vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
enc_embed_input = tf.contrib.layers.embed_sequence(input_data,
source_vocab_size,
enc_embedding_size)
encoder_state = encoding_layer(enc_embed_input, rnn_size,
num_layers, keep_prob)
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
return decoding_layer(dec_embed_input, dec_embeddings, encoder_state,
target_vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = 4
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 100
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 50
decoding_embedding_size = 50
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.9
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
sentence = sentence.lower()
word_list = [vocab_to_int.get(word, vocab_to_int['<UNK>'])
for word in sentence.split(' ')]
return word_list
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
1,858 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiclass Support Vector Machine exercise
(Adapted from Stanford University's CS231n Open Courseware)
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the HW page on the course website.
In this exercise you will
Step1: CIFAR-10 Data Loading and Preprocessing
Step2: SVM Classifier
Your code for this section will all be written inside cs231n/classifiers/linear_svm.py.
As you can see, we have prefilled the function svm_loss_naive which uses for loops to evaluate the multiclass SVM loss function.
Step3: The grad returned from the function above is right now all zero. Derive and implement the gradient for the SVM cost function and implement it inline inside the function svm_loss_naive. You will find it helpful to interleave your new code inside the existing function.
To check that you have implemented the gradient correctly, you can numerically estimate the gradient of the loss function and compare the numeric estimate to the gradient that you computed. We have provided code that does this for you
Step4: Inline Question 1
Step5: Stochastic Gradient Descent
We now have vectorized and efficient expressions for the loss, the gradient and our gradient matches the numerical gradient. We are therefore ready to do SGD to minimize the loss. | Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the
# notebook rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Multiclass Support Vector Machine exercise
(Adapted from Stanford University's CS231n Open Courseware)
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the HW page on the course website.
In this exercise you will:
implement a fully-vectorized loss function for the SVM
implement the fully-vectorized expression for its analytic gradient
check your implementation using numerical gradient
use a validation set to tune the learning rate and regularization strength
optimize the loss function with SGD
visualize the final learned weights
End of explanation
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir, num_of_batches=6)
# Increase num_of_batches to 6 if you have sufficient memory
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise.
num_training = 49000
#Increase this if you have memory: num_training = 49000
num_validation = 1000
num_test = 1000
# Our validation set will be num_validation points from the original
# training set.
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
# Our training set will be the first num_train points from the original
# training set.
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
# We use the first num_test points of the original test set as our
# test set.
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
# As a sanity check, print out the shapes of the data
print 'Training data shape: ', X_train.shape
print 'Validation data shape: ', X_val.shape
print 'Test data shape: ', X_test.shape
# Preprocessing: subtract the mean image
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
print mean_image[:10] # print a few of the elements
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8')) # visualize the mean image
# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
# Also, lets transform both data matrices so that each image is a column.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))]).T
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))]).T
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))]).T
print X_train.shape, X_val.shape, X_test.shape
Explanation: CIFAR-10 Data Loading and Preprocessing
End of explanation
# Evaluate the naive implementation of the loss we provided for you:
from cs231n.classifiers.linear_svm import svm_loss_naive
import time
# generate a random SVM weight matrix of small numbers
W = np.random.randn(10, 3073) * 0.0001
loss, grad = svm_loss_naive(W, X_train, y_train, 0.00001)
print 'loss: %f' % (loss, )
Explanation: SVM Classifier
Your code for this section will all be written inside cs231n/classifiers/linear_svm.py.
As you can see, we have prefilled the function svm_loss_naive which uses for loops to evaluate the multiclass SVM loss function.
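For reference, the quantity that function computes for a single example is the multiclass hinge loss L_i = sum_{j != y_i} max(0, s_j - s_{y_i} + 1). A minimal standalone sketch with toy scores (not the assignment code) is:
```python
import numpy as np

scores = np.array([3.2, 5.1, -1.7])  # toy class scores for one example
y = 0                                # assume class 0 is the correct label
margins = np.maximum(0, scores - scores[y] + 1.0)
margins[y] = 0                       # the correct class does not contribute
print(margins.sum())                 # 2.9 for these toy scores
```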
End of explanation
# Once you've implemented the gradient, recompute it with the code below
# and gradient check it with the function we provided for you
# Compute the loss and its gradient at W.
loss, grad = svm_loss_naive(W, X_train, y_train, 0.0)
# Numerically compute the gradient along several randomly chosen dimensions, and
# compare them with your analytically computed gradient. The numbers should match
# almost exactly along all dimensions.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: svm_loss_naive(w, X_train, y_train, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
Explanation: The grad returned from the function above is right now all zero. Derive and implement the gradient for the SVM cost function and implement it inline inside the function svm_loss_naive. You will find it helpful to interleave your new code inside the existing function.
To check that you have implemented the gradient correctly, you can numerically estimate the gradient of the loss function and compare the numeric estimate to the gradient that you computed. We have provided code that does this for you:
End of explanation
# Next implement the function svm_loss_vectorized; for now only compute the loss;
# we will implement the gradient in a moment.
tic = time.time()
loss_naive, grad_naive = svm_loss_naive(W, X_train, y_train, 0.00001)
toc = time.time()
print 'Naive loss: %e computed in %fs' % (loss_naive, toc - tic)
from cs231n.classifiers.linear_svm import svm_loss_vectorized
tic = time.time()
loss_vectorized, _ = svm_loss_vectorized(W, X_train, y_train, 0.00001)
toc = time.time()
print 'Vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic)
# The losses should match but your vectorized implementation should be much faster.
print 'difference: %f' % (loss_naive - loss_vectorized)
# Complete the implementation of svm_loss_vectorized, and compute the gradient
# of the loss function in a vectorized way.
# The naive implementation and the vectorized implementation should match, but
# the vectorized version should still be much faster.
tic = time.time()
_, grad_naive = svm_loss_naive(W, X_train, y_train, 0.00001)
toc = time.time()
print 'Naive loss and gradient: computed in %fs' % (toc - tic)
tic = time.time()
_, grad_vectorized = svm_loss_vectorized(W, X_train, y_train, 0.00001)
toc = time.time()
print 'Vectorized loss and gradient: computed in %fs' % (toc - tic)
# The loss is a single number, so it is easy to compare the values computed
# by the two implementations. The gradient on the other hand is a matrix, so
# we use the Frobenius norm to compare them.
difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print 'difference: %f' % difference
Explanation: Inline Question 1:
It is possible that once in a while a dimension in the gradcheck will not match exactly. What could such a discrepancy be caused by? Is it a reason for concern? What is a simple example in one dimension where a gradient check could fail? Hint: the SVM loss function is not strictly speaking differentiable
Your Answer: The discrepancy is caused by the points where the loss function is not differentiable; for the hinge loss these are the places where a margin term sits exactly at the threshold of 1, i.e. where max(0, .) switches branches. If the optimum lies near such a kink, the check can become a problem for us. For one dimension, consider y = abs(x) at x = 0: it is not differentiable there, so the analytic approach gives a (sub)gradient of either 1 or -1, while the numerical approach gives something that depends on the step size, anywhere within [-1, 1].
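A tiny standalone demonstration of this effect (not part of the assignment code) on f(x) = |x| at a point just above the kink:
```python
import numpy as np

f = lambda x: abs(x)
x = 1e-6                      # a point sitting very close to the kink at 0
analytic = np.sign(x)         # the analytic (sub)gradient here is +1

for h in [1e-3, 1e-5, 1e-7]:
    numeric = (f(x + h) - f(x - h)) / (2 * h)   # centered difference
    print('h=%.0e numeric=%.3f analytic=%.1f' % (h, numeric, analytic))
# the numeric estimate ranges from ~0.001 to 1.0 depending on the step size
```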
End of explanation
# Now implement SGD in LinearSVM.train() function and run it with the code below
from cs231n.classifiers import LinearSVM
svm = LinearSVM()
tic = time.time()
loss_hist = svm.train(X_train, y_train, learning_rate=1e-7, reg=5e4,
num_iters=1500, verbose=True)
toc = time.time()
print 'That took %fs' % (toc - tic)
# A useful debugging strategy is to plot the loss as a function of
# iteration number:
plt.plot(loss_hist)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
# Write the LinearSVM.predict function and evaluate the performance on both the
# training and validation set
y_train_pred = svm.predict(X_train)
print 'training accuracy: %f' % (np.mean(y_train == y_train_pred), )
y_val_pred = svm.predict(X_val)
print 'validation accuracy: %f' % (np.mean(y_val == y_val_pred), )
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of about 0.4 on the validation set.
learning_rates = np.arange(5)*2e-8+3e-8
regularization_strengths = np.arange(5)*5e3+2e4
# results is dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1 # The highest validation accuracy that we have seen so far.
best_svm = None # The LinearSVM object that achieved the highest validation rate.
################################################################################
# TODO: #
# Write code that chooses the best hyperparameters by tuning on the validation #
# set. For each combination of hyperparameters, train a linear SVM on the #
# training set, compute its accuracy on the training and validation sets, and #
# store these numbers in the results dictionary. In addition, store the best #
# validation accuracy in best_val and the LinearSVM object that achieves this #
# accuracy in best_svm. #
# #
# Hint: You should use a small value for num_iters as you develop your #
# validation code so that the SVMs don't take much time to train; once you are #
# confident that your validation code works, you should rerun the validation #
# code with a larger value for num_iters. #
################################################################################
for lr in learning_rates:
for rs in regularization_strengths:
svm = LinearSVM()
svm.train(X_train, y_train, learning_rate=lr, reg=rs,
num_iters=1500)
results[(lr,rs)]= (np.mean(svm.predict(X_train)==y_train),
np.mean(svm.predict(X_val)==y_val))
if best_val < results[(lr,rs)][1]:
best_val = results[(lr,rs)][1]
best_svm = svm
print best_val
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
sz = [results[x][0]*1500 for x in results] # default size of markers is 20
plt.subplot(1,2,1)
plt.scatter(x_scatter, y_scatter, sz)
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
sz = [results[x][1]*1500 for x in results] # default size of markers is 20
plt.subplot(1,2,2)
plt.scatter(x_scatter, y_scatter, sz)
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
# Evaluate the best svm on test set
y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print 'linear SVM on raw pixels final test set accuracy: %f' % test_accuracy
# Visualize the learned weights for each class.
# Depending on your choice of learning rate and regularization strength, these may
# or may not be nice to look at.
w = best_svm.W[:,:-1] # strip out the bias
w = w.reshape(10, 32, 32, 3)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in xrange(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
Explanation: Stochastic Gradient Descent
We now have vectorized and efficient expressions for the loss, the gradient and our gradient matches the numerical gradient. We are therefore ready to do SGD to minimize the loss.
End of explanation |
1,859 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Making batch recommendations using GraphLab Create
In this notebook we will show a complete recommender system implemented using GraphLab's deployment tools. This recommender example is common in many batch scenarios, where a new recommender is trained on a periodic basis, with the generated recommendations persisted to a relational database used by the web application.
The data we will use in this notebook is the same as the Building a Recommender with Ratings Data notebook, but without the exploration and prototyping parts.
The pipeline will contain the following tasks
Step1: Clean the data
The first task in this pipeline will take data, clean it, and transform it into an SFrame. In this task, the raw data is read using graphlab.SFrame.read_csv, with the file path provided as a parameter to the Task. Once the data is loaded into an SFrame, we clean it by calling dropna() on the SFrame. The code that will run when the task is executed is
Step2: Train the model
Now that the data is cleaned and ready as an SFrame, we need to train a model in this recommendation system. To train the model, we need the SFrame created in the previous Task.
Step3: Generate Recommendations
With the previous task there is now a trained model that we should use for generating recommendations. With a Task now specified that trains a model, it can be improved independently from the task that generates recommendations from that model. To generate recommendations we need the trained model to use, and the users needing recommendations.
Here is the code for generating recommendations from a trained model
Step4: Running and Monitoring this recommender
Now that the tasks are defined for this pipeline, let's compose them together to create a Job. Using the late-binding feature of the Data Pipelines framework, the parameters, inputs, and outputs that have not been specified with the Task can be specified at runtime. We will use this feature to specify the database parameters for the 'persist' task, and the raw data location for the 'clean' task.
Create a Job
Step5: The job is started asynchronously in the background, and we can query for its status by calling get_status on the Job instance returned
Step6: If you don't want to wait for the job to complete, you can use the get_results function which waits for the job to complete before you get the results.
Step7: To see more information about the job, print the job object
Step8: Let us try and visualize the recommendations.
Step9: Persist Recommendations
Now that recommendations have been generated, the final step in this pipeline is to save them to a relational database. The main application queries this database for user recommendations as users are interacting with the application. For this task, we will use MySQL as an example, but that can easily be substituted with a different database.
The DB table needed to run this example looks like the following
Step10: Note
Step11: Save Recommendations to Database
Note
Step12: The job is now 'Completed'.
Running in EC2 or Hadoop
Data Pipelines also supports running the same pipeline in EC2 or Hadoop YARN clusters (CDH5). In order to run this pipeline in those environments, simply add an environment parameter to the graphlab.deploy.job.create API. No code needs to change, and the GraphLab Data Pipelines framework takes care of installing and configuring what is needed to run this pipeline in the specified environment.
To create an EC2 environment
Step13: To create a Hadoop environment | Python Code:
import graphlab
Explanation: Making batch recommendations using GraphLab Create
In this notebook we will show a complete recommender system implemented using GraphLab's deployment tools. This recommender example is common in many batch scenarios, where a new recommender is trained on a periodic basis, with the generated recommendations persisted to a relational database used by the web application.
The data we will use in this notebook is the same as the Building a Recommender with Ratings Data notebook, but without the exploration and prototyping parts.
The pipeline will contain the following tasks:
Clean and transform data
Train a Recommender model
Generate Recommendations for users
Persist Recommendations to a MySQL database
Each of these tasks will be defined as a function and executed as a Job using GraphLab. And finally, we will cover how to Run and monitor these pipelines. Remember, when using GraphLab Data Pipelines, the Tasks and Jobs created are managed objects, so they must have unique names.
This notebook uses GraphLab Create 1.3.
End of explanation
def clean_data(path):
import graphlab as gl
sf = gl.SFrame.read_csv(path, delimiter='\t')
sf['rating'] = sf['rating'].astype(int)
sf = sf.dropna()
sf.rename({'user':'user_id', 'movie':'movie_id'})
# To simplify this example, only keep 0.1% of the number of rows from the input data
sf = sf.sample(0.001)
return sf
Explanation: Clean the data
The first task in this pipeline will take data, clean it, and transform it into an SFrame. In this task, the raw data is read using graphlab.SFrame.read_csv, with the file path provided as a parameter to the Task. Once the data is loaded into an SFrame, we clean it by calling dropna() on the SFrame. The code that will run when the task is executed is:
End of explanation
def train_model(data):
import graphlab as gl
model = gl.recommender.create(data, user_id='user_id', item_id='movie_id', target='rating')
return model
Explanation: Train the model
Now that the data is cleaned and ready as an SFrame, we need to train a model in this recommendation system. To train the model, we need the SFrame created in the previous Task.
End of explanation
def gen_recs(model, data):
recs = model.recommend(data['user_id'])
return recs
Explanation: Generate Recommendations
With the previous task there is now a trained model that we should use for generating recommendations. With a Task now specified that trains a model, it can be improved independently from the task that generates recommendations from that model. To generate recommendations we need the trained model to use, and the users needing recommendations.
Here is the code for generating recommendations from a trained model:
End of explanation
def my_batch_job(path):
data = clean_data(path)
model = train_model(data)
recs = gen_recs(model, data)
return recs
job = graphlab.deploy.job.create(my_batch_job,
path = 'https://static.turi.com/datasets/movie_ratings/sample.small')
Explanation: Running and Monitoring this recommender
Now that the tasks are defined for this pipeline, let's compose them together to create a Job. Using the late-binding feature of the Data Pipelines framework, the parameters, inputs, and outputs that have not been specified with the Task can be specified at runtime. We will use this feature to specify the database parameters for the 'persist' task, and the raw data location for the 'clean' task.
Create a Job
End of explanation
job.get_status()
Explanation: The job is started asynchronously in the background, and we can query for its status by calling get_status on the Job instance returned:
End of explanation
recs = job.get_results() # Blocking call which waits for the job to complete.
Explanation: If you don't want to keep polling get_status yourself, you can use the get_results function, which waits for the job to complete and then returns the results.
End of explanation
print job
Explanation: To see more information about the job, print the job object:
End of explanation
graphlab.canvas.set_target('ipynb') # show Canvas inline to IPython Notebook
recs.show()
Explanation: Let us try and visualize the recommendations.
End of explanation
@graphlab.deploy.required_packages(['mysql-connector-python'])
def persist_to_db(recs, dbhost, dbuser, dbpass, dbport, dbtable, dbname):
import mysql.connector
from mysql.connector import errorcode
conn = mysql.connector.connect(host=dbhost, user=dbuser, password=dbpass, port=dbport)
conn.database = dbname
cursor = conn.cursor()
# this example expects the table to be empty, minor changes here if you want to
# update existing users' recommendations instead.
add_row_sql = ("INSERT INTO " + dbtable + " (user_id, movie_id, score, rank) "
"VALUES (%(user_id)s, %(movie_id)s, %(score)s, %(rank)s)")
print "Begin - Writing recommendations to DB...."
for row in recs:
cursor.execute(add_row_sql, row)
print "End - Writing recommendations to DB...."
# commit recommendations to database
conn.commit()
Explanation: Persist Recommendations
Now that recommendations have been generated, the final step in this pipeline is to save them to a relational database. The main application queries this database for user recommendations as users are interacting with the application. For this task, we will use MySQL as an example, but that can easily be substituted with a different database.
The DB table needed to run this example looks like the following:
+----------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------+-------------+------+-----+---------+-------+
| user_id | varchar(50) | NO | | NULL | |
| movie_id | varchar(50) | NO | | NULL | |
| score | float | NO | | NULL | |
| rank | int(8) | NO | | NULL | |
+----------+-------------+------+-----+---------+-------+
To create a table in MySQL with this schema:
CREATE TABLE recommendations (user_id VARCHAR(50),
movie_id VARCHAR(50), score FLOAT, rank INT(8));
End of explanation
# install the mysql-connector-python package locally, if not running from a virtualenv then sudo may be required
!pip install --allow-external mysql-connector-python mysql-connector-python
Explanation: Note: An important note about this Task is that it requires the mysql-connector-python package, which is not in standard Python. Using GraphLab Create, specifying that this package is required is easily done in the Task definition. When running this task in a remote environment (EC2 or Hadoop) the framework will make sure this python package is installed prior to execution.
In order to run this pipeline locally, please install the mysql-connector-python package on your machine.
End of explanation
job = graphlab.deploy.job.create(persist_to_db,
recs = recs,
dbhost = '10.10.2.2', # change these db params appropriately
dbuser = 'test',
dbpass = 'secret',
dbname = 'users',
dbport = 3306,
dbtable = 'recommendations')
results = job.get_results()
Explanation: Save Recommendations to Database
Note: Obviously change the following database parameters to ones that match the database you are connecting to. Also, remember to install the mysql-connector-python package on your machine before running this job.
End of explanation
ec2 = graphlab.deploy.Ec2Config(aws_access_key_id='<key>',
aws_secret_key='<secret>')
c = graphlab.deploy.ec2_cluster.create(name='ec2cluster',
s3_path='s3://my_bucket',
ec2_config=ec2)
Explanation: The job is now 'Completed'.
Running in EC2 or Hadoop
Data Pipelines also supports running the same pipeline in EC2 or Hadoop YARN clusters (CDH5). In order to run this pipeline in those environments, simply add an environment parameter to graphlab.deploy.job.create API. No code needs to change, and the GraphLab Data Pipelines framework takes care of installing and configuring what is needed to run this pipeline in the specified environment.
To create an EC2 environment:
End of explanation
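As a concrete illustration of the environment parameter mentioned above, a hedged sketch of re-running the same pipeline remotely might look like the following (the keyword name is assumed from the description; c is one of the cluster objects created in this section, and the path reuses the sample dataset URL from earlier):
remote_job = graphlab.deploy.job.create(my_batch_job,
                                        environment=c,
                                        path='https://static.turi.com/datasets/movie_ratings/sample.small')
remote_recs = remote_job.get_results()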
c = graphlab.deploy.hadoop_cluster.create(name='hd',
turi_dist_path='hdfs://some.domain.com/user/name/dd-deployment',
                                          hadoop_conf_dir='~/yarn-config')
Explanation: To create a Hadoop environment:
End of explanation |
1,860 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have two arrays: | Problem:
import numpy as np
a = np.array(
[[[ 0, 1, 2, 3],
[ 2, 3, 4, 5],
[ 4, 5, 6, 7]],
[[ 6, 7, 8, 9],
[ 8, 9, 10, 11],
[10, 11, 12, 13]],
[[12, 13, 14, 15],
[14, 15, 16, 17],
[16, 17, 18, 19]]]
)
b = np.array(
[[0, 1, 2],
[2, 1, 3],
[1, 0, 3]]
)
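# np.take_along_axis picks a[i, j, b[i, j]] for every (i, j); the trailing
# [..., 0] drops the length-1 axis introduced by b[..., np.newaxis].
# The final result is the sum of all elements of a minus the sum of the selected ones.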
arr = np.take_along_axis(a, b[..., np.newaxis], axis=-1)[..., 0]
result = np.sum(a) - np.sum(arr) |
1,861 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Directed, Polar Heat Diffusion
Step2: Definitions
Definitions
Step3: Example 1
Example 1 is a small system set up to run out of heat defined by the short set of relations
A -| B
A -| C
B -> C
with weight matrix $W$ (indexed in alphabetical order)
Step4: Example 2
Diffusion on synthetic data.
Architecture
Step5: Example 3
A random graph with more negative edges.
Step6: Example 4
A random graph with more positive edges
Step7: Example 5
Step8: Example 6 - Chaotic Increasing System
Step9: This is the first example of a system coming to a non-zero steady state! One of the reasons is that any system with a sink will always hemorrhage heat out of the sink.
Some ideas on how to deal with this
Step11: Strategy 2
Step13: Strategy 3 | Python Code:
import random
import sys
import time
from abc import ABC, abstractmethod
from collections import defaultdict
from dataclasses import dataclass
from itertools import product
from typing import Optional
import matplotlib as mpl
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import pandas as pd
from IPython.display import Markdown
from sklearn.preprocessing import normalize
from tqdm import tqdm_notebook as tqdm
%matplotlib inline
mpl.rcParams['figure.figsize'] = [8.0, 3.0]
print(time.asctime())
print(sys.version)
# My favorite seed
np.random.seed(127)
random.seed(127)
def draw(graph):
edges = graph.edges()
pos = nx.spring_layout(graph)
colors = []
for (u,v,attrib_dict) in list(graph.edges.data()):
colors.append('blue' if attrib_dict['weight'] == 1 else 'red')
nx.draw(graph, pos=pos, edges=edges, edge_color=colors, node_size=60)
def assign_bernoulli_polarity(graph, p:float = 0.5) -> None:
    """Bigger probability means more positive edges."""
for u, v, k in graph.edges(keys=True):
graph.edges[u, v, k]['weight'] = 1 if random.random() < p else -1
# Insulation parameters to check
alphas = (
0.1,
0.01,
0.001,
)
n_subplots_x = 3
n_subplots_y = int((1 + len(alphas)) / n_subplots_x)
n_subplots_x, n_subplots_y
Explanation: Directed, Polar Heat Diffusion
End of explanation
class BaseDiffuser(ABC):
def __init__(self, graph: nx.DiGraph, alpha: float, steps: Optional[int] = None) -> None:
self.alpha = alpha
self.deltas = []
self.heats = []
self.steps = steps or int(30 / self.alpha)
self.weights = self.calculate_weights(graph)
@staticmethod
@abstractmethod
def calculate_weights(graph):
raise NotImplementedError
@abstractmethod
def run(self, heat, tqdm_kwargs=None) -> None:
raise NotImplementedError
def _plot_diffusion_title(self):
return f'Diffusion ($\\alpha={self.alpha}$)'
def plot(self, heat_plt_kwargs=None, deriv_plt_kwargs=None) -> None:
fig, (lax, rax) = plt.subplots(1, 2)
lax.set_title(self._plot_diffusion_title())
lax.set_ylabel('Heat')
lax.set_xlabel('Time')
pd.DataFrame(self.heats).plot.line(ax=lax, logx=True, **(heat_plt_kwargs or {}))
rax.set_title('Derivative of Sum of Absolute Heats')
rax.set_ylabel('Change in Sum of Absolute Heats')
rax.set_xlabel('Time')
derivative = [
(x2 - x1)
for x1, x2 in zip(self.deltas, self.deltas[1:])
]
pd.DataFrame(derivative).plot.line(ax=rax, logx=True, legend=False, **(deriv_plt_kwargs or {}))
plt.tight_layout(rect=[0, 0, 1, 0.95])
return fig, (lax, rax)
@staticmethod
def optimize_alpha_multirun(graph, alphas, heat):
alpha_heats = {}
alpha_deltas = {}
for alpha in alphas:
diffuser = Diffuser(graph, alpha)
diffuser.run(heat)
alpha_heats[alpha] = diffuser.heats
alpha_deltas[alpha] = diffuser.deltas
return alpha_deltas, alpha_heats
@classmethod
def optimize_alpha_multiplot(cls, graph, alphas, heat, heat_plt_kwargs=None, deriv_plt_kwargs=None):
ds, hs = cls.optimize_alpha_multirun(graph, alphas, heat)
cls._optimize_alpha_multiplot_helper(hs, plt_kwargs=heat_plt_kwargs)
cls._optimize_alpha_multiplot_deriv_helper(ds, plt_kwargs=deriv_plt_kwargs)
@staticmethod
def _optimize_alpha_multiplot_helper(hs, plt_kwargs=None):
fig, axes = plt.subplots(n_subplots_y, n_subplots_x)
for alpha, ax in zip(alphas, axes.ravel()):
ax.set_title(f'$\\alpha={alpha}$')
ax.set_ylabel('Heat')
ax.set_xlabel('Time')
pd.DataFrame(hs[alpha]).plot.line(ax=ax, logx=True, **(plt_kwargs or {}))
plt.suptitle(f'Diffusion ($\\alpha={alpha}$)')
plt.tight_layout(rect=[0, 0, 1, 0.95])
@staticmethod
def _optimize_alpha_multiplot_deriv_helper(ds, plt_kwargs=None):
fig, axes = plt.subplots(n_subplots_y, n_subplots_x)
for alpha, ax in zip(ds, axes.ravel()):
ax.set_title(f'$\\alpha={alpha}$')
ax.set_ylabel('Change in Sum of Heats')
ax.set_xlabel('Time')
derivative = [
(x2 - x1)
for x1, x2 in zip(ds[alpha], ds[alpha][1:])
]
pd.DataFrame(derivative).plot.line(ax=ax, logx=True, legend=False, **(plt_kwargs or {}))
plt.suptitle('Derivative of Sum of Absolute Heats')
plt.tight_layout(rect=[0, 0, 1, 0.95])
@classmethod
def multiplot(cls, graphs_and_heats, alpha):
for graph, init_h in graphs_and_heats:
d = cls(graph, alpha=alpha)
d.run(init_h)
fig, axes = d.plot(heat_plt_kwargs=dict(legend=False))
fig.suptitle(graph.name)
plt.show()
class InsulatedDiffuser(BaseDiffuser):
def run(self, heat, tqdm_kwargs=None) -> None:
for _ in tqdm(range(self.steps), leave=False, desc=f'alpha: {self.alpha}'):
delta = heat @ self.weights
self.deltas.append(np.sum(np.abs(delta)))
heat = (1 - self.alpha) * heat + self.alpha * delta
self.heats.append(heat)
class Diffuser(InsulatedDiffuser):
@staticmethod
def calculate_weights(graph):
adj = nx.to_numpy_array(graph)
return normalize(adj, norm='l1')
Explanation: Definitions
Definitions:
Directed graph $G$ is a defined as:
$G = (V, E)$
Where edges $E$ are a subset of pairs of verticies $V$:
$E \subseteq V \times V$
Edges $(V_i, V_j) \in E$ are weighted according to weighting function $w$
$w: V \times V \to \{-1, 0, 1\}$
where edges with positive polarity have weight $w(V_i, V_j) = 1$, negative polarity have weight of $w(V_i, V_j) = -1$, and missing from the graph have $w(V_i, V_j) = 0$. More succinctly, the weights can be represented with weight matrix $W$ defined as
$W_{i,j} = w(V_i, V_j)$
Nodes have initial heats represented as vector $h^0 \in \mathbb{R}^{|V|}$
Exploration of Update Strategies
Strategy 1: Update with L1 Norm and Insulation
Heat flows through the out-edges of $V_i$ divided evenly among its neighbors. This first means that $W$ must be row-wise normalized (the "L1-norm"). It can be redefined as:
$W_{i,j} = \frac{w(V_i, V_j)}{\sum_{k=1}^{|V|} |w(V_i, V_k)|}$
Luckily, sklearn.preprocessing.normalize does the trick.
However, only percentage, $\alpha$, of the heat on a given node is allowed to flow at any given step. The remaining percentage of the heat ($1 - \alpha$) stays.
Derivations and Musings
Heat flows through the out-edges of $V_i$ divided evenly among its neighbors.
$\delta_{in}^t(i) = \sum_{j=1}^{|V|} h_j^t W_{j, i} = h^t W_{., i}$
$\delta_{out}^t(i) = \sum_{j=1}^{|V|} h_i^t W_{i, j}$
$\delta^t(i) = \delta_{in}^t(i) - \delta_{out}^t(i)$
Using step size $\alpha$, the new heat at time point $t + 1$ is
$h^{t+1}_i = (1 - \alpha) h^t_i + \alpha \delta^t(i)$
Therefore
$h^{t+1} = (1 - \alpha) h^t + \alpha \delta^t$
End of explanation
example_1_graph = nx.DiGraph()
example_1_graph.name = 'Example 1 - Small Decreasing Graph'
example_1_graph.add_edges_from([
('A', 'B', dict(weight=-1)),
('A', 'C', dict(weight=-1)),
('B', 'C', dict(weight=+1)),
])
plt.figure(figsize=(3, 3))
draw(example_1_graph)
plt.title(f'Visualization of ${example_1_graph}$')
plt.show()
example_1_init_h = np.array([5.0, 2.0, 2.0])
Diffuser.optimize_alpha_multiplot(example_1_graph, alphas, example_1_init_h)
Explanation: Example 1
Example 1 is a small system set up to run out of heat defined by the short set of relations
A -| B
A -| C
B -> C
with weight matrix $W$ (indexed in alphabetical order):
$W=\begin{bmatrix}
0 & -1 & -1 \\
0 & 0 & 1 \\
0 & 0 & 0
\end{bmatrix}$
End of explanation
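As a quick sanity check of the Strategy-1 update rule, here is a minimal sketch (not part of the original notebook) reproducing a single diffusion step for this system by hand, reusing the numpy and sklearn imports from the top of the notebook; the row normalisation mirrors Diffuser.calculate_weights.
W_raw = np.array([[0., -1., -1.],
                  [0.,  0.,  1.],
                  [0.,  0.,  0.]])
W = normalize(W_raw, norm='l1')          # rows become [0, -0.5, -0.5], [0, 0, 1], [0, 0, 0]
h0 = np.array([5.0, 2.0, 2.0])
alpha = 0.01
delta = h0 @ W                           # signed incoming heat: [0, -2.5, -0.5]
h1 = (1 - alpha) * h0 + alpha * delta    # -> [4.95, 1.955, 1.975]
print(h1)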
example_2_graph = nx.scale_free_graph(n=20, alpha=.31, beta=.64, gamma=.05)
example_2_graph.name = 'Example 2 - Random Graph with Even Polarity'
assign_bernoulli_polarity(example_2_graph, p=0.5)
draw(example_2_graph)
example_2_init_h = np.random.normal(size=example_2_graph.number_of_nodes())
Diffuser.optimize_alpha_multiplot(example_2_graph, alphas, example_2_init_h, heat_plt_kwargs=dict(legend=False))
Explanation: Example 2
Diffusion on synthetic data.
Architecture: directed scale-free with:
$n=20$
$\alpha=0.31$
$\beta=0.64$
$\gamma=0.05$
Polarity: bernoulli with:
$\rho=0.5$
Initial Heat: normal distribution with:
$\mu=0$
$\sigma=1$
End of explanation
example_3_graph = nx.scale_free_graph(n=20, alpha=.31, beta=.64, gamma=.05)
example_3_graph.name = 'Example 3 - Random Graph with Mostly Negative Polarity'
assign_bernoulli_polarity(example_3_graph, p=0.3)
example_3_init_h = np.random.normal(size=example_3_graph.number_of_nodes())
diffuser = Diffuser(example_3_graph, alpha=0.01)
diffuser.run(example_3_init_h)
diffuser.plot(heat_plt_kwargs=dict(legend=False))
plt.show()
Explanation: Example 3
A random graph with more negative edges.
End of explanation
example_4_graph = nx.scale_free_graph(n=20, alpha=.31, beta=.64, gamma=.05)
example_4_graph.name = 'Example 4 - Random Graph with Mostly Positive Polarity'
assign_bernoulli_polarity(example_4_graph, p=0.7)
example_4_init_h = np.random.normal(size=example_4_graph.number_of_nodes())
diffuser = Diffuser(example_4_graph, alpha=0.01)
diffuser.run(example_4_init_h)
diffuser.plot(heat_plt_kwargs=dict(legend=False))
plt.show()
Explanation: Example 4
A random graph with more positive edges
End of explanation
example_5_graph = nx.DiGraph()
example_5_graph.name = 'Example 5 - Small Increasing Graph'
example_5_graph.add_edges_from([
(0, 1, dict(weight=+1)),
(0, 2, dict(weight=+1)),
(1, 2, dict(weight=+1)),
])
plt.figure(figsize=(3, 3))
draw(example_5_graph)
plt.title(f'Visualization of ${example_5_graph}$')
plt.show()
example_5_init_h = np.random.normal(size=example_5_graph.number_of_nodes())
diffuser = Diffuser(example_5_graph, alpha=0.01)
diffuser.run(example_5_init_h)
diffuser.plot()
plt.show()
Explanation: Example 5
End of explanation
example_6_graph = nx.DiGraph()
example_6_graph.name = 'Example 6 - Small Chaotic Increasing Graph'
example_6_graph.add_edges_from([
(0, 1, dict(weight=+1)),
(1, 2, dict(weight=+1)),
(2, 0, dict(weight=+1)),
])
plt.figure(figsize=(3, 3))
draw(example_6_graph)
plt.title(f'Visualization of ${example_6_graph}$')
plt.show()
example_6_init_h = np.random.normal(size=example_6_graph.number_of_nodes())
diffuser = Diffuser(example_6_graph, alpha=0.01)
diffuser.run(example_6_init_h)
diffuser.plot()
plt.show()
Explanation: Example 6 - Chaotic Increasing System
End of explanation
example_graphs = [
(example_1_graph, example_1_init_h),
(example_2_graph, example_2_init_h),
(example_3_graph, example_3_init_h),
(example_4_graph, example_4_init_h),
(example_5_graph, example_5_init_h),
(example_6_graph, example_6_init_h),
]
Explanation: This is the first example of a system coming to a non-zero steady state! One of the reasons is that any system with a sink will always hemorrhage heat out of the sink.
Some ideas on how to deal with this:
Scale how much heat that can go into a node based on how much heat it always has (differential equations approach)
Self-connect all nodes
Self-connect only sink nodes (ones with no out-edges); a sketch of this variant follows below
End of explanation
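For comparison with the two strategies implemented next, here is a hedged sketch of the third idea above (self-loops only on sink nodes); it is not part of the original notebook, but it drops straight into the same InsulatedDiffuser machinery, and the class name is only illustrative.
class SinkSelfConnectedDiffuser(InsulatedDiffuser):

    def _plot_diffusion_title(self):
        return f'Sink-Self-Connected Insulated Diffusion ($\\alpha={self.alpha}$)'

    @staticmethod
    def calculate_weights(graph):
        adj = nx.to_numpy_array(graph)
        # add a self-loop only where a node has no out-edges, so sinks keep their heat
        sinks = np.where(np.abs(adj).sum(axis=1) == 0)[0]
        adj[sinks, sinks] = 1.0
        return normalize(adj, norm='l1')

SinkSelfConnectedDiffuser.multiplot(example_graphs, alpha=0.01)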
class SelfConnectedInsulatedDiffuser(InsulatedDiffuser):
def _plot_diffusion_title(self):
return f'Self-Connected Insulated Diffusion ($\\alpha={self.alpha}$)'
@staticmethod
def calculate_weights(graph):
adj = nx.to_numpy_array(graph)
for i in range(adj.shape[0]):
adj[i, i] = 1.0
return normalize(adj, norm='l1')
SelfConnectedInsulatedDiffuser.multiplot(example_graphs, alpha=0.01)
Explanation: Strategy 2: Self-connect nodes
All nodes diffuse a bit of heat to themselves, independent of their insulation. This means that the weight matrix gets redefined to have 1's on the diagonal.
End of explanation
class AntiSelfConnectedInsulatedDiffuser(InsulatedDiffuser):
def _plot_diffusion_title(self):
return f'Self-Connected Insulated Diffusion ($\\alpha={self.alpha}$)'
@staticmethod
def calculate_weights(graph):
adj = nx.to_numpy_array(graph)
for i in range(adj.shape[0]):
adj[i, i] = -1.0
return normalize(adj, norm='l1')
AntiSelfConnectedInsulatedDiffuser.multiplot(example_graphs, alpha=0.01)
Explanation: Strategy 3: Anti-self connectivity
End of explanation |
1,862 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quantum State Tomography with Iterative Maximum Likelihood Estimation.
Author
Step4: Define the operator measured, how to obtain it from a density matrix and the iterative operator for MaxLikelihood.
Step5: Take an example density matrix and reconstruct it
Step6: Displace and measure populations
Step7: Random initial state
Step8: Wigner function plot and measurement statistics
The x marks the displacement angles.
Step9: The actual MLE iterations for various measurement settings
Step10: Reconstructed states
Step11: Population measurement from reconstructed states
Step12: QuTiP details
Step13: Plot and save the wigner function for making animation of MLE
make sure you have the images/wigner folder created
Step14: Make a gif with the Wigner plots
Install imageio for this to work | Python Code:
import numpy as np
from qutip import Qobj, rand_dm, fidelity, displace, qdiags, qeye, expect
from qutip.states import coherent, coherent_dm, thermal_dm, fock_dm
from qutip.visualization import plot_wigner, hinton
from qutip.wigner import qfunc
import qutip
import matplotlib.pyplot as plt
from matplotlib import animation
# some pretty printing and animation stuff
from IPython.display import clear_output
Explanation: Quantum State Tomography with Iterative Maximum Likelihood Estimation.
Author: Shahnawaz Ahmed
Email: shahnawaz.ahmed95@gmail.com
GitHub: quantshah
In this notebook, we use QuTiP to perform Quantum State Tomography by counting photon number statistics of a resonator as discussed in [1]. The iterative Maximum Likelihood Estimation method is used to start from a random guess of the density matrix and repeatedly apply an operator to obtain the true density matrix of the state [2].
We measure the probability of observing a certain number of photons $\langle n \rangle$ after displacing the state by various angles. This is done by applying the displacement operator to the density matrix of the state $D(\beta) \rho D^{\dagger}(\beta)$. Then, using the photon number statistics for various measurement settings, i.e., values of $\beta_i$, we can recreate the density matrix.
This is done by an iterative Maximum Likelihood method which repeatedly applies an operator $R$ that is a function of the measured value of the observable, $f_i$, the current estimate of the probability from the density matrix, $p_i$, and the measurement setting (the displaced-basis projector, in this case $|y_i \rangle \langle y_i| = D^{\dagger}(\beta) |n_i \rangle \langle n_i| D(\beta)$, where $n_i$ denotes the Fock basis operator for measuring $i$ photons).
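Written out, the update used below is a sketch of the standard iterative-MLE form: for a given measurement setting $\beta$,
$$R_{\beta}(\rho) = \sum_{n} \frac{f_n^{\beta}}{p_n^{\beta}(\rho)}\, D^{\dagger}(\beta)\,|n\rangle\langle n|\,D(\beta), \qquad \rho \leftarrow \frac{R_{\beta}(\rho)\,\rho\,R_{\beta}(\rho)}{\mathrm{Tr}\left[R_{\beta}(\rho)\,\rho\,R_{\beta}(\rho)\right]}$$
where $f_n^{\beta}$ is the measured photon-number probability and $p_n^{\beta}(\rho)$ is the probability predicted by the current estimate; the code below adds a small constant to $p_n^{\beta}$ to avoid division by zero and cycles through the different values of $\beta$ in turn.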
References
[1] Shen, Chao, et al. "Optimized tomography of continuous variable systems using excitation counting." Physical Review A 94.5 (2016): 052327.
Link: https://arxiv.org/abs/1606.07554
[2] Řeháček, J., Z. Hradil, and M. Ježek. "Iterative algorithm for reconstruction of entangled states." Physical Review A 63.4 (2001): 040303.
End of explanation
"""Iterative Maximum Likelihood estimation based on photon number counting."""
def measure_population(beta, rho):
    """Measures the photon number statistics for state rho when displaced
    by angle beta.

    Parameters
    ----------
    beta: complex
        A complex displacement.
    rho:
        The density matrix as a QuTiP Qobj (`qutip.Qobj`)

    Returns
    -------
    populations: ndarray
        A 1D array of the probabilities for the photon-number populations.
    """
hilbertsize = rho.shape[0]
# Apply a displacement to the state and then measure the diagonals.
D = displace(hilbertsize, beta)
rho_disp = D*rho*D.dag()
populations = np.real(np.diagonal(rho_disp.full()))
return populations
def roperator(beta, rho, measured):
    """Calculates the iterative ratio operator comparing the measured photon-number
    probabilities to the analytical prediction for some rho.

    Parameters
    ----------
    beta: complex
        The displacement that was applied to the state before measurement.
    rho: `qutip.Qobj`
        The current estimate of the density matrix.
    measured: list_like
        The measurement statistics (diagonal terms) for this beta.

    Returns
    -------
    R: `qutip.Qobj`
        The iterative operator which we are going to apply for state
        reconstruction.
    """
hilbert_size = rho.shape[0]
# initialize an empty operator and build it
R = 0*qeye(hilbert_size)
calculated_measurements = measure_population(beta, rho)
for n in range(hilbert_size):
op = fock_dm(hilbert_size, n)
D = displace(hilbert_size, beta)
displaced_D = D.dag()*op*D
ratio = measured[n]/(calculated_measurements[n] + 1e-6)
displaced_D = ratio*displaced_D
R += displaced_D
return R
Explanation: Define the operator measured, how to obtain it from a density matrix and the iterative operator for MaxLikelihood.
End of explanation
hilbert_size = 32
alpha_range = 1.9
alphas = np.array([alpha_range, -alpha_range - 1j*alpha_range,
-alpha_range + 1j*alpha_range])
rho_true = sum([coherent_dm(hilbert_size, a) for a in alphas])/3
Explanation: Take an example density matrix and reconstruct it
End of explanation
betas = [1.7, -2, 2.2j, -2.1 - 2.4j, -2 + 2j]
measured_populations = [measure_population(b, rho_true) for b in betas]
width = 1
Explanation: Displace and measure populations
End of explanation
random_rho = rand_dm(hilbert_size)
hinton(random_rho)
plt.show()
Explanation: Random initial state
End of explanation
fig, ax = plt.subplots(1, 3, figsize=(15, 5))
indices = np.arange(hilbert_size)
plot_wigner(random_rho, fig, ax[0])
ax[0].scatter(np.real(betas), np.imag(betas), marker="x")
ax[0].set_title("Random inital state wigner function")
for i in range(len(betas)):
ax[1].bar(indices, measured_populations[i],
label = r"$beta = {}$".format(i), width=(i+1)/12)
ax[1].set_title("Population measurement statistics")
ax[1].set_xlabel("n")
ax[1].set_ylabel("Photon number probability")
plot_wigner(rho_true, fig, ax[2])
ax[2].scatter(np.real(betas), np.imag(betas), marker="x")
ax[2].set_title("Target state wigner function")
plt.show()
Explanation: Wigner function plot and measurement statistics
The x marks the displacement angles.
End of explanation
rho_t = []
max_iter = 100
for iterations in range(max_iter):
for i in range(len(betas)):
rho_t.append(random_rho)
rop = roperator(betas[i], random_rho, measured_populations[i])
random_rho = rop*random_rho*rop
# Trace renorm
random_rho = random_rho/random_rho.tr()
# Compute fidelity
fidel = fidelity(random_rho, rho_true)
if iterations % 5 == 0:
print(r"Fidelity: {}".format(fidel))
clear_output(wait=0.2)
if fidel > 0.99:
break
Explanation: The actual MLE iterations for various measurement settings
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(9, 5))
plot_wigner(random_rho, fig=fig, ax=ax[1])
plot_wigner(rho_true, fig=fig, ax=ax[0], cmap="RdBu")
ax[0].set_title("Target state")
ax[1].set_title("Reconstructed state")
plt.show()
Explanation: Reconstructed states
End of explanation
examples = 5
for i in range(examples):
idx = np.random.choice(range(len(betas)))
beta = betas[idx]
measured = measured_populations[idx]
plt.bar(indices, measure_population(beta, random_rho),
label = "Reconstructed statistics",
width=(i+1)/5)
plt.bar(indices, measured,
label = r"Simulated true measurement values, $\beta$ = {}".format(
np.round(beta,
2)),
width=(i+1)/8)
plt.xlabel(r"n")
plt.ylabel(r"$\langle n \rangle$")
plt.legend()
plt.show()
Explanation: Population measurement from reconstructed states
End of explanation
qutip.about()
Explanation: QuTiP details
End of explanation
# for i in range(len(rho_t)):
# fig, ax = plt.subplots(1, 2, figsize=(15, 7))
# indices = np.arange(hilbert_size)
# plot_wigner(rho_t[i], fig, ax[0])
# ax[0].scatter(np.real(betas), np.imag(betas), marker="x")
# hinton(rho_t[i], ax=ax[1])
# ax[1].set_title("Reconstructed Density matrix at iteration {}".format(str(i)))
# plt.savefig("images/wigner/"+str(i)+".png", bbox_inches='tight')
# plt.close()
Explanation: Plot and save the wigner function for making animation of MLE
make sure you have the images/wigner folder created
End of explanation
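If you uncomment the cell above, a small sketch to create the expected output folder first:
import os
os.makedirs("images/wigner", exist_ok=True)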
# import imageio
# png_dir = 'images/wigner/'
# images = []
# interval = 20 # intervals to pick to plot
# for i in range(0, len(rho_t), interval):
# file_name = str(i)+".png"
# file_path = os.path.join(png_dir, file_name)
# images.append(imageio.imread(file_path))
# imageio.mimsave('reconstruction3.gif', images, loop=1) #make loop=0 to keep looping
Explanation: Make a gif with the Wigner plots
Install imageio for this to work
End of explanation |
1,863 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'sandbox-1', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: INM
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:05
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
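For orientation, a filled-in ENUM property of this kind (such as 13.2 Species above) might look as follows. The species listed are placeholders rather than a statement about any real model, and this sketch assumes, as the template comments suggest, that each selected choice is recorded with its own DOC.set_value call.
# Hypothetical illustration only (placeholder species, not a real model configuration).
# Assumes each selected ENUM choice is supplied via its own DOC.set_value call:
#     DOC.set_value("HOx")
#     DOC.set_value("NOy")
#     DOC.set_value("Ox")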
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
1,864 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TimML Notebook 1
A well in uniform flow
Consider a well in the middle aquifer of a three aquifer system. Aquifer properties are given in Table 1. The well is located at $(x,y)=(0,0)$, the discharge is $Q=10,000$ m$^3$/d and the radius is 0.2 m. There is a uniform flow from West to East with a gradient of 0.002. The head is fixed to 20 m at a distance of 10,000 m downstream of the well. Here is the cookbook recipe to build this model
Step1: Questions
Step2: Exercise 1b
What is the head at the well?
Step3: Exercise 1c
Create a contour plot of the head in the three aquifers. Use a window with lower left hand corner $(x,y)=(−3000,−3000)$ and upper right hand corner $(x,y)=(3000,3000)$. Notice that the heads in the three aquifers are almost equal at three times the largest leakage factor.
Step4: Exercise 1d
Create a contour plot of the head in aquifer 1 with labels along the contours. Labels are added when the labels keyword argument is set to True. The number of decimal places can be set with the decimals keyword argument, which is zero by default.
Step5: Exercise 1e
Create a contour plot with a vertical cross-section below it. Start three pathlines from $(x,y)=(-2000,-1000)$ at levels $z=-120$, $z=-60$, and $z=-10$. Try a few other starting locations.
Step6: Exercise 1f
Add an abandoned well that is screened in both aquifer 0 and aquifer 1, located at $(x, y) = (100, 100)$ and create contour plot of all aquifers near the well (from (-200,-200) till (200,200)). What are the discharge and the head at the abandoned well? Note that you have to solve the model again! | Python Code:
%matplotlib inline
from pylab import *
from timml import *
figsize=(8, 8)
ml = ModelMaq(kaq=[10, 20, 5],
z=[0, -20, -40, -80, -90, -140],
c=[4000, 10000])
w = Well(ml, xw=0, yw=0, Qw=10000, rw=0.2, layers=1)
Constant(ml, xr=10000, yr=0, hr=20, layer=0)
Uflow(ml, slope=0.002, angle=0)
ml.solve()
Explanation: TimML Notebook 1
A well in uniform flow
Consider a well in the middle aquifer of a three aquifer system. Aquifer properties are given in Table 1. The well is located at $(x,y)=(0,0)$, the discharge is $Q=10,000$ m$^3$/d and the radius is 0.2 m. There is a uniform flow from West to East with a gradient of 0.002. The head is fixed to 20 m at a distance of 10,000 m downstream of the well. Here is the cookbook recipe to build this model:
Import pylab to use numpy and plotting: from pylab import *
Set figures to be in the notebook with %matplotlib notebook
Import everything from TimML: from timml import *
Create the model and give it a name, for example ml with the command ml = ModelMaq(kaq, z, c) (substitute the correct lists for kaq, z, and c).
Enter the well with the command w = Well(ml, xw, yw, Qw, rw, layers), where the well is called w.
Enter uniform flow with the command Uflow(ml, slope, angle).
Enter the reference head with Constant(ml, xr, yr, head, layer).
Solve the model ml.solve()
Table 1: Aquifer data for exercise 1
|Layer        |$k$ (m/d)|$z_b$ (m)|$z_t$ (m)|$c$ (days)|
|-------------|--------:|--------:|----:|---------:|
|Aquifer 0 | 10 | -20 | 0 | - |
|Leaky Layer 1| - | -40 | -20 | 4000 |
|Aquifer 1 | 20 | -80 | -40 | - |
|Leaky Layer 2| - | -90 | -80 | 10000 |
|Aquifer 2    | 5       | -140    | -90 |     -    |
End of explanation
print('The leakage factors of the aquifers are:')
print(ml.aq.lab)
Explanation: Questions:
Exercise 1a
What are the leakage factors of the aquifer system?
End of explanation
print('The head at the well is:')
print(w.headinside())
Explanation: Exercise 1b
What is the head at the well?
End of explanation
ml.contour(win=[-3000, 3000, -3000, 3000], ngr=50, layers=[0, 1, 2], levels=10,
legend=True, figsize=figsize)
Explanation: Exercise 1c
Create a contour plot of the head in the three aquifers. Use a window with lower left hand corner $(x,y)=(−3000,−3000)$ and upper right hand corner $(x,y)=(3000,3000)$. Notice that the heads in the three aquifers are almost equal at three times the largest leakage factor.
End of explanation
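As a quick numerical check of the statement in Exercise 1c (a sketch only, assuming ml.head(x, y) returns the head in every aquifer and that ml.aq.lab holds the leakage factors printed earlier):
# Sample the heads at roughly three times the largest leakage factor from the well;
# the three values printed should be nearly equal at that distance.
r3 = 3 * max(ml.aq.lab)
print('distance from the well:', r3)
print('heads in the three aquifers:', ml.head(r3, 0))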
ml.contour(win=[-3000, 3000, -3000, 3000], ngr=50, layers=[1], levels=np.arange(30, 45, 1),
labels=True, legend=['layer 1'], figsize=figsize)
Explanation: Exercise 1d
Create a contour plot of the head in aquifer 1 with labels along the contours. Labels are added when the labels keyword argument is set to True. The number of decimal places can be set with the decimals keyword argument, which is zero by default.
End of explanation
win=[-3000, 3000, -3000, 3000]
ml.plot(win=win, orientation='both', figsize=figsize)
ml.tracelines(-2000 * ones(3), -1000 * ones(3), [-120, -60, -10], hstepmax=50,
win=win, orientation='both')
ml.tracelines(0 * ones(3), 1000 * ones(3), [-120, -50, -10], hstepmax=50,
win=win, orientation='both')
Explanation: Exercise 1e
Create a contour plot with a vertical cross-section below it. Start three pathlines from $(x,y)=(-2000,-1000)$ at levels $z=-120$, $z=-60$, and $z=-10$. Try a few other starting locations.
End of explanation
ml = ModelMaq(kaq=[10, 20, 5],
z=[0, -20, -40, -80, -90, -140],
c=[4000, 10000])
w = Well(ml, xw=0, yw=0, Qw=10000, rw=0.2, layers=1)
Constant(ml, xr=10000, yr=0, hr=20, layer=0)
Uflow(ml, slope=0.002, angle=0)
wabandoned = Well(ml, xw=100, yw=100, Qw=0, rw=0.2, layers=[0, 1])
ml.solve()
ml.contour(win=[-200, 200, -200, 200], ngr=50, layers=[0, 1, 2],
levels=20, color=['C0', 'C1', 'C2'], legend=True, figsize=figsize)
print('The head at the abandoned well is:')
print(wabandoned.headinside())
print('The discharge at the abandoned well is:')
print(wabandoned.discharge())
Explanation: Exercise 1f
Add an abandoned well that is screened in both aquifer 0 and aquifer 1, located at $(x, y) = (100, 100)$ and create contour plot of all aquifers near the well (from (-200,-200) till (200,200)). What are the discharge and the head at the abandoned well? Note that you have to solve the model again!
End of explanation |
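As an extra sanity check (a sketch; it assumes discharge() returns the per-layer discharges as an array), the abandoned well only redistributes water between aquifers 0 and 1, so its layer discharges should sum to approximately zero:
# With Qw=0 the abandoned well takes water from one aquifer and returns it to the other,
# so the sum of its per-layer discharges should be (approximately) zero.
print('Net discharge of the abandoned well:', wabandoned.discharge().sum())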
1,865 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Around 25 minutes into this lecture, there is some good discussion of the PageRank algorithm. I have always wanted to code up a basic version of this algorithm, so this is a great excuse. This algorithm is probably one of the cleanest examples of Markov Chains that I have seen, and obviously its application was quite successful.
<!-- TEASER_END -->
Step1: Building the Matrix
To build the link matrix (basically an adjacency matrix for web pages), we need to look at the links referenced by every single page. For every page referenced by a page, we will add a 1 to the associated column. Adding a small term eps to all entries, in order to guarantee the matrix is fully connected, we will then have a stochastic matrix which is suitable for Markov chain simulations!
Step2: Turn the Beat Around
Now that the PageRank for each page is calculated, how can we actually perform a search?
We simply need to create an index of every word in a page. When we search for words, we will then sort the output by the PageRank of those pages, thus ordering the links by the importance we associated with that page.
Step3: Ranking The Results
With words indexed, we can now complete the task. Searching for a particular word (in this case, 'film'), we get back all the pages with references and counts. Sorting these so that the highest pagerank comes first, we see the Googley(TM) result for our tiny web. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from utils import progress_bar_downloader
import os
pages_link = 'http://www.cs.ubc.ca/~nando/340-2009/lectures/pages.zip'
dlname = 'pages.zip'
#This will unzip into a directory called pages
if not os.path.exists('./%s' % dlname):
progress_bar_downloader(pages_link, dlname)
os.system('unzip %s' % dlname)
else:
print('%s already downloaded!' % dlname)
Explanation: Around 25 minutes into this lecture, there is some good discussion of the PageRank algorithm. I have always wanted to code up a basic version of this algorithm, so this is a great excuse. This algorithm is probably one of the cleanest examples of Markov Chains that I have seen, and obviously its application was quite successful.
<!-- TEASER_END -->
End of explanation
#Quick and dirty link parsing as per http://www.cs.ubc.ca/~nando/540b-2011/lectures/book540.pdf
links = {}
for fname in os.listdir(dlname[:-4]):
links[fname] = []
f = open(dlname[:-4] + '/' + fname)
for line in f.readlines():
while True:
p = line.partition('<a href="http://')[2]
if p == '':
break
url, _, line = p.partition('\">')
links[fname].append(url)
f.close()
import numpy as np
import matplotlib.pyplot as plt
num_pages = len(links.keys())
G = np.zeros((num_pages, num_pages))
#Assign identity numbers to each page, along with a reverse lookup
idx = {}
lookup = {}
for n,k in enumerate(sorted(links.keys())):
idx[k] = n
lookup[n] = k
#Go through all keys, and add a 1 for each link to another page
for k in links.keys():
v = links[k]
for e in v:
G[idx[k],idx[e]] = 1
#Add a small value (epsilon) to ensure a fully connected graph
eps = 1. / num_pages
G += eps * np.ones((num_pages, num_pages))
G = G / np.sum(G, axis=1, keepdims=True)  # make G row-stochastic: each row must sum to 1 for the p.dot(G) update
#Now we run the Markov Chain until it converges from random initialization
init = np.random.rand(1, num_pages)
init = init / np.sum(init)
probs = [init]
p = init
for i in range(100):
p = np.dot(p, G)
probs.append(p)
for i in range(num_pages):
plt.plot([step[0, i] for step in probs], label=lookup[i], lw=2)
plt.legend()
Explanation: Building the Matrix
To build the link matrix (basically an adjacency matrix for web pages), we need to look at the links referenced by every single page. For every page referenced by a page, we will add a 1 to the associated column. Adding a small term eps to all entries, in order to guarantee the matrix is fully connected, we will then have a stochastic matrix which is suitable for Markov chain simulations!
End of explanation
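Since the converged probabilities are the stationary distribution of the chain, they can be cross-checked against the dominant left eigenvector of G (a sketch, using numpy only):
# Cross-check: the stationary distribution is the left eigenvector of G for eigenvalue 1.
evals, evecs = np.linalg.eig(G.T)
stationary = np.real(evecs[:, np.argmax(np.real(evals))])
stationary = stationary / stationary.sum()
print('max |difference| vs. power iteration:',
      np.max(np.abs(stationary - probs[-1].ravel())))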
search = {}
for fname in os.listdir(dlname[:-4]):
f = open(dlname[:-4] + '/' + fname)
for line in f.readlines():
#Ignore header lines
if '<' in line or '>' in line:
continue
words = line.strip().split(' ')
words = filter(lambda x: x != '', words)
#Remove references like [1], [2]
words = filter(lambda x: not ('[' in x or ']' in x), words)
for word in words:
if word in search:
if fname in search[word]:
search[word][fname] += 1
else:
search[word][fname] = 1
else:
search[word] = {fname: 1}
f.close()
Explanation: Turn the Beat Around
Now that the PageRank for each page is calculated, how can we actually perform a search?
We simply need to create an index of every word in a page. When we search for words, we will then sort the output by the PageRank of those pages, thus ordering the links by the importance we associated with that page.
End of explanation
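If desired, the per-page word counts can also be folded into the ordering instead of relying on PageRank alone; the weighting below is an arbitrary illustrative choice, not part of the original algorithm.
# Optional: combine term frequency with PageRank into a single relevance score.
def combined_score(word, fname):
    count = search.get(word, {}).get(fname, 0)
    return count * probs[-1][0, idx[fname]]

# e.g. sorted(search['film'], reverse=True, key=lambda f: combined_score('film', f))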
def get_pr(fname):
return probs[-1][0, idx[fname]]
r = search['film']
print(sorted(r, reverse=True, key=get_pr))
Explanation: Ranking The Results
With words indexed, we can now complete the task. Searching for a particular word (in this case, 'film'), we get back all the pages with references and counts. Sorting these so that the highest pagerank comes first, we see the Googley(TM) result for our tiny web.
End of explanation |
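A natural extension, not covered in the original write-up, is a multi-word query that keeps only the pages containing every term and then orders them by PageRank (a sketch):
# Sketch: rank only the pages that contain all of the query words.
def multi_search(*words):
    pages = set(search.get(words[0], {}))
    for w in words[1:]:
        pages &= set(search.get(w, {}))
    return sorted(pages, reverse=True, key=get_pr)

# e.g. multi_search('film', 'actor')  # word choice here is arbitrary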
1,866 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Processing Steps
Determine noise parameters of noise model
This will be based on <a href="http://www.cs.tut.fi/~foi/papers/Foi-PoissonianGaussianClippedRaw-2007-IEEE_TIP.pdf">this paper</a>. I will be using the <a href="">ClipPoisGaus_stdEst2D</a> code provided by the authors.
Step1: First we will take a look at the fluorescence "base line"
Step2: Now compare the fluorescence of a regular signal to its relative fluorescence
Step3: Now lets compare the maximum of each individual time signal within the relative fluorescences with the maximum of the regular data and the signal mean | Python Code:
%matplotlib inline
from load_environment import * # python file with imports and basics to set up this computing environment
Explanation: Processing Steps
Determine noise parameters of noise model
This will be based on <a href="http://www.cs.tut.fi/~foi/papers/Foi-PoissonianGaussianClippedRaw-2007-IEEE_TIP.pdf">this paper</a>. I will be using the <a href="">ClipPoisGaus_stdEst2D</a> code provided by the authors.
Spatial filtering with (block matching 3d filter) OR (extended empirical orthogonal functions)
Improve the temporal signal (Extended Kalman Filter)
Determine the Optical Flow with Farneback's algorithm (comparing polynomial approximations)
Notes
Choose the number of principle components to keep, $k$, based on how much variance, $\alpha$, we want to maintain. For example $$\frac{\sum_{i=1}^{k}\lambda_i}{\sum_{i=1}^{N}\lambda_i} \ge \alpha$$
Use Noise Assisted Data Analysis (NADA) techniques. Similar to what was done at COAPS. Come up with an approximation to the magnitude of the noise in the signal (talk to James) then add thousands of noisy images to the original data and average them out.
Keep in mind the notion of 'relative fluorescence'. Average the first 32 frames where no stimulus is being applied to approximate the "base line" for the fluorescence, $f_0$. i.e. $$f_0 = \frac{1}{32}\sum_{i=1 }^{32}f_i$$ Then the relative fluorescence for the k$^{th}$ frame will be $$\frac{\Delta f_k}{f} = \frac{f_k-f_0}{f_0}$$
If the sampling frequency is at least twice the highest frequency in the signal, the sampling theorem says no aliasing will occur
60Hz noise is common in data obtained from electronic sensors being powered by a 60 Hz alternating current
Ask about the "Figure of Merit" and the "Recording Efficiency" for the Prairie two-photon microscope
Whitening makes the autocorrelation of the signal "narrower". This can help to localize in time. However it may also reduce (make worse) the SNR
SNR $= \frac{P_s}{P_n} = \bigg(\frac{A_s}{A_n}\bigg)^2$ where $P$ is the power and $A$ is the amplitude. Use the RMS of the powers or amplitudes to determine the single coefficient.
To determine the Astrocyte morphology find the pixel-wise maximum of the relative fluorescence $\Delta f_k$
Karhunen-Loève Theorem (KL): Building blocks for all statistical decomposition techniques
Karhunen-Loève Transform (KLT) decorrelates the signal
The Karhunen–Loève expansion minimizes the total mean square error
Principle Components Analysis
Developed by Harold Hotelling in 1933
Discrete analogue to the KLT
Favored because it reduces to a simple numerical eigen value problem
The total variance is equal to the sum of the eigenvalues
KL is known by many names, PCA, Proper Orthogonal Decomposition (POD), Empirical Orthogonal Functions (EOF)
Karhunen–Loève expansion is closely related to the Singular Value Decomposition. If one has independent vector observations from a vector valued stochastic process then the left singular vectors are maximum likelihood estimates of the ensemble KL expansion
Unlike the PCA, the EOF finds both spatial AND temporal patterns
Some Examples Demonstrating Concepts Above
End of explanation
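The variance-retention rule in the notes above translates directly into code; a minimal sketch (it assumes eigvals is a 1-D array of PCA eigenvalues sorted in descending order, and that numpy is available as np, as it is in the cells below):
# Pick the smallest k whose leading eigenvalues explain at least a fraction alpha of the variance.
def choose_k(eigvals, alpha=0.95):
    explained = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(explained, alpha) + 1)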
f0 = np.average(data[:32], axis=0)
plt.imshow(f0); plt.title("Average of First 32 Frames"); plt.show()
Explanation: First we will take a look at the fluorescence "base line"
End of explanation
plt.subplot(121)
f41 = data[41]
plt.imshow(f41); plt.title("Unprocessed Fluorescence")
plt.subplot(122)
plt.imshow((f41-f0)/f0); plt.title("Relative Fluorescence"); plt.show()
Explanation: Now compare the fluorescence of a regular signal to its relative fluorescence
End of explanation
cpy = data[32:].copy() # Not to ruin future experiments on the original data
plt.subplot(131); plt.imshow(cpy.max(axis=0)); plt.title("Maximum F");
for f in cpy:
    f[:] = (f - f0) / f0  # update in place so cpy really becomes the relative fluorescence
maxFluorescence = cpy.max(axis=0)
plt.subplot(132); plt.imshow(maxFluorescence); plt.title("Maximum $\Delta F$");
plt.subplot(133); plt.imshow(np.average(cpy, axis=0)); plt.title("Temporal Mean"); plt.show()
Explanation: Now let's compare the maximum of each individual time signal within the relative fluorescences with the maximum of the regular data and the signal mean
End of explanation |
1,867 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Taxi fare prediction using the Chicago Taxi Trips dataset
Table of contents
Overview
Dataset
Objective
Costs
Data analysis
Fit a simple linear regression model
Save the model and upload to a Cloud Storage bucket
Deploy the model on Vertex AI with support for Vertex Explainable AI
Get explanations from the deployed model
Clean up
Overview
<a name="section-1"></a>
This notebook demonstrates analysis, feature selection, model building, and deployment with Vertex Explainable AI configured on Vertex AI, using a subset of the Chicago Taxi Trips dataset for taxi-fare prediction.
Note: This notebook file was developed to run in a Vertex AI Workbench managed notebooks instance using the Python (Local) kernel. Some components of this notebook may not work in other notebook environments.
Step1: Otherwise, set your project ID here.
Step2: Select or create a Cloud Storage bucket for storing the model
When you create a model resource on Vertex AI using the Cloud SDK, you need to give a Cloud Storage bucket uri of the model where the model is stored. Using the model saved, you can then create a Vertex AI model and endpoint resources in order to serve online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the LOCATION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are available.
Step3: <b>Only if your bucket doesn't already exist</b>
Step4: Next, validate access to your Cloud Storage bucket by examining its contents
Step5: Import the required libraries and define constants
Step7: The dataset is quite a large and noisy one, so data from a specific date range will be used. Based on various blogs and resources that are available online, many of them seem to have used the data from around May 2018 which gave some really good results compared to the other date ranges. While there are also some complicated research models proposed for the same problem, like considering the weather data, holidays and seasons, the current notebook only explores a simple linear regression model, as our main objective is to demonstrate the model deployment with Vertex Explainable AI configured on Vertex AI.
Accessing the data through "BigQuery in Notebooks"
The "BigQuery in Notebooks" feature of Vertex AI Workbench managed notebooks lets you use BigQuery and its features from the notebook itself eliminating the need to switch between tabs everytime. For every cell in the notebook, there is an option for the BigQuery integration at the top right, and selecting it enables you to compose an SQL query that can be executed in BigQuery.
The chosen dataset consists of the following fields
Step8: Check the fields in the data and their shape.
Step9: Check some sample data.
Step10: Check the dtypes of fields in the data.
Step11: Check for null values in the dataframe.
Step12: Depending on the percentage of null values in the data, one can choose to either drop them or impute them with mean/median (for numerical values) and mode (for categorical values). In the current data, there doesn't seem to be any null values.
Check the numerical distributions of the fields (numerical). In case there are any fields with constant values, those fields can be dropped as they don't add any value to the model.
Step13: In the current dataset, trip_total is the target field. To access the fields by their type easily, identify the categorical and numerical fields in the data and save them.
Step14: Analyze numerical data
<a name="section-5"></a>
To further analyze the data, there are various plots that can be used on numerical and categorical fields. In case of numerical data, one can use histograms and box plots, while bar charts are suited for categorical data to better understand the distribution of the data and the outliers in the data.
Plot histograms and box plots on the numerical fields.
Step15: The field trip_seconds describes the time taken for the trip in seconds. Optionally, it can be converted into hours.
Step16: Similarly, another field trip_speed can be added by dividing trip_miles and trip_hours to understand the speed of the trip in miles/hour.
Step17: So far you've only looked at the univariate plots. To better understand the relationship between the variables, a pair-plot can be plotted.
Step18: From the box plots and the histograms visualized so far, it is evident that there are some outliers causing skewness in the data which perhaps could be removed. Also, you can see some linear relationships between the independent variables considered in the pair-plot, for example, trip_seconds and trip_miles and the dependant variable trip_total.
Restrict the data based on the following conditions to remove the outliers in the data to some extent
Step19: Analyze categorical data
Further, explore the categorical data by plotting the distribution of all the levels in each field.
Step20: From the above analysis, one can see that almost 99% of the transaction types are Cash and Credit Card. While there are also other type of transactions, their distribution is negligible. In such a case, the lower distribution levels can be dropped. On the other hand, the total number of pickup and dropoff community areas both seem to have the same levels which make sense. In this case also, one can choose to omit the lower distribution levels but you'd have to make sure that both the fields have the same levels afterward. In the current notebook, keep them as is and proceed with the modeling.
The relationships between the target variable and the categorical fields can be represented through box plots. For each level, the corresponding distribution of the target variable can be identified.
Step21: There seems to be one case where the trip_total is over 3000 and has the same pickup and dropoff community area
Step22: Keep only the Credit Card and Cash payment types. Further, encode them by assigning 0 for Credit Card and 1 for Cash payment types.
Step23: There are also useful timestamp fields in the data. trip_start_timestamp represents the start timestamp of the taxi trip and fields like what day of week it was and what hour it was can be derived from it.
Step24: Since the current dataset is limited to only a week, if there isn't much variation in the newly derived fields with respect to the target variable, they can be dropped.
Plot sum and average of the trip_total with respect to the dayofweek.
Step25: Plot sum and average of the trip_total with respect to the hour.
Step26: As these plots don't seem to have constant figures with respect to the target variable across their levels, they can be considered for training. In fact, to simplify things these derived features can be bucketed into fewer levels.
The dayofweek field can be bucketed into a binary field considering whether or not it was a weekend. If it is a weekday, the record can be assigned 1, else 0. Similarly, the hour field can also be bucketed and encoded. The normal working hours in Chicago can be assumed to be between 8AM-10PM and if the value falls in between the working hours, it can be encoded as 1, else 0.
Step27: Check the data distribution before training the model.
Step28: Divide the data into train and test sets
Split the preprocessed dataset into train and test sets so that the linear regression model can be validated on the test set.
Step29: Fit a simple linear regression model
<a name="section-6"></a>
Fit a linear regression model using scikit-learn's LinearRegression method on the train data.
Step30: Print the R2 score and RMSE values for the model on train and test sets.
Step31: A low RMSE error and a train and test R2 score of 0.93 suggests that the model is fitted well. Further, the coefficients learned by the model for each of its independent variables can also be checked by checking the coef_ attribute of the sklearn model.
Check the coefficients learned by the model.
Step32: Save the model and upload to a Cloud Storage bucket
<a name="section-7"></a>
To deploy the model on Vertex AI, the model needs to be stored in a Cloud Storage bucket first.
Step33: Deploy the model on Vertex AI with support for Vertex Explainable AI
<a name="section-8"></a>
Configure Vertex Explainable AI before deploying the model. For further details, see Configuring Vertex Explainable AI in Vertex AI models.
Step34: Create a model resource from the uploaded model with explanation metadata configured.
Step35: Create an Endpoint resource for the model.
Step36: Save the Endpoint Id for inference.
Step37: Deploy the model to the created endpoint with the required machine type.
Step38: Save the ID of the deployed model. The ID of the deployed model can also checked using the endpoint.list_models() method.
Step39: Get explanations from the deployed model
<a name="section-9"></a>
For testing the deployed online model, select two instances from the test data as payload.
Step42: Call the endpoint with the payload request and parse the response for explanations. The explanations consists of attributions on the independent variables used for training the model which are based on the configured attribution method. In this case, we've used the Sampled Shapely method which assigns credit for the outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapely values. Further information on the attribution methods for explanations can be found at Overview of Explainable AI.
Step43: Next steps
Since the Chicago Taxi Trips dataset is continuously updating, one can perform the same kind of analysis and model training every time a new set of data is available. The date range can also be increased from a week to a month or more depending on the quality of the data. Most of the steps followed in this notebook would still be valid and can be applied over the new data unless the data is too noisy. Perhaps the notebook itself can be scheduled to run at the specified times to retrain the model using the scheduling option of Vertex AI Workbench's executor.
Clean up
<a name="section-10"></a>
Delete the resources created in this notebook.
Undeploy the model by specifying the DEPLOYED_MODEL_ID.
Step44: Delete the endpoint resource.
Step45: Delete the model resource.
Step46: Remove the contents of the created Cloud Storage bucket. | Python Code:
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Taxi fare prediction using the Chicago Taxi Trips dataset
Table of contents
Overview
Dataset
Objective
Costs
Data analysis
Fit a simple linear regression model
Save the model and upload to a Cloud Storage bucket
Deploy the model on Vertex AI with support for Vertex Explainable AI
Get explanations from the deployed model
Clean up
Overview
<a name="section-1"></a>
This notebook demonstrates analysis, feature selection, model building, and deployment with Vertex Explainable AI configured on Vertex AI, using a subset of the Chicago Taxi Trips dataset for taxi-fare prediction.
Note: This notebook file was developed to run in a Vertex AI Workbench managed notebooks instance using the Python (Local) kernel. Some components of this notebook may not work in other notebook environments.
Dataset
<a name="section-2"></a>
The Chicago Taxi Trips dataset includes taxi trips from 2013 to the present, reported to the city of Chicago in its role as a regulatory agency. To protect privacy but allow for aggregate analyses, the taxi ID is consistent for any given taxi medallion number but does not show the number, census tracts are suppressed in some cases, and times are rounded to the nearest 15 minutes. Due to the data reporting process, not all trips are reported but the city believes that most are. This dataset is publicly available on BigQuery as a public dataset with the table ID bigquery-public-data.chicago_taxi_trips.taxi_trips and also as a public dataset on Kaggle at Chicago Taxi Trips.
For more information about this dataset and how it was created, see the Chicago Digital website.
Objective
<a name="section-3"></a>
The goal of this notebook is to provide an overview on the latest Vertex AI features like Explainable AI and "BigQuery in Notebooks" by trying to solve a taxi fare prediction problem. The steps followed in this notebook include:
Loading the dataset using "BigQuery in Notebooks".
Performing exploratory data analysis on the dataset.
Feature selection and preprocessing.
Building a linear regression model using scikit-learn.
Configuring the model for Vertex Explainable AI.
Deploying the model to Vertex AI.
Testing the deployed model.
Clean up.
Costs
<a name="section-4"></a>
This tutorial uses the following billable components of Google Cloud:
Vertex AI
BigQuery
Cloud Storage
Learn about Vertex AI
pricing, BigQuery pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Before you begin
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
Explanation: Otherwise, set your project ID here.
End of explanation
BUCKET_NAME = "[your-bucket-name]"
BUCKET_URI = f"gs://{BUCKET_NAME}"
LOCATION = "us-central1"
from datetime import datetime
# Set a default bucket name in case bucket name is not given
if BUCKET_NAME == "" or BUCKET_NAME == "[your-bucket-name]" or BUCKET_NAME is None:
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
BUCKET_URI = "gs://" + BUCKET_NAME
Explanation: Select or create a Cloud Storage bucket for storing the model
When you create a model resource on Vertex AI using the Cloud SDK, you need to give a Cloud Storage bucket uri of the model where the model is stored. Using the model saved, you can then create a Vertex AI model and endpoint resources in order to serve online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the LOCATION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are available.
End of explanation
! gsutil mb -l $LOCATION $BUCKET_URI
Explanation: <b>Only if your bucket doesn't already exist</b>: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Next, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import matplotlib.pyplot as plt
# load the required libraries
import pandas as pd
import seaborn as sns
%matplotlib inline
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
Explanation: Import the required libraries and define constants
End of explanation
# The following two lines are only necessary to run once.
# Comment out otherwise for speed-up.
from google.cloud.bigquery import Client
client = Client()
query = """
select
    taxi_id, trip_start_timestamp,
    trip_seconds, trip_miles, trip_total,
    payment_type, pickup_community_area,
    dropoff_community_area
from `bigquery-public-data.chicago_taxi_trips.taxi_trips`
where
    trip_start_timestamp >= '2018-05-12' and
    trip_end_timestamp <= '2018-05-18' and
    trip_seconds > 0 and trip_seconds < 6*60*60 and
    trip_miles > 0 and
    trip_total > 3 and
    pickup_community_area is not NULL and
    dropoff_community_area is not NULL
"""
job = client.query(query)
df = job.to_dataframe()
Explanation: The dataset is quite a large and noisy one, so data from a specific date range will be used. Based on various blogs and resources that are available online, many of them seem to have used the data from around May 2018 which gave some really good results compared to the other date ranges. While there are also some complicated research models proposed for the same problem, like considering the weather data, holidays and seasons, the current notebook only explores a simple linear regression model, as our main objective is to demonstrate the model deployment with Vertex Explainable AI configured on Vertex AI.
Accessing the data through "BigQuery in Notebooks"
The "BigQuery in Notebooks" feature of Vertex AI Workbench managed notebooks lets you use BigQuery and its features from the notebook itself eliminating the need to switch between tabs everytime. For every cell in the notebook, there is an option for the BigQuery integration at the top right, and selecting it enables you to compose an SQL query that can be executed in BigQuery.
The chosen dataset consists of the following fields:
unique_key : Unique identifier for the trip.
taxi_id : A unique identifier for the taxi.
trip_start_timestamp: When the trip started, rounded to the nearest 15 minutes.
trip_end_timestamp: When the trip ended, rounded to the nearest 15 minutes.
trip_seconds: Time of the trip in seconds.
trip_miles: Distance of the trip in miles.
pickup_census_tract: The Census Tract where the trip began. For privacy, this Census Tract is not shown for some trips.
dropoff_census_tract: The Census Tract where the trip ended. For privacy, this Census Tract is not shown for some trips.
pickup_community_area: The Community Area where the trip began.
dropoff_community_area: The Community Area where the trip ended.
fare: The fare for the trip.
tips: The tip for the trip. Cash tips generally will not be recorded.
tolls: The tolls for the trip.
extras: Extra charges for the trip.
trip_total: Total cost of the trip, the total of the fare, tips, tolls, and extras.
payment_type: Type of payment for the trip.
company: The taxi company.
pickup_latitude: The latitude of the center of the pickup census tract or the community area if the census tract has been hidden for privacy.
pickup_longitude: The longitude of the center of the pickup census tract or the community area if the census tract has been hidden for privacy.
pickup_location: The location of the center of the pickup census tract or the community area if the census tract has been hidden for privacy.
dropoff_latitude: The latitude of the center of the dropoff census tract or the community area if the census tract has been hidden for privacy.
dropoff_longitude: The longitude of the center of the dropoff census tract or the community area if the census tract has been hidden for privacy.
dropoff_location: The location of the center of the dropoff census tract or the community area if the census tract has been hidden for privacy.
Among the available fields in the dataset, only the fields that seem common and relevant for analysis and modeling like taxi_id, trip_start_timestamp, trip_seconds, trip_miles, payment_type and trip_total are selected. Further, the field trip_total is treated as the target variable that would be predicted by the machine learning model. Apparently, this field is a summation of the fare,tips,tolls and extras fields and so because of their correlation with the target variable, they are being excluded for modeling. Due to the volume of the data, a subset of the dataset over the course of one week, 12-May-2018 to 18-May-2018 is being considered. Within this date range itself, the datapoints can be noisy and so a few conditions like the following are considered:
Time taken for the trip > 0.
Distance covered during the trip > 0.
Total trip charges > 0 and
Pickup and dropoff areas are valid (not empty).
@bigquery
select
-- select the required fields
taxi_id, trip_start_timestamp,
trip_seconds, trip_miles, trip_total,
payment_type
from bigquery-public-data.chicago_taxi_trips.taxi_trips
where
-- specify the required criteria
trip_start_timestamp >= '2018-05-12' and
trip_end_timestamp <= '2018-05-18' and
trip_seconds > 0 and
trip_miles > 0 and
trip_total > 3 and
pickup_community_area is not NULL and
dropoff_community_area is not NULL
The BigQuery integration also lets you load the queried data into a pandas dataframe using the Query and load as DataFrame button. Clicking the button adds a new cell below that provides a code snippet to load the data into a dataframe.
End of explanation
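For reference, the same result can also be obtained with the BigQuery cell magic that ships with the google-cloud-bigquery client; it is shown here only as commented-out lines, since cell magics cannot be mixed into a regular code cell, and the variable name df_magic is an arbitrary choice.
# Alternative to the client-based load above: the BigQuery cell magic writes the
# query result straight into a DataFrame variable.
#
#   %load_ext google.cloud.bigquery        # run once, in its own cell
#
#   %%bigquery df_magic
#   select taxi_id, trip_start_timestamp, trip_seconds, trip_miles, trip_total, payment_type
#   from `bigquery-public-data.chicago_taxi_trips.taxi_trips`
#   where trip_start_timestamp >= '2018-05-12' and trip_end_timestamp <= '2018-05-18'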
# check the dataframe's shape
print(df.shape)
# check the columns in the dataframe
df.columns
Explanation: Check the fields in the data and their shape.
End of explanation
df.head()
Explanation: Check some sample data.
End of explanation
df.dtypes
Explanation: Check the dtypes of fields in the data.
End of explanation
df.info()
Explanation: Check for null values in the dataframe.
End of explanation
df.describe().T
Explanation: Depending on the percentage of null values in the data, one can choose to either drop them or impute them with mean/median (for numerical values) and mode (for categorical values). In the current data, there doesn't seem to be any null values.
Check the numerical distributions of the fields (numerical). In case there are any fields with constant values, those fields can be dropped as they don't add any value to the model.
End of explanation
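Had any nulls been present, a simple imputation pass along the lines described above might look like this (a sketch only; with no missing values in this data the loop is a no-op):
# Illustrative only: median for numeric columns, mode for everything else.
for col in df.columns:
    if df[col].isnull().any():
        if df[col].dtype.kind in "iuf":
            df[col] = df[col].fillna(df[col].median())
        else:
            df[col] = df[col].fillna(df[col].mode()[0])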
target = "trip_total"
categ_cols = ["payment_type", "pickup_community_area", "dropoff_community_area"]
num_cols = ["trip_seconds", "trip_miles"]
Explanation: In the current dataset, trip_total is the target field. To access the fields by their type easily, identify the categorical and numerical fields in the data and save them.
End of explanation
for i in num_cols + [target]:
_, ax = plt.subplots(1, 2, figsize=(12, 4))
df[i].plot(kind="hist", bins=100, ax=ax[0])
ax[0].set_title(str(i) + " -Histogram")
df[i].plot(kind="box", ax=ax[1])
ax[1].set_title(str(i) + " -Boxplot")
plt.show()
Explanation: Analyze numerical data
<a name="section-5"></a>
To further analyze the data, there are various plots that can be used on numerical and categorical fields. In case of numerical data, one can use histograms and box plots, while bar charts are suited for categorical data to better understand the distribution of the data and the outliers in the data.
Plot histograms and box plots on the numerical fields.
End of explanation
df["trip_hours"] = round(df["trip_seconds"] / 3600, 2)
df["trip_hours"].plot(kind="box")
Explanation: The field trip_seconds describes the time taken for the trip in seconds. Optionally, it can be converted into hours.
End of explanation
df["trip_speed"] = round(df["trip_miles"] / df["trip_hours"], 2)
df["trip_speed"].plot(kind="box")
Explanation: Similarly, another field trip_speed can be added by dividing trip_miles by trip_hours to understand the speed of the trip in miles/hour.
End of explanation
sns.pairplot(
data=df[["trip_seconds", "trip_miles", "trip_total", "trip_speed"]].sample(10000)
)
plt.show()
Explanation: So far you've only looked at the univariate plots. To better understand the relationship between the variables, a pair-plot can be plotted.
End of explanation
# set constraints to remove outliers
df = df[df["trip_total"] > 3]
df = df[(df["trip_miles"] > 0) & (df["trip_miles"] < 300)]
df = df[df["trip_seconds"] >= 60]
df = df[df["trip_hours"] <= 2]
df = df[df["trip_speed"] <= 70]
df.reset_index(drop=True, inplace=True)
df.shape
Explanation: From the box plots and the histograms visualized so far, it is evident that there are some outliers causing skewness in the data which could perhaps be removed. Also, you can see some linear relationships between the independent variables considered in the pair-plot, for example, trip_seconds and trip_miles, and the dependent variable trip_total.
Restrict the data based on the following conditions to remove the outliers in the data to some extent :
- Total charge being at least more than $3.
- Total miles driven greater than 0 and less than 300 miles.
- Total seconds driven at least 1 minute.
- Total hours driven not more than 2 hours.
- Speed of the trip not being more than 70 mph.
These conditions are based on some general assumptions as clearly there were some recording errors like speed being greater than 500 mph and travel-time being more than 5 hours that led to outliers in the data.
End of explanation
for i in categ_cols:
print(df[i].unique().shape)
df[i].value_counts(normalize=True).plot(kind="bar", figsize=(10, 4))
plt.title(i)
plt.show()
Explanation: Analyze categorical data
Further, explore the categorical data by plotting the distribution of all the levels in each field.
End of explanation
for i in categ_cols:
plt.figure(figsize=(10, 4))
sns.boxplot(x=i, y=target, data=df)
plt.xticks(rotation=45)
plt.title(i)
plt.show()
Explanation: From the above analysis, one can see that almost 99% of the transaction types are Cash and Credit Card. While there are also other types of transactions, their distribution is negligible. In such a case, the lower distribution levels can be dropped. On the other hand, the total number of pickup and dropoff community areas both seem to have the same levels, which makes sense. In this case also, one can choose to omit the lower distribution levels, but you'd have to make sure that both the fields have the same levels afterward. In the current notebook, keep them as is and proceed with the modeling.
The relationships between the target variable and the categorical fields can be represented through box plots. For each level, the corresponding distribution of the target variable can be identified.
End of explanation
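If one did decide to drop the rare community areas, the two fields could be kept consistent by restricting both to a shared set of levels. A sketch is left commented out below, because the notebook deliberately keeps all areas and the 0.1% cut-off is an arbitrary illustrative threshold.
# Not applied here -- keep only areas that are reasonably frequent as BOTH pickup and dropoff,
# so the two categorical fields end up with identical levels.
# pickup_freq = df["pickup_community_area"].value_counts(normalize=True)
# dropoff_freq = df["dropoff_community_area"].value_counts(normalize=True)
# common = set(pickup_freq[pickup_freq > 0.001].index) & set(dropoff_freq[dropoff_freq > 0.001].index)
# df = df[df["pickup_community_area"].isin(common) & df["dropoff_community_area"].isin(common)]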
df = df[df["trip_total"] < 3000].reset_index(drop=True)
Explanation: There is one case where the trip_total is over 3000 and the pickup and dropoff community areas are the same (area 28); it is clearly an outlier compared to the rest of the points. This datapoint can be removed.
End of explanation
# add payment_type
df = df[df["payment_type"].isin(["Credit Card", "Cash"])].reset_index(drop=True)
# encode the payment types
df["payment_type"] = df["payment_type"].apply(
lambda x: 0 if x == "Credit Card" else (1 if x == "Cash" else None)
)
Explanation: Keep only the Credit Card and Cash payment types. Further, encode them by assigning 0 for Credit Card and 1 for Cash payment types.
End of explanation
df["trip_start_timestamp"] = pd.to_datetime(df["trip_start_timestamp"])
df["dayofweek"] = df["trip_start_timestamp"].dt.dayofweek
df["hour"] = df["trip_start_timestamp"].dt.hour
Explanation: There are also useful timestamp fields in the data. trip_start_timestamp represents the start timestamp of the taxi trip and fields like what day of week it was and what hour it was can be derived from it.
End of explanation
# plot sum and average of trip_total w.r.t the dayofweek
_, ax = plt.subplots(1, 2, figsize=(10, 4))
df[["dayofweek", "trip_total"]].groupby("dayofweek").trip_total.sum().plot(
kind="bar", ax=ax[0]
)
ax[0].set_title("Sum of trip_total")
df[["dayofweek", "trip_total"]].groupby("dayofweek").trip_total.mean().plot(
kind="bar", ax=ax[1]
)
ax[1].set_title("Avg. of trip_total")
plt.show()
Explanation: Since the current dataset is limited to only a week, if there isn't much variation in the newly derived fields with respect to the target variable, they can be dropped.
Plot sum and average of the trip_total with respect to the dayofweek.
End of explanation
_, ax = plt.subplots(1, 2, figsize=(10, 4))
df[["hour", "trip_total"]].groupby("hour").trip_total.sum().plot(kind="bar", ax=ax[0])
ax[0].set_title("Sum of trip_total")
df[["hour", "trip_total"]].groupby("hour").trip_total.mean().plot(kind="bar", ax=ax[1])
ax[1].set_title("Avg. of trip_total")
plt.show()
Explanation: Plot sum and average of the trip_total with respect to the hour.
End of explanation
# bucket and encode the dayofweek and hour
df["dayofweek"] = df["dayofweek"].apply(lambda x: 0 if x in [5, 6] else 1)
df["hour"] = df["hour"].apply(lambda x: 0 if x in [23, 0, 1, 2, 3, 4, 5, 6, 7] else 1)
Explanation: Since these plots show that the target variable does vary across the levels of the derived fields, the fields are worth keeping for training. In fact, to simplify things, these derived features can be bucketed into fewer levels.
The dayofweek field can be bucketed into a binary field considering whether or not it was a weekend. If it is a weekday, the record can be assigned 1, else 0. Similarly, the hour field can also be bucketed and encoded. The normal working hours in Chicago can be assumed to be between 8AM-10PM and if the value falls in between the working hours, it can be encoded as 1, else 0.
End of explanation
df.describe().T
Explanation: Check the data distribution before training the model.
End of explanation
cols = [
"trip_seconds",
"trip_miles",
"payment_type",
"pickup_community_area",
"dropoff_community_area",
"dayofweek",
"hour",
"trip_speed",
]
x = df[cols].copy()
y = df[target].copy()
# split the data into 75-25% ratio
X_train, X_test, y_train, y_test = train_test_split(
x, y, train_size=0.75, test_size=0.25, random_state=13
)
X_train.shape, X_test.shape
Explanation: Divide the data into train and test sets
Split the preprocessed dataset into train and test sets so that the linear regression model can be validated on the test set.
End of explanation
# Building the regression model
reg = LinearRegression()
reg.fit(X_train, y_train)
Explanation: Fit a simple linear regression model
<a name="section-6"></a>
Fit a linear regression model using scikit-learn's LinearRegression method on the train data.
End of explanation
# print test R2 score
y_train_pred = reg.predict(X_train)
train_score = r2_score(y_train, y_train_pred)
train_rmse = mean_squared_error(y_train, y_train_pred, squared=False)
y_test_pred = reg.predict(X_test)
test_score = r2_score(y_test, y_test_pred)
test_rmse = mean_squared_error(y_test, y_test_pred, squared=False)
print("Train R2-score:", train_score, "Train RMSE:", train_rmse)
print("Test R2-score:", test_score, "Test RMSE:", test_rmse)
Explanation: Print the R2 score and RMSE values for the model on train and test sets.
End of explanation
coef_df = pd.DataFrame({"col": cols, "coeff": reg.coef_})
coef_df.set_index("col").plot(kind="bar")
Explanation: A low RMSE and train and test R2 scores of about 0.93 suggest that the model fits well. Further, the coefficients learned by the model for each of its independent variables can be inspected through the coef_ attribute of the sklearn model.
Check the coefficients learned by the model.
End of explanation
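# Optional sketch: refit on standardized features so the coefficient magnitudes
# are comparable across variables measured in different units. Reuses X_train,
# y_train and cols from the cells above.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

scaled_reg = make_pipeline(StandardScaler(), LinearRegression())
scaled_reg.fit(X_train, y_train)
std_coefs = pd.Series(scaled_reg.named_steps["linearregression"].coef_, index=cols)
std_coefs.sort_values().plot(kind="barh")
Explanation: An optional sketch for interpreting the coefficient plot: coefficients fitted on standardized features are easier to compare across variables with different units. This cell is an addition for illustration and is not part of the original notebook.
End of explanation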
import joblib
from google.cloud import storage
FILE_NAME = "model.joblib"
joblib.dump(reg, FILE_NAME)
# Upload the saved model file to Cloud Storage
BLOB_PATH = "taxicab_fare_prediction/"
BLOB_NAME = BLOB_PATH + FILE_NAME
bucket = storage.Client().bucket(BUCKET_NAME)
blob = bucket.blob(BLOB_NAME)
blob.upload_from_filename(FILE_NAME)
Explanation: Save the model and upload to a Cloud Storage bucket
<a name="section-7"></a>
To deploy the model on Vertex AI, the model needs to be stored in a Cloud Storage bucket first.
End of explanation
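# Quick sanity check (sketch): confirm the artifact is visible in the bucket.
# blob, BUCKET_NAME and BLOB_NAME come from the cell above.
print("uploaded:", blob.exists(), "->", "gs://{}/{}".format(BUCKET_NAME, BLOB_NAME))
Explanation: A quick sanity check (a sketch added here, not in the original notebook) that the uploaded artifact is visible in the bucket before registering the model.
End of explanation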
MODEL_DISPLAY_NAME = "taxi_fare_prediction_model"
ARTIFACT_GCS_PATH = f"{BUCKET_URI}/{BLOB_PATH}"
# The feature name (Input_feature) and output name (Predicted_taxi_fare) below are arbitrary labels
exp_metadata = {"inputs": {"Input_feature": {}}, "outputs": {"Predicted_taxi_fare": {}}}
Explanation: Deploy the model on Vertex AI with support for Vertex Explainable AI
<a name="section-8"></a>
Configure Vertex Explainable AI before deploying the model. For further details, see Configuring Vertex Explainable AI in Vertex AI models.
End of explanation
from google.cloud import aiplatform
from google.cloud.aiplatform_v1.types import SampledShapleyAttribution
from google.cloud.aiplatform_v1.types.explanation import ExplanationParameters
# Create a Vertex AI model resource with support for Vertex Explainable AI
aiplatform.init(project=PROJECT, location=LOCATION)
model = aiplatform.Model.upload(
display_name=MODEL_DISPLAY_NAME,
artifact_uri=ARTIFACT_GCS_PATH,
serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-24:latest",
explanation_metadata=exp_metadata,
explanation_parameters=ExplanationParameters(
sampled_shapley_attribution=SampledShapleyAttribution(path_count=25)
),
)
model.wait()
print(model.display_name)
print(model.resource_name)
Explanation: Create a model resource from the uploaded model with explanation metadata configured.
End of explanation
ENDPOINT_DISPLAY_NAME = "taxi_fare_prediction_endpoint"
endpoint = aiplatform.Endpoint.create(
display_name=ENDPOINT_DISPLAY_NAME, project=PROJECT, location=LOCATION
)
print(endpoint.display_name)
print(endpoint.resource_name)
Explanation: Create an Endpoint resource for the model.
End of explanation
ENDPOINT_ID = ""
Explanation: Save the endpoint ID for inference; it is the numeric suffix of the endpoint resource name printed above.
End of explanation
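# Sketch (assumption): the numeric endpoint ID is the last segment of
# endpoint.resource_name, so it can be filled in programmatically instead of
# pasted by hand.
ENDPOINT_ID = endpoint.resource_name.split("/")[-1]
print(ENDPOINT_ID)
Explanation: A sketch of one way to fill in the endpoint ID programmatically; it assumes the numeric ID is the last path segment of endpoint.resource_name, which is how Vertex AI resource names are structured.
End of explanation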
DEPLOYED_MODEL_NAME = "taxi_fare_prediction_deployment"
MACHINE_TYPE = "n1-standard-2"
# deploy the model to the endpoint
model.deploy(
endpoint=endpoint,
deployed_model_display_name=DEPLOYED_MODEL_NAME,
machine_type=MACHINE_TYPE,
)
model.wait()
print(model.display_name)
print(model.resource_name)
Explanation: Deploy the model to the created endpoint with the required machine type.
End of explanation
DEPLOYED_MODEL_ID = ""
Explanation: Save the ID of the deployed model. The ID of the deployed model can also be checked using the endpoint.list_models() method.
End of explanation
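# Sketch (assumption): with a single model deployed to the endpoint, its ID can
# be read from endpoint.list_models() rather than copied by hand.
deployed_models = endpoint.list_models()
if deployed_models:
    DEPLOYED_MODEL_ID = deployed_models[0].id
print(DEPLOYED_MODEL_ID)
Explanation: A sketch of filling in the deployed model ID programmatically via endpoint.list_models(), assuming exactly one model is deployed to this endpoint.
End of explanation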
# format the top 2 test instances as the request's payload
test_json = {"instances": [X_test.iloc[0].tolist(), X_test.iloc[1].tolist()]}
Explanation: Get explanations from the deployed model
<a name="section-9"></a>
For testing the deployed online model, select two instances from the test data as payload.
End of explanation
features = X_train.columns.to_list()
def plot_attributions(attrs):
Function to plot the features and their attributions for an instance
rows = {"feature_name": [], "attribution": []}
for i, val in enumerate(features):
rows["feature_name"].append(val)
rows["attribution"].append(attrs["Input_feature"][i])
attr_df = pd.DataFrame(rows).set_index("feature_name")
attr_df.plot(kind="bar")
plt.show()
return
def explain_tabular_sample(
project: str, location: str, endpoint_id: str, instances: list
):
Function to make an explanation request for the specified payload and generate feature attribution plots
aiplatform.init(project=project, location=location)
endpoint = aiplatform.Endpoint(endpoint_id)
response = endpoint.explain(instances=instances)
print("#" * 10 + "Explanations" + "#" * 10)
for explanation in response.explanations:
print(" explanation")
# Feature attributions.
attributions = explanation.attributions
for attribution in attributions:
print(" attribution")
print(" baseline_output_value:", attribution.baseline_output_value)
print(" instance_output_value:", attribution.instance_output_value)
print(" output_display_name:", attribution.output_display_name)
print(" approximation_error:", attribution.approximation_error)
print(" output_name:", attribution.output_name)
output_index = attribution.output_index
for output_index in output_index:
print(" output_index:", output_index)
plot_attributions(attribution.feature_attributions)
print("#" * 10 + "Predictions" + "#" * 10)
for prediction in response.predictions:
print(prediction)
return response
test_json = [X_test.iloc[0].tolist(), X_test.iloc[1].tolist()]
prediction = explain_tabular_sample(PROJECT, LOCATION, ENDPOINT_ID, test_json)
Explanation: Call the endpoint with the payload request and parse the response for explanations. The explanations consist of attributions on the independent variables used for training the model, based on the configured attribution method. In this case, we've used the Sampled Shapley method, which assigns credit for the outcome to each feature and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values. Further information on the attribution methods for explanations can be found at Overview of Explainable AI.
End of explanation
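# Conceptual sketch only (separate from the Vertex AI implementation): Sampled
# Shapley approximates each feature's attribution as its average marginal
# contribution over random feature orderings, relative to a baseline instance.
import numpy as np

def sampled_shapley(f, x, baseline, n_paths=25, seed=0):
    rng = np.random.default_rng(seed)
    d = len(x)
    attr = np.zeros(d)
    for _ in range(n_paths):
        order = rng.permutation(d)
        cur = baseline.copy()
        prev = f(cur)
        for j in order:
            cur[j] = x[j]              # switch feature j from baseline to instance value
            new = f(cur)
            attr[j] += new - prev      # marginal contribution of feature j in this ordering
            prev = new
    return attr / n_paths

# toy usage: the attributions sum to f(x) - f(baseline)
f = lambda v: 2 * v[0] + 0.5 * v[1] * v[2]
x, base = np.array([1.0, 2.0, 3.0]), np.zeros(3)
print(sampled_shapley(f, x, base), f(x) - f(base))
Explanation: A self-contained conceptual sketch of the Sampled Shapley idea: attributions are average marginal contributions over random feature orderings measured against a baseline, and they sum to the difference between the prediction for the instance and for the baseline. The toy function and path count here are illustrative only and are not taken from the deployed model.
End of explanation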
endpoint.undeploy(deployed_model_id=DEPLOYED_MODEL_ID)
Explanation: Next steps
Since the Chicago Taxi Trips dataset is continuously updating, one can perform the same kind of analysis and model training every time a new set of data is available. The date range can also be increased from a week to a month or more depending on the quality of the data. Most of the steps followed in this notebook would still be valid and can be applied over the new data unless the data is too noisy. The notebook itself can also be scheduled to run at specified times to retrain the model using the scheduling option of Vertex AI Workbench's executor.
Clean up
<a name="section-10"></a>
Delete the resources created in this notebook.
Undeploy the model by specifying the DEPLOYED_MODEL_ID.
End of explanation
endpoint.delete()
Explanation: Delete the endpoint resource.
End of explanation
model.delete()
Explanation: Delete the model resource.
End of explanation
! gsutil -m rm -r $BUCKET_URI
Explanation: Remove the contents of the created Cloud Storage bucket.
End of explanation |
1,868 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Gradients
In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.
Step1: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE
Step2: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A
Step3: Visualize Examples
Run the following to visualize some example images from random classses in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
Step4: Pretrained model
We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).
To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.
Step5: Pretrained model performance
Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.
Step8: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].
As mentioned in Section 2 of the paper, you should compute the gradient of the image with respect to the unnormalized class score, not with respect to the normalized class probability.
You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.
[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks
Step9: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
Step11: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
Step12: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image. | Python Code:
# As usual, a bit of setup
import time, os, json
import numpy as np
import skimage.io
import matplotlib.pyplot as plt
from cs231n.classifiers.pretrained_cnn import PretrainedCNN
from cs231n.data_utils import load_tiny_imagenet
from cs231n.image_utils import blur_image, deprocess_image
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Image Gradients
In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.
End of explanation
data = load_tiny_imagenet('cs231n/datasets/tiny-imagenet-100-A/tiny-imagenet-100-A', subtract_mean=True)
Explanation: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE: The full TinyImageNet-100-A dataset will take up about 250MB of disk space, and loading the full TinyImageNet-100-A dataset into memory will use about 2.8GB of memory.
End of explanation
for i, names in enumerate(data['class_names']):
print i, ' '.join('"%s"' % name for name in names)
Explanation: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A:
End of explanation
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(data['class_names']), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
train_idxs, = np.nonzero(data['y_train'] == class_idx)
train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)
for j, train_idx in enumerate(train_idxs):
img = deprocess_image(data['X_train'][train_idx], data['mean_image'])
plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)
if j == 0:
plt.title(data['class_names'][class_idx][0])
plt.imshow(img)
plt.gca().axis('off')
plt.show()
Explanation: Visualize Examples
Run the following to visualize some example images from random classses in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
End of explanation
model = PretrainedCNN(h5_file='cs231n/datasets/pretrained_model.h5')
Explanation: Pretrained model
We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).
To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.
End of explanation
batch_size = 100
# Test the model on training data
mask = np.random.randint(data['X_train'].shape[0], size=batch_size)
X, y = data['X_train'][mask], data['y_train'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Training accuracy: ', (y_pred == y).mean()
# Test the model on validation data
mask = np.random.randint(data['X_val'].shape[0], size=batch_size)
X, y = data['X_val'][mask], data['y_val'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Validation accuracy: ', (y_pred == y).mean()
Explanation: Pretrained model performance
Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.
End of explanation
def indices_to_one_hot(data, nb_classes):
Convert an iterable of indices to one-hot encoded labels.
targets = np.array(data).reshape(-1)
return np.eye(nb_classes)[targets]
def compute_saliency_maps(X, y, model):
Compute a class saliency map using the model for images X and labels y.
Input:
- X: Input images, of shape (N, 3, H, W)
- y: Labels for X, of shape (N,)
- model: A PretrainedCNN that will be used to compute the saliency map.
Returns:
- saliency: An array of shape (N, H, W) giving the saliency maps for the input
images.
saliency = None
##############################################################################
# TODO: Implement this function. You should use the forward and backward #
# methods of the PretrainedCNN class, and compute gradients with respect to #
# the unnormalized class score of the ground-truth classes in y. #
##############################################################################
out, cache = model.forward(X)
dout = indices_to_one_hot(y,100)
dX, grads = model.backward(dout, cache)
saliency = np.max(np.abs(dX),axis=1)
##############################################################################
# END OF YOUR CODE #
##############################################################################
return saliency
Explanation: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].
As mentioned in Section 2 of the paper, you should compute the gradient of the image with respect to the unnormalized class score, not with respect to the normalized class probability.
You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.
[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising
Image Classification Models and Saliency Maps", ICLR Workshop 2014.
End of explanation
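# Tiny standalone illustration (assumption: a linear scorer, not the pretrained
# CNN) of why the unnormalized score is used: the softmax-probability gradient
# carries an extra scaling factor that shrinks toward zero once the network is
# confident, which would wash out the saliency map.
import numpy as np
W = np.array([[1.0, -2.0], [0.5, 3.0]])    # 2 classes, 2 "pixels"
x = np.array([4.0, 4.0])
scores = W.dot(x)
p = np.exp(scores - scores.max()); p /= p.sum()
c = 0
grad_score = W[c]                           # d s_c / d x
grad_prob = p[c] * (W[c] - p.dot(W))        # d p_c / d x
print(grad_score, grad_prob)                # the probability gradient is heavily damped here
Explanation: A tiny illustration, using a made-up linear scorer rather than the pretrained model, of why saliency maps are computed from the unnormalized class score instead of the softmax probability.
End of explanation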
def show_saliency_maps(mask):
mask = np.asarray(mask)
X = data['X_val'][mask]
y = data['y_val'][mask]
saliency = compute_saliency_maps(X, y, model)
for i in xrange(mask.size):
plt.subplot(2, mask.size, i + 1)
plt.imshow(deprocess_image(X[i], data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y[i]][0])
plt.subplot(2, mask.size, mask.size + i + 1)
plt.title(mask[i])
plt.imshow(saliency[i])
plt.axis('off')
plt.gcf().set_size_inches(10, 4)
plt.show()
# Show some random images
mask = np.random.randint(data['X_val'].shape[0], size=5)
show_saliency_maps(mask)
# These are some cherry-picked images that should give good results
show_saliency_maps([128, 3225, 2417, 1640, 4619])
Explanation: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
End of explanation
def make_fooling_image(X, target_y, model):
Generate a fooling image that is close to X, but that the model classifies
as target_y.
Inputs:
- X: Input image, of shape (1, 3, 64, 64)
- target_y: An integer in the range [0, 100)
- model: A PretrainedCNN
Returns:
- X_fooling: An image that is close to X, but that is classifed as target_y
by the model.
X_fooling = X.copy()
##############################################################################
# TODO: Generate a fooling image X_fooling that the model will classify as #
# the class target_y. Use gradient ascent on the target class score, using #
# the model.forward method to compute scores and the model.backward method #
# to compute image gradients. #
# #
# HINT: For most examples, you should be able to generate a fooling image #
# in fewer than 100 iterations of gradient ascent. #
##############################################################################
#current_loss, grads = model.loss(X_fooling,target_y)
scores, cache = model.forward(X_fooling)
i = 0
while scores.argmax() != target_y:
print(i,scores.argmax(),target_y)
dout = indices_to_one_hot(target_y,100)
dX, grads = model.backward(dout, cache)
X_fooling += 200 * dX
scores, cache = model.forward(X_fooling)
i += 1
##############################################################################
# END OF YOUR CODE #
##############################################################################
return X_fooling
Explanation: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
End of explanation
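# One possible refinement of the gradient-ascent loop above (a sketch, not part
# of the assignment's reference solution): normalize each step and project the
# fooling image back into a small L2 ball around the original so the
# perturbation stays visually small. The step size and radius are placeholders.
import numpy as np

def fooling_step(X_fooling, dX, X_orig, lr=1000.0, max_l2=25.0):
    step = lr * dX / (np.linalg.norm(dX) + 1e-12)   # fixed-size step along the gradient
    X_new = X_fooling + step
    diff = X_new - X_orig
    norm = np.linalg.norm(diff)
    if norm > max_l2:                               # project back toward the original image
        X_new = X_orig + diff * (max_l2 / norm)
    return X_new
Explanation: An optional refinement sketch for the fooling-image loop: a normalized gradient step plus an L2 projection keeps the fooling image close to the original. The hyperparameters shown are arbitrary placeholders, not values from the original notebook.
End of explanation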
# Find a correctly classified validation image
while True:
i = np.random.randint(data['X_val'].shape[0])
X = data['X_val'][i:i+1]
y = data['y_val'][i:i+1]
y_pred = model.loss(X)[0].argmax()
if y_pred == y: break
target_y = 67
X_fooling = make_fooling_image(X, target_y, model)
# Make sure that X_fooling is classified as y_target
scores = model.loss(X_fooling)
assert scores[0].argmax() == target_y, 'The network is not fooled!'
# Show original image, fooling image, and difference
plt.subplot(1, 3, 1)
plt.imshow(deprocess_image(X, data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y][0])
plt.subplot(1, 3, 2)
plt.imshow(deprocess_image(X_fooling, data['mean_image'], renorm=True))
plt.title(data['class_names'][target_y][0])
plt.axis('off')
plt.subplot(1, 3, 3)
plt.title('Difference')
plt.imshow(deprocess_image(X - X_fooling, data['mean_image']))
plt.axis('off')
plt.show()
Explanation: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image.
End of explanation |
1,869 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.ml - Machine Learning et Marketting - correction
Classification binaire, correction.
Step1: Données
Tout d'abord, on récupère la base de données
Step2: Exercice 1
Step3: On traite les variables catégorielles
Step4: On construit les deux matrices $(X,Y)$ = (features, classe).
Remarque
Step5: Quelques corrélations sont très grandes malgré tout
Step6: On divise en base d'apprentissage et de test
Step7: Puis on cale un modèle d'apprentissage
Step8: La méthode ravel évite de prendre en compte l'index de Y_train. La méthode train_test_split conserve dans l'index les positions initiales des élèments. Mais l'index fait que Y_train[0] ne désigne pas le premier élément de Y_train mais le premier élément du tableau initial. Y_train.ravel()[0] désigne bien le premier élément du tableau. On calcule ensuite la matrice de confusion (Confusion matrix)
Step9: Si le model choisi est un GradientBoostingClassifier, on peut regarder l'importance des variables dans la construction du résultat. Le graphe suivant est inspiré de la page Gradient Boosting regression même si ce n'est pas une régression qui a été utilisée ici.
Step10: Il faut tout de même rester prudent quant à l'interprétation du graphe précédent. La documentation au sujet de limportance des features précise plus ou moins comment sont calculés ces chiffres. Toutefois, lorsque des variables sont très corrélées, elles sont plus ou moins interchangeables. Tout dépend alors comment l'algorithme d'apprentissage choisit telle ou telle variables, toujours dans le même ordre ou dans un ordre aléatoire.
variables
On utilise le code de la séance 3 Analyse en Composantes Principales pour observer les variables.
Step11: Les variables les plus dissemblables sont celles qui contribuent le plus. Toutefois, à la vue de ce graphique, il apparaît qu'il faut normaliser les données avant d'interpréter l'ACP
Step12: Nettement mieux. En règle générale, il est préférable de normaliser ses données avant d'apprendre un modèle. Cela n'est pas toujours nécessaire (comme pour les arbres de décision). Toutefois, numériquement, avoir des données d'ordre de grandeur très différent introduit toujours plus d'approximations.
Step13: C'est plus ou moins équivalent lorsque les variables sont normalisées dans ce cas. Il faudrait vérifier sur la courbe ROC.
Exercice 2
Step14: On construit le vecteur des bonnes réponses
Step15: Ce score n'est pas si mal pour un premier essai. On n'a pas tenu compte du fait que la classe 1 est sous-représentée (voir Quelques astuces pour faire du machine learning. A priori, ce ne devrait pas être le cas du GradientBoostingClassifier. C'est une famille de modèles qui, lors de l'apprentissage, pondère davantage les exemples où ils font des erreurs. L'algorithme de boosting le plus connu est AdaBoost.
On tire maintenant deux échantillons aléatoires qu'on ajoute au graphique précédent | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.ml - Machine Learning et Marketting - correction
Classification binaire, correction.
End of explanation
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/00222/"
file = "bank.zip"
import pyensae.datasource
data = pyensae.datasource.download_data(file, website=url)
import pandas
df = pandas.read_csv("bank.csv",sep=";")
df.tail()
Explanation: Données
Tout d'abord, on récupère la base de données : Bank Marketing Data Set.
End of explanation
import numpy
import numpy as np
numerique = [ c for c,d in zip(df.columns,df.dtypes) if d == numpy.int64 ]
categories = [ c for c in df.columns if c not in numerique and c not in ["y"] ]
target = "y"
print(numerique)
print(categories)
print(target)
num = df[ numerique ]
cat = df[ categories ]
tar = df[ target ]
Explanation: Exercice 1 : prédire y en fonction des attributs
Les données ne sont pas toutes au format numérique, il faut convertir les variables catégorielles. Pour cela, on utilise la fonction DictVectorizer.
End of explanation
from sklearn.feature_extraction import DictVectorizer
prep = DictVectorizer()
cat_as_dicts = [dict(r.iteritems()) for _, r in cat.iterrows()]
temp = prep.fit_transform(cat_as_dicts)
cat_exp = temp.toarray()
prep.feature_names_
Explanation: On traite les variables catégorielles :
End of explanation
cat_exp_df = pandas.DataFrame( cat_exp, columns = prep.feature_names_ )
reject = ['contact=unknown', 'default=yes', 'education=unknown', 'housing=yes','job=unknown',
'loan=yes', 'marital=single', 'month=sep', 'poutcome=unknown']
keep = [ c for c in cat_exp_df.columns if c not in reject ]
cat_exp_df_nocor = cat_exp_df [ keep ]
X = pandas.concat ( [ num, cat_exp_df_nocor ], axis= 1)
Y = tar.apply( lambda r : (1.0 if r == "yes" else 0.0))
X.shape, Y.shape
Explanation: On construit les deux matrices $(X,Y)$ = (features, classe).
Remarque : certains modèles d'apprentissage n'acceptent pas les corrélations. Lorsqu'on crée des variables catégorielles à choix unique, les sommes des colonnes associées à une catégories fait nécessairement un. Avec deux variables catégorielles, on introduit nécessairement des corrélations. On pense à enlever les dernières catégories : 'contact=unknown', 'default=yes', 'education=unknown', 'housing=yes', 'job=unknown', 'loan=yes', 'marital=single', 'month=sep', 'poutcome=unknown'.
End of explanation
import numpy
numpy.corrcoef(X)
Explanation: Quelques corrélations sont très grandes malgré tout :
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33)
Explanation: On divise en base d'apprentissage et de test :
End of explanation
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
type_classifier = GradientBoostingClassifier
clf = type_classifier()
clf = clf.fit(X_train, Y_train.ravel())
Explanation: Puis on cale un modèle d'apprentissage :
End of explanation
from sklearn.metrics import confusion_matrix
for x,y in [ (X_train, Y_train), (X_test, Y_test) ]:
yp = clf.predict(x)
cm = confusion_matrix(y.ravel(), yp.ravel())
print(cm)
import matplotlib.pyplot as plt
plt.matshow(cm)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
Explanation: La méthode ravel évite de prendre en compte l'index de Y_train. La méthode train_test_split conserve dans l'index les positions initiales des élèments. Mais l'index fait que Y_train[0] ne désigne pas le premier élément de Y_train mais le premier élément du tableau initial. Y_train.ravel()[0] désigne bien le premier élément du tableau. On calcule ensuite la matrice de confusion (Confusion matrix) :
End of explanation
import numpy as np
feature_name = X.columns
limit = 20
feature_importance = clf.feature_importances_[:20]
feature_importance = 100.0 * (feature_importance / feature_importance.max())
sorted_idx = np.argsort(feature_importance)
pos = np.arange(sorted_idx.shape[0]) + .5
plt.subplot(1, 2, 2)
plt.barh(pos, feature_importance[sorted_idx], align='center')
plt.yticks(pos, feature_name[sorted_idx])
plt.xlabel('Relative Importance')
plt.title('Variable Importance')
Explanation: Si le model choisi est un GradientBoostingClassifier, on peut regarder l'importance des variables dans la construction du résultat. Le graphe suivant est inspiré de la page Gradient Boosting regression même si ce n'est pas une régression qui a été utilisée ici.
End of explanation
from sklearn.decomposition import PCA
pca = PCA(n_components=4)
x_transpose = X.T
pca.fit(x_transpose)
plt.bar(numpy.arange(len(pca.explained_variance_ratio_))+0.5, pca.explained_variance_ratio_)
plt.title("Variance expliquée")
import warnings
warnings.filterwarnings('ignore')
X_reduced = pca.transform(x_transpose)
plt.figure(figsize=(18,6))
plt.scatter(X_reduced[:, 0], X_reduced[:, 1])
for label, x, y in zip(x_transpose.index, X_reduced[:, 0], X_reduced[:, 1]):
plt.annotate(
label,
xy = (x, y), xytext = (-10, 10),
textcoords = 'offset points', ha = 'right', va = 'bottom',
bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5),
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))
Explanation: Il faut tout de même rester prudent quant à l'interprétation du graphe précédent. La documentation au sujet de limportance des features précise plus ou moins comment sont calculés ces chiffres. Toutefois, lorsque des variables sont très corrélées, elles sont plus ou moins interchangeables. Tout dépend alors comment l'algorithme d'apprentissage choisit telle ou telle variables, toujours dans le même ordre ou dans un ordre aléatoire.
variables
On utilise le code de la séance 3 Analyse en Composantes Principales pour observer les variables.
End of explanation
from sklearn.preprocessing import normalize
xnorm = normalize(x_transpose)
pca = PCA(n_components=10)
pca.fit(xnorm)
plt.bar(numpy.arange(len(pca.explained_variance_ratio_))+0.5, pca.explained_variance_ratio_)
plt.title("Variance expliquée")
X_reduced = pca.transform(xnorm)
plt.figure(figsize=(18,6))
plt.scatter(X_reduced[:, 0], X_reduced[:, 1])
for label, x, y in zip(x_transpose.index, X_reduced[:, 0], X_reduced[:, 1]):
plt.annotate(
label,
xy = (x, y), xytext = (-10, 10),
textcoords = 'offset points', ha = 'right', va = 'bottom',
bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5),
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))
Explanation: Les variables les plus dissemblables sont celles qui contribuent le plus. Toutefois, à la vue de ce graphique, il apparaît qu'il faut normaliser les données avant d'interpréter l'ACP :
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer
from sklearn.ensemble import GradientBoostingClassifier
clf = Pipeline([
('normalize', Normalizer()),
('classification', GradientBoostingClassifier())
])
clf = clf.fit(X_train, Y_train.ravel())
from sklearn.metrics import confusion_matrix
x,y = X_test, Y_test
yp = clf.predict(x)
cm2 = confusion_matrix(y, yp)
print("non normalisé\n",cm)
print("normalisé\n",cm2)
Explanation: Nettement mieux. En règle générale, il est préférable de normaliser ses données avant d'apprendre un modèle. Cela n'est pas toujours nécessaire (comme pour les arbres de décision). Toutefois, numériquement, avoir des données d'ordre de grandeur très différent introduit toujours plus d'approximations.
End of explanation
from sklearn.metrics import roc_curve, auc
probas = clf.predict_proba(X_test)
probas[:5]
Explanation: C'est plus ou moins équivalent lorsque les variables sont normalisées dans ce cas. Il faudrait vérifier sur la courbe ROC.
Exercice 2 : tracer la courbe ROC
On utilise l'exemple Receiver Operating Characteristic (ROC) qu'il faut modifié car la réponse juste dans notre cas est le fait de prédire la bonne classe. Cela veut dire qu'il y a deux cas pour lesquels le modèle prédit le bon résultat : on choisit la classe qui la probabilité la plus forte.
End of explanation
rep = [ ]
yt = Y_test.ravel()
for i in range(probas.shape[0]):
p0,p1 = probas[i,:]
exp = yt[i]
if p0 > p1 :
if exp == 0 :
# bonne réponse
rep.append ( (1, p0) )
else :
# mauvaise réponse
rep.append( (0,p0) )
else :
if exp == 0 :
# mauvaise réponse
rep.append ( (0, p1) )
else :
# bonne réponse
rep.append( (1,p1) )
mat_rep = numpy.array(rep)
mat_rep[:5]
"taux de bonne réponse",sum(mat_rep[:,0]/len(mat_rep)) # voir matrice de confusion
fpr, tpr, thresholds = roc_curve(mat_rep[:,0], mat_rep[:, 1])
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC')
plt.legend(loc="lower right")
Explanation: On construit le vecteur des bonnes réponses :
End of explanation
import random
Y1 = numpy.array([ random.randint(0,1) == 0 for i in range(0,mat_rep.shape[0]) ])
Y2 = numpy.array([ random.randint(0,1) == 0 for i in range(0,mat_rep.shape[0]) ])
fpr1, tpr1, thresholds1 = roc_curve(mat_rep[Y1,0], mat_rep[Y1, 1])
roc_auc1 = auc(fpr1, tpr1)
fpr2, tpr2, thresholds2 = roc_curve(mat_rep[Y2,0], mat_rep[Y2, 1])
roc_auc2 = auc(fpr2, tpr2)
print(fpr1.shape,tpr1.shape,fpr2.shape,tpr2.shape)
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
ax.plot([0, 1,2], [0, 1,2], 'k--')
ax.set_xlim([0.0, 1.0])
ax.set_ylim([0.0, 1.0])
ax.set_xlabel('False Positive Rate')
ax.set_ylabel('True Positive Rate')
ax.set_title('ROC')
ax.plot(fpr1, tpr1, label='ech 1, area=%0.2f' % roc_auc1)
ax.plot(fpr2, tpr2, label='ech 2, area=%0.2f' % roc_auc2)
ax.legend(loc="lower right")
Explanation: This score is not that bad for a first attempt. We did not take into account the fact that class 1 is under-represented (see Quelques astuces pour faire du machine learning). A priori, this should not be an issue for the GradientBoostingClassifier: it belongs to a family of models that, during training, put more weight on the examples on which they make mistakes. The best-known boosting algorithm is AdaBoost.
We now draw two random subsamples and add their ROC curves to the previous plot:
End of explanation |
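# Small illustration (not from the original notebook) of the boosting idea
# mentioned above: AdaBoost computes the weighted error of the current weak
# learner, then increases the weights of the misclassified samples so the next
# learner focuses on them.
import numpy as np

def adaboost_reweight(w, y_true, y_pred):
    # w: current sample weights, y_true / y_pred in {0, 1}
    err = np.sum(w * (y_true != y_pred)) / np.sum(w)
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    w_new = w * np.where(y_true != y_pred, np.exp(alpha), np.exp(-alpha))
    return w_new / w_new.sum(), alpha

w = np.ones(6) / 6
y_true = np.array([1, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1])          # two mistakes
w2, a = adaboost_reweight(w, y_true, y_pred)
print(np.round(w2, 3), round(a, 3))            # the two misclassified points now weigh more
Explanation: A small added sketch of the AdaBoost reweighting rule referred to above: misclassified samples receive larger weights at each round. The toy labels and weights are illustrative only.
End of explanation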
1,870 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Essentially same as otbn_find_bits.ipynb but streamlined for 100M captures.
Step1: optional, if we need to plot to understand why we're not finding good bit times
Step2: p384 alignment method
Step3: Superimpose all the bits!
Plot overlayed bit traces to visualize alignment and guess at success of time extraction
Step4: Now try resync
Step6: Original approach
Step7: Find runs of samples below threshold value
Step8: Use these runs to guess at bit start times
Step9: Now we make the bit start times more accurate by using the single isolated large peak that's about 650 samples in
Step10: What if we use the SAD approach to find bits instead?
Step11: Average 'one' and 'zero'
Step12: attack using just the sum of the power trace segment
Step13: attack using markers | Python Code:
import numpy as np
wave = np.load('waves_p256_100M_2s.npy')
#wave = np.load('waves_p256_100M_2s_12bits.npy')
#wave = np.load('waves_p256_100M_2s_12bits830.npy')
#wave = np.load('waves_p256_100M_2s_12bitsf0c.npy')
import numpy as np
import pandas as pd
from scipy import signal
def butter_highpass(cutoff, fs, order=5):
nyq = 0.5 * fs
normal_cutoff = cutoff / nyq
b, a = signal.butter(order, normal_cutoff, btype='high', analog=False)
return b, a
def butter_highpass_filter(data, cutoff, fs, order=9):
b, a = butter_highpass(cutoff, fs, order=order)
y = signal.filtfilt(b, a, data)
return y
filtered_wave = butter_highpass_filter(wave, 6e6, 100e6) # for NON-streamed 100M capture
Explanation: Essentially same as otbn_find_bits.ipynb but streamlined for 100M captures.
End of explanation
#samples = len(waves[0])
samples = 600000
base = 0
import holoviews as hv
from holoviews.operation import decimate
from holoviews.operation.datashader import datashade, shade, dynspread
hv.extension('bokeh')
wf = datashade(hv.Curve(filtered_wave[base:base+samples]), cmap=['black'])
(wf).opts(width=2000, height=600)
Explanation: optional, if we need to plot to understand why we're not finding good bit times:
End of explanation
def moving_average(x, w):
return np.convolve(x, np.ones(w), 'valid') / w
mfw = moving_average(np.abs(filtered_wave), 3000)
len(mfw)
samples = 600000
base = 0
mwf = datashade(hv.Curve(mfw[base:base+samples]), cmap=['black'])
mwf.opts(width=2000, height=600)
base = 0
samples = len(filtered_wave)
from scipy.signal import find_peaks
peaks, _ = find_peaks(-mfw[base:base+samples], distance=30000)
len(peaks), peaks
bit_starts3 = peaks[1:]
bit_starts3
deltas = []
good_deltas = []
good_bits = 0
for i in range(len(bit_starts3)-2):
delta = bit_starts3[i+1] - bit_starts3[i]
deltas.append(delta)
print(delta, end='')
if 32000 < delta < 32300:
good_bits += 1
good_deltas.append(delta)
print()
else:
print(' oops!')
good_bits
hv.Curve(good_deltas).opts(width=2000, height=900)
duration = int(np.average(good_deltas))
duration, np.average(good_deltas), max(good_deltas)-min(good_deltas)
bbstarts = []
for i in range(256):
bbstarts.append(42970 + i*32153)
Explanation: p384 alignment method:
End of explanation
bit_starts = bit_starts3[:256]
#bit_starts = bbstarts
bits = []
bit_size = bit_starts[1] - bit_starts[0]
for start in bit_starts:
bits.append(filtered_wave[start:start+bit_size])
len(bits)
duration
# Can plot all the bits, but it's slow:
#numbits = len(bits)
#duration = 1000
duration = 32152
numbits = 4
import holoviews as hv
from holoviews.operation import decimate
from holoviews.operation.datashader import datashade, shade, dynspread
hv.extension('bokeh')
xrange = range(duration)
from operator import mul
from functools import reduce
curves = [hv.Curve(zip(xrange, filtered_wave[bit_starts[i]:bit_starts[i]+duration])) for i in range(numbits)]
#curves = [hv.Curve(zip(xrange, filtered_wave[bbstarts[i]:bbstarts[i]+duration])) for i in range(numbits)]
reduce(mul, curves).opts(width=2000, height=900)
Explanation: Superimpose all the bits!
Plot overlayed bit traces to visualize alignment and guess at success of time extraction:
End of explanation
import chipwhisperer.analyzer.preprocessing as preprocess
resync = preprocess.ResyncDTW()
import fastdtw as fastdtw
def align_traces(N, r, ref, trace, cython=True):
#try:
if cython:
# cython version can't take numpy.memmap inputs, so we convert them to arrays:
aref = np.array(list(ref))
atrace = np.array(list(trace))
dist, path = fastdtw.fastdtw(aref, atrace, radius=r, dist=None)
else:
dist, path = old_dtw(ref, trace, radius=r, dist=None)
#except:
# return None
px = [x for x, y in path]
py = [y for x, y in path]
n = [0] * N
s = [0.0] * N
for x, y in path:
s[x] += trace[y]
n[x] += 1
ret = [s[i] / n[i] for i in range(N)]
return ret
ref = bits[0]
target = filtered_wave[bit_starts[1]:bit_starts[1]+duration]
from tqdm.notebook import tnrange
realigns = [ref]
for b in tnrange(1,256):
target = bits[b]
realigns.append(np.asarray(align_traces(N=len(ref), r=3, ref=ref, trace=target)))
#numbits = len(bits)
numbits = 40
#curves = [hv.Curve(zip(xrange, realigns[i])) for i in range(numbits)]
curves = [hv.Curve(zip(xrange, realigns[i])) for i in range(128,160)]
reduce(mul, curves).opts(width=2000, height=900)
b0 = hv.Curve(ref)
b1 = hv.Curve(target)
re = hv.Curve(realigned)
#(b0 * b1 * re).opts(width=2000, height=900)
#(b0 * b1).opts(width=2000, height=900)
(b0 * re).opts(width=2000, height=900)
Explanation: Now try resync:
End of explanation
def contiguous_regions(condition):
Finds contiguous True regions of the boolean array "condition". Returns
a 2D array where the first column is the start index of the region and the
second column is the end index.
# Find the indicies of changes in "condition"
d = np.diff(condition.astype(int))
idx, = d.nonzero()
# We need to start things after the change in "condition". Therefore,
# we'll shift the index by 1 to the right.
idx += 1
if condition[0]:
# If the start of condition is True prepend a 0
idx = np.r_[0, idx]
if condition[-1]:
# If the end of condition is True, append the length of the array
idx = np.r_[idx, condition.size] # Edit
# Reshape the result into two columns
idx.shape = (-1,2)
return idx
Explanation: Original approach:
End of explanation
# for 100M NOT streamed:
THRESHOLD = 0.015
MIN_RUN_LENGTH = 60 # default for the 128 1's / 128 0's
#MIN_RUN_LENGTH = 40
STOP=len(filtered_wave)
#STOP=360000
condition = np.abs(filtered_wave[:STOP]) < THRESHOLD
# Print the start and stop indices of each region where the absolute
# values of x are below 1, and the min and max of each of these regions
results = contiguous_regions(condition)
#print(len(results))
goods = results[np.where(results[:,1] - results[:,0] > MIN_RUN_LENGTH)]
print(len(goods))
# to help debug:
last_stop = 0
for g in goods:
start = g[0]
stop = g[1]
l = stop-start
delta = start - last_stop
if 13000 < delta < 18000:
stat = 'ok'
else:
stat = 'OOOOPS?!?'
print('%8d %8d %8d %8d %s' % (l, delta, start, stop, stat))
last_stop = stop
Explanation: Find runs of samples below threshold value:
(keep only runs that are long enough)
End of explanation
raw_starts = []
for i in range(1, len(goods), 2):
raw_starts.append(goods[i][1])
raw_starts[:12]
duration = raw_starts[1] - raw_starts[0]
print(duration)
Explanation: Use these runs to guess at bit start times:
End of explanation
wstart = 500
wend = 700
#wstart = 1550
#wend = 1620
base = np.argmax(filtered_wave[raw_starts[0]+wstart:raw_starts[0]+wend])
bit_starts = [raw_starts[0]]
for s in raw_starts[1:]:
loc = np.argmax(filtered_wave[s+wstart:s+wend])
offset = base-loc
#print(offset)
bit_starts.append(s + offset)
len(raw_starts), len(bit_starts)
for b in range(11):
delta = raw_starts[b+1] - raw_starts[b]
print(delta, end='')
if not 31000 < delta < 33000:
print(' Ooops!')
else:
print()
Explanation: Now we make the bit start times more accurate by using the single isolated large peak that's about 650 samples in:
hmm, not sure if this actually improves the results...
End of explanation
from bokeh.plotting import figure, show
from bokeh.resources import INLINE
from bokeh.io import output_notebook
output_notebook(INLINE)
samples = 120000
xrange = range(samples)
S = figure(width=2000, height=900)
S.line(xrange, filtered_wave[:samples], color='blue')
show(S)
#base = 45973
#base = 43257
base = 45067
#cycles = 32150 # full bit
#cycles = 32150//2 # half bit
cycles = 2000 # something short
#cycles = 80000 # *more* than one bit
refbit = filtered_wave[base:base+cycles]
from tqdm.notebook import tnrange
diffs = []
for i in tnrange(78000, 500000):
diffs.append(np.sum(abs(refbit - filtered_wave[i:i+len(refbit)])))
base + 31350
import holoviews as hv
from holoviews.operation import decimate
from holoviews.operation.datashader import datashade, shade, dynspread
hv.extension('bokeh')
datashade(hv.Curve(diffs)).opts(width=2000, height=900)
Explanation: What if we use the SAD approach to find bits instead?
End of explanation
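# Possible follow-up to the SAD trace (sketch): local minima of the difference
# curve, spaced roughly one bit apart (~32150 samples here), give candidate bit
# start times, mirroring the find_peaks approach used on the moving average.
# The 78000 offset matches the start of the loop that produced diffs.
import numpy as np
from scipy.signal import find_peaks
sad = np.asarray(diffs)
sad_starts, _ = find_peaks(-sad, distance=30000)
print(len(sad_starts), sad_starts[:5] + 78000)
Explanation: An added sketch showing one way to turn the SAD curve above into candidate bit start times; the spacing and offset values are taken from the surrounding cells and may need adjusting for other captures.
End of explanation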
duration
#starts = raw_starts
#starts = bit_starts
starts = bit_starts3[:256]
# f0c: 1111_0000_1111
avg_trace = np.zeros(duration)
avg_ones = np.zeros(duration)
avg_zeros = np.zeros(duration)
for i, start in enumerate(starts[:12]):
avg_trace += filtered_wave[start:start+duration]
#if i < 6:
if i < 4 or i > 7:
avg_ones += filtered_wave[start:start+duration]
#elif i < 12:
elif 3 < i < 8:
avg_zeros += filtered_wave[start:start+duration]
avg_trace /= 12 #len(bit_starts)
#avg_ones /= 6 #len(bit_starts)/2
#avg_zeros /= 6 #len(bit_starts)/2
avg_ones /= 8 #len(bit_starts)/2
avg_zeros /= 4 #len(bit_starts)/2
for b in range(10):
print(len(realigns[b]))
duration = 32151
avg_trace = np.zeros(duration)
avg_ones = np.zeros(duration)
avg_zeros = np.zeros(duration)
for i in range(256):
avg_trace += realigns[i]
if i < 128:
avg_ones += realigns[i]
else:
avg_zeros += realigns[i]
avg_trace /= 256
avg_ones /= 128
avg_zeros /= 128
# what if we don't realign?
duration = 32151
avg_trace = np.zeros(duration)
avg_ones = np.zeros(duration)
avg_zeros = np.zeros(duration)
for i in range(256):
avg_trace += bits[i]
if i < 128:
avg_ones += bits[i]
else:
avg_zeros += bits[i]
avg_trace /= 256
avg_ones /= 128
avg_zeros /= 128
import holoviews as hv
from holoviews.operation import decimate
from holoviews.operation.datashader import datashade, shade, dynspread
hv.extension('bokeh')
xrange = range(duration)
cavg_all = datashade(hv.Curve(avg_trace), cmap=['black'])
cavg_ones = datashade(hv.Curve(avg_ones), cmap=['blue'])
cavg_zeros = datashade(hv.Curve(avg_zeros), cmap=['green'])
cdiff = datashade(hv.Curve((avg_ones - avg_zeros)), cmap=['red'])
#(cavg_all * cavg_ones * cavg_zeros).opts(width=2000, height=900)
#(cdiff * cavg_all).opts(width=2000, height=600)
#(cavg_ones*cavg_zeros).opts(width=2000, height=600)
(cavg_zeros*cavg_ones).opts(width=2000, height=600)
(cdiff).opts(width=2000, height=600)
np.average(avg_ones), np.average(avg_zeros)
np.sum(abs(avg_ones)) / np.sum(abs(avg_zeros))
Explanation: Average 'one' and 'zero'
End of explanation
scores = []
#for b in bit_starts:
for b in raw_starts:
scores.append(np.sum(abs(filtered_wave[b:b+duration])))
cscores = hv.Curve(scores[:12])
(cscores).opts(width=2000, height=600)
Explanation: attack using just the sum of the power trace segment:
End of explanation
markers = np.where((avg_ones - avg_zeros) > 0.01)[0]
#markers = np.where(abs(avg_ones - avg_zeros) > 0.005)[0]
len(markers)
markers
scores = []
for b in starts:
score = 0
for marker in markers:
#score += abs(filtered_wave[b + marker])
score += filtered_wave[b + marker]
scores.append(score)
cscores = hv.Curve(scores)
(cscores).opts(width=2000, height=600)
scores = []
for b in range(256):
score = 0
for marker in markers:
score += abs(realigns[b][marker])
scores.append(score)
scores = []
for b in range(256):
score = 0
for marker in markers:
score += bits[b][marker]
scores.append(score)
scores = []
for b in range(256):
score = 0
for m in range(18000,19200):
score += abs(bits[b][m])
scores.append(score)
np.average(scores[:128]), np.average(scores[128:])
np.average(scores[:10])
np.average(scores[128:138])
scores[128:138]
max(scores), min(scores)
Explanation: attack using markers:
End of explanation |
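# Sketch (not in the original capture notebook): convert the per-bit scores
# computed above into hard bit guesses and count how many match the known key
# pattern. Assumes, as in the averaging cells, that the first 128 bits of this
# capture are ones and the last 128 are zeros, and that ones score higher;
# flip the comparison if the polarity is reversed.
import numpy as np
guess = (np.asarray(scores) > np.median(scores)).astype(int)
truth = np.array([1] * 128 + [0] * 128)
acc = (guess == truth).mean()
print("recovered bits: %d / 256 (%.1f%%)" % ((guess == truth).sum(), 100 * acc))
Explanation: A possible final evaluation step added here as a sketch: threshold the marker-based scores into bit guesses and compare against the known 128-ones / 128-zeros key used for this capture. The threshold choice and score polarity are assumptions.
End of explanation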
1,871 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 8</font>
Download
Step1: Bokeh
Caso o Bokeh não esteja instalado, executar no prompt ou terminal
Step2: Gráfico de Barras
Step3: ScatterPlot
Step4: Gráfico de Círculos
Step5: Gráfico com Dados Geofísicos | Python Code:
# Versão da Linguagem Python
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 8</font>
Download: http://github.com/dsacademybr
End of explanation
# Importando o módulo Bokeh
import bokeh
from bokeh.io import show, output_notebook
from bokeh.plotting import figure, output_file
from bokeh.models import ColumnDataSource
from bokeh.transform import factor_cmap
from bokeh.palettes import Spectral6
# Carregando o Bokeh
output_notebook()
# Arquivo gerado pela visualização
output_file("Bokeh-Grafico-Interativo.html")
p = figure()
type(p)
p.line([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], line_width = 2)
show(p)
Explanation: Bokeh
Caso o Bokeh não esteja instalado, executar no prompt ou terminal: pip install bokeh
End of explanation
# Criando um novo gráfico
output_file("Bokeh-Grafico-Barras.html")
fruits = ['Maças', 'Peras', 'Tangerinas', 'Uvas', 'Melancias', 'Morangos']
counts = [5, 3, 4, 2, 4, 6]
source = ColumnDataSource(data=dict(fruits=fruits, counts=counts))
p = figure(x_range=fruits, plot_height=350, toolbar_location=None, title="Contagem de Frutas")
p.vbar(x='fruits',
top='counts',
width=0.9,
source=source,
legend_label="fruits",
line_color='white',
fill_color=factor_cmap('fruits', palette=Spectral6, factors=fruits))
p.xgrid.grid_line_color = None
p.y_range.start = 0
p.y_range.end = 9
p.legend.orientation = "horizontal"
p.legend.location = "top_center"
show(p)
Explanation: Gráfico de Barras
End of explanation
# Construindo um ScatterPlot
from bokeh.plotting import figure, show, output_file
from bokeh.sampledata.iris import flowers
colormap = {'setosa': 'red', 'versicolor': 'green', 'virginica': 'blue'}
colors = [colormap[x] for x in flowers['species']]
p = figure(title = "Iris Morphology")
p.xaxis.axis_label = 'Petal Length'
p.yaxis.axis_label = 'Petal Width'
p.circle(flowers["petal_length"], flowers["petal_width"], color=colors, fill_alpha=0.2, size=10)
output_file("Bokeh_grafico_Iris.html", title="iris.py example")
show(p)
Explanation: ScatterPlot
End of explanation
from bokeh.plotting import figure, output_file, show
# Outuput
output_file("Bokeh-Grafico-Circulos.html")
p = figure(plot_width = 400, plot_height = 400)
# Adicionando círculos ao gráfico
p.circle([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], size = 20, color = "navy", alpha = 0.5)
# Mostrando o resultado
show(p)
Explanation: Gráfico de Círculos
End of explanation
# Geojson
from bokeh.io import output_file, show
from bokeh.models import GeoJSONDataSource
from bokeh.plotting import figure
from bokeh.sampledata.sample_geojson import geojson
geo_source = GeoJSONDataSource(geojson=geojson)
p = figure()
p.circle(x = 'x', y = 'y', alpha = 0.9, source = geo_source)
output_file("Bokeh-GeoJSON.html")
show(p)
# Baixando o diretório de dados de exemplo do Bokeh
bokeh.sampledata.download()
# Mapa
from bokeh.io import show
from bokeh.models import (ColumnDataSource, HoverTool, LogColorMapper)
from bokeh.palettes import Viridis6 as palette
from bokeh.plotting import figure
from bokeh.sampledata.us_counties import data as counties
from bokeh.sampledata.unemployment import data as unemployment
# palette.reverse()
counties = {code: county for code, county in counties.items() if county["state"] == "tx"}
county_xs = [county["lons"] for county in counties.values()]
county_ys = [county["lats"] for county in counties.values()]
county_names = [county['name'] for county in counties.values()]
county_rates = [unemployment[county_id] for county_id in counties]
color_mapper = LogColorMapper(palette=palette)
source = ColumnDataSource(data = dict(x = county_xs,
y = county_ys,
name = county_names,
rate = county_rates,))
TOOLS = "pan,wheel_zoom,reset,hover,save"
p = figure(title = "Texas Unemployment, 2009",
tools = TOOLS,
x_axis_location = None,
y_axis_location = None)
p.grid.grid_line_color = None
p.patches('x', 'y', source = source,
fill_color = {'field': 'rate', 'transform': color_mapper},
fill_alpha = 0.7, line_color = "white", line_width = 0.5)
hover = p.select_one(HoverTool)
hover.point_policy = "follow_mouse"
hover.tooltips = [
("Name", "@name"),
("Unemployment rate)", "@rate%"),
("(Long, Lat)", "($x, $y)"),
]
show(p)
Explanation: Gráfico com Dados Geofísicos
End of explanation |
1,872 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<br/><br/>
skutil
Skutil brings the best of both worlds to H2O and sklearn, delivering an easy transition into the world of distributed computing that H2O offers, while providing the same, familiar interface that sklearn users have come to know and love. This notebook will give an example of how to use skutil preprocessors with H2OEstimators and H2OFrames.
Author
Step1: Initialize H2O
First, we'll start our H2O cluster...
Step2: Load data
We'll load sklearn's breast cancer data. Using skutil's from_pandas method, we can upload a Pandas frame to the H2O cloud
Step3: train/test split
Sklearn provides a great mechanism for splitting data into a train and validation set. Skutil provides the same mechanism for h2o frames. This cell does the following
Step4: preprocessing with skutil.h2o
Skutil provides an h2o module which delivers some skutil feature_selection classes that can operate on an H2OFrame. Each BaseH2OTransformer has the following __init__ signature
Step5: Multicollinearity
Multicollinearity (MC) can be detrimental to the fit of parametric models (for our example, we're going to use a tree-based model, which is non-parametric, but the demo is still useful), and can cause confounding results in some models' variable importances. With skutil, we can filter out features that are correlated beyond a certain absolute threshold. When a violating correlation is identified, the feature with the highest mean absolute correlation is removed (see also).
Before filtering out collinear features, let's take a look at the correlation matrix.
Step6: Dropping features
As you'll see in the next section (Pipelines), where certain preprocessing steps take place matters. If there are a subset of features on which you don't want to model or process, you can drop them out. Sometimes this is more effective than creating a list of potentially thousands of feature names to pass as the feature_names parameter.
Step7: skutil.h2o modeling
Skutil's h2o module allows us to form the Pipeline objects we're familiar with from sklearn. This permits us to string a series of preprocessors together, with an optional H2OEstimator as the last step. Like sklearn Pipelines, the first argument is a single list of length-two tuples (where the first arg is the name of the step, and the second is the Estimator/Transformer), however the H2OPipeline takes two more arguments
Step8: Which features were retained?
We can see which features were modeled on with the training_cols_ attribute of the fitted pipe.
Step9: Hyperparameter optimization
With relatively little effort, we got > 93% accuracy on our validation set! Can we improve that? We can use sklearn-esque grid searches, which also allow us to search over preprocessor objects to optimize a set of hyperparameters.
Step10: Model evaluation
Beyond merely observing our validation set score, we can dig into the cross validation scores of each model in our H2O grid search, and select the model that has not only the best mean score, but the model that minimizes variability in the CV scores.
Step11: Variable importance
We can easily extract the best model's variable importances like so
Step12: Model evaluation—introduce the validation set
So our best estimator achieves a mean cross validation accuracy of 93%! We can predict on our best estimator as follows
Step13: Model selection
(Not shown
Step14: Loading and making predictions
Step15: Cleanup
Always make sure to shut down your cluster... | Python Code:
from __future__ import print_function, division, absolute_import
import warnings
import skutil
import sklearn
import h2o
import pandas as pd
import numpy as np
# we'll be plotting inline...
%matplotlib inline
print('Skutil version: %s' % skutil.__version__)
print('H2O version: %s' % h2o.__version__)
print('Numpy version: %s' % np.__version__)
print('Sklearn version: %s' % sklearn.__version__)
print('Pandas version: %s' % pd.__version__)
Explanation:
skutil
Skutil brings the best of both worlds to H2O and sklearn, delivering an easy transition into the world of distributed computing that H2O offers, while providing the same, familiar interface that sklearn users have come to know and love. This notebook will give an example of how to use skutil preprocessors with H2OEstimators and H2OFrames.
Author: Taylor G Smith
Contact: tgsmith61591@gmail.com
Python packages you will need:
- python 2.7
- numpy >= 1.6
- scipy >= 0.17
- scikit-learn >= 0.16
- pandas >= 0.18
- cython >= 0.22
- h2o >= 3.8.2.9
Misc. requirements (for compiling Fortran a la f2py):
- gfortran
- gcc
- Note that the El Capitan Apple Developer tool upgrade necessitates upgrading this! Use:
brew upgrade gcc
This notebook is intended for an audience with a working understanding of machine learning principles and a background in Python development, ideally sklearn or H2O users. Note that this notebook is not designed to teach machine learning, but to demonstrate use of the skutil package.
Procession of events:
Data split—always the first step!
Preprocessing:
Balance response classes in train set
Remove near-zero variance features
Remove multicollinear features
Modeling
Formulate pipeline
Grid search
Model selection
... (not shown here, but other models built)
All models finally evaluated against holdout
Model persistence
End of explanation
with warnings.catch_warnings():
warnings.simplefilter('ignore')
# I started this cluster up via CLI with:
# $ java -Xmx2g -jar /anaconda/h2o_jar/h2o.jar
h2o.init(ip='10.7.187.84', port=54321, start_h2o=False)
Explanation: Initialize H2O
First, we'll start our H2O cluster...
End of explanation
from sklearn.datasets import load_breast_cancer
from skutil.h2o.util import from_pandas
# import data, load into pandas
bc = load_breast_cancer()
X = pd.DataFrame.from_records(data=bc.data, columns=bc.feature_names)
X['target'] = bc.target
# push to h2o cloud
X = from_pandas(X)
print(X.shape)
X.head()
# Here are our feature names:
x = list(bc.feature_names)
y = 'target'
Explanation: Load data
We'll load sklearn's breast cancer data. Using skutil's from_pandas method, we can upload a Pandas frame to the H2O cloud
End of explanation
from skutil.h2o import h2o_train_test_split
# first, let's make sure our target is a factor
X[y] = X[y].asfactor()
# we'll use 75% of the data for training, 25% for validation
X_train, X_val = h2o_train_test_split(X, train_size=0.75, random_state=42)
# make sure we did it right...
# assert X.shape[0] == (X_train.shape[0] + X_val.shape[0])
Explanation: train/test split
Sklearn provides a great mechanism for splitting data into a train and validation set. Skutil provides the same mechanism for h2o frames. This cell does the following:
Makes the response variable an enum
Creates two splits:
X_train: 75%
X_val: 25%
End of explanation
from skutil.h2o import H2ONearZeroVarianceFilterer
# Let's determine whether we're at risk for any near-zero variance
nzv = H2ONearZeroVarianceFilterer(feature_names=x, target_feature=y, threshold=1e-4)
nzv.fit(X_train)
# let's see if anything was dropped...
nzv.drop_
nzv.var_
Explanation: preprocessing with skutil.h2o
Skutil provides an h2o module which delivers some skutil feature_selection classes that can operate on an H2OFrame. Each BaseH2OTransformer has the following __init__ signature:
BaseH2OTransformer(self, feature_names=None, target_feature=None)
The selector will only operate on the feature_names (if provided—else it will operate on all features) and will always exclude the target_feature.
The first step would be to ensure our data is balanced, as we don't want imbalanced minority/majority classes. The problem of class imbalance is well-documented, and many solutions have been proposed. Skutil provides a mechanism by which we could over-sample the minority class using the H2OOversamplingClassBalancer, or under-sample the majority class using the H2OUndersamplingClassBalancer.
Fortunately for us, the classes in this dataset are fairly balanced, so we can move on to the next piece.
Handling near-zero variance
Some predictors contain few unique values and are considered "near-zero variance" predictors. For parametric many models, this may cause the fit to be unstable. Skutil's NearZeroVarianceFilterer and H2ONearZeroVarianceFilterer drop features with variance below a given threshold (based on caret's preprocessor).
Note: sklearn added this in 0.18 (released last week) under VarianceThreshold
End of explanation
from skutil.h2o import h2o_corr_plot
# note that we want to exclude the target!!
h2o_corr_plot(X_train[x], xticklabels=x, yticklabels=x)
from skutil.h2o import H2OMulticollinearityFilterer
# Are we at risk of any multicollinearity?
mcf = H2OMulticollinearityFilterer(feature_names=x, target_feature=y, threshold=0.90)
mcf.fit(X_train)
# we can look at the dropped features
mcf.correlations_
Explanation: Multicollinearity
Multicollinearity (MC) can be detrimental to the fit of parametric models (for our example, we're going to use a tree-based model, which is non-parametric, but the demo is still useful), and can cause confounding results in some models' variable importances. With skutil, we can filter out features that are correlated beyond a certain absolute threshold. When a violating correlation is identified, the feature with the highest mean absolute correlation is removed (see also).
Before filtering out collinear features, let's take a look at the correlation matrix.
End of explanation
from skutil.h2o import H2OFeatureDropper
# maybe I don't like 'mean fractal dimension'
dropper = H2OFeatureDropper(feature_names=['mean fractal dimension'], target_feature=y)
transformed = dropper.fit_transform(X_train)
# we can ensure it's not there
assert not 'mean fractal dimension' in transformed.columns
Explanation: Dropping features
As you'll see in the next section (Pipelines), where certain preprocessing steps take place matters. If there are a subset of features on which you don't want to model or process, you can drop them out. Sometimes this is more effective than creating a list of potentially thousands of feature names to pass as the feature_names parameter.
End of explanation
from skutil.h2o import H2OPipeline
from h2o.estimators import H2ORandomForestEstimator
from skutil.h2o.metrics import h2o_accuracy_score # same as sklearn's, but with H2OFrames
# let's fit a pipeline with our estimator...
pipe = H2OPipeline([
('nzv', H2ONearZeroVarianceFilterer(threshold=1e-1)),
('mcf', H2OMulticollinearityFilterer(threshold=0.95)),
('rf' , H2ORandomForestEstimator(ntrees=50, max_depth=8, min_rows=5))
],
# feature_names is the set of features the first transformer
# will operate on. The remaining features will be passed
# to the next step
feature_names=x,
target_feature=y)
# fit...
pipe = pipe.fit(X_train)
# eval accuracy on validation set
pred = pipe.predict(X_val)
actual = X_val[y]
pred = pred['predict']
print('Validation accuracy: %.5f' % h2o_accuracy_score(actual, pred))
Explanation: skutil.h2o modeling
Skutil's h2o module allows us to form the Pipeline objects we're familiar with from sklearn. This permits us to string a series of preprocessors together, with an optional H2OEstimator as the last step. Like sklearn Pipelines, the first argument is a single list of length-two tuples (where the first arg is the name of the step, and the second is the Estimator/Transformer), however the H2OPipeline takes two more arguments: feature_names and target_feature.
Note that the feature_names arg is the names the first preprocessor will operate on; after that, all remaining feature names (i.e., not the target) will be passed to the next processor.
End of explanation
pipe.training_cols_
Explanation: Which features were retained?
We can see which features were modeled on with the training_cols_ attribute of the fitted pipe.
End of explanation
from skutil.h2o import H2ORandomizedSearchCV
from skutil.h2o import H2OKFold
from scipy.stats import uniform, randint
# define our random state
rand_state = 2016
# we have the option to choose the model that maximizes CV scores,
# or the model that minimizes std deviations between CV scores.
# let's choose the former for this example
minimize = 'bias'
# let's redefine our pipeline
pipe = H2OPipeline([
('nzv', H2ONearZeroVarianceFilterer()),
('mcf', H2OMulticollinearityFilterer()),
('rf' , H2ORandomForestEstimator(seed=rand_state))
])
# our hyperparameters over which to search...
hyper = {
'nzv__threshold' : uniform(1e-4,1e-1), # see scipy.stats.uniform:
'mcf__threshold' : uniform(0.7, 0.29), # uniform in range (0.7 + 0.29)
'rf__ntrees' : randint(50, 100),
'rf__max_depth' : randint(10, 12),
'rf__min_rows' : randint(25, 50)
}
# define our grid search
search = H2ORandomizedSearchCV(
estimator=pipe,
param_grid=hyper,
feature_names=x,
target_feature=y,
n_iter=2, # keep it small for our demo...
random_state=rand_state,
scoring='accuracy_score',
cv=H2OKFold(n_folds=3, shuffle=True, random_state=rand_state),
verbose=3,
minimize=minimize
)
# fit
search.fit(X_train)
Explanation: Hyperparameter optimization
With relatively little effort, we got > 93% accuracy on our validation set! Can we improve that? We can use sklearn-esque grid searches, which also allow us to search over preprocessor objects to optimize a set of hyperparameters.
End of explanation
from skutil.utils import report_grid_score_detail
# now let's look deeper...
sort_by = 'std' if minimize == 'variance' else 'score'
report_grid_score_detail(search, charts=True, sort_results=True,
ascending=minimize=='variance',
sort_by=sort_by)
Explanation: Model evaluation
Beyond merely observing our validation set score, we can dig into the cross validation scores of each model in our H2O grid search, and select the model that has not only the best mean score, but the model that minimizes variability in the CV scores.
End of explanation
search.varimp()
Explanation: Variable importance
We can easily extract the best model's variable importances like so:
End of explanation
val_preds = search.predict(X_val)
# print accuracy
print('Validation accuracy: %.5f' % h2o_accuracy_score(actual, val_preds['predict']))
val_preds.head()
Explanation: Model evaluation—introduce the validation set
So our best estimator achieves a mean cross validation accuracy of 93%! We can predict on our best estimator as follows:
End of explanation
import os
# get absolute path
cwd = os.getcwd()
model_path = os.path.join(cwd, 'grid.pkl')
# save -- it's that easy!!!
search.save(location=model_path, warn_if_exists=False)
Explanation: Model selection
(Not shown: other models we built and evaluated against the validation set (once!)—we only introduce the holdout set at the very end)
In a real situation, you probably will have a holdout set, and will have built several models. After you have a collection of models and you'd like to select one, you introduce the holdout set only once!
Model persistence
When we find a model that performs well, we can save it to disk for later use:
End of explanation
search = H2ORandomizedSearchCV.load(model_path)
new_predictions = search.predict(X_val)
new_predictions.head()
Explanation: Loading and making predictions
End of explanation
h2o.shutdown(prompt=False) # shutdown cluster
os.unlink(model_path) # remove the pickle file...
Explanation: Cleanup
Always make sure to shut down your cluster...
End of explanation |
1,873 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Run this notebook to produce the cutout catalogs!
Potential TODO
Step1: Create the knownlens catalog
Step2: Convert the annotated catalog and knownlens catalog into cluster catalogs and cutouts | Python Code:
import pandas as pd
import swap
base_collection_path = '/nfs/slac/g/ki/ki18/cpd/swap/pickles/15.09.02/'
base_directory = '/nfs/slac/g/ki/ki18/cpd/swap_catalog_diagnostics/'
annotated_catalog_path = base_directory + 'annotated_catalog.csv'
cut_empty = True
stages = [1, 2]
categories = ['ID', 'ZooID', 'location', 'mean_probability', 'category', 'kind', 'flavor',
'state', 'status', 'truth', 'stage', 'line']
annotation_categories = ['At_X', 'At_Y', 'PD', 'PL']
catalog = []
for stage in stages:
print(stage)
collection_path = base_collection_path + 'stage{0}'.format(stage) + '/CFHTLS_collection.pickle'
collection = swap.read_pickle(collection_path, 'collection')
for ID in collection.list():
subject = collection.member[ID]
catalog_i = []
# for stage1 we shall skip the tests for now
if (stage == 1) * (subject.category == 'test'):
continue
# flatten out x and y. also cut out empty entries
annotationhistory = subject.annotationhistory
x_unflat = annotationhistory['At_X']
x = np.array([xi for xj in x_unflat for xi in xj])
# cut out catalogs with no clicks
if (len(x) < 1) and (cut_empty):
continue
# oh yeah there's that absolutely nutso entry with 50k clicks
if len(x) > 10000:
continue
for category in categories:
if category == 'stage':
catalog_i.append(stage)
elif category == 'line':
catalog_i.append(line)
else:
catalog_i.append(subject.__dict__[category])
for category in annotation_categories:
catalog_i.append(list(annotationhistory[category]))
catalog.append(catalog_i)
catalog = pd.DataFrame(catalog, columns=categories + annotation_categories)
# save catalog
catalog.to_csv(annotated_catalog_path)
Explanation: Run this notebook to produce the cutout catalogs!
Potential TODO: Write code for creating the pickles?
Potential TODO: Write code for downloading all the fields in advance?
Create the annotated csv catalog
End of explanation
knownlens_dir = '/nfs/slac/g/ki/ki18/cpd/code/strongcnn/catalog/knownlens/'
knownlensID = pd.read_csv(knownlens_dir + 'knownlensID', sep=' ')
listfiles_d1_d11 = pd.read_csv(knownlens_dir + 'listfiles_d1_d11.txt', sep=' ')
knownlenspath = knownlens_dir + 'knownlens.csv'
X2 = listfiles_d1_d11[listfiles_d1_d11['CFHTID'].isin(knownlensID['CFHTID'])] # cuts down to like 212 entries.
ZooID = []
for i in range(len(knownlensID)):
ZooID.append(X2['ZooID'][X2['CFHTID'] == knownlensID['CFHTID'][i]].values[0])
knownlensID['ZooID'] = ZooID
knownlensID.to_csv(knownlenspath)
Explanation: Create the knownlens catalog
End of explanation
# code to regenerate the catalogs
base_directory = '/nfs/slac/g/ki/ki18/cpd/swap_catalog_diagnostics/'
cluster_directory = base_directory
## uncomment this line when updating the shared catalog!
# base_directory = '/nfs/slac/g/ki/ki18/cpd/swap_catalog/'
# cluster_directory = base_directory + 'clusters/'
field_directory = base_directory
knownlens_path = base_directory + 'knownlens.csv'
collection_path = base_directory + 'annotated_catalog.csv'
catalog_path = cluster_directory + 'catalog.csv'
# if we're rerunning this code, we should remove the old cluster pngs,
# all of which have *_*.png
from glob import glob
files_to_delete = glob(cluster_directory + '*_*.png')
from os import remove
for delete_this_file in files_to_delete:
remove(delete_this_file)
# run create catalog code. This can take a while.
from subprocess import call
command = ['python', '/nfs/slac/g/ki/ki18/cpd/code/strongcnn/code/create_catalogs.py',
'--collection', collection_path,
'--knownlens', knownlens_path,
'--clusters', cluster_directory,
'--fields', field_directory,
#'--augment', augmented_directory,
#'--do_a_few', '100',
]
call(command)
Explanation: Convert the annotated catalog and knownlens catalog into cluster catalogs and cutouts
End of explanation |
1,874 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Midterm Review
CSCI 1360E
Step1: Answering this is not simply taking what's in the autograder and copy-pasting it into your solution
Step2: The whole point is that your code should generalize to any possible input.
To that end, you want to perform the actual operations required
Step3: With great looping power comes great looping responsibility.
The question that involved finding the first negative number in a list of numbers gave a lot of folks problems. By that, I mean folks combined simultaneous for and while loops, inadvertently creating more problems with some very difficult-to-follow program behavior.
Thing to remember
Step4: From A2
zip() is an amazing mechanism for looping with MULTIPLE collections at once
There were very few students who deigned to use zip(); if you can learn how to use it, it will make your life considerably easier whenever you need to loop through multiple lists at the same time.
Take the question on computing magnitudes of 3D vectors.
Step5: Since all three lists--X, Y, and Z--are the same length, you could run a for loop with an index through one of them, and use that index across all three. That would work just fine.
Step6: ...but it's very verbose, and can get a bit difficult to follow.
If, instead, you use zip, you don't need to worry about using range or computing the len of a list or even extracting the right x,y,z from each index
Step7: Look how much cleaner that is!
The zip function is amazing. You can certainly get along without it, as shown in the previous slide, but it handles so much of that work for you, so I encourage you to practice using it.
Indentation in Python may be the most important rule of all.
I cannot overemphasize how important it is for your code indentation to be precise and exact.
Indentation dictates whether a line of code is part of an if statement or not, part of a for loop or not, part of a try block or not, even part of a function or not.
There were quite a few examples of code that was completely correct, but it wasn't indented properly--for example, it wasn't indented under a for loop, so the line was executed only once, after the for loop finished running.
From A3
if statements don't always need an else.
I saw this a lot
Step8: if statements are adults; they can handle being short-staffed, as it were. If there's literally nothing to do in an else clause, you're perfectly able to omit it entirely
Step9: An actual example of reference versus value.
This was the bonus question from A3 about building a list-of-lists matrix.
Some of you had a very clever solution that technically worked, but would fail spectacularly the moment you actually tried to use the matrix built by the function.
In short
Step10: Certainly looks ok--3 "rows" (i.e. lists), each with 4 0s in them. Why is this a problem?
Let's try changing one of the 0s to something else. Say, change the 0 in the upper left (position 0, 0) to a 99.
Step11: Now if we print this... what do you think we'll see?
Step12: cue The Thriller
This is a pass-by-reference problem. You've appended three references to the same object, so when you update one of them, you actually update them all.
The fix is to use a nested-for loop to create everything on-the-fly
Step13: From A4
len(ndarray) versus ndarray.shape
For the question about checking that the lengths of two NumPy arrays were equal, a lot of people chose this route
Step14: which works, but only for one-dimensional arrays.
For anything other than 1-dimensional arrays, things get problematic
Step15: These definitely are not equal in length. But that's because len doesn't measure length of matrices...it only measures the number of rows (i.e., the first axis--which in this case is 5 in both, hence it thinks they're equal).
You definitely want to get into the habit of using the .shape property of NumPy arrays
Step16: We get the answer we expect.
Other Questions from the Google Hangouts Review Session
The Tale of Two for Loops
Step17: 1
Step18: 2
Step19: In general
Step20: Normalization by any other name
This confused some folks, namely because "normalization" can mean a lot of different things. In particular, two different types of normalization were conflated
Step21: This can then be condensed into the 3 lines required by the question | Python Code:
number = 3.14159265359
Explanation: Midterm Review
CSCI 1360E: Foundations for Informatics and Analytics
Material
Anything in Lectures 1 through 10 are fair game!
Anything in assignments 1 through 4 are fair game!
Topics
Data Science
- Definition
- Intrinsic interdisciplinarity
- "Greater Data Science"
Python Language
- Philosophy
- Compiled vs Interpreted
- Variables, literals, types, operators (arithmetic and comparative)
- Casting, typing system
- Syntax (role of whitespace)
Data Structures
- Collections (lists, sets, tuples, dictionaries)
- Iterators, generators, and list comprehensions
- Loops (for, while), loop control (break, continue), and utility looping functions (zip, enumerate)
- Variable unpacking
- Indexing and slicing
- Differences in indexing between collection types (tuples versus sets, lists versus dictionaries)
Conditionals
- if / elif / else structure
- Boolean algebra (stringing together multiple conditions with or and and)
Exception handling
- try / except structure, and what goes in each block
Functions
- Defining functions
- Philosophy of a function
- Defining versus calling (invoking) a function
- Positional (required) versus default (optional) arguments
- Keyword arguments
- Functions that take any number of arguments
- Object references, and their behaviors in Python
NumPy
- Importing external libraries
- The NumPy ndarray, its properties (.shape), and indexing
- NumPy submodules
- Vectorized arithmetic in lieu of explicit loops
- NumPy array dimensions, or axes, and how they relate to the .shape property
- Array broadcasting, uses and rules
- Fancy indexing with boolean and integer arrays
Midterm Logistics
The format will be very close to that of JupyterHub assignments (there may or may not be autograders to help).
It will be 90 minutes. Don't expect any flexibility in this time limit, so plan accordingly.
You are NOT allowed to use internet resources or collaborate with your classmates (enforced by the honor system), but you ARE allowed to use lecture and assignment materials from this course, as well as terminals in the JupyterHub environment or on your local machine.
I will be available on Slack for questions most of the day tomorrow, from 9am until about 3pm (then will be back online around 4pm until 5pm). Shoot me a direct message if you have a conceptual / technical question relating to the midterm, and I'll do my best to answer ASAP.
JupyterHub Logistics
The midterm will be released on JupyterHub at 12:00am on Thursday, June 29.
It will be collected at 12:00am on Friday, June 30. The release and collection will be done by automated scripts, so believe me when I say there won't be any flexibility on the parts of these mechanisms.
Within that 24-hour window, you can start the midterm (by "Fetch"-ing it on JupyterHub) whenever you like.
ONCE YOU FETCH THE MIDTERM, YOU WILL HAVE 90 MINUTES FROM THAT MOMENT TO SUBMIT THE COMPLETED MIDTERM BACK.
Furthermore, it's up to you to keep track of that time. Look at your system clock when you click "Fetch", or use the timer app on your smartphone, to help you track your time use. Once the 90 minutes are up, the exam is considered late.
In theory, this should allow you to take the midterm when it is most convenient for you. Obviously you should probably start no later than 10:30PM tomorrow, since any submissions after midnight on Friday will be considered late, even if you started at 11:58PM.
Tough Assignment Questions and Concepts
From A1
Do NOT hard-code answers!
For example, take the question on taking the square root of a number and converting it to a string:
End of explanation
number = "1.7724538509055743"
Explanation: Answering this is not simply taking what's in the autograder and copy-pasting it into your solution:
End of explanation
number = 3.14159265359
number = number ** 0.5 # Raise to the 0.5, which means square root.
number = str(number) # Cast to a string.
Explanation: The whole point is that your code should generalize to any possible input.
To that end, you want to perform the actual operations required: as stated in the directions, this involves taking the square root and converting the answer to a string:
End of explanation
def first_negative(numbers):
num = 0
index = 0
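    # walk forward until the first non-positive value (assumes the list contains one)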
while numbers[index] > 0:
index += 1
num = numbers[index]
return num
first_negative([1, 2, 3, -1])
first_negative([10, -10, -100, -50, -75, 10])
Explanation: With great looping power comes great looping responsibility.
The question that involved finding the first negative number in a list of numbers gave a lot of folks problems. By that, I mean folks combined simultaneous for and while loops, inadvertently creating more problems with some very difficult-to-follow program behavior.
Thing to remember: both loops can solve the same problems, but they lend themselves to different ones. So in almost all cases, you'll only need 1 of them to solve a given problem.
In this case: if you need to perform operations on every element of a list, for is your friend. If you need to do something repeatedly until some condition is satisfied, while is your operator. This question better fits the latter than the former.
End of explanation
def compute_3dmagnitudes(X, Y, Z):
magnitudes = []
### BEGIN SOLUTION
### END SOLUTION
return magnitudes
Explanation: From A2
zip() is an amazing mechanism for looping with MULTIPLE collections at once
There were very few students who deigned to use zip(); if you can learn how to use it, it will make your life considerably easier whenever you need to loop through multiple lists at the same time.
Take the question on computing magnitudes of 3D vectors.
End of explanation
def compute_3dmagnitudes(X, Y, Z):
magnitudes = []
length = len(X)
for i in range(length):
# Pull out the corresponding (x, y, z) coordinates.
x = X[i]
y = Y[i]
z = Z[i]
### Do the magnitude computation ###
return magnitudes
Explanation: Since all three lists--X, Y, and Z--are the same length, you could run a for loop with an index through one of them, and use that index across all three. That would work just fine.
End of explanation
def compute_3dmagnitudes(X, Y, Z):
magnitudes = []
for x, y, z in zip(X, Y, Z):
pass
### Do the magnitude computation ###
return magnitudes
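# One possible completion of the loop body above (replacing the `pass`):
#     magnitudes.append((x**2 + y**2 + z**2) ** 0.5)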
Explanation: ...but it's very verbose, and can get a bit difficult to follow.
If, instead, you use zip, you don't need to worry about using range or computing the len of a list or even extracting the right x,y,z from each index:
End of explanation
def list_of_positive_indices(numbers):
indices = []
for index, element in enumerate(numbers):
if element > 0:
indices.append(index)
else:
pass # Why are we here? What is our purpose? Do we even exist?
return indices
Explanation: Look how much cleaner that is!
The zip function is amazing. You can certainly get along without it, as shown in the previous slide, but it handles so much of that work for you, so I encourage you to practice using it.
Indentation in Python may be the most important rule of all.
I cannot overemphasize how important it is for your code indentation to be precise and exact.
Indentation dictates whether a line of code is part of an if statement or not, part of a for loop or not, part of a try block or not, even part of a function or not.
There were quite a few examples of code that was completely correct, but it wasn't indented properly--for example, it wasn't indented under a for loop, so the line was executed only once, after the for loop finished running.
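A minimal sketch of that pattern (purely illustrative):
results = []
for n in [1, 2, 3]:
    results.append(n ** 2)   # indented under the loop: runs every iteration -> [1, 4, 9]
results = []
for n in [1, 2, 3]:
    value = n ** 2
results.append(value)        # NOT indented under the loop: runs once, after it ends -> [9]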
From A3
if statements don't always need an else.
I saw this a lot:
End of explanation
def list_of_positive_indices(numbers):
indices = []
for index, element in enumerate(numbers):
if element > 0:
indices.append(index)
return indices
Explanation: if statements are adults; they can handle being short-staffed, as it were. If there's literally nothing to do in an else clause, you're perfectly able to omit it entirely:
End of explanation
def make_matrix(rows, cols):
pre_built_row = []
# Build a single row that has <cols> 0s.
for j in range(cols):
pre_built_row.append(0)
# Now build a list of the rows.
matrix = []
for i in range(rows):
matrix.append(pre_built_row)
return matrix
m = make_matrix(3, 4)
print(m)
Explanation: An actual example of reference versus value.
This was the bonus question from A3 about building a list-of-lists matrix.
Some of you had a very clever solution that technically worked, but would fail spectacularly the moment you actually tried to use the matrix built by the function.
In short: rather than construct a matrix of 0s one element at a time, the strategy was to pre-construct a row of 0s, and then use just 1 loop to append this pre-built list a certain number of times.
It was clever in that it avoided the need for nested loops, which are certainly difficult to write and understand under the best of circumstances! But you'd see some odd behavior if you tried to use the matrix that came out...
End of explanation
m[0][0] = 99
Explanation: Certainly looks ok--3 "rows" (i.e. lists), each with 4 0s in them. Why is this a problem?
Let's try changing one of the 0s to something else. Say, change the 0 in the upper left (position 0, 0) to a 99.
End of explanation
print(m)
Explanation: Now if we print this... what do you think we'll see?
End of explanation
def make_matrix(rows, cols):
matrix = []
for i in range(rows):
matrix.append([]) # First, append an empty list for the new row.
for j in range(cols):
matrix[i].append(0) # Now grow that empty list.
return matrix
m = make_matrix(3, 4)
print(m)
m[0][0] = 99
print(m)
Explanation: cue The Thriller
This is a pass-by-reference problem. You've appended three references to the same object, so when you update one of them, you actually update them all.
The fix is to use a nested-for loop to create everything on-the-fly:
End of explanation
# Some test data
import numpy as np
x = np.random.random(10)
y = np.random.random(10)
len(x) == len(y)
Explanation: From A4
len(ndarray) versus ndarray.shape
For the question about checking that the lengths of two NumPy arrays were equal, a lot of people chose this route:
End of explanation
x = np.random.random((5, 5)) # A 5x5 matrix
y = np.random.random((5, 10)) # A 5x10 matrix
len(x) == len(y)
Explanation: which works, but only for one-dimensional arrays.
For anything other than 1-dimensional arrays, things get problematic:
End of explanation
x = np.random.random((5, 5)) # A 5x5 matrix
y = np.random.random((5, 10)) # A 5x10 matrix
x.shape == y.shape
Explanation: These definitely are not equal in length. But that's because len doesn't measure length of matrices...it only measures the number of rows (i.e., the first axis--which in this case is 5 in both, hence it thinks they're equal).
You definitely want to get into the habit of using the .shape property of NumPy arrays:
End of explanation
import numpy as np
# Generate a random list to work with as an example.
some_list = np.random.random(10).tolist()
print(some_list)
Explanation: We get the answer we expect.
Other Questions from the Google Hangouts Review Session
The Tale of Two for Loops
End of explanation
for element in some_list:
print(element)
Explanation: 1: Looping through elements
End of explanation
list_length = len(some_list)
for index in range(list_length): # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
element = some_list[index]
print(element)
Explanation: 2: Looping over indices of elements
End of explanation
def count_substring(base_string, substring, case_insensitive = True):
count = 0
    if case_insensitive:
        # lower-case both strings so the comparison ignores case
        base_string = base_string.lower()
        substring = substring.lower()
    length = len(substring)
    index = 0
    while (index + length) <= len(base_string):   # <= so a match at the very end is counted
# Sliding window.
substring_to_test = base_string[index : (index + length)]
if substring_to_test == substring:
count += 1
index += 1
return count
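# quick sanity check of the function above
count_substring("banana", "an")   # -> 2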
Explanation: In general:
If you don't care about list order, or where you are in the list--use "loop by element"
If ordering of elements MATTERS, or where you are in the list during a loop is important--use "loop by index"
Sliding windows for finding substrings in a longer string
From A4, Bonus Part A:
End of explanation
numbers = [10, 20, 30, 40]
print(sum(numbers))
numbers = [10/100, 20/100, 30/100, 40/100]
# (0.1 + 0.2 + 0.3 + 0.4) = 1.0
print(sum(numbers))
import numpy as np
def normalize(something):
# Compute the normalizing constant
s = something.sum()
# Use vectorized programming (broadcasting) to normalize each element
# without the need for any loops
normalized = (something / s)
return normalized
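# For contrast, "normalization" in the unit-magnitude sense (type 1 described below)
# divides by the Euclidean norm, so the result has length 1 rather than summing to 1:
v = np.array([10., 20., 30., 40.])
unit_v = v / np.sqrt((v ** 2).sum())      # same idea as v / np.linalg.norm(v)
print(np.sqrt((unit_v ** 2).sum()))       # -> 1.0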
Explanation: Normalization by any other name
This confused some folks, namely because "normalization" can mean a lot of different things. In particular, two different types of normalization were conflated:
1: Rescale vector elements so the vector's magnitude is 1
NOT the same thing as having all the vector elements SUM to 1
2: Rescale vector elements so they all sum to 1
What the Bonus, Part B in A4 was actually asking for (even though the autograder was terrible)
tl;dr These are both perfectly valid forms of normalization. It's just that the autograder was horrible. Here's what the spirit of the question was asking for:
End of explanation
import numpy as np # 1
def normalize(something): # 2
return something / something.sum() # 3
Explanation: This can then be condensed into the 3 lines required by the question:
End of explanation |
1,875 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EXERCISE
Step1: Exercise
Step2: Some information about the seismic cube
Step3: Exercise
Step4: Exercise
Step5: Exercise
Step6: Exercise | Python Code:
import numpy as np
import matplotlib.pyplot as plt
% matplotlib inline
Explanation: EXERCISE: Seismic – an array of numbers
The numpy array object
End of explanation
import time
start = time.time()
data = np.loadtxt('data/seismic_cube.txt')
end = time.time()
elapsed = end - start
print("Time taken to read volume: {:.2f} seconds.".format(elapsed))
Explanation: Exercise:
This exercise requires the file called seismic_cube.txt (120 MB). Download it and place it in your data folder,
a) Use np.loadtxt(...) to load the text file named data/seismic_cube.txt into an numpy array called data
b) How many dimensions does this data array have?
c) How many elements does data have in each dimension (np.shape)?
d) How much space does this object take up in memory?
e) How much time did it take? (seconds) (import time)
End of explanation
nIL = 194 # number of inlines
nXL = 299 # number of crosslines
nt = 463 # number of samples per trace
dt = 0.004 # sample rate in seconds
Explanation: Some information about the seismic cube:
Number of inlines: 194
Number of crosslines: 299
Number of samples per trace: 463
Sample rate in seconds: 0.004
I'm giving you this information, but in practice, this would probably come from the file's header or meta-data section
End of explanation
# enter your code here
Explanation: Exercise: Use numpy's reshape function to turn this array into a 3D array. Print the shape attribute of the reshaped object to verify that this object has the correct number of inlines and crosslines
End of explanation
# enter your code here
Explanation: Exercise: Get rid of some bad data for the last few samples on each trace; take the first 450 samples (of the last dimension)
End of explanation
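# (A hedged sketch of what the two exercises above might produce -- the names below are
#  assumptions, not the required solution:
#      cube = data.reshape(nIL, nXL, nt)[:, :, :450]
#      xline = cube[:, 150, :]    # every inline along crossline 150
#  The cell below assumes an object like `xline` already exists.)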
s = xline[int(xline.shape[0]/2), ...]
s.shape
%matplotlib inline
t = np.arange(0, 450 * 4, 4)
a = 12
#
fig = plt.figure(figsize=(3,9), facecolor='w')
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
ax.plot(s,t, 'g--')
ax.set_xlim(-a*s.std(), a*s.std())
ax.grid()
fig.savefig('dotted_green.png', dpi=300)
#plt.plot(s, t) #, 'g*', lw=2.0, alpha=0.25)
#plt.xlim(-2.0*s.std(), 2.0*s.std())
Explanation: Exercise: create a new object called xline by extracting crossline 150 from the cube
What are the dimensions of xline?
Make a plot of xline using plt.imshow(xline)
Pass an argument into the imshow function to change the ugly default to your favourite colourbar!
Is there anything funny / wrong / interesting about the line? Can you fix it?
Use matplotlib.pyplot's figure function to make a figure object, fig
Exercise: grab the trace in the middle of xline to answer the following questions
a) How many time samples are in this trace?
b) What is the range of peak to peak values of the trace?
c) What is the maximum value along this trace? And where is it located?
d) What is the minimum value along this trace? And where is it located?
e) Seismic traces should have a mean value close to zero. Does this trace have a mean value close to zero?
End of explanation
from scipy import fft
S = abs(fft(s))
power = 20 * np.log10(S)              # convert the amplitude spectrum to dB
faxis = np.fft.fftfreq(len(power), d=dt)
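# A minimal sketch of how the spectrum computed above might be plotted
# (positive frequencies only; styling is left to you):
plt.plot(faxis[:len(faxis)//2], power[:len(power)//2])
plt.xlabel('frequency (Hz)')
plt.ylabel('power (dB)')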
Explanation: Exercise: Plot this seismic trace (vertically!)
- make a "time axis": `np.arange(first_sample, last_sample, sample_rate)
- use matplotlib's `plt.plot` function (documentation!)
- other `plt` functions to check out: `plt.xlim`, `plt.title`, `plt.ylabel`, `plt.invert_yaxis()`, `plt.grid`
Trace statistics
Exercise: use the plt.hist(s, bins) function to create a histogram with 100 bins
As suspected, most of the data values are close to zero, and fan outward (more or less symmetrically) to larger positive and negative values.
To be sure, we can include the values from the entire line to build up better statistics. However, in order to pass a 2D array to the `plt.hist` function, we have to unravel it first.
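One possible one-liner for that (assuming `xline` is the 2D slice built earlier): `plt.hist(xline.ravel(), bins=100)`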
Trace Bandwidth
Let's look at the frequency content of this trace. To do this we will need to use the fast Fourier transform function from the SciPy FFT module. It helps to know that the sample rate of the trace is 0.004 s
End of explanation |
1,876 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Autofig Limits
Step1: Here we'll explore the different limit-styles on two plots - the first (blue) where the independent-variable is in the x-dimension, and the second (red) with an external independent-variable. We'll also throw a green line on each with "consider_for_limits" set to False. This means that it will still be drawn in each frame but will not affect any automatically determined limits.
For the sake of consistency, we'll leave the padding at its default value of 10% throughout.
Step2: NOTE
Step3:
Step4: Automatic Symmetric Fixed-Limits
By setting the limits to 'symmetric', the limits will be computed as above, but forced to be symmetric about zero.
Step5: User-Defined Fixed-Limits
By manually setting either or both of the bounds on the limits, we can override the automatic behavior, but still get fixed limits throughout the animation.
NOTE
Step6: We can of course allow either the lower or upper bound to still remain automatic
Step7: and we can also set fixed limits for the y-limits
Step8: For completeness, we'll include an example with the external independent-variable as well
Step9: Automatic Sliding-Limits
By setting the limits to a single value instead of a tuple, the range of the limits will remain fixed, re-centering on the "current" value for each frame (where this centered value is determined as the average position of the highlighted markers, ignoring any in which consider_for_limits=False).
By setting this single value to None, the range itself will automatically be determined. The range is determined as follows
Step10: In the example above, since the independent-variable is in the same direction as the sliding axes, there is no spread in the x-direction of the current points. Therefore the range fallsback on 10% of the full range.
Let's instead set the y-limits to sliding.
Step11:
Step12: To truly center the central values, we can allow both limits to automatically slide.
Step13: User-Defined Sliding-Limits
By setting the limits to a single float, the range used during sliding limits can manually be set.
Step14:
Step15: Here we'll provide a user-defined range in a dimension that differs between the two plotted calls. Note how the axes limits are centered on the average value between the two highlighted points at any given frame.
Step16: Automatic Per-Frame Limits
By setting the limits to 'frame', they are automatically determined per-frame based on the settings provided by uncover and padding.
Step17:
Step18: Because these are recomputed per-frame, the result for external independent-variables can look a little different as the axes can be stretched in any direction to account for the "addition" of new data.
Step19: | Python Code:
import autofig
import numpy as np
#autofig.inline()
t = np.linspace(0, 2*np.pi, 101)
x = np.sin(t)
y1 = np.cos(t)
y2 = -0.5*y1
y3 = 1.5*y1
Explanation: Autofig Limits
End of explanation
fig1 = autofig.Figure()
fig1.plot(x=t, y=y1, i='x', marker='None', color='b', linestyle='solid', uncover=True)
fig1.plot(x=t, y=y2, i='x', marker='None', color='b', linestyle='dashed', uncover=True)
fig1.plot(x=t, y=y3, i='x', marker='None', color='g', linestyle='dashdot',
uncover=True, consider_for_limits=False)
mplfig = autofig.draw()
fig2 = autofig.Figure()
fig2.plot(x=x, y=y1, i=t, marker='None', c='r', linestyle='solid', uncover=True)
fig2.plot(x=x, y=y2, i=t, marker='None', c='r', linestyle='dashed', uncover=True)
fig2.plot(x=x, y=y3, i=t, marker='None', c='g', linestyle='dashed',
uncover=True, consider_for_limits=False)
mplfig = autofig.draw()
Explanation: Here we'll explore the different limit-styles on two plots - the first (blue) where the independent-variable is in the x-dimension, and the second (red) with an external independent-variable. We'll also throw a green line on each with "consider_for_limits" set to False. This means that it will still be drawn in each frame but will not affect any automatically determined limits.
For the sake of consistency, we'll leave the padding at its default value of 10% throughout.
End of explanation
fig1.axes[0].x.lim = (None, None)
anim = fig1.animate(i=t[::2],
save='limits_fig1_fixed_automatic.gif', save_kwargs={'writer': 'imagemagick'})
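# (As the note below mentions, the same limits can also be passed straight to the plot call,
#  e.g. fig1.plot(x=t, y=y1, i='x', xlim=(0, 7)) -- the limit values here are illustrative only.)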
Explanation: NOTE: setting axes limits (or any axes property, for that matter) is possible directly from the plot call via the xlim keyword. Note that the latest value sent to that axes will take priority.
Automatic Fixed-Limits (default)
By default all limits are set to (None, None) which means that both the upper and lower bounds of the limits will be set automatically based on all (not just currently visible) data.
NOTE: this is equivalent to setting limits to 'fixed'.
End of explanation
fig2.axes[0].x.lim = (None, None)
anim = fig2.animate(i=t[::2],
save='limits_fig2_fixed_automatic.gif', save_kwargs={'writer': 'imagemagick'})
Explanation:
End of explanation
fig1.axes[0].x.lim = 'symmetric'
anim = fig1.animate(i=t[::2],
save='limits_fig1_fixed_symmetric.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: Automatic Symmetric Fixed-Limits
By setting the limits to 'symmetric', the limits will be computed as above, but forced to be symmetric about zero.
End of explanation
fig1.axes[0].x.lim = (1,2)
fig1.axes[0].y.lim = (None, None)
anim = fig1.animate(i=t[::2],
save='limits_fig1_fixed_user.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: User-Defined Fixed-Limits
By manually setting either or both of the bounds on the limits, we can override the automatic behavior, but still get fixed limits throughout the animation.
NOTE: padding is not applied on top of a provided value.
TODO: since the ylim are still auto-fixed, they are going on all of the data, not just the visible data in the visible x-range. I'm not quite sure the best way to handle this...
End of explanation
fig1.axes[0].x.lim = (None, 4)
fig1.axes[0].y.lim = (None, None)
anim = fig1.animate(i=t[::2],
save='limits_fig1_fixed_user_upper.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: We can of course allow either the lower or upper bound to still remain automatic:
End of explanation
fig1.axes[0].x.lim = (1,2)
fig1.axes[0].y.lim = (-0.5,0.5)
anim = fig1.animate(i=t[::2],
save='limits_fig1_fixed_xy.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: and we can also set fixed limits for the y-limits:
End of explanation
fig2.axes[0].x.lim = (-0.5, 0.5)
fig2.axes[0].y.lim = (None, None)
anim = fig2.animate(i=t[::2],
save='limits_fig2_fixed_user.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: For completeness, we'll include an example with the external independent-variable as well:
End of explanation
fig1.axes[0].x.lim = None
fig1.axes[0].y.lim = (None, None)
anim = fig1.animate(i=t[::2],
save='limits_fig1_sliding_automatic.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: Automatic Sliding-Limits
By setting the limits to a single value instead of a tuple, the range of the limits will remain fixed, re-centering on the "current" value for each frame (where this centered value is determined as the average position of the highlighted markers, ignoring any in which consider_for_limits=False).
By setting this single value to None, the range itself will automatically be determined. The range is determined as follows:
if there is any spread in the central positions, the range is set as pad*max(spread)
otherwise, 10% of the full range of the axes
NOT YET IMPLEMENTED: the maximum range needed to contain the mesh plots
NOTE: this is equivalent to setting limits to 'sliding'
NOTE: the automatic determination of range is somewhat computationally expensive. To save time, provide the range as shown in the following section.
End of explanation
fig1.axes[0].x.lim = (None, None)
fig1.axes[0].y.lim = None
anim = fig1.animate(i=t[::2],
save='limits_fig1_sliding_y.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: In the example above, since the independent-variable is in the same direction as the sliding axes, there is no spread in the x-direction of the current points. Therefore the range falls back on 10% of the full range.
Let's instead set the y-limits to sliding.
End of explanation
fig2.axes[0].x.lim = (None, None)
fig2.axes[0].y.lim = None
anim = fig2.animate(i=t[::2],
save='limits_fig2_sliding_y.gif', save_kwargs={'writer': 'imagemagick'})
Explanation:
End of explanation
fig2.axes[0].x.lim = None
fig2.axes[0].y.lim = None
anim = fig2.animate(i=t[::2],
save='limits_fig2_sliding_xy.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: To truly center the central values, we can allow both limits to automatically slide.
End of explanation
fig1.axes[0].x.lim = 4.0
fig1.axes[0].y.lim = (None, None)
anim = fig1.animate(i=t[::2],
save='limits_fig1_sliding_user.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: User-Defined Sliding-Limits
By setting the limits to a single float, the range used during sliding limits can manually be set.
End of explanation
fig2.axes[0].x.lim = 4.0
fig2.axes[0].y.lim = (None, None)
anim = fig2.animate(i=t[::2],
save='limits_fig2_sliding_user.gif', save_kwargs={'writer': 'imagemagick'})
Explanation:
End of explanation
fig2.axes[0].x.lim = (None, None)
fig2.axes[0].y.lim = 4.0
anim = fig2.animate(i=t[::2],
save='limits_fig2_sliding_user_y.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: Here we'll provide a user-defined range in a dimension that differs between the two plotted calls. Note how the axes limits are centered on the average value between the two highlighted points at any given frame.
End of explanation
fig1.axes[0].x.lim = 'frame'
fig1.axes[0].y.lim = (None, None)
anim = fig1.animate(i=t[::2],
save='limits_fig1_frame.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: Automatic Per-Frame Limits
By setting the limits to 'frame', they are automatically determined per-frame based on the settings provided by uncover and padding.
End of explanation
fig1.axes[0].x.lim = 'frame'
fig1.axes[0].y.lim = 'frame'
anim = fig1.animate(i=t[::2],
save='limits_fig1_frame_xy.gif', save_kwargs={'writer': 'imagemagick'})
Explanation:
End of explanation
fig2.axes[0].x.lim = 'frame'
fig2.axes[0].y.lim = (None, None)
anim = fig2.animate(i=t[::2],
save='limits_fig2_frame.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: Because these are recomputed per-frame, the result for external independent-variables can look a little different as the axes can be stretched in any direction to account for the "addition" of new data.
End of explanation
fig2.axes[0].x.lim = 'frame'
fig2.axes[0].y.lim = 'frame'
anim = fig2.animate(i=t[::2],
save='limits_fig2_frame_xy.gif', save_kwargs={'writer': 'imagemagick'})
Explanation:
End of explanation |
1,877 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-3', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
1,878 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic plot example
Step1: Here we will set the fields to one of several values so that we can see pre-configured examples. | Python Code:
from matplotlib.pyplot import figure, plot, xlabel, ylabel, title, show
from IPython.display import display
import ipywidgets as widgets  # needed for the widgets.* calls below
text = widgets.FloatText()
floatText = widgets.FloatText(description='MyField', min=-5, max=5)
floatSlider = widgets.FloatSlider(description='MyField', min=-5, max=5)
# https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Basics.html
# jslink keeps the two widgets' values synchronised in the browser
float_link = widgets.jslink((floatText, 'value'), (floatSlider, 'value'))
Explanation: Basic plot example
End of explanation
floatSlider.value=1
txtArea = widgets.Text()
display(txtArea)
myb = widgets.Button(description="234")
def add_text(b):
    # each click doubles the current contents of the text box
    txtArea.value = txtArea.value + txtArea.value
myb.on_click(add_text)
display(myb)
Explanation: Here we will set the fields to one of several values so that we can see pre-configured examples.
End of explanation |
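Linked widgets cover one direction of interactivity; reacting to value changes from Python is the other common pattern. A minimal sketch (the handler name on_value_change is illustrative, not part of the example above):
import ipywidgets as widgets
from IPython.display import display
slider = widgets.FloatSlider(description='Watched', min=-5, max=5)
label = widgets.Label(value='move the slider')
def on_value_change(change):
    # 'change' carries the old and new values of the observed trait
    label.value = 'value is now {}'.format(change['new'])
slider.observe(on_value_change, names='value')
display(slider, label)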
1,879 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
연립방정식과 역행렬
다음과 같이 $x_1, x_2, \cdots, x_n$ 이라는 $n$ 개의 미지수를 가지는 방정식을 연립 방정식(system of equations)이라고 한다.
$$
\begin{matrix}
a_{11} x_1 & + \;& a_{12} x_2 &\; + \cdots + \;& a_{1M} x_M &\; = \;& b_1 \
a_{21} x_1 & + \;& a_{22} x_2 &\; + \cdots + \;& a_{2M} x_M &\; = \;& b_2 \
\vdots\;\;\; & & \vdots\;\;\; & & \vdots\;\;\; & & \;\vdots \
a_{N1} x_1 & + \;& a_{N2} x_2 &\; + \cdots + \;& a_{NM} x_M &\; = \;& b_N \
\end{matrix}
$$
행렬의 곱셈을 이용하면 이 연립 방정식은 다음과 같이 간단하게 쓸 수 있다.
$$ Ax = b $$
이 식에서 $A, x, b$ 는 다음과 같이 정의한다.
$$
A =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1M} \
a_{21} & a_{22} & \cdots & a_{2M} \
\vdots & \vdots & \ddots & \vdots \
a_{N1} & a_{N2} & \cdots & a_{NM} \
\end{bmatrix}
$$
$$
x =
\begin{bmatrix}
x_1 \ x_2 \ \vdots \ x_M
\end{bmatrix}
$$
$$
b=
\begin{bmatrix}
b_1 \ b_2 \ \vdots \ b_N
\end{bmatrix}
$$
$$
Ax = b
\;\;\;\;\;
\rightarrow
\;\;\;\;\;
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1M} \
a_{21} & a_{22} & \cdots & a_{2M} \
\vdots & \vdots & \ddots & \vdots \
a_{N1} & a_{N2} & \cdots & a_{NM} \
\end{bmatrix}
\begin{bmatrix}
x_1 \ x_2 \ \vdots \ x_M
\end{bmatrix}
=
\begin{bmatrix}
b_1 \ b_2 \ \vdots \ b_N
\end{bmatrix}
$$
만약 $A, x, b$가 행렬이 아닌 실수라면 이 식은 나눗셈을 사용하여 다음과 같이 쉽게 풀 수 있을 것이다.
$$ x = \dfrac{b}{A} $$
그러나 행렬은 나눗셈이 정의되지 않으므로 이 식은 사용할 수 없다. 대신 역행렬(inverse)을 사용하여 이 식을 쉽게 풀 수 있다.
역행렬
정방 행렬(square matrix) $A\;(A \in \mathbb{R}^{M \times M}) $ 에 대한 역행렬은 $A^{-1}$ 이란 기호로 표시한다.
역행렬 $A^{-1}$은 원래의 행렬 $A$와 다음 관계를 만족하는 정방 행렬을 말한다. $I$는 단위 행렬(identity matrix)이다.
$$ A^{-1} A = A A^{-1} = I $$
두 개 이상의 정방 행렬의 곱은 마찬가지로 같은 크기의 정방행렬이 되는데 이러한 행렬의 곱의 역행렬에 대해서는 다음 성질이 성립한다.
$$ (AB)^{-1} = B^{-1} A^{-1} $$
$$ (ABC)^{-1} = C^{-1} B^{-1} A^{-1} $$
역행렬과 연립 방정식의 해
미지수의 수와 방정식의 수가 같다면 행렬 $A$ 는 정방 행렬이 된다.
만약 행렬 $A$의 역행렬 $ A^{-1} $ 이 존재한다면 역행렬의 정의에서 연립 방정식의 해는 다음과 같이 구해진다.
$$ Ax = b $$
$$ A^{-1}Ax = A^{-1}b $$
$$ Ix = A^{-1}b $$
$$ x = A^{-1}b $$
NumPy의 역행렬 계산
NumPy의 linalg 서브패키지에는 역행렬을 구하는 inv 라는 명령어가 존재한다. 그러나 실제 계산시에는 수치해석 상의 여러가지 문제로 inv 명령어 보다는 lstsq (least square) 명령어를 사용한다.
Step1: 위 해결 방법에는 두 가지 의문이 존재한다. 우선 역행렬이 존재하는지 어떻게 알 수 있는가? 또 두 번째 만약 미지수의 수와 방정식의 수가 다르다면 어떻게 되는가?
행렬식
우선 역행렬이 존재하는지 알아보는 방법의 하나로 행렬식(determinant)라는 정방 행렬의 특징을 계산하는 방법이다. 행렬 $A$ 에 대한 행렬식은 $\text{det}A$라는 기호로 표기한다.
행렬식(determinant)의 수학적인 정의는 상당히 복잡하므로 여기에서는 생략한다. 다만 몇가지 크기의 정방 행렬에 대해서는 다음과 같은 수식으로 구할 수 있다.
1×1 행렬의 행렬식
$$\det\begin{bmatrix}a\end{bmatrix}=a$$
2×2 행렬의 행렬식
$$\det\begin{bmatrix}a&b\c&d\end{bmatrix}=ad-bc$$
3×3 행렬의 행렬식
$$\det\begin{bmatrix}a&b&c\d&e&f\g&h&i\end{bmatrix}=aei+bfg+cdh-ceg-bdi-afh$$
NumPy에서는 det 명령으로 행렬식의 값을 구할 수 있다.
Step2: 행렬식과 역행렬 사이에는 다음의 관계가 있다.
행렬식의 값이 0이 아니면 역행렬이 존재한다. 반대로 역행렬이 존재하면 행렬식의 값은 0이 아니다.
최소 자승 문제
연립 방정식은 다음과 같은 세 종류가 있다.
미지수의 수가 방정식의 수와 같다. ($N = M$)
미지수의 수가 방정식의 수보다 적다. ($N < M$)
미지수의 수가 방정식의 수보다 많다. ($N > M$)
1번의 경우는 앞에서 다루었다. 2번의 경우에는 너무 많은 해가 존재할 수 있다. 3번의 경우에는 2번과 반대로 모든 조건을 만족하는 해가 하나도 존재할 수 없을 수도 있다.
그런데 데이터 분석 문제에서는 $A$ 를 feature matrix, $x$ 를 가중치 벡터 $w$ 라고 보았을 때 데이터의 수가 가중치의 갯수보다 많은 경우가 일반적이다. 다만 이 때는 방정식이 정확하게 등호를 이루기를 바라지는 않는다. 즉, 대략적으로만 좌변과 우변이 비슷하면 되는 경우이다.
$$ Ax \approx b $$
이 경우에는 좌변과 우변의 차이를 최소하하는 문제로 바꾸어 풀 수 있다.
$$ e = Ax-b $$
$$ e^Te = (Ax-b)^T(Ax-b) $$
$$ x = \text{arg} \min_x e^Te = \text{arg} \min_x \; (Ax-b)^T(Ax-b) $$
이러한 문제를 최소 자승(Least Square) 문제라고 한다.
최소 자승 문제의 답은 $A^TA$ 는 항상 정방행렬이 된다는 점을 사용하여 다음과 같이 풀 수 있다.
$$ Ax = b $$
$$ A^TAx = A^Tb $$
$$ (A^TA)x = A^Tb $$
$$ x = (A^TA)^{-1}A^T b $$
$$ x = ((A^TA)^{-1}A^T) b $$
이 값이 최소 자승 문제의 답이 된다는 것은 행렬의 미분을 사용하여 증명할 수 있다. 여기에서 행렬 $(A^TA)^{-1}A^T$ 를 행렬 $A$ 의 의사 역행렬(pseudo inverse)라고 하며 다음과 같이 $ A^{+}$ 로 표기하기도 한다.
$$ A^{+} = (A^TA)^{-1}A^T $$
NumPy의 lstsq 명령은 사실 이러한 최소 자승 문제를 푸는 명령이다. | Python Code:
A = np.array([[1, 3, -2], [3, 5, 6], [2, 4, 3]])
A
b = np.array([[5], [7], [8]])
b
Ainv = np.linalg.inv(A)
Ainv
x = np.dot(Ainv, b)
x
np.dot(A, x) - b
x, resid, rank, s = np.linalg.lstsq(A, b)
x
Explanation: 연립방정식과 역행렬
다음과 같이 $x_1, x_2, \cdots, x_n$ 이라는 $n$ 개의 미지수를 가지는 방정식을 연립 방정식(system of equations)이라고 한다.
$$
\begin{matrix}
a_{11} x_1 & + \;& a_{12} x_2 &\; + \cdots + \;& a_{1M} x_M &\; = \;& b_1 \
a_{21} x_1 & + \;& a_{22} x_2 &\; + \cdots + \;& a_{2M} x_M &\; = \;& b_2 \
\vdots\;\;\; & & \vdots\;\;\; & & \vdots\;\;\; & & \;\vdots \
a_{N1} x_1 & + \;& a_{N2} x_2 &\; + \cdots + \;& a_{NM} x_M &\; = \;& b_N \
\end{matrix}
$$
행렬의 곱셈을 이용하면 이 연립 방정식은 다음과 같이 간단하게 쓸 수 있다.
$$ Ax = b $$
이 식에서 $A, x, b$ 는 다음과 같이 정의한다.
$$
A =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1M} \
a_{21} & a_{22} & \cdots & a_{2M} \
\vdots & \vdots & \ddots & \vdots \
a_{N1} & a_{N2} & \cdots & a_{NM} \
\end{bmatrix}
$$
$$
x =
\begin{bmatrix}
x_1 \ x_2 \ \vdots \ x_M
\end{bmatrix}
$$
$$
b=
\begin{bmatrix}
b_1 \ b_2 \ \vdots \ b_N
\end{bmatrix}
$$
$$
Ax = b
\;\;\;\;\;
\rightarrow
\;\;\;\;\;
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1M} \
a_{21} & a_{22} & \cdots & a_{2M} \
\vdots & \vdots & \ddots & \vdots \
a_{N1} & a_{N2} & \cdots & a_{NM} \
\end{bmatrix}
\begin{bmatrix}
x_1 \ x_2 \ \vdots \ x_M
\end{bmatrix}
=
\begin{bmatrix}
b_1 \ b_2 \ \vdots \ b_N
\end{bmatrix}
$$
만약 $A, x, b$가 행렬이 아닌 실수라면 이 식은 나눗셈을 사용하여 다음과 같이 쉽게 풀 수 있을 것이다.
$$ x = \dfrac{b}{A} $$
그러나 행렬은 나눗셈이 정의되지 않으므로 이 식은 사용할 수 없다. 대신 역행렬(inverse)을 사용하여 이 식을 쉽게 풀 수 있다.
역행렬
정방 행렬(square matrix) $A\;(A \in \mathbb{R}^{M \times M}) $ 에 대한 역행렬은 $A^{-1}$ 이란 기호로 표시한다.
역행렬 $A^{-1}$은 원래의 행렬 $A$와 다음 관계를 만족하는 정방 행렬을 말한다. $I$는 단위 행렬(identity matrix)이다.
$$ A^{-1} A = A A^{-1} = I $$
두 개 이상의 정방 행렬의 곱은 마찬가지로 같은 크기의 정방행렬이 되는데 이러한 행렬의 곱의 역행렬에 대해서는 다음 성질이 성립한다.
$$ (AB)^{-1} = B^{-1} A^{-1} $$
$$ (ABC)^{-1} = C^{-1} B^{-1} A^{-1} $$
역행렬과 연립 방정식의 해
미지수의 수와 방정식의 수가 같다면 행렬 $A$ 는 정방 행렬이 된다.
만약 행렬 $A$의 역행렬 $ A^{-1} $ 이 존재한다면 역행렬의 정의에서 연립 방정식의 해는 다음과 같이 구해진다.
$$ Ax = b $$
$$ A^{-1}Ax = A^{-1}b $$
$$ Ix = A^{-1}b $$
$$ x = A^{-1}b $$
NumPy의 역행렬 계산
NumPy의 linalg 서브패키지에는 역행렬을 구하는 inv 라는 명령어가 존재한다. 그러나 실제 계산시에는 수치해석 상의 여러가지 문제로 inv 명령어 보다는 lstsq (least square) 명령어를 사용한다.
End of explanation
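A quick sanity check on the inverse computed above (a one-line verification, not part of the original text): the product of A and Ainv should be the identity matrix up to floating-point error.
# A times its inverse should be (numerically) the 3x3 identity matrix
np.allclose(np.dot(A, Ainv), np.eye(3))  # expected: True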
np.random.seed(0)
A = np.random.randn(3, 3)
A
np.linalg.det(A)
Explanation: 위 해결 방법에는 두 가지 의문이 존재한다. 우선 역행렬이 존재하는지 어떻게 알 수 있는가? 또 두 번째 만약 미지수의 수와 방정식의 수가 다르다면 어떻게 되는가?
행렬식
우선 역행렬이 존재하는지 알아보는 방법의 하나로 행렬식(determinant)라는 정방 행렬의 특징을 계산하는 방법이다. 행렬 $A$ 에 대한 행렬식은 $\text{det}A$라는 기호로 표기한다.
행렬식(determinant)의 수학적인 정의는 상당히 복잡하므로 여기에서는 생략한다. 다만 몇가지 크기의 정방 행렬에 대해서는 다음과 같은 수식으로 구할 수 있다.
1×1 행렬의 행렬식
$$\det\begin{bmatrix}a\end{bmatrix}=a$$
2×2 행렬의 행렬식
$$\det\begin{bmatrix}a&b\c&d\end{bmatrix}=ad-bc$$
3×3 행렬의 행렬식
$$\det\begin{bmatrix}a&b&c\d&e&f\g&h&i\end{bmatrix}=aei+bfg+cdh-ceg-bdi-afh$$
NumPy에서는 det 명령으로 행렬식의 값을 구할 수 있다.
End of explanation
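To see the connection between the determinant and invertibility in code (an illustrative sketch), take a matrix with linearly dependent rows: its determinant is zero and np.linalg.inv raises an error.
# The second row is twice the first, so the determinant is 0 and no inverse exists
S = np.array([[1.0, 2.0], [2.0, 4.0]])
print(np.linalg.det(S))  # 0.0 up to floating-point noise
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError as err:
    print("not invertible:", err)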
A = np.array([[2, 0], [-1, 1], [0, 2]])
A
b = np.array([[1], [0], [-1]])
b
Apinv = np.dot(np.linalg.inv(np.dot(A.T, A)), A.T)
Apinv
x = np.dot(Apinv, b)
x
np.dot(A, x) - b
x, resid, rank, s = np.linalg.lstsq(A, b)
x
Explanation: 행렬식과 역행렬 사이에는 다음의 관계가 있다.
행렬식의 값이 0이 아니면 역행렬이 존재한다. 반대로 역행렬이 존재하면 행렬식의 값은 0이 아니다.
최소 자승 문제
연립 방정식은 다음과 같은 세 종류가 있다.
미지수의 수가 방정식의 수와 같다. ($N = M$)
미지수의 수가 방정식의 수보다 적다. ($N < M$)
미지수의 수가 방정식의 수보다 많다. ($N > M$)
1번의 경우는 앞에서 다루었다. 2번의 경우에는 너무 많은 해가 존재할 수 있다. 3번의 경우에는 2번과 반대로 모든 조건을 만족하는 해가 하나도 존재할 수 없을 수도 있다.
그런데 데이터 분석 문제에서는 $A$ 를 feature matrix, $x$ 를 가중치 벡터 $w$ 라고 보았을 때 데이터의 수가 가중치의 갯수보다 많은 경우가 일반적이다. 다만 이 때는 방정식이 정확하게 등호를 이루기를 바라지는 않는다. 즉, 대략적으로만 좌변과 우변이 비슷하면 되는 경우이다.
$$ Ax \approx b $$
이 경우에는 좌변과 우변의 차이를 최소하하는 문제로 바꾸어 풀 수 있다.
$$ e = Ax-b $$
$$ e^Te = (Ax-b)^T(Ax-b) $$
$$ x = \text{arg} \min_x e^Te = \text{arg} \min_x \; (Ax-b)^T(Ax-b) $$
이러한 문제를 최소 자승(Least Square) 문제라고 한다.
최소 자승 문제의 답은 $A^TA$ 는 항상 정방행렬이 된다는 점을 사용하여 다음과 같이 풀 수 있다.
$$ Ax = b $$
$$ A^TAx = A^Tb $$
$$ (A^TA)x = A^Tb $$
$$ x = (A^TA)^{-1}A^T b $$
$$ x = ((A^TA)^{-1}A^T) b $$
이 값이 최소 자승 문제의 답이 된다는 것은 행렬의 미분을 사용하여 증명할 수 있다. 여기에서 행렬 $(A^TA)^{-1}A^T$ 를 행렬 $A$ 의 의사 역행렬(pseudo inverse)라고 하며 다음과 같이 $ A^{+}$ 로 표기하기도 한다.
$$ A^{+} = (A^TA)^{-1}A^T $$
NumPy의 lstsq 명령은 사실 이러한 최소 자승 문제를 푸는 명령이다.
End of explanation |
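NumPy also exposes the pseudo-inverse directly as np.linalg.pinv, so the manual (A^T A)^{-1} A^T step above can be replaced by a single call; a short check that it reproduces the same least-squares solution:
# np.linalg.pinv computes the Moore-Penrose pseudo-inverse (via SVD internally)
x_pinv = np.dot(np.linalg.pinv(A), b)
x_pinv  # matches the lstsq solution above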
1,880 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Functions
Functions and function arguments
Functions are the building blocks of writing software. If a function is associated with an object and its data, it is called a method.
Functions are defined using the keyword def.
There are two types of arguments
* regular arguments, which must always be given when calling the function
* keyword arguments, which have a default value that can be overridden if desired
Values are returned using the return keyword. If no return statement is executed, the default return value of all functions and methods is None, which is the null object in Python.
Step1: Python has special syntax for catching an arbitrary number of parameters. For regular parameters it is a variable with one asterisk * and for keyword parameters it is a variable with two asterisks. It is conventional to name these *args and **kwargs, but this is not required.
Step3: The length of sequences can be checked using the built-in len() function.
It is standard practice to document a function using docstrings. A docstring is just a simple triple-quoted string immediately after the function definition. It is also possible to have docstrings in the beginning of a source code file and after a class definition.
Step4: Functions as parameters
Functions are first-class citizens in Python, which means that they can be e.g. passed to other functions. This is the first step into the world of functional programming, an elegant weapon for a more civilized age.
Step5: Extra
Step6: Now if we want to sort it using Python's built-in sort() function the sort won't know which attribute to base the sorting on.
Fortunately the sort() function takes a named parameter called key, which is a function to be called on each item in the list. The return value is used as the sort key.
(Python's sort() sorts the list in-place. If you want to keep the list unmodified use sorted())
Step7: This is all nice and well, but now you have a function called get_age that you don't intend to use a second time.
An alternative way to give this would be using a lambda expression. | Python Code:
def my_function(arg_one, arg_two, optional_1=6, optional_2="seven"):
return " ".join([str(arg_one), str(arg_two), str(optional_1), str(optional_2)])
print(my_function("a", "b"))
print(my_function("a", "b", optional_2="eight"))
#go ahead and try out different components
Explanation: Functions
Functions and function arguments
Functions are the building blocks of writing software. If a function is associated with an object and its data, it is called a method.
Functions are defined using the keyword def.
There are two types of arguments
* regular arguments, which must always be given when calling the function
* keyword arguments, which have a default value that can be overridden if desired
Values are returned using the return keyword. If no return statement is executed, the default return value of all functions and methods is None, which is the null object in Python.
End of explanation
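One caveat worth knowing about default values (a supplementary sketch, not from the original text): defaults are evaluated once, when the function is defined, so a mutable default such as a list is shared between calls.
def append_bad(item, target=[]):    # one shared list for every call
    target.append(item)
    return target
def append_good(item, target=None): # idiomatic fix: use None as a sentinel
    if target is None:
        target = []
    target.append(item)
    return target
print(append_bad(1))   # [1]
print(append_bad(2))   # [1, 2]  <- the same list again
print(append_good(1))  # [1]
print(append_good(2))  # [2]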
def count_args(*args, **kwargs):
print("i was called with " + str(len(args)) + " arguments and " + str(len(kwargs)) + " keyword arguments")
count_args(1, 2, 3, 4, 5, foo=1, bar=2)
Explanation: Python has special syntax for catching an arbitrary number of parameters. For regular parameters it is a variable with one asterisk * and for keyword parameters it is a variable with two asterisks. It is conventional to name these *args and **kwargs, but this is not required.
End of explanation
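The star syntax also works in the other direction: * and ** unpack a sequence or dict at call time, which makes it easy to forward arguments unchanged. A small illustrative wrapper (the names logged and add are made up for this sketch):
def logged(func, *args, **kwargs):
    print("calling", func.__name__, "with", args, kwargs)
    return func(*args, **kwargs)  # forward everything unchanged
def add(a, b, scale=1):
    return (a + b) * scale
print(logged(add, 2, 3, scale=10))  # prints the call details, then 50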
def random():
    """Always the number 4.

    Chosen by fair dice roll. Guaranteed to be random.
    """
    return 4
Explanation: The length of sequences can be checked using the built-in len() function.
It is standard practice to document a function using docstrings. A docstring is just a simple triple-quoted string immediately after the function definition. It is also possible to have docstrings in the beginning of a source code file and after a class definition.
End of explanation
def print_dashes():
print("---")
def print_asterisks():
print("***")
def pretty_print(string, function):
function()
print(string)
function()
pretty_print("hello", print_dashes)
pretty_print("hey", print_asterisks)
Explanation: Functions as parameters
Functions are first-class citizens in Python, which means that they can be e.g. passed to other functions. This is the first step into the world of functional programming, an elegant weapon for a more civilized age.
End of explanation
dictionaries = [
{"name": "Jack", "age": 35, "telephone": "555-1234"},
{"name": "Jane", "age": 40, "telephone": "555-3331"},
{"name": "Joe", "age": 20, "telephone": "555-8765"}
]
Explanation: Extra: Lambda
When we use the keyword def we are making a named function. Sometimes we want a simple function to use once without without binding it to any name.
Consider the following data structure.
End of explanation
def get_age(x):
return x["age"]
dictionaries.sort(key=get_age)
dictionaries
Explanation: Now if we want to sort it using Python's built-in sort() function the sort won't know which attribute to base the sorting on.
Fortunately the sort() function takes a named parameter called key which is a function to be called on each item in the list. The return value will be used for the name.
(Python's sort() sorts the list in-place. If you want to keep the list unmodified use sorted())
End of explanation
dictionaries.sort(key=lambda x: x["age"], reverse=True)
dictionaries
Explanation: This is all nice and well, but now you have a function called get_age that you don't intend to use a second time.
An alternative way to give this would be using a lambda expression.
End of explanation |
1,881 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SciDB and Machine Learning on Wearable Data
This work is motivated by the following publication out of the IHI 2012 - 2ND ACM SIGHIT International Health Informatics Symposium
Step1: We loaded all of the data from the paper into a single array, indexed by
* subject
Step2: 1.1 Quick Fetch and Browse Routines
Because SciDB indexes and clusters data on dimensions, it is efficient to retrieve time slices. Below we'll define a function get_series that will let us quickly retrieve data for specific subject, day and time intervals
Step3: The timeseries are downloaded as Pandas dataframes and we can easily visualize them. We'll create quick plotter function below and visualize subject 0, day 1, from 8
Step4: For another example - looks like Subject 3 stayed up late on the morning of Day 3. They go to sleep sometime between midnight and 1 AM
Step5: We can also easily aggregate data into a daily summary. From this we can see most have good coverage but sometimes data is missing. Day 0 usually does not start at midnight.
Step6: 2. Analytics
Step7: We can now run our function on an example timeseries and plot it alongside. Our activity score increases as the subject wakes up
Step8: 2.1 Applying a Function through Streaming
Taking our binned_activity function from above we will now do the following in SciDB
Step9: 2.1.1 Notes on Streaming and Python Environments
Very often foks use custom enviroments and additional package managers like Conda. If that's the case, keep in mind that the python process that is invoked by SciDB's stream() is the default Python process for the Linux user that's running the database. Note also that the stream process does not run in an interactive shell. So, typically, executing a python command inside stream will run /usr/bin/python even if Conda is configured otherwise for user scidb.
However, we can easily add some paths for a differnet environment. For example, the scidbstrm package comes with a nice python_map shorthand
Step10: To run from a different Python environment, all we need to do is prepend some environment exports
Step11: For more notes about Conda, environments and non-interactive shells, see a helpful post here
Step12: Notice that after the streaming the array now has "placeholder" dimensions and we've converted our subject and day fields to attributes
Step13: We can fetch a whole day's worth of activity for a particular subject. It's also interesting to look at inter-day and inter-subject comparisons of peak activity times
Step14: Finally, we can compare the average activity level across subjects. It's an interesting illustration, however one should keep in mind that it's vulnerable to device-to-device variation as well as missing data
Step15: Our binned_activity function is a very rough prototype but we'll draw attention to how easy it is to modify that function - adding a filter, interpolating, taking a more realistic integral - and re-run on all the data using SciDB.
3. In-Database Machine Learning
We'll build up on the streaming paradigm seen above to execute a machine learning exercise on the data. We'll perform the following
Step17: 3.1 Training the Partial Models
Note the binned dataset is smaller than the original and it's surely possible to download it. Here we'll illustrate an in-DB parallel approach that will scale well for hundreds of such subjects. Note the use of filter with training=1 which will use only the "training" half of the data.
We train the models in parallel
Step18: For each instance that had binned data there's now a model decorated with the number of rows that it was trained on
Step19: 3.2 Combining the Models
In a fashion very similar to Dr. Vernica's blog post, we combine all the partially-trained models
Step21: 3.3 Making Predictions
Now that we have our model, we can use it to make predictions. Below we'll run it on the remainder of the data, filtering for training = 0.
Step22: 3.4 How did we do?
We can pull out and view the predictions for one subject-day like so. Turns out we're correct most of the time, but there some mis-labels
Step23: And we can look at every 1-minute bin we have predictions for and compare our predictions to ground truth
Step24: The vast majority of our predictions are accurate but there's room to improve the model. Below is a visualization of the above table. We use randomized jitter to help visualize the relative number of points in each bin | Python Code:
from scidbpy import connect
import getpass
import requests
import warnings
warnings.filterwarnings("ignore")
requests.packages.urllib3.disable_warnings(requests.packages.urllib3.exceptions.InsecureRequestWarning)
db = connect(scidb_url="https://localhost:8083",
scidb_auth=('root', getpass.getpass('Please enter your password: ')),
verify=False)
Explanation: SciDB and Machine Learning on Wearable Data
This work is motivated by the following publication out of the IHI 2012 - 2ND ACM SIGHIT International Health Informatics Symposium: https://dl.acm.org/citation.cfm?doid=2110363.2110375
The authors explored ways to detect sleep using a wrist-worn accelerometer and light sensor. In this Notebook, we explore their data loaded in SciDB via a few queries and analytics. We'll start off by running simple fetch and summary queries to familiarize ourselves with the data. Then we'll explore ways to use SciDB streaming to bring analytics to the data and execute data-wide computations in the cluster. Finally, we'll build towards a machine learning algorithm that can detect sleep with some accuracy.
The objective of this work is not so much to train a good model, but to demonstrate the SciDB-powered workflow itself. The key take-away is that SciDB helps easily orchestrate parallelism for complex calculations that can be expressed in popular languages like R and Python.
1. Connect and Explore
The Usual AMI password is 'Paradigm4' - enter it when prompted.
End of explanation
ihi_schema = db.show(db.arrays.IHI_DATA)[:]['schema'][0]
ihi_schema
db.summarize(db.arrays.IHI_DATA)[:]
db.limit(db.arrays.IHI_DATA, 5)[:]
Explanation: We loaded all of the data from the paper into a single array, indexed by
* subject: a simple numeric identifier,
* day: a simple counter starting at day 0 for each subject and
* mil: the number of milliseconds elapsed since start of day (0 to 86,400,000).
Each cell in the array has the following attributes:
* acc_x,y,z: the 3D accelerometer readings
* light: the light sensor output
* sleep: the "ground truth" as to whether or not the subject is sleeping at that time (1 means awake, 2 means asleep)
Let's take a look at a few entries:
End of explanation
#A helper time conversion routine
from datetime import time
def time_to_millis(t):
return long(t.hour * 3600000 + t.minute * 60000 + t.second * 1000 + long(t.microsecond / 1000))
def get_series(subject, day, t_start, t_end):
if type(t_start) is time:
t_start = time_to_millis(t_start)
if type(t_end) is time:
t_end = time_to_millis(t_end)
query = db.filter(db.arrays.IHI_DATA, "subject = {} and day = {} and mil>={} and mil <={}".format(
subject, day, t_start, t_end))
return query[:]
d = get_series(subject=0, day=1, t_start = time(8,30,0), t_end=time(9,30,0))
d.head()
Explanation: 1.1 Quick Fetch and Browse Routines
Because SciDB indexes and clusters data on dimensions, it is efficient to retrieve time slices. Below we'll define a function get_series that will let us quickly retrieve data for specific subject, day and time intervals:
End of explanation
import matplotlib.pyplot as plt
def plot_series(d):
d = d.sort_values(by='mil')
plt.rcParams['figure.figsize'] = (18, 1.5)
d1 =d[['mil','acc_x','acc_y','acc_z']]
d1.plot(x='mil', ylim=(-5,260), title='Accelerometer')
d2 =d[['mil','light']]
d2.plot(x='mil', ylim=(-5,260), title = 'Light Sensor')
d3 =d[['mil','sleep']]
d3.plot(x='mil', ylim=(0.95,2.05), title = 'Sleep (Reported)')
plt.show(block=True)
def get_and_plot(subject, day, t_start, t_end):
d = get_series(subject = subject, day = day, t_start = t_start, t_end = t_end)
plot_series(d)
get_and_plot(subject=0, day=1, t_start = time(8,50,0), t_end=time(9,30,0))
Explanation: The timeseries are downloaded as Pandas dataframes and we can easily visualize them. We'll create quick plotter function below and visualize subject 0, day 1, from 8:50 AM to 9:30 AM. Looks like our subject is waking up at right around that time:
End of explanation
get_and_plot(subject=3, day=3, t_start = time(0,0,0), t_end=time(1,0,0))
Explanation: For another example - looks like Subject 3 stayed up late on the morning of Day 3. They go to sleep sometime between midnight and 1 AM:
End of explanation
daily_summary = db.aggregate(
db.apply(
db.arrays.IHI_DATA,
"mil", "mil"
),
"count(*) as num_samples",
"min(mil) as t_start",
"max(mil) as t_end",
"subject", "day"
)[:]
daily_summary.sort_values(by=['subject','day']).head()
Explanation: We can also easily aggregate data into a daily summary. From this we can see most have good coverage but sometimes data is missing. Day 0 usually does not start at midnight.
End of explanation
def binned_activity(d):
import pandas as pd
bin_millis = 60000 * 15
d1 = d[['mil', 'acc_x','acc_y','acc_z']]
d2 = d1.shift(1)
d2.columns = ['mil_0', 'acc_x_0', 'acc_y_0', 'acc_z_0']
dm = pd.concat([d1,d2], axis=1)
dm['activity'] = pow(pow(dm['acc_x'] - dm['acc_x_0'], 2) +
pow(dm['acc_y'] - dm['acc_y_0'], 2) +
pow(dm['acc_z'] - dm['acc_z_0'], 2), 0.5)
dm['bin'] = (dm['mil'] / (bin_millis)).astype(long)
dmm = dm.groupby(['bin'], as_index=False)[['activity']].sum()
dmm['mil'] = dmm['bin'] * bin_millis + (bin_millis/2)
dmm['subject'] = d['subject'][0]
dmm['day'] = d['day'][0]
dmm = dmm[['subject', 'day', 'mil', 'activity']]
return(dmm)
Explanation: 2. Analytics: Computing an Activity Score
Let's try to calculate the total "amount of movement" that the subject is performing. There are many different approaches in the literature: counting the number of times the accelerometer crosses a threshold (ZCM), proportional integration (PIM), time above threshold and so on. It is also recommended to pre-filter the signal to exlude vibrations that are not of human origin. In this particular case we don't have a lot of information about the device (a custom made prototype) nor even what units the acceleration is captured in.
We'll create a simple example function that will add up Euclidean acceleromter distances from the current reading to the previous reading, over a fixed time window (15 minutes). Thus for each 15-minute window, the user gets an "activity" score sum. The score is 0 when the accelerometer series as flat. The more change there is, the higher the score.
Down the road, we'll show how to use streaming to execute the arbitrary supplied function on all data in parallel. We'll then leave the development of a more realistic function to the user:
End of explanation
d = get_series(subject=0, day=1, t_start = time(8,30,0), t_end=time(9,30,0))
dm = binned_activity(d)
print(dm)
plot_series(d)
dm[['mil','activity']].plot(x='mil', color='green', title = "Activity Score",
xlim=(min(d['mil']), max(d['mil']) ))
plt.show(block=True)
Explanation: We can now run our function on an example timeseries and plot it alongside. Our activity score increases as the subject wakes up:
End of explanation
#Remove the array if exists
try:
db.remove(db.arrays.IHI_BINNED_ACTIVITY)
except:
print("Array not found")
Explanation: 2.1 Applying a Function through Streaming
Taking our binned_activity function from above we will now do the following in SciDB:
1. Upload the code for binned_activity to the SicDB cluster
2. In parallel, run binned_activity on every subject, ouputting the activity for every 15-minute period
3. Gather and store results as a new array IHI_BINNED_ACTIVITY
SciDB makes this quite straightforward, modulo a few small aspects. SciDB streaming will execute the function on one chunk of data at a time, and the IHI_DATA array is chunked into 1-hour intervals. The 15 minute windows evenly divide the hour, thus we won't see any overlap issues. If the window were, say, 23 minutes, we would need to write some extra code to redimension the data prior to streaming.
Note also the import pandas as pd line is inside the body of the function. This is not common but will do the right thing: Python is smart enough to import modules only once.
End of explanation
import scidbstrm
scidbstrm.python_map
Explanation: 2.1.1 Notes on Streaming and Python Environments
Very often foks use custom enviroments and additional package managers like Conda. If that's the case, keep in mind that the python process that is invoked by SciDB's stream() is the default Python process for the Linux user that's running the database. Note also that the stream process does not run in an interactive shell. So, typically, executing a python command inside stream will run /usr/bin/python even if Conda is configured otherwise for user scidb.
However, we can easily add some paths for a differnet environment. For example, the scidbstrm package comes with a nice python_map shorthand:
End of explanation
snowflake_python_map='''\'
export VIRTUAL_ENV="/home/scidb/anaconda2/envs/snowflakes"
export PATH="$VIRTUAL_ENV/bin:$PATH"
python -uc "import scidbstrm; scidbstrm.map(scidbstrm.read_func())" \' '''
Explanation: To run from a different Python environment, all we need to do is prepend some environment exports:
End of explanation
#ETA on this is about 1 minute
import scidbstrm
db_fun = db.input(upload_data=scidbstrm.pack_func(binned_activity)).store()
db.stream(
db.apply(
db.arrays.IHI_DATA,
"mil, mil",
"subject, subject",
"day, day"
),
snowflake_python_map,
"'format=feather'",
"'types=int64,int64,int64,double'",
"'names=subject,day,mil,activity'",
'_sg({}, 0)'.format(db_fun.name)
).store(db.arrays.IHI_BINNED_ACTIVITY)
Explanation: For more notes about Conda, environments and non-interactive shells, see a helpful post here: https://gist.github.com/datagrok/2199506
For more notes about Streaming and security, see
https://github.com/paradigm4/stream#stability-and-security
We now use our script to run the binned_activity function on all data:
End of explanation
db.show(db.arrays.IHI_BINNED_ACTIVITY)[:]['schema'][0]
db.limit(db.arrays.IHI_BINNED_ACTIVITY, 5).fetch(atts_only=True).sort_values(by=['subject','day'])
Explanation: Notice that after the streaming the array now has "placeholder" dimensions and we've converted our subject and day fields to attributes:
End of explanation
s2_day3_activity = db.filter(
db.arrays.IHI_BINNED_ACTIVITY,
"subject = 2 and day = 3"
)[:]
s2_day3_activity = s2_day3_activity.sort_values(by='mil')
s2_day3_activity['hour'] = s2_day3_activity['mil'] / 3600000
s2_day3_activity[['hour','activity']].plot(x='hour')
plt.show(block=True)
Explanation: We can fetch a whole day's worth of activity for a particular subject. It's also interesting to look at inter-day and inter-subject comparisons of peak activity times:
End of explanation
activity_stats = db.grouped_aggregate(
db.grouped_aggregate(
db.arrays.IHI_BINNED_ACTIVITY,
"sum(activity) as daily_activity", "subject, day"
),
"avg(daily_activity) as avg_daily_activity",
"stdev(daily_activity) as stdev_daily_activity",
"count(*) as num_days",
"subject"
).fetch(atts_only=True)
activity_stats
activity_stats.sort_values(by='subject').plot(y='avg_daily_activity', x='subject', kind ='bar')
plt.show()
Explanation: Finally, we can compare the average activity level across subjects. It's an interesting illustration, however one should keep in mind that it's vulnerable to device-to-device variation as well as missing data:
End of explanation
try:
db.remove(db.arrays.IHI_BINNED_FEATURES)
except:
print("Array not found")
feature_binning_period = 1 * 60 * 1000 #Break up the data into 1-minute bins
db.apply(
db.grouped_aggregate(
db.apply(
db.arrays.IHI_DATA,
"bin_millis", "mil/({p}) * ({p}) + ({p})/2".format(p=feature_binning_period)
),
"sum(light) as total_light",
"var(acc_x) as acc_x_var",
"var(acc_y) as acc_y_var",
"var(acc_z) as acc_z_var",
"max(sleep) as sleep",
"subject, day, bin_millis"
),
"training", "random()%2"
).store(db.arrays.IHI_BINNED_FEATURES)
db.op_count(db.arrays.IHI_BINNED_FEATURES)[:]
Explanation: Our binned_activity function is a very rough prototype but we'll draw attention to how easy it is to modify that function - adding a filter, interpolating, taking a more realistic integral - and re-run on all the data using SciDB.
3. In-Database Machine Learning
We'll build up on the streaming paradigm seen above to execute a machine learning exercise on the data. We'll perform the following:
Compute several binned features on the data - binned variance for accelerometers and the total amount of light as measured by the light sensor
Randomly split the binned features into "training" and "validation" sets
Use the Stochastic Gradient Descent Classifier from scikit-learn to train several models on the training set inside SciDB in Parallel
Combine the trained models into a single Voting Classifier prediction model, store that as an array in SciDB.
Evaluate the model on the remaining "validation" set and compare it to ground truth.
Many of these steps are built on this blog post: http://rvernica.github.io/2017/10/streaming-machine-learning
In fact we use a very similar classifier. Consult that post for additional clarifications.
First, the binning can be done entirely using SciDB aggregation. The splitting into "training" and "validation" is achieved by apply-ing a value to each field that is either 0 or 1.
End of explanation
import scidbstrm
class Train:
model = None
count = 0
@staticmethod
def map(df):
dft = df[['acc_x_var','acc_y_var', 'acc_z_var', 'total_light']]
Train.model.partial_fit(numpy.matrix(dft),
df['sleep'],
[1,2])
Train.count += len(df)
return None
@staticmethod
def finalize():
if Train.count == 0:
return None
buf = io.BytesIO()
sklearn.externals.joblib.dump(Train.model, buf)
return pandas.DataFrame({
'count': [Train.count],
'model': [buf.getvalue()]})
ar_fun = db.input(upload_data=scidbstrm.pack_func(Train)).store()
#Once again, don't forget our environment variables:
python_run = '
export VIRTUAL_ENV="/home/scidb/anaconda2/envs/snowflakes"
export PATH="$VIRTUAL_ENV/bin:$PATH"
python -uc "
import io
import numpy
import pandas
import scidbstrm
import sklearn.externals
import sklearn.linear_model
Train = scidbstrm.read_func()
Train.model = sklearn.linear_model.SGDClassifier()
scidbstrm.map(Train.map, Train.finalize)
"'
que = db.stream(
db.filter(
db.arrays.IHI_BINNED_FEATURES,
#Note: computed variance can be NULL if a bin input segment (1 minute) has only a single value in it
"training=1 and acc_x_var is not null and acc_y_var is not null and acc_z_var is not null"
),
python_run,
"'format=feather'",
"'types=int64,binary'",
"'names=count,model'",
'_sg({}, 0)'.format(ar_fun.name)
).store(
db.arrays.IHI_PARTIAL_MODEL)
Explanation: 3.1 Training the Partial Models
Note the binned dataset is smaller than the original and it's surely possible to download it. Here we'll illustrate an in-DB parallel approach that will scale well for hundreds of such subjects. Note the use of filter with training=1 which will use only the "training" half of the data.
We train the models in parallel:
End of explanation
db.scan(db.arrays.IHI_PARTIAL_MODEL)[:]
Explanation: For each instance that had binned data there's now a model decorated with the number of rows that it was trained on:
End of explanation
def merge_models(df):
import io
import pandas
import sklearn.ensemble
import sklearn.externals
estimators = [sklearn.externals.joblib.load(io.BytesIO(byt))
for byt in df['model']]
if not estimators:
return None
labelencoder = sklearn.preprocessing.LabelEncoder()
labelencoder.fit([0,1,2])
model = sklearn.ensemble.VotingClassifier(())
model.estimators_ = estimators
model.le_ = labelencoder
buf = io.BytesIO()
sklearn.externals.joblib.dump(model, buf)
return pandas.DataFrame({'count': df.sum()['count'],
'model': [buf.getvalue()]})
ar_fun = db.input(upload_data=scidbstrm.pack_func(merge_models)).store()
que = db.unpack(
#The unpack puts all the models into a single chunk (assuming there aren't more than 1M instances)
db.arrays.IHI_PARTIAL_MODEL,
"i",
"10000000"
).stream(
snowflake_python_map,
"'format=feather'",
"'types=int64,binary'",
"'names=count,model'",
'_sg({}, 0)'.format(ar_fun.name)
).store(
db.arrays.IHI_FINAL_MODEL)
db.scan(db.arrays.IHI_FINAL_MODEL)[:]
Explanation: 3.2 Combining the Models
In a fashion very similar to Dr. Vernica's blog post, we combine all the partially-trained models:
End of explanation
try:
db.remove(db.arrays.IHI_PREDICTED_SLEEP)
except:
print("Array not found")
class Predict:
model = None
@staticmethod
def map(df):
dfp = numpy.matrix(df[['acc_x_var','acc_y_var', 'acc_z_var', 'total_light']])
#We're creating a new column; Arrow will complain if it's not Unicode:
df[u'pred'] = Predict.model.predict(dfp)
df = df [['subject', 'day', 'bin_millis', 'sleep', 'pred']]
return df
ar_fun = db.input(
upload_data=scidbstrm.pack_func(Predict)
).cross_join(
db.arrays.IHI_FINAL_MODEL
).store()
python_run = '
export VIRTUAL_ENV="/home/scidb/anaconda2/envs/snowflakes"
export PATH="$VIRTUAL_ENV/bin:$PATH"
python -uc "
import dill
import io
import numpy
import scidbstrm
import sklearn.externals
df = scidbstrm.read()
Predict = dill.loads(df.iloc[0, 0])
Predict.model = sklearn.externals.joblib.load(io.BytesIO(df.iloc[0, 2]))
scidbstrm.write()
scidbstrm.map(Predict.map)
"'
que = db.filter(
db.arrays.IHI_BINNED_FEATURES,
"training = 0 and acc_x_var is not null and acc_y_var is not null and acc_z_var is not null"
).stream(
python_run,
"'format=feather'",
"'types=int64,int64,int64,double,int64'",
"'names=subject,day,bin_millis,sleep,prediction'",
'_sg({}, 0)'.format(ar_fun.name)
).store(
db.arrays.IHI_PREDICTED_SLEEP)
Explanation: 3.3 Making Predictions
Now that we have our model, we can use it to make predictions. Below we'll run it on the remainder of the data, filtering for training = 0.
End of explanation
s4d6 = db.filter(db.arrays.IHI_PREDICTED_SLEEP, 'subject=4 and day=6').fetch(atts_only=True)
s4d6 = s4d6.sort_values(by='bin_millis')
s4d6['hour'] = s4d6['bin_millis'] / 3600000
plt.rcParams['figure.figsize'] = (18, 2)
s4d6[['hour','sleep']].plot(x='hour', title = "Sleep (Actual)")
s4d6[['hour','prediction']].plot(x='hour', color='green', title = "Sleep (Predicted)")
plt.show(block=True)
Explanation: 3.4 How did we do?
We can pull out and view the predictions for one subject-day like so. Turns out we're correct most of the time, but there some mis-labels:
End of explanation
result = db.grouped_aggregate(db.arrays.IHI_PREDICTED_SLEEP, "count(*)", "sleep, prediction").fetch(atts_only=True)
result
Explanation: And we can look at every 1-minute bin we have predictions for and compare our predictions to ground truth:
End of explanation
result = db.project(db.arrays.IHI_PREDICTED_SLEEP, "sleep, prediction")[:]
import matplotlib, numpy
def rand_jitter(arr):
return arr + numpy.random.randn(len(arr)) * .2
plt.rcParams['figure.figsize'] = (8, 8)
matplotlib.pyplot.xticks([1,2])
matplotlib.pyplot.yticks([1,2])
matplotlib.pyplot.xlabel('Sleep (Actual)')
matplotlib.pyplot.ylabel('Sleep (Predicted)')
matplotlib.pyplot.plot(
rand_jitter(result['sleep']), rand_jitter(result['prediction']), '.', ms=1)
plt.show()
Explanation: The vast majority of our predictions are accurate but there's room to improve the model. Below is a visualization of the above table. We use randomized jitter to help visualize the relative number of points in each bin:
End of explanation |
1,882 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 1
Imports
Step1: Line plot of sunspot data
Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook.
Step2: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
Step3: Make a line plot showing the sunspot count as a function of year.
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
Step4: Describe the choices you have made in building this visualization and how they make it effective.
YOUR ANSWER HERE
Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 1
Imports
End of explanation
import os
assert os.path.isfile('yearssn.dat')
Explanation: Line plot of sunspot data
Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook.
End of explanation
#Btw, I asked Granger if I could turn this in during class and he said yes! Please don't dock my points!
data = np.loadtxt('yearssn.dat', dtype ='float',unpack=True)
years,ssc = data
# Another way
# np.ravel(data)
#even_index = np.arange(0,len(data),2)
# odd_index = np.arange(1,len(data),2)
# even_index
# years = data[even_index]
# ssc = data[odd_index]
assert len(years)==315
assert years.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
Explanation: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
End of explanation
plt.plot(years, ssc)
plt.xlabel('Years') #x labels
plt.ylabel('Sun Spot Count') #ylabels
plt.title('Sun Spot Count vs. Year') #Main Title
plt.grid(True) #plot grid
plt.box(False) #Taking out the box!!!
fig = plt.gcf()
# ????
fig.set_figwidth(100)
assert True # leave for grading
Explanation: Make a line plot showing the sunspot count as a function of year.
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation
years_1 = years[years < 1800]
years_2 = years[years < 1900]
years_2 = years_2[years_2 >= 1800]
years_3 = years[years < 2000]
years_3 = years_3[years_3 >= 1900]
years_4 = years[years < 2100]
years_4 = years_4[years_4 >= 2000]
ssc_1 = ssc[years < 1800]
ssc_2 = ssc[years < 1900]
ssc_2 = ssc_2[years_2 >= 1800]
ssc_3 = ssc[years < 2000]
ssc_3 = ssc[years_3 >= 1900]
ssc_4 = ssc[years < 2100]
ssc_4 = ssc[years_4 >= 2000]
f, axarr = plt.subplots(4, sharex=True)
axarr[0].plot(years_1, ssc_1)
axarr[0].set_xlim([years_1.min(), years_1.max()])
axarr[1].plot(years_2, ssc_2)
axarr[1].set_xlim([years_2.min(), years_2.max()])
axarr[2].plot(years_3, ssc_3)
axarr[2].set_xlim([years_3.min(), years_3.max()])
axarr[3].plot(years_4, ssc_4)
axarr[3].set_xlim([years_4.min(), years_4.max()])
assert True # leave for grading
Explanation: Describe the choices you have made in building this visualization and how they make it effective.
YOUR ANSWER HERE
Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above:
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation |
1,883 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3D MHD models
This notebook explains how to use cubic results of 3D MHD models on a uniform grid in CRPropa.
Supplied data
The fields need to be supplied in a raw binary file that contains only single floats, arranged as follows
Step1: to make use of periodicity of the provided data grid, use
Step2: to not follow particles forever, use
Step3: Uniform injection
The most simple scenario of UHECR sources is a uniform distribution of their sources. This can be realized via use of
Step4: Injection following density field
The distribution of gas density can be used as a probability density function for the injection of particles from random positions.
Step5: Mass Halo injection
Alternatively, for the CLUES models, we also provide a list of mass halo positions. These positions can be used as sources with the same properties by use of the following
Step6: additional source properties
Step7: Observer
To register particles, an observer has to be defined. In the provided constrained simulations the position of the Milky Way is, by definition, in the center of the volume.
Step8: finally run the simulation by | Python Code:
from crpropa import *
## settings for MHD model (must be set according to model)
filename_bfield = "clues_primordial.dat" ## filename of the magnetic field
gridOrigin = Vector3d(0,0,0) ## origin of the 3D data, preferably at boxOrigin
gridSize = 1024 ## size of uniform grid in data points
h = 0.677 ## dimensionless Hubble parameter
size = 249.827/h *Mpc ## physical edgelength of volume in Mpc
b_factor = 1. ## global renormalization factor for the field
## settings of simulation
boxOrigin = Vector3d( 0, 0, 0,) ## origin of the full box of the simulation
boxSize = Vector3d( size, size, size ) ## end of the full box of the simulation
## settings for computation
minStep = 10.*kpc ## minimum length of single step of calculation
maxStep = 4.*Mpc ## maximum length of single step of calculation
tolerance = 1e-2 ## tolerance for error in iterative calculation of propagation step
spacing = size/(gridSize) ## resolution, physical size of single cell
m = ModuleList()
## instead of computing propagation without Lorentz deflection via
# m.add(SimplePropagation(minStep,maxStep))
## initiate grid to hold field values
vgrid = Grid3f( gridOrigin, gridSize, spacing )
## load values to the grid
loadGrid( vgrid, filename_bfield, b_factor )
## use grid as magnetic field
bField = MagneticFieldGrid( vgrid )
## add propagation module to the simulation to activate deflection in supplied field
m.add(PropagationCK( bField, tolerance, minStep, maxStep))
#m.add(DeflectionCK( bField, tolerance, minStep, maxStep)) ## this was used in older versions of CRPropa
Explanation: 3D MHD models
This notebook explains how to use cubic results of 3D MHD models on a uniform grid in CRPropa.
Supplied data
The fields need to be supplied in a raw binary file that contains only single floats, arranged as follows: Starting with the cell values (Bx,By,Bz for magnetic field or rho for density) at the origin of the box, the code continues to read along z, then y and finally x.
On https://crpropa.github.io/CRPropa3/ under "Additional resources" you can find a number of MHD models used with CRPropa in the literature.
Note:
The parameters used for the following example refer to the MHD model by Hackstein et al. (2018), as provided under "Additional resources". However, CRPropa does in general not take any warranty on the accuracy of any of those external data files.
Note that in some previous version of this notebook the used MHD model has not been representing the results from Hackstein et al. (2018). This has been due to two issues: (1.) the size of the grid has not taken the dimensionless Hubble parameter into account and (2.) the X- and Z-coordinates of the available data files have been transposed. But since 20.05.2022 both of these issues have been fixed and the following example can be used to include the MHD model data from Hackstein et al. (2018).
End of explanation
m.add( PeriodicBox( boxOrigin, boxSize ) )
Explanation: to make use of periodicity of the provided data grid, use
End of explanation
m.add( MaximumTrajectoryLength( 400*Mpc ) )
Explanation: to not follow particles forever, use
End of explanation
source = Source()
source.add( SourceUniformBox( boxOrigin, boxSize ))
Explanation: Uniform injection
The most simple scenario of UHECR sources is a uniform distribution of their sources. This can be realized via use of
End of explanation
filename_density = "mass-density_clues.dat" ## filename of the density field
source = Source()
## initialize grid to hold field values
mgrid = ScalarGrid( gridOrigin, gridSize, spacing )
## load values to grid
loadGrid( mgrid, filename_density )
## add source module to simulation
source.add( SourceDensityGrid( mgrid ) )
Explanation: Injection following density field
The distribution of gas density can be used as a probability density function for the injection of particles from random positions.
End of explanation
import numpy as np
filename_halos = 'clues_halos.dat'
# read data from file
data = np.loadtxt(filename_halos, unpack=True, skiprows=39)
sX = data[0]
sY = data[1]
sZ = data[2]
mass_halo = data[5]
## find only those mass halos inside the provided volume (see Hackstein et al. 2018 for more details)
Xdown= sX >= 0.25
Xup= sX <= 0.75
Ydown= sY >= 0.25
Yup= sY <= 0.75
Zdown= sZ >= 0.25
Zup= sZ <= 0.75
insider= Xdown*Xup*Ydown*Yup*Zdown*Zup
## transform relative positions to physical positions within given grid
sX = (sX[insider]-0.25)*2*size
sY = (sY[insider]-0.25)*2*size
sZ = (sZ[insider]-0.25)*2*size
## collect all sources in the multiple sources container
smp = SourceMultiplePositions()
for i in range(0,len(sX)):
pos = Vector3d( sX[i], sY[i], sZ[i] )
smp.add( pos, 1. )
## add collected sources
source = Source()
source.add( smp )
Explanation: Mass Halo injection
Alternatively, for the CLUES models, we also provide a list of mass halo positions. These positions can be used as sources with the same properties by use of the following
End of explanation
## use isotropic emission from all sources
source.add( SourceIsotropicEmission() )
## set particle type to be injected
A, Z = 1, 1 # proton
source.add( SourceParticleType( nucleusId(A,Z) ) )
## set injected energy spectrum
Emin, Emax = 1*EeV, 1000*EeV
specIndex = -1
source.add( SourcePowerLawSpectrum( Emin, Emax, specIndex ) )
Explanation: additional source properties
End of explanation
filename_output = 'data/output_MW.txt'
obsPosition = Vector3d(0.5*size,0.5*size,0.5*size) # position of observer, MW is in center of constrained simulations
obsSize = 800*kpc ## physical size of observer sphere
## initialize observer that registers particles that enter into sphere of given size around its position
obs = Observer()
obs.add( ObserverSmallSphere( obsPosition, obsSize ) )
## write registered particles to output file
obs.onDetection( TextOutput( filename_output ) )
## choose to not further follow particles paths once detected
obs.setDeactivateOnDetection(True)
## add observer to module list
m.add(obs)
Explanation: Observer
To register particles, an observer has to be defined. In the provided constrained simulations the position of the Milky Way is, by definition, in the center of the volume.
End of explanation
N = 1000
m.showModules() ## optional, see summary of loaded modules
m.setShowProgress(True) ## optional, see progress during runtime
m.run(source, N, True) ## perform simulation with N particles injected from source
Explanation: finally run the simulation by
End of explanation |
1,884 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding new backends
Step1: In the sisl.viz framework, the rendering part of the visualization is completely detached from the processing part. Because of that, we have the flexibility to add new ways of generating the final product by registering what we call backends.
We will guide you through how you might customize this part of the framework. There are however, very distinct scenarios where you might find yourself. Each of the following sections explains the details of each situation, which are ordered in increasing complexity.
<div class="alert alert-info">
Note
Even if you want to go to the most complex situation, make sure that you first understand the simpler ones!
</div>
Extending an existing backend
This is by far the easiest situation. For example, sisl already provides a backend to plot bands with plotly, but you are not totally happy with the way it's done.
In this case, you grab the provided backend
Step2: And then create your own class that inherits from it
Step3: The only thing left to do now is to let BandsPlot know that there's a new backend available. This action is called registering a backend.
Step4: All good, you can already use your new backend!
Step5: Now that we know that it can be registered, we can try to add new functionality. But of course, we need to know how the backend works if we need to modify it. All backends to draw bands inherit from BandsBackend, and you can find some information there on how it works. Let's read its documentation
Step6: <div class="alert alert-info">
Note
This already gives you an overview of how the backend works. If you want to know the very fine details, you can always go to the source code.
</div>
So, clearly PlotlyBandsBackend already contains the draw_gap method, otherwise it would not work.
From the workflow description, we understand that each band is drawn with the _draw_band method, which calls the generic draw_line method. In plotly, line information is passed as dictionaries that contain several parameters. One of them is, for example, showlegend, which controls whether the line appears in the legend. We can use therefore our plotly knowledge to only show at the legend those bands that are below the fermi level
Step7: This is not very interesting, but it does its job at illustrating the fact that you can register a slightly modified backend.
You could use your fresh knowledge to, for example draw something after the bands are drawn
Step8: We finish this section by stating that
Step9: And also a generic bands backend
Step10: In these cases, your situation is not that bad. As you saw, the template backends make use of generic functions like draw_line as much as they can, so the effort to implement a plotly bands backend is reduced to those things that can't be generalized in that way.
One thing is for sure, we need to combine the two pieces to create the backend that we want
Step11: But is this enough? Let's see the documentation of BandsBackend one more time
Step12: So, there are to things that need to be implemented
Step13: Quite simple, isn't it? It seems like we are provided with the coordinates of the gap and then we can display it however we want.
Step14: Let's see our masterpiece
Step15: Beautiful!
So, to end this section, just two remarks
Step16: <div class="alert alert-info">
Note
You can always look at the help of each specific method to understand exactly what you need to implement. E.g. `help(Backend.draw_line)`.
</div>
To make it simple, let's say we want to create a backend for "text". This backend will store everything as text in its state, and it will print it on show. Here would be a minimal design
Step17: This could very well be our generic backend for the "text" framework. Now we can use the knowledge of the previous section to create a backend for the bands plot
Step18: And everything works great! Note that since the backend is independent of the processing logic, I can use any setting of BandsPlot and it will work | Python Code:
import sisl
import sisl.viz
# This is a toy band structure to illustrate the concepts treated throughout the notebook
geom = sisl.geom.graphene(orthogonal=True)
H = sisl.Hamiltonian(geom)
H.construct([(0.1, 1.44), (0, -2.7)], )
band_struct = sisl.BandStructure(H, [[0,0,0], [0.5,0,0]], 10, ["Gamma", "X"])
Explanation: Adding new backends
End of explanation
from sisl.viz.backends.plotly import PlotlyBandsBackend
Explanation: In the sisl.viz framework, the rendering part of the visualization is completely detached from the processing part. Because of that, we have the flexibility to add new ways of generating the final product by registering what we call backends.
We will guide you through how you might customize this part of the framework. There are however, very distinct scenarios where you might find yourself. Each of the following sections explains the details of each situation, which are ordered in increasing complexity.
<div class="alert alert-info">
Note
Even if you want to go to the most complex situation, make sure that you first understand the simpler ones!
</div>
Extending an existing backend
This is by far the easiest situation. For example, sisl already provides a backend to plot bands with plotly, but you are not totally happy with the way it's done.
In this case, you grab the provided backend:
End of explanation
class MyOwnBandsBackend(PlotlyBandsBackend):
pass
Explanation: And then create your own class that inherits from it:
End of explanation
from sisl.viz import BandsPlot
BandsPlot.backends.register("plotly_myown", MyOwnBandsBackend)
# Pass default=True if you want to make it the default backend
Explanation: The only thing left to do now is to let BandsPlot know that there's a new backend available. This action is called registering a backend.
End of explanation
band_struct.plot(backend="plotly_myown")
Explanation: All good, you can already use your new backend!
End of explanation
from sisl.viz.backends.templates import BandsBackend
print(BandsBackend.__doc__)
Explanation: Now that we know that it can be registered, we can try to add new functionality. But of course, we need to know how the backend works if we need to modify it. All backends to draw bands inherit from BandsBackend, and you can find some information there on how it works. Let's read its documentation:
End of explanation
# Create my new backend
class MyOwnBandsBackend(PlotlyBandsBackend):
def _draw_band(self, x, y, *args, **kwargs):
kwargs["showlegend"] = bool(y.max() < 0)
super()._draw_band(x, y, *args, **kwargs)
# And register it again
BandsPlot.backends.register("plotly_myown", MyOwnBandsBackend)
band_struct.plot(backend="plotly_myown")
Explanation: <div class="alert alert-info">
Note
This already gives you an overview of how the backend works. If you want to know the very fine details, you can always go to the source code.
</div>
So, clearly PlotlyBandsBackend already contains the draw_gap method, otherwise it would not work.
From the workflow description, we understand that each band is drawn with the _draw_band method, which calls the generic draw_line method. In plotly, line information is passed as dictionaries that contain several parameters. One of them is, for example, showlegend, which controls whether the line appears in the legend. We can use therefore our plotly knowledge to only show at the legend those bands that are below the fermi level:
End of explanation
class MyOwnBandsBackend(PlotlyBandsBackend):
def draw_bands(self, *args, **kwargs):
super().draw_bands(*args, **kwargs)
# Now that all bands are drawn, draw a very interesting line at -2eV.
self.add_hline(y=-2, line_color="red")
BandsPlot.backends.register("plotly_myown", MyOwnBandsBackend)
band_struct.plot(backend="plotly_myown")
Explanation: This is not very interesting, but it does its job at illustrating the fact that you can register a slightly modified backend.
You could use your fresh knowledge to, for example draw something after the bands are drawn:
End of explanation
from sisl.viz.backends.plotly import PlotlyBackend
Explanation: We finish this section by stating that:
To extend a backend, you have to have some knowledge about the corresponding framework (in this case plotly)
You don't need to create a new backend for every modification. You can modify plots interactively however you want after the plot is generated. Creating a backend that extends an existing one is only useful if there are changes that you will always want to do because of personal preference or because you are building a graphical interface, for example.
Creating a backend for a supported framework
Now imagine that, for some reason, sisl didn't provide a PlotlyBandsBackend. However, sisl does have a generic plotly backend:
End of explanation
from sisl.viz.backends.templates import BandsBackend
Explanation: And also a generic bands backend:
End of explanation
class MyPlotlyBandsBackend(BandsBackend, PlotlyBackend):
pass
Explanation: In these cases, your situation is not that bad. As you saw, the template backends make use of generic functions like draw_line as much as they can, so the effort to implement a plotly bands backend is reduced to those things that can't be generalized in that way.
One thing is for sure, we need to combine the two pieces to create the backend that we want:
End of explanation
print(BandsBackend.__doc__)
Explanation: But is this enough? Let's see the documentation of BandsBackend one more time:
End of explanation
help(BandsBackend.draw_gap)
Explanation: So, there are to things that need to be implemented: draw_spin_textured_band and draw_gap.
We won't bother to give our backend support for spin texture representations, but the draw_gap method is compulsory, so we have no choice. Let's understand what is expected from this method:
End of explanation
class MyPlotlyBandsBackend(BandsBackend, PlotlyBackend):
def draw_gap(self, ks, Es, color, name, **kwargs):
self.draw_line(
ks, Es, name=name,
text=f"{Es[1]- Es[0]:.2f} eV",
mode="lines+markers",
line={"color": color},
marker_symbol = ["triangle-up", "triangle-down"],
marker={"color": color, "size": 20},
**kwargs
)
# Make it the default backend for bands, since it is awesome.
BandsPlot.backends.register("plotly_fromscratch", MyPlotlyBandsBackend, default=True)
Explanation: Quite simple, isn't it? It seems like we are provided with the coordinates of the gap and then we can display it however we want.
End of explanation
band_struct.plot(gap=True)
Explanation: Let's see our masterpiece:
End of explanation
from sisl.viz.backends.templates import Backend
print(Backend.__doc__)
Explanation: Beautiful!
So, to end this section, just two remarks:
We have understood that if the framework is supported, the starting point is to combine the generic backend for the framework (PlotlyBackend) with the template backend of the specific plot (BandsBackend). Afterwards, we may have to tweak things a little.
Knowing how the generic framework backend works helps to make your code simpler. E.g. if you check PlotlyBackend.__doc__, you will find that we could have easily included some defaults for the axes titles.
Creating a backend for a non supported framework
Armed with our knowledge from the previous sections, we face the most difficult of the challenges: there's not even a generic backend for the framework that we want to use.
What we have to do is quite clear, develop our own generic backend. But how? Let's go to the Backend class for help:
End of explanation
class TextBackend(Backend):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.text = ""
def clear(self):
self.text = ""
def draw_line(self, x, y, name, **kwargs):
self.text += f"\nLINE: {name}\n{x}\n{y}"
def draw_scatter(self, x, y, name, **kwargs):
self.text += f"\nSCATTER: {name}\n{x}\n{y}"
def draw_on(self, other_backend):
# Set the text attribute to the other backend's text, but store ours
self_text = self.text
self.text = other_backend.text
# Make the plot draw the figure
self._plot.get_figure(backend=self._backend_name, clear_fig=False)
# Restore our text attribute
self.text = self_text
def show(self):
print(self.text)
Explanation: <div class="alert alert-info">
Note
You can always look at the help of each specific method to understand exactly what you need to implement. E.g. `help(Backend.draw_line)`.
</div>
To make it simple, let's say we want to create a backend for "text". This backend will store everything as text in its state, and it will print it on show. Here would be a minimal design:
End of explanation
class TextBandsBackend(BandsBackend, TextBackend):
def draw_gap(self, ks, Es, name, **kwargs):
self.draw_line(ks, Es, name=name)
# Register it, as always
BandsPlot.backends.register("text", TextBandsBackend)
band_struct.plot(backend="text", gap=True, _debug=True)
Explanation: This could very well be our generic backend for the "text" framework. Now we can use the knowledge of the previous section to create a backend for the bands plot:
End of explanation
bands_plot = band_struct.plot(backend="text", gap=True, _debug=True)
bands_plot.update_settings(
bands_range=[0,1],
custom_gaps=[{"from": "Gamma", "to": "Gamma"}, {"from": "X", "to": "X"}]
)
Explanation: And everything works great! Note that since the backend is independent of the processing logic, I can use any setting of BandsPlot and it will work:
End of explanation |
1,885 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Submodular Optimization & Influence Maximization
The content and examples in this documentation are built on top of the wonderful blog post at the following link. Blog
Step2: Spread Process - Independent Cascade (IC)
IM algorithms solve the optimization problem for a given spread or propagation process. We therefore first need to specify a function that simulates the spread from a given seed set across the network. We'll simulate the influence spread using the popular Independent Cascade (IC) model, although there are many others we could have chosen.
Independent Cascade starts with an initial set of seed nodes, $A_0$, that begin the diffusion process, and the process unfolds in discrete steps according to the following randomized rule: when a node $v$ first becomes active in step $t$, it is given a single chance to activate each of its currently inactive neighbors $w$, succeeding with some propagation probability $p$; whether or not it succeeds, $v$ makes no further activation attempts in later steps. The cascade ends once no new nodes are activated.
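As a tiny illustration of that rule, here is a hedged sketch of a single IC step; the plain adjacency-list dictionary and the flat probability p are assumptions made purely for this snippet, not the representation used in the code later on.

import random

def ic_step(out_neighbors, newly_active, already_active, p=0.1):
    # One discrete IC step: each node activated in the previous step gets a
    # single chance to activate each still-inactive neighbor with probability p.
    activated_now = set()
    for node in newly_active:
        for neighbor in out_neighbors.get(node, []):
            if neighbor not in already_active and random.random() < p:
                activated_now.add(neighbor)
    return activated_now

# toy usage: node 0 tries to activate its neighbors 1 and 2
print(ic_step({0: [1, 2]}, newly_active={0}, already_active={0}))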
Step4: We calculate the expected spread of a given seed set by taking the average over a large number of Monte Carlo simulations. The outer loop in the function iterates over each of these simulations and calculates the spread for each one; at the end, the mean across simulations is our unbiased estimate of the expected spread of the seed nodes we've provided. The actual number of simulations required is up for debate: through experiment I found 1,000 to work well enough, whereas 100 was too low. On the other hand, the paper even sets the simulation count as high as 10,000.
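In symbols, with $M$ Monte Carlo runs and $A^{(m)}_\infty(S)$ the final active set of run $m$ when seeding with $S$, the estimate is just the sample mean

$$\hat{\sigma}(S) \;=\; \frac{1}{M} \sum_{m=1}^{M} \bigl|A^{(m)}_\infty(S)\bigr|,$$

which is an unbiased estimator of the true expected spread $\sigma(S)$ and whose noise shrinks as $M$ grows.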
Within each Monte Carlo iteration, we simulate the spread of influence throughout the network over time, where a different "time period" occurs within each iteration of the while loop, which checks whether any new nodes were activated in the previous time step. If no new nodes were activated (i.e. new_active is an empty list and therefore evaluates to False), the independent cascade process terminates and the function moves on to the next simulation after recording the total spread for this simulation. The term total spread here refers to the number of nodes ultimately activated (some algorithms are framed in terms of the "additional spread", in which case we would subtract the size of the seed set, so the code would be amended to len(active) - len(seed_nodes)).
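Putting the pieces together, a hedged sketch of such a spread estimator is shown below; the function name, the python-igraph neighbor lookup and the single propagation probability prob are assumptions made for illustration and may differ from the exact implementation that follows.

import numpy as np

def compute_independent_cascade(graph, seed_nodes, prob=0.1, n_iters=1000):
    # Monte Carlo estimate of the expected spread of `seed_nodes` under IC
    # (assumes `graph` is a python-igraph Graph).
    total_spread = 0
    for _ in range(n_iters):
        active = list(seed_nodes)        # every node activated so far
        new_active = list(seed_nodes)    # nodes activated in the previous step
        while new_active:                # empty list -> the cascade has died out
            activated = []
            for node in new_active:
                neighbors = graph.neighbors(node, mode='out')
                success = np.random.uniform(0, 1, len(neighbors)) < prob
                activated += list(np.extract(success, neighbors))
            new_active = list(set(activated) - set(active))
            active += new_active
        total_spread += len(active)      # use len(active) - len(seed_nodes) for "additional spread"
    return total_spread / n_iters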
Greedy Algorithm
With our spread function in hand, we can now turn to the IM algorithms themselves. We begin with the Greedy algorithm. The method is referred to as greedy because it adds the node that currently provides the best marginal spread to our solution set, without considering whether that choice is actually optimal in the long run. To elaborate, the process is: start with an empty seed set; at each step, estimate the marginal spread of every node not yet selected, add the node with the largest marginal gain, and repeat until $k$ seed nodes have been chosen (a minimal sketch of this loop is given right below).
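Here is that sketch, reusing the compute_independent_cascade sketch from above; the signature and return values are assumptions, and the greedy() function defined later may differ in detail.

import time

def greedy(graph, k, prob=0.1, n_iters=1000):
    solution, spreads, elapsed = [], [], []
    start = time.time()
    for _ in range(k):
        best_node, best_spread = None, float('-inf')
        # evaluate the marginal benefit of every node not yet selected
        for node in set(range(graph.vcount())) - set(solution):
            spread = compute_independent_cascade(graph, solution + [node], prob, n_iters)
            if spread > best_spread:
                best_node, best_spread = node, spread
        solution.append(best_node)
        spreads.append(best_spread)
        elapsed.append(time.time() - start)
    return solution, spreads, elapsed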
Step5: Submodular Optimization
Now that we have a brief understanding of the IM problem and have taken a first stab at solving it, let's take a step back and formally discuss submodular optimization. A function $f$ is said to be submodular if it satisfies the diminishing returns property. More formally, given a ground set $V$, a set function $f: 2^V \rightarrow \mathbb{R}$ is submodular if for every $A \subseteq B \subseteq V$ and every element $e \in V \setminus B$ we have $f(A \cup \{e\}) - f(A) \geq f(B \cup \{e\}) - f(B)$; in words, adding an element to a smaller set helps at least as much as adding it to a larger superset.
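The reason this matters here is that the expected spread $\sigma(S)$ under the IC model is monotone and submodular, so the classic Nemhauser-Wolsey-Fisher result applies to the Greedy algorithm above: if $A_k$ is the greedily built seed set of size $k$ and $A_k^*$ the optimal one, then (ignoring the Monte Carlo estimation error)

$$\sigma(A_k) \;\geq\; \Bigl(1 - \frac{1}{e}\Bigr)\,\sigma(A_k^*) \;\approx\; 0.63\,\sigma(A_k^*),$$

so even though Greedy is short-sighted, it can never fall below roughly 63% of the best achievable spread.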
Step7: Cost Effective Lazy Forward (CELF) Algorithm
The CELF algorithm was developed by Leskovec et al. (2007); in other places it is referred to as the Lazy Greedy algorithm. Although the Greedy algorithm is much quicker than solving the full problem, it is still very slow when used on realistically sized networks. CELF was one of the first significant subsequent improvements.
CELF exploits the submodularity of the spread function, which implies that the marginal spread of a given node in one iteration of the Greedy algorithm cannot be any larger than its marginal spread in the previous iteration. This lets us choose the nodes for which we evaluate the spread function in a more sophisticated manner, rather than simply evaluating the spread for all nodes. More specifically, in the first round we calculate the spread for all nodes (like Greedy) and store them in a list/heap, which is then sorted. Naturally, the top node is added to the seed set in the first iteration and then removed from the list/heap. In the next iteration, only the spread for the top node is recalculated. If, after re-sorting, that node remains at the top of the list/heap, then it must have the highest marginal gain of all nodes. Why? Because if we recalculated the marginal gains of all other nodes, they could only be lower than the (stale) values currently in the list (due to submodularity), and therefore the "top node" would remain on top. This process continues, finding the node that remains on top after recalculating its marginal spread and adding it to the seed set. By avoiding spread calculations for many nodes, CELF turns out to be much faster than Greedy, which we'll show below.
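Written out, the property being exploited is that for any node $v$ and the nested seed sets $A_{i-1} \subseteq A_i$ produced by the algorithm,

$$\sigma(A_i \cup \{v\}) - \sigma(A_i) \;\leq\; \sigma(A_{i-1} \cup \{v\}) - \sigma(A_{i-1}),$$

so a marginal gain computed in an earlier round is always an upper bound on the current one, which is exactly what makes skipping the re-evaluation of lower-ranked nodes safe.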
The celf() function below that implements the algorithm, is split into two components. The first component, like the Greedy algorithm, iterates over each node in the graph and selects the node with the highest spread into the seed set. However, it also stores the spreads of each node for use in the second component.
The second component iterates to find the remaining $k-1$ seed nodes. Within each iteration, the algorithm evaluates the marginal spread of the top node. If, after resorting, the top node stays in place then that node is selected as the next seed node. If not, then the marginal spread of the new top node is evaluated and so on.
Like greedy(), the function returns the optimal seed set, the resulting spread and the time taken to compute each iteration. In addition, it also returns the list lookups, which keeps track of how many spread calculations were performed at each iteration. We didn't bother doing this for greedy() because we know the number of spread calculations in iteration $i$ is $N-i-1$.
Step8: Larger Network
Now that we know both algorithms at least work correctly for a simple network for which we know the answer, we move on to a more generic graph to compare the performance and efficiency of each method. Any igraph network object will work, but for the purposes of this post we will use a random Erdos-Renyi graph with 100 nodes and 300 edges. The exact type of graph doesn't matter as the main points hold for any graph. Rather than explicitly defining the nodes and edges like we did above, here we make use of the .Erdos_Renyi() method to automatically create the graph.
Step9: Given the graph, we again compare both optimizers with the same parameters. As for the n_iters parameter, it is not uncommon to see it set to a much higher number in the literature, such as 10,000, to get a more accurate estimate of spread; we chose a lower number here so we don't have to wait as long for the results
Step10: Thankfully, both optimization methods yield the same solution set.
In the next few code chunks, we will use some of the information we've stored while performing the optimization to make a more thorough comparison. First, we plot the resulting expected spread from both optimization methods. We can see both methods yield the same expected spread.
Step11: We now compare the speed of each algorithm. The plot below shows that the computation time of Greedy is larger than CELF for all seed set sizes greater than 1 and the difference in computational times grows exponentially with the size of the seed set. This is because Greedy must compute the spread of $N-i-1$ nodes in iteration $i$ whereas CELF generally performs far fewer spread computations after the first iteration.
Step12: We can get some further insight into the superior computational efficiency of CELF by observing how many "node lookups" it had to perform during each of the 10 rounds. The list that records this information shows that the first round iterated over all 100 nodes of the network. This is identical to Greedy which is why the graph above shows that the running time is equivalent for $k=1$. However, for subsequent iterations, there are far fewer spread computations because the marginal spread of a node in a previous iteration is a good indicator for its marginal spread in a future iteration. Note the relationship between the values below and the corresponding computation time presented in the graph above. There is a visible jump in the blue line for higher values of the "node lookups". This again solidifies the fact that while CELF produces identical solution set as Greedy, it usually has enormous speedups over the standard Greedy procedure. | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import time
import numpy as np
import matplotlib.pyplot as plt
from igraph import Graph # pip install python-igraph
%watermark -a 'Ethen' -d -t -v -p igraph,numpy,matplotlib
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Submodular-Optimization-&-Influence-Maximization" data-toc-modified-id="Submodular-Optimization-&-Influence-Maximization-1"><span class="toc-item-num">1 </span>Submodular Optimization & Influence Maximization</a></span><ul class="toc-item"><li><span><a href="#Influence-Maximization-(IM)" data-toc-modified-id="Influence-Maximization-(IM)-1.1"><span class="toc-item-num">1.1 </span>Influence Maximization (IM)</a></span></li><li><span><a href="#Getting-Started" data-toc-modified-id="Getting-Started-1.2"><span class="toc-item-num">1.2 </span>Getting Started</a></span></li><li><span><a href="#Spread-Process---Independent-Cascade-(IC)" data-toc-modified-id="Spread-Process---Independent-Cascade-(IC)-1.3"><span class="toc-item-num">1.3 </span>Spread Process - Independent Cascade (IC)</a></span></li><li><span><a href="#Greedy-Algorithm" data-toc-modified-id="Greedy-Algorithm-1.4"><span class="toc-item-num">1.4 </span>Greedy Algorithm</a></span></li><li><span><a href="#Submodular-Optimization" data-toc-modified-id="Submodular-Optimization-1.5"><span class="toc-item-num">1.5 </span>Submodular Optimization</a></span></li><li><span><a href="#Cost-Effective-Lazy-Forward-(CELF)-Algorithm" data-toc-modified-id="Cost-Effective-Lazy-Forward-(CELF)-Algorithm-1.6"><span class="toc-item-num">1.6 </span>Cost Effective Lazy Forward (CELF) Algorithm</a></span></li><li><span><a href="#Larger-Network" data-toc-modified-id="Larger-Network-1.7"><span class="toc-item-num">1.7 </span>Larger Network</a></span></li><li><span><a href="#Conclusion" data-toc-modified-id="Conclusion-1.8"><span class="toc-item-num">1.8 </span>Conclusion</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
source = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 5]
target = [2, 3, 4, 5, 6, 7, 8, 9, 2, 3, 4, 5, 6, 7, 8, 9, 6, 7, 8, 9]
# create a directed graph
graph = Graph(directed=True)
# add the nodes/vertices (the two are used interchangeably) and edges
# 1. the .add_vertices method adds the number of vertices
# to the graph and igraph uses integer vertex id starting from zero
# 2. to add edges, we call the .add_edges method, where edges
# are specified by a tuple of integers.
graph.add_vertices(10)
graph.add_edges(zip(source, target))
print('vertices count:', graph.vcount())
print('edges count:', graph.ecount())
# a graph api should allow us to retrieve the neighbors of a node
print('neighbors: ', graph.neighbors(2, mode='out'))
# or create an adjacency list of the graph,
# as we can see node 0 and 1 are the most influential
# as the two nodes are connected to a lot of other nodes
graph.get_adjlist()
Explanation: Submodular Optimization & Influence Maximization
The content and examples in this documentation are built on top of the wonderful blog post at the following link. Blog: Influence Maximization in Python - Greedy vs CELF.
Influence Maximization (IM)
Influence Maximization (IM) is a field of network analysis with a lot of applications - from viral marketing to disease modeling and public health interventions. IM is the task of finding a small subset of nodes in a network such that the resulting "influence" propagating from that subset reaches the largest number of nodes in the network. "Influence" represents anything that can be passed across connected peers within a network, such as information, behavior, disease or product adoption. To make it even more concrete, IM can be used to answer the question:
If we can try to convince a subset of individuals to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target?
Kempe et al. (2003) were the first to formalize IM as the following combinatorial optimization problem: Given a network with $n$ nodes and given a "spreading" or propagation process on that network, choose a "seed set" $S$ of size $k<n$ to maximize the number of nodes in the network that are ultimately influenced.
Solving this problem turns out to be extremely computationally burdensome. For example, in a relatively small network of 1,000 nodes, there are ${n\choose k} \approx 8$ trillion different possible candidate seed sets of size $k=5$, which is impossible to solve directly even on state-of-the-art high performance computing resources. Consequently, over the last 15 years, researchers have been actively trying to find approximate solutions to the problem that can be solved quickly. This notebook walks through:
How to implement two of the earliest and most fundamental approximation algorithms in Python - the Greedy and the CELF algorithms - and compare their performance.
We will also spend some time discussing the field of submodular optimization, as it turns out, the combinatorial optimization problem we described above is submodular.
Getting Started
We begin by loading a few modules. There are many popular network modeling packages, but we'll use the igraph package. Don't worry if you're not acquainted with the library, we will explain the syntax, and if you like, you can even swap it out with a different graph library that you prefer.
We'll first test these algorithms to see if they can produce the correct solution for a simple example for which we know which two nodes are the most influential. Below we create a 10-node/20-edge directed igraph network object. This artificially created network is designed to ensure that nodes 0 and 1 are the most influential. We do this by creating 8 outgoing links from each of these nodes compared to only 1 outgoing link for each of the other 8 nodes. We also ensure nodes 0 and 1 are not neighbors so that having one in the seed set does not make the other redundant.
End of explanation
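# as a quick sanity check, the "8 trillion" figure quoted above is just the
# binomial coefficient C(1000, 5)
from math import comb
print(f'{comb(1000, 5):,}')  # roughly 8.25 trillion candidate seed sets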
def compute_independent_cascade(graph, seed_nodes, prob, n_iters=1000):
    total_spread = 0
# simulate the spread process over multiple runs
for i in range(n_iters):
np.random.seed(i)
active = seed_nodes[:]
new_active = seed_nodes[:]
# for each newly activated nodes, find its neighbors that becomes activated
while new_active:
activated_nodes = []
for node in new_active:
neighbors = graph.neighbors(node, mode='out')
success = np.random.uniform(0, 1, len(neighbors)) < prob
activated_nodes += list(np.extract(success, neighbors))
# ensure the newly activated nodes doesn't already exist
# in the final list of activated nodes before adding them
# to the final list
new_active = list(set(activated_nodes) - set(active))
active += new_active
        total_spread += len(active)
    return total_spread / n_iters
# assuming we start with 1 seed node
seed_nodes = [0]
compute_independent_cascade(graph, seed_nodes, prob=0.2)
Explanation: Spread Process - Independent Cascade (IC)
IM algorithms solve the optimization problem for a given spread or propagation process. We therefore first need to specify a function that simulates the spread from a given seed set across the network. We'll simulate the influence spread using the popular Independent Cascade (IC) model, although there are many others we could have chosen.
Independent Cascade starts by having an initial set of seed nodes, $A_0$, that start the diffusion process, and the process unfolds in discrete steps according to the following randomized rule:
When node $v$ first becomes active in step $t$, it is given a single chance to activate each currently inactive
neighbor $w$; this process succeeds with a probability $p_{v,w}$, a parameter of the system — independently of the history thus far. If $v$ succeeds, then $w$ will become active in step $t + 1$; but whether or not $v$ succeeds in this current step $t$, it cannot make any further attempts to activate $w$ in subsequent rounds. This process runs until no more activations are possible. Here, we assume that the nodes are progressive, meaning the node will only go from inactive to active, but not the other way around.
End of explanation
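# a quick illustration: the estimated spread is larger for a hub node such as
# node 0 than for a peripheral node such as node 2, and larger still for the pair
for seeds in ([0], [2], [0, 1]):
    print(seeds, compute_independent_cascade(graph, seeds, prob=0.2))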
def greedy(graph, k, prob=0.2, n_iters=1000):
    """
    Find k nodes with the largest spread (determined by IC) from an igraph graph
    using the Greedy Algorithm.
    """
# we will be storing elapsed time and spreads along the way, in a setting where
# we only care about the final solution, we don't need to record these
# additional information
elapsed = []
spreads = []
solution = []
start_time = time.time()
for _ in range(k):
best_node = -1
best_spread = -np.inf
# loop over nodes that are not yet in our final solution
# to find biggest marginal gain
nodes = set(range(graph.vcount())) - set(solution)
for node in nodes:
spread = compute_independent_cascade(graph, solution + [node], prob, n_iters)
if spread > best_spread:
best_spread = spread
best_node = node
solution.append(best_node)
spreads.append(best_spread)
elapse = round(time.time() - start_time, 3)
elapsed.append(elapse)
return solution, spreads, elapsed
# the result tells us greedy algorithm was able to find the two most influential
# node, node 0 and node 1
k = 2
prob = 0.2
n_iters = 1000
greedy_solution, greedy_spreads, greedy_elapsed = greedy(graph, k, prob, n_iters)
print('solution: ', greedy_solution)
print('spreads: ', greedy_spreads)
print('elapsed: ', greedy_elapsed)
Explanation: We calculate the expected spread of a given seed set by taking the average over a large number of Monte Carlo simulations. The outer loop in the function iterates over each of these simulations and calculates the spread for each iteration; at the end, the mean across iterations is our unbiased estimate of the expected spread of the seed nodes we've provided. The actual number of simulations required is up for debate; through experiment I found 1,000 to work well enough, whereas 100 was too low. On the other hand, the paper even set the simulation number up to 10,000.
Within each Monte Carlo iteration, we simulate the spread of influence throughout the network over time, where a different "time period" occurs within each of the while loop iterations, which checks whether any new nodes were activated in the previous time step. If no new nodes were activated (when new_active is an empty list and therefore evaluates to False) then the independent cascade process terminates, and the function moves on to the next simulation after recording the total spread for this simulation. The term total spread here refers to the number of nodes ultimately activated (some algorithms are framed in terms of the "additional spread", in which case we would subtract the size of the seed set, so the code would be amended to len(active) - len(seed_nodes)).
Greedy Algorithm
With our spread function in hand, we can now turn to the IM algorithms themselves. We begin with the Greedy algorithm. The method is referred to as greedy because it adds the node that currently provides the best spread to our solution set without considering whether that is actually the optimal choice in the long run. To elaborate, the process is:
We start with an empty seed set/nodes.
For all the nodes that are not in the seed set/nodes, we find the node with the largest spread and add it to the seed set
We repeat step 2 until $k$ seed nodes are found.
This algorithm only needs to calculate the spread of $\sum_{i=0}^k (n-i)\approx kn$ nodes, which is just 5,000 in the case of our 1,000 node and $k=5$ network (a lot less than 8 trillion!). Of course, this computational improvement comes at the cost of the resulting seed set only being an approximate solution to the IM problem, because it only considers the incremental spread of the $k$ nodes individually rather than combined. Fortunately, this seemingly naive greedy algorithm is theoretically guaranteed to choose a seed set whose spread will be at least 63% of the spread of the optimal seed set. The proof of the guarantee relies heavily on the "submodular" property of spread functions, which will be explained in more detail in a later section.
The following greedy() function implements the algorithm. It produces the optimal set of k seed nodes for the graph graph. Apart from returning the optimal seed set, it also records average spread of that seed set along with a list showing the cumulative time taken to complete each iteration, we will use these information to compare with a different algorithm, CELF, in later section.
End of explanation
# if we check the solutions from the greedy algorithm we've
# implemented above, we can see that our solution is in fact
# submodular, as the spread we get is in diminishing order
np.diff(np.hstack([np.array([0]), greedy_spreads]))
Explanation: Submodular Optimization
Now that we have a brief understanding of the IM problem and have taken a first stab at solving it, let's take a step back and formally discuss submodular optimization. A function $f$ is said to be submodular if it satisfies the diminishing return property. More formally, suppose we are given a ground set $V$ and a function $f:2^V \rightarrow \mathbb{R}$ (the function's domain is the power set $2^V$, since a subset can either contain or not contain each element of $V$). The submodular property is defined as:
\begin{align}
f(A \cup {i}) - f(A) \geq f(B \cup {i}) - f(B)
\end{align}
For any $A \subseteq B \subseteq V$ and $i \in V \setminus B$. Hence adding any element $i$ to $A$, which is a subset of $B$, yields at least as much value as adding $i$ to $B$. In other words, the marginal gain of adding $i$ to $A$ should be greater than or equal to the marginal gain of adding $i$ to $B$ if $A$ is a subset of $B$.
The next property is known as monotone. We say that a submodular function is monotone if for any $A \subseteq B
\subseteq V$, we have $f(A) \leq f(B)$. This means that adding more elements to a set cannot decrease its value.
For example: Let $f(X)=max(X)$. We have the set $X= {1,2,3,4,5}$, and we choose $A={1,2}$ and $B={1,2,5}$. Given this information, we can see that $f(A)=2$ and $f(B)=5$, and the marginal gains of items 3 and 4 are:
\begin{align}
f(3 \, | \, A) = 1 \nonumber \\
f(4 \, | \, A) = 2 \nonumber \\
f(3 \, | \, B) = 0 \nonumber \\
f(4 \, | \, B) = 0
\end{align}
Here we use the shorthand $f(i \, | \, A)$, to denote $f(A \cup {i}) - f(A)$.
Note that $f(i \, | \, A) \ge f(i \, | \, B)$ for any choice of $i$, $A$ and $B$. This is because $f$ is submodular and monotone. To recap, submodular functions has the diminishing return property saying adding an element to a larger set results in smaller marginal increase in the value of $f$ (compared to adding the element to a smaller set). And monotone ensures that adding additional element to the solution set does not decrease the function's value.
Since the functions we're dealing with are monotone, the set with the maximum value is always the one that includes everything from the ground set $V$. But what we're actually interested in is when we impose a cardinality constraint - that is, finding the set of size at most k that maximizes the utility. Formally:
\begin{align}
A^* = \underset{A: |A| \leq k}{\text{argmax}} \,\, f(A)
\end{align}
For instance, in our IM problem, we are interested in finding the subset of $k$ nodes that generates the largest influence. The greedy algorithm we showed above is one approach to solving this combinatorial problem.
Given a ground set $V$, if we're interested in populating a solution set of size $k$.
The algorithm starts with the empty set $A_0$
Then repeats the following step for $i = 0, ... , (k-1)$:
\begin{align}
A_{i+1} = A_{i} \cup { \underset{v \in V \setminus A_i}{\text{argmax}} \,\, f(A_i \cup {v}) }
\end{align}
From a theoretical standpoint, this procedure guarantees a solution whose score is at least $1 - 1/e \approx 0.63$ of the optimal set's score.
End of explanation
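# quick numerical check of the f(X) = max(X) example above: the marginal gain
# of adding an item to the smaller set A is never smaller than adding it to B
f = max
A, B = {1, 2}, {1, 2, 5}
for i in (3, 4):
    gain_A = f(A | {i}) - f(A)
    gain_B = f(B | {i}) - f(B)
    print(f'f({i} | A) = {gain_A}, f({i} | B) = {gain_B}')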
import heapq
def celf(graph, k, prob, n_iters=1000):
    """
    Find k nodes with the largest spread (determined by IC) from an igraph graph
    using the Cost Effective Lazy Forward Algorithm, a.k.a. the Lazy Greedy Algorithm.
    """
start_time = time.time()
# find the first node with greedy algorithm:
# python's heap is a min-heap, thus
# we negate the spread to get the node
# with the maximum spread when popping from the heap
gains = []
for node in range(graph.vcount()):
spread = compute_independent_cascade(graph, [node], prob, n_iters)
heapq.heappush(gains, (-spread, node))
# we pop the heap to get the node with the best spread,
    # and negate it again when storing, so that we keep track of the actual spread
spread, node = heapq.heappop(gains)
solution = [node]
spread = -spread
spreads = [spread]
# record the number of times the spread is computed
lookups = [graph.vcount()]
elapsed = [round(time.time() - start_time, 3)]
for _ in range(k - 1):
node_lookup = 0
matched = False
while not matched:
node_lookup += 1
# here we need to compute the marginal gain of adding the current node
# to the solution, instead of just the gain, i.e. we need to subtract
# the spread without adding the current node
_, current_node = heapq.heappop(gains)
spread_gain = compute_independent_cascade(
graph, solution + [current_node], prob, n_iters) - spread
# check if the previous top node stayed on the top after pushing
# the marginal gain to the heap
heapq.heappush(gains, (-spread_gain, current_node))
matched = gains[0][1] == current_node
# spread stores the cumulative spread
spread_gain, node = heapq.heappop(gains)
spread -= spread_gain
solution.append(node)
spreads.append(spread)
lookups.append(node_lookup)
elapse = round(time.time() - start_time, 3)
elapsed.append(elapse)
return solution, spreads, elapsed, lookups
k = 2
prob = 0.2
n_iters = 1000
celf_solution, celf_spreads, celf_elapsed, celf_lookups = celf(graph, k, prob, n_iters)
print('solution: ', celf_solution)
print('spreads: ', celf_spreads)
print('elapsed: ', celf_elapsed)
print('lookups: ', celf_lookups)
Explanation: Cost Effective Lazy Forward (CELF) Algorithm
CELF Algorithm was developed by Leskovec et al. (2007). In other places, this is referred to as the Lazy Greedy Algorithm. Although the Greedy algorithm is much quicker than solving the full problem, it is still very slow when used on realistically sized networks. CELF was one of the first significant subsequent improvements.
CELF exploits the sub-modularity property of the spread function, which implies that the marginal spread of a given node in one iteration of the Greedy algorithm cannot be any larger than its marginal spread in the previous iteration. This helps us to choose the nodes for which we evaluate the spread function in a more sophisticated manner, rather than simply evaluating the spread for all nodes. More specifically, in the first round, we calculate the spread for all nodes (like Greedy) and store them in a list/heap, which is then sorted. Naturally, the top node is added to the seed set in the first iteration, and then removed from the list/heap. In the next iteration, only the spread for the top node is calculated. If, after resorting, that node remains at the top of the list/heap, then it must have the highest marginal gain of all nodes. Why? Because we know that if we calculated the marginal gain for all other nodes, they'd be lower than the value currently in the list (due to submodularity) and therefore the "top node" would remain on top. This process continues, finding the node that remains on top after calculating its marginal spread, and then adding it to the seed set. By avoiding calculating the spread for many nodes, CELF turns out to be much faster than Greedy, which we'll show below.
The celf() function below that implements the algorithm, is split into two components. The first component, like the Greedy algorithm, iterates over each node in the graph and selects the node with the highest spread into the seed set. However, it also stores the spreads of each node for use in the second component.
The second component iterates to find the remaining $k-1$ seed nodes. Within each iteration, the algorithm evaluates the marginal spread of the top node. If, after resorting, the top node stays in place then that node is selected as the next seed node. If not, then the marginal spread of the new top node is evaluated and so on.
Like greedy(), the function returns the optimal seed set, the resulting spread and the time taken to compute each iteration. In addition, it also returns the list lookups, which keeps track of how many spread calculations were performed at each iteration. We didn't bother doing this for greedy() because we know the number of spread calculations in iteration $i$ is $N-i-1$.
End of explanation
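# both optimizers should agree on the two hub nodes of the toy graph
print('greedy solution:', sorted(greedy_solution), ' celf solution:', sorted(celf_solution))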
np.random.seed(1234)
graph = Graph.Erdos_Renyi(n=100, m=300, directed=True)
Explanation: Larger Network
Now that we know both algorithms at least work correctly for a simple network for which we know the answer, we move on to a more generic graph to compare the performance and efficiency of each method. Any igraph network object will work, but for the purposes of this post we will use a random Erdos-Renyi graph with 100 nodes and 300 edges. The exact type of graph doesn't matter as the main points hold for any graph. Rather than explicitly defining the nodes and edges like we did above, here we make use of the .Erdos_Renyi() method to automatically create the graph.
End of explanation
k = 10
prob = 0.1
n_iters = 1500
celf_solution, celf_spreads, celf_elapsed, celf_lookups = celf(graph, k, prob, n_iters)
greedy_solution, greedy_spreads, greedy_elapsed = greedy(graph, k, prob, n_iters)
# print resulting solution
print('celf output: ' + str(celf_solution))
print('greedy output: ' + str(greedy_solution))
Explanation: Given the graph, we again compare both optimizers with the same parameters. As for the n_iters parameter, it is not uncommon to see it set to a much higher number in the literature, such as 10,000, to get a more accurate estimate of spread; we chose a lower number here so we don't have to wait as long for the results
End of explanation
# change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
lw = 4
fig = plt.figure(figsize=(9,6))
ax = fig.add_subplot(111)
ax.plot(range(1, len(greedy_spreads) + 1), greedy_spreads, label="Greedy", color="#FBB4AE", lw=lw)
ax.plot(range(1, len(celf_spreads) + 1), celf_spreads, label="CELF", color="#B3CDE3", lw=lw)
ax.legend(loc=2)
plt.ylabel('Expected Spread')
plt.title('Expected Spread')
plt.xlabel('Size of Seed Set')
plt.tick_params(bottom=False, left=False)
plt.show()
Explanation: Thankfully, both optimization methods yield the same solution set.
In the next few code chunks, we will use some of the information we've stored while performing the optimization to make a more thorough comparison. First, we plot the resulting expected spread from both optimization methods. We can see both methods yield the same expected spread.
End of explanation
lw = 4
fig = plt.figure(figsize=(9,6))
ax = fig.add_subplot(111)
ax.plot(range(1, len(greedy_elapsed) + 1), greedy_elapsed, label="Greedy", color="#FBB4AE", lw=lw)
ax.plot(range(1, len(celf_elapsed) + 1), celf_elapsed, label="CELF", color="#B3CDE3", lw=lw)
ax.legend(loc=2)
plt.ylabel('Computation Time (Seconds)')
plt.xlabel('Size of Seed Set')
plt.title('Computation Time')
plt.tick_params(bottom=False, left=False)
plt.show()
Explanation: We now compare the speed of each algorithm. The plot below shows that the computation time of Greedy is larger than CELF for all seed set sizes greater than 1 and the difference in computational times grows exponentially with the size of the seed set. This is because Greedy must compute the spread of $N-i-1$ nodes in iteration $i$ whereas CELF generally performs far fewer spread computations after the first iteration.
End of explanation
celf_lookups
Explanation: We can get some further insight into the superior computational efficiency of CELF by observing how many "node lookups" it had to perform during each of the 10 rounds. The list that records this information shows that the first round iterated over all 100 nodes of the network. This is identical to Greedy which is why the graph above shows that the running time is equivalent for $k=1$. However, for subsequent iterations, there are far fewer spread computations because the marginal spread of a node in a previous iteration is a good indicator for its marginal spread in a future iteration. Note the relationship between the values below and the corresponding computation time presented in the graph above. There is a visible jump in the blue line for higher values of the "node lookups". This again solidifies the fact that while CELF produces identical solution set as Greedy, it usually has enormous speedups over the standard Greedy procedure.
End of explanation |
1,886 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>The K-means section of this notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Clustering
Step1: Introducing K-Means
K Means is an algorithm for unsupervised clustering
Step2: By eye, it is relatively easy to pick out the four clusters. If you were to perform an exhaustive search for the different segmentations of the data, however, the search space would be exponential in the number of points. Fortunately, there is a well-known Expectation Maximization (EM) procedure which scikit-learn implements, so that KMeans can be solved relatively quickly.
Step3: The algorithm identifies the four clusters of points in a manner very similar to what we would do by eye!
The K-Means Algorithm
Step4: This algorithm will (often) converge to the optimal cluster centers.
KMeans Caveats
The convergence of this algorithm is not guaranteed; for that reason, scikit-learn by default uses a large number of random initializations and finds the best results.
Also, the number of clusters must be set beforehand... there are other clustering algorithms for which this requirement may be lifted.
Clusters must be of similar size, because random initialization will prefer the larger clusters by default, and the smaller clusters will be ignored
Enter .. networks!
Let's take a step back and talk about graph definitions for a second. A Graph (or "network") is a set of nodes (or "verticies") that are connected to each other via edges (or "links") | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting defaults
import seaborn as sns; sns.set()
Explanation: <small><i>The K-means section of this notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Clustering: K-Means In-Depth
Here we'll explore K Means Clustering, which is an unsupervised clustering technique.
We'll start with our standard set of initial imports
End of explanation
from sklearn.datasets.samples_generator import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], s=50);
Explanation: Introducing K-Means
K Means is an algorithm for unsupervised clustering: that is, finding clusters in data based on the data attributes alone (not the labels).
K Means is a relatively easy-to-understand algorithm. It searches for cluster centers which are the mean of the points within them, such that every point is closest to the cluster center it is assigned to.
Let's look at how KMeans operates on the simple clusters we looked at previously. To emphasize that this is unsupervised, we'll not plot the colors of the clusters:
End of explanation
from sklearn.cluster import KMeans
est = KMeans(4) # 4 clusters
est.fit(X)
y_kmeans = est.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='rainbow');
Explanation: By eye, it is relatively easy to pick out the four clusters. If you were to perform an exhaustive search for the different segmentations of the data, however, the search space would be exponential in the number of points. Fortunately, there is a well-known Expectation Maximization (EM) procedure which scikit-learn implements, so that KMeans can be solved relatively quickly.
End of explanation
from networkplots import plot_kmeans_interactive
plot_kmeans_interactive();
Explanation: The algorithm identifies the four clusters of points in a manner very similar to what we would do by eye!
The K-Means Algorithm: Expectation Maximization
K-Means is an example of an algorithm which uses an Expectation-Maximization approach to arrive at the solution.
Expectation-Maximization is a two-step approach which works as follows:
Guess some cluster centers
Repeat until converged
A. Assign points to the nearest cluster center
B. Set the cluster centers to the mean
Let's quickly visualize this process:
End of explanation
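# a minimal sketch of a single E/M round of K-means in plain numpy, to make the
# two-step loop described above concrete (assumes every cluster keeps at least one point)
def kmeans_step(X, centers):
    # E-step: assign every point to its nearest cluster center
    labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1), axis=1)
    # M-step: move each center to the mean of the points assigned to it
    new_centers = np.array([X[labels == j].mean(axis=0) for j in range(len(centers))])
    return labels, new_centers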
from bokeh.io import output_notebook
# This line is required for the plots to appear in the notebooks
output_notebook()
%load_ext autoreload
%autoreload 2
import networkplots
networkplots.explore_phenograph()
Explanation: This algorithm will (often) converge to the optimal cluster centers.
KMeans Caveats
The convergence of this algorithm is not guaranteed; for that reason, scikit-learn by default uses a large number of random initializations and finds the best results.
Also, the number of clusters must be set beforehand... there are other clustering algorithms for which this requirement may be lifted.
Clusters must be of similar size, because random initialization will prefer the larger clusters by default, and the smaller clusters will be ignored
Enter .. networks!
Let's take a step back and talk about graph definitions for a second. A Graph (or "network") is a set of nodes (or "vertices") that are connected to each other via edges (or "links"):
A graph $G = (V, E)$ is a set of vertices $V$ and edges $E$
Graphs can be directed if the edges point in specific directions between edges:
Or graphs can be undirected if the edges have no direction:
In this class, we'll be using undirected graphs.
Community detection!
Finding community structure within networks is a well-established problem in the social sciences. Given pairwise connections between people, can you guess what the local communities are? How can you partition the graph into a bunch of mini-graphs?
PhenoGraph
PhenoGraph creates a $k$-nearest neighbor graph, where each cell is connected to the top $k$ cells it is closest to (in our case, the ones it is closest to in Spearman correlation)
Notice that $k$ here indicates the number of connections each cell is allowed to have, compared to $k$-means clustering where $k$ indicated how many clusters you thought were in your data.
Then, after graph creation, PhenoGraph detects the number of communities using a measure called "Modularity," which measures how connected a subgroup is, compared to if the edges between nodes were randomly distributed
Modularity ($Q$) ranges from -1 to 1, where -1 means the subgraphs aren't connected to each other and 1 means the subgraphs are maximally connected
Modularity has a resolution limit. The smallest group it can find is limited by the total number of connections (edges) in the graph. If the number of edges is $m$, then the smallest findable module is $\sqrt{2m}$. How does the number of neighbors $k$ affect the total number of edges?
This is an unsupervised algorithm - you don't need to know the number of groups in the data before you try using it
We'll be using the phenograph package from Dana Pe'er's lab, which was originally published in this paper: http://www.cell.com/cell/abstract/S0092-8674(15)00637-6
As a reference, we'll be performing clustering on the Spearman correlation between cells.
End of explanation |
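# rough arithmetic for the resolution limit mentioned above: a k-nearest-neighbor
# graph on n cells has at most n * k edges, so the smallest community modularity
# can resolve is about sqrt(2 * m); the numbers below are purely illustrative
n_cells, k_neighbors = 1000, 30
m = n_cells * k_neighbors
print('smallest findable module is roughly', round((2 * m) ** 0.5))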
1,887 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Migrate from TPU embedding_columns to TPUEmbedding layer
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: And prepare a simple dataset for demonstration purposes
Step3: TensorFlow 1
Step4: Next, convert the sparse categorical inputs to a dense representation with tpu.experimental.embedding_column, where dimension is the width of the embedding table. It will store an embedding vector for each of the num_buckets.
Step5: Now, define the TPU-specific embedding configuration via tf.estimator.tpu.experimental.EmbeddingConfigSpec. You will pass it later to tf.estimator.tpu.TPUEstimator as an embedding_config_spec parameter.
Step6: Next, to use a TPUEstimator, define
Step7: With those functions defined, create a tf.distribute.cluster_resolver.TPUClusterResolver that provides the cluster information, and a tf.compat.v1.estimator.tpu.RunConfig object.
Along with the model function you have defined, you can now create a TPUEstimator. Here, you will simplify the flow by skipping checkpoint savings. Then, you will specify the batch size for both training and evaluation for the TPUEstimator.
Step8: Call TPUEstimator.train to begin training the model
Step9: Then, call TPUEstimator.evaluate to evaluate the model using the evaluation data
Step10: TensorFlow 2
Step11: Next, prepare your data. This is similar to how you created a dataset in the TensorFlow 1 example, except the dataset function is now passed a tf.distribute.InputContext object rather than a params dict. You can use this object to determine the local batch size (and which host this pipeline is for, so you can properly partition your data).
When using the tfrs.layers.embedding.TPUEmbedding API, it is important to include the drop_remainder=True option when batching the dataset with Dataset.batch, since TPUEmbedding requires a fixed batch size.
Additionally, the same batch size must be used for evaluation and training if they are taking place on the same set of devices.
Finally, you should use tf.keras.utils.experimental.DatasetCreator along with the special input option—experimental_fetch_to_device=False—in tf.distribute.InputOptions (which holds strategy-specific configurations). This is demonstrated below
Step12: Next, once the data is prepared, you will create a TPUStrategy, and define a model, metrics, and an optimizer under the scope of this strategy (Strategy.scope).
You should pick a number for steps_per_execution in Model.compile since it specifies the number of batches to run during each tf.function call, and is critical for performance. This argument is similar to iterations_per_loop used in TPUEstimator.
The features and table configuration that were specified in TensorFlow 1 via the tf.tpu.experimental.embedding_column (and tf.tpu.experimental.shared_embedding_column) can be specified directly in TensorFlow 2 via a pair of configuration objects
Step13: With that, you are ready to train the model with the training dataset
Step14: Finally, evaluate the model using the evaluation dataset | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
!pip install tensorflow-recommenders
import tensorflow as tf
import tensorflow.compat.v1 as tf1
# TPUEmbedding layer is not part of TensorFlow.
import tensorflow_recommenders as tfrs
Explanation: Migrate from TPU embedding_columns to TPUEmbedding layer
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/migrate/tpu_embedding">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/tpu_embedding.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/tpu_embedding.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/tpu_embedding.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This guide demonstrates how to migrate embedding training on on TPUs from TensorFlow 1's embedding_column API with TPUEstimator to TensorFlow 2's TPUEmbedding layer API with TPUStrategy.
Embeddings are (large) matrices. They are lookup tables that map from a sparse feature space to dense vectors. Embeddings provide efficient and dense representations, capturing complex similarities and relationships between features.
TensorFlow includes specialized support for training embeddings on TPUs. This TPU-specific embedding support allows you to train embeddings that are larger than the memory of a single TPU device, and to use sparse and ragged inputs on TPUs.
In TensorFlow 1, tf.compat.v1.estimator.tpu.TPUEstimator is a high level API that encapsulates training, evaluation, prediction, and exporting for serving with TPUs. It has special support for tf.compat.v1.tpu.experimental.embedding_column.
To implement this in TensorFlow 2, use the TensorFlow Recommenders' tfrs.layers.embedding.TPUEmbedding layer. For training and evaluation, use a TPU distribution strategy—tf.distribute.TPUStrategy—which is compatible with the Keras APIs for, for example, model building (tf.keras.Model), optimizers (tf.keras.optimizers.Optimizer), and training with Model.fit or a custom training loop with tf.function and tf.GradientTape.
For additional information, refer to the tfrs.layers.embedding.TPUEmbedding layer's API documentation, as well as the tf.tpu.experimental.embedding.TableConfig and tf.tpu.experimental.embedding.FeatureConfig docs for additional information. For an overview of tf.distribute.TPUStrategy, check out the Distributed training guide and the Use TPUs guide. If you're migrating from TPUEstimator to TPUStrategy, check out the TPU migration guide.
Setup
Start by installing TensorFlow Recommenders and importing some necessary packages:
End of explanation
features = [[1., 1.5]]
embedding_features_indices = [[0, 0], [0, 1]]
embedding_features_values = [0, 5]
labels = [[0.3]]
eval_features = [[4., 4.5]]
eval_embedding_features_indices = [[0, 0], [0, 1]]
eval_embedding_features_values = [4, 3]
eval_labels = [[0.8]]
Explanation: And prepare a simple dataset for demonstration purposes:
End of explanation
embedding_id_column = (
tf1.feature_column.categorical_column_with_identity(
key="sparse_feature", num_buckets=10))
Explanation: TensorFlow 1: Train embeddings on TPUs with TPUEstimator
In TensorFlow 1, you set up TPU embeddings using the tf.compat.v1.tpu.experimental.embedding_column API and train/evaluate the model on TPUs with tf.compat.v1.estimator.tpu.TPUEstimator.
The inputs are integers ranging from zero to the vocabulary size for the TPU embedding table. Begin with encoding the inputs to categorical ID with tf.feature_column.categorical_column_with_identity. Use "sparse_feature" for the key parameter, since the input features are integer-valued, while num_buckets is the vocabulary size for the embedding table (10).
End of explanation
embedding_column = tf1.tpu.experimental.embedding_column(
embedding_id_column, dimension=5)
Explanation: Next, convert the sparse categorical inputs to a dense representation with tpu.experimental.embedding_column, where dimension is the width of the embedding table. It will store an embedding vector for each of the num_buckets.
End of explanation
embedding_config_spec = tf1.estimator.tpu.experimental.EmbeddingConfigSpec(
feature_columns=(embedding_column,),
optimization_parameters=(
tf1.tpu.experimental.AdagradParameters(0.05)))
Explanation: Now, define the TPU-specific embedding configuration via tf.estimator.tpu.experimental.EmbeddingConfigSpec. You will pass it later to tf.estimator.tpu.TPUEstimator as an embedding_config_spec parameter.
End of explanation
def _input_fn(params):
dataset = tf1.data.Dataset.from_tensor_slices((
{"dense_feature": features,
"sparse_feature": tf1.SparseTensor(
embedding_features_indices,
embedding_features_values, [1, 2])},
labels))
dataset = dataset.repeat()
return dataset.batch(params['batch_size'], drop_remainder=True)
def _eval_input_fn(params):
dataset = tf1.data.Dataset.from_tensor_slices((
{"dense_feature": eval_features,
"sparse_feature": tf1.SparseTensor(
eval_embedding_features_indices,
eval_embedding_features_values, [1, 2])},
eval_labels))
dataset = dataset.repeat()
return dataset.batch(params['batch_size'], drop_remainder=True)
def _model_fn(features, labels, mode, params):
embedding_features = tf1.keras.layers.DenseFeatures(embedding_column)(features)
concatenated_features = tf1.keras.layers.Concatenate(axis=1)(
[embedding_features, features["dense_feature"]])
logits = tf1.layers.Dense(1)(concatenated_features)
loss = tf1.losses.mean_squared_error(labels=labels, predictions=logits)
optimizer = tf1.train.AdagradOptimizer(0.05)
optimizer = tf1.tpu.CrossShardOptimizer(optimizer)
train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step())
return tf1.estimator.tpu.TPUEstimatorSpec(mode, loss=loss, train_op=train_op)
Explanation: Next, to use a TPUEstimator, define:
- An input function for the training data
- An evaluation input function for the evaluation data
- A model function for instructing the TPUEstimator how the training op is defined with the features and labels
End of explanation
cluster_resolver = tf1.distribute.cluster_resolver.TPUClusterResolver(tpu='')
print("All devices: ", tf1.config.list_logical_devices('TPU'))
tpu_config = tf1.estimator.tpu.TPUConfig(
iterations_per_loop=10,
per_host_input_for_training=tf1.estimator.tpu.InputPipelineConfig
.PER_HOST_V2)
config = tf1.estimator.tpu.RunConfig(
cluster=cluster_resolver,
save_checkpoints_steps=None,
tpu_config=tpu_config)
estimator = tf1.estimator.tpu.TPUEstimator(
model_fn=_model_fn, config=config, train_batch_size=8, eval_batch_size=8,
embedding_config_spec=embedding_config_spec)
Explanation: With those functions defined, create a tf.distribute.cluster_resolver.TPUClusterResolver that provides the cluster information, and a tf.compat.v1.estimator.tpu.RunConfig object.
Along with the model function you have defined, you can now create a TPUEstimator. Here, you will simplify the flow by skipping checkpoint savings. Then, you will specify the batch size for both training and evaluation for the TPUEstimator.
End of explanation
estimator.train(_input_fn, steps=1)
Explanation: Call TPUEstimator.train to begin training the model:
End of explanation
estimator.evaluate(_eval_input_fn, steps=1)
Explanation: Then, call TPUEstimator.evaluate to evaluate the model using the evaluation data:
End of explanation
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))
Explanation: TensorFlow 2: Train embeddings on TPUs with TPUStrategy
In TensorFlow 2, to train on the TPU workers, use tf.distribute.TPUStrategy together with the Keras APIs for model definition and training/evaluation. (Refer to the Use TPUs guide for more examples of training with Keras Model.fit and a custom training loop (with tf.function and tf.GradientTape).)
Since you need to perform some initialization work to connect to the remote cluster and initialize the TPU workers, start by creating a TPUClusterResolver to provide the cluster information and connect to the cluster. (Learn more in the TPU initialization section of the Use TPUs guide.)
End of explanation
global_batch_size = 8
def _input_dataset(context: tf.distribute.InputContext):
dataset = tf.data.Dataset.from_tensor_slices((
{"dense_feature": features,
"sparse_feature": tf.SparseTensor(
embedding_features_indices,
embedding_features_values, [1, 2])},
labels))
dataset = dataset.shuffle(10).repeat()
dataset = dataset.batch(
context.get_per_replica_batch_size(global_batch_size),
drop_remainder=True)
return dataset.prefetch(2)
def _eval_dataset(context: tf.distribute.InputContext):
dataset = tf.data.Dataset.from_tensor_slices((
{"dense_feature": eval_features,
"sparse_feature": tf.SparseTensor(
eval_embedding_features_indices,
eval_embedding_features_values, [1, 2])},
eval_labels))
dataset = dataset.repeat()
dataset = dataset.batch(
context.get_per_replica_batch_size(global_batch_size),
drop_remainder=True)
return dataset.prefetch(2)
input_options = tf.distribute.InputOptions(
experimental_fetch_to_device=False)
input_dataset = tf.keras.utils.experimental.DatasetCreator(
_input_dataset, input_options=input_options)
eval_dataset = tf.keras.utils.experimental.DatasetCreator(
_eval_dataset, input_options=input_options)
Explanation: Next, prepare your data. This is similar to how you created a dataset in the TensorFlow 1 example, except the dataset function is now passed a tf.distribute.InputContext object rather than a params dict. You can use this object to determine the local batch size (and which host this pipeline is for, so you can properly partition your data).
When using the tfrs.layers.embedding.TPUEmbedding API, it is important to include the drop_remainder=True option when batching the dataset with Dataset.batch, since TPUEmbedding requires a fixed batch size.
Additionally, the same batch size must be used for evaluation and training if they are taking place on the same set of devices.
Finally, you should use tf.keras.utils.experimental.DatasetCreator along with the special input option—experimental_fetch_to_device=False—in tf.distribute.InputOptions (which holds strategy-specific configurations). This is demonstrated below:
End of explanation
strategy = tf.distribute.TPUStrategy(cluster_resolver)
with strategy.scope():
optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)
dense_input = tf.keras.Input(shape=(2,), dtype=tf.float32, batch_size=global_batch_size)
sparse_input = tf.keras.Input(shape=(), dtype=tf.int32, batch_size=global_batch_size)
embedded_input = tfrs.layers.embedding.TPUEmbedding(
feature_config=tf.tpu.experimental.embedding.FeatureConfig(
table=tf.tpu.experimental.embedding.TableConfig(
vocabulary_size=10,
dim=5,
initializer=tf.initializers.TruncatedNormal(mean=0.0, stddev=1)),
name="sparse_input"),
optimizer=optimizer)(sparse_input)
input = tf.keras.layers.Concatenate(axis=1)([dense_input, embedded_input])
result = tf.keras.layers.Dense(1)(input)
model = tf.keras.Model(inputs={"dense_feature": dense_input, "sparse_feature": sparse_input}, outputs=result)
model.compile(optimizer, "mse", steps_per_execution=10)
Explanation: Next, once the data is prepared, you will create a TPUStrategy, and define a model, metrics, and an optimizer under the scope of this strategy (Strategy.scope).
You should pick a number for steps_per_execution in Model.compile since it specifies the number of batches to run during each tf.function call, and is critical for performance. This argument is similar to iterations_per_loop used in TPUEstimator.
The features and table configuration that were specified in TensorFlow 1 via the tf.tpu.experimental.embedding_column (and tf.tpu.experimental.shared_embedding_column) can be specified directly in TensorFlow 2 via a pair of configuration objects:
- tf.tpu.experimental.embedding.FeatureConfig
- tf.tpu.experimental.embedding.TableConfig
(Refer to the associated API documentation for more details.)
End of explanation
model.fit(input_dataset, epochs=5, steps_per_epoch=10)
Explanation: With that, you are ready to train the model with the training dataset:
End of explanation
model.evaluate(eval_dataset, steps=1, return_dict=True)
Explanation: Finally, evaluate the model using the evaluation dataset:
End of explanation |
1,888 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regexs
Up until now, to search in text we have used string methods find, startswith, endswith, etc. But sometimes you need more power.
Regular expressions are their own little language that allows you to search through text and find matches with incredibly complex patterns.
A regular expression, also referred to as "regex" or "regexp", provides a concise and flexible means for matching strings of text, such as particular characters, words, or patterns of characters.
To use regular you need to import python's regex library re
https
Step1: Searching
The simplest thing you can do with regexs in python is search through text to see if there is a match. To do this you use the methods search or match. match only checks if it matches at the beginning of the string and search check the whole string.
re.match(pattern, string)
re.search(pattern, string)
Step2: TRY IT
Search for the word May in the django logs
Special Characters
So far we can't do anything that you couldn't do with find, but don't worry. Regexs have many special characters to allow you to look for thing like the beginning of a word, whitespace or classes of characters.
You include the character in the pattern.
^ Matches the beginning of a line
$ Matches the end of the line
. Matches any character
\s Matches whitespace
\S Matches any non-whitespace character
* Repeats a character zero or more times
*? Repeats a character zero or more times (non-greedy)
+ Repeats a character one or more times
+? Repeats a character one or more times (non-greedy)
? Repeats a character 0 or one time
[aeiou] Matches a single character in the listed set
[^XYZ] Matches a single character not in the listed set
[a-z0-9] The set of characters can include a range
{10} Specifics a match the preceding character(s) {num} number or times
\d Matches any digit
\b Matches a word boundary
Hint if you want to match the literal character (like $) as opposed to its special meaning, you would escape it with a \
Step3: TRY IT
Match anything between angled brackets < >
Ignoring case
match and search both take an optional third argument that allows you to include flags. The most common flag is ignore case.
re.search(pattern, string, re.IGNORECASE)
re.match(pattern, string, re.IGNORECASE)
Step4: TRY IT
search for 'django' in 'Both Django and Flask are very useful python frameworks' ignoring case
Extracting Matches
Finding is only half the battle. You can also extract what you match.
To get the string that your regex matched you can store the match object in a variable and run the group method on that
m = re.search(pattern, string)
print m.group(0)
Step5: If you want to find all the matches, not just the first, you can use the findall method. It returns a list of all the matches
re.findall(pattern, string)
Step6: If you want to have only part of the match returned to you in findall, you can use parenthesis to set a capture point
pattern = 'sads (part to capture) asdjklajsd'
print re.findall(pattern, string) # prints part to capture
Step7: TRY IT
Capture the host of the email address (alphanumerics between @ and .com) Hint remember to escape the . in .com
Practice
There is a lot more that you can do, but it can feel overwhelming. The best way to learn is with practice. A great way to experiment is this website http | Python Code:
import re
# To run the examples we are going to use some of the logs from the
# django project, a web framework for python
django_logs = '''commit 722344ee59fb89ea2cd5b906d61b35f76579de4e
Author: Simon Charette <charette.s@gmail.com>
Date: Thu May 19 09:31:49 2016 -0400
Refs #24067 -- Fixed contenttypes rename tests failures on Oracle.
Broke the initial migration in two to work around #25530 and added
'django.contrib.auth' to the available_apps to make sure its tables are also
flushed as Oracle doesn't implement cascade deletion in sql_flush().
Thanks Tim for the report.
commit 9fed4ec418a4e391a3af8790137ab147efaf17c2
Author: Simon Charette <charette.s@gmail.com>
Date: Sat May 21 13:18:22 2016 -0400
Removed an obsolete comment about a fixed ticket.
commit 94486fb005e878d629595942679ba6d23401bc22
Author: Markus Holtermann <info@markusholtermann.eu>
Date: Sat May 21 13:20:40 2016 +0200
Revert "Disable patch coverage checks"
Mistakenly pushed to django/django instead of another repo
This reverts commit 6dde884c01156e36681aa51a5e0de4efa9575cfd.
commit 6dde884c01156e36681aa51a5e0de4efa9575cfd
Author: Markus Holtermann <info@markusholtermann.eu>
Date: Sat May 21 13:18:18 2016 +0200
Disable patch coverage checks
commit 46a38307c245ab7ed0b4d5d5ebbaf523a81e3b75
Author: Tim Graham <timograham@gmail.com>
Date: Fri May 20 10:50:51 2016 -0400
Removed versionadded/changed annotations for 1.9.
commit 1915a7e5c56d996b0e98decf8798c7f47ff04e76
Author: Tim Graham <timograham@gmail.com>
Date: Fri May 20 09:18:55 2016 -0400
Increased the default PBKDF2 iterations.
commit 97c3dfe12e095005dad9e6750ad5c5a54eee8721
Author: Tim Graham <timograham@gmail.com>
Date: Thu May 19 22:28:24 2016 -0400
Added stub 1.11 release notes.
commit 8df083a3ce21ca73ff77d3844a578f3da3ae78d7
Author: Tim Graham <timograham@gmail.com>
Date: Thu May 19 22:20:21 2016 -0400
Bumped version; master is now 1.11 pre-alpha.'''
Explanation: Regexs
Up until now, to search in text we have used string methods find, startswith, endswith, etc. But sometimes you need more power.
Regular expressions are their own little language that allows you to search through text and find matches with incredibly complex patterns.
A regular expression, also referred to as "regex" or "regexp", provides a concise and flexible means for matching strings of text, such as particular characters, words, or patterns of characters.
To use regular expressions you need to import Python's regex library, re
https://docs.python.org/2/library/re.html
End of explanation
print(re.match('a', 'abcde'))
print(re.match('c', 'abcde'))
print(re.search('a', 'abcde'))
print(re.search('c', 'abcde'))
print(re.match('version', django_logs))
print(re.search('version', django_logs))
if re.search('commit', django_logs):
print("Someone has been doing work.")
Explanation: Searching
The simplest thing you can do with regexs in python is search through text to see if there is a match. To do this you use the methods search or match. match only checks if the pattern matches at the beginning of the string, while search checks the whole string.
re.match(pattern, string)
re.search(pattern, string)
End of explanation
# Start simple, match any character 2 times
print(re.search('..', django_logs))
# just to prove it works
print(re.search('..', 'aa'))
print(re.search('..', 'a'))
print(re.search('..', '^%'))
# to match a commit hash (numbers and letters a-f repeated) we can use a regex
commit_pattern = '[0-9a-f]+'
print(re.search(commit_pattern, django_logs))
# Let's match the time syntax
time_pattern = '\d\d:\d\d:\d\d'
time_pattern = '\d{2}:\d{2}:\d{2}'
print(re.search(time_pattern, django_logs))
Explanation: TRY IT
Search for the word May in the django logs
Special Characters
So far we can't do anything that you couldn't do with find, but don't worry. Regexs have many special characters to allow you to look for things like the beginning of a word, whitespace, or classes of characters.
You include the character in the pattern.
^ Matches the beginning of a line
$ Matches the end of the line
. Matches any character
\s Matches whitespace
\S Matches any non-whitespace character
* Repeats a character zero or more times
*? Repeats a character zero or more times (non-greedy)
+ Repeats a character one or more times
+? Repeats a character one or more times (non-greedy)
? Repeats a character 0 or one time
[aeiou] Matches a single character in the listed set
[^XYZ] Matches a single character not in the listed set
[a-z0-9] The set of characters can include a range
{10} Specifies that the preceding character(s) must match exactly {num} times
\d Matches any digit
\b Matches a word boundary
Hint if you want to match the literal character (like $) as opposed to its special meaning, you would escape it with a \
End of explanation
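As a small supplementary illustration (not part of the original notebook), here are the anchors, the word-boundary class and the escaping hint from the list above in action:
# Supplementary examples: anchors, \b, and escaping a special character
print(re.search('^commit', 'commit 722344ee'))    # '^' anchors the match to the start of the line
print(re.search('line$', 'the end of the line'))  # '$' anchors the match to the end
print(re.findall(r'\bMay\b', 'Maybe it was May')) # ['May'] -- \b keeps 'Maybe' from matching
print(re.search(r'\$5', 'it costs $5'))           # escaping $ so it matches a literal dollar sign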
print(re.search('markus holtermann', django_logs))
print(re.search('markus holtermann', django_logs, re.IGNORECASE))
Explanation: TRY IT
Match anything between angled brackets < >
Ignoring case
match and search both take an optional third argument that allows you to include flags. The most common flag is ignore case.
re.search(pattern, string, re.IGNORECASE)
re.match(pattern, string, re.IGNORECASE)
End of explanation
# Let's match the time syntax
time_pattern = '\d\d:\d\d:\d\d'
m = re.search(time_pattern, django_logs)
print(m.group(0))
Explanation: TRY IT
search for 'django' in 'Both Django and Flask are very useful python frameworks' ignoring case
Extracting Matches
Finding is only half the battle. You can also extract what you match.
To get the string that your regex matched you can store the match object in a variable and run the group method on that
m = re.search(pattern, string)
print(m.group(0))
End of explanation
time_pattern = '\d\d:\d\d:\d\d'
print(re.findall(time_pattern, django_logs))
Explanation: If you want to find all the matches, not just the first, you can use the findall method. It returns a list of all the matches
re.findall(pattern, string)
End of explanation
time_pattern = '(\d\d):\d\d:\d\d'
hours = re.findall(time_pattern, django_logs)
print(sorted(hours))
# you can capture more than one match
time_pattern = '(\d\d):(\d\d):\d\d'
times = re.findall(time_pattern, django_logs)
print(times)
# Unpacking the tuple in the first line
for hours, mins in times:
print("{} hr {} min".format(hours, mins))
Explanation: If you want to have only part of the match returned to you in findall, you can use parentheses to set a capture point
pattern = 'sads (part to capture) asdjklajsd'
print(re.findall(pattern, string)) # prints part to capture
End of explanation
# Lets try some now
Explanation: TRY IT
Capture the host of the email address (alphanumerics between @ and .com) Hint remember to escape the . in .com
Practice
There is a lot more that you can do, but it can feel overwhelming. The best way to learn is with practice. A great way to experiment is this website http://www.regexr.com/ You can put a section of text and see what regexs match patterns in your text. The site also has a cheatsheet for special characters.
End of explanation |
1,889 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: 4chan Sample Thread Exploration
This notebook contains the cleaning and exploration of the chan_example csv which is hosted on the far-right s3 bucket. It contains cleaning out the html links from the text of the messages with beautiful soup, grouping the messages into their threads, and an exploratory sentiment analysis.
Further work could be to get the topic modelling for messages working and perhaps look at sentiment regarding different topics.
Step2: Message Threads
4chan messages are all part of a message thread, which can be reassembled by following the parents for each post and chaining them back together. This code creates a thread ID and maps that thread ID to the corresponding messages.
I don't know currently whether or not messages are linear, or if they can be a tree structure. This section of code simply tries to find which messages belong to which threads
Looks like a thread is all just grouped by the parent comment. Doh
Here I'll group the threads into a paragraph-like structure and store it in a dictionary with the key being the parent chan_id.
Step3: Now we can do some topic modeling on the different threads
Following along with the topic modelling tweet exploration, we're going to tokenize our messages and then build a corpus from it. We'll then use the gensim library to run our topic model over the tokenized messages
Step5: Creating an Emotion Sentiment Classifier
Labeled dataset provided by @crowdflower, hosted on data.world. The dataset contains 40,000 tweets, each labeled as one of 13 emotions. Here I looked at the top 5 emotions, since the bottom few had very few tweets by comparison, so it would be hard to get a properly split dataset for train/testing. Probably the one I'd want to include that wasn't included yet is anger, but neutral, worry, happiness, sadness, and love are a pretty good starting point for emotion classification regarding news tweets.
https
Step6: 64% test accuracy on the test set is nothing to phone home about. It's also likely to be a lot less accurate on our data from the 4chan messages, since those will be using much different language than the messages in our training set.
Step7: Looking at this sample of 10 posts, I'm not convinced of the accuracy of this classifier on the far-right data, but out of curiosity, what did it classify the rest of the posts as? | Python Code:
import boto3
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
session = boto3.Session(profile_name='default')
s3 = session.resource('s3')
bucket = s3.Bucket("far-right")
session.available_profiles
# print all objects in bucket
for obj in bucket.objects.all():
if "chan" in obj.key:
#print(obj.key)
pass
bucket.download_file('fourchan/chan_example.csv', 'chan_example.csv')
chan = pd.read_csv("chan_example.csv")
# remove the newline tags. They're not useful for our analysis and just clutter the text.
chan.com = chan.com.astype(str).apply(lambda x: x.replace("<br>", " "))
bucket.download_file('info-source/daily/20170228/fourchan/fourchan_1204.json', '2017-02-28-1204.json')
chan2 = pd.read_json("2017-02-28-1204.json")
soup = BeautifulSoup(chan.com[19], "lxml")
quotes = soup.find("span")
for quote in quotes.contents:
print(quote.replace(">>", ""))
parent = soup.find("a")
print(parent.contents[0].replace(">>", ""))
print(chan.com[19])
# If there's a quote and then the text, this would work.
print(chan.com[19].split("</span>")[-1])
def split_comment(comment):
Splits up a comment into parent, quotes, and text
# I used lxml to
soup = BeautifulSoup(comment, "lxml")
quotes, quotelink, text = None, None, None
try:
quotes = soup.find("span")
quotes = [quote.replace(">>", "") for quote in quotes.contents]
except:
pass
try:
quotelink = soup.find("a").contents[0].replace(">>", "")
except:
pass
# no quote or parent
if quotes is None and quotelink is None:
text = comment
# Parent but no quote
if quotelink is not None and quotes is None:
text = comment.split("a>")[-1]
# There is a quote
if quotes is not None:
text = comment.split("</span>")[-1]
return {'quotes':quotes, 'quotelink': quotelink, 'text': text}
df = pd.DataFrame({'quotes':[], 'quotelink':[], 'text':[]})
for comment in chan['com']:
df = df.append(split_comment(comment), ignore_index = True)
full = pd.merge(chan, df, left_index = True, right_index = True)
quotes = pd.Series()
quotelinks = pd.Series()
texts = pd.Series()
for comment in chan['com']:
parse = split_comment(comment)
# Series.append is not in-place, so assign the result back to keep accumulating
quotes = quotes.append(pd.Series(parse['quotes']))
quotelinks = quotelinks.append(pd.Series(parse['quotelink']))
texts = texts.append(pd.Series(parse['text']))
chan['quotes'] = quotes
chan['quotelinks'] = quotelinks
chan['text'] = texts
Explanation: 4chan Sample Thread Exploration
This notebook contains the cleaning and exploration of the chan_example csv which is hosted on the far-right s3 bucket. It contains cleaning out the html links from the text of the messages with beautiful soup, grouping the messages into their threads, and an exploratory sentiment analysis.
Further work could be to get the topic modelling for messages working and perhaps look at sentiment regarding different topics.
End of explanation
threads = full['parent'].unique()
full_text = {}
for thread in threads:
full_text[int(thread)] = ". ".join(full[full['parent'] == thread]['text'])
Explanation: Message Threads
4chan messages are all part of a message thread, which can be reassembled by following the parents for each post and chaining them back together. This code creates a thread ID and maps that thread ID to the corresponding messages.
I don't know currently whether or not messages are linear, or if they can be a tree structure. This section of code simply tries to find which messages belong to which threads
Looks like a thread is all just grouped by the parent comment. Doh
Here I'll group the threads into a paragraph-like structure and store it in a dictionary with the key being the parent chan_id.
End of explanation
import gensim
import pyLDAvis.gensim as gensimvis
import pyLDAvis
tokenized_messages = []
for msg in nlp.pipe(full['text'], n_threads = 100, batch_size = 100):
ents = msg.ents
msg = [token.lemma_ for token in msg if token.is_alpha and not token.is_stop]
tokenized_messages.append(msg)
# Build the corpus using gensim
dictionary = gensim.corpora.Dictionary(tokenized_messages)
msg_corpus = [dictionary.doc2bow(x) for x in tokenized_messages]
msg_dictionary = gensim.corpora.Dictionary([])
# gensim.corpora.MmCorpus.serialize(tweets_corpus_filepath, tweets_corpus)
Explanation: Now we can do some topic modeling on the different threads
Following along with the topic modelling tweet exploration, we're going to tokenize our messages and then build a corpus from it. We'll then use the gensim library to run our topic model over the tokenized messages
End of explanation
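The corpus and dictionary above get built, but the topic model itself is never actually fit in this notebook (the write-up lists that as future work), and the nlp object used in the tokenization loop is assumed to be a spaCy pipeline loaded in an earlier cell. A minimal sketch of the missing modelling step is below; the number of topics and passes are placeholder guesses rather than values from the original analysis.
# Sketch only: fit an LDA topic model on the corpus built above (num_topics/passes are guesses)
lda_model = gensim.models.LdaModel(msg_corpus, id2word=dictionary, num_topics=5, passes=10)
for topic_id, topic_words in lda_model.print_topics(num_topics=5, num_words=8):
    print(topic_id, topic_words)
# Optional interactive view using the pyLDAvis imports above
pyLDAvis.enable_notebook()
gensimvis.prepare(lda_model, msg_corpus, dictionary)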
import nltk
from nltk.classify import NaiveBayesClassifier
from nltk.classify import accuracy
from nltk import WordNetLemmatizer
lemma = nltk.WordNetLemmatizer()
df = pd.read_csv('https://query.data.world/s/8c7bwy8c55zx1t0c4yyrnjyax')
emotions = list(df.groupby("sentiment").agg("count").sort_values(by = "content", ascending = False).head(6).index)
print(emotions)
emotion_subset = df[df['sentiment'].isin(emotions)]
def format_sentence(sent):
ex = [i.lower() for i in sent.split()]
lemmas = [lemma.lemmatize(i) for i in ex]
return {word: True for word in nltk.word_tokenize(" ".join(lemmas))}
def create_train_vector(row):
Formats a row when used in df.apply to create a train vector to be used by a
Naive Bayes Classifier from the nltk library.
sentiment = row[1]
text = row[3]
return [format_sentence(text), sentiment]
train = emotion_subset.apply(create_train_vector, axis = 1)
# Split off 10% of our train vector to be for test.
test = train[:int(0.1*len(train))]
train = train[int(0.1*len(train)):]  # keep the remaining 90% for training (int(0.9)*len(train) evaluated to 0 and kept everything)
emotion_classifier = NaiveBayesClassifier.train(train)
print(accuracy(emotion_classifier, test))
Explanation: Creating an Emotion Sentiment Classifier
Labeled dataset provided by @crowdflower, hosted on data.world. The dataset contains 40,000 tweets, each labeled as one of 13 emotions. Here I looked at the top 5 emotions, since the bottom few had very few tweets by comparison, so it would be hard to get a properly split dataset for train/testing. Probably the one I'd want to include that wasn't included yet is anger, but neutral, worry, happiness, sadness, and love are a pretty good starting point for emotion classification regarding news tweets.
https://data.world/crowdflower/sentiment-analysis-in-text
End of explanation
emotion_classifier.show_most_informative_features()
for comment in full['text'].head(10):
print(emotion_classifier.classify(format_sentence(comment)), ": ", comment)
Explanation: 64% test accuracy on the test set is nothing to phone home about. It's also likely to be a lot less accurate on our data from the 4chan messages, since those will be using much different language than the messages in our training set.
End of explanation
full['emotion'] = full['text'].apply(lambda x: emotion_classifier.classify(format_sentence(x)))
grouped_emotion_messages = full.groupby('emotion').count()[[2]]
grouped_emotion_messages.columns = ["count"]
grouped_emotion_messages
grouped_emotion_messages.plot.bar()
Explanation: Looking at this sample of 10 posts, I'm not convinced of the accuracy of this classifier on the far-right data, but out of curiosity, what did it classify the rest of the posts as?
End of explanation |
1,890 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train a gesture recognition model for microcontroller use
This notebook demonstrates how to train a 20kb gesture recognition model for TensorFlow Lite for Microcontrollers. It will produce the same model used in the magic_wand example application.
The model is designed to be used with Google Colaboratory.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: Prepare the data
Next, we'll download the data and extract it into the expected location within the training scripts' directory.
Step2: We'll then run the scripts that split the data into training, validation, and test sets.
Step3: Load TensorBoard
Now, we set up TensorBoard so that we can graph our accuracy and loss as training proceeds.
Step4: Begin training
The following cell will begin the training process. Training will take around 5 minutes on a GPU runtime. You'll see the metrics in TensorBoard after a few epochs.
Step5: Create a C source file
The train.py script writes a model, model.tflite, to the training scripts' directory.
In the following cell, we convert this model into a C++ source file we can use with TensorFlow Lite for Microcontrollers. | Python Code:
# Clone the repository from GitHub
!git clone --depth 1 -q https://github.com/tensorflow/tensorflow
# Copy the training scripts into our workspace
!cp -r tensorflow/tensorflow/lite/micro/examples/magic_wand/train train
Explanation: Train a gesture recognition model for microcontroller use
This notebook demonstrates how to train a 20kb gesture recognition model for TensorFlow Lite for Microcontrollers. It will produce the same model used in the magic_wand example application.
The model is designed to be used with Google Colaboratory.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/examples/magic_wand/train/train_magic_wand_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/examples/magic_wand/train/train_magic_wand_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Training is much faster using GPU acceleration. Before you proceed, ensure you are using a GPU runtime by going to Runtime -> Change runtime type and selecting GPU. Training will take around 5 minutes on a GPU runtime.
Configure dependencies
Run the following cell to ensure the correct version of TensorFlow is used.
We'll also clone the TensorFlow repository, which contains the training scripts, and copy them into our workspace.
End of explanation
# Download the data we will use to train the model
!wget http://download.tensorflow.org/models/tflite/magic_wand/data.tar.gz
# Extract the data into the train directory
!tar xvzf data.tar.gz -C train 1>/dev/null
Explanation: Prepare the data
Next, we'll download the data and extract it into the expected location within the training scripts' directory.
End of explanation
# The scripts must be run from within the train directory
%cd train
# Prepare the data
!python data_prepare.py
# Split the data by person
!python data_split_person.py
Explanation: We'll then run the scripts that split the data into training, validation, and test sets.
End of explanation
# Load TensorBoard
%load_ext tensorboard
%tensorboard --logdir logs/scalars
Explanation: Load TensorBoard
Now, we set up TensorBoard so that we can graph our accuracy and loss as training proceeds.
End of explanation
!python train.py --model CNN --person true
Explanation: Begin training
The following cell will begin the training process. Training will take around 5 minutes on a GPU runtime. You'll see the metrics in TensorBoard after a few epochs.
End of explanation
# Install xxd if it is not available
!apt-get -qq install xxd
# Save the file as a C source file
!xxd -i model.tflite > /content/model.cc
# Print the source file
!cat /content/model.cc
Explanation: Create a C source file
The train.py script writes a model, model.tflite, to the training scripts' directory.
In the following cell, we convert this model into a C++ source file we can use with TensorFlow Lite for Microcontrollers.
End of explanation |
1,891 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Task 2
Step1: 1. Form a system of linear equations for a first-degree polynomial that must coincide with the function at the points 1 and 15.
Step2: 2. A second-degree polynomial at the points 1, 8, and 15.
Step3: 3. A third-degree polynomial at the points 1, 4, 10, and 15. | Python Code:
from math import sin, exp
def func(x):
return sin(x / 5.) * exp(x / 10.) + 5. * exp(-x / 2.)
import numpy as np
from scipy import linalg
arrCoordinates = np.arange(1., 15.1, 0.1)
arrFunction = np.array([func(coordinate) for coordinate in arrCoordinates])
Explanation: Task 2: function approximation
End of explanation
# first-degree polynomial
arrCoord1 = np.array([1, 15])
N = 2
arrA1 = np.empty((0, N))
for i in xrange(N):
arrA1Line = list()
for j in xrange(N):
arrA1Line.append(arrCoord1[i] ** j)
arrA1 = np.append(arrA1, np.array([arrA1Line]), axis = 0)
arrB1 = np.array([func(coordinate) for coordinate in arrCoord1])
print arrCoord1
print arrA1
print arrB1
arrX1 = linalg.solve(arrA1, arrB1)
print arrX1
def func1(x): return arrX1[0] + arrX1[1] * x
arrFunc1 = np.array([func1(coordinate) for coordinate in arrCoordinates])
%matplotlib inline
import matplotlib.pylab as plt
plt.plot(arrCoordinates, arrFunction, arrCoordinates, arrFunc1)
plt.show()
Explanation: 1. Form a system of linear equations for a first-degree polynomial that must coincide with the function at the points 1 and 15.
End of explanation
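As a worked illustration (added for clarity; not part of the original assignment text), the first-degree case above solves the 2×2 Vandermonde-type system for the coefficients $w_0, w_1$ of $p_1(x) = w_0 + w_1 x$:
$$\begin{pmatrix} 1 & 1 \\ 1 & 15 \end{pmatrix} \begin{pmatrix} w_0 \\ w_1 \end{pmatrix} = \begin{pmatrix} f(1) \\ f(15) \end{pmatrix}$$
This is exactly the matrix arrA1 and right-hand side arrB1 assembled in the loop above; the second- and third-degree cases below extend the same construction, with row $(x_i^0, x_i^1, \dots, x_i^M)$ for each interpolation point $x_i$.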
# second-degree polynomial
arrCoord2 = np.array([1, 8, 15])
N = 3
arrA2 = np.empty((0, N))
for i in xrange(N):
arrA2Line = list()
for j in xrange(N):
arrA2Line.append(arrCoord2[i] ** j)
arrA2 = np.append(arrA2, np.array([arrA2Line]), axis = 0)
arrB2 = np.array([func(coordinate) for coordinate in arrCoord2])
print arrCoord2
print arrA2
print arrB2
arrX2 = linalg.solve(arrA2, arrB2)
print arrX2
def func2(x): return arrX2[0] + arrX2[1] * x + arrX2[2] * (x ** 2)
arrFunc2 = np.array([func2(coordinate) for coordinate in arrCoordinates])
plt.plot(arrCoordinates, arrFunction, arrCoordinates, arrFunc1, arrCoordinates, arrFunc2)
plt.show()
Explanation: 2. A second-degree polynomial at the points 1, 8, and 15.
End of explanation
# third-degree polynomial
arrCoord3 = np.array([1, 4, 10, 15])
N = 4
arrA3 = np.empty((0, N))
for i in xrange(N):
arrA3Line = list()
for j in xrange(N):
arrA3Line.append(arrCoord3[i] ** j)
arrA3 = np.append(arrA3, np.array([arrA3Line]), axis = 0)
arrB3 = np.array([func(coordinate) for coordinate in arrCoord3])
print arrCoord3
print arrA3
print arrB3
arrX3 = linalg.solve(arrA3, arrB3)
print arrX3
def func3(x): return arrX3[0] + arrX3[1] * x + arrX3[2] * (x ** 2) + arrX3[3] * (x ** 3)
arrFunc3 = np.array([func3(coordinate) for coordinate in arrCoordinates])
plt.plot(arrCoordinates, arrFunction, arrCoordinates, arrFunc1, arrCoordinates, arrFunc2, arrCoordinates, arrFunc3)
plt.show()
with open('answer2.txt', 'w') as fileAnswer:
for item in arrX3:
fileAnswer.write(str(item) + ' ')
Explanation: 3. A third-degree polynomial at the points 1, 4, 10, and 15.
End of explanation |
1,892 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
所有生成器都是迭代器,因为生成器完全实现了迭代器接口,不过迭代器一般用于从集合取出元素,生成器用于 “凭空” 创造元素。斐波那契数列例子可以很好的说明两者区别:斐波那契数列中的数有无穷个,在一个集合里放不下。
在 Python 3 中,生成器有广泛用途。现在即使是内置的 range() 函数也要返回一个类似生成器的对象,而以前返回完整列表。如果一定让 range() 函数返回列表,必须明确指明(例如,list(range(100)))。
在 Python 中,所有集合都能迭代。在 Python 内部,迭代器用于支持:
for 循环
构建和扩展集合类型
逐行遍历文本文件
列表推导,字典推导和集合推导
元组拆包
调用函数时,使用 * 拆包
本章探讨以下话题:
语言内部使用 iter(...) 内置函数处理可迭代对象的方式
如何使用 Python 经典的迭代器模式
详细说明生成器函数的工作原理
如何使用生成器函数或生成器表达式代替经典的迭代器
如何使用标准库中通用的生成器函数
如何使用 yield from 语句合并生成器
案例分析: 在一个数据库转换工具中使用生成器处理大型数据集
为什么生成器和协程看似相同,其实差别很大,不能混淆
Sentence 类第 1 版:单词序列
我们创建一个类,并向它传入一些包含文本的字符串,然后可以逐个单词迭代,第 1 版要实现序列协议,这个类的对象可以迭代,因为所有序列都可以迭代 -- 这一点前面已经说过,现在说明真正的原因
下面展示了一个可以通过索引从文本提取单词的类:
Step1: 我们都知道,序列可以迭代,下面说明具体原因: iter 函数
解释器需要迭代对象 x 时候,会自动调用 iter(x)
内置的 iter 函数有以下作用。
检查对象是否实现了 __iter__ 方法,如果实现了就调用它,获取一个迭代器
如果没有实现 __iter__ 方法,但是实现了 __getitem__ 方法,Python 会创建一个迭代器,尝试按顺序(从索引 0 开始)获取元素
如果尝试失败,Python 抛出 TypeError 异常,通常提示 C object is not iterable,其中 C 是目标对象所属的类
任何 Pytho 序列都可迭代的原因是实现了 __getitem__ 方法。其实标准的序列也都实现了 __iter__ 方法,因此我们也应该这么做。之所以对 __getitem__ 方法特殊处理,是为了向后兼容,未来可能不会再这么做
11 章提到过,这是鸭子类型的极端形式,不仅要实现特殊的 __iter__ 方法,还要实现 __getitem__ 方法,而且 __getitem__ 方法的参数是从 0 开始的整数(int),这样才认为对象是可迭代的。
在白鹅类型理论中,可迭代对象定义的简单一些,不过没那么灵活,如果实现了 __iter__ 方法,那么就认为对象是可迭代的。此时,不需要创建子类,也不需要注册,因为 abc.Iterable 类实现了 __subclasshook__ 方法,下面举个例子:
Step2: 不过要注意,前面定义的 Sentence 类是可迭代的,却无法通过 issubclass(Sentence, abc.Iterable) 测试
从 Python 3.4 开始,检测对象 x 是否可迭代,最准确的方法是调用 iter(x) 函数,如果不可迭代,再处理 TypeError 异常,这回比使用 isinstance(x, abc.Iterable) 更准确,因为 iter(x) 会考虑到 __getitem__ 方法
迭代对象之前显式检查或许没必要,因为试图迭代不可迭代对象时,抛出的错误很明显。如果除了跑出 TypeError 异常之外还要进一步处理,可以使用 try/except 块,无需显式检查。如果要保存对象,等以后迭代,或许可以显式检查,因为这种情况需要尽早捕捉错误
可迭代对象与迭代器对比
可迭代对象:
使用 iter 内置函数可以获取迭代器对象。如果对象实现了能返回迭代器的 __iter__ 方法,那么对象可迭代。序列都可以迭代:实现了 __getitem__ 方法,而且其参数是从 0 开始的索引,这种对象也可以迭代。
我们要明确可迭代对象和迭代器之间的关系: Python 从可迭代的对象中获取迭代器
下面是一个 for 循环,迭代一个字符串,这里字符串 'ABC' 是可迭代对象,背后有迭代器,只是我们看不到
Step3: 如果用 while 循环,要像下面这样:
Step4: 标准迭代器接口有两个方法:
__next__ 返回下一个可用的元素,如果没有元素了,抛出 StopIteration 异常
__iter__ 返回 self,以便在应该使用可迭代对象的地方使用迭代器,比如 for 循环
这个接口在 collections.abc.Iterator 抽象基类中,这个类定义了 __next__ 抽象方法,而且继承自 Iterable 类: __iter__ 抽象方法则在 Iterable 类中定义
abc.Iterator 抽象基类中 __subclasshook__ 的方法作用就是检查有没有 __iter__ 和 __next__ 属性
检查对象 x 是否为 迭代器 的最好方式是调用 isinstance(x, abc.Iterator)。得益于 Iterator.__subclasshook__ 方法,即使对象 x 所属的类不是 Iterator 类的真实子类或虚拟子类,也能这样检查
下面可以看到 Sentence 类如何使用 iter 函数构建迭代器,和如何使用 next 函数使用迭代器
Step5: 因为迭代器只需要 __next__ 和 __iter__ 两个方法,所以除了调用 next() 方法,以及捕获 StopIteration 异常之外,没有办法检查是否还有遗留元素。此外,也没有办法 ”还原“ 迭代器。如果想再次迭代,那就要调用 iter(...) 传入之前构造迭代器传入的可迭代对象。传入迭代器本身没用,因为前面说过 Iterator.__iter__ 方法实现方式是返回实例本身,所以传入迭代器无法还原已经耗尽的迭代器
我们可以得出迭代器定义如下:实现了无参数的 __next__ 方法,返回序列中的下一个元素,如果没有元素了,那么抛出 StopIteration 异常。Python 中迭代器还实现了 __iter__ 方法,因此迭代器也可以迭代。因为内置的 iter(...) 函数会对序列做特殊处理,所以第 1 版 的 Sentence 类可以迭代。
Sentence 类第 2 版:典型的迭代器
这一版根据《设计模式:可复用面向对象软件的基础》一书给出的模型,实现典型的迭代器设计模式。注意,这不符合 Python 的习惯做法,后面重构时候会说明原因。不过,通过这一版能明确可迭代集合和迭代器对象之间的区别
下面的类可以迭代,因为实现了 __iter__ 方法,构建并返回一个 SentenceIterator 实例,《设计模式:可复用面向对象软件的基础》一书就是这样描述迭代器设计模式的。
这里之所以这么做,是为了清楚的说明可迭代的对象和迭代器之间的重要区别,以及二者间的联系。
Step6: 注意,对于这个例子来说,没有必要在 SentenceIterator 类中实现 __iter__ 方法,不过这么做是对的,因为迭代器应该实现 __next__ 和 __iter__ 两个方法,而且这么做能让迭代器通过 issubclass(SentenceInterator, abc.Iterator) 测试。如果让 SentenceIterator 继承 abc.Iterator 类,那么它会继承 abc.Iterator.__iter__ 这个具体方法
注意 SentenceIterator 类的大多数代码在处理迭代器内部状态,稍后会说明如何简化,不过我们先讨论一个看似合理实则错误的实现捷径
把 Sentence 变成迭代器:坏主意
构建可迭代的对象和迭代器经常出现错误,原因是混淆了二者。要知道,可迭代对象有个 __iter__ 方法,每次实例化一个新的迭代器,迭代器要实现 __next__ 方法,返回单个元素,此外要实现 __iter__ 方法,返回迭代器本身。
因此,迭代器可以迭代,但是可迭代的对象不是迭代器
除了 __iter__ 方法之外,你可能还想在 Sentence 类中实现 __next__ 方法,让 Sentence 实例既是可迭代对象,也是自身迭代器,可是这种想法非常糟糕,这也是常见的反模式
迭代器模式可以用来:
访问一个聚合对象的内容而无需暴露它的内部表示
支持对聚合对象的多种遍历
为遍历不同的聚合结构提供一个统一的接口(即支持多态迭代)
为了“支持多种遍历”,必须能从同一个迭代的实例中获取多个独立的迭代器,而且各个迭代器要能维护自身的内部状态,因此这一模式正确的实现方法是,每次调用 iter(my_iterable) 都新建一个独立的迭代器,这就是为什么这个示例需要定义 SentenceIterator 类
可迭代对象一定不能是自身的迭代器,也就是说,可迭代对象必须实现 __iter__ 方法,但不能实现 __next__ 方法。另一方面,迭代器应该可以一直迭代,迭代器的 __iter__ 应该返回自身
Sentence 类第 3 版:生成器函数
实现同样功能,却符合 Python 习惯的方式是,用生成器函数替代 SentenceIterator 类。先看下面的例子:
Step7: 在这个例子中,迭代器其实是生成器对象,每次调用 __iter__ 方法都会自动创建,因为这里的 __iter__ 方法是生成器函数
生成器函数的工作原理
只要 Python 函数定义体中有 yield 关键字,该函数就是生成器函数,调用生成器函数时,会返回一个生成器对象。也就是说,生成器函数是生成器工厂
下面用一个特别简单的函数说明生成器行为:
Step8: 生成器函数会创建一个生成器对象,包装生成器函数的定义体。把生成器传给 next(..) 函数时,生成器函数会向前,执行函数定义体中的下一个 yield 语句,返回产出的值,并在函数定义体的当前位置暂停。最终函数的定义体返回时,外层的生成器对象会抛出 StopIteration 异常 -- 这一点与迭代器协议一致
下面例子更清楚的说明了生成器函数定义体的执行过程:
Step9: 现在在我们应该知道 Sentence.__iter__ 作用了: __iter__ 方法是生成器函数,调用时会构建一个实现了迭代器接口的生成器对象,因此不用再定义 SentenceIterator 类了。
这一版 Sentence 类比之前简短多了,但还不够懒惰,懒惰实现是指尽可能延后生成值,这样能节省内存,或许还可以避免做无用的处理
Sentence 类第 4 版:惰性实现
设计 Iterator 接口时考虑了惰性:next(my_iterator) 一次生成一个元素。惰性求值和及早求值是编程语言理论的技术术语
目前的 Sentence 类不具有惰性,因为 __init__ 方法急迫的构建好了文本中的单词列表,然后绑定到 self.words 属性上。这样就得到处理后的整个文本,列表使用的内存量可能与文本本身一样多(获取更多,这取决于文本中有多少非单词字符)。如果只需迭代前几个单词,大多数工作都是白费力气。
re.finditer 函数是 re.findall 函数的惰性版本,返回的不是列表,而是一个生成器,按需生成 re.MatchObject 实例。如果有很多匹配,re.finditer 能节省大量内存。如果我们要使用这个函数让上一版 Sentence 类变得懒惰,即只在需要时才生成下一个单词。代码如下所示:
Step10: 生成器表达式
简单的生成器函数,如前面的例子中使用的那个,可以替换成生成器表达式
生成器表达式可以理解为列表推导式的惰性版本:不会迫切的构建列表,而是返回一共额生成器,按需惰性产称元素。也就是说,如果列表推导是制造列表的工厂,那么生成器表达式是制造生成器的工厂
下面展示了一个生成器表达式,并与列表推导式对比:
Step11: 可以看出,生成器表达式会产出生成器,因此可以使用生成器表达式进一步减少 Sentence 类的代码:
Step12: 这里用的是生成器表达式构建生成器,然后将其返回,不过最终效果一样:调用 __iter__ 方法会得到一个生成器对象
生成器表达式是语法糖:完全可以替换成生成器函数,不过有时使用生成器表达式更加便利
何时使用生成器表达式
遇到简单的情况,可以使用成器表达式,因为因为这样扫一眼就知道代码作用
如果生成器表达式要分成多行,最好使用生成器函数,提高可读性
如果函数或构造方法只有一个参数,传入生成器表达式时不用写一堆调用函数的括号,再写一堆括号围住生成器表达式,只写一对括号就行,如果生成器表达式后面还有其他参数,那么必须使用括号围住,否则会抛出 SynataxError 异常
另一个例子:等差数列生成器
Step13: 上面的类完全可以用一个生成器函数代替
Step14: 上面的实现很棒,但是要记住,标准库中有很多现成的生成器,下面会用 itertools 模块实现,这个版本更棒
使用 itertools 生成等差数列
itertools 提供了 19 个生成器函数,结合起来很有意思。
例如 itertools.count 函数返回的生成器能生成多个数。如果不传入参数,itertools.count 函数会生成从 0 开始的整数数列。不过,我们可以提供 start 和 step 值,这样实现的作用与 aritprog_gen 函数相似
Step15: 然而 itertools.count 函数从不停止,因此,调用 list(count())) 会产生一个特别大的列表,超出可用的内存
不过,itertools.takewhile 函数不同,他会生成一个使用另一个生成器的生成器,在指定条件计算结果为 False 时候停止,因此,可以把这两个函数结合:
Step16: 所以,我们可以将等差数列写成这样:
Step17: 注意, aritprog_gen 不是生成器函数,因为没有 yield 关键字,但是会返回一个生成器,因此它和其他的生成器函数一样,是一个生成器工厂函数
标准库中的生成器函数
标准库中有很多生成器,有用于逐行迭代文本文件的对象,还有出色的 os.walk 函数,不过本节专注于通用的函数:参数为任意可迭代对象,返回值是生成器,用于生成选中的,计算出的和重新排列的元素。
第一组是过滤生成器函数,如下:
Step18: 下面是映射生成器函数:
Step19: 接下来是用于合并的生成器函数:
Step20: itertools.product 生成器是计算笛卡尔积的惰性方式,从输入的各个迭代对象中获取元素,合并成由 N 个元素构成的元组,与嵌套的 for 循环效果一样。repeat指明重复处理多少次可迭代对象。下面演示 itertools.product 的用法
Step21: 把输入的各个元素扩展成多个输出元素的生成器函数:
Step22: itertools 中 combinations, comb 和 permutations 生成器函数,连同 product 函数称为组合生成器。itertool.product 和其余组合学函数有紧密关系,如下:
Step23: 用于重新排列元素的生成器函数:
Step24: Python 3.3 中新语法 yield from
如果生成器函数需要产生两一个生成器生成的值,传统方法是使用 for 循环
Step25: chain 生成器函数把操作依次交给接收到的各个可迭代对象处理。为此 Python 3.3 引入了新语法,如下:
Step26: 可迭代的归约函数
接受可迭代对象,然后返回单个结果,叫归约函数。
Step27: 还有一个内置的函数接受一个可迭代对象,返回不同的值 -- sorted,reversed 是生成器函数,与此不同,sorted 会构建并返回真正的列表,毕竟要读取每一个元素才能排序。它返回的是一个排好序的列表。这里提到 sorted,是因为它可以处理任何可迭代对象
当然,sorted 和这些归约函数只能处理最终会停止的可迭代对象,这些函数会一直收集元素,永远无法返回结果
深入分析 iter 函数
iter 函数还有一个鲜为人知的用法:传两个参数,使用常规的函数或任何可调用的对象创建迭代器。这样使用时,第一个参数必须是可调用对象,用于不断调用(没有参数),产出各个值,第二个是哨符,是个标记值,当可调用对象返回这个值时候,触发迭代器抛
出 StopIteration 异常,而不产出哨符。
下面是掷骰子,直到掷出 1
Step28: 内置函数 iter 的文档有一个实用的例子,逐行读取文件,直到遇到空行或者到达文件末尾为止:
Step29: 把生成器当成协程
Python 2.2 引入了 yield 关键字实现的生成器函数,Python 2.5 为生成器对象添加了额外的方法和功能,其中最引人关注的是 .send() 方法
与 .__next__() 方法一样,.send() 方法致使生成器前进到下一个 yield 语句。不过 send() 方法还允许使用生成器的客户把数据发给自己,即不管传给 .send() 方法什么参数,那个参数都会成为生成器函数定义体中对应的 yield 表达式的值。也就是说,.send() 方法允许在客户代码和生成器之间双向交换数据。而 .__next__() 方法只允许客户从生成器中获取数据
这是一项重要的 “改进”,甚至改变了生成器本性,这样使用的话,生成器就变成了协程。所以要提醒一下:
生成器用于生成供迭代的数据
协程是数据的消费者
为了避免脑袋爆炸,不能把两个概念混为一谈
协程与迭代无关
注意,虽然在协程中会使用 yield 产出值,但这与迭代无关
延伸阅读
有个简单的生成器函数例子
Step30: 我们无法通过函数调用抽象产出这个过程,下面似乎能抽象产出这个过程:
Step31: 调用 f() 会得到一个死循环,而不是生成器,因为 yield 只能将最近的外层函数变成生成器函数。虽然生成器函数看起来像函数,可是我们不能通过简单的函数调用把职责委托给另一个生成器函数。
Python 新引入的 yield from 语法允许生成器或协程把工作委托给第三方完成,这样就无需嵌套 for 循环作为变通了。在函数调用前面加上 yield from 能 ”解决“ 上面的问题,如下: | Python Code:
import re
import reprlib
RE_WORD = re.compile('\w+')
class Sentence:
def __init__(self, text):
self.text = text
# 返回一个字符串列表,里面的元素是正则表达式的全部非重叠匹配
self.words = RE_WORD.findall(text)
def __getitem__(self, index):
return self.words[index]
# 为了完善序列协议,我们实现了 __len__ 方法,不过,为了让对象可迭代,没必要实现这个方法
def __len__(self):
return len(self.words)
def __repr__(self):
# 下面这个函数用于生成大型数据结构的简略字符串表示形式
return 'Sentence(%s)' % reprlib.repr(self.text)
s = Sentence('"The time has come,", the Walrus said')
s
for word in s:
print(word)
list(s)
s[0], s[-1]
Explanation: 所有生成器都是迭代器,因为生成器完全实现了迭代器接口,不过迭代器一般用于从集合取出元素,生成器用于 “凭空” 创造元素。斐波那契数列例子可以很好的说明两者区别:斐波那契数列中的数有无穷个,在一个集合里放不下。
在 Python 3 中,生成器有广泛用途。现在即使是内置的 range() 函数也要返回一个类似生成器的对象,而以前返回完整列表。如果一定让 range() 函数返回列表,必须明确指明(例如,list(range(100)))。
在 Python 中,所有集合都能迭代。在 Python 内部,迭代器用于支持:
for 循环
构建和扩展集合类型
逐行遍历文本文件
列表推导,字典推导和集合推导
元组拆包
调用函数时,使用 * 拆包
本章探讨以下话题:
语言内部使用 iter(...) 内置函数处理可迭代对象的方式
如何使用 Python 经典的迭代器模式
详细说明生成器函数的工作原理
如何使用生成器函数或生成器表达式代替经典的迭代器
如何使用标准库中通用的生成器函数
如何使用 yield from 语句合并生成器
案例分析: 在一个数据库转换工具中使用生成器处理大型数据集
为什么生成器和协程看似相同,其实差别很大,不能混淆
Sentence 类第 1 版:单词序列
我们创建一个类,并向它传入一些包含文本的字符串,然后可以逐个单词迭代,第 1 版要实现序列协议,这个类的对象可以迭代,因为所有序列都可以迭代 -- 这一点前面已经说过,现在说明真正的原因
下面展示了一个可以通过索引从文本提取单词的类:
End of explanation
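补充示例(非原书代码,仅作示意):上面列出的“元组拆包”和“调用函数时使用 * 拆包”都依赖可迭代协议,对刚定义的 Sentence 同样适用:
a, b, c = Sentence('one two three')  # 元组拆包:背后用的就是迭代
print(a, b, c)                       # one two three
print(*Sentence('hello world'))      # 用 * 把可迭代对象拆成位置参数传给 print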
from collections import abc
class Foo:
def __iter__(self):
pass
issubclass(Foo, abc.Iterable)
f = Foo()
isinstance(f, abc.Iterable)
Explanation: 我们都知道,序列可以迭代,下面说明具体原因: iter 函数
解释器需要迭代对象 x 时候,会自动调用 iter(x)
内置的 iter 函数有以下作用。
检查对象是否实现了 __iter__ 方法,如果实现了就调用它,获取一个迭代器
如果没有实现 __iter__ 方法,但是实现了 __getitem__ 方法,Python 会创建一个迭代器,尝试按顺序(从索引 0 开始)获取元素
如果尝试失败,Python 抛出 TypeError 异常,通常提示 C object is not iterable,其中 C 是目标对象所属的类
任何 Pytho 序列都可迭代的原因是实现了 __getitem__ 方法。其实标准的序列也都实现了 __iter__ 方法,因此我们也应该这么做。之所以对 __getitem__ 方法特殊处理,是为了向后兼容,未来可能不会再这么做
11 章提到过,这是鸭子类型的极端形式,不仅要实现特殊的 __iter__ 方法,还要实现 __getitem__ 方法,而且 __getitem__ 方法的参数是从 0 开始的整数(int),这样才认为对象是可迭代的。
在白鹅类型理论中,可迭代对象定义的简单一些,不过没那么灵活,如果实现了 __iter__ 方法,那么就认为对象是可迭代的。此时,不需要创建子类,也不需要注册,因为 abc.Iterable 类实现了 __subclasshook__ 方法,下面举个例子:
End of explanation
s = 'ABC'
for char in s:
print(char)
Explanation: 不过要注意,前面定义的 Sentence 类是可迭代的,却无法通过 issubclass(Sentence, abc.Iterable) 测试
从 Python 3.4 开始,检测对象 x 是否可迭代,最准确的方法是调用 iter(x) 函数,如果不可迭代,再处理 TypeError 异常,这回比使用 isinstance(x, abc.Iterable) 更准确,因为 iter(x) 会考虑到 __getitem__ 方法
迭代对象之前显式检查或许没必要,因为试图迭代不可迭代对象时,抛出的错误很明显。如果除了跑出 TypeError 异常之外还要进一步处理,可以使用 try/except 块,无需显式检查。如果要保存对象,等以后迭代,或许可以显式检查,因为这种情况需要尽早捕捉错误
可迭代对象与迭代器对比
可迭代对象:
使用 iter 内置函数可以获取迭代器对象。如果对象实现了能返回迭代器的 __iter__ 方法,那么对象可迭代。序列都可以迭代:实现了 __getitem__ 方法,而且其参数是从 0 开始的索引,这种对象也可以迭代。
我们要明确可迭代对象和迭代器之间的关系: Python 从可迭代的对象中获取迭代器
下面是一个 for 循环,迭代一个字符串,这里字符串 'ABC' 是可迭代对象,背后有迭代器,只是我们看不到
End of explanation
s = 'ABC'
it = iter(s)
while True:
try:
print(next(it))
except StopIteration: # 这个异常表示迭代器到头了
del it
break
Explanation: 如果用 while 循环,要像下面这样:
End of explanation
s3 = Sentence('Pig and Pepper')
it = iter(s3)
it
next(it)
next(it)
next(it)
next(it)
list(it) # 到头后,迭代器没用了
list(s3) # 如果想再次迭代,要重新构建迭代器
Explanation: 标准迭代器接口有两个方法:
__next__ 返回下一个可用的元素,如果没有元素了,抛出 StopIteration 异常
__iter__ 返回 self,以便在应该使用可迭代对象的地方使用迭代器,比如 for 循环
这个接口在 collections.abc.Iterator 抽象基类中,这个类定义了 __next__ 抽象方法,而且继承自 Iterable 类: __iter__ 抽象方法则在 Iterable 类中定义
abc.Iterator 抽象基类中 __subclasshook__ 的方法作用就是检查有没有 __iter__ 和 __next__ 属性
检查对象 x 是否为 迭代器 的最好方式是调用 isinstance(x, abc.Iterator)。得益于 Iterator.__subclasshook__ 方法,即使对象 x 所属的类不是 Iterator 类的真实子类或虚拟子类,也能这样检查
下面可以看到 Sentence 类如何使用 iter 函数构建迭代器,和如何使用 next 函数使用迭代器
End of explanation
import re
import reprlib
RE_WORD = re.compile('\w+')
class Sentence:
def __init__(self, text):
self.text = text
self.words = RE_WORD.findall(text)
def __repr__(self):
return 'Sentence(%s)' % reprlib.repr(self.text)
def __iter__(self):
return SentenceIterator(self.words)
class SentenceIterator:
def __init__(self, words):
self.words = words
self.index = 0
def __next__(self):
try:
word = self.words[self.index]
except IndexError:
raise StopIteration
self.index += 1
return word
def __iter__(self):
return self
Explanation: 因为迭代器只需要 __next__ 和 __iter__ 两个方法,所以除了调用 next() 方法,以及捕获 StopIteration 异常之外,没有办法检查是否还有遗留元素。此外,也没有办法 ”还原“ 迭代器。如果想再次迭代,那就要调用 iter(...) 传入之前构造迭代器传入的可迭代对象。传入迭代器本身没用,因为前面说过 Iterator.__iter__ 方法实现方式是返回实例本身,所以传入迭代器无法还原已经耗尽的迭代器
我们可以得出迭代器定义如下:实现了无参数的 __next__ 方法,返回序列中的下一个元素,如果没有元素了,那么抛出 StopIteration 异常。Python 中迭代器还实现了 __iter__ 方法,因此迭代器也可以迭代。因为内置的 iter(...) 函数会对序列做特殊处理,所以第 1 版 的 Sentence 类可以迭代。
Sentence 类第 2 版:典型的迭代器
这一版根据《设计模式:可复用面向对象软件的基础》一书给出的模型,实现典型的迭代器设计模式。注意,这不符合 Python 的习惯做法,后面重构时候会说明原因。不过,通过这一版能明确可迭代集合和迭代器对象之间的区别
下面的类可以迭代,因为实现了 __iter__ 方法,构建并返回一个 SentenceIterator 实例,《设计模式:可复用面向对象软件的基础》一书就是这样描述迭代器设计模式的。
这里之所以这么做,是为了清楚的说明可迭代的对象和迭代器之间的重要区别,以及二者间的联系。
End of explanation
import re
import reprlib
RE_WORD = re.compile('\w+')
class Sentence:
def __init__(self, text):
self.text = text
self.words = RE_WORD.findall(text)
def __repr__(self):
return 'Sentence(%s)' % reprlib.repr(self.text)
def __iter__(self):
for word in self.words:
yield word
# 这个 return 不是必要的,生成器函数不会抛出 StopIteration 异常,
#而是在生成全部值之后直接退出
return
a = Sentence('hello world')
one = iter(a)
print(next(one))
two = iter(a)
print(next(two)) # 两个迭代器之间不会互相干扰
Explanation: 注意,对于这个例子来说,没有必要在 SentenceIterator 类中实现 __iter__ 方法,不过这么做是对的,因为迭代器应该实现 __next__ 和 __iter__ 两个方法,而且这么做能让迭代器通过 issubclass(SentenceInterator, abc.Iterator) 测试。如果让 SentenceIterator 继承 abc.Iterator 类,那么它会继承 abc.Iterator.__iter__ 这个具体方法
注意 SentenceIterator 类的大多数代码在处理迭代器内部状态,稍后会说明如何简化,不过我们先讨论一个看似合理实则错误的实现捷径
把 Sentence 变成迭代器:坏主意
构建可迭代的对象和迭代器经常出现错误,原因是混淆了二者。要知道,可迭代对象有个 __iter__ 方法,每次实例化一个新的迭代器,迭代器要实现 __next__ 方法,返回单个元素,此外要实现 __iter__ 方法,返回迭代器本身。
因此,迭代器可以迭代,但是可迭代的对象不是迭代器
除了 __iter__ 方法之外,你可能还想在 Sentence 类中实现 __next__ 方法,让 Sentence 实例既是可迭代对象,也是自身迭代器,可是这种想法非常糟糕,这也是常见的反模式
迭代器模式可以用来:
访问一个聚合对象的内容而无需暴露它的内部表示
支持对聚合对象的多种遍历
为遍历不同的聚合结构提供一个统一的接口(即支持多态迭代)
为了“支持多种遍历”,必须能从同一个迭代的实例中获取多个独立的迭代器,而且各个迭代器要能维护自身的内部状态,因此这一模式正确的实现方法是,每次调用 iter(my_iterable) 都新建一个独立的迭代器,这就是为什么这个示例需要定义 SentenceIterator 类
可迭代对象一定不能是自身的迭代器,也就是说,可迭代对象必须实现 __iter__ 方法,但不能实现 __next__ 方法。另一方面,迭代器应该可以一直迭代,迭代器的 __iter__ 应该返回自身
Sentence 类第 3 版:生成器函数
实现同样功能,却符合 Python 习惯的方式是,用生成器函数替代 SentenceIterator 类。先看下面的例子:
End of explanation
def gen_123():
yield 1
yield 2
yield 3
gen_123
gen_123()
for i in gen_123():
print(i)
g = gen_123()
next(g)
next(g)
next(g)
next(g) # 生成器函数定义体执行完毕后,跑出 StopIteration 异常
Explanation: 在这个例子中,迭代器其实是生成器对象,每次调用 __iter__ 方法都会自动创建,因为这里的 __iter__ 方法是生成器函数
生成器函数的工作原理
只要 Python 函数定义体中有 yield 关键字,该函数就是生成器函数,调用生成器函数时,会返回一个生成器对象。也就是说,生成器函数是生成器工厂
下面用一个特别简单的函数说明生成器行为:
End of explanation
def gen_AB():
print('start')
yield 'A'
print('continue')
yield 'B'
print('end')
for c in gen_AB():
print('-->', c)
Explanation: 生成器函数会创建一个生成器对象,包装生成器函数的定义体。把生成器传给 next(..) 函数时,生成器函数会向前,执行函数定义体中的下一个 yield 语句,返回产出的值,并在函数定义体的当前位置暂停。最终函数的定义体返回时,外层的生成器对象会抛出 StopIteration 异常 -- 这一点与迭代器协议一致
下面例子更清楚的说明了生成器函数定义体的执行过程:
End of explanation
import re
import reprlib
RE_WORD = re.compile('\w+')
class Sentence:
def __init__(self, text):
self.text = text
def __repr__(self):
return 'Sentence(%s)' % reprlib.repr(self.text)
def __iter__(self):
for match in RE_WORD.finditer(self.text):
yield match.group() # 从 MatchObject 实例中提取匹配正则表达式的具体文本
Explanation: 现在在我们应该知道 Sentence.__iter__ 作用了: __iter__ 方法是生成器函数,调用时会构建一个实现了迭代器接口的生成器对象,因此不用再定义 SentenceIterator 类了。
这一版 Sentence 类比之前简短多了,但还不够懒惰,懒惰实现是指尽可能延后生成值,这样能节省内存,或许还可以避免做无用的处理
Sentence 类第 4 版:惰性实现
设计 Iterator 接口时考虑了惰性:next(my_iterator) 一次生成一个元素。惰性求值和及早求值是编程语言理论的技术术语
目前的 Sentence 类不具有惰性,因为 __init__ 方法急迫的构建好了文本中的单词列表,然后绑定到 self.words 属性上。这样就得到处理后的整个文本,列表使用的内存量可能与文本本身一样多(获取更多,这取决于文本中有多少非单词字符)。如果只需迭代前几个单词,大多数工作都是白费力气。
re.finditer 函数是 re.findall 函数的惰性版本,返回的不是列表,而是一个生成器,按需生成 re.MatchObject 实例。如果有很多匹配,re.finditer 能节省大量内存。如果我们要使用这个函数让上一版 Sentence 类变得懒惰,即只在需要时才生成下一个单词。代码如下所示:
End of explanation
def gen_AB():
print('start')
yield 'A'
print('continue')
yield 'B'
print('end')
res1 = [x * 3 for x in gen_AB()]
for i in res1:
print('-->', i)
res2 = (x * 3 for x in gen_AB())
res2
for i in res2:
print('-->', i)
Explanation: 生成器表达式
简单的生成器函数,如前面的例子中使用的那个,可以替换成生成器表达式
生成器表达式可以理解为列表推导式的惰性版本:不会迫切的构建列表,而是返回一共额生成器,按需惰性产称元素。也就是说,如果列表推导是制造列表的工厂,那么生成器表达式是制造生成器的工厂
下面展示了一个生成器表达式,并与列表推导式对比:
End of explanation
import re
import reprlib
RE_WORD = re.compile('\w+')
class Sentence:
def __init__(self, text):
self.text = text
def __repr__(self):
return 'Sentence(%s)' % reprlib.repr(self.text)
def __iter__(self):
return (match.group() for match in RE_WORD.finditer(self.text))
Explanation: 可以看出,生成器表达式会产出生成器,因此可以使用生成器表达式进一步减少 Sentence 类的代码:
End of explanation
class ArithmeticProgression:
def __init__(self, begin, step, end=None):
self.begin = begin
self.step = step
self.end = end # 无穷数列
def __iter__(self):
# self 赋值给 result,不过要先强制转成前面加法表达式类型(两个支持加法的对象返回一个对象)
result = type(self.begin + self.step)(self.begin)
forever = self.end is None
index = 0
while forever or result < self.end:
yield result
index += 1
result = self.begin + self.step * index
ap = ArithmeticProgression(0, 1, 3)
list(ap)
ap = ArithmeticProgression(1, 5, 3)
list(ap)
ap = ArithmeticProgression(0, 1 / 3, 1)
list(ap)
Explanation: 这里用的是生成器表达式构建生成器,然后将其返回,不过最终效果一样:调用 __iter__ 方法会得到一个生成器对象
生成器表达式是语法糖:完全可以替换成生成器函数,不过有时使用生成器表达式更加便利
何时使用生成器表达式
遇到简单的情况,可以使用生成器表达式,因为这样扫一眼就知道代码作用
如果生成器表达式要分成多行,最好使用生成器函数,提高可读性
如果函数或构造方法只有一个参数,传入生成器表达式时不用写一堆调用函数的括号,再写一堆括号围住生成器表达式,只写一对括号就行,如果生成器表达式后面还有其他参数,那么必须使用括号围住,否则会抛出 SyntaxError 异常
另一个例子:等差数列生成器
End of explanation
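作为补充示例(非原书代码,仅作示意),下面用两行代码说明上面关于括号的那一条:当生成器表达式是函数的唯一参数时,只写一对括号即可;如果后面还有其他参数,就必须再用括号把生成器表达式围起来,否则抛出 SyntaxError。
# 唯一参数:生成器表达式不需要额外的括号
print(sum(n * n for n in range(5)))  # 30
# 还有其他参数:必须用括号围住生成器表达式
print(sorted((w.lower() for w in ['B', 'a', 'C']), reverse=True))  # ['c', 'b', 'a']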
def aritprog_gen(begin, step, end=None):
result = type(begin + step)(begin)
forever = end is None
index = 0
while forever or result < end:
yield result
index += 1
result = begin + step * index
Explanation: 上面的类完全可以用一个生成器函数代替
End of explanation
import itertools
gen = itertools.count(1, .5)
next(gen)
next(gen)
next(gen)
next(gen)
Explanation: 上面的实现很棒,但是要记住,标准库中有很多现成的生成器,下面会用 itertools 模块实现,这个版本更棒
使用 itertools 生成等差数列
itertools 提供了 19 个生成器函数,结合起来很有意思。
例如 itertools.count 函数返回的生成器能生成多个数。如果不传入参数,itertools.count 函数会生成从 0 开始的整数数列。不过,我们可以提供 start 和 step 值,这样实现的作用与 aritprog_gen 函数相似
End of explanation
gen = itertools.takewhile(lambda n: n < 3, itertools.count(1, .5))
list(gen)
Explanation: 然而 itertools.count 函数从不停止,因此,调用 list(count())) 会产生一个特别大的列表,超出可用的内存
不过,itertools.takewhile 函数不同,他会生成一个使用另一个生成器的生成器,在指定条件计算结果为 False 时候停止,因此,可以把这两个函数结合:
End of explanation
import itertools
def aritprog_gen(begin, step, end=None):
first = type(begin+step)(begin)
ap_gen = itertools.count(first, step)
if end is not None:
ap_gen = itertools.takewhile(lambda n: n < end, ap_gen)
return ap_gen
Explanation: 所以,我们可以将等差数列写成这样:
End of explanation
def vowel(c):
return c.lower() in 'aeiou'
# 字符串各个元素传给 vowel 函数,为真则返回对应元素
list(filter(vowel, 'Aardvark'))
import itertools
# 与上面相反
list(itertools.filterfalse(vowel, 'Aardvark'))
# 处理 字符串,跳过 vowel 为真的元素,然后产出剩余的元素,不再检查
list(itertools.dropwhile(vowel, 'Aardvark'))
#返回真值对应的元素,立即停止,不再检查
list(itertools.takewhile(vowel, 'Aardvark'))
# 并行处理两个迭代对象,如果第二个是真值,则返回第一个
list(itertools.compress('Aardvark', (1, 0, 1, 1, 0, 1)))
list(itertools.islice('Aardvark', 4))
list(itertools.islice('Aardvark', 4, 7))
list(itertools.islice('Aardvark', 1, 7, 2))
Explanation: 注意, aritprog_gen 不是生成器函数,因为没有 yield 关键字,但是会返回一个生成器,因此它和其他的生成器函数一样,是一个生成器工厂函数
标准库中的生成器函数
标准库中有很多生成器,有用于逐行迭代文本文件的对象,还有出色的 os.walk 函数,不过本节专注于通用的函数:参数为任意可迭代对象,返回值是生成器,用于生成选中的,计算出的和重新排列的元素。
第一组是过滤生成器函数,如下:
End of explanation
sample = [5, 4, 2, 8, 7, 6, 3, 0, 9, 1]
import itertools
# 产出累计的总和
list(itertools.accumulate(sample))
# 如果提供了函数,那么把前两个元素给他,然后把计算结果和下一个元素给它,以此类推
list(itertools.accumulate(sample, min))
list(itertools.accumulate(sample, max))
import operator
list(itertools.accumulate(sample, operator.mul)) # 计算乘积
list(itertools.accumulate(range(1, 11), operator.mul))
list(enumerate('albatroz', 1)) #从 1 开始,为字母编号
import operator
list(map(operator.mul, range(11), range(11)))
# 计算两个可迭代对象中对应位置的两个之和,元素最少的迭代完毕就停止
list(map(operator.mul, range(11), [2, 4, 8]))
list(map(lambda a, b: (a, b), range(11), [2, 4, 8]))
import itertools
# starmap 把第二个参数的每个元素传给第一个函数 func,产出结果,
# 输入的可迭代对象应该产出可迭代对象 iit,
# 然后以(func(*iit) 这种形式调用 func)
list(itertools.starmap(operator.mul, enumerate('albatroz', 1)))
sample = [5, 4, 2, 8, 7, 6, 3, 0, 9, 1]
# 计算平均值
list(itertools.starmap(lambda a, b: b / a,
enumerate(itertools.accumulate(sample), 1)))
Explanation: 下面是映射生成器函数:
End of explanation
# 先产生第一个元素,然后产生第二个参数的所有元素,以此类推,无缝连接到一起
list(itertools.chain('ABC', range(2)))
list(itertools.chain(enumerate('ABC')))
# chain.from_iterable 函数从可迭代对象中获取每个元素,
# 然后按顺序把元素连接起来,前提是各个元素本身也是可迭代对象
list(itertools.chain.from_iterable(enumerate('ABC')))
list(zip('ABC', range(5), [10, 20, 30, 40])) #只要有一个生成器到头,就停止
# 处理到最长的迭代器到头,短的会填充 None
list(itertools.zip_longest('ABC', range(5)))
list(itertools.zip_longest('ABC', range(5), fillvalue='?')) # 填充问号
Explanation: 接下来是用于合并的生成器函数:
End of explanation
list(itertools.product('ABC', range(2)))
suits = 'spades hearts diamonds clubs'.split()
list(itertools.product('AK', suits))
# 传入一个可迭代对象,产生一系列只有一个元素的元组,不是特别有用
list(itertools.product('ABC'))
# repeat = N 重复 N 次处理各个可迭代对象
list(itertools.product('ABC', repeat=2))
list(itertools.product(range(2), repeat=3))
rows = itertools.product('AB', range(2), repeat=2)
for row in rows: print(row)
Explanation: itertools.product 生成器是计算笛卡尔积的惰性方式,从输入的各个迭代对象中获取元素,合并成由 N 个元素构成的元组,与嵌套的 for 循环效果一样。repeat指明重复处理多少次可迭代对象。下面演示 itertools.product 的用法
End of explanation
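补充说明(非原书代码):itertools.product('AB', range(2)) 与下面的嵌套 for 循环产生同样的元组序列,只是 product 是按需惰性产出的。
# 嵌套 for 循环的等价写法,用于说明上文所说的等价关系
pairs = []
for x in 'AB':
    for y in range(2):
        pairs.append((x, y))
print(pairs == list(itertools.product('AB', range(2))))  # True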
ct = itertools.count()
next(ct) # 不能构建 ct 列表,因为 ct 是无穷的
next(ct), next(ct), next(ct)
list(itertools.islice(itertools.count(1, .3), 3))
cy = itertools.cycle('ABC')
next(cy)
list(itertools.islice(cy, 7))
rp = itertools.repeat(7) # 重复出现指定元素
next(rp), next(rp)
list(itertools.repeat(8, 4)) # 4 次数字 8
list(map(operator.mul, range(11), itertools.repeat(5)))
Explanation: 把输入的各个元素扩展成多个输出元素的生成器函数:
End of explanation
# 'ABC' 中每两个元素 len() == 2 的各种组合
list(itertools.combinations('ABC', 2))
# 包括相同元素的每两个元素的各种组合
list(itertools.combinations_with_replacement('ABC', 2))
# 每两个元素的各种排列
list(itertools.permutations('ABC', 2))
list(itertools.product('ABC', repeat=2))
Explanation: itertools 中 combinations, comb 和 permutations 生成器函数,连同 product 函数称为组合生成器。itertool.product 和其余组合学函数有紧密关系,如下:
End of explanation
# 产出由两个元素组成的元素,形式为 (key, group),其中 key 是分组标准,
#group 是生成器,用于产出分组里的元素
list(itertools.groupby('LLLAAGGG'))
for char, group in itertools.groupby('LLLLAAAGG'):
print(char, '->', list(group))
animals = ['duck', 'eagle', 'rat', 'giraffe', 'bear',
'bat', 'dolphin', 'shark', 'lion']
animals.sort(key=len)
animals
for length, group in itertools.groupby(animals, len):
print(length, '->', list(group))
# 使用 reverse 生成器从右往左迭代 animals
for length, group in itertools.groupby(reversed(animals), len):
print(length, '->', list(group))
# itertools 产生多个生成器,每个生成器都产出输入的各个元素
list(itertools.tee('abc'))
g1, g2 = itertools.tee('abc')
next(g1)
next(g2)
next(g2)
list(g1)
list(g2)
list(zip(*itertools.tee('ABC')))
Explanation: 用于重新排列元素的生成器函数:
End of explanation
def chain(*iterables): # 自己写的 chain 函数,标准库中的 chain 是用 C 写的
for it in iterables:
for i in it:
yield i
s = 'ABC'
t = tuple(range(3))
list(chain(s, t))
Explanation: Python 3.3 中新语法 yield from
如果生成器函数需要产生两一个生成器生成的值,传统方法是使用 for 循环
End of explanation
def chain(*iterables):
for i in iterables:
yield from i # 详细语法在 16 章讲
list(chain(s, t))
Explanation: chain 生成器函数把操作依次交给接收到的各个可迭代对象处理。为此 Python 3.3 引入了新语法,如下:
End of explanation
all([1, 2, 3]) # 所有元素为真返回 True
all([1, 0, 3])
any([1, 2, 3]) # 有元素为真就返回 True
any([1, 0, 3])
any([0, 0, 0])
any([])
g = (n for n in [0, 0.0, 7, 8])
any(g)
next(g) # any 碰到一个为真就不往下判断了
Explanation: 可迭代的归约函数
接受可迭代对象,然后返回单个结果,叫归约函数。
End of explanation
from random import randint
def d6():
return randint(1, 6)
d6_iter = iter(d6, 1)
d6_iter
for roll in d6_iter:
print(roll)
Explanation: 还有一个内置的函数接受一个可迭代对象,返回不同的值 -- sorted,reversed 是生成器函数,与此不同,sorted 会构建并返回真正的列表,毕竟要读取每一个元素才能排序。它返回的是一个排好序的列表。这里提到 sorted,是因为它可以处理任何可迭代对象
当然,sorted 和这些归约函数只能处理最终会停止的可迭代对象,这些函数会一直收集元素,永远无法返回结果
深入分析 iter 函数
iter 函数还有一个鲜为人知的用法:传两个参数,使用常规的函数或任何可调用的对象创建迭代器。这样使用时,第一个参数必须是可调用对象,用于不断调用(没有参数),产出各个值,第二个是哨符,是个标记值,当可调用对象返回这个值时候,触发迭代器抛
出 StopIteration 异常,而不产出哨符。
下面是掷骰子,直到掷出 1
End of explanation
# for line in iter(fp.readline, '\n'):
# process_line(line)
Explanation: 内置函数 iter 的文档有一个实用的例子,逐行读取文件,直到遇到空行或者到达文件末尾为止:
End of explanation
def f():
x=0
while True:
x += 1
yield x
Explanation: 把生成器当成协程
Python 2.2 引入了 yield 关键字实现的生成器函数,Python 2.5 为生成器对象添加了额外的方法和功能,其中最引人关注的是 .send() 方法
与 .__next__() 方法一样,.send() 方法致使生成器前进到下一个 yield 语句。不过 send() 方法还允许使用生成器的客户把数据发给自己,即不管传给 .send() 方法什么参数,那个参数都会成为生成器函数定义体中对应的 yield 表达式的值。也就是说,.send() 方法允许在客户代码和生成器之间双向交换数据。而 .__next__() 方法只允许客户从生成器中获取数据
这是一项重要的 “改进”,甚至改变了生成器本性,这样使用的话,生成器就变成了协程。所以要提醒一下:
生成器用于生成供迭代的数据
协程是数据的消费者
为了避免脑袋爆炸,不能把两个概念混为一谈
协程与迭代无关
注意,虽然在协程中会使用 yield 产出值,但这与迭代无关
延伸阅读
有个简单的生成器函数例子
End of explanation
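下面补充一个最小的示例(非原书本章代码,仅作示意),演示上文所说的 .send() 双向数据交换:调用方把数字发给生成器,生成器把当前的平均值产出回来。注意第一次必须先用 next() 预激,让生成器停在第一个 yield 处。
# 计算移动平均值的协程:调用方 send 数据进来,协程 yield 平均值出去
def running_averager():
    total = 0.0
    count = 0
    average = None
    while True:
        term = yield average  # 在此暂停,等待调用方通过 send() 传入的值
        total += term
        count += 1
        average = total / count

avg = running_averager()
next(avg)            # 预激,执行到第一个 yield
print(avg.send(10))  # 10.0
print(avg.send(30))  # 20.0
print(avg.send(5))   # 15.0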
def f():
def do_yield(n):
yield n
x = 0
while True:
x += 1
do_yield(x)
Explanation: 我们无法通过函数调用抽象产出这个过程,下面似乎能抽象产出这个过程:
End of explanation
def f():
def do_yield(n):
yield n
x = 0
while True:
x += 1
yield from do_yield(x)
Explanation: 调用 f() 会得到一个死循环,而不是生成器,因为 yield 只能将最近的外层函数变成生成器函数。虽然生成器函数看起来像函数,可是我们不能通过简单的函数调用把职责委托给另一个生成器函数。
Python 新引入的 yield from 语法允许生成器或协程把工作委托给第三方完成,这样就无需嵌套 for 循环作为变通了。在函数调用前面加上 yield from 能 ”解决“ 上面的问题,如下:
End of explanation |
1,893 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
p-Hacking and Multiple Comparisons Bias
By Delaney Mackenzie and Maxwell Margenot.
Part of the Quantopian Lecture Series
Step1: Refresher
Step2: If we add some noise our coefficient will drop.
Step3: p-value Refresher
For more info on p-values see this lecture. What's important to remember is they're used to test a hypothesis given some data. Here we are testing the hypothesis that a relationship exists between two series given the series values.
IMPORTANT
Step4: Experiment - Running Many Tests
We'll start by defining a data frame.
Step5: Now we'll populate it by adding N randomly generated timeseries of length T.
Step6: Now we'll run a test on all pairs within our data looking for instances where our p-value is below our defined cutoff of 5%.
Step7: Before we check how many significant results we got, let's run out some math to check how many we'd expect. The formula for the number of pairs given N series is
$$\frac{N(N-1)}{2}$$
There are no relationships in our data as it's all randomly generated. If our test is properly calibrated we should expect a false positive rate of 5% given our 5% cutoff. Therefore we should expect the following number of pairs that achieved significance based on pure random chance.
Step8: Now let's compare to how many we actually found.
Step9: We shouldn't expect the numbers to match too closely here on a consistent basis as we've only run one experiment. If we run many of these experiments we should see a convergence to what we'd expect.
Repeating the Experiment
Step10: The average over many experiments should be closer.
Step11: Visualizing What's Going On
What's happening here is that p-values should be uniformly distributed, given no signal in the underlying data. Basically, they carry no information whatsoever and will be equally likely to be 0.01 as 0.99. Because they're popping out randomly, you will expect a certain percentage of p-values to be underneath any threshold you choose. The lower the threshold the fewer will pass your test.
Let's visualize this by making a modified function that returns p-values.
Step12: We'll now collect a bunch of pvalues. As in any case we'll want to collect quite a number of p-values to start getting a sense of how the underlying distribution looks. If we only collect few, it will be noisy like this
Step13: Let's dial up our N parameter to get a better sense. Keep in mind that the number of p-values will increase at a rate of
$$\frac{N (N-1)}{2}$$
or approximately quadratically. Therefore we don't need to increase N by much.
Step14: Starting to look pretty flat, as we expected. Lastly, just to visualize the process of drawing a cutoff, we'll draw two artificial lines.
Step15: We can see that with a lower cutoff we should expect to get fewer false positives. Let's check that with our above experiment.
Step16: And finally compare it to what we expected.
Step17: Sensitivity / Specificity Tradeoff
As with any adjustment of p-value cutoff, we have a tradeoff. A lower cutoff decreases the rate of false positives, but also decreases the chance we find a real relationship (true positive). So you can't just decrease your cutoff to solve this problem.
https | Python Code:
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
Explanation: p-Hacking and Multiple Comparisons Bias
By Delaney Mackenzie and Maxwell Margenot.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Multiple comparisons bias is a pervasive problem in statistics, data science, and in general forecasting/predictions. The short explanation is that the more tests you run, the more likely you are to get an outcome that you want/expect. If you ignore the multitude of tests that failed, you are clearly setting yourself up for failure by misinterpreting what's going on in your data.
A particularly common example of this is when looking for relationships in large data sets comprising of many indepedent series or variables. In this case you run a test each time you evaluate whether a relationship exists between a set of variables.
Statistics Merely Illuminates This Issue
Most folks also fall prey to multiple comparisons bias in real life. Any time you make a decision you are effectively taking an action based on an hypothesis. That hypothesis is often tested. You can end up unknowingly making many tests in your daily life.
An example might be deciding which medicine is helping cure a cold you have. Many people will take multiple medicines at once to try and get rid of symptoms. You may think that a certain medicine worked, when in reality none did and the cold just happened to start getting better at some point.
The point here is that this problem doesn't stem from statistical testing and p-values. Rather, these techniques give us much more information about the problem and when it might be occuring.
End of explanation
X = pd.Series(np.random.normal(0, 1, 100))
Y = X
r_s = stats.spearmanr(Y, X)
print 'Spearman Rank Coefficient: ', r_s[0]
print 'p-value: ', r_s[1]
Explanation: Refresher: Spearman Rank Correlation
Please refer to this lecture for more full info, but here is a very brief refresher on Spearman Rank Correlation.
It's a variation of correlation that takes into account the ranks of the data. This can help with weird distributions or outliers that would confuse other measures. The test also returns a p-value, which is key here.
A higher coefficient means a stronger estimated relationship.
End of explanation
X = pd.Series(np.random.normal(0, 1, 100))
Y = X + np.random.normal(0, 1, 100)
r_s = stats.spearmanr(Y, X)
print 'Spearman Rank Coefficient: ', r_s[0]
print 'p-value: ', r_s[1]
Explanation: If we add some noise our coefficient will drop.
End of explanation
# Setting a cutoff of 5% means that there is a 5% chance
# of us getting a significant p-value given no relationship
# in our data (false positive).
# NOTE: This is only true if the test's assumptions have been
# satisfied and the test is therefore properly calibrated.
# All tests have different assumptions.
cutoff = 0.05
X = pd.Series(np.random.normal(0, 1, 100))
Y = X + np.random.normal(0, 1, 100)
r_s = stats.spearmanr(Y, X)
print 'Spearman Rank Coefficient: ', r_s[0]
if r_s[1] < cutoff:
print 'There is significant evidence of a relationship.'
else:
print 'There is not significant evidence of a relationship.'
Explanation: p-value Refresher
For more info on p-values see this lecture. What's important to remember is they're used to test a hypothesis given some data. Here we are testing the hypothesis that a relationship exists between two series given the series values.
IMPORTANT: p-values must be treated as binary.
A common mistake is that p-values are treated as more or less significant. This is bad practice as it allows for what's known as p-hacking and will result in more false positives than you expect. Effectively, you will be too likely to convince yourself that relationships exist in your data.
To treat p-values as binary, a cutoff must be set in advance. Then the p-value must be compared with the cutoff and treated as significant/not signficant. Here we'll show this.
The Cutoff is our Significance Level
We can refer to the cutoff as our significance level because a lower cutoff means that results which pass it are significant at a higher level of confidence. So if you have a cutoff of 0.05, then even on random data 5% of tests will pass based on chance. A cutoff of 0.01 reduces this to 1%, which is a more stringent test. We can therefore have more confidence in our results.
End of explanation
df = pd.DataFrame()
Explanation: Experiment - Running Many Tests
We'll start by defining a data frame.
End of explanation
N = 20
T = 100
for i in range(N):
X = np.random.normal(0, 1, T)
X = pd.Series(X)
name = 'X%s' % i
df[name] = X
df.head()
Explanation: Now we'll populate it by adding N randomly generated timeseries of length T.
End of explanation
cutoff = 0.05
significant_pairs = []
for i in range(N):
for j in range(i+1, N):
Xi = df.iloc[:, i]
Xj = df.iloc[:, j]
results = stats.spearmanr(Xi, Xj)
pvalue = results[1]
if pvalue < cutoff:
significant_pairs.append((i, j))
Explanation: Now we'll run a test on all pairs within our data looking for instances where our p-value is below our defined cutoff of 5%.
End of explanation
(N * (N-1) / 2) * 0.05
Explanation: Before we check how many significant results we got, let's work out some math to check how many we'd expect. The formula for the number of pairs given N series is
$$\frac{N(N-1)}{2}$$
There are no relationships in our data as it's all randomly generated. If our test is properly calibrated we should expect a false positive rate of 5% given our 5% cutoff. Therefore we should expect the following number of pairs that achieved significance based on pure random chance.
End of explanation
len(significant_pairs)
Explanation: Now let's compare to how many we actually found.
End of explanation
def do_experiment(N, T, cutoff=0.05):
df = pd.DataFrame()
# Make random data
for i in range(N):
X = np.random.normal(0, 1, T)
X = pd.Series(X)
name = 'X%s' % i
df[name] = X
significant_pairs = []
# Look for relationships
for i in range(N):
for j in range(i+1, N):
Xi = df.iloc[:, i]
Xj = df.iloc[:, j]
results = stats.spearmanr(Xi, Xj)
pvalue = results[1]
if pvalue < cutoff:
significant_pairs.append((i, j))
return significant_pairs
num_experiments = 100
results = np.zeros((num_experiments,))
for i in range(num_experiments):
# Run a single experiment
result = do_experiment(20, 100, cutoff=0.05)
# Count how many pairs
n = len(result)
# Add to array
results[i] = n
Explanation: We shouldn't expect the numbers to match too closely here on a consistent basis as we've only run one experiment. If we run many of these experiments we should see a convergence to what we'd expect.
Repeating the Experiment
End of explanation
np.mean(results)
Explanation: The average over many experiments should be closer.
End of explanation
def get_pvalues_from_experiment(N, T):
df = pd.DataFrame()
# Make random data
for i in range(N):
X = np.random.normal(0, 1, T)
X = pd.Series(X)
name = 'X%s' % i
df[name] = X
pvalues = []
# Look for relationships
for i in range(N):
for j in range(i+1, N):
Xi = df.iloc[:, i]
Xj = df.iloc[:, j]
results = stats.spearmanr(Xi, Xj)
pvalue = results[1]
pvalues.append(pvalue)
return pvalues
Explanation: Visualizing What's Going On
What's happening here is that p-values should be uniformly distributed, given no signal in the underlying data. Basically, they carry no information whatsoever and will be equally likely to be 0.01 as 0.99. Because they're popping out randomly, you will expect a certain percentage of p-values to be underneath any threshold you choose. The lower the threshold the fewer will pass your test.
Let's visualize this by making a modified function that returns p-values.
End of explanation
pvalues = get_pvalues_from_experiment(10, 100)
plt.hist(pvalues)
plt.ylabel('Frequency')
plt.title('Observed p-value');
Explanation: We'll now collect a set of p-values. As usual, we'll want to collect quite a number of p-values to start getting a sense of how the underlying distribution looks. If we only collect a few, it will be noisy like this:
End of explanation
pvalues = get_pvalues_from_experiment(50, 100)
plt.hist(pvalues)
plt.ylabel('Frequency')
plt.title('Observed p-value');
Explanation: Let's dial up our N parameter to get a better sense. Keep in mind that the number of p-values will increase at a rate of
$$\frac{N (N-1)}{2}$$
or approximately quadratically. Therefore we don't need to increase N by much.
End of explanation
pvalues = get_pvalues_from_experiment(50, 100)
plt.vlines(0.01, 0, 150, colors='r', linestyle='--', label='0.01 Cutoff')
plt.vlines(0.05, 0, 150, colors='r', label='0.05 Cutoff')
plt.hist(pvalues, label='P-Value Distribution')
plt.legend()
plt.ylabel('Frequency')
plt.title('Observed p-value');
Explanation: Starting to look pretty flat, as we expected. Lastly, just to visualize the process of drawing a cutoff, we'll draw two artificial lines.
End of explanation
num_experiments = 100
results = np.zeros((num_experiments,))
for i in range(num_experiments):
# Run a single experiment
result = do_experiment(20, 100, cutoff=0.01)
# Count how many pairs
n = len(result)
# Add to array
results[i] = n
np.mean(results)
Explanation: We can see that with a lower cutoff we should expect to get fewer false positives. Let's check that with our above experiment.
End of explanation
(N * (N-1) / 2) * 0.01
Explanation: And finally compare it to what we expected.
End of explanation
num_experiments = 100
results = np.zeros((num_experiments,))
N = 20
T = 100
desired_level = 0.05
num_tests = N * (N - 1) / 2
new_cutoff = desired_level / num_tests
for i in range(num_experiments):
# Run a single experiment
result = do_experiment(20, 100, cutoff=new_cutoff)
# Count how many pairs
n = len(result)
# Add to array
results[i] = n
np.mean(results)
Explanation: Sensitivity / Specificity Tradeoff
As with any adjustment of p-value cutoff, we have a tradeoff. A lower cutoff decreases the rate of false positives, but also decreases the chance we find a real relationship (true positive). So you can't just decrease your cutoff to solve this problem.
https://en.wikipedia.org/wiki/Sensitivity_and_specificity
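To make the tradeoff concrete, here is a small sketch (not part of the original lecture) that estimates how often a real but noisy relationship is detected at different cutoffs. The helper name and the noise level are our own choices, purely for illustration.
# Hypothetical helper: fraction of trials in which a true relationship
# Y = X + noise is flagged as significant at the given cutoff.
def detection_rate(cutoff, n_trials=500, T=100):
    detected = 0
    for _ in range(n_trials):
        X = np.random.normal(0, 1, T)
        Y = X + np.random.normal(0, 2, T)  # real signal buried in noise
        if stats.spearmanr(Y, X)[1] < cutoff:
            detected += 1
    return detected / float(n_trials)

for c in [0.05, 0.01, 0.001]:
    print 'Cutoff:', c, 'Detection rate:', detection_rate(c)
Lowering the cutoff reduces false positives, but the chance of detecting the true relationship drops as well.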
Reducing Multiple Comparisons Bias
You can't really eliminate multiple comparisons bias, but you can reduce how much it impacts you. To do so we have two options.
Option 1: Run fewer tests.
This is often the best option. Rather than just sweeping around hoping you hit an interesting signal, use your expert knowledge of the system to develop a great hypothesis and test that. This process of exploring the data, coming up with a hypothesis, then gathering more data and testing the hypothesis on the new data is considered the gold standard in statistical and scientific research. It's crucial that the data set on which you develop your hypothesis is not the one on which you test it. Because you found the effect while exploring, the test will likely pass and not really tell you anything. What you want to know is how consistent the effect is. Moving to new data and testing there will not only mean you only run one test, but will be an 'unbiased estimator' of whether your hypothesis is true. We discuss this a lot in other lectures.
Option 2: Adjustment Factors and the Bonferroni Correction
WARNING: This section gets a little technical. Unless you're comfortable with significance levels, we recommend looking at the code examples first and maybe reading the linked articles before fully diving into the text.
If you must run many tests, try to correct your p-values. This means applying a correction factor to the cutoff you desire to obtain the one actually used when determining whether p-values are significant. The most conservative and common correction factor is the Bonferroni correction.
Example: Bonferroni Correction
The concept behind the Bonferroni correction is quite simple. It just says that if we run $m$ tests, and we have a significance level/cutoff of $a$, then we should use $a/m$ as our new cutoff when determining significance. The math works out because of the following.
Let's say we run $m$ tests. We should expect to see $ma$ false positives based on random chance that pass our cutoff. If we instead use $a/m$ as our cutoff, then we should expect to see $ma/m = a$ tests that pass our cutoff. Therefore we are back to our desired false positive rate of $a$.
Let's try it on our experiment above.
End of explanation |
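As a compact summary, the corrected cutoff used above can be wrapped in a small helper (our own convenience function, not part of the lecture):
def bonferroni_cutoff(desired_level, num_series):
    # Number of pairwise tests among num_series series.
    num_tests = num_series * (num_series - 1) / 2.
    return desired_level / num_tests

print 'Corrected cutoff for 20 series:', bonferroni_cutoff(0.05, 20)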
1,894 | Given the following text description, write Python code to implement the functionality described.
Description:
Find the total marks obtained according to given marking scheme
Function that calculates marks . ; for not attempt score + 0 ; for each correct answer score + 3 ; for each wrong answer score - 1 ; calculate total marks ; Driver code
| Python Code:
def markingScheme(N , answerKey , studentAnswer ) :
positive = 0
negative = 0
notattempt = 0
for i in range(0 , N ) :
if(studentAnswer[i ] == 0 ) :
notattempt += 1
elif(answerKey[i ] == studentAnswer[i ] ) :
positive += 1
elif(answerKey[i ] != studentAnswer[i ] ) :
negative += 1
return(positive * 3 ) +(negative * - 1 )
def main() :
answerKey =[1 , 2 , 3 , 4 , 1 ]
studentAnswer =[1 , 2 , 3 , 4 , 0 ]
N = 5
print(markingScheme(N , answerKey , studentAnswer ) )
|
1,895 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conservative Estimation using a Grid Seach Minimization
This notebook illustrates the different steps for a conservative estimation using a grid search minimization.
Classic Libraries
Step1: Additive model
The first example of conservative estimation consider an additive model $\eta
Step2: Dimension 2
We consider the problem in dimension $d=2$ and a number of pairs $p=1$ for gaussian margins.
Step3: Copula families
We consider a gaussian copula for this first example
Step4: Estimations
We create an instance of the main class for a conservative estimate.
Step5: First, we compute the quantile at independence
Step6: We aim to minimize the output quantile. To do that, we create a q_func object from the function quantile_func to associate a probability $\alpha$ to a function that computes the empirical quantile from a given sample.
Step7: The computation returns a DependenceResult instance. This object gather the informations of the computation. It also computes the output quantity of interest (which can also be changed).
Step8: A boostrap can be done on the output quantity
Step9: And we can plot it
Step10: Grid Search Approach
Firstly, we consider a grid search approach in order to compare the perfomance with the iterative algorithm. The discretization can be made on the parameter space or on other concordance measure such as the kendall's Tau. This below example shows a grid-search on the parameter space.
Step11: The computation returns a ListDependenceResult which is a list of DependenceResult instances and some bonuses.
Step12: Lets set the quantity function and search for the minimum among the grid results.
Step13: We can plot the result in grid results. The below figure shows the output quantiles in function of the dependence parameters.
Step14: As for the individual problem, we can do a boostrap also, for each parameters. Because we have $K$ parameters, we can do a bootstrap for the $K$ samples, compute the $K$ quantiles for all the bootstrap and get the minimum quantile for each bootstrap.
Step15: For the parameter that have the most occurence for the minimum, we compute its bootstrap mean.
Step16: Kendall's Tau
An interesting feature is to convert the dependence parameters to Kendall's Tau values.
Step17: As we can see, the bounds
With bounds on the dependencies
An interesting option in the ConservativeEstimate class is to bound the dependencies, due to some prior informations.
Step18: Saving the results
It is usefull to save the result in a file to load it later and compute other quantities or anything you need!
Step19: Taking the extreme values of the dependence parameter
If the output quantity of interest seems to have a monotonicity with the dependence parameter, it is better to directly take the bounds of the dependence problem. Obviously, the minimum should be at the edges of the design space
Step20: Higher Dimension
We consider the problem in dimension $d=5$.
Step21: Copula families with one dependent pair
We consider a gaussian copula for this first example, but for the moment only one pair is dependent.
Step22: We reset the families and bounds for the current instance. (I don't want to create a new instance, just to check if the setters are good).
Step23: Let's do the grid search to see
Step24: The quantile is lower compare to the problem of dimension 1. Indeed, there is more variables, more uncertainty, so a larger deviation of the output.
Step25: Copula families with all dependent pairs
We consider a gaussian copula for this first example, but for the moment only one pair is dependent.
Step26: With one fixed pair
Step27: Save the used grid and load it again
Step28: Then gather the results from the same grid with the same configurations
Step29: Because the configurations are the same, we can gather the results from two different runs | Python Code:
import openturns as ot
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
random_state = 123
np.random.seed(random_state)
Explanation: Conservative Estimation using a Grid Search Minimization
This notebook illustrates the different steps for a conservative estimation using a grid search minimization.
Classic Libraries
End of explanation
from depimpact.tests import func_sum
help(func_sum)
Explanation: Additive model
The first example of conservative estimation consider an additive model $\eta : \mathbb R^d \rightarrow \mathbb R$ with Gaussian margins. The objectives are to estimate a quantity of interest $\mathcal C(Y)$ of the model output distribution. Unfortunately, the dependence structure is unknown. In order to be conservative we aim to give bounds to $\mathcal C(Y)$.
The model
This example consider the simple additive example.
End of explanation
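help(func_sum) prints the package documentation, which is not reproduced here. As a rough illustration only, a minimal sketch of what such an additive model function might look like is given below; this is an assumption, and the real func_sum from depimpact.tests may handle shapes or weights differently.
# Hypothetical stand-in for an additive model: sums the input columns row-wise.
def additive_model(x):
    x = np.asarray(x)
    return x.sum(axis=1)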
dim = 2
margins = [ot.Normal()]*dim
Explanation: Dimension 2
We consider the problem in dimension $d=2$ and a number of pairs $p=1$ for gaussian margins.
End of explanation
families = np.zeros((dim, dim), dtype=int)
families[1, 0] = 1
Explanation: Copula families
We consider a gaussian copula for this first example
End of explanation
from depimpact import ConservativeEstimate
quant_estimate = ConservativeEstimate(model_func=func_sum, margins=margins, families=families)
Explanation: Estimations
We create an instance of the main class for a conservative estimate.
End of explanation
n = 1000
indep_result = quant_estimate.independence(n_input_sample=n, random_state=random_state)
Explanation: First, we compute the quantile at independence
End of explanation
from dependence import quantile_func
alpha = 0.05
q_func = quantile_func(alpha)
indep_result.q_func = q_func
Explanation: We aim to minimize the output quantile. To do that, we create a q_func object from the function quantile_func to associate a probability $\alpha$ to a function that computes the empirical quantile from a given sample.
End of explanation
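For readers without the package at hand, a factory like quantile_func could plausibly be sketched as below. This is only an assumption about its behaviour (an empirical quantile computed with numpy); the actual implementation in the package may differ.
# Hypothetical sketch of a quantile-function factory.
def quantile_func_sketch(alpha):
    def q_func(output_sample, axis=1):
        return np.percentile(output_sample, alpha * 100., axis=axis)
    return q_func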
sns.jointplot(indep_result.input_sample[:, 0], indep_result.input_sample[:, 1]);
h = sns.distplot(indep_result.output_sample_id, axlabel='Model output', label="Output Distribution")
plt.plot([indep_result.quantity]*2, h.get_ylim(), label='Quantile at %d%%' % (alpha*100))
plt.legend(loc=0)
print('Output quantile :', indep_result.quantity)
Explanation: The computation returns a DependenceResult instance. This object gathers the information about the computation. It also computes the output quantity of interest (which can also be changed).
End of explanation
indep_result.compute_bootstrap(n_bootstrap=5000)
Explanation: A bootstrap can be done on the output quantity
End of explanation
sns.distplot(indep_result.bootstrap_sample, axlabel='Output quantile');
ci = [0.025, 0.975]
quantity_ci = indep_result.compute_quantity_bootstrap_ci(ci)
h = sns.distplot(indep_result.output_sample_id, axlabel='Model output', label="Output Distribution")
plt.plot([indep_result.quantity]*2, h.get_ylim(), 'g-', label='Quantile at %d%%' % (alpha*100))
plt.plot([quantity_ci[0]]*2, h.get_ylim(), 'g--', label='%d%% confidence intervals' % ((1. - (ci[0] + 1. - ci[1]))*100))
plt.plot([quantity_ci[1]]*2, h.get_ylim(), 'g--')
plt.legend(loc=0)
print('Quantile at independence: %.2f with a C.O.V at %.1f %%' % (indep_result.boot_mean, indep_result.boot_cov))
Explanation: And we can plot it
End of explanation
%%snakeviz
K = 500
n = 10000
grid_type = 'lhs'
dep_measure = 'parameter'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, dep_measure=dep_measure,
random_state=random_state)
Explanation: Grid Search Approach
First, we consider a grid search approach in order to compare its performance with the iterative algorithm. The discretization can be made on the parameter space or on another concordance measure such as Kendall's tau. The example below shows a grid search on the parameter space.
End of explanation
print('The computation did %d model evaluations.' % (grid_result.n_evals))
Explanation: The computation returns a ListDependenceResult, which is a list of DependenceResult instances along with some additional aggregated information.
End of explanation
grid_result.q_func = q_func
min_result = grid_result.min_result
print('Minimum quantile: {} at param: {}'.format(min_result.quantity, min_result.dep_param))
Explanation: Let's set the quantity function and search for the minimum among the grid results.
End of explanation
plt.plot(grid_result.dep_params, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.dep_param[0], min_result.quantity, 'ro', label='minimum')
plt.xlabel('Dependence parameter')
plt.ylabel('Quantile value')
plt.legend(loc=0);
Explanation: We can plot the result in grid results. The below figure shows the output quantiles in function of the dependence parameters.
End of explanation
grid_result.compute_bootstraps(n_bootstrap=5000)
boot_min_quantiles = grid_result.bootstrap_samples.min(axis=0)
boot_argmin_quantiles = grid_result.bootstrap_samples.argmin(axis=0).ravel().tolist()
boot_min_params = [grid_result.dep_params[idx][0] for idx in boot_argmin_quantiles]
fig, axes = plt.subplots(1, 2, figsize=(14, 5))
sns.distplot(boot_min_quantiles, axlabel="Minimum quantiles", ax=axes[0])
sns.distplot(boot_min_params, axlabel="Parameters of the minimum", ax=axes[1])
Explanation: As for the individual problem, we can also do a bootstrap for each parameter. Because we have $K$ parameters, we can do a bootstrap for the $K$ samples, compute the $K$ quantiles for each bootstrap replicate, and take the minimum quantile for each bootstrap.
End of explanation
# The parameter with the most occurrences
boot_id_min = max(set(boot_argmin_quantiles), key=boot_argmin_quantiles.count)
boot_min_result = grid_result[boot_id_min]
boot_mean = boot_min_result.bootstrap_sample.mean()
boot_std = boot_min_result.bootstrap_sample.std()
print('Worst Quantile: {} at {} with a C.O.V of {} %'.format(boot_min_result.boot_mean, min_result.dep_param, boot_min_result.boot_cov*100.))
Explanation: For the parameter with the most occurrences of the minimum, we compute its bootstrap mean.
End of explanation
plt.plot(grid_result.kendalls, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.kendall_tau, min_result.quantity, 'ro', label='Minimum quantile')
plt.xlabel("Kendall's tau")
plt.ylabel('Quantile')
plt.legend(loc=0);
Explanation: Kendall's Tau
An interesting feature is to convert the dependence parameters to Kendall's Tau values.
End of explanation
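For a Gaussian copula the relation between the correlation parameter and Kendall's tau is known in closed form, so the conversion can be sketched as follows (this helper is ours, not part of the package):
# Kendall's tau of a Gaussian copula with correlation parameter rho:
# tau = (2 / pi) * arcsin(rho)
def gaussian_copula_tau(rho):
    return 2. / np.pi * np.arcsin(rho)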
bounds_tau = np.asarray([[0., 0.7], [0.1, 0.]])
quant_estimate.bounds_tau = bounds_tau
K = 20
n = 10000
grid_type = 'lhs'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)
grid_result.q_func = q_func
min_result = grid_result.min_result
print('Minimum quantile: {} at param: {}'.format(min_result.quantity, min_result.dep_param))
plt.plot(grid_result.dep_params, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.dep_param[0], min_result.quantity, 'ro', label='minimum')
plt.xlabel('Dependence parameter')
plt.ylabel('Quantile value')
plt.legend(loc=0);
Explanation: As we can see, the bounds
With bounds on the dependencies
An interesting option in the ConservativeEstimate class is to bound the dependencies, based on some prior information.
End of explanation
filename = './result.hdf'
grid_result.to_hdf(filename)
from dependence import ListDependenceResult
load_grid_result = ListDependenceResult.from_hdf(filename, q_func=q_func, with_input_sample=False)
np.testing.assert_array_equal(grid_result.output_samples, load_grid_result.output_samples)
import os
os.remove(filename)
Explanation: Saving the results
It is useful to save the result in a file so that it can be loaded later to compute other quantities or anything else you need.
End of explanation
K = None
n = 1000
grid_type = 'vertices'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)
grid_result.q_func = q_func
print("Kendall's Tau : {}, Quantile: {}".format(grid_result.kendalls.ravel(), grid_result.quantities))
from depimpact.plots import matrix_plot_input
matrix_plot_input(grid_result.min_result);
Explanation: Taking the extreme values of the dependence parameter
If the output quantity of interest seems to have a monotonicity with the dependence parameter, it is better to directly take the bounds of the dependence problem. Obviously, the minimum should be at the edges of the design space
End of explanation
dim = 5
quant_estimate.margins = [ot.Normal()]*dim
Explanation: Higher Dimension
We consider the problem in dimension $d=5$.
End of explanation
families = np.zeros((dim, dim), dtype=int)
families[2, 0] = 1
quant_estimate.families = families
families
quant_estimate.bounds_tau = None
quant_estimate.bounds_tau
Explanation: Copula families with one dependent pair
We consider a gaussian copula for this first example, but for the moment only one pair is dependent.
End of explanation
quant_estimate.vine_structure
Explanation: We reset the families and bounds for the current instance. (I don't want to create a new instance, just to check if the setters are good).
End of explanation
K = 20
n = 10000
grid_type = 'vertices'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)
Explanation: Let's run the grid search and see what we get.
End of explanation
grid_result.q_func = q_func
min_result = grid_result.min_result
print('Worst Quantile: {} at {}'.format(min_result.quantity, min_result.dep_param))
matrix_plot_input(min_result)
plt.plot(grid_result.dep_params, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.dep_param[0], min_result.quantity, 'ro', label='Minimum')
plt.xlabel('Dependence parameter')
plt.ylabel('Quantile value')
plt.legend(loc=0);
Explanation: The quantile is lower compared to the previous, lower-dimensional problem. Indeed, there are more variables and more uncertainty, so a larger deviation of the output.
End of explanation
families = np.zeros((dim, dim), dtype=int)
for i in range(1, dim):
for j in range(i):
families[i, j] = 1
quant_estimate.margins = margins
quant_estimate.families = families
quant_estimate.vine_structure = None
quant_estimate.bounds_tau = None
quant_estimate.bounds_tau
K = 100
n = 1000
grid_type = 'lhs'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)
min_result = grid_result.min_result
print('Worst Quantile: {0} at {1}'.format(min_result.quantity, min_result.dep_param))
Explanation: Copula families with all dependent pairs
We again consider a Gaussian copula, but now all pairs of variables are dependent.
End of explanation
families[3, 2] = 0
quant_estimate = ConservativeEstimate(model_func=func_sum, margins=margins, families=families)
K = 100
n = 10000
grid_type = 'lhs'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type,
q_func=q_func, random_state=random_state)
min_result = grid_result.min_result
print('Worst Quantile: {0} at {1}'.format(min_result.quantity, min_result.dep_param))
grid_result.vine_structure
from depimpact.plots import matrix_plot_input
matrix_plot_input(min_result)
Explanation: With one fixed pair
End of explanation
K = 100
n = 1000
grid_type = 'lhs'
grid_result_1 = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, save_grid=True, grid_path='./output')
grid_result_2 = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type,
q_func=q_func, use_grid=0, grid_path='./output')
Explanation: Save the used grid and load it again
End of explanation
grid_result_1.n_input_sample, grid_result_2.n_input_sample
grid_result = grid_result_1 + grid_result_2
Explanation: Then gather the results from the same grid with the same configurations
End of explanation
grid_result.n_input_sample
Explanation: Because the configurations are the same, we can gather the results from two different runs
End of explanation |
1,896 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactive Widgets
Using interact
Source Link
Step1: Note the semicolon
Step2: Booleans create checkbox
Step3: Using decorators
Step4: From Portilla's notes
This examples clarifies how interact process its keyword arguments
Step5: Function Annotations
Step6: multiple instances remain in sync!
Step7: There are client-server nuances!
Step8: Source
With mroe text descriptions
Step9: This is not working! | Python Code:
# Start with some imports!
from __future__ import print_function
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
# Very basic function
def f(x):
return x
help(interact)
Explanation: Interactive Widgets
Using interact
Source Link
End of explanation
# Generate a slider to interact with
interact(f, x=10);
interact(f, x=10,);
Explanation: Note the semicolon
End of explanation
# Booleans generate check-boxes
interact(f, x=True);
# Strings generate text areas
interact(f, x='Hi there!');
Explanation: Booleans create checkbox
End of explanation
# Using a decorator!
@interact(x=True, y=1.0)
def g(x, y):
return (x, y)
# Again, a simple function
def h(p, q):
return (p, q)
interact(h, p=5, q=fixed(20));
interact(f, x=widgets.IntSlider(min=-10., max=30, value=10));
Explanation: Using decorators
End of explanation
# Min,Max slider with Tuples
interact(f, x=(0,4));
# (min, max, step)
interact(f, x=(0,8,2));
interact(f, x=(0.0,10.0));
interact(f, x=(0.0,10.0,0.01));
@interact(x=(0.0,20.0,0.5))
def h(x=5.5):
return x
interact(f, x=('apples','oranges'));
interact(f, x={'one': 10, 'two': 20});
Explanation: From Portilla's notes
This example clarifies how interact processes its keyword arguments:
If the keyword argument is a Widget instance with a value attribute, that widget is used. Any widget with a value attribute can be used, even custom ones.
Otherwise, the value is treated as a widget abbreviation that is converted to a widget before it is used.
The following table gives an overview of different widget abbreviations:
<table class="table table-condensed table-bordered">
<tr><td><strong>Keyword argument</strong></td><td><strong>Widget</strong></td></tr>
<tr><td>`True` or `False`</td><td>Checkbox</td></tr>
<tr><td>`'Hi there'`</td><td>Text</td></tr>
<tr><td>`value` or `(min,max)` or `(min,max,step)` if integers are passed</td><td>IntSlider</td></tr>
<tr><td>`value` or `(min,max)` or `(min,max,step)` if floats are passed</td><td>FloatSlider</td></tr>
<tr><td>`('orange','apple')` or `{'one':1,'two':2}`</td><td>Dropdown</td></tr>
</table>
End of explanation
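As a quick illustration of the table above (our own toy example), several abbreviations can be combined in a single call:
@interact(flag=True, text='Hi there', count=(0, 10), ratio=(0.0, 1.0, 0.1), fruit=['apple', 'orange'])
def show_values(flag, text, count, ratio, fruit):
    return (flag, text, count, ratio, fruit)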
def f(x:True): # python 3 only
return x
from IPython.utils.py3compat import annotate
@annotate(x=True)
def f(x):
return x
interact(f);
def f(a, b):
return a+b
w = interactive(f, a=10, b=20)
type(w)
w.children
from IPython.display import display
display(w)
w.kwargs
w.result
from ipywidgets import *
IntSlider()
from IPython.display import display
w = IntSlider()
display(w)
display(w)
Explanation: Function Annotations
End of explanation
w.close()
w = IntSlider()
display(w)
w.value
w.value = 100
w.keys
Text(value='Hello World!')
Text(value='Hello World!', disabled=True)
from traitlets import link
a = FloatText()
b = FloatSlider()
display(a,b)
mylink = link((a, 'value'), (b, 'value'))
mylink.unlink()
print(widgets.Button.on_click.__doc__)
from IPython.display import display
button = widgets.Button(description="Click Me!")
display(button)
def on_button_clicked(b):
print("Button clicked.")
button.on_click(on_button_clicked)
text = widgets.Text()
display(text)
def handle_submit(sender):
print(text.value)
text.on_submit(handle_submit)
print(widgets.Widget.on_trait_change.__doc__)
print(widgets.Widget.observe.__doc__)  # observe is the newer replacement for on_trait_change
int_range = widgets.IntSlider()
display(int_range)
def on_value_change(name, value):
print(value)
int_range.on_trait_change(on_value_change, 'value')
import traitlets
# Create Caption
caption = widgets.Label(value = 'The values of slider1 and slider2 are synchronized')
# Create IntSlider
slider1 = widgets.IntSlider(description='Slider 1')
slider2 = widgets.IntSlider(description='Slider 2')
# Use trailets to link
l = traitlets.link((slider1, 'value'), (slider2, 'value'))
# Display!
display(caption, slider1, slider2)
# Create Caption
caption = widgets.Latex(value = 'Changes in source values are reflected in target1')
# Create Sliders
source = widgets.IntSlider(description='Source')
target1 = widgets.IntSlider(description='Target 1')
# Use dlink
dl = traitlets.dlink((source, 'value'), (target1, 'value'))
display(caption, source, target1)
# May get an error depending on order of cells being run!
l.unlink()
dl.unlink()
Explanation: multiple instances remain in sync!
End of explanation
# NO LAG VERSION
caption = widgets.Latex(value = 'The values of range1 and range2 are synchronized')
range1 = widgets.IntSlider(description='Range 1')
range2 = widgets.IntSlider(description='Range 2')
l = widgets.jslink((range1, 'value'), (range2, 'value'))
display(caption, range1, range2)
# NO LAG VERSION
caption = widgets.Latex(value = 'Changes in source_range values are reflected in target_range1')
source_range = widgets.IntSlider(description='Source range')
target_range1 = widgets.IntSlider(description='Target range ')
dl = widgets.jsdlink((source_range, 'value'), (target_range1, 'value'))
display(caption, source_range, target_range1)
l.unlink()
dl.unlink()
import ipywidgets as widgets
# Show all available widgets!
widgets.Widget.widget_types.values()
widgets.FloatSlider(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Test:',
)
widgets.FloatSlider(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Test',
orientation='vertical',
)
widgets.FloatProgress(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Loading:',
)
widgets.BoundedFloatText(
value=7.5,
min=5.0,
max=10.0,
description='Text:',
)
widgets.FloatText(
value=7.5,
description='Any:',
)
widgets.ToggleButton(
description='Click me',
value=False,
)
widgets.Checkbox(
description='Check me',
value=True,
)
widgets.Valid(
value=True,
)
from IPython.display import display
w = widgets.Dropdown(
options=['1', '2', '3'],
value='2',
description='Number:',
)
display(w)
# Show value
w.value
w = widgets.Dropdown(
options={'One': 1, 'Two': 2, 'Three': 3},
value=2,
description='Number:')
display(w)
w.value
widgets.RadioButtons(
description='Pizza topping:',
options=['pepperoni', 'pineapple', 'anchovies'],
)
widgets.Select(
description='OS:',
options=['Linux', 'Windows', 'OSX'],
)
widgets.ToggleButtons(
description='Speed:',
options=['Slow', 'Regular', 'Fast'],
)
w = widgets.SelectMultiple(
description="Fruits",
options=['Apples', 'Oranges', 'Pears'])
display(w)
w.value
widgets.Text(
description='String:',
value='Hello World',
)
widgets.Textarea(
description='String:',
value='Hello World',
)
widgets.Latex(
value="$$\\frac{n!}{k!(n-k)!}$$",
)
widgets.HTML(
value="Hello <b>World</b>"
)
widgets.Button(description='Click me')
Explanation: There are client-server nuances!
End of explanation
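Briefly, the nuance is that traitlets.link/widgets.link runs in the Python kernel, so widgets only stay in sync while the kernel is alive, whereas widgets.jslink wires the models together in the browser itself. A small side-by-side sketch of our own:
a1 = widgets.IntSlider(description='A1')
a2 = widgets.IntSlider(description='A2')
widgets.link((a1, 'value'), (a2, 'value'))    # kept in sync by the kernel
b1 = widgets.IntSlider(description='B1')
b2 = widgets.IntSlider(description='B2')
widgets.jslink((b1, 'value'), (b2, 'value'))  # kept in sync by the browser
display(a1, a2, b1, b2)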
%%html
<style>
.example-container { background: #999999; padding: 2px; min-height: 100px; }
.example-container.sm { min-height: 50px; }
.example-box { background: #9999FF; width: 50px; height: 50px; text-align: center; vertical-align: middle; color: white; font-weight: bold; margin: 2px;}
.example-box.med { width: 65px; height: 65px; }
.example-box.lrg { width: 80px; height: 80px; }
</style>
import ipywidgets as widgets
from IPython.display import display
button = widgets.Button(
description='Hello World!',
width=100, # Integers are interpreted as pixel measurements.
height='2em', # em is valid HTML unit of measurement.
color='lime', # Colors can be set by name,
background_color='#0022FF', # and also by color code.
border_color='cyan')
display(button)
Explanation: Source
With more text descriptions
End of explanation
from IPython.display import display
float_range = widgets.FloatSlider()
string = widgets.Text(value='hi')
container = widgets.Box(children=[float_range, string])
container.border_color = 'red'
container.border_style = 'dotted'
container.border_width = 3
display(container) # Displays the `container` and all of it's children.
container = widgets.Box()
container.border_color = 'red'
container.border_style = 'dotted'
container.border_width = 3
display(container)
int_range = widgets.IntSlider()
container.children=[int_range]
name1 = widgets.Text(description='Location:')
zip1 = widgets.BoundedIntText(description='Zip:', min=0, max=99999)
page1 = widgets.Box(children=[name1, zip1])
name2 = widgets.Text(description='Location:')
zip2 = widgets.BoundedIntText(description='Zip:', min=0, max=99999)
page2 = widgets.Box(children=[name2, zip2])
accord = widgets.Accordion(children=[page1, page2], width=400)
display(accord)
accord.set_title(0, 'From')
accord.set_title(1, 'To')
name = widgets.Text(description='Name:', padding=4)
color = widgets.Dropdown(description='Color:', padding=4, options=['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'])
page1 = widgets.Box(children=[name, color], padding=4)
age = widgets.IntSlider(description='Age:', padding=4, min=0, max=120, value=50)
gender = widgets.RadioButtons(description='Gender:', padding=4, options=['male', 'female'])
page2 = widgets.Box(children=[age, gender], padding=4)
tabs = widgets.Tab(children=[page1, page2])
display(tabs)
tabs.set_title(0, 'Name')
tabs.set_title(1, 'Details')
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
display(widgets.Text(description="aaaaaaaaaaaaaaaaaa:"))
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
display(widgets.Text())
buttons = [widgets.Button(description=str(i)) for i in range(3)]
display(*buttons)
container = widgets.HBox(children=buttons)
display(container)
container = widgets.VBox(children=buttons)
display(container)
container = widgets.FlexBox(children=buttons)
display(container)
w1 = widgets.Latex(value="First line")
w2 = widgets.Latex(value="Second line")
w3 = widgets.Latex(value="Third line")
display(w1, w2, w3)
w2.visible=None
w2.visible=False
w2.visible=True
form = widgets.VBox()
first = widgets.Text(description="First:")
last = widgets.Text(description="Last:")
student = widgets.Checkbox(description="Student:", value=False)
school_info = widgets.VBox(visible=False, children=[
widgets.Text(description="School:"),
widgets.IntText(description="Grade:", min=0, max=12)
])
pet = widgets.Text(description="Pet:")
form.children = [first, last, student, school_info, pet]
display(form)
def on_student_toggle(name, value):
if value:
school_info.visible = True
else:
school_info.visible = False
student.on_trait_change(on_student_toggle, 'value')
Explanation: This is not working!
End of explanation |
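A possible modern workaround (our sketch, not from the original notebook) is the observe API combined with the widget's layout, which replaces the deprecated on_trait_change/visible pattern in newer ipywidgets releases:
# Hide or show the school_info box by toggling its CSS display property.
def on_student_toggle_new(change):
    school_info.layout.display = '' if change['new'] else 'none'

school_info.layout.display = 'none'  # hidden until the checkbox is ticked
student.observe(on_student_toggle_new, names='value')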
1,897 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Adding all icons in a single call
Step2: Explicit loop allow for customization in the loop.
Step4: FastMarkerCluster is not as flexible as MarkerCluster but, like the name suggests, it is faster. | Python Code:
icon_create_function = """
function(cluster) {
    return L.divIcon({
        html: '<b>' + cluster.getChildCount() + '</b>',
        className: 'marker-cluster marker-cluster-large',
        iconSize: new L.Point(20, 20)
    });
}
"""
from folium.plugins import MarkerCluster
m = folium.Map(
location=[np.mean(lats), np.mean(lons)], tiles="Cartodb Positron", zoom_start=1
)
marker_cluster = MarkerCluster(
locations=locations,
popups=popups,
name="1000 clustered icons",
overlay=True,
control=True,
icon_create_function=icon_create_function,
)
marker_cluster.add_to(m)
folium.LayerControl().add_to(m)
m
Explanation: Adding all icons in a single call
End of explanation
%%time
m = folium.Map(
location=[np.mean(lats), np.mean(lons)],
tiles='Cartodb Positron',
zoom_start=1
)
marker_cluster = MarkerCluster(
name='1000 clustered icons',
overlay=True,
control=False,
icon_create_function=None
)
for k in range(size):
location = lats[k], lons[k]
marker = folium.Marker(location=location)
popup = 'lon:{}<br>lat:{}'.format(location[1], location[0])
folium.Popup(popup).add_to(marker)
marker_cluster.add_child(marker)
marker_cluster.add_to(m)
folium.LayerControl().add_to(m);
m
Explanation: An explicit loop allows for per-marker customization inside the loop.
End of explanation
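To illustrate the kind of per-marker customization the loop makes possible, here is our own small variation on the cell above (it reuses lats, lons and marker_cluster and simply colours markers by hemisphere):
for k in range(10):
    color = 'red' if lats[k] > 0 else 'blue'
    folium.Marker(
        location=(lats[k], lons[k]),
        icon=folium.Icon(color=color),
    ).add_to(marker_cluster)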
from folium.plugins import FastMarkerCluster
%%time
m = folium.Map(
location=[np.mean(lats), np.mean(lons)],
tiles='Cartodb Positron',
zoom_start=1
)
FastMarkerCluster(data=list(zip(lats, lons))).add_to(m)
folium.LayerControl().add_to(m);
m
callback = """
function (row) {
    var icon, marker;
    icon = L.AwesomeMarkers.icon({
        icon: "map-marker", markerColor: "red"});
    marker = L.marker(new L.LatLng(row[0], row[1]));
    marker.setIcon(icon);
    return marker;
};
"""
m = folium.Map(
location=[np.mean(lats), np.mean(lons)], tiles="Cartodb Positron", zoom_start=1
)
FastMarkerCluster(data=list(zip(lats, lons)), callback=callback).add_to(m)
m
Explanation: FastMarkerCluster is not as flexible as MarkerCluster but, like the name suggests, it is faster.
End of explanation |
1,898 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Regressão Logística com Regularização
Nesta parte do trabalho, será implementada a Regressão Logística Regularizada
para prever se os microchips de uma usina de fabricação passam na garantia
de qualidade (QA). Durante a QA, cada microchip passa por vários testes para
garantir se está funcionando corretamente. Dessa forma, a Gestão de Produto da
fábrica terá o resultados de teste para alguns microchips em dois testes diferentes.
A partir desses dois testes, será determinado se os microchips deveriam ser
aceitos ou rejeitados. Para auxiliar a tomar a decisão, há um conjunto de dados
com resultados de testes anteriores sobre microchips, a partir do qual é possível construir
um modelo de Regressão Logística.
O arquivo {ex2data2.txt} contém os dados a serem usados nessa parte do trabalho. A primeira
coluna corresponde aos resultados do primeiro teste, enquanto que a segunda coluna corresponde
aos resultados do segundo teste. A terceira coluna contém os valores da classe (y = 0 significa
rejeitado no teste, e y = 1 significa aceito no teste).
1.1 Visualização dos Dados
Para a maioria dos conjuntos de dados do mundo real, não é possível criar um gráfico para
visualizar seus pontos. Mas, para o conjunto de dados fornecido, isso é possível. Implemente
um script em Python que produza um gráfico de dispersão (scatter plot) dos dados fornecidos.
Após finalizado, seu script deve produzir um resultado similar ao apresentado na Figura abaixo.
Step1: 1.2 Mapeamento de características (feature mapping)
Uma maneira de tornar os dados mais apropriados para a classificação é criar
mais características a partir das já existentes. Para isso, você deve criar uma
função mapFeature. Essa função deve ser implementada em um arquivo de
nome mapFeature.py, que irá mapear as características para todos os termos
polinomiais de x1 e x2, até a sexta potência. Como resultado desse mapeamento, nosso
vetor de duas características (os escores nos dois testes de QA) será transformado
em um vetor de 28 dimensões.
Um classificador que usa regressão logística treinado nesse vetor de características
de maior dimensão terá uma fronteira de decisão mais complexa e parecerá não-linear
quando desenhado em um gráfico bidimensional.
Embora o mapeamento de características nos permita construir um classificador mais expressivo,
também é mais suscetível a sobreajuste (overfitting). Desse modo, será implementada a
Regressão Logística Regularizada sobre os dados fornecidos e também verá como a regularização pode
ajudar a combater o problema do sobreajuste.
Step2: 1.3 Função de custo e gradiente
Agora, você deverá implementar o código para calcular a função de custo e
o gradiente para a regressão logística regularizada. Crie um arquivo de nome
costFunctionReg.py que contém uma função de nome costFunctionReg.py
e que computa o custo e o gradiente. Lembre-se de que a função de custo
regularizada na regressão logística é dada por
Step3: 1.3.1 Testando a Função de Custo e o Gradiente
Step4: 1.4 Esboço da fronteira de decisão
Nessa parte, você deve esboçar (plotar) a fronteira de decisão que foi aprendida
para separar os exemplos positivos dos negativos. Crie uma arquivo de nome
plotDecisionBoundary.py, para criar esse gráfico que traça o limite da decisão
não-linear. Seu gráfico deve ser semelhante ao apresentado na Figura abaixo.
Step5: 1.3 Visualização de J(ø)
Para melhor entender a função de custo, você irá nessa parte do trabalho plotar o custo sobre
uma grade bidimensional de valores de 0 e de 1. Para isso, você deve usar sua implementação da função computarCusto.
O código que você deve implementar deve gerar um array bidimensional de valores de J(ø). Os valores gerados pelo seu código devem estar na faixa a seguir
Step6: 2. Regressão Linear com Múltiplas Variáveis
Nessa parte do trabalho, você irá implementar a regressão linear com múltiplas variáveis para predizer
o preço de venda de imóveis. O arquivo ex1data2.txt contém informações acerca de preços de imóveis.
A primeira coluna corersponde ao tamanho do imóvel (em pés quadrados). A segunda coluna corresponde à
quantidade de dormitórios no imóvel em questão. A terceira coluna corresponde ao preço do imóvel.
Visualização dos dados
Step7: 2.1 Normalização das características
Se você inspecionar os valores do conjunto de dados fornecido, irá notar que os tamanhos dos imóveis são
aproximadamente 1000 vezes maiores que as quantidades encontradas na coluna de quantidade de dormitórios.
Sua tarefa nessa parte é implementar uma função denominada normalizarCaracteristica em um arquivo denominado
normalizarCaracteristica.py. Essa função deve
Step8: 2.2 Gradiente descendente
Anteriormente, você implementou o GD em uma regressão linear univariada. A única diferença agora é que há mais uma característica na matriz de dados X. A função de hipótese h(x) e a atualização dos gradientes em lote permanecem inalteradas. Você deve implementar código nos arquivos denominados computarCustoMulti.py e gdmulti.py para implementar a função de custo e o algoritmo GD para regressão linear com múltiplas variáveis, respectivamente. Se o seu código na parte anterior (variável única) já provê suporte a múltiplas variáveis, você também pode reusá-lo aqui. Se assegure de que o seu código dá suporte a qualquer número de características e está bem vetorizado.
Step9: 3. Regressão Logística
Nessa parte do trabalho, você irá implementar a regressão logística. Em particular,
você irá criar uma classificador para predizer se um estudante será admitido em uma
universidade, com base nos resultados de duas avaliações. Suponha que estão disponíveis
dados históricos acerca de realizações passadas dessas avaliações, e que esses dados
históricos podem ser usados como conjunto de treinamento. Para cada exemplo desse conjunto
de treinamento, temos as notas das duas avaliações e a decisão acerca do candidato
(aprovado ou reprovado).
Sua tarefa é construir um modelo de classificação que provê uma estimativa da probabilidade
de admissão de um candidato, com base na notas que ele obteve nas duas avaliações.
O arquivo ex2data1.txt contém os dados a serem usados nessa parte do trabalho.
3.1 Visualização dos dados
Antes de começar a implementar qualquer algoritmo de aprendizado, é adequado
visualizar os dados, quando possível. Nessa parte do trabalho, você deve
carregar o arquivo com o conjunto de treinamento e plotar (i.e., produzir um
gráfico) os pontos de dados. O resultado dessa tarefa deve ser um gráfico similar
ao apresentado na Figura abaixo.
Step10: 3.2 Implementação
3.2.1 Função sigmoide
Como primeiro passo nessa parte, implemente a função em Python que calcula
o valor da função sigmoide. Defina essa função em um arquivo denominado
sigmoide.py, de tal forma que ela possa ser chamada de outras parte do seu
código. Após finalizar sua implementação, você pode verificar sua corretude
Step11: 3.2.2 Função de custo e gradiente
Agora, você deverá implementar a função de custo para a regressão logística.
Essa função deve retornar o valor de função de custo e o gradiente. Implemente
esse código em um arquivo denominado funcaoCustoRegressaoLogistica.py.
Lembre-se de que o gradiente é um vetor com o mesmo número de elementos que ø.
Uma vez que tenha implementado essa função, realize uma chamada usando
o valor inicial de ø. Você deve confirmar que o valor produzido é aproximadamente 0.693.
Step12: 3.2.3 Aprendizado dos parâmetros
Para a regressão logística, o objetivo é minimizar J(ø) com relação ao vetor
de parâmetros ø. Sendo assim, nessa parte você deve implementar uma função
em Python para encontrar o vetor ø que minimiza a função de custo. Utilize a
função funcaoCustoRegressaoLogistica que você implementou previamente.
Step13: 3.2.4 Avaliação do modelo
Após o aprendizado dos parâmetros, você pode usar o modelo correspondente
para predizer se um candidato qualquer será aprovado. Para um candidato
com notas 45 e 85 na primeira e segunda avaliações, respectivamente, você deve
esperar que ele seja aprovado com probabilidade de 77.6%.
Outro modo de avaliar a qualidade dos parâmetros é verificar o quão bem o
modelo aprendido prediz os pontos de dados do conjunto de treinamento. Nessa
parte, você deve implementar uma função denominada predizer. Essa função
deve produzir os valores 0 ou 1, dados um exemplo do conjunto de treinamento
o vetor de parâmetros ø. Use essa função para produzir a porcentagem de
acertos do seu classificador sobre o conjunto de treinamento.
Step14: 2 - L2 Regularization
The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from | Python Code:
#import os
import pandas as pd
import numpy as np
import matplotlib as plt
from numpy import loadtxt, where, append, zeros, ones, array, linspace, logspace
from pylab import scatter, show, legend, xlabel, ylabel
#%matplotlib inline
# Carregando o arquivo gerado pelo MATLAB
#import scipy.io
#mat = scipy.io.loadmat('file.mat')
# Construindo um dataset com base num Dataframe, já identificando colunas e exibindo seus primeiros 20 registros.
df = pd.read_csv('am-T2-dados/ex2data2.txt', names=['QATest1', 'QATest2', 'QAcceptance'])
df.head()
# Visualização da Distribuição dos Dados conforme Histograma
df.hist()
plt.pyplot.show()
df.QATest1.hist(), df.QATest2.hist(), df.QAcceptance.hist()
plt.pyplot.show()
df.describe()
df.QATest1.describe().round(2)
df.QATest2.describe().round(2)
df.QAcceptance.describe().round(2)
#df.drop(labels='QATest1', axis=1)
#df.drop(labels='QATest2', axis=1)
#df.drop(labels='QAcceptance', axis=1)
#load the dataset
data = np.loadtxt('am-T2-dados/ex2data2.txt', delimiter=',')
X = data[:, 0:2]
y = data[:, 2]
pos = where(y == 1)
neg = where(y == 0)
scatter(X[pos, 0], X[pos, 1], marker='+', c='xkcd:black', label='Not Admitted')
scatter(X[neg, 0], X[neg, 1], marker='o', c='xkcd:yellow', label='Admitted')
legend(['Admitted', 'Not Admitted'])
xlabel('Exam 1 score')
ylabel('Exam 2 score')
show()
Explanation: 1. Regressão Logística com Regularização
Nesta parte do trabalho, será implementada a Regressão Logística Regularizada
para prever se os microchips de uma usina de fabricação passam na garantia
de qualidade (QA). Durante a QA, cada microchip passa por vários testes para
garantir se está funcionando corretamente. Dessa forma, a Gestão de Produto da
fábrica terá o resultados de teste para alguns microchips em dois testes diferentes.
A partir desses dois testes, será determinado se os microchips deveriam ser
aceitos ou rejeitados. Para auxiliar a tomar a decisão, há um conjunto de dados
com resultados de testes anteriores sobre microchips, a partir do qual é possível construir
um modelo de Regressão Logística.
O arquivo {ex2data2.txt} contém os dados a serem usados nessa parte do trabalho. A primeira
coluna corresponde aos resultados do primeiro teste, enquanto que a segunda coluna corresponde
aos resultados do segundo teste. A terceira coluna contém os valores da classe (y = 0 significa
rejeitado no teste, e y = 1 significa aceito no teste).
1.1 Visualização dos Dados
Para a maioria dos conjuntos de dados do mundo real, não é possível criar um gráfico para
visualizar seus pontos. Mas, para o conjunto de dados fornecido, isso é possível. Implemente
um script em Python que produza um gráfico de dispersão (scatter plot) dos dados fornecidos.
Após finalizado, seu script deve produzir um resultado similar ao apresentado na Figura abaixo.
End of explanation
def map_feature(X1, X2):
'''
Função que mapeia características p/ os termos polinomiais X1 e X2 até a 6ª potência.
Retorna um novo conjunto com mais características, através do algoritmo de mapping
X1, X2, X1 ** 2, X2 ** 2, X1*X2, X1*X2 ** 2, etc...
Os parâmetros X1, X2 devem ser do mesmo tamanho
'''
# Potência padrão para o mapeamento
potencia = 6
X1.shape = (X1.size, 1)
X2.shape = (X2.size, 1)
features = np.ones(shape=(X1.size, 1))
for i in range(1, potencia + 1):
for j in range(i + 1):
r = (X1 ** (i - j)) * (X2 ** j)
features = append(features, r, axis=1)
return features
X = data[:, 0:2]
y = data[:, 2]
pos = where(y == 1)
neg = where(y == 0)
scatter(X[pos, 0], X[pos, 1], marker='o', c='b')
scatter(X[neg, 0], X[neg, 1], marker='x', c='r')
xlabel('Microchip Test 1')
ylabel('Microchip Test 2')
legend(['y = 1', 'y = 0'])
m, n = X.shape
y.shape = (m, 1)
it = map_feature(X[:, 0], X[:, 1])
it.shape
print(pd.DataFrame(it))
Explanation: 1.2 Mapeamento de características (feature mapping)
Uma maneira de tornar os dados mais apropriados para a classificação é criar
mais características a partir das já existentes. Para isso, você deve criar uma
função mapFeature. Essa função deve ser implementada em um arquivo de
nome mapFeature.py, que irá mapear as características para todos os termos
polinomiais de x1 e x2, até a sexta potência. Como resultado desse mapeamento, nosso
vetor de duas características (os escores nos dois testes de QA) será transformado
em um vetor de 28 dimensões.
Um classificador que usa regressão logística treinado nesse vetor de características
de maior dimensão terá uma fronteira de decisão mais complexa e parecerá não-linear
quando desenhado em um gráfico bidimensional.
Embora o mapeamento de características nos permita construir um classificador mais expressivo,
também é mais suscetível a sobreajuste (overfitting). Desse modo, será implementada a
Regressão Logística Regularizada sobre os dados fornecidos e também verá como a regularização pode
ajudar a combater o problema do sobreajuste.
End of explanation
from numpy import loadtxt, where, zeros, e, array, log, ones, append, linspace
from pylab import scatter, show, legend, xlabel, ylabel, contour, title
from scipy.optimize import fmin_bfgs
def sigmoid(X):
'''Compute the sigmoid function '''
den = 1.0 + e ** (-1.0 * X)
d = 1.0 / den
return d
def cost_function_reg(theta, X, y, l):
'''Compute the cost and partial derivatives as grads
'''
h = sigmoid(X.dot(theta))
thetaR = theta[1:, 0]
J = (1.0 / m) * ((-y.T.dot(log(h))) - ((1 - y.T).dot(log(1.0 - h)))) + (l / (2.0 * m)) * (thetaR.T.dot(thetaR))
delta = h - y
sumdelta = delta.T.dot(X[:, 1])
grad1 = (1.0 / m) * sumdelta
XR = X[:, 1:X.shape[1]]
sumdelta = delta.T.dot(XR)
grad = (1.0 / m) * (sumdelta + l * thetaR)
out = zeros(shape=(grad.shape[0], grad.shape[1] + 1))
out[:, 0] = grad1
out[:, 1:] = grad
return J.flatten(), out.T.flatten()
Explanation: 1.3 Função de custo e gradiente
Agora, você deverá implementar o código para calcular a função de custo e
o gradiente para a regressão logística regularizada. Crie um arquivo de nome
costFunctionReg.py que contém uma função de nome costFunctionReg.py
e que computa o custo e o gradiente. Lembre-se de que a função de custo
regularizada na regressão logística é dada por:
$$J_{regularizado} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(x^{(i)}\right) + (1-y^{(i)})\log\left(1- x^{(i)}\right) \large{)} }\text{Função de Custo} + \underbrace{\frac{\lambda}{2m} \sum\limits{j = 1}^{n}\ {\theta}{j}^{2} }\text{Fator Regularização} $$
Depois de concluir a implementação da função costFunctionReg, você deve
testar a corretude dela usando o valor inicial de ${\theta}$ (inicializado todo com zeros).
Você deve ver que o custo é de cerca de 0.693.
Porém, usando a função costFunctionReg, você agora deve computar os valores ótimos para ${\theta}$.
End of explanation
from scipy.optimize import fmin_bfgs
m, n = X.shape
y.shape = (m, 1)
it = map_feature(X[:, 0], X[:, 1])
#Initialize theta parameters
initial_theta = zeros(shape=(it.shape[1], 1))
#Set regularization parameter lambda to 1
regularizacao = 1
# Compute and display initial cost and gradient for regularized logistic regression
cost, grad = cost_function_reg(initial_theta, it, y, regularizacao)
#def decorated_cost(theta):
# return cost_function_reg(theta, it, y, l)
#cost_function_reg(theta, it, y, l)
#print fmin_bfgs(decorated_cost, initial_theta, args=(it, y, l), maxfun=400)
cost, grad
Explanation: 1.3.1 Testando a Função de Custo e o Gradiente
End of explanation
#Plot Boundary
l = 1
m, n = X.shape
y.shape = (m, 1)
it = map_feature(X[:, 0], X[:, 1])
theta = ones(shape=(it.shape[1], 1))
u = linspace(-1, 1.5, 50)
v = linspace(-1, 1.5, 50)
z = zeros(shape=(len(u), len(v)))
for i in range(len(u)):
for j in range(len(v)):
z[i, j] = (map_feature(array(u[i]), array(v[j])).dot(array(theta)))
z = z.T
contour(u, v, z)
title('lambda = %f' % l)
xlabel('Microchip Test 1')
ylabel('Microchip Test 2')
legend(['y = 1', 'y = 0', 'Decision boundary'])
show()
#load the dataset
data2 = np.loadtxt('am-T2-dados/ex2data2.txt', delimiter=',')
#y = np.c_[data2[:,2]]
y = data2[:,2]
y.shape = (y.size, 1)
X = data2[:,0:2]
print(y.shape)
#it = map_feature(X[:, 0], X[:, 1])
X_feature = map_feature(X[:, 0], X[:, 1])
X_feature.shape
def costFunctionReg(theta, reg, X, y):
m = y.size
h = sigmoid(X.dot(theta))
J = -1*(1/m)*(np.log(h).T.dot(y)+np.log(1-h).T.dot(1-y)) + (reg/(2*m))*np.sum(np.square(theta[1:]))
if np.isnan(J[0]):
return(np.inf)
return(J[0])
def gradientReg(theta, reg, X, y):
m = y.size
h = sigmoid(X.dot(theta.reshape(-1,1)))
grad = (1/m)*X.T.dot(h-y) + (reg/m)*np.r_[[[0]],theta[1:].reshape(-1,1)]
return(grad.flatten())
initial_theta = np.zeros(X_feature.shape[1])
costFunctionReg(initial_theta, 1, X_feature, y)
initial_theta
Explanation: 1.4 Esboço da fronteira de decisão
Nessa parte, você deve esboçar (plotar) a fronteira de decisão que foi aprendida
para separar os exemplos positivos dos negativos. Crie uma arquivo de nome
plotDecisionBoundary.py, para criar esse gráfico que traça o limite da decisão
não-linear. Seu gráfico deve ser semelhante ao apresentado na Figura abaixo.
End of explanation
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.optimize import minimize
from sklearn.preprocessing import PolynomialFeatures
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 150)
pd.set_option('display.max_seq_items', None)
#%config InlineBackend.figure_formats = {'pdf',}
%matplotlib inline
import seaborn as sns
sns.set_context('notebook')
sns.set_style('white')
def predict(theta, X, threshold=0.5):
p = sigmoid(X.dot(theta.T)) >= threshold
return(p.astype('int'))
def loaddata(file, delimeter):
data = np.loadtxt(file, delimiter=delimeter)
print('Dimensions: ',data.shape)
print(data[1:6,:])
return(data)
def plotData(data, label_x, label_y, label_pos, label_neg, axes=None):
# Get indexes for class 0 and class 1
neg = data[:,2] == 0
pos = data[:,2] == 1
# If no specific axes object has been passed, get the current axes.
if axes == None:
axes = plt.gca()
axes.scatter(data[pos][:,0], data[pos][:,1], marker='+', c='k', s=60, linewidth=2, label=label_pos)
axes.scatter(data[neg][:,0], data[neg][:,1], c='y', s=60, label=label_neg)
axes.set_xlabel(label_x)
axes.set_ylabel(label_y)
axes.legend(frameon= True, fancybox = True);
fig, axes = plt.subplots(1,3, sharey = True, figsize=(17,5))
# Decision boundaries
# Lambda = 0 : No regularization --> too flexible, overfitting the training data
# Lambda = 1 : Looks about right
# Lambda = 100 : Too much regularization --> high bias
for i, C in enumerate([0, 1, 100]):
# Optimize costFunctionReg
res2 = minimize(costFunctionReg, initial_theta, args=(C, X_feature, y), method=None, jac=gradientReg, options={'maxiter':3000})
# Accuracy
accuracy = 100*sum(predict(res2.x, X_feature) == y.ravel())/y.size
# Scatter plot of X,y
plotData(data2, 'Microchip Test 1', 'Microchip Test 2', 'y = 1', 'y = 0', axes.flatten()[i])
# Plot decisionboundary
x1_min, x1_max = X[:,0].min(), X[:,0].max(),
x2_min, x2_max = X[:,1].min(), X[:,1].max(),
xx1, xx2 = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max))
    # Map the grid through the same polynomial feature mapping used for training
    # (X_feature is a plain array produced by map_feature, so it has no fit_transform method)
    h = sigmoid(map_feature(xx1.ravel(), xx2.ravel()).dot(res2.x))
h = h.reshape(xx1.shape)
axes.flatten()[i].contour(xx1, xx2, h, [0.5], linewidths=1, colors='g');
axes.flatten()[i].set_title('Train accuracy {}% with Lambda = {}'.format(np.round(accuracy, decimals=2), C))
Explanation: 1.3 Visualizing J(θ)
To better understand the cost function, in this part of the assignment you will plot the cost over
a two-dimensional grid of θ0 and θ1 values. To do so, use your implementation of the computarCusto function.
The code you implement must generate a two-dimensional array of J(θ) values, with θ0 ranging over [-10, +10] and θ1 over [-1, +4]. Use an increment of 0.01 to generate the θ0 and θ1 values.
Then, using matplotlib.pyplot.contour from the matplotlib library, produce a contour plot. Also
using matplotlib, create a surface plot of J(θ).
End of explanation
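The grid of J(θ) values described above is not computed in the cells that follow, so here is a minimal sketch; it assumes the univariate design matrix X (with its column of ones), the targets y and the computarCusto function from the first part of the assignment, none of which are shown in this excerpt:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 -- enables the 3D projection

# The assignment asks for a 0.01 increment; 0.1 keeps this demo fast.
theta0_vals = np.arange(-10, 10, 0.1)
theta1_vals = np.arange(-1, 4, 0.1)
J_vals = np.zeros((theta0_vals.size, theta1_vals.size))
for i, t0 in enumerate(theta0_vals):
    for j, t1 in enumerate(theta1_vals):
        J_vals[i, j] = computarCusto(X, y, np.array([[t0], [t1]]))

# Contour plot (logarithmic levels make the bowl shape visible)
plt.contour(theta0_vals, theta1_vals, J_vals.T, levels=np.logspace(-2, 3, 20))
plt.xlabel(r'$\theta_0$')
plt.ylabel(r'$\theta_1$')
plt.show()

# Surface plot of the same grid
T0, T1 = np.meshgrid(theta0_vals, theta1_vals)
ax = plt.figure().add_subplot(111, projection='3d')
ax.plot_surface(T0, T1, J_vals.T, cmap='viridis')
ax.set_xlabel(r'$\theta_0$')
ax.set_ylabel(r'$\theta_1$')
ax.set_zlabel(r'$J(\theta)$')
plt.show()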
# Carregando o dataset na forma de txt, porque tem melhor desempenho p/ representação gráfica e
# principalmente na vertorização, se comparado ao Dataframe, utilizado na primeira seção deste trabalho.
data = np.loadtxt('am-T1-dados/ex1data2.txt', delimiter=',')
data
# Inicializando visualização gráfica
plano = plt.figure()
plano = plano.add_subplot(111, projection='3d')
# Criação dos pontos ('x' vermelhos) para o gráfico
for c, m in [('red', 'x')]:
tamanho = data[:, 0]
quartos = data[:, 1]
valor = data[:, 2]
plano.scatter(tamanho, quartos, valor, c=c, marker=m)
# Plotando o Gráfico
plano.set_xlabel('Tamanho')
plano.set_ylabel('Nº de Quartos')
plano.set_zlabel('Valor do Imóvel')
show()
Explanation: 2. Linear Regression with Multiple Variables
In this part of the assignment, you will implement linear regression with multiple variables to predict
the sale price of houses. The file ex1data2.txt contains information about house prices.
The first column corresponds to the size of the house (in square feet), the second column to the
number of bedrooms, and the third column to the price of the house.
Data visualization
End of explanation
def normalizarCaracteristica(X):
'''
Retorna uma versão normalizada de X onde o valor médio de cada característica é 0
e o desvio padrão é 1. Este é frequentemente um bom passo de pré-processamento
a ser feito ao trabalhar com algoritmos de aprendizado.
'''
valormedio = []
desviopadrao = []
X_Normalizado = X
for i in range(X.shape[1]):
m = np.mean(X[:, i])
s = np.std(X[:, i])
valormedio.append(m)
desviopadrao.append(s)
X_Normalizado[:, i] = (X_Normalizado[:, i] - m) / s
return X_Normalizado, valormedio, desviopadrao
Explanation: 2.1 Feature normalization
If you inspect the values of the supplied dataset, you will notice that house sizes are
roughly 1000 times larger than the values found in the number-of-bedrooms column.
Your task in this part is to implement a function named normalizarCaracteristica in a file named
normalizarCaracteristica.py. This function must:
subtract the mean value of every feature of the dataset;
after subtracting the mean, divide each feature by its respective standard deviation.
Your normalizarCaracteristica function must take the data matrix X as a parameter (as a numpy array). It must also work with datasets of any size (any number
of features / examples). Note that each column of the data matrix X passed to normalizarCaracteristica corresponds to one feature.
Implementation note: when normalizing the features, it is important to store the values used for the normalization - the mean and the standard deviation. After learning the model parameters, we often want to predict the prices of houses we have not seen before. Given a new value x (living area and number of bedrooms), we must normalize x using the mean and standard deviation previously
computed from the training set.
End of explanation
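A short usage sketch of why the returned mean and standard deviation must be kept: an unseen example has to be normalized with the training statistics before prediction (the 1650 sq ft / 3 bedrooms example below is purely illustrative):
import numpy as np

X_raw = data[:, :2]
X_norm, media, desviopadrao = normalizarCaracteristica(X_raw.copy())  # copy: the function modifies X in place

# Hypothetical new house: 1650 square feet, 3 bedrooms
exemplo = np.array([1650.0, 3.0])
exemplo_norm = (exemplo - np.array(media)) / np.array(desviopadrao)
exemplo_com_bias = np.hstack(([1.0], exemplo_norm))
print(exemplo_com_bias)
# Once theta has been learned in the next section: preco = exemplo_com_bias.dot(theta)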
def computarCustoMulti(X, y, theta):
'''
Função que computa o custo para Regressão Linear com múltiplas variáveis.
'''
# Número do conjunto de treinamento
m = y.size
J = (1 / (2 * m)) * (X.dot(theta) - y).T.dot(X.dot(theta) - y)
return J
def gradienteDescendenteMulti(X, y, theta, alpha, iteracoes):
'''
Essa função calcula o gradiente descendente conforme o Theta, e com
etapas de iteracoes gradiente mediante a taxa de aprendizado em Alpha.
'''
m = y.size
J = np.zeros(shape=(iteracoes, 1))
for i in range(iteracoes):
hipotese = X.dot(theta)
for it in range(theta.size):
temp = X[:, it]
temp.shape = (m, 1)
errors_x1 = (hipotese - y) * temp
theta[it][0] = theta[it][0] - alpha * (1.0 / m) * errors_x1.sum()
J[i, 0] = computarCustoMulti(X, y, theta)
return J
X = data[:, :2]
y = data[:, 2]
# Tamanho do conjunto de treinamento
m = y.size
y.shape = (m,1)
# Normalizando X, obtendo Média e Desvio-padrão
x, media, desviopadrao = normalizarCaracteristica(X)
# Adicionando uma coluna de 1's ao novo X
Xnovo = np.ones(shape=(m, 3))
Xnovo[:, 1:3] = x
# Atributos para a função GradienteDescendenteMulti
iteracao = 100 # Número de repeticões p/ o algoritmo
alpha = 0.01 # Taxa de aprendizado
# Inicializando o Theta p/ execução da função GradienteDescendenteMulti
theta = np.zeros(shape=(3, 1))
J = gradienteDescendenteMulti(Xnovo, y, theta, alpha, iteracao)
plot(np.arange(iteracao), J)
xlabel('Iteracões')
ylabel('Função de Custo')
show()
Explanation: 2.2 Gradient descent
Previously, you implemented gradient descent for univariate linear regression. The only difference now is that there is more than one feature in the data matrix X. The hypothesis function h(x) and the batch gradient update remain unchanged. You must implement code in files named computarCustoMulti.py and gdmulti.py for the cost function and the gradient descent algorithm for multivariate linear regression, respectively. If your code from the previous (single-variable) part already supports multiple variables, you may also reuse it here. Make sure your code supports any number of features and is well vectorized.
End of explanation
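The per-component inner loop above can be replaced by a fully vectorized update; a minimal sketch that should follow the same trajectory for any number of features (it reuses computarCustoMulti from the previous cell):
import numpy as np

def gradienteDescendenteVetorizado(X, y, theta, alpha, iteracoes):
    m = y.size
    J_hist = np.zeros((iteracoes, 1))
    for i in range(iteracoes):
        gradiente = (1.0 / m) * X.T.dot(X.dot(theta) - y)  # shape (n, 1)
        theta = theta - alpha * gradiente                  # simultaneous update of all parameters
        J_hist[i, 0] = computarCustoMulti(X, y, theta)
    return theta, J_hist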
# carregando os dados
data = np.loadtxt('am-T1-dados/ex2data1.txt', delimiter=',', usecols=(0,1,2), unpack=True)
# Transportando matriz
X = np.transpose(np.array(data[:2]))
y = np.transpose(np.array(data[2:]))
# Tamanho do conjunto de treinamento
m = y.size
# Adicionando uma coluna de 1's ao novo X
X = np.insert(X,0,1,axis=1)
# Classificando a amostra em Positiva (data[:, 2]=1) e Negativa(data[:, 2]=0)
X_Admitted = np.array([X[i] for i in range(X.shape[0]) if y[i] == 1])
X_Nadmitted = np.array([X[i] for i in range(X.shape[0]) if y[i] == 0])
plt.figure(figsize=(16,8))
plt.plot(X_Admitted[:, 1], X_Admitted[:, 2],'k+',label='Admitted')
plt.plot(X_Nadmitted[:, 1], X_Nadmitted[:, 2],'yo',label='Not admitted')
plt.title('Gráfico Decision Boundary p/ admissão de candidato', fontsize=18, fontweight='bold')
plt.xlabel('Exam 1 Score', fontweight='bold')
plt.ylabel('Exam 2 Score', fontweight='bold')
plt.legend()
plt.grid(False)
plt.show()
Explanation: 3. Logistic Regression
In this part of the assignment, you will implement logistic regression. In particular,
you will build a classifier to predict whether a student will be admitted to a
university, based on the results of two exams. Suppose that historical data about
past editions of these exams is available, and that this historical data can be
used as a training set. For each example in this training set, we have the scores
of the two exams and the decision about the candidate (admitted or rejected).
Your task is to build a classification model that estimates the probability of a
candidate being admitted, based on the scores obtained in the two exams.
The file ex2data1.txt contains the data to be used in this part of the assignment.
3.1 Data visualization
Before starting to implement any learning algorithm, it is a good idea to
visualize the data whenever possible. In this part of the assignment, you must
load the file with the training set and plot the data points. The result of this
task should be a figure similar to the one shown below.
End of explanation
def sigmoid(x):
'''
A função sigmoid
'''
g = np.array([x]).flatten()
s = 1 / (1 + np.exp(-g))
return s
print('\t ########################### Teste para função sigmoid(0) ###########################\n')
print('\t O Valor da Sigmoid(0) é', sigmoid(0))
print('\t O Valor da Sigmoid([0,1,2,3000]) é', sigmoid(np.array([0,1,2,3000])))
print('\t ######################################################################################\n')
# Exibindo o gráfico da função Sigmoid
X_teste = np.arange(-6,6,.5)
plt.plot(X_teste, sigmoid(X_teste))
plt.title("Função Sigmoid", fontsize=18, fontweight='bold')
plt.grid(True)
plt.show()
Explanation: 3.2 Implementation
3.2.1 Sigmoid function
As a first step in this part, implement the Python function that computes the
value of the sigmoid function. Define this function in a file named
sigmoide.py, so that it can be called from other parts of your code.
Once your implementation is done, you can check its correctness:
sigmoide(0) must return 0.5.
For very large positive (or negative) values, it should return a value very close to 1 (or to 0).
Your code must also work with vectors (i.e., it must be vectorized). In particular, if
a matrix is passed, your code must apply the sigmoid function to every component.
End of explanation
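One caveat: the implementation above flattens its argument, so a matrix comes back as a 1-D vector rather than keeping its shape. A shape-preserving variant (a small sketch) and a quick check:
import numpy as np

def sigmoid_matricial(z):
    z = np.asarray(z, dtype=float)
    return 1.0 / (1.0 + np.exp(-z))   # element-wise, preserves the input shape

M = np.array([[0.0, 1.0], [-1.0, 2.0]])
print(sigmoid_matricial(M))           # 2x2 matrix of values in (0, 1)
print(sigmoid_matricial(M).shape)     # (2, 2), unlike sigmoid(M) above, which returns shape (4,)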
def custoJ(theta, X, y):
'''
A função custoJ retorna o valor de função de custo:
X é uma matrix com n-colunas e m-linhas
y é um vetor com m-linhas
theta é um vetor n-dimensional
Obs.: Será utilizada para facilitar o cálculo de minimização.
'''
m = len(y)
H = sigmoid(X.dot(theta).T)
J = -np.sum( y* np.log(H) + (1-y) * np.log(1-H))/m
return J
def custoRegressaoLogistica(theta, X, y):
'''
A função funcaoCustoRegressaoLogistica retorna o valor de função de custo e o gradiente.
Returna J, gradiente:
X é uma matrix com n-colunas e m-linhas
y é um vetor com m-linhas
theta é um vetor n-dimensional
'''
# Calcula o Custo
m = len(y)
H = sigmoid(X.dot(theta).T)
J = -np.sum( y* np.log(H) + (1-y) * np.log(1-H))/m
# Calcula o Gradiente
erro = H-y
gradiente = []
for i in range(len(X.columns)):
gradiente.append(np.sum(erro*(X.iloc[:,i]))/m)
return J, gradiente
Explanation: 3.2.2 Cost function and gradient
Now you must implement the cost function for logistic regression.
This function must return the value of the cost function and the gradient. Implement
this code in a file named funcaoCustoRegressaoLogistica.py.
Remember that the gradient is a vector with the same number of elements as θ.
Once you have implemented this function, call it using the initial value of θ
(all zeros). You should confirm that the resulting value is approximately 0.693.
End of explanation
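A quick standalone check of the 0.693 claim (the two training rows below are made up): with θ = 0 every prediction is 0.5, so the cross-entropy cost equals log 2 ≈ 0.693 regardless of the data.
import numpy as np

theta_zero = np.zeros(3)
X_demo = np.array([[1.0, 45.0, 85.0],    # bias term + two exam scores (illustrative values)
                   [1.0, 30.0, 40.0]])
y_demo = np.array([1.0, 0.0])
H = sigmoid(X_demo.dot(theta_zero))      # all entries are 0.5 when theta = 0
custo = -np.mean(y_demo * np.log(H) + (1 - y_demo) * np.log(1 - H))
print(custo, np.log(2))                  # both approximately 0.693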
from scipy import optimize
def minimizar(theta, X, y):
'''
A função minimizar J(ø) com relação ao vetor de parâmetros ø
'''
minimo = optimize.fmin(func=custoJ, x0=theta, args=(X, y), maxiter=1000, full_output=True)
return minimo[0], minimo[1]
# Utilizando uma segunda estrutura de dados
import pandas as pd
dfQA = pd.read_csv('am-T2-dados/ex2data2.txt', names=['Exame1', 'Exame2', 'Admissao'])
X = dfQA.iloc[:, :2]
y = dfQA.iloc[:, 2]
X.head()
#add a column of ones to the feature matrix X to account for theta 0
m = len(y)
X.insert(0, "theta0",value=pd.Series(np.ones([m])))
X.head()
# Apresentando uma visualização gráfica com base na classificação
from seaborn import lmplot
import matplotlib.pyplot as plt
g = lmplot("Exame1", "Exame2", hue="Admissao", data=dfQA, fit_reg=True, palette = "dark", markers = ["o","x"], legend = True)
plt.xlabel("Exame 1 Score")
plt.ylabel("Exame 2 Score")
plot_x = np.array([min(X.iloc[:,2])-2, max(X.iloc[:,2])+2])
plt.ylim(30,100)
plt.show()
theta0 = np.zeros([X.shape[1], 1])
hypothesis = sigmoid(X.dot(theta0).T)
print(hypothesis)
print(custoRegressaoLogistica(theta0, X, y))
#
print(custoJ(theta0, X, y))
# Avaliando a minimização
X_min, theta_min = minimizar(theta0, X, y)
print(X_min, theta_min)
Explanation: 3.2.3 Learning the parameters
For logistic regression, the goal is to minimize J(θ) with respect to the parameter
vector θ. In this part you must therefore implement a Python function that finds the
vector θ minimizing the cost function. Use the funcaoCustoRegressaoLogistica
function you implemented previously.
End of explanation
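An alternative sketch that also hands the analytic gradient to the optimizer: since custoRegressaoLogistica returns the pair (cost, gradient), it can be passed to scipy.optimize.minimize with jac=True (TNC is just one method that accepts a Jacobian). It assumes the DataFrame X with its column of ones and the Series y built in the cells above.
import numpy as np
from scipy import optimize

theta_inicial = np.zeros(X.shape[1])
res = optimize.minimize(custoRegressaoLogistica, theta_inicial, args=(X, y),
                        jac=True, method='TNC', options={'maxiter': 400})
print(res.x)    # parameters minimizing J
print(res.fun)  # final value of the cost function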
def predizer(theta, X):
P = sigmoid(X.dot(theta))
return (P >= 0.5).astype(int)
# Predição de Admissão do candidato
A = np.array([1,45,85])
H = sigmoid(A.dot(X_min))
# Efetuando predição p/ Admissão de um candidato com notas 45 e 85 na primeira e segunda avaliações
print('\t ###################################### ALUNO TESTE #################################\n')
print('\t Para as notas 45 e 85 no 1º e 2º Exame prevê a probabilidade de admissão de %f' % H)
print('\t ######################################################################################\n')
Explanation: 3.2.4 Evaluating the model
After learning the parameters, you can use the resulting model to predict whether
a given candidate will be admitted. For a candidate with scores of 45 and 85 on the
first and second exams, respectively, you should expect an admission probability
of about 77.6%.
Another way to assess the quality of the parameters is to check how well the
learned model predicts the data points of the training set. In this part you must
implement a function named predizer. Given an example from the training set and
the parameter vector θ, this function must output 0 or 1. Use it to compute the
percentage of correct predictions of your classifier on the training set.
End of explanation
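A short sketch of the training-accuracy check described above, reusing the predizer function and the optimal parameters X_min returned by minimizar in the earlier cells (despite its name, X_min holds the minimizing θ):
import numpy as np

previsoes = predizer(np.array(X_min), np.array(X))
acuracia = 100.0 * np.mean(previsoes.ravel() == np.array(y).ravel())
print('Training accuracy: %.1f%%' % acuracia)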
import numpy as np

class RegularizedLinearRegression:
    """Linear regression trained by gradient descent with an L2 (ridge) penalty.

    The methods below were clearly written as methods of such a class (they use
    self.theta_0, self.theta_n and self.loss), so they are wrapped here to make
    the fragment runnable.
    """

    def __init__(self, n_features):
        self.theta_0 = 0.0                        # bias term
        self.theta_n = np.zeros((n_features, 1))  # weight vector
        self.loss = []                            # cost history

    # regularized cost function
    def loss_function(self, gH, theta, Lambda, m):
        J = 1 / (2 * m) * np.sum(gH ** 2)
        return J + Lambda / (2 * m) * np.sum(theta ** 2)

    def prints(self, epoch):
        print("--epoch %s:" % epoch)
        print("loss:", self.loss[epoch])
        print("theta:", self.theta_0, self.theta_n.ravel())

    def gradient_descent(self, epochs, X, Y, learning_rate, Lambda, m, print_results=False):
        for i in range(epochs):
            # hypothesis
            H = np.dot(self.theta_n.T, X) + self.theta_0
            # gradients
            gH = H - Y
            gTheta_n = np.dot(X, gH.T) / m
            gTheta_0 = np.sum(gH) / m
            # cost before the update
            self.loss.append(self.loss_function(gH, self.theta_n, Lambda, m))
            # weight update (the L2 term shrinks theta_n, but not the bias)
            self.theta_0 -= learning_rate * gTheta_0
            self.theta_n = self.theta_n * (1 - learning_rate * Lambda / m) - learning_rate * gTheta_n
            if print_results:
                self.prints(i)
        # final cost after the last update
        gH = np.dot(self.theta_n.T, X) + self.theta_0 - Y
        self.loss.append(self.loss_function(gH, self.theta_n, Lambda, m))
Explanation: 2 - L2 Regularization
The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{L}\right) + (1-y^{(i)})\log\left(1- a^{L}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{L}\right) + (1-y^{(i)})\log\left(1- a^{L}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
Exercise: Implement compute_cost_with_regularization() which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
python
np.sum(np.square(Wl))
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
End of explanation |
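A minimal sketch of the requested function, assuming a parameters dictionary holding W1, b1, ..., W3, b3 and an existing compute_cost helper for the cross-entropy term (both come from the exercise's starter code, which is not shown here):
import numpy as np

def compute_cost_with_regularization(A3, Y, parameters, lambd):
    """Cross-entropy cost of formula (1) plus the L2 penalty of formula (2)."""
    m = Y.shape[1]
    W1, W2, W3 = parameters["W1"], parameters["W2"], parameters["W3"]

    cross_entropy_cost = compute_cost(A3, Y)  # assumed to implement formula (1)
    L2_regularization_cost = (lambd / (2 * m)) * (
        np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))

    return cross_entropy_cost + L2_regularization_cost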
1,899 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Une exploration visuelle de l'algorithme du Simplexe en 3D avec Python
Dans ce notebook (utilisant Python 3), je souhaite montrer des animations de l'algorithme du Simplexe, un peu comme dans la vidéo suivante
Step1: Dépendances
On a sûrement besoin de Numpy et Matplotlib
Step2: On a besoin de la fonction scipy.optimize.linprog(method="simplex") du module scipy.optimize
Step3: On a aussi besoin de la fonction IPython.display.Latex pour facilement afficher du code LaTeX généré depuis nos cellules Python
Step4: Par exemple
Step5: On va avoir besoin des widgets IPywidgets, plus tard
Step6: Et enfin, de l'extension itikz
Step7: Première expérience
Déjà, je vais écrire le problème étudié comme un dictionnaire, que l'on pourra passer à scipy.optimize.linprog(method="simplex")
Step8: Puis une petite fonction qui s'occupe de prendre ce dictionnaire et le donner à scipy.optimize.linprog(method="simplex")
Step9: On va déjà vérifier que l'on peut résoudre ces deux exemples de problème de programmation linéaire
Step10: C'est bien la solution $x^* = [0, 300, 100]$, avec un objectif valant $+3100$, qui était trouvée dans la vidéo !
Et si on ajoute un callback ?
Step11: Afficher un système d'équation en LaTeX
Step12: On a donc récupéré un certain nombre d'objets résultat intermédiaire d'optimisation
Step15: En fait, je me rends compte que les informations données par ces results successifs ne sont pas suffisantes pour afficher des équations comme dans la vidéo.
Implémentation maison du Simplexe en dimension 3
Exemples
Suite des expérimentations
On va écrire une fonction qui produit du code LaTeX représentant ce système d'optimisation, au cours des réécritures qu'il subit
Step16: TODO
Step17: Allez on essaie
Step18: Ajouter des figures TikZ
Avec itikz
Step19: Par exemple on peut afficher un premier exemple, avant de chercher à les faire bouger
Step24: Maintenant on peut chercher à contrôler la position du point objectif actuel
Step25: Et en rendant cela interactif, on peut jouer avec ça.
<span style="color | Python Code:
from IPython.display import YouTubeVideo
# https://www.youtube.com/watch?v=W_U8ozVsh8s
YouTubeVideo("W_U8ozVsh8s", width=944, height=531)
Explanation: A visual exploration of the Simplex algorithm in 3D with Python
In this notebook (using Python 3), I want to show animations of the Simplex algorithm, somewhat like in the following video:
<iframe width="500" height="250" src="https://www.youtube.com/embed/W_U8ozVsh8s" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
I would like to write a small piece of Python code that performs the following steps:
it is given the linear program to solve (possibly in the form accepted by lp_solve);
it solves the problem with scipy.optimize.linprog(method="simplex"), and stops if the simplex finds no solution;
it then uses the callback of this function to display LaTeX equations representing the evolution of the system and of the basic and non-basic variables;
I would like a step-by-step animation, with a simple "slider" using the interact widget;
bonus: display a 3D figure, with TikZ?
This document will not be:
a home-made implementation of the simplex algorithm: that is too long and I do not have the time right now;
an explanation of the simplex algorithm: for that, see the ALGO2 lecture notes and the Wikipedia page on the Simplex algorithm;
probably not cleanly exportable to static HTML;
nor cleanly exportable to PDF.
About
Author: Lilian Besson
License: MIT
Date: 09/02/2021
Course: ALGO2 @ ENS Rennes
Explanatory video
Watch this video.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
Explanation: Dependencies
We will certainly need Numpy and Matplotlib:
End of explanation
from scipy.optimize import linprog
Explanation: We need the scipy.optimize.linprog(method="simplex") function from the scipy.optimize module:
End of explanation
from IPython.display import Latex, display
Explanation: We also need the IPython.display.Latex function to easily display LaTeX code generated from our Python cells:
End of explanation
def display_cos_power(power=1):
return display(Latex(fr"$$\cos(x)^{power} = 0$$"))
for power in range(1, 5):
display_cos_power(power)
Explanation: For example:
End of explanation
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
interactive(display_cos_power,
power=(1, 10, 1)
)
Explanation: We will need the IPywidgets widgets later:
End of explanation
%load_ext itikz
Explanation: And finally, the itikz extension
End of explanation
# Objective Function: 50x_1 + 80x_2
# Constraint 1: 5x_1 + 2x_2 <= 20
# Constraint 2: -10x_1 + -12x_2 <= -90
problem1 = {
# Cost function: 50x_1 + 80x_2
"cost": [50, 80],
# Coefficients for inequalities
"A_ub": [[5, 2], [-10, -12]],
# Constraints for inequalities: 20 and -90
"b_ub": [20, -90],
# Bounds on x, 0 <= x_i <= +oo by default
"bounds": (0, None),
}
# Objective Function: maximize x_1 + 6*x_2 + 13*x_3
# => so cost will be opposite
# Constraint 1: x1 <= 200
# Constraint 2: x2 <= 300
# Constraint 3: x1+x2+x3 <= 400
# Constraint 2: x2+3x3 <= 600
problem2 = {
# Cost function: minimize -1*x_1 + -6*x_2 + -13*x_3
"cost": [-1, -6, -13],
# Coefficients for inequalities
"A_ub": [
[1, 0, 0],
[0, 1, 0],
[1, 1, 1],
[0, 1, 3],
],
# Constraints for inequalities:
"b_ub": [200, 300, 400, 600],
# Bounds on x, 0 <= x_i <= +oo by default
"bounds": (0, None),
}
Explanation: First experiment
First, I will write each problem under study as a dictionary that can be passed to scipy.optimize.linprog(method="simplex"):
End of explanation
def linprog_wrapper(problem, **kwargs):
result = linprog(
problem["cost"],
A_ub=problem["A_ub"],
b_ub=problem["b_ub"],
bounds=problem["bounds"],
method="simplex",
**kwargs
)
return result
Explanation: Then a small function that takes this dictionary and feeds it to scipy.optimize.linprog(method="simplex"):
End of explanation
linprog_wrapper(problem1)
linprog_wrapper(problem2)
Explanation: Let us first check that we can solve these two example linear programming problems:
End of explanation
def round(np_array):
res = np.array(np.round(np_array), dtype=int)
if res.size > 1:
return list(res)
else:
return res
def dummy_callback(r):
print(f"\n- Itération #{r['nit']}, phase {r['phase']} :")
fun = round(r['fun'])
print(f" Valeur objectif = {fun}")
slack = round(r['slack'])
print(f" Variables d'écart = {slack}")
x = round(r['x'])
print(f" Variables objectif = {x}")
# print(r)
linprog_wrapper(problem2, callback=dummy_callback)
Explanation: This is indeed the solution $x^* = [0, 300, 100]$, with an objective value of $+3100$, that was found in the video!
What if we add a callback?
End of explanation
step_by_step_results = []
step_by_step_nitphase = []
def print_and_store_callback(r):
global step_by_step_results, step_by_step_nitphase
nit, phase = r['nit'], r['phase']
print(f"\n- Itération #{nit}, phase {phase} :")
fun = round(r['fun'])
print(f" Valeur objectif = {fun}")
slack = round(r['slack'])
print(f" Variables d'écart = {slack}")
x = round(r['x'])
print(f" Variables objectif = {x}")
if (nit, phase) not in step_by_step_nitphase:
step_by_step_results.append(r)
step_by_step_nitphase.append((nit, phase))
step_by_step_results = []
result_final = linprog_wrapper(problem2, callback=print_and_store_callback)
print(result_final)
step_by_step_results.append(result_final)
Explanation: Displaying a system of equations in LaTeX
End of explanation
len(step_by_step_results)
Explanation: We have thus collected a number of intermediate optimization result objects:
End of explanation
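A small sketch of the step-by-step slider mentioned in the introduction, built on the results stored by the callback and on the round helper defined above; it can only show the quantities exposed by linprog at each iteration (x, the objective value and the slacks):
def show_step(step=0):
    r = step_by_step_results[step]
    display(Latex(
        r"$x = " + str(round(r['x'])) +
        r", \quad c^T x = " + str(round(r['fun'])) +
        r", \quad \text{slacks} = " + str(round(r['slack'])) + r"$"
    ))

interact(show_step, step=(0, len(step_by_step_results) - 1))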
def equation_latex_from_step(result):
    # Minimal sketch: the intermediate `result` objects only expose x, fun and
    # slack, so we can only render the current point and objective value, not
    # the full rewritten system of the simplex dictionary as in the video.
    x = ", ".join(str(v) for v in round(result['x']))
    return (r"\text{Maximize } f(x) \quad\text{at}\quad x = (" + x + r"), "
            r"\quad c^T x = " + str(round(result['fun'])))
Explanation: In fact, I realize that the information provided by these successive results is not sufficient to display equations as in the video.
Home-made implementation of the Simplex in dimension 3
Examples
Continuing the experiments
We will write a function that produces LaTeX code representing this optimization system as it goes through its successive rewritings:
End of explanation
def interactive_latex_exploration(problem):
    # Sketch only: the original cell referenced a still-unwritten helper
    # `make_show_latex` (see the TODO below); here we solve the problem with
    # linprog_wrapper and display the limited LaTeX summary available.
    problem_solved = linprog_wrapper(problem)
    if problem_solved.status != 0:
        print("Error: problem was not solved correctly, stopping here...")
        return None
    max_step = problem_solved.nit
    interactive_function = lambda step: display(Latex(equation_latex_from_step(problem_solved)))
    return interact(interactive_function, step=(0, max_step))
Explanation: TODO: finish this!
Adding interactivity
End of explanation
interactive_latex_exploration(problem2)  # `problem` was undefined; problem2 is the 3-variable example above
Explanation: Let's try it:
End of explanation
%load_ext itikz
Explanation: Adding TikZ figures
With itikz
End of explanation
%%itikz --temp-dir --file-prefix simplex-example-
\documentclass[tikz]{standalone}
\usepackage{amsfonts}
\begin{document}
% from http://people.irisa.fr/Francois.Schwarzentruber/algo2/ notes
\usetikzlibrary{arrows,patterns,topaths,shadows,shapes,positioning}
\begin{tikzpicture}[scale=0.012, opacity=0.7]
\tikzstyle{point} = [fill=red, circle, inner sep=0.8mm];
\draw[->] (0, 0, 0) -- (300, 0, 0) node[right] {a};
\draw[->] (0, 0, 0) -- (0, 350, 0) node[above] {b};
\draw[->] (0, 0, 0) -- (0, 0, 300) node[below] {c};
\coordinate (O) at (0,0,0);
\coordinate (D) at (200,0,0);
\coordinate (E) at (200, 0, 200);
\coordinate (F) at (0, 0, 200);
\coordinate (G) at (0, 300,0);
\coordinate (C) at (200,200,0);
\coordinate (A) at (100,300, 0);
\coordinate (B) at (0,300, 100);
\draw[fill=blue!20] (O) -- (D) -- (E) -- (F) -- (O) -- cycle;
\draw[fill=blue!20] (D) -- (C) -- (E) -- cycle;
\draw[fill=blue!20] (G) -- (B) -- (F) -- (O) -- cycle;
\draw[fill=blue!20] (B) -- (A) -- (C) --(E) -- cycle;
\draw[fill=blue!20] (B) -- (F) -- (E) -- cycle;
\draw[fill=blue!20] (B) -- (A) -- (G) -- cycle;
\node[point] at (0,0,0) {}; % TODO make this argument of function
\end{tikzpicture}
\end{document}
Explanation: For example, we can display a first figure before trying to make it move:
End of explanation
simplex_example_str = ""
def default_cost(a, b, c):
    """1*{a} + 6*{b} + 13*{c}"""   # the docstring is used below via cost.__doc__.format(...)
    return 1*a + 6*b + 13*c
def show_tikz_figure_with_point(a=0, b=0, c=0, cost=default_cost):
    # TODO generate nicer LaTeX equations
    if cost:
        current_cost = cost(a, b, c)
        cost_doc = cost.__doc__.format(a=a, b=b, c=c)
        print(f"Cost = {cost_doc} = {current_cost}")
        equation_latex = f"Cost $f(a,b,c) = {cost_doc} = {current_cost}$."
        display(Latex(equation_latex))
    # now the TikZ figure, with the current point (a, b, c) highlighted
    global simplex_example_str
    simplex_example_str = r"""
\documentclass[tikz]{standalone}
\begin{document}
% from http://people.irisa.fr/Francois.Schwarzentruber/algo2/ notes
\usetikzlibrary{arrows,patterns,topaths,shadows,shapes,positioning}
\begin{tikzpicture}[scale=0.016, opacity=0.7]
\tikzstyle{point} = [fill=red, circle, inner sep=0.8mm];
\draw[->] (0, 0, 0) -- (300, 0, 0) node[right] {a};
\draw[->] (0, 0, 0) -- (0, 350, 0) node[above] {b};
\draw[->] (0, 0, 0) -- (0, 0, 300) node[below] {c};
\coordinate (O) at (0,0,0);
\coordinate (D) at (200,0,0);
\coordinate (E) at (200, 0, 200);
\coordinate (F) at (0, 0, 200);
\coordinate (G) at (0, 300,0);
\coordinate (C) at (200,200,0);
\coordinate (A) at (100,300, 0);
\coordinate (B) at (0,300, 100);
\draw[fill=blue!20] (O) -- (D) -- (E) -- (F) -- (O) -- cycle;
\draw[fill=blue!20] (D) -- (C) -- (E) -- cycle;
\draw[fill=blue!20] (G) -- (B) -- (F) -- (O) -- cycle;
\draw[fill=blue!20] (B) -- (A) -- (C) --(E) -- cycle;
\draw[fill=blue!20] (B) -- (F) -- (E) -- cycle;
\draw[fill=blue!20] (B) -- (A) -- (G) -- cycle;
\node[point] at (""" + f"{a}, {b}, {c}" + r""") {};
\end{tikzpicture}
\end{document}
"""
    #print(simplex_example_str)
    # run the %itikz line magic on the string built above
    return get_ipython().run_line_magic(
        "itikz", "--temp-dir --file-prefix simplex-example- simplex_example_str"
    )
show_tikz_figure_with_point(0, 0, 0)
Explanation: Now we can try to control the position of the current objective point:
a, b, c will stand for $x_1, x_2, x_3$.
End of explanation
interact(
show_tikz_figure_with_point,
a = (-100, 300, 10),
b = (-100, 300, 10),
c = (-100, 300, 10),
cost = fixed(default_cost)
)
linprog_wrapper(problem2, callback=dummy_callback)
Explanation: And by making this interactive, we can play with it.
<span style="color:red;">WARNING: even if the widgets are present in a static version of this page (in HTML or on nbviewer.jupyter.org), the figure cannot be modified. If you want to experiment on your own, you must run the notebook locally in your own Jupyter, or with MyBinder by clicking one of the following buttons:</span>
End of explanation |