Given the following text description, write Python code to implement the functionality described below step by step Description: Sequence classification with LSTM Step1: We will treat the MNIST image $\in \mathcal{R}^{28 \times 28}$ as $28$ sequences of a vector $\mathbf{x} \in \mathcal{R}^{28}$. Our simple RNN consists of One input layer which converts a $28$ dimensional input to a $128$ dimensional hidden layer, One intermediate recurrent neural network (LSTM) One output layer which converts a $128$ dimensional output of the LSTM to a $10$ dimensional output indicating a class label. <img src="images/etc/rnn_input3.jpg" width="700" height="400" > Construct a Recurrent Neural Network Step2: Our network looks like this <img src="images/etc/rnn_mnist_look.jpg" width="700" height="400" > Define functions Step3: Run! Step4: What we have done so far is to feed 28 sequences of vectors $ \mathbf{x} \in \mathcal{R}^{28}$. What will happen if we feed only the first 25 sequences of $\mathbf{x}$? Step5: What's going on inside the RNN? Inputs to the RNN Step6: Reshaped inputs Step7: Feeds Step8: Each individual input to the LSTM Step9: Each individual intermediate state Step10: Actual input to the LSTM (List) Step11: Output from the LSTM (List) Step12: Final prediction
Python Code: import tensorflow as tf import tensorflow.examples.tutorials.mnist.input_data as input_data import numpy as np import matplotlib.pyplot as plt %matplotlib inline print ("Packages imported") mnist = input_data.read_data_sets("data/", one_hot=True) trainimgs, trainlabels, testimgs, testlabels \ = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels ntrain, ntest, dim, nclasses \ = trainimgs.shape[0], testimgs.shape[0], trainimgs.shape[1], trainlabels.shape[1] print ("MNIST loaded") Explanation: Sequence classification with LSTM End of explanation diminput = 28 dimhidden = 128 dimoutput = nclasses nsteps = 28 weights = { 'hidden': tf.Variable(tf.random_normal([diminput, dimhidden])), 'out': tf.Variable(tf.random_normal([dimhidden, dimoutput])) } biases = { 'hidden': tf.Variable(tf.random_normal([dimhidden])), 'out': tf.Variable(tf.random_normal([dimoutput])) } def _RNN(_X, _istate, _W, _b, _nsteps, _name): # 1. Permute input from [batchsize, nsteps, diminput] # => [nsteps, batchsize, diminput] _X = tf.transpose(_X, [1, 0, 2]) # 2. Reshape input to [nsteps*batchsize, diminput] _X = tf.reshape(_X, [-1, diminput]) # 3. Input layer => Hidden layer _H = tf.matmul(_X, _W['hidden']) + _b['hidden'] # 4. Splite data to 'nsteps' chunks. An i-th chunck indicates i-th batch data _Hsplit = tf.split(0, _nsteps, _H) # 5. Get LSTM's final output (_LSTM_O) and state (_LSTM_S) # Both _LSTM_O and _LSTM_S consist of 'batchsize' elements # Only _LSTM_O will be used to predict the output. with tf.variable_scope(_name): lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(dimhidden, forget_bias=1.0) _LSTM_O, _LSTM_S = tf.nn.rnn(lstm_cell, _Hsplit, initial_state=_istate) # 6. Output _O = tf.matmul(_LSTM_O[-1], _W['out']) + _b['out'] # Return! return { 'X': _X, 'H': _H, 'Hsplit': _Hsplit, 'LSTM_O': _LSTM_O, 'LSTM_S': _LSTM_S, 'O': _O } print ("Network ready") Explanation: We will treat the MNIST image $\in \mathcal{R}^{28 \times 28}$ as $28$ sequences of a vector $\mathbf{x} \in \mathcal{R}^{28}$. Our simple RNN consists of One input layer which converts a $28$ dimensional input to an $128$ dimensional hidden layer, One intermediate recurrent neural network (LSTM) One output layer which converts an $128$ dimensional output of the LSTM to $10$ dimensional output indicating a class label. <img src="images/etc/rnn_input3.jpg" width="700" height="400" > Contruct a Recurrent Neural Network End of explanation learning_rate = 0.001 x = tf.placeholder("float", [None, nsteps, diminput]) istate = tf.placeholder("float", [None, 2*dimhidden]) # state & cell => 2x n_hidden y = tf.placeholder("float", [None, dimoutput]) myrnn = _RNN(x, istate, weights, biases, nsteps, 'basic') pred = myrnn['O'] cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y)) optm = tf.train.AdamOptimizer(learning_rate).minimize(cost) # Adam Optimizer accr = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(pred,1), tf.argmax(y,1)), tf.float32)) init = tf.initialize_all_variables() print ("Network Ready!") Explanation: Out Network looks like this <img src="images/etc/rnn_mnist_look.jpg" width="700" height="400" > Define functions End of explanation training_epochs = 5 batch_size = 128 display_step = 1 sess = tf.Session() sess.run(init) summary_writer = tf.train.SummaryWriter('/tmp/tensorflow_logs', graph=sess.graph) print ("Start optimization") for epoch in range(training_epochs): avg_cost = 0. 
total_batch = int(mnist.train.num_examples/batch_size) # Loop over all batches for i in range(total_batch): batch_xs, batch_ys = mnist.train.next_batch(batch_size) batch_xs = batch_xs.reshape((batch_size, nsteps, diminput)) # Fit training using batch data feeds = {x: batch_xs, y: batch_ys, istate: np.zeros((batch_size, 2*dimhidden))} sess.run(optm, feed_dict=feeds) # Compute average loss avg_cost += sess.run(cost, feed_dict=feeds)/total_batch # Display logs per epoch step if epoch % display_step == 0: print ("Epoch: %03d/%03d cost: %.9f" % (epoch, training_epochs, avg_cost)) feeds = {x: batch_xs, y: batch_ys, istate: np.zeros((batch_size, 2*dimhidden))} train_acc = sess.run(accr, feed_dict=feeds) print (" Training accuracy: %.3f" % (train_acc)) testimgs = testimgs.reshape((ntest, nsteps, diminput)) feeds = {x: testimgs, y: testlabels, istate: np.zeros((ntest, 2*dimhidden))} test_acc = sess.run(accr, feed_dict=feeds) print (" Test accuracy: %.3f" % (test_acc)) print ("Optimization Finished.") Explanation: Run! End of explanation # How may sequences will we use? nsteps2 = 25 # Test with truncated inputs testimgs = testimgs.reshape((ntest, nsteps, diminput)) testimgs_trucated = np.zeros(testimgs.shape) testimgs_trucated[:, 28-nsteps2:] = testimgs[:, :nsteps2, :] feeds = {x: testimgs_trucated, y: testlabels, istate: np.zeros((ntest, 2*dimhidden))} test_acc = sess.run(accr, feed_dict=feeds) print (" If we use %d seqs, test accuracy becomes %.3f" % (nsteps2, test_acc)) Explanation: What we have done so far is to feed 28 sequences of vectors $ \mathbf{x} \in \mathcal{R}^{28}$. What will happen if we feed first 25 sequences of $\mathbf{x}$? End of explanation batch_size = 5 xtest, _ = mnist.test.next_batch(batch_size) print ("Shape of 'xtest' is %s" % (xtest.shape,)) Explanation: What's going on inside the RNN? Inputs to the RNN End of explanation # Reshape (this will go into the network) xtest1 = xtest.reshape((batch_size, nsteps, diminput)) print ("Shape of 'xtest1' is %s" % (xtest1.shape,)) Explanation: Reshaped inputs End of explanation feeds = {x: xtest1, istate: np.zeros((batch_size, 2*dimhidden))} Explanation: Feeds: inputs and initial states End of explanation rnnout_X = sess.run(myrnn['X'], feed_dict=feeds) print ("Shape of 'rnnout_X' is %s" % (rnnout_X.shape,)) Explanation: Each indivisual input to the LSTM End of explanation rnnout_H = sess.run(myrnn['H'], feed_dict=feeds) print ("Shape of 'rnnout_H' is %s" % (rnnout_H.shape,)) Explanation: Each indivisual intermediate state End of explanation rnnout_Hsplit = sess.run(myrnn['Hsplit'], feed_dict=feeds) print ("Type of 'rnnout_Hsplit' is %s" % (type(rnnout_Hsplit))) print ("Length of 'rnnout_Hsplit' is %s and the shape of each item is %s" % (len(rnnout_Hsplit), rnnout_Hsplit[0].shape)) Explanation: Actual input to the LSTM (List) End of explanation rnnout_LSTM_O = sess.run(myrnn['LSTM_O'], feed_dict=feeds) print ("Type of 'rnnout_LSTM_O' is %s" % (type(rnnout_LSTM_O))) print ("Length of 'rnnout_LSTM_O' is %s and the shape of each item is %s" % (len(rnnout_LSTM_O), rnnout_LSTM_O[0].shape)) Explanation: Output from the LSTM (List) End of explanation rnnout_O = sess.run(myrnn['O'], feed_dict=feeds) print ("Shape of 'rnnout_O' is %s" % (rnnout_O.shape,)) Explanation: Final prediction End of explanation
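The implementation above uses TensorFlow 1.x-era building blocks (tf.nn.rnn_cell.BasicLSTMCell, tf.nn.rnn, hand-managed weight dictionaries). Purely as a point of reference, and not as part of the original notebook, the same input-layer / LSTM / output-layer architecture can be sketched with the Keras API roughly as follows; the layer choices only mirror the dimensions described above (28 steps of 28 features, a 128-unit projection and LSTM, 10 output logits), so treat this as an assumption-laden sketch rather than a drop-in replacement.

import tensorflow as tf

# Rough Keras sketch of the architecture described above (not the notebook's TF 1.x code).
sketch_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),                        # 28 time steps of 28-dim vectors
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(128)),  # per-step input projection (28 -> 128)
    tf.keras.layers.LSTM(128),                                    # keep only the last hidden state
    tf.keras.layers.Dense(10),                                    # 10 class logits
])
sketch_model.compile(optimizer="adam",
                     loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
                     metrics=["accuracy"])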
Given the following text description, write Python code to implement the functionality described below step by step Description: Logistic Regression Step1: We need to define the sigmoid function $S(t) Step2: As we are using NumPy to compute $\exp(t)$, we can feed this function with a numpy array to compute the sigmoid function for every element of the array Step3: Next, we define the natural logarithm of the sigmoid function. If we implement this as log(sigmoid(t)) we will get overflow issues for negative values of $t$ such that $t < -1000$ as the expression np.exp(-t) will overflow. Step4: Let us compute $$ \ln\bigl(S(-1000)\bigr) = \ln\Bigl(\frac{1}{1 + \exp(1000)}\Bigr) = - \ln\bigl(1 + \exp(1000)\bigr) \approx -1000. $$ Step5: This is not what we expected. Step6: On the other hand, for $t < -100$ we have that $1 + \exp(-t) \approx \exp(-t)$ Step7: Therefore, if $t < -100$ we have Step8: Given a feature matrix X and a vector y of classification outputs, the log-likelihood function $\texttt{ll}(\textbf{X}, \textbf{y},\textbf{w})$ is mathematically defined as follows Step9: The function $\mathtt{gradLL}(\mathbf{x}, \mathbf{y}, \mathbf{w})$ computes the gradient of the log-likelihood according to the formula $$ \frac{\partial\quad}{\partial\, w_j}\ell\ell(\mathbf{X},\mathbf{y};\mathbf{w}) = \sum\limits_{i=1}^N y_i \cdot x_{i,j} \cdot S(-y_i \cdot \mathbf{x}_i \cdot \mathbf{w}). $$ The different components of this gradient are combined into a vector. The arguments are the same as the arguments to the function $\ell\ell$ that computes the log-likelihood, i.e. * $\textbf{X}$ is the feature matrix, $\textbf{X}[i]$ is the $i$-th feature vector. * $\textbf{y}$ is the output vector, $\textbf{y}[i] \in {-1,+1}$ for all $i$. * $\textbf{w}$ is the weight vector. Step10: The data we want to investigate is stored in the file 'exam.csv'. The first column of this file is an integer from the set ${0,1}$. The number is $0$ if the corresponding student has failed the exam and is $1$ otherwise. The second column is a floating point number that lists the number of hours that the student has studied. Step11: The file exam.csv contains fictional data about an exam. The first column contains the number 0 if the student has failed the exam and 1 otherwise. The second column contains the number of hours the student has studied for the given exam. Step12: To proceed, we will plot the data points. To this end we transform the lists Pass and Hours into numpy arrays. Step13: The number of students is stored in the variable n. Step14: We have to turn the vector x into the feature matrix X. Step15: We append the number $1.0$ in every row of X. Step16: Currently, the entries in the vector y are either $0$ or $1$. These values need to be transformed to $-1$ and $+1$. Step17: As we have no real clue about the weights, we set them to $0$ initially. Step18: Let us plot this function together with the data.
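To make the overflow discussion in Steps 3-7 concrete before the notebook code below, here is a small self-contained check (written for this description, not taken from the notebook): the naive ln(S(t)) collapses to -inf for very negative t, while the piecewise version simply returns t, as derived above.

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def log_sigmoid(t):
    # for t <= -100, 1 + exp(-t) is numerically indistinguishable from exp(-t),
    # so ln(S(t)) is approximated by t itself
    return -np.log(1.0 + np.exp(-t)) if t > -100 else t

with np.errstate(over='ignore', divide='ignore'):
    print(np.log(sigmoid(-1000.0)))   # -inf: exp(1000) overflows, so sigmoid underflows to 0
print(log_sigmoid(-1000.0))           # -1000.0: the stable approximation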
Python Code: import numpy as np Explanation: Logistic Regression End of explanation def sigmoid(t): return 1.0 / (1.0 + np.exp(-t)) Explanation: We need to define the sigmoid function $S(t) := \large \frac{1}{1 + \exp(-t)}$. End of explanation sigmoid(np.array([-1.0, 0.0, 1.0])) Explanation: As we are using NumPy to compute $\exp(t)$, we can feed this function with a numpy array to compute the sigmoid function for every element of the array: End of explanation np.exp(1000) Explanation: Next, we define the natural logarithm of the sigmoid function. If we implement this as log(sigmoid(t)) we will get overflow issues for negative values of $t$ such that $t < -1000$ as the expression np.exp(-t) will overflow. End of explanation -np.log(1 + np.exp(1000)) Explanation: Let us compute $$ \ln\bigl(S(-1000)\bigr) = \ln\Bigl(\frac{1}{1 + \exp(1000)}\Bigr) = - \ln\bigl(1 + \exp(1000)\bigr) \approx -1000. $$ End of explanation np.exp(100) Explanation: This is not what we expected. End of explanation 1 + np.exp(-(-100)) == np.exp(-(-100)) Explanation: On the other hand, for $t < -100$ we have that $1 + \exp(-t) \approx \exp(-t)$: End of explanation def logSigmoid(t): if t > -100: return -np.log(1.0 + np.exp(-t)) else: return t logSigmoid(-1000) Explanation: Therefore, if $t < -100$ we have: $$ \begin{array}{lcl} \ln\left(\large\frac{1}{1+\exp(-t)}\right) & = & -\ln\bigl(1+\exp(-t)\bigr) \ & \approx & -\ln\bigl(\exp(-t)\bigr) \ & = & t \end{array} $$ Hence $\ln\bigl(S(t)\bigr) \approx t$ for $t < -100$. The following implementation uses this approximation. End of explanation def ll(X, y, w): return np.sum([logSigmoid(y[i] * (X[i] @ w)) for i in range(len(X))]) Explanation: Given a feature matrix X and a vector y of classification outputs, the log-likelihood function $\texttt{ll}(\textbf{X}, \textbf{y},\textbf{w})$ is mathematically defined as follows: $$\ell\ell(\mathbf{X},\mathbf{y},\mathbf{w}) = \sum\limits_{i=1}^N \ln\Bigl(S\bigl(y_i \cdot(\mathbf{x}i \cdot \mathbf{w})\bigr)\Bigr) = \sum\limits{i=1}^N L\bigl(y_i \cdot(\mathbf{x}_i \cdot \mathbf{w})\bigr) $$ The value of the log-likelihood function is interpreted as the logarithm of the probability that our model of the classifier predicts the observed values $y_i$ when the features are given by the vector $\textbf{x}_i$ for all $i\in{1,\cdots,N}$. The arguments $\textbf{X}$, $\textbf{y}$, and $\textbf{w}$ are interpreted as follows: * $\textbf{X}$ is the feature matrix, $\textbf{X}[i]$ is the $i$-th feature vector, i.e we have $\textbf{X}[i] = \textbf{x}_i$ if we regard $\textbf{x}_i$ as a row vector. Furthermore, it is assumed that $\textbf{X}[i][0]$ is 1.0 for all $i$. Hence we have a feature that is constant for all examples. * $\textbf{y}$ is the output vector, $\textbf{y}[i] \in {-1,+1}$ for all $i$. * $\textbf{w}$ is the weight vector. End of explanation def gradLL(X, y, w): Gradient = [] for j in range(len(X[0])): L = [y[i] * X[i][j] * sigmoid(-y[i] * (X[i] @ w)) for i in range(len(X))] Gradient.append(sum(L)) return np.array(Gradient) Explanation: The function $\mathtt{gradLL}(\mathbf{x}, \mathbf{y}, \mathbf{w})$ computes the gradient of the log-likelihood according to the formula $$ \frac{\partial\quad}{\partial\, w_j}\ell\ell(\mathbf{X},\mathbf{y};\mathbf{w}) = \sum\limits_{i=1}^N y_i \cdot x_{i,j} \cdot S(-y_i \cdot \mathbf{x}_i \cdot \mathbf{w}). $$ The different components of this gradient are combined into a vector. The arguments are the same as the arguments to the function $\ell\ell$ that computes the log-likelihood, i.e. 
* $\textbf{X}$ is the feature matrix, $\textbf{X}[i]$ is the $i$-th feature vector. * $\textbf{y}$ is the output vector, $\textbf{y}[i] \in {-1,+1}$ for all $i$. * $\textbf{w}$ is the weight vector. End of explanation import csv Explanation: The data we want to investigate is stored in the file 'exam.csv'. The first column of this file is an integer from the set ${0,1}$. The number is $0$ if the corresponding student has failed the exam and is $1$ otherwise. The second column is a floating point number that lists the number of hours that the student has studied. End of explanation !cat exam.csv || type exam.csv with open('exam.csv') as file: reader = csv.reader(file, delimiter=',') count = 0 # line count Pass = [] Hours = [] for row in reader: if count != 0: # skip header Pass .append(float(row[0])) Hours.append(float(row[1])) count += 1 Explanation: The file exam.csv contains fictional data about an exam. The first column contains the number 0 if the student has failed the exam and 1 otherwise. The second column contains the number of hours the student has studied for the given exam. End of explanation y = np.array(Pass) x = np.array(Hours) x y import matplotlib.pyplot as plt import seaborn as sns plt.figure(figsize=(15, 10)) sns.set(style='darkgrid') plt.title('Pass/Fail vs. Hours of Study') plt.axvline(x=0.0, c='k') plt.axhline(y=0.0, c='k') plt.xlabel('Hours of Study') plt.ylabel('Pass = 1, Fail = 0') plt.xticks(np.arange(0.0, 6.0, step=0.5)) plt.yticks(np.arange(-0.0, 1.1, step=0.1)) plt.scatter(x, y, color='b') Explanation: To proceed, we will plot the data points. To this end we transform the lists Pass and Hours into numpy arrays. End of explanation n = len(y) n Explanation: The number of students is stored in the variable n. End of explanation x.shape X = np.reshape(x, (n, 1)) X Explanation: We have to turn the vector x into the feature matrix X. End of explanation X = np.append(X, np.ones((n, 1)), axis=-1) X Explanation: We append the number $1.0$ in every row of X. End of explanation y = 2 * y - 1 y Explanation: Currently, the entries in the vector y are either $0$ or $1$. These values need to be transformed to $-1$ and $+1$. End of explanation import gradient_ascent start = np.zeros((2,)) eps = 10 ** -8 f = lambda w: ll(X, y, w) gradF = lambda w: gradLL(X, y, w) w, _, _ = gradient_ascent.findMaximum(f, gradF, start, eps, True) beta = w[1] gamma = w[0] print(f'model: P(pass|hours) = S({beta} + {gamma} * hours)') Explanation: As we have no real clue about the weights, we set them to $0$ initially. End of explanation plt.figure(figsize=(15, 9)) sns.set_style('whitegrid') plt.title('Pass/Fail vs. Hours of Study') H = np.arange(0.0, 6.0, 0.05) P = sigmoid(beta + gamma * H) sns.lineplot(x=H, y=P, color='r') plt.axvline(x=0.0, c='k') plt.axhline(y=0.0, c='k') plt.xlabel('Hours of Study') plt.ylabel('Probability of Passing the Exam') plt.xticks(np.arange(0.0, 6.0, step=0.5)) plt.yticks(np.arange(-0.0, 1.01, step=0.1)) plt.scatter(x, (y + 1) / 2, color='b') plt.savefig('exam-probability.pdf') Explanation: Let us plot this function together with the data. End of explanation
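The final cells import a helper module named gradient_ascent whose source is not shown in this notebook. Purely to illustrate what the call findMaximum(f, gradF, start, eps, True) might do internally, here is a hedged sketch of a plain gradient-ascent loop; the fixed step size, the iteration cap, and the three-value return shape are assumptions chosen to match how the notebook unpacks the result, and the real module may differ.

import numpy as np

def find_maximum(f, grad_f, start, eps, verbose=False, alpha=0.001, max_iter=100_000):
    # Plain gradient ascent: step along grad_f until the update is smaller than eps.
    # Returns (argmax estimate, objective value, iterations used) -- this shape is an
    # assumption made to match `w, _, _ = gradient_ascent.findMaximum(...)` above.
    x = np.array(start, dtype=float)
    n = 0
    for n in range(max_iter):
        step = alpha * grad_f(x)
        x = x + step
        if verbose and n % 1000 == 0:
            print(f"iteration {n}: f(x) = {f(x):.6f}")
        if np.linalg.norm(step) < eps:
            break
    return x, f(x), n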
Given the following text description, write Python code to implement the functionality described below step by step Description: Solving problems by Searching This notebook serves as supporting material for topics covered in Chapter 3 - Solving Problems by Searching and Chapter 4 - Beyond Classical Search from the book Artificial Intelligence Step1: Review Here, we learn about problem solving. Building goal-based agents that can plan ahead to solve problems, in particular, navigation problem/route finding problem. First, we will start the problem solving by precisely defining problems and their solutions. We will look at several general-purpose search algorithms. Broadly, search algorithms are classified into two types Step2: The Problem class has six methods. __init__(self, initial, goal) Step3: Now it's time to define our problem. We will define it by passing initial, goal, graph to GraphProblem. So, our problem is to find the goal state starting from the given initial state on the provided graph. Have a look at our romania_map, which is an Undirected Graph containing a dict of nodes as keys and neighbours as values. Step4: It is pretty straightforward to understand this romania_map. The first node Arad has three neighbours named Zerind, Sibiu, Timisoara. Each of these nodes are 75, 140, 118 units apart from Arad respectively. And the same goes with other nodes. And romania_map.locations contains the positions of each of the nodes. We will use the straight line distance (which is different from the one provided in romania_map) between two cities in algorithms like A*-search and Recursive Best First Search. Define a problem Step5: Romania map visualisation Let's have a visualisation of Romania map [Figure 3.2] from the book and see how different searching algorithms perform / how frontier expands in each search algorithm for a simple problem named romania_problem. Have a look at romania_locations. It is a dictionary defined in search module. We will use these location values to draw the romania graph using networkx. Step6: Let's start the visualisations by importing necessary modules. We use networkx and matplotlib to show the map in the notebook and we use ipywidgets to interact with the map to see how the searching algorithm works. Step7: Let's get started by initializing an empty graph. We will add nodes, place the nodes in their location as shown in the book, add edges to the graph. Step8: We have completed building our graph based on romania_map and its locations. It's time to display it here in the notebook. This function show_map(node_colors) helps us do that. We will be calling this function later on to display the map at each and every interval step while searching, using variety of algorithms from the book. Step9: We can simply call the function with node_colors dictionary object to display it. Step10: Voila! You see, the romania map as shown in the Figure[3.2] in the book. Now, see how different searching algorithms perform with our problem statements. Searching algorithms visualisations In this section, we have visualisations of the following searching algorithms Step12: Breadth first tree search We have a working implementation in search module. But as we want to interact with the graph while it is searching, we need to modify the implementation. Here's the modified breadth first tree search. Step13: Now, we use ipywidgets to display a slider, a button and our romania map. By sliding the slider we can have a look at all the intermediate steps of a particular search algorithm. 
By pressing the button Visualize, you can see all the steps without interacting with the slider. These two helper functions are the callback functions which are called when we interact with the slider and the button. Step14: Breadth first search Let's change all the node_colors to starting position and define a different problem statement. Step16: Uniform cost search Let's change all the node_colors to starting position and define a different problem statement. Step19: A* search Let's change all the node_colors to starting position and define a different problem statement.
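Before the notebook code, a toy illustration may help fix ideas about the Problem interface listed in Step2. The subclass below is not part of the notebook; it assumes the abstract Problem class from search.py with the methods described above and simply walks along the integers from an initial number to a goal number.

from search import Problem

class WalkToGoal(Problem):
    # Toy problem: move along the integers one step at a time until the goal is reached.
    def actions(self, state):
        return ['+1', '-1']

    def result(self, state, action):
        return state + 1 if action == '+1' else state - 1

    def goal_test(self, state):
        return state == self.goal

    def path_cost(self, c, state1, action, state2):
        return c + 1  # every step costs one unit

toy_problem = WalkToGoal(initial=0, goal=3)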
Python Code: from search import * Explanation: Solving problems by Searching This notebook serves as supporting material for topics covered in Chapter 3 - Solving Problems by Searching and Chapter 4 - Beyond Classical Search from the book Artificial Intelligence: A Modern Approach. This notebook uses implementations from search.py module. Let's start by importing everything from search module. End of explanation %psource Problem Explanation: Review Here, we learn about problem solving. Building goal-based agents that can plan ahead to solve problems, in particular, navigation problem/route finding problem. First, we will start the problem solving by precisely defining problems and their solutions. We will look at several general-purpose search algorithms. Broadly, search algorithms are classified into two types: Uninformed search algorithms: Search algorithms which explore the search space without having any information about the problem other than its definition. Examples: Breadth First Search Depth First Search Depth Limited Search Iterative Deepening Search Informed search algorithms: These type of algorithms leverage any information (heuristics, path cost) on the problem to search through the search space to find the solution efficiently. Examples: Best First Search Uniform Cost Search A* Search Recursive Best First Search Don't miss the visualisations of these algorithms solving the route-finding problem defined on Romania map at the end of this notebook. Problem Let's see how we define a Problem. Run the next cell to see how abstract class Problem is defined in the search module. End of explanation %psource GraphProblem Explanation: The Problem class has six methods. __init__(self, initial, goal) : This is what is called a constructor and is the first method called when you create an instance of the class. initial specifies the initial state of our search problem. It represents the start state from where our agent begins its task of exploration to find the goal state(s) which is given in the goal parameter. actions(self, state) : This method returns all the possible actions agent can execute in the given state state. result(self, state, action) : This returns the resulting state if action action is taken in the state state. This Problem class only deals with deterministic outcomes. So we know for sure what every action in a state would result to. goal_test(self, state) : Given a graph state, it checks if it is a terminal state. If the state is indeed a goal state, value of True is returned. Else, of course, False is returned. path_cost(self, c, state1, action, state2) : Return the cost of the path that arrives at state2 as a result of taking action from state1, assuming total cost of c to get up to state1. value(self, state) : This acts as a bit of extra information in problems where we try to optimise a value when we cannot do a goal test. We will use the abstract class Problem to define our real problem named GraphProblem. You can see how we define GraphProblem by running the next cell. 
End of explanation romania_map = UndirectedGraph(dict( Arad=dict(Zerind=75, Sibiu=140, Timisoara=118), Bucharest=dict(Urziceni=85, Pitesti=101, Giurgiu=90, Fagaras=211), Craiova=dict(Drobeta=120, Rimnicu=146, Pitesti=138), Drobeta=dict(Mehadia=75), Eforie=dict(Hirsova=86), Fagaras=dict(Sibiu=99), Hirsova=dict(Urziceni=98), Iasi=dict(Vaslui=92, Neamt=87), Lugoj=dict(Timisoara=111, Mehadia=70), Oradea=dict(Zerind=71, Sibiu=151), Pitesti=dict(Rimnicu=97), Rimnicu=dict(Sibiu=80), Urziceni=dict(Vaslui=142))) romania_map.locations = dict( Arad=(91, 492), Bucharest=(400, 327), Craiova=(253, 288), Drobeta=(165, 299), Eforie=(562, 293), Fagaras=(305, 449), Giurgiu=(375, 270), Hirsova=(534, 350), Iasi=(473, 506), Lugoj=(165, 379), Mehadia=(168, 339), Neamt=(406, 537), Oradea=(131, 571), Pitesti=(320, 368), Rimnicu=(233, 410), Sibiu=(207, 457), Timisoara=(94, 410), Urziceni=(456, 350), Vaslui=(509, 444), Zerind=(108, 531)) Explanation: Now it's time to define our problem. We will define it by passing initial, goal, graph to GraphProblem. So, our problem is to find the goal state starting from the given initial state on the provided graph. Have a look at our romania_map, which is an Undirected Graph containing a dict of nodes as keys and neighbours as values. End of explanation romania_problem = GraphProblem('Arad', 'Bucharest', romania_map) Explanation: It is pretty straightforward to understand this romania_map. The first node Arad has three neighbours named Zerind, Sibiu, Timisoara. Each of these nodes are 75, 140, 118 units apart from Arad respectively. And the same goes with other nodes. And romania_map.locations contains the positions of each of the nodes. We will use the straight line distance (which is different from the one provided in romania_map) between two cities in algorithms like A*-search and Recursive Best First Search. Define a problem: Hmm... say we want to start exploring from Arad and try to find Bucharest in our romania_map. So, this is how we do it. End of explanation romania_locations = romania_map.locations print(romania_locations) Explanation: Romania map visualisation Let's have a visualisation of Romania map [Figure 3.2] from the book and see how different searching algorithms perform / how frontier expands in each search algorithm for a simple problem named romania_problem. Have a look at romania_locations. It is a dictionary defined in search module. We will use these location values to draw the romania graph using networkx. End of explanation %matplotlib inline import networkx as nx import matplotlib.pyplot as plt from matplotlib import lines from ipywidgets import interact import ipywidgets as widgets from IPython.display import display import time Explanation: Let's start the visualisations by importing necessary modules. We use networkx and matplotlib to show the map in the notebook and we use ipywidgets to interact with the map to see how the searching algorithm works. End of explanation # initialise a graph G = nx.Graph() # use this while labeling nodes in the map node_labels = dict() # use this to modify colors of nodes while exploring the graph. 
# This is the only dict we send to `show_map(node_colors)` while drawing the map node_colors = dict() for n, p in romania_locations.items(): # add nodes from romania_locations G.add_node(n) # add nodes to node_labels node_labels[n] = n # node_colors to color nodes while exploring romania map node_colors[n] = "white" # we'll save the initial node colors to a dict to use later initial_node_colors = dict(node_colors) # positions for node labels node_label_pos = { k:[v[0],v[1]-10] for k,v in romania_locations.items() } # use this while labeling edges edge_labels = dict() # add edges between cities in romania map - UndirectedGraph defined in search.py for node in romania_map.nodes(): connections = romania_map.get(node) for connection in connections.keys(): distance = connections[connection] # add edges to the graph G.add_edge(node, connection) # add distances to edge_labels edge_labels[(node, connection)] = distance Explanation: Let's get started by initializing an empty graph. We will add nodes, place the nodes in their location as shown in the book, add edges to the graph. End of explanation def show_map(node_colors): # set the size of the plot plt.figure(figsize=(18,13)) # draw the graph (both nodes and edges) with locations from romania_locations nx.draw(G, pos = romania_locations, node_color = [node_colors[node] for node in G.nodes()]) # draw labels for nodes node_label_handles = nx.draw_networkx_labels(G, pos = node_label_pos, labels = node_labels, font_size = 14) # add a white bounding box behind the node labels [label.set_bbox(dict(facecolor='white', edgecolor='none')) for label in node_label_handles.values()] # add edge lables to the graph nx.draw_networkx_edge_labels(G, pos = romania_locations, edge_labels=edge_labels, font_size = 14) # add a legend white_circle = lines.Line2D([], [], color="white", marker='o', markersize=15, markerfacecolor="white") orange_circle = lines.Line2D([], [], color="orange", marker='o', markersize=15, markerfacecolor="orange") red_circle = lines.Line2D([], [], color="red", marker='o', markersize=15, markerfacecolor="red") gray_circle = lines.Line2D([], [], color="gray", marker='o', markersize=15, markerfacecolor="gray") green_circle = lines.Line2D([], [], color="green", marker='o', markersize=15, markerfacecolor="green") plt.legend((white_circle, orange_circle, red_circle, gray_circle, green_circle), ('Un-explored', 'Frontier', 'Currently Exploring', 'Explored', 'Final Solution'), numpoints=1,prop={'size':16}, loc=(.8,.75)) # show the plot. No need to use in notebooks. nx.draw will show the graph itself. plt.show() Explanation: We have completed building our graph based on romania_map and its locations. It's time to display it here in the notebook. This function show_map(node_colors) helps us do that. We will be calling this function later on to display the map at each and every interval step while searching, using variety of algorithms from the book. End of explanation show_map(node_colors) Explanation: We can simply call the function with node_colors dictionary object to display it. 
End of explanation def final_path_colors(problem, solution): "returns a node_colors dict of the final path provided the problem and solution" # get initial node colors final_colors = dict(initial_node_colors) # color all the nodes in solution and starting node to green final_colors[problem.initial] = "green" for node in solution: final_colors[node] = "green" return final_colors def display_visual(user_input, algorithm=None, problem=None): if user_input == False: def slider_callback(iteration): # don't show graph for the first time running the cell calling this function try: show_map(all_node_colors[iteration]) except: pass def visualize_callback(Visualize): if Visualize is True: button.value = False global all_node_colors iterations, all_node_colors, node = algorithm(problem) solution = node.solution() all_node_colors.append(final_path_colors(problem, solution)) slider.max = len(all_node_colors) - 1 for i in range(slider.max + 1): slider.value = i #time.sleep(.5) slider = widgets.IntSlider(min=0, max=1, step=1, value=0) slider_visual = widgets.interactive(slider_callback, iteration = slider) display(slider_visual) button = widgets.ToggleButton(value = False) button_visual = widgets.interactive(visualize_callback, Visualize = button) display(button_visual) if user_input == True: node_colors = dict(initial_node_colors) if algorithm == None: algorithms = {"Breadth First Tree Search": breadth_first_tree_search, "Breadth First Search": breadth_first_search, "Uniform Cost Search": uniform_cost_search, "A-star Search": astar_search} algo_dropdown = widgets.Dropdown(description = "Search algorithm: ", options = sorted(list(algorithms.keys())), value = "Breadth First Tree Search") display(algo_dropdown) def slider_callback(iteration): # don't show graph for the first time running the cell calling this function try: show_map(all_node_colors[iteration]) except: pass def visualize_callback(Visualize): if Visualize is True: button.value = False problem = GraphProblem(start_dropdown.value, end_dropdown.value, romania_map) global all_node_colors if algorithm == None: user_algorithm = algorithms[algo_dropdown.value] # print(user_algorithm) # print(problem) iterations, all_node_colors, node = user_algorithm(problem) solution = node.solution() all_node_colors.append(final_path_colors(problem, solution)) slider.max = len(all_node_colors) - 1 for i in range(slider.max + 1): slider.value = i # time.sleep(.5) start_dropdown = widgets.Dropdown(description = "Start city: ", options = sorted(list(node_colors.keys())), value = "Arad") display(start_dropdown) end_dropdown = widgets.Dropdown(description = "Goal city: ", options = sorted(list(node_colors.keys())), value = "Fagaras") display(end_dropdown) button = widgets.ToggleButton(value = False) button_visual = widgets.interactive(visualize_callback, Visualize = button) display(button_visual) slider = widgets.IntSlider(min=0, max=1, step=1, value=0) slider_visual = widgets.interactive(slider_callback, iteration = slider) display(slider_visual) Explanation: Voila! You see, the romania map as shown in the Figure[3.2] in the book. Now, see how different searching algorithms perform with our problem statements. 
Searching algorithms visualisations In this section, we have visualisations of the following searching algorithms: Breadth First Tree Search - Implemented Depth First Tree Search Depth First Graph Search Breadth First Search - Implemented Best First Graph Search Uniform Cost Search - Implemented Depth Limited Search Iterative Deepening Search A*-Search - Implemented Recursive Best First Search We add the colors to the nodes to have a nice visualisation when displaying. So, these are the different colors we are using in these visuals: * Un-explored nodes - <font color='black'>white</font> * Frontier nodes - <font color='orange'>orange</font> * Currently exploring node - <font color='red'>red</font> * Already explored nodes - <font color='gray'>gray</font> Now, we will define some helper methods to display interactive buttons and sliders when visualising search algorithms. End of explanation def tree_search(problem, frontier): Search through the successors of a problem to find a goal. The argument frontier should be an empty queue. Don't worry about repeated paths to a state. [Figure 3.7] # we use these two variables at the time of visualisations iterations = 0 all_node_colors = [] node_colors = dict(initial_node_colors) #Adding first node to the queue frontier.append(Node(problem.initial)) node_colors[Node(problem.initial).state] = "orange" iterations += 1 all_node_colors.append(dict(node_colors)) while frontier: #Popping first node of queue node = frontier.pop() # modify the currently searching node to red node_colors[node.state] = "red" iterations += 1 all_node_colors.append(dict(node_colors)) if problem.goal_test(node.state): # modify goal node to green after reaching the goal node_colors[node.state] = "green" iterations += 1 all_node_colors.append(dict(node_colors)) return(iterations, all_node_colors, node) frontier.extend(node.expand(problem)) for n in node.expand(problem): node_colors[n.state] = "orange" iterations += 1 all_node_colors.append(dict(node_colors)) # modify the color of explored nodes to gray node_colors[node.state] = "gray" iterations += 1 all_node_colors.append(dict(node_colors)) return None def breadth_first_tree_search(problem): "Search the shallowest nodes in the search tree first." iterations, all_node_colors, node = tree_search(problem, FIFOQueue()) return(iterations, all_node_colors, node) Explanation: Breadth first tree search We have a working implementation in search module. But as we want to interact with the graph while it is searching, we need to modify the implementation. Here's the modified breadth first tree search. End of explanation all_node_colors = [] romania_problem = GraphProblem('Arad', 'Fagaras', romania_map) display_visual(user_input = False, algorithm = breadth_first_tree_search, problem = romania_problem) Explanation: Now, we use ipywidgets to display a slider, a button and our romania map. By sliding the slider we can have a look at all the intermediate steps of a particular search algorithm. By pressing the button Visualize, you can see all the steps without interacting with the slider. These two helper functions are the callback functions which are called when we interact with the slider and the button. 
End of explanation def breadth_first_search(problem): "[Figure 3.11]" # we use these two variables at the time of visualisations iterations = 0 all_node_colors = [] node_colors = dict(initial_node_colors) node = Node(problem.initial) node_colors[node.state] = "red" iterations += 1 all_node_colors.append(dict(node_colors)) if problem.goal_test(node.state): node_colors[node.state] = "green" iterations += 1 all_node_colors.append(dict(node_colors)) return(iterations, all_node_colors, node) frontier = FIFOQueue() frontier.append(node) # modify the color of frontier nodes to blue node_colors[node.state] = "orange" iterations += 1 all_node_colors.append(dict(node_colors)) explored = set() while frontier: node = frontier.pop() node_colors[node.state] = "red" iterations += 1 all_node_colors.append(dict(node_colors)) explored.add(node.state) for child in node.expand(problem): if child.state not in explored and child not in frontier: if problem.goal_test(child.state): node_colors[child.state] = "green" iterations += 1 all_node_colors.append(dict(node_colors)) return(iterations, all_node_colors, child) frontier.append(child) node_colors[child.state] = "orange" iterations += 1 all_node_colors.append(dict(node_colors)) node_colors[node.state] = "gray" iterations += 1 all_node_colors.append(dict(node_colors)) return None all_node_colors = [] romania_problem = GraphProblem('Arad', 'Bucharest', romania_map) display_visual(user_input = False, algorithm = breadth_first_search, problem = romania_problem) Explanation: Breadth first search Let's change all the node_colors to starting position and define a different problem statement. End of explanation def best_first_graph_search(problem, f): Search the nodes with the lowest f scores first. You specify the function f(node) that you want to minimize; for example, if f is a heuristic estimate to the goal, then we have greedy best first search; if f is node.depth then we have breadth-first search. There is a subtlety: the line "f = memoize(f, 'f')" means that the f values will be cached on the nodes as they are computed. So after doing a best first search you can examine the f values of the path returned. 
# we use these two variables at the time of visualisations iterations = 0 all_node_colors = [] node_colors = dict(initial_node_colors) f = memoize(f, 'f') node = Node(problem.initial) node_colors[node.state] = "red" iterations += 1 all_node_colors.append(dict(node_colors)) if problem.goal_test(node.state): node_colors[node.state] = "green" iterations += 1 all_node_colors.append(dict(node_colors)) return(iterations, all_node_colors, node) frontier = PriorityQueue(min, f) frontier.append(node) node_colors[node.state] = "orange" iterations += 1 all_node_colors.append(dict(node_colors)) explored = set() while frontier: node = frontier.pop() node_colors[node.state] = "red" iterations += 1 all_node_colors.append(dict(node_colors)) if problem.goal_test(node.state): node_colors[node.state] = "green" iterations += 1 all_node_colors.append(dict(node_colors)) return(iterations, all_node_colors, node) explored.add(node.state) for child in node.expand(problem): if child.state not in explored and child not in frontier: frontier.append(child) node_colors[child.state] = "orange" iterations += 1 all_node_colors.append(dict(node_colors)) elif child in frontier: incumbent = frontier[child] if f(child) < f(incumbent): del frontier[incumbent] frontier.append(child) node_colors[child.state] = "orange" iterations += 1 all_node_colors.append(dict(node_colors)) node_colors[node.state] = "gray" iterations += 1 all_node_colors.append(dict(node_colors)) return None def uniform_cost_search(problem): "[Figure 3.14]" iterations, all_node_colors, node = best_first_graph_search(problem, lambda node: node.path_cost) return(iterations, all_node_colors, node) all_node_colors = [] romania_problem = GraphProblem('Arad', 'Bucharest', romania_map) display_visual(user_input = False, algorithm = uniform_cost_search, problem = romania_problem) Explanation: Uniform cost search Let's change all the node_colors to starting position and define a different problem statement. End of explanation def best_first_graph_search(problem, f): Search the nodes with the lowest f scores first. You specify the function f(node) that you want to minimize; for example, if f is a heuristic estimate to the goal, then we have greedy best first search; if f is node.depth then we have breadth-first search. There is a subtlety: the line "f = memoize(f, 'f')" means that the f values will be cached on the nodes as they are computed. So after doing a best first search you can examine the f values of the path returned. 
# we use these two variables at the time of visualisations iterations = 0 all_node_colors = [] node_colors = dict(initial_node_colors) f = memoize(f, 'f') node = Node(problem.initial) node_colors[node.state] = "red" iterations += 1 all_node_colors.append(dict(node_colors)) if problem.goal_test(node.state): node_colors[node.state] = "green" iterations += 1 all_node_colors.append(dict(node_colors)) return(iterations, all_node_colors, node) frontier = PriorityQueue(min, f) frontier.append(node) node_colors[node.state] = "orange" iterations += 1 all_node_colors.append(dict(node_colors)) explored = set() while frontier: node = frontier.pop() node_colors[node.state] = "red" iterations += 1 all_node_colors.append(dict(node_colors)) if problem.goal_test(node.state): node_colors[node.state] = "green" iterations += 1 all_node_colors.append(dict(node_colors)) return(iterations, all_node_colors, node) explored.add(node.state) for child in node.expand(problem): if child.state not in explored and child not in frontier: frontier.append(child) node_colors[child.state] = "orange" iterations += 1 all_node_colors.append(dict(node_colors)) elif child in frontier: incumbent = frontier[child] if f(child) < f(incumbent): del frontier[incumbent] frontier.append(child) node_colors[child.state] = "orange" iterations += 1 all_node_colors.append(dict(node_colors)) node_colors[node.state] = "gray" iterations += 1 all_node_colors.append(dict(node_colors)) return None def astar_search(problem, h=None): A* search is best-first graph search with f(n) = g(n)+h(n). You need to specify the h function when you call astar_search, or else in your Problem subclass. h = memoize(h or problem.h, 'h') iterations, all_node_colors, node = best_first_graph_search(problem, lambda n: n.path_cost + h(n)) return(iterations, all_node_colors, node) all_node_colors = [] romania_problem = GraphProblem('Arad', 'Bucharest', romania_map) display_visual(user_input = False, algorithm = astar_search, problem = romania_problem) all_node_colors = [] # display_visual(user_input = True, algorithm = breadth_first_tree_search) display_visual(user_input = True) Explanation: A* search Let's change all the node_colors to starting position and define a different problem statement. End of explanation
Given the following text description, write Python code to implement the functionality described below step by step Description: 2.1 - 2.2 Migration Step1: By default, ld_func is set to 'interp'. This will interpolate the limb-darkening directly, without requiring a specific law/function. Note, however, that the bolometric limb-darkening does not have 'interp' as an option. Bolometric limb-darkening is only used for irradiation/reflection, and must be set manually. Step2: Back to the dataset-specific limb-darkening, we can see the available options besides 'interp'. Step3: And if we set the value of ld_func to anything other than 'interp', we'll now see new parameters for ld_coeffs_source. In PHOEBE 2.1, this would expose the ld_coeffs parameters instead. However, in PHOEBE 2.2+, limb-darkening will be interpolated automatically by default, requiring one extra step to manually set the coefficients. Step4: Here we see there are several options available for ld_coeffs_source. See the limb-darkening tutorial for more details. Step5: To manually set the coefficients, we must also set ld_coeffs_source to be 'none'. Step6: Now that ld_coeffs is visible, run_checks will fail if they are not of the correct length. Step7: By manually setting the value of ld_coeffs to an appropriate value, the checks should pass.
Python Code: import phoebe b = phoebe.default_binary() b.add_dataset('lc', dataset='lc01') print(b.filter(qualifier='ld*', dataset='lc01')) Explanation: 2.1 - 2.2 Migration: ld_coeffs_source PHOEBE 2.2 introduces the capability to interpolate limb-darkening coefficients for a given ld_func (i.e. linear, quadratic, etc). In order to do so, there is now a new parameter called ld_coeffs_source which will default to 'auto'. The ld_coeffs parameter will not be visibile, unless ld_func is some value other than the default value of 'interp' AND ld_coeffs_source is manually set to 'none'. Any script in which ld_coeffs was set manually, will now require an additional line setting ld_coeffs_source to 'none' (or alternatively removing the line setting ld_coeffs and instead relying on the new capability to interpolate). Below is an example exhibiting the new behavior. End of explanation print(b.filter(qualifier='ld*bol')) Explanation: By default, ld_func is set to 'interp'. This will interpolate the limb-darkening directly, without requiring a specific law/function. Note, however, that the bolometric limb-darkening does not have 'interp' as an option. Bolometric limb-darkening is only used for irradiation/reflection, and must be set manually. End of explanation print(b.get_parameter('ld_func', component='primary').choices) Explanation: Back to the dataset-specific limb-darkening, we can see the available options besides 'interp'. End of explanation b.set_value_all('ld_func', 'linear') print(b.filter(qualifier='ld*', dataset='lc01')) Explanation: And if we set the value of ld_func to anything other than 'interp', we'll now see new parameters for ld_coeffs_source. In PHOEBE 2.1, this would expose the ld_coeffs parameters instead. However, in PHOEBE 2.2+, limb-darkening will be interpolated automatically by default, requiring one extra step to manually set the coefficients. End of explanation print(b.get_parameter('ld_coeffs_source', component='primary').choices) Explanation: Here we see there are several options available for ld_coeffs_source. See the limb-darkening tutorial for more details. End of explanation b.set_value('ld_coeffs_source', component='primary', value='none') print(b.filter(qualifier='ld*', dataset='lc01')) Explanation: To manually set the coefficients, we must also set ld_coeffs_source to be 'none'. End of explanation print(b.run_checks()) Explanation: Now that ld_coeffs is visible, run_checks will fail if they are not of the correct length. End of explanation b.set_value('ld_coeffs', component='primary', value=[0.5]) print(b.filter(qualifier='ld*', dataset='lc01')) print(b.run_checks()) Explanation: By manually setting the value of ld_coeffs to an appropriate value, the checks should pass. End of explanation
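As a compact summary of the migration described above, the sketch below contrasts the old and new behaviour; it reuses only calls that appear in this tutorial, but the bundled before/after layout is an illustration rather than code from the PHOEBE documentation.

import phoebe

b = phoebe.default_binary()
b.add_dataset('lc', dataset='lc01')
b.set_value_all('ld_func', 'linear')

# PHOEBE 2.1: ld_coeffs became visible as soon as ld_func was not 'interp'
# b.set_value('ld_coeffs', component='primary', value=[0.5])

# PHOEBE 2.2+: one extra line is required before setting the coefficients manually
b.set_value('ld_coeffs_source', component='primary', value='none')
b.set_value('ld_coeffs', component='primary', value=[0.5])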
Given the following text description, write Python code to implement the functionality described below step by step Description: Classify text with BERT Learning Objectives Learn how to load a pre-trained BERT model from TensorFlow Hub Learn how to build your own model by combining with a classifier Learn how to train your BERT model by fine-tuning Learn how to save your trained model and use it Learn how to evaluate a text classification model This lab will show you how to fine-tune BERT to perform sentiment analysis on a dataset of plain-text IMDB movie reviews. In addition to training a model, you will learn how to preprocess text into an appropriate format. Before you start Please ensure you have a GPU (1 x NVIDIA Tesla K80 should be enough) attached to your Notebook instance to ensure that the training doesn't take too long. About BERT BERT and other Transformer encoder architectures have been wildly successful on a variety of tasks in NLP (natural language processing). They compute vector-space representations of natural language that are suitable for use in deep learning models. The BERT family of models uses the Transformer encoder architecture to process each token of input text in the full context of all tokens before and after, hence the name: Bidirectional Encoder Representations from Transformers. Step1: You will use the AdamW optimizer from tensorflow/models. Step2: To check if you have a GPU attached, run the following. Step3: Sentiment Analysis This notebook trains a sentiment analysis model to classify movie reviews as positive or negative, based on the text of the review. You'll use the Large Movie Review Dataset that contains the text of 50,000 movie reviews from the Internet Movie Database. Download the IMDB dataset Let's download and extract the dataset, then explore the directory structure. TODO: Set path to a folder outside the git repo where the IMDB data will be downloaded. Step4: Next, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset. The IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the validation_split argument below. Step5: Let's take a look at a few reviews. Step7: Loading models from TensorFlow Hub For the purpose of this lab, we will be loading a model called Small BERT. Small BERT has the same general architecture as the original BERT but it has fewer and/or smaller Transformer blocks. Some other popular BERT models are BERT Base, ALBERT, BERT Experts, Electra. See the continued learning section at the end of this lab for more info. Aside from the models available below, there are multiple versions of the models that are larger and can yield even better accuracy, but they are too big to be fine-tuned on a single GPU. You will be able to do that on the Solve GLUE tasks using BERT on a TPU colab. You'll see in the code below that switching the tfhub.dev URL is enough to try any of these models, because all the differences between them are encapsulated in the SavedModels from TF Hub. Step8: The preprocessing model Text inputs need to be transformed to numeric token ids and arranged in several Tensors before being input to BERT. TensorFlow Hub provides a matching preprocessing model for each of the BERT models discussed above, which implements this transformation using TF ops from the TF.text library. It is not necessary to run pure Python code outside your TensorFlow model to preprocess text. The preprocessing model must be the one referenced by the documentation of the BERT model, which you can read at the URL printed above.
For BERT models from the drop-down above, the preprocessing model is selected automatically. Note Step9: Let's try the preprocessing model on some text and see the output Step10: As you can see, now you have the 3 outputs from the preprocessing that a BERT model would use (input_words_id, input_mask and input_type_ids). Some other important points Step11: The BERT models return a map with 3 important keys Step12: The output is meaningless, of course, because the model has not been trained yet. Let's take a look at the model's structure. Step13: Model training You now have all the pieces to train a model, including the preprocessing module, BERT encoder, data, and classifier. Loss function Since this is a binary classification problem and the model outputs a probability (a single-unit layer), you'll use losses.BinaryCrossentropy loss function. TODO 4 Step14: Optimizer For fine-tuning, let's use the same optimizer that BERT was originally trained with Step15: Loading the BERT model and training Using the classifier_model you created earlier, you can compile the model with the loss, metric and optimizer. TODO 5 Step16: Note Step17: Evaluate the model Let's see how the model performs. Two values will be returned. Loss (a number which represents the error, lower values are better), and accuracy. Step18: Plot the accuracy and loss over time Based on the History object returned by model.fit(). You can plot the training and validation loss for comparison, as well as the training and validation accuracy Step19: In this plot, the red lines represents the training loss and accuracy, and the blue lines are the validation loss and accuracy. Export for inference Now you just save your fine-tuned model for later use. TODO 7 Step20: Let's reload the model so you can try it side by side with the model that is still in memory. Step21: Here you can test your model on any sentence you want, just add to the examples variable below. Step22: If you want to use your model on TF Serving, remember that it will call your SavedModel through one of its named signatures. In Python, you can test them as follows
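As a rough, hedged illustration of that last step, a call through the default serving signature typically looks like the sketch below; the save path, the example list, and the 'classifier' output key are placeholders and assumptions rather than values from this lab.

import tensorflow as tf

saved_model_path = './bert_imdb_export'        # assumed path used when exporting the model
examples = ['this is such an amazing movie!']  # any raw-text reviews

reloaded_model = tf.saved_model.load(saved_model_path)
serving_results = reloaded_model.signatures['serving_default'](tf.constant(examples))
# The output key depends on how the final Dense layer was named; 'classifier' is an assumption.
print(tf.sigmoid(serving_results['classifier']))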
Python Code: # A dependency of the preprocessing for BERT inputs !pip install -q --user tensorflow-text Explanation: Classify text with BERT Learning Objectives Learn how to load a pre-trained BERT model from TensorFlow Hub Learn how to build your own model by combining with a classifier Learn how to train a your BERT model by fine-tuning Learn how to save your trained model and use it Learn how to evaluate a text classification model This lab will show you how to fine-tune BERT to perform sentiment analysis on a dataset of plain-text IMDB movie reviews. In addition to training a model, you will learn how to preprocess text into an appropriate format. Before you start Please ensure you have a GPU (1 x NVIDIA Tesla K80 should be enough) attached to your Notebook instance to ensure that the training doesn't take too long. About BERT BERT and other Transformer encoder architectures have been wildly successful on a variety of tasks in NLP (natural language processing). They compute vector-space representations of natural language that are suitable for use in deep learning models. The BERT family of models uses the Transformer encoder architecture to process each token of input text in the full context of all tokens before and after, hence the name: Bidirectional Encoder Representations from Transformers. BERT models are usually pre-trained on a large corpus of text, then fine-tuned for specific tasks. Setup End of explanation !pip install -q --user tf-models-official import os import shutil import tensorflow as tf import tensorflow_hub as hub import tensorflow_text as text from official.nlp import optimization # to create AdamW optmizer import matplotlib.pyplot as plt tf.get_logger().setLevel('ERROR') Explanation: You will use the AdamW optimizer from tensorflow/models. End of explanation print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU'))) Explanation: To check if you have a GPU attached. Run the following. End of explanation url = 'https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz' #TODO #Set a path to a folder outside the git repo. This is important so data won't get indexed by git on Jupyter lab path = #example: '/home/jupyter/' dataset = tf.keras.utils.get_file('aclImdb_v1.tar.gz', url, untar=True, cache_dir=path, cache_subdir='') dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb') train_dir = os.path.join(dataset_dir, 'train') # remove unused folders to make it easier to load the data remove_dir = os.path.join(train_dir, 'unsup') shutil.rmtree(remove_dir) Explanation: Sentiment Analysis This notebook trains a sentiment analysis model to classify movie reviews as positive or negative, based on the text of the review. You'll use the Large Movie Review Dataset that contains the text of 50,000 movie reviews from the Internet Movie Database. Download the IMDB dataset Let's download and extract the dataset, then explore the directory structure. 
TODO: Set path to a folder outside the git repo where the IMDB data will be downloaded End of explanation AUTOTUNE = tf.data.AUTOTUNE batch_size = 32 seed = 42 raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory( path+'aclImdb/train', batch_size=batch_size, validation_split=0.2, subset='training', seed=seed) class_names = raw_train_ds.class_names train_ds = raw_train_ds.cache().prefetch(buffer_size=AUTOTUNE) val_ds = tf.keras.preprocessing.text_dataset_from_directory( path+'aclImdb/train', batch_size=batch_size, validation_split=0.2, subset='validation', seed=seed) val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE) test_ds = tf.keras.preprocessing.text_dataset_from_directory( path+'aclImdb/test', batch_size=batch_size) test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE) Explanation: Next, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset. The IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the validation_split argument below. Note: When using the validation_split and subset arguments, make sure to either specify a random seed, or to pass shuffle=False, so that the validation and training splits have no overlap. End of explanation for text_batch, label_batch in train_ds.take(1): for i in range(3): print(f'Review: {text_batch.numpy()[i]}') label = label_batch.numpy()[i] print(f'Label : {label} ({class_names[label]})') Explanation: Let's take a look at a few reviews. End of explanation bert_model_name = 'small_bert/bert_en_uncased_L-4_H-512_A-8' @param ["bert_en_uncased_L-12_H-768_A-12", "bert_en_cased_L-12_H-768_A-12", "bert_multi_cased_L-12_H-768_A-12", "small_bert/bert_en_uncased_L-2_H-128_A-2", "small_bert/bert_en_uncased_L-2_H-256_A-4", "small_bert/bert_en_uncased_L-2_H-512_A-8", "small_bert/bert_en_uncased_L-2_H-768_A-12", "small_bert/bert_en_uncased_L-4_H-128_A-2", "small_bert/bert_en_uncased_L-4_H-256_A-4", "small_bert/bert_en_uncased_L-4_H-512_A-8", "small_bert/bert_en_uncased_L-4_H-768_A-12", "small_bert/bert_en_uncased_L-6_H-128_A-2", "small_bert/bert_en_uncased_L-6_H-256_A-4", "small_bert/bert_en_uncased_L-6_H-512_A-8", "small_bert/bert_en_uncased_L-6_H-768_A-12", "small_bert/bert_en_uncased_L-8_H-128_A-2", "small_bert/bert_en_uncased_L-8_H-256_A-4", "small_bert/bert_en_uncased_L-8_H-512_A-8", "small_bert/bert_en_uncased_L-8_H-768_A-12", "small_bert/bert_en_uncased_L-10_H-128_A-2", "small_bert/bert_en_uncased_L-10_H-256_A-4", "small_bert/bert_en_uncased_L-10_H-512_A-8", "small_bert/bert_en_uncased_L-10_H-768_A-12", "small_bert/bert_en_uncased_L-12_H-128_A-2", "small_bert/bert_en_uncased_L-12_H-256_A-4", "small_bert/bert_en_uncased_L-12_H-512_A-8", "small_bert/bert_en_uncased_L-12_H-768_A-12", "albert_en_base", "electra_small", "electra_base", "experts_pubmed", "experts_wiki_books", "talking-heads_base"] map_name_to_handle = { 'bert_en_uncased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3', 'bert_en_cased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/3', 'bert_multi_cased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/3', 'small_bert/bert_en_uncased_L-2_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/1', 'small_bert/bert_en_uncased_L-2_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1', 'small_bert/bert_en_uncased_L-2_H-512_A-8': 
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-512_A-8/1', 'small_bert/bert_en_uncased_L-2_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-768_A-12/1', 'small_bert/bert_en_uncased_L-4_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-128_A-2/1', 'small_bert/bert_en_uncased_L-4_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-256_A-4/1', 'small_bert/bert_en_uncased_L-4_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1', 'small_bert/bert_en_uncased_L-4_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-768_A-12/1', 'small_bert/bert_en_uncased_L-6_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-128_A-2/1', 'small_bert/bert_en_uncased_L-6_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-256_A-4/1', 'small_bert/bert_en_uncased_L-6_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-512_A-8/1', 'small_bert/bert_en_uncased_L-6_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-768_A-12/1', 'small_bert/bert_en_uncased_L-8_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-128_A-2/1', 'small_bert/bert_en_uncased_L-8_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-256_A-4/1', 'small_bert/bert_en_uncased_L-8_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-512_A-8/1', 'small_bert/bert_en_uncased_L-8_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-768_A-12/1', 'small_bert/bert_en_uncased_L-10_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-128_A-2/1', 'small_bert/bert_en_uncased_L-10_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-256_A-4/1', 'small_bert/bert_en_uncased_L-10_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-512_A-8/1', 'small_bert/bert_en_uncased_L-10_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-768_A-12/1', 'small_bert/bert_en_uncased_L-12_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-128_A-2/1', 'small_bert/bert_en_uncased_L-12_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-256_A-4/1', 'small_bert/bert_en_uncased_L-12_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-512_A-8/1', 'small_bert/bert_en_uncased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-768_A-12/1', 'albert_en_base': 'https://tfhub.dev/tensorflow/albert_en_base/2', 'electra_small': 'https://tfhub.dev/google/electra_small/2', 'electra_base': 'https://tfhub.dev/google/electra_base/2', 'experts_pubmed': 'https://tfhub.dev/google/experts/bert/pubmed/2', 'experts_wiki_books': 'https://tfhub.dev/google/experts/bert/wiki_books/2', 'talking-heads_base': 'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_base/1', } map_model_to_preprocess = { 'bert_en_uncased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'bert_en_cased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3', 'small_bert/bert_en_uncased_L-2_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-2_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-2_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 
'small_bert/bert_en_uncased_L-2_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-4_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-4_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-4_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-4_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-6_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-6_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-6_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-6_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-8_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-8_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-8_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-8_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-10_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-10_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-10_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-10_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-12_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-12_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-12_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'bert_multi_cased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3', 'albert_en_base': 'https://tfhub.dev/tensorflow/albert_en_preprocess/3', 'electra_small': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'electra_base': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'experts_pubmed': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'experts_wiki_books': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'talking-heads_base': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', } tfhub_handle_encoder = map_name_to_handle[bert_model_name] tfhub_handle_preprocess = map_model_to_preprocess[bert_model_name] print(f'BERT model selected : {tfhub_handle_encoder}') print(f'Preprocess model auto-selected: {tfhub_handle_preprocess}') Explanation: Loading models from TensorFlow Hub For the purpose of this lab, we will be loading a model called Small BERT. Small BERT has the same general architecture as the original BERT but the has fewer and/or smaller Transformer blocks. Some other popular BERT models are BERT Base, ALBERT, BERT Experts, Electra. See the continued learning section at the end of this lab for more info. 
Aside from the models available below, there are multiple versions of the models that are larger and can yeld even better accuracy but they are too big to be fine-tuned on a single GPU. You will be able to do that on the Solve GLUE tasks using BERT on a TPU colab. You'll see in the code below that switching the tfhub.dev URL is enough to try any of these models, because all the differences between them are encapsulated in the SavedModels from TF Hub. End of explanation bert_preprocess_model = #TODO: your code goes here Explanation: The preprocessing model Text inputs need to be transformed to numeric token ids and arranged in several Tensors before being input to BERT. TensorFlow Hub provides a matching preprocessing model for each of the BERT models discussed above, which implements this transformation using TF ops from the TF.text library. It is not necessary to run pure Python code outside your TensorFlow model to preprocess text. The preprocessing model must be the one referenced by the documentation of the BERT model, which you can read at the URL printed above. For BERT models from the drop-down above, the preprocessing model is selected automatically. Note: You will load the preprocessing model into a hub.KerasLayer to compose your fine-tuned model. This is the preferred API to load a TF2-style SavedModel from TF Hub into a Keras model. TODO 1: Use hub.KerasLaye to initialize the preprocessing End of explanation text_test = ['this is such an amazing movie!'] text_preprocessed = #TODO: Code goes here #This print box will help you inspect the keys in the pre-processed dictionary print(f'Keys : {list(text_preprocessed.keys())}') # 1. input_word_ids is the ids for the words in the tokenized sentence print(f'Shape : {text_preprocessed["input_word_ids"].shape}') print(f'Word Ids : {text_preprocessed["input_word_ids"][0, :12]}') #2. input_mask is the tokens which we are masking (masked language model) print(f'Input Mask : {text_preprocessed["input_mask"][0, :12]}') #3. input_type_ids is the sentence id of the input sentence. print(f'Type Ids : {text_preprocessed["input_type_ids"][0, :12]}') Explanation: Let's try the preprocessing model on some text and see the output: TODO 2: Call the preprocess model function and pass text_test End of explanation bert_model = hub.KerasLayer(tfhub_handle_encoder) bert_results = bert_model(text_preprocessed) print(f'Loaded BERT: {tfhub_handle_encoder}') print(f'Pooled Outputs Shape:{bert_results["pooled_output"].shape}') print(f'Pooled Outputs Values:{bert_results["pooled_output"][0, :12]}') print(f'Sequence Outputs Shape:{bert_results["sequence_output"].shape}') print(f'Sequence Outputs Values:{bert_results["sequence_output"][0, :12]}') Explanation: As you can see, now you have the 3 outputs from the preprocessing that a BERT model would use (input_words_id, input_mask and input_type_ids). Some other important points: - The input is truncated to 128 tokens. - The input_type_ids only have one value (0) because this is a single sentence input. For a multiple sentence input, it would have one number for each input. Since this text preprocessor is a TensorFlow model, It can be included in your model directly. Using the BERT model Before putting BERT into your own model, let's take a look at its outputs. You will load it from TF Hub and see the returned values. End of explanation def build_classifier_model(): # TODO: define your model here return tf.keras.Model(text_input, net) #Let's check that the model runs with the output of the preprocessing model. 
classifier_model = build_classifier_model() bert_raw_result = classifier_model(tf.constant(text_test)) print(tf.sigmoid(bert_raw_result)) Explanation: The BERT models return a map with 3 important keys: pooled_output, sequence_output, encoder_outputs: pooled_output to represent each input sequence as a whole. The shape is [batch_size, H]. You can think of this as an embedding for the entire movie review. sequence_output represents each input token in the context. The shape is [batch_size, seq_length, H]. You can think of this as a contextual embedding for every token in the movie review. encoder_outputs are the intermediate activations of the L Transformer blocks. outputs["encoder_outputs"][i] is a Tensor of shape [batch_size, seq_length, 1024] with the outputs of the i-th Transformer block, for 0 &lt;= i &lt; L. The last value of the list is equal to sequence_output. For the fine-tuning you are going to use the pooled_output array. Define your model You will create a very simple fine-tuned model, with the preprocessing model, the selected BERT model, one Dense and a Dropout layer. Note: for more information about the base model's input and output you can use just follow the model's url for documentation. Here specifically you don't need to worry about it because the preprocessing model will take care of that for you. TODO 3: Define your model. It should contain the preprocessing model, the selected BERT model (smallBERT), a dense layer and dropout layer HINT The order of the layers in the model should be: 1. Input Layer 2. Pre-processing Layer 3. Encoder Layer 4. From the BERT output map, use pooled_output 5. Dropout layer 6. Dense layer End of explanation tf.keras.utils.plot_model(classifier_model) Explanation: The output is meaningless, of course, because the model has not been trained yet. Let's take a look at the model's structure. End of explanation loss = #TODO: your code goes here metrics = #TODO: your code goes here Explanation: Model training You now have all the pieces to train a model, including the preprocessing module, BERT encoder, data, and classifier. Loss function Since this is a binary classification problem and the model outputs a probability (a single-unit layer), you'll use losses.BinaryCrossentropy loss function. TODO 4: define your loss and evaluation metric here. Since it is a binary classification use BinaryCrossentropy and BinaryAccuracy End of explanation epochs = 5 steps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy() num_train_steps = steps_per_epoch * epochs num_warmup_steps = int(0.1*num_train_steps) init_lr = 3e-5 optimizer = optimization.create_optimizer(init_lr=init_lr, num_train_steps=num_train_steps, num_warmup_steps=num_warmup_steps, optimizer_type='adamw') Explanation: Optimizer For fine-tuning, let's use the same optimizer that BERT was originally trained with: the "Adaptive Moments" (Adam). This optimizer minimizes the prediction loss and does regularization by weight decay (not using moments), which is also known as AdamW. In past labs, we have been using the Adam optimizer which is a popular choice. However, for this lab we will be using a new optimizier which is meant to improve generalization. The intuition and algoritm behind AdamW can be found in this paper here. For the learning rate (init_lr), we use the same schedule as BERT pre-training: linear decay of a notional initial learning rate, prefixed with a linear warm-up phase over the first 10% of training steps (num_warmup_steps). 
In line with the BERT paper, the initial learning rate is smaller for fine-tuning (best of 5e-5, 3e-5, 2e-5). End of explanation #TODO: Model compile code goes here Explanation: Loading the BERT model and training Using the classifier_model you created earlier, you can compile the model with the loss, metric and optimizer. TODO 5: complile the model using the optimizer, loss and metrics you defined above End of explanation print(f'Training model with {tfhub_handle_encoder}') history = #TODO: model fit code goes here Explanation: Note: training time will vary depending on the complexity of the BERT model you have selected. TODO 6: write code to fit the model and start training End of explanation loss, accuracy = classifier_model.evaluate(test_ds) print(f'Loss: {loss}') print(f'Accuracy: {accuracy}') Explanation: Evaluate the model Let's see how the model performs. Two values will be returned. Loss (a number which represents the error, lower values are better), and accuracy. End of explanation history_dict = history.history print(history_dict.keys()) acc = history_dict['binary_accuracy'] val_acc = history_dict['val_binary_accuracy'] loss = history_dict['loss'] val_loss = history_dict['val_loss'] epochs = range(1, len(acc) + 1) fig = plt.figure(figsize=(10, 6)) fig.tight_layout() plt.subplot(2, 1, 1) # "bo" is for "blue dot" plt.plot(epochs, loss, 'r', label='Training loss') # b is for "solid blue line" plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') # plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.subplot(2, 1, 2) plt.plot(epochs, acc, 'r', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend(loc='lower right') Explanation: Plot the accuracy and loss over time Based on the History object returned by model.fit(). You can plot the training and validation loss for comparison, as well as the training and validation accuracy: End of explanation dataset_name = 'imdb' saved_model_path = './{}_bert'.format(dataset_name.replace('/', '_')) #TODO: your code goes here Explanation: In this plot, the red lines represents the training loss and accuracy, and the blue lines are the validation loss and accuracy. Export for inference Now you just save your fine-tuned model for later use. TODO 7: Write code to save the model to saved_model_path End of explanation reloaded_model = tf.saved_model.load(saved_model_path) Explanation: Let's reload the model so you can try it side by side with the model that is still in memory. End of explanation def print_my_examples(inputs, results): result_for_printing = \ [f'input: {inputs[i]:<30} : score: {results[i][0]:.6f}' for i in range(len(inputs))] print(*result_for_printing, sep='\n') print() examples = [ 'this is such an amazing movie!', # this is the same sentence tried earlier 'The movie was great!', 'The movie was meh.', 'The movie was okish.', 'The movie was terrible...' ] reloaded_results = tf.sigmoid(reloaded_model(tf.constant(examples))) original_results = tf.sigmoid(classifier_model(tf.constant(examples))) print('Results from the saved model:') print_my_examples(examples, reloaded_results) print('Results from the model in memory:') print_my_examples(examples, original_results) Explanation: Here you can test your model on any sentence you want, just add to the examples variable below. 
End of explanation serving_results = reloaded_model \ .signatures['serving_default'](tf.constant(examples)) serving_results = tf.sigmoid(serving_results['classifier']) print_my_examples(examples, serving_results) Explanation: If you want to use your model on TF Serving, remember that it will call your SavedModel through one of its named signatures. In Python, you can test them as follows: End of explanation
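For reference, one common way to wire the classifier head that the TODOs above ask for — preprocessing layer, BERT encoder, pooled_output, dropout, and a single-logit dense layer. This is a sketch rather than the only valid answer; it assumes tensorflow_hub and tensorflow_text are installed and that the two TF Hub handles (the ones printed by the notebook) can be downloaded:

```python
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401  (registers the ops the preprocessing model needs)

# Handles as printed by the notebook above; any matching encoder/preprocess pair works.
preprocess_url = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"
encoder_url = "https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1"

def build_classifier_model():
    text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name="text")
    preprocessing_layer = hub.KerasLayer(preprocess_url, name="preprocessing")
    encoder_inputs = preprocessing_layer(text_input)
    encoder = hub.KerasLayer(encoder_url, trainable=True, name="BERT_encoder")
    outputs = encoder(encoder_inputs)
    net = outputs["pooled_output"]          # whole-sequence embedding
    net = tf.keras.layers.Dropout(0.1)(net)
    net = tf.keras.layers.Dense(1, activation=None, name="classifier")(net)  # logits
    return tf.keras.Model(text_input, net)

model = build_classifier_model()
print(tf.sigmoid(model(tf.constant(["this is such an amazing movie!"]))))
```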
1,605
Given the following text description, write Python code to implement the functionality described below step by step Description: LAB 3c Step1: Verify tables exist Run the following cells to verify that we have previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them. Step2: Lab Task #1 Step3: Get training information and evaluate Let's first look at our training statistics. Step4: Now let's evaluate our trained model on our eval dataset. Step5: Let's use our evaluation's mean_squared_error to calculate our model's RMSE. Step6: Lab Task #2 Step7: Let's first look at our training statistics. Step8: Now let's evaluate our trained model on our eval dataset. Step9: Let's use our evaluation's mean_squared_error to calculate our model's RMSE. Step10: Lab Task #3 Step11: Modify above prediction query using example from simulated dataset Use the feature values you made up above, however set is_male to "Unknown" and plurality to "Multiple(2+)". This is simulating us not knowing the gender or the exact plurality.
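The same CREATE MODEL statement can also be issued from Python with the google-cloud-bigquery client instead of the %%bigquery magic used below. A hedged sketch — it assumes an authenticated client, that the babyweight dataset and training table from lab 1b exist, and the HIDDEN_UNITS/BATCH_SIZE values are only illustrative choices for the DNN options described above:

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes application-default credentials and a default project

# Illustrative DNN options; HIDDEN_UNITS and BATCH_SIZE are the knobs discussed above.
ddl = """
CREATE OR REPLACE MODEL babyweight.model_4
OPTIONS (
    MODEL_TYPE="DNN_REGRESSOR",
    HIDDEN_UNITS=[64, 32],
    BATCH_SIZE=32,
    INPUT_LABEL_COLS=["weight_pounds"],
    DATA_SPLIT_METHOD="NO_SPLIT")
AS
SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks
FROM
    babyweight.babyweight_data_train
"""

client.query(ddl).result()  # blocks until the training job finishes
```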
Python Code: %%bash sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \ sudo pip install google-cloud-bigquery==1.6.1 Explanation: LAB 3c: BigQuery ML Model Deep Neural Network. Learning Objectives Create and evaluate DNN model with BigQuery ML Create and evaluate DNN model with feature engineering with ML.TRANSFORM. Calculate predictions with BigQuery's ML.PREDICT Introduction In this notebook, we will create multiple deep neural network models to predict the weight of a baby before it is born, using first no feature engineering and then the feature engineering from the previous lab using BigQuery ML. We will create and evaluate a DNN model using BigQuery ML, with and without feature engineering using BigQuery's ML.TRANSFORM and calculate predictions with BigQuery's ML.PREDICT. If you need a refresher, you can go back and look how we made a baseline model in the notebook BQML Baseline Model or how we combined linear models with feature engineering in the notebook BQML Linear Models with Feature Engineering. Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. Load necessary libraries Check that the Google BigQuery library is installed and if not, install it. End of explanation %%bigquery -- LIMIT 0 is a free query; this allows us to check that the table exists. SELECT * FROM babyweight.babyweight_data_train LIMIT 0 %%bigquery -- LIMIT 0 is a free query; this allows us to check that the table exists. SELECT * FROM babyweight.babyweight_data_eval LIMIT 0 Explanation: Verify tables exist Run the following cells to verify that we have previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them. End of explanation %%bigquery CREATE OR REPLACE MODEL babyweight.model_4 OPTIONS ( # TODO: Add DNN options INPUT_LABEL_COLS=["weight_pounds"], DATA_SPLIT_METHOD="NO_SPLIT") AS SELECT # TODO: Add base features and label FROM babyweight.babyweight_data_train Explanation: Lab Task #1: Model 4: Increase complexity of model using DNN_REGRESSOR DNN_REGRESSOR is a new regression model_type vs. the LINEAR_REG that we have been using in previous labs. MODEL_TYPE="DNN_REGRESSOR" hidden_units: List of hidden units per layer; all layers are fully connected. Number of elements in the array will be the number of hidden layers. The default value for hidden_units is [Min(128, N / (𝜶(Ni+No)))] (1 hidden layer), with N the training data size, Ni, No the input layer and output layer units, respectively, 𝜶 is constant with value 10. The upper bound of the rule will make sure the model won’t be over fitting. Note that, we currently have a model size limitation to 256MB. dropout: Probability to drop a given coordinate during training; dropout is a very common technique to avoid overfitting in DNNs. The default value is zero, which means we will not drop out any coordinate during training. batch_size: Number of samples that will be served to train the network for each sub iteration. The default value is Min(1024, num_examples) to balance the training speed and convergence. Serving all training data in each sub-iteration may lead to convergence issues, and is not advised. Create DNN_REGRESSOR model Change model type to use DNN_REGRESSOR, add a list of integer HIDDEN_UNITS, and add an integer BATCH_SIZE. * Hint: Create a model_4. 
End of explanation %%bigquery SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_4) Explanation: Get training information and evaluate Let's first look at our training statistics. End of explanation %%bigquery SELECT * FROM ML.EVALUATE(MODEL babyweight.model_4, ( SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks FROM babyweight.babyweight_data_eval )) Explanation: Now let's evaluate our trained model on our eval dataset. End of explanation %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL babyweight.model_4, ( SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks FROM babyweight.babyweight_data_eval )) Explanation: Let's use our evaluation's mean_squared_error to calculate our model's RMSE. End of explanation %%bigquery CREATE OR REPLACE MODEL babyweight.final_model TRANSFORM( weight_pounds, is_male, mother_age, plurality, gestation_weeks, # TODO: Add FEATURE CROSS of: # is_male, bucketed_mother_age, plurality, and bucketed_gestation_weeks OPTIONS ( # TODO: Add DNN options INPUT_LABEL_COLS=["weight_pounds"], DATA_SPLIT_METHOD="NO_SPLIT") AS SELECT * FROM babyweight.babyweight_data_train Explanation: Lab Task #2: Final Model: Apply the TRANSFORM clause Before we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause as we did in the last notebook. This way we can have the same transformations applied for training and prediction without modifying the queries. Let's apply the TRANSFORM clause to the final model and run the query. End of explanation %%bigquery SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.final_model) Explanation: Let's first look at our training statistics. End of explanation %%bigquery SELECT * FROM ML.EVALUATE(MODEL babyweight.final_model, ( SELECT * FROM babyweight.babyweight_data_eval )) Explanation: Now let's evaluate our trained model on our eval dataset. End of explanation %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL babyweight.final_model, ( SELECT * FROM babyweight.babyweight_data_eval )) Explanation: Let's use our evaluation's mean_squared_error to calculate our model's RMSE. End of explanation %%bigquery SELECT * FROM ML.PREDICT(MODEL babyweight.final_model, ( SELECT # TODO Add base features example from original dataset )) Explanation: Lab Task #3: Predict with final model. Now that you have evaluated your model, the next step is to use it to predict the weight of a baby before it is born, using BigQuery ML.PREDICT function. Predict from final model using an example from original dataset End of explanation %%bigquery SELECT * FROM ML.PREDICT(MODEL babyweight.final_model, ( SELECT # TODO Add base features example from simulated dataset )) Explanation: Modify above prediction query using example from simulated dataset Use the feature values you made up above, however set is_male to "Unknown" and plurality to "Multiple(2+)". This is simulating us not knowing the gender or the exact plurality. End of explanation
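The ML.PREDICT call at the end of the lab can likewise be run from Python. A sketch assuming babyweight.final_model has been trained as above; the feature values are made up, and the numeric literals should match the column types used at training time (FLOAT64 here):

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes babyweight.final_model exists in the default project

# "Unknown" / "Multiple(2+)" simulate not knowing the sex or exact plurality.
sql = """
SELECT *
FROM ML.PREDICT(MODEL babyweight.final_model, (
    SELECT
        "Unknown"      AS is_male,
        28.0           AS mother_age,
        "Multiple(2+)" AS plurality,
        38.0           AS gestation_weeks
))
"""

for row in client.query(sql).result():
    print(dict(row))
```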
1,606
Given the following text description, write Python code to implement the functionality described below step by step Description: DEMO - Download Satellite-Temperature, use it to force model This demo requires that you download, from Brightspace, the following files and place them in your working directory Step1: The Objectives of this DEMO are Step2: There! Average SST for August 2015. You not only get the plot, but you also get the actual data. Check out sst Step3: Let's do some playing around... Let's plot the average temperature in February 2015 Step4: You can see that the water is colder in February (compared to the plot of August above), but there are also missing data because of more cloud cover in February. Let's query a larger region Step5: Now let's use the rs.read_timeSeries function, which downloads all the monthly averages for ONE single spot over a specified period. If no period is specified, the default is between 2010-01-01 and 2015-05-31 Step6: As you can see, you get the time series of Sea Surface Temperature for the queried lat/lon. You get the plot plus the data. Note that there are some "empty spots" where there were clouds. Now let's import a modified "Mussel model" that accepts sst as an input and uses it to force the model (previously, Temperature was constant throughout the entire model run). Let's see how the new "Mussel model" with forcing works Step7: Just to compare, let's run the "old" Mussel model
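To turn a monthly satellite series into a forcing for a daily model, the cloudy gaps have to be skipped and the remaining values interpolated onto the model's time step. A small stand-alone illustration of that idea with made-up numbers (it does not assume anything about the exact arrays returned by rs.read_timeSeries):

```python
import numpy as np

# Monthly mean SST for one grid point (degC); NaN marks a cloudy month with no retrieval.
monthly_time = 15.0 + 30.0 * np.arange(12)            # days since start of the record
monthly_sst = np.array([4.1, 3.5, 4.0, 6.2, np.nan, 14.8,
                        18.9, 20.3, 17.6, 13.1, 9.4, 6.0])

# Drop the gaps, then linearly interpolate onto the model's daily time step,
# which is how a monthly satellite series can force a daily biological model.
ok = ~np.isnan(monthly_sst)
model_days = np.arange(0.0, 365.0, 1.0)
daily_sst = np.interp(model_days, monthly_time[ok], monthly_sst[ok])

print(daily_sst[:5], daily_sst.shape)
```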
Python Code: import model_Mussel_IbarraEtal2014 as MusselModel days, dt, par, InitCond = MusselModel.load_defaults() output = MusselModel.run_model(days,dt,InitCond,par) MusselModel.plot_model(output) Explanation: DEMO - Dowload Satellite-Temperature, use it to force model This demo requires that you download, from Brightspace, the following files and place them in your working directory: model_Mussel_IbarraEtal2014.py read_satellites.py model_Mussel_IbarraEtal2014_SSTforcing.py From last week, do you remember the "Mussel" model? End of explanation import read_satellites as rs year = 2015 month = 8 minlat = 38 maxlat = 48 minlon = -67 maxlon = -50 isub = 0.5 # This one line of code does the whole thing (Download + Plot) lon, lat, sst = rs.read_frame(year,month,minlat,maxlat,minlon,maxlon,isub) Explanation: The Objectives of this DEMO are: Download temperature data from Satellites Use it to create a time-series of temperature for a particular location Use the time-series of temperature to "force" the mussel model, and impose the effect of time-varying temperature on Mussel growth The following snipet of code downloads the monthly-averaged Sea Surface Temperature calculated from the POES, AVHRR and GAC Satellites... and make a plot of the downloaded data. End of explanation sst Explanation: There! Average SST for August 2016. You not only get the plot, but you also get the actual data. Check out sst End of explanation month = 2 lon, lat, sst = rs.read_frame(year,month,minlat,maxlat,minlon,maxlon,isub) Explanation: Lets do some playing around... Lets plot the average temperature in February 2015 End of explanation minlon = -70 maxlon = -45 lon, lat, sst = rs.read_frame(year,month,minlat,maxlat,minlon,maxlon,isub) Explanation: You can see that the water is colder in February (compared to the plot of August above), but also there are missing data because of more cloud cover in February. Lets query a larger region: End of explanation import read_satellites as rs lat = 43 lon = -62 sst = rs.read_timeSeries(lat,lon) Explanation: Noe lets use the rs.read_timeSeries function, which download all the monthly average for ONE single spot over a specified period. If no period is specified, the default is between 2010-01-01 and 2015-05-31 End of explanation import model_Mussel_IbarraEtal2014_SSTforcing as MusselModel_SST days, dt, par, InitCond = MusselModel_SST.load_defaults() days, sst = MusselModel_SST.read_satellite_sst(dt,lat=43,lon=-62) output = MusselModel_SST.run_model(days,dt,InitCond,par,sst) MusselModel_SST.plot_model(output) Explanation: As you can see, you get the time-series of Sea Surface Temperature from the queried lat/lon. You get the plot plus the data. Note that there are some "empty spot" where there where clouds. Now lets import a modified "Mussel model" that accepts sst as an input, and uses it to force the model (before Temperature was constant throughout the entire model run). Lets see how the New "Mussel model" with forcing works: End of explanation import model_Mussel_IbarraEtal2014 as MusselModel days, dt, par, InitCond = MusselModel.load_defaults() output = MusselModel.run_model(days,dt,InitCond,par) MusselModel.plot_model(output) Explanation: Just to compare, lets run the "old" Mussel model: End of explanation
1,607
Given the following text description, write Python code to implement the functionality described below step by step Description: Nathan Yee Computation Bayesian Statistics Report01 License Step1: Twin brothers and bayes theorem Suppose we are asked the question Step2: So, we can conclude that Elvis had a 14.8% chance to identical twins with his brother. With Bayes' theorem (math) However, rather than using a huge tree, we can use Bayes' theorem to make a much more eligant solution. First, assuming we are only dealing with twins, we find must find P(male-male|monozygotic), P(male-male), and P(monozygotic). Then we can calculate P(monozygotic|male-male). Step3: The Dice Problem chapter 3 We are given dice 4, 6, 8, 12, and 20 sides. If we roll a a die many times at random, what is the probability that we roll each die. First we must define the Likelihood function for the dice. In this case, if we roll a number greater than that dice (roll 5 for 4 sided dice), the probability of that being the chosen dice goes to zero. Else, the probability is multiplied by 1 over the number of sides. Step4: Next create a dice object with dice of 4, 6, 8, 12 and 20 sides Step5: Roll a 6 and see the probabilities of being each dice Step6: Now roll a series of numbers Step7: For these roles, we see that the 8 sided dice is most probable. It is still possible for the 20 sided dice, but only with a .1% chance. The Train Problem chapter 3 Railroads number trains from 1 to N. One day you see a train numbered 60. How many trains does the railroad have? First define the Train suite. The likelihood is the same as the above dice problem. We can think of it like this Step8: Create train object and update with train number 60 Step9: Plot current probabilities of numbers of trains Step10: Because 60 is not actually a good guess, we will compute the mean of the posterior distribution Step11: The mean of the posterior distribution is the value that minimizes error. In simpler terms, we get the smallest number (error) when we subtract the actual number of trains from the mean of posterior distribution. Next, update the train with two more sightings, 50 and 90 Step12: After the two updates, the error minimizing value has gone down to 164. At the start of the problem, we assumed that there was an equal chance to any number of trains. However, most rail companies don't have thousands of trains. To better represent this fact, we can give each hypotheses greater for smaller numbers of trains. Step15: We initally thought that givin lower number of trains higher probabilities would give us a more accurate result. However, over just a few data points, we get a nearly identical graph to the one with linearly represented hypotheses. Original Bayes Problem - Two Watches Suppose you are a student who goes to various classes. Every morning you wake up and put on one of two watches. The first watch is on time. The second watch is 5 minutes slow. If you arrive to class 3 minutes late, what is the probability you wore the slow watch. Assume that arrival times follow the Gaussian function where b is an offset in minutes Step16: Next create the two hypotheses. As expected, before we see any class arival data, both watches have equal chances of being worn. Step17: As a sanity check, suppose we arrive to class exactly on time, and the next 5 minutes late Step18: Our model says that both hypotheses have still have the same probabilility, this makes sense because one hypotheses is centered at 0 and the other at 5. 
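The two-watch update described above can be cross-checked with a few lines of plain Python: multiply each prior by the Gaussian likelihood and normalize. This mirrors what the Suite's Update does, without depending on thinkbayes2:

```python
import math

def gaussian_like(x, b):
    # Likelihood of arriving x minutes early (negative = late) given a watch offset b.
    return math.exp(-((x - b) ** 2) / 32.0)

# Equal priors over the two watches; b = 0 is the accurate watch, b = -5 the slow one.
priors = {"watch 1": 0.5, "watch 2": 0.5}
offsets = {"watch 1": 0, "watch 2": -5}

# Arriving 3 minutes late (x = -3): multiply prior by likelihood, then normalize.
x = -3
unnorm = {w: priors[w] * gaussian_like(x, offsets[w]) for w in priors}
total = sum(unnorm.values())
posterior = {w: p / total for w, p in unnorm.items()}
print(posterior)  # the slow watch comes out slightly more probable
```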
Now, lets see what happens if we visit 5 more classes Step19: After this series of updates, we have a slightly increased chance of using the on time watch. This makes sense because on average, the times have been slightly closer to 0 than -5. Even though our model performs reasonable close data, it falls apart if you arrive either really late or early. Suppose you arrive to class 11 minutes late
Python Code: from thinkbayes2 import Pmf, Suite import thinkplot import math % matplotlib inline Explanation: Nathan Yee Computation Bayesian Statistics Report01 License: Attribution 4.0 International (CC BY 4.0) End of explanation # calculate number of male-male dizygotic twins using the percentage of dizygotic and percentage of male-male DiMM = 100 * .92 * .25 # calculate number of male-male monozygotic twins using the percentage of monozygotic and percentage of male-male MoMM = 100 * .08 * .5 # calculate total number of male-male twins TotalMM = DiMM + MoMM print("Number of male-male dizygotic twins: {}".format(DiMM)) print("Number of male-male monozygotic twins: {}".format(MoMM)) print("Total number of male-male twins: {}".format(TotalMM)) # next we can calculate the fraction of male-male twins that are monozygotic fractionMoMM = MoMM / TotalMM percentMoMM = fractionMoMM * 100 print("Percentage of male-male monozygotic twins: {0:.1f}%".format(percentMoMM)) Explanation: Twin brothers and bayes theorem Suppose we are asked the question: <b>Elvis Presley had a twin brother who died at birth. What is the probability that Elvis was an identical twin?</b> In order to make our problem easier, we will clarify a few facts about identical twins. Identical twins are known as monozygotic twins, meaning that they both devolop from a single zygote. As a result, monozygotic twins are the same gender, either male-male or female-female. So, we rephrase our question: <b>What percentage of male-male twins are monozygotic.</b> In addition, here is an important fact: .08% of twins are monozygotic. Without Bayes' theorem (counting) We use a tree to visualize the problem. <img src="treeReport1.jpg" alt="Probability Tree" height="400" width="400"> Assuming we have 100 twins, lets calculate the number of male-male dizygotic, male-male monozygotic, and total number of male-male twins. End of explanation twins = dict() # first calculate the total percentage of male-male twins. We can do this by adding the percentage of male-male # monozygotic and the percentage of male-male dizygotic twins['male-male'] = (.08*.50 + .92*.25) twins['male-male|monozygotic'] = (.50) twins['monozygotic'] = (.08) print(twins['male-male']) print(twins['male-male|monozygotic']) print(twins['monozygotic']) # now using bayes theorem temp = twins['male-male|monozygotic'] * twins['monozygotic'] / twins['male-male'] print("P(monozygotic|male-male): {0:.3f}".format(temp)) Explanation: So, we can conclude that Elvis had a 14.8% chance to identical twins with his brother. With Bayes' theorem (math) However, rather than using a huge tree, we can use Bayes' theorem to make a much more eligant solution. First, assuming we are only dealing with twins, we find must find P(male-male|monozygotic), P(male-male), and P(monozygotic). Then we can calculate P(monozygotic|male-male). End of explanation class Dice(Suite): def Likelihood(self, data, hypo): if hypo < data: return 0 else: return 1 / hypo Explanation: The Dice Problem chapter 3 We are given dice 4, 6, 8, 12, and 20 sides. If we roll a a die many times at random, what is the probability that we roll each die. First we must define the Likelihood function for the dice. In this case, if we roll a number greater than that dice (roll 5 for 4 sided dice), the probability of that being the chosen dice goes to zero. Else, the probability is multiplied by 1 over the number of sides. 
End of explanation suite = Dice([4, 6, 8, 12, 20]) Explanation: Next create a dice object with dice of 4, 6, 8, 12 and 20 sides End of explanation suite.Update(6) suite.Print() Explanation: Roll a 6 and see the probabilities of being each dice End of explanation for roll in [6, 8, 7, 7, 5, 4]: suite.Update(roll) suite.Print() Explanation: Now roll a series of numbers End of explanation class Train(Suite): # hypo is the number of trains # data is an observed serial number def Likelihood(self, data, hypo): if data > hypo: return 0 else: return 1 / hypo Explanation: For these roles, we see that the 8 sided dice is most probable. It is still possible for the 20 sided dice, but only with a .1% chance. The Train Problem chapter 3 Railroads number trains from 1 to N. One day you see a train numbered 60. How many trains does the railroad have? First define the Train suite. The likelihood is the same as the above dice problem. We can think of it like this: each number correspond to a number of trains. If we see train N, then all hypothesis less than N are 0. Else, they are 1 / N. End of explanation hypos = range(1, 1001) train = Train(hypos) train.Update(60) Explanation: Create train object and update with train number 60 End of explanation thinkplot.Pdf(train) Explanation: Plot current probabilities of numbers of trains End of explanation def Mean(suite): total = 0 for hypo, prob in suite.Items(): total += hypo * prob return total print(Mean(train)) Explanation: Because 60 is not actually a good guess, we will compute the mean of the posterior distribution End of explanation for data in [50, 90]: train.Update(data) print(Mean(train)) thinkplot.Pdf(train) Explanation: The mean of the posterior distribution is the value that minimizes error. In simpler terms, we get the smallest number (error) when we subtract the actual number of trains from the mean of posterior distribution. Next, update the train with two more sightings, 50 and 90 End of explanation class Train2(Dice): def __init__(self, hypos, alpha=1.0): Pmf.__init__(self) for hypo in hypos: self.Set(hypo, hypo**(-alpha)) self.Normalize() hypos2 = range(1, 1001) train2 = Train2(hypos2) thinkplot.Pmf(train2) for data in [50, 60, 90]: train2.Update(data) thinkplot.Pmf(train2) Explanation: After the two updates, the error minimizing value has gone down to 164. At the start of the problem, we assumed that there was an equal chance to any number of trains. However, most rail companies don't have thousands of trains. To better represent this fact, we can give each hypotheses greater for smaller numbers of trains. End of explanation class Watch(Suite): Maps watch hypotheses to probabilities def f(x, b): f is a function that returns a Gaussian Function. Args: x (int): the primary variable b (int): a constant offset used to make fast or slow clocks return math.exp((-1 * (x-b)**2) / (32)) watch1_probs = dict() for i in range(-15,15): watch1_probs[i] = f(i, 0) watch2_probs = dict() for i in range(-15,15): watch2_probs[i] = f(i, -5) hypotheses = { 'watch 1':watch1_probs, 'watch 2':watch2_probs } def __init__(self, hypos): Pmf.__init__(self) for hypo in hypos: self.Set(hypo, 1) self.Normalize() def Likelihood(self, data, hypo): time = self.hypotheses[hypo] like = time[data] return like Explanation: We initally thought that givin lower number of trains higher probabilities would give us a more accurate result. However, over just a few data points, we get a nearly identical graph to the one with linearly represented hypotheses. 
Original Bayes Problem - Two Watches Suppose you are a student who goes to various classes. Every morning you wake up and put on one of two watches. The first watch is on time. The second watch is 5 minutes slow. If you arrive to class 3 minutes late, what is the probability you wore the slow watch. Assume that arrival times follow the Gaussian function where b is an offset in minutes: $$f(x) = e^{-\frac{(x-b)^2}{32}}$$ First we want to make sure that our gaussian function is a reasonable approximation of arrival time. Below is a plot of the function from 15 minutes late to 15 minutes early. With some quick looks at the graph, you can see that you arrive to class +- 2 minutes around 45% of the time which is reasonable most students. <img src="gaussianFunctions.png" alt="Gaussian Function" height="600" width="600"> Next we define our Watch Suite. Our hypotheses will be the watches described above: 'watch 1' is that you used the on time watch 'watch 2' is that you used the 5 minute slow watch End of explanation watches = Watch(['watch 1', 'watch 2']) watches.Print() Explanation: Next create the two hypotheses. As expected, before we see any class arival data, both watches have equal chances of being worn. End of explanation for arrival_time in [0,-5]: watches.Update(arrival_time) watches.Print() Explanation: As a sanity check, suppose we arrive to class exactly on time, and the next 5 minutes late End of explanation for arrival_time in [0,-2,-2,-3,-5]: watches.Update(arrival_time) watches.Print() Explanation: Our model says that both hypotheses have still have the same probabilility, this makes sense because one hypotheses is centered at 0 and the other at 5. Now, lets see what happens if we visit 5 more classes End of explanation watches.Update(-11) watches.Print() Explanation: After this series of updates, we have a slightly increased chance of using the on time watch. This makes sense because on average, the times have been slightly closer to 0 than -5. Even though our model performs reasonable close data, it falls apart if you arrive either really late or early. Suppose you arrive to class 11 minutes late End of explanation
1,608
Given the following text description, write Python code to implement the functionality described below step by step Description: <p> <img src="http Step1: A generalization using accumulation Step2: According to A162741, we can generalize the pattern above Step3: Unfolding a recurrence with generic coefficients Step4: A curious relation about Fibonacci numbers, in matrix notation
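The swaps recurrence that gets unfolded symbolically below, n·s_n = (n+1)·s_{n-1} + a_n with s_0 = 0 and a_n = (2n−3)/6, can also be evaluated directly with exact fractions, which is a handy sanity check on the symbolic result:

```python
from fractions import Fraction

# The swaps recurrence from the notebook: n*s_n = (n+1)*s_{n-1} + a_n, with s_0 = 0
# and (in the average-case Quicksort analysis) a_n = (2n - 3)/6.
def a(n):
    return Fraction(2 * n - 3, 6)

s = {0: Fraction(0)}
for n in range(1, 8):
    s[n] = ((n + 1) * s[n - 1] + a(n)) / n   # direct evaluation of the unfolded relation

for n, value in s.items():
    print(n, value)
```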
Python Code: %run "recurrences.py" %run "sums.py" %run "start_session.py" from itertools import accumulate def accumulating(acc, current): return Eq(acc.lhs + current.lhs, acc.rhs + current.rhs) Explanation: <p> <img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg" alt="UniFI logo" style="float: left; width: 20%; height: 20%;"> <div align="right"> Massimo Nocentini<br> <small> <br>September {23, 26}, 2016: refactoring toward class-based code <br>September 22, 2016: Quicksort theory, average cases </small> </div> </p> <br> <p> <div align="center"> <b>Abstract</b><br> In this notebook we study two recurrence relations arising from the analysis of the `Quicksort` algorithm: numbers of checks and swaps are taken into account, in the average case. Such relations involve subterms where subscripts dependends on *one* dimension. They are a simple, but interesting, starting point to approach the general method of <b>recurrence unfolding</b>, an algorithmic/symbolical idea stretched further in other notebooks. </div> </p> End of explanation mapped = list(accumulate(mapped, accumulating)) mapped clear_cache() m,v,r = to_matrix_notation(mapped, f, [n-k for k in range(-2, 19)]) m,v,r m_sym = m.subs(inverted_fibs, simultaneous=True) m_sym[:,0] = m_sym[:,0].subs(f[2],f[1]) m_sym[1,2] = m_sym[1,2].subs(f[2],f[1]) m_sym # the following cell produces an error due to ordering, while `m * v` doesn't. #clear_cache() #m_sym * v to_matrix_notation(mapped, f, [n+k for k in range(-18, 3)]) Explanation: A generalization using accumulation End of explanation i = symbols('i') d = IndexedBase('d') k_fn_gen = Eq((k+1)*f[n], Sum(d[k,2*k-i]*f[n-i], (i, 0, 2*k))) d_triangle= {d[0,0]:1, d[n,2*n]:1, d[n,k]:d[n-1, k-1]+d[n-1,k]} k_fn_gen, d_triangle mapped = list(accumulate(mapped, accumulating)) mapped # skip this cell to maintain math coerent version def adjust(term): a_wild, b_wild = Wild('a', exclude=[f]), Wild('b') matched = term.match(a_wild*f[n+2] + b_wild) return -(matched[a_wild]-1)*f[n+2] m = fix_combination(mapped,adjust, lambda v, side: Add(v, side)) mapped = list(m) mapped to_matrix_notation(mapped, f, [n-k for k in range(-2, 19)]) mapped = list(accumulate(mapped, accumulating)) mapped to_matrix_notation(mapped, f, [n-k for k in range(-2, 19)]) mapped = list(accumulate(mapped, accumulating)) mapped to_matrix_notation(mapped, f, [n-k for k in range(-2, 19)]) mapped = list(accumulate(mapped, accumulating)) mapped to_matrix_notation(mapped, f, [n-k for k in range(-2, 19)]) Explanation: According to A162741, we can generalize the pattern above: End of explanation s = IndexedBase('s') a = IndexedBase('a') swaps_recurrence = Eq(n*s[n],(n+1)*s[n-1]+a[n]) swaps_recurrence boundary_conditions = {s[0]:Integer(0)} swaps_recurrence_spec=dict(recurrence_eq=swaps_recurrence, indexed=s, index=n, terms_cache=boundary_conditions) unfolded = do_unfolding_steps(swaps_recurrence_spec, 4) recurrence_eq = project_recurrence_spec(unfolded, recurrence_eq=True) recurrence_eq factored_recurrence_eq = project_recurrence_spec(factor_rhs_unfolded_rec(unfolded), recurrence_eq=True) factored_recurrence_eq factored_recurrence_eq.rhs.collect(s[n-5]).collect(a[n-4]) factored_recurrence_eq.subs(n,5) recurrence_eq.subs(n, 5) def additional_term(n): return (2*Integer(n)-3)/6 as_dict = {a[n]:additional_term(n) for n in range(1,6)} recurrence_eq.subs(n, 5).subs(as_dict) Explanation: Unfolding a recurrence with generic coefficients End of explanation d = 10 m = Matrix(d,d, lambda i,j: binomial(n-i,j)*binomial(n-j,i)) m f = 
IndexedBase('f') fibs = [fibonacci(i) for i in range(50)] mp = (ones(1,d)*m*ones(d,1))[0,0] odd_fibs_eq = Eq(f[2*n+1], mp, evaluate=True) odd_fibs_eq (m*ones(d,1)) Explanation: A curious relation about Fibonacci numbers, in matrix notation End of explanation
1,609
Given the following text description, write Python code to implement the functionality described below step by step Description: Make Template + Supernova Test insertion of SNe in desisim.templates.GALAXY. For now it will fail because metadata needed by the GALAXY is missing. Step1: Generate BGS Galaxy Just generate a vanilla galaxy from the BGS templates. Step2: Insert BGS + SN Ia Attempt to insert a Type Ia supernova into a BGS spectrum. This will fail for now.
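The flux-ratio idea behind sne_fluxratiorange can be sketched without the desisim template files: scale the supernova spectrum so its band-integrated flux has the requested ratio to the host's, then add the two. The arrays and the top-hat "band" below are synthetic placeholders, not the real decam2014-r filter or the BGS/SN templates:

```python
import numpy as np

# Synthetic stand-ins for a host-galaxy spectrum and a SN Ia spectrum on a common grid.
wave = np.linspace(3600.0, 9800.0, 500)           # Angstrom
host = 1.0 + 0.2 * np.sin(wave / 500.0)           # arbitrary flux units
sn = np.exp(-0.5 * ((wave - 6150.0) / 800.0) ** 2)

# Mimic sne_fluxratiorange=(1, 1): scale the SN so its band-integrated flux equals
# the host's in a chosen band (a crude r-like window here), then add the two spectra.
band = (wave > 5600.0) & (wave < 7200.0)
ratio = 1.0
scale = ratio * host[band].sum() / sn[band].sum()
combined = host + scale * sn

print(scale, combined[band].sum() / host[band].sum())  # ratio in the band is ~2.0
```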
Python Code: import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt from desisim.templates import BGS mpl.rc('font', size=14) Explanation: Make Template + Supernova Test insertion of SNe in desisim.templates.GALAXY. For now it will fail because metadata needed by the GALAXY is missing. End of explanation bgs_obj = BGS() flux, wave, meta, objmeta = bgs_obj.make_templates(nmodel=1, zrange=(0.1,0.1), seed=1) fig, ax = plt.subplots(1,1, figsize=(7,4)) ax.plot(wave, flux[0]) ax.set(xlabel='wavelength [$\AA$]', ylabel='flux') fig.tight_layout(); Explanation: Generate BGS Galaxy Just generate a vanilla galaxy from the BGS templates. End of explanation bgs_obj = BGS(add_SNeIa=True) flux, wave, meta, objmeta, snmeta = bgs_obj.make_templates(nmodel=1, zrange=(0.1,0.1), sne_fluxratiorange=(1,1), sne_filter='decam2014r', seed=1) plt.plot(wave, flux[0]) Explanation: Insert BGS + SN Ia Attempt to insert a Type Ia supernova into a BGS spectrum. This will fail for now. End of explanation
1,610
Given the following text description, write Python code to implement the functionality described. Description: Detect cycle in Directed Graph using Topological Sort Python3 program to implement the above approach ; Stack to store the visited vertices in the Topological Sort ; Store Topological Order ; Adjacency list to store edges ; To ensure visited vertex ; Function to perform DFS ; Set the vertex as visited ; Visit connected vertices ; Push into the stack on complete visit of vertex ; Function to check and return if a cycle exists or not ; Stores the position of vertex in topological order ; Pop all elements from stack ; Push element to get Topological Order ; Pop from the stack ; If parent vertex does not appear first ; Cycle exists ; Return false if cycle does not exist ; Function to add edges from u to v ; Driver Code ; Insert edges ; If cycle exist
Python Code: t = 0 n = 0 m = 0 a = 0 s =[] tsort =[] adj =[[ ] for i in range(100001 ) ] visited =[False for i in range(100001 ) ] def dfs(u ) : visited[u ] = 1 for it in adj[u ] : if(visited[it ] == 0 ) : dfs(it )   s . append(u )  def check_cycle() : pos = dict() ind = 0 while(len(s ) != 0 ) : pos[s[- 1 ] ] = ind tsort . append(s[- 1 ] ) ind += 1 s . pop()  for i in range(n ) : for it in adj[i ] : first = 0 if i not in pos else pos[i ] second = 0 if it not in pos else pos[it ] if(first > second ) : return True    return False  def addEdge(u , v ) : adj[u ] . append(v )  if __name__== "__main __": n = 4 m = 5 addEdge(0 , 1 ) addEdge(0 , 2 ) addEdge(1 , 2 ) addEdge(2 , 0 ) addEdge(2 , 3 ) for i in range(n ) : if(visited[i ] == False ) : dfs(i )   if(check_cycle() ) : print(' Yes ' )  else : print(' No ' )  
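A compact, runnable restatement of the same approach — run a DFS to build a topological order, then report a cycle if any edge points from a later vertex to an earlier one. Vertex labels are 0-based and the sample edges mirror the driver code above:

```python
from collections import defaultdict

def has_cycle(n, edges):
    # Build the adjacency list, get a DFS finishing order, reverse it to a
    # tentative topological order, then check that every edge respects it.
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    visited = [False] * n
    order = []                      # vertices in order of DFS completion

    def dfs(u):
        visited[u] = True
        for w in adj[u]:
            if not visited[w]:
                dfs(w)
        order.append(u)

    for u in range(n):
        if not visited[u]:
            dfs(u)

    pos = {u: i for i, u in enumerate(reversed(order))}
    return any(pos[u] > pos[v] for u, v in edges)

print(has_cycle(4, [(0, 1), (0, 2), (1, 2), (2, 0), (2, 3)]))  # True: 0 -> 2 -> 0
print(has_cycle(4, [(0, 1), (0, 2), (1, 2), (2, 3)]))          # False
```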
1,611
Given the following text description, write Python code to implement the functionality described below step by step Description: NbConvert Command line usage NbConvert is both a library and a command line tool that allows you to convert notebooks to other formats. It ships with many common formats Step1: Html is the (configurable) default value. The verbose form of the same command as above is Step2: You can also convert to latex, which will extract the embedded images. If the embedded images are SVGs, inkscape is used to convert them to pdf Step3: Note that the latex conversion creates latex, not a PDF. To create a PDF you need the required third-party packages to compile the latex. A --post flag is provided for convenience, which allows you to have nbconvert automatically compile a PDF for you from your output. Step4: Custom templates Look at the first 20 lines of the python exporter Step5: From the code, you can see that non-code cells are also exported. If you want to change this behavior, you can use a custom template. The custom template inherits from the Python template and overwrites the markdown blocks so that they are empty. Step6: For details about the template syntax, refer to Jinja's manual. Templates that use cell metadata The notebook file format supports attaching arbitrary JSON metadata to each cell. Here, as an exercise, you will use the metadata to tag cells. First you need to choose another notebook you want to convert to html, and tag some of its cells with metadata. You can refer to the file soln/celldiff.js as an example or follow the Javascript tutorial to figure out how to change cell metadata. Assuming you have a notebook with some of the cells tagged as Easy|Medium|Hard|&lt;None&gt;, the notebook can be converted specially using a custom template. Design your template in the cells provided below. The following, unorganized lines of code, may be of help
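The metadata lookup that the exercise hints at, cell['metadata'].get('example', {}).get('difficulty', ''), can be tried on an in-memory notebook dict before touching a real .ipynb file; the example/difficulty key is just this exercise's own convention:

```python
# A miniature notebook structure with the kind of per-cell metadata discussed above;
# for a real file you would use: import json; nb = json.load(open("notebook.ipynb"))
nb = {
    "cells": [
        {"cell_type": "markdown", "metadata": {"example": {"difficulty": "Easy"}},
         "source": ["# Warm-up"]},
        {"cell_type": "code", "metadata": {"example": {"difficulty": "Hard"}},
         "source": ["print('tricky')"]},
        {"cell_type": "code", "metadata": {}, "source": ["pass"]},
    ]
}

for i, cell in enumerate(nb["cells"]):
    # Same defensive lookup a Jinja template block would perform on cell metadata.
    difficulty = cell.get("metadata", {}).get("example", {}).get("difficulty", "")
    print(i, cell["cell_type"], difficulty or "<None>")
```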
Python Code: %%bash ipython nbconvert 'Index.ipynb' Explanation: NbConvert Command line usage NbConvert is both a library and command line tool that allows you to convert notebooks to other formats. It ships with many common formats: html, latex, markdown, python, rst, and slides NbConvert relys on the Jinja templating engine, so implementing a new format or tweeking an existing one is easy. You can invoke nbconvert by running bash $ ipython nbconvert &lt;options and arguments&gt; Call ipython nbconvert with the --help flag or without any aruments to display the basic help. For detailed configuration help, use the --help-all flag. Basic export As a test, the Index.ipynb notebook in the directory will be convert. If you're converting a notebook with code in it, make sure to run the code cells that you're interested in before attempting to convert the notebook. Unless explicitly requested, nbconvert does not execute the code cells of the notebooks that it converts. End of explanation %%bash ipython nbconvert --to=html 'Index.ipynb' Explanation: Html is the (configurable) default value. The verbose form of the same command as above is End of explanation %%bash ipython nbconvert --to=latex 'Index.ipynb' Explanation: You can also convert to latex, which will extract the embeded images. If the embeded images are SVGs, inkscape is used to convert them to pdf: End of explanation %%bash ipython nbconvert --to=latex 'Index.ipynb' --post=pdf Explanation: Note that the latex conversion creates latex, not a PDF. To create a PDF you need the required third party packages to compile the latex. A --post flag is provided for convinience which allows you to have nbconvert automatically compile a PDF for you from your output. End of explanation pyfile = !ipython nbconvert --to python 'Index.ipynb' --stdout for l in pyfile[20:40]: print l Explanation: Custom templates Look at the first 20 lines of the python exporter End of explanation %%writefile simplepython.tpl {% extends 'python.tpl'%} {% block markdowncell -%} {% endblock markdowncell %} ## we also want to get rig of header cell {% block headingcell -%} {% endblock headingcell %} ## and let's change the appearance of input prompt {% block in_prompt %} # This was input cell with prompt number : {{ cell.prompt_number if cell.prompt_number else ' ' }} {%- endblock in_prompt %} pyfile = !ipython nbconvert --to python 'Index.ipynb' --stdout --template=simplepython.tpl for l in pyfile[4:40]: print l print '...' Explanation: From the code, you can see that non-code cells are also exported. If you want to change this behavior, you can use a custom template. The custom template inherits from the Python template and overwrites the markdown blocks so that they are empty. End of explanation %%bash # ipython nbconvert --to html <your chosen notebook.ipynb> --template=<your template file> %loadpy soln/coloreddiff.tpl # ipython nbconvert --to html '04 - Custom Display Logic.ipynb' --template=soln/coloreddiff.tpl Explanation: For details about the template syntax, refer to Jinja's manual. Template that use cells metadata The notebook file format supports attaching arbitrary JSON metadata to each cell. Here, as an exercise, you will use the metadata to tags cells. First you need to choose another notebook you want to convert to html, and tag some of the cells with metadata. You can refere to the file soln/celldiff.js as an example or follow the Javascript tutorial to figure out how do change cell metadata. 
Assuming you have a notebook with some of the cells tagged as Easy|Medium|Hard|<None>, the notebook can be converted specially using a custom template. Design your template in the cells provided below. The following unorganized lines of code may be of help: ``` {% extends 'html_full.tpl'%} {% block any_cell %} {{ super() }} <div style="background-color:red"> <div style='background-color:orange'> ``` If your key is stored under `cell.metadata.example.difficulty`, the following code would get its value: `cell['metadata'].get('example',{}).get('difficulty','')` Tip: Use `%%writefile` to edit the template in the notebook. End of explanation
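One possible way to finish the tagging exercise above is sketched below. It is only an illustration, not the course's official soln/coloreddiff.tpl, and it assumes the difficulty tag lives under cell.metadata.example.difficulty as suggested above. The template extends html_full.tpl and wraps each cell in a colored div according to its tag:

```
%%writefile mycoloreddiff.tpl
{% extends 'html_full.tpl' %}

{% block any_cell %}
{% set difficulty = cell['metadata'].get('example', {}).get('difficulty', '') %}
{% if difficulty == 'Hard' %}
<div style="background-color:red">{{ super() }}</div>
{% elif difficulty == 'Medium' %}
<div style="background-color:orange">{{ super() }}</div>
{% else %}
{{ super() }}
{% endif %}
{% endblock any_cell %}
```

The notebook could then be converted with something like `ipython nbconvert --to html '04 - Custom Display Logic.ipynb' --template=mycoloreddiff.tpl`, mirroring the commented command in the cell above.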
1,612
Given the following text description, write Python code to implement the functionality described below step by step Description: Fizz Buzz with Tensor Flow. This notebook explains the code from the Fizz Buzz in Tensor Flow blog post written by Joel Grus. You should read his post first, it is super funny! His code tries to play the Fizz Buzz game by using machine learning. This notebook is for real beginners who want to understand the basics of TensorFlow by reading code. Feedback welcome @dh7net Let's start! The code contains several parts Step1: Create the training set Encode the input (a number) This example converts the number to a binary representation Step2: Encode the result (fizz or buzz, none or both?) The fizz_buzz_encode function calculates what the output should be and encodes it as a 4-dimensional vector. The fizz_buzz function takes a number and a prediction, and outputs a string Step3: Create the training set Step4: Creation of the model The model is made of Step5: X is the input Y is the output w_h are the parameters between the input and the hidden layer w_o are the parameters between the hidden layer and the output Step6: To create the model we apply the w_h parameters to the input, and then we apply the relu function to calculate the value of the hidden layer. The w_o coefficients are used to calculate the output layer. No rectification is applied. py_x is the predicted value for a given input, represented as a vector (dimension 4) Step7: Training Create the cost function The cost function measures how bad the model is. It is the distance between the prediction (py_x) and the reality (Y). Step8: softmax_cross_entropy_with_logits(py_x, Y) measures the distance between py_x and Y. SoftMax is the classical way to measure the distance between a predicted result and the actual result in a cost function. reduce_mean calculates the mean of a tensor. In this case the mean of the distance for the whole training set Train the model Training a model in TensorFlow is extremely simple, you just define a trainer operator! Step9: This operator will minimize the cost using Gradient Descent, which is the most common optimizer to find parameters that will minimize the cost. We'll also define a prediction operator that will be able to output a prediction. * 0 means no fizz no buzz * 1 means fizz * 2 means buzz * 3 means fizzbuzz Step10: Iterate until the model is good enough One epoch consists of one full training cycle on the training set. Once every sample in the set is seen, you start again - marking the beginning of the 2nd epoch. source The training set is randomly permuted between each epoch. The learning is not done on the full set at once. Instead the training set is divided into small batches and the learning is done for each of them. Step11: Here is an example of the indices used for one epoch
Python Code: import numpy as np import tensorflow as tf Explanation: Fizz Buzz with Tensor Flow. This notebook to explain the code from Fizz Buzz in Tensor Flow blog post written by Joel Grus You should read his post first it is super funny! His code try to play the Fizz Buzz game by using machine learning. This notebook is for real beginners who whant to understand the basis of TensorFlow by reading code. Feedback welcome @dh7net Let's start! The code contain several part: * Create the training set * Encode the input (a number) * Encode the result (fizz or buzz, none or both?) * create the training set * Build a model * Train the model * Create a cost function * Iterate * Make prediction End of explanation NUM_DIGITS = 10 def binary_encode(i, num_digits): return np.array([i >> d & 1 for d in range(num_digits)]) #Let's check if it works for i in range(10): print i, binary_encode(i, NUM_DIGITS) Explanation: Create the trainning set Encode the input (a number) This example convert the number to a binary representation End of explanation def fizz_buzz_encode(i): if i % 15 == 0: return np.array([0, 0, 0, 1]) elif i % 5 == 0: return np.array([0, 0, 1, 0]) elif i % 3 == 0: return np.array([0, 1, 0, 0]) else: return np.array([1, 0, 0, 0]) def fizz_buzz(i, prediction): return [str(i), "fizz", "buzz", "fizzbuzz"][prediction] # let'see how the encoding works for i in range(1, 16): print i, fizz_buzz_encode(i) # and the decoding for i in range(1, 16): fizz_or_buzz_number = np.argmax(fizz_buzz_encode(i)) print i, fizz_or_buzz_number, fizz_buzz(i, fizz_or_buzz_number) Explanation: Encode the result (fizz or buzz, none or both?) The fizz_buzz function calculate what the output should be, an encoded it to a 4 dimention vector. The fizz_buzz function take a number and a prediction, and output a string End of explanation training_size = 2 ** NUM_DIGITS print "Size of the set:", training_size trX = np.array([binary_encode(i, NUM_DIGITS) for i in range(101, training_size)]) trY = np.array([fizz_buzz_encode(i) for i in range(101, training_size)]) print "First 15 values:" for i in range(101, 116): print i, trX[i], trY[i] Explanation: Create the training set End of explanation def init_weights(shape): return tf.Variable(tf.random_normal(shape, stddev=0.01)) Explanation: Creation of the model The model is made of: * one hidden layer that contains 100 neurons * one output layer The input is fully connected to the hidden layer and a relu function is applyed The relu function is a rectifier that just output zero if the input is negative. First we'll define an helper function to initialise parameters with randoms values End of explanation NUM_HIDDEN = 100 #Number of neuron in the hidden layer X = tf.placeholder("float", [None, NUM_DIGITS]) Y = tf.placeholder("float", [None, 4]) w_h = init_weights([NUM_DIGITS, NUM_HIDDEN]) w_o = init_weights([NUM_HIDDEN, 4]) Explanation: X is the input Y is the output w_h are the parameters between the input and the hidden layer w_o are the parameters between the hidden layer and the output End of explanation def model(X, w_h, w_o): h = tf.nn.relu(tf.matmul(X, w_h)) return tf.matmul(h, w_o) py_x = model(X, w_h, w_o) Explanation: To create the model we apply the w_h parameters to the input, and then we aply the relu function to calculate the value of the hidden layer. The w_o coeefient are used to calculate the output layer. 
No rectification is applyed py_x is the predicted value for a given input represented as a vector (dimention 4) End of explanation cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(py_x, Y)) Explanation: Training Create the cost function The cost function measure how bad the model is. It is the distance between the prediction (py_x) and the reality (Y). End of explanation train_op = tf.train.GradientDescentOptimizer(0.05).minimize(cost) Explanation: softmax_cross_entropy_with_logits(py_x, Y) measure the distance between py_x and Y. SoftMax is the classical way to measure the distance between a predicted result and the actual result in a cost function. reduce_mean calculate the mean of a tensor. In this case the mean of the distance for the whole training set Train the model Training a model in TensorFlow is extremly simple, you just define a trainer operator! End of explanation predict_op = tf.argmax(py_x, 1) Explanation: This operator will minimize the cost using the Gradient Descent witch is the most common optimizer to find parameters than will minimise the cost. We'll also define a prediction operator that will be able to output a prediction. * 0 means no fizz no buzz * 1 means fizz * 2 means buzz * 3 means fizzbuzz End of explanation BATCH_SIZE = 128 Explanation: Iterate until the model is good enough One epoch consists of one full training cycle on the training set. Once every sample in the set is seen, you start again - marking the beginning of the 2nd epoch. source The training set is randomly permuted between each epoch. The learning is not done on the full set at once. Instead the learning set is divided in small batch and the learning is done for each of them. End of explanation #random permutation of the index will be used during the training for each epoch permutation_index = np.random.permutation(range(len(trX))) for start in range(0, len(trX), BATCH_SIZE): end = start + BATCH_SIZE print "Batch starting at", start print permutation_index[start:end] # Launch the graph in a session sess = tf.Session() tf.initialize_all_variables().run(session=sess) for epoch in range(5000): # Shuffle the data before each training iteration. p = np.random.permutation(range(len(trX))) trX, trY = trX[p], trY[p] # Train in batches of 128 inputs. for start in range(0, len(trX), BATCH_SIZE): end = start + BATCH_SIZE sess.run(train_op, feed_dict={X: trX[start:end], Y: trY[start:end]}) # And print the current accuracy on the training data. if (epoch%100==0): # each 100 epoch, to not overflow the jupyter log # np.mean(A==B) return a number between 0 and 1. (true_count/total_count) print(epoch, np.mean(np.argmax(trY, axis=1) == sess.run(predict_op, feed_dict={X: trX, Y: trY}))) # And now for some fizz buzz numbers = np.arange(1, 101) teX = np.transpose(binary_encode(numbers, NUM_DIGITS)) teY = sess.run(predict_op, feed_dict={X: teX}) output = np.vectorize(fizz_buzz)(numbers, teY) print output sess.close() # don't forget to close the session if you don't use it anymore. Or use the *with* statement. # Lets check the quality Y = np.array([fizz_buzz_encode(i) for i in range(1,101)]) print "accuracy", np.mean(np.argmax(Y, axis=1) == teY) for i in range(1,100): actual = fizz_buzz(i, np.argmax(fizz_buzz_encode(i))) predicted = output[i-1] ok = True if actual <> predicted: ok = False print i, "{:>8}".format(actual), "{:>8}".format(predicted), ok Explanation: Here an example of index used for one epoch: End of explanation
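As a quick sanity check of the encoders described above (an addition, not part of the original notebook), the following standalone NumPy snippet re-implements binary_encode and fizz_buzz_encode exactly as defined earlier and asserts a few known cases:

```
import numpy as np

def binary_encode(i, num_digits):
    # least-significant bit first, as in the notebook
    return np.array([i >> d & 1 for d in range(num_digits)])

def fizz_buzz_encode(i):
    # classes: 0 = plain number, 1 = fizz, 2 = buzz, 3 = fizzbuzz
    if i % 15 == 0: return np.array([0, 0, 0, 1])
    if i % 5 == 0:  return np.array([0, 0, 1, 0])
    if i % 3 == 0:  return np.array([0, 1, 0, 0])
    return np.array([1, 0, 0, 0])

assert list(binary_encode(6, 4)) == [0, 1, 1, 0]   # 6 is 0b0110
assert np.argmax(fizz_buzz_encode(30)) == 3        # 30 -> fizzbuzz
assert np.argmax(fizz_buzz_encode(9)) == 1         # 9  -> fizz
print("encoders behave as described")
```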
1,613
Given the following text description, write Python code to implement the functionality described below step by step Description: Learn how to perform regression We're going to train a neural network that knows how to perform inference in robust linear regression models. The network will have as input Step2: First step Step3: Instantiate the generative model We'll define M_train, a PyMC model which doesn't have any data attached. We can use M_train.draw_from_prior() to construct synthetic datasets to use for training. Step4: Define a network which will invert this model Each of the 3 dimensions of the latent space will be modeled by a mixture of Gaussians. Step5: We can use our network to sample and to compute logpdfs. The primary interface is through .sample(parents), and .logpdf(parents, latent). Both of these expect pytorch tensors as inputs. Step7: Optimize network parameters The training_epoch code samples a synthetic dataset, and performs minibatch updates on it for a while. Optionally, it can decide when to stop by examining synthetic validation data. Step10: Define plotting and testing functions We'll use PyMC's default Metropolis-Hastings as a benchmark, and compare to sampling directly from the learned model, and importance sampling.
Python Code: import numpy as np import torch from torch.autograd import Variable import sys, inspect sys.path.insert(0, '..') %matplotlib inline import pymc import matplotlib.pyplot as plt from learn_smc_proposals import cde from learn_smc_proposals.utils import systematic_resample import seaborn as sns sns.set_context("notebook", font_scale=1.5, rc={"lines.markersize": 12}) sns.set_style('ticks') Explanation: Learn how to perform regression We're going to train a neural network that knows how to perform inference in robust linear regression models. The network will have as input: $x$, a vector of input values, and $y$, a vector of output values. It will learn how to perform posterior inference for the parameter vector $w$, in a linear regression model. End of explanation num_points = 10 # number of points in the synthetic dataset we train on def robust_regression(x, t, sigma_0=np.array([10.0, 1.0, .1]), epsilon=1.0): X: input (NxD matrix) t: output (N vector) sigma_0: prior std hyperparameter for weights epsilon: std hyperparameter for output noise if x is not None: N, D = x.shape assert D == 1 else: N = num_points D = 1 # assume our input variable is bounded by some constant const = 10.0 x = pymc.Uniform('x', lower=-const, upper=const, value=x, size=(N, D), observed=(x is not None)) # create design matrix (add intercept) @pymc.deterministic(plot=False) def X(x=x, N=N): return np.hstack((np.ones((N,1)), x, x**2)) w = pymc.Laplace('w', mu=np.zeros((D+2,)), tau=sigma_0**(-1.0)) @pymc.deterministic(plot=False, trace=False) def mu(X=X, w=w): return np.dot(X, w) y = pymc.NoncentralT('y', mu=mu, lam=epsilon**(-2.0), nu=4, value=t, observed=(t is not None)) return locals() Explanation: First step: let's define linear regression as a PyMC model. This model has: an intercept term, a linear term, and a quadratic term Laplace distribution (double exponential) priors on the weights T-distributed (heavy-tailed) likelihoods End of explanation M_train = pymc.Model(robust_regression(None, None)) def get_observed(model): return np.atleast_2d(np.concatenate((model.x.value.ravel(), model.y.value.ravel()))) def get_latent(model): return np.atleast_2d(model.w.value) def generate_synthetic(model, size=100): observed, latent = get_observed(model), get_latent(model) for i in xrange(size-1): model.draw_from_prior() observed = np.vstack((observed, get_observed(model))) latent = np.vstack((latent, get_latent(model))) return observed, latent gen_data = lambda num_samples: generate_synthetic(M_train, num_samples) example_minibatch = gen_data(100) Explanation: Instantiate the generative model We'll define M_train, a PyMC model which doesn't have any data attached. We can use M_train.draw_from_prior() to construct synthetic datasets to use for training. End of explanation observed_dim = num_points*2 latent_dim = 3 hidden_units = 300 hidden_layers = 2 mixture_components = 3 dist_est = cde.ConditionalRealValueMADE(observed_dim, latent_dim, hidden_units, hidden_layers, mixture_components) if torch.cuda.is_available(): dist_est.cuda() dist_est Explanation: Define a network which will invert this model Each of the 3 dimensions of the latent space will be modeled by a mixture of Gaussians. 
End of explanation example_parents = Variable(torch.FloatTensor(example_minibatch[0][:5])) example_latents = Variable(torch.FloatTensor(example_minibatch[1][:5])) if torch.cuda.is_available(): example_parents = example_parents.cuda() example_latents = example_latents.cuda() print "Sampled from p(latent|parents):\n\n", dist_est.sample(example_parents) print "Evaluate log p(latent|parents):\n\n", dist_est.logpdf(example_parents, example_latents) Explanation: We can use our network to sample and to compute logpdfs. The primary interface is through .sample(parents), and .logpdf(parents, latent). Both of these expect pytorch tensors as inputs. End of explanation def _iterate_minibatches(inputs, outputs, batchsize): for start_idx in range(0, len(inputs) - batchsize + 1, batchsize): excerpt = slice(start_idx, start_idx + batchsize) yield Variable(torch.FloatTensor(inputs[excerpt])), Variable(torch.FloatTensor(outputs[excerpt])) def training_step(optimizer, dist_est, gen_data, dataset_size, batch_size, max_local_iters=10, misstep_tolerance=0, verbose=False): Training function for fitting density estimator to simulator output # Train synthetic_ins, synthetic_outs = gen_data(dataset_size) validation_size = dataset_size/10 validation_ins, validation_outs = [Variable(torch.FloatTensor(t)) for t in gen_data(validation_size)] missteps = 0 num_batches = float(dataset_size)/batch_size USE_GPU = dist_est.parameters().next().is_cuda if USE_GPU: validation_ins = validation_ins.cuda() validation_outs = validation_outs.cuda() validation_err = -torch.mean(dist_est.logpdf(validation_ins, validation_outs)).data[0] for local_iter in xrange(max_local_iters): train_err = 0 for inputs, outputs in _iterate_minibatches(synthetic_ins, synthetic_outs, batch_size): optimizer.zero_grad() if USE_GPU: loss = -torch.mean(dist_est.logpdf(inputs.cuda(), outputs.cuda())) else: loss = -torch.mean(dist_est.logpdf(inputs, outputs)) loss.backward() optimizer.step() train_err += loss.data[0]/num_batches next_validation_err = -torch.mean(dist_est.logpdf(validation_ins, validation_outs)).data[0] if next_validation_err > validation_err: missteps += 1 validation_err = next_validation_err if missteps > misstep_tolerance: break if verbose: print train_err, validation_err, "(", local_iter+1, ")" return train_err, validation_err, local_iter+1 optimizer = torch.optim.Adam(dist_est.parameters()) trace_train = [] trace_validation = [] trace_local_iters = [] num_iterations = 500 dataset_size = 2500 batch_size = 250 for i in xrange(num_iterations): verbose = (i+1) % 25 == 0 if verbose: print "["+str(1+len(trace_train))+"]", t,v,l = training_step(optimizer, dist_est, gen_data, dataset_size, batch_size, verbose=verbose) trace_train.append(t) trace_validation.append(v) trace_local_iters.append(l) plt.figure(figsize=(10,3.5)) plt.plot(np.array(trace_train)) plt.plot(np.array(trace_validation)) plt.legend(['train error', 'validation error']); plt.plot(np.array(trace_local_iters)) plt.legend(['iterations per dataset']) Explanation: Optimize network parameters The training_epoch code samples a synthetic dataset, and performs minibatch updates on it for a while. Optionally, it can decide when to stop by examining synthetic validation data. 
End of explanation def gen_example_pair(model): model.draw_from_prior() data_x = model.X.value data_y = model.y.value true_w = model.w.value return data_x, data_y, true_w def estimate_MCMC(data_x, data_y, ns, iters=10000, burn=0.5): MCMC estimate of weight distribution mcmc_est = pymc.MCMC(robust_regression(data_x[:,1:2], data_y)) mcmc_est.sample(iters, burn=burn*iters, thin=np.ceil(burn*iters/ns)) trace_w = mcmc_est.trace('w').gettrace()[:ns] return trace_w def estimate_NN(network, data_x, data_y, ns): NN proposal density for weights nn_input = Variable(torch.FloatTensor(np.concatenate((data_x[:,1], data_y[:])))) print nn_input.size() nn_input = nn_input.unsqueeze(0).repeat(ns,1) if network.parameters().next().is_cuda: nn_input = nn_input.cuda() values, log_q = network.propose(nn_input) return values.cpu().data.numpy(), log_q.squeeze().cpu().data.numpy() def sample_prior_proposals(model, ns): samples = [] for n in xrange(ns): model.draw_from_prior() samples.append(model.w.value) return np.array(samples) def compare_and_plot(ns=100, alpha=0.05, data_x=None, data_y=None, true_w=None): model = pymc.Model(robust_regression(None, None)) prior_proposals = sample_prior_proposals(model, ns*10) if data_x is None: data_x, data_y, true_w = gen_example_pair(model) mcmc_trace = estimate_MCMC(data_x, data_y, ns) nn_proposals, logq = estimate_NN(dist_est, data_x, data_y, ns*10) mcmc_mean = mcmc_trace.mean(0) nn_mean = nn_proposals.mean(0) print print "True (generating) w:", true_w print "MCMC weight mean:", mcmc_mean print "NN weight proposal mean:", nn_mean domain = np.linspace(min(data_x[:,1])-2, max(data_x[:,1])+2, 50) plt.figure(figsize=(14,3)) plt.subplot(141) plt.plot(domain, mcmc_mean[0] + mcmc_mean[1]*domain + mcmc_mean[2]*domain**2, "b--") for i in range(ns): plt.plot(domain, mcmc_trace[i,0] + mcmc_trace[i,1]*domain + mcmc_trace[i,2]*domain**2, "b-", alpha=alpha) plt.plot(data_x[:,1], data_y, "k.") plt.xlim(np.min(domain),np.max(domain)) limy = plt.ylim() plt.legend(["MH posterior"]) ax = plt.subplot(143) plt.plot(domain, nn_mean[0] + nn_mean[1]*domain + nn_mean[2]*domain**2, "r--") for i in range(ns): plt.plot(domain, nn_proposals[i,0] + nn_proposals[i,1]*domain + nn_proposals[i,2]*domain**2, "r-", alpha=alpha) plt.plot(data_x[:,1], data_y, "k.") plt.legend(["NN proposal"]) plt.ylim(limy) plt.xlim(min(domain),max(domain)); ax.yaxis.set_ticklabels([]) ax = plt.subplot(142) prior_samples_mean = prior_proposals.mean(0) prior_proposals = prior_proposals[::10] plt.plot(domain, prior_samples_mean[0] + prior_samples_mean[1]*domain + prior_samples_mean[2]*domain**2, "c--") for i in range(ns): plt.plot(domain, prior_proposals[i,0] + prior_proposals[i,1]*domain + prior_proposals[i,2]*domain**2, "c-", alpha=alpha) plt.plot(data_x[:,1], data_y, "k.") plt.legend(["Prior"]) plt.ylim(limy) plt.xlim(min(domain),max(domain)); ax.yaxis.set_ticklabels([]) # compute NN-IS estimate logp = [] nn_test_model = pymc.Model(robust_regression(data_x[:,1:2], data_y)) for nnp in nn_proposals: nn_test_model.w.value = nnp try: next_logp = nn_test_model.logp except: next_logp = -np.Inf logp.append(next_logp) logp = np.array(logp) w = np.exp(logp - logq) / np.sum(np.exp(logp - logq)) nnis_mean = np.sum(w*nn_proposals.T,1) print "NN-IS estimated mean:", nnis_mean print "NN-IS ESS:", 1.0/np.sum(w**2), w.shape[0] ax = plt.subplot(144) plt.plot(domain, nnis_mean[0] + nnis_mean[1]*domain + nnis_mean[2]*domain**2, "g--") nn_resampled = nn_proposals[systematic_resample(np.log(w))][::10] for i in range(ns): plt.plot(domain, 
nn_resampled[i,0] + nn_resampled[i,1]*domain + nn_resampled[i,2]*domain**2, "g-", alpha=alpha) plt.plot(data_x[:,1], data_y, "k.") plt.legend(["NN-IS posterior"]) plt.ylim(limy) plt.xlim(min(domain),max(domain)); ax.yaxis.set_ticklabels([]) plt.tight_layout() compare_and_plot(); compare_and_plot(); compare_and_plot(); compare_and_plot(); compare_and_plot(); Explanation: Define plotting and testing functions We'll use PyMC's default Metropolis-Hastings as a benchmark, and compare to sampling directly from the learned model, and importance sampling. End of explanation
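A side note on the NN-IS step above (an addition, not from the original notebook): the weights w = exp(logp - logq) / sum(exp(logp - logq)) computed inside compare_and_plot can overflow or underflow for extreme log-densities. A hedged sketch of a numerically stable alternative, using a max-shift before exponentiating:

```
import numpy as np

def normalized_is_weights(logp, logq):
    # self-normalized importance weights, w_i proportional to exp(logp_i - logq_i),
    # computed with a max-shift so the exponentials stay in range
    log_w = np.asarray(logp, dtype=float) - np.asarray(logq, dtype=float)
    log_w -= np.max(log_w)
    w = np.exp(log_w)
    return w / w.sum()

def effective_sample_size(w):
    # same ESS diagnostic printed in the notebook: 1 / sum(w^2)
    return 1.0 / np.sum(w ** 2)
```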
1,614
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. Step2: Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following Step5: Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x. Step8: One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint Step10: Randomize Data As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation. Step12: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step17: Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note Step20: Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling Step23: Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option Step26: Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option Step29: Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). 
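Before the full notebook code that follows, here is a hedged, standalone sketch of the two preprocessing helpers the description asks for; the function names carry a _sketch suffix to make clear they are illustrative rather than the graded implementations:

```
import numpy as np

def normalize_sketch(x):
    # min-max scale image data into [0, 1]; output has the same shape as x
    return (x - x.min()) / (x.max() - x.min())

def one_hot_encode_sketch(labels, n_classes=10):
    # row lookup into an identity matrix gives the one-hot vectors directly
    return np.eye(n_classes)[np.asarray(labels)]

print(one_hot_encode_sketch([0, 3, 9]))  # three 10-dimensional one-hot rows
```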
Shortcut option Step32: Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model Step35: Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following Step37: Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy. Step38: Hyperparameters Tune the following parameters Step40: Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. Step42: Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches. Step45: Checkpoint The model has been saved to disk. Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
Python Code: DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' # Use Floyd's cifar-10 dataset if present floyd_cifar10_location = '/input/cifar-10/python.tar.gz' if isfile(floyd_cifar10_location): tar_gz_path = floyd_cifar10_location else: tar_gz_path = 'cifar-10-python.tar.gz' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(tar_gz_path): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', tar_gz_path, pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open(tar_gz_path) as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path) Explanation: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. End of explanation %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 1 sample_id = 5 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id) Explanation: Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog * horse * ship * truck Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions. End of explanation def normalize(x): Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data min = np.amin(x) max = np.amax(x) return (x - min) / (max - min) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_normalize(normalize) Explanation: Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x. 
End of explanation x = np.array([6, 1, 5]) np.array([[1 if i == y else 0 for i in range(10)] for y in x]) def one_hot_encode(x): One hot encode a list of sample labels. Return a one-hot encoded vector for each label. : x: List of sample Labels : return: Numpy array of one-hot encoded labels return np.array([[1 if i == y else 0 for i in range(10)] for y in x]) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_one_hot_encode(one_hot_encode) Explanation: One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel. End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode) Explanation: Randomize Data As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import pickle import problem_unittests as tests import helper # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb')) Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation import tensorflow as tf def neural_net_image_input(image_shape): Return a Tensor for a batch of image input : image_shape: Shape of the images : return: Tensor for image input. return tf.placeholder(tf.float32, shape = [None] + list(image_shape), name = "x") def neural_net_label_input(n_classes): Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. return tf.placeholder(tf.float32, shape = (None, n_classes), name = "y") def neural_net_keep_prob_input(): Return a Tensor for keep probability : return: Tensor for keep probability. # TODO: Implement Function return tf.placeholder(tf.float32, shape = None, name = "keep_prob") DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input) Explanation: Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. 
In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup. However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allow for a dynamic size. End of explanation def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernal size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernel size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor x_size = x_tensor.get_shape().as_list()[1:] W = tf.Variable(tf.truncated_normal(x_size + [conv_num_outputs], stddev=0.05)) b = tf.Variable(tf.zeros(conv_num_outputs)) x = tf.nn.conv2d(x_tensor, W, strides = (1, conv_strides[0], conv_strides[1], 1), padding = "SAME") x = tf.nn.bias_add(x, b) x = tf.nn.relu(x) x = tf.nn.max_pool(x, ksize = (1, pool_ksize[0], pool_ksize[1], 1), strides = (1, pool_strides[0], pool_strides[1], 1), padding = "SAME") return x DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_con_pool(conv2d_maxpool) Explanation: Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. * Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. 
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers. End of explanation import numpy as np def flatten(x_tensor): Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). x_size = x_tensor.get_shape().as_list() return tf.reshape(x_tensor, shape=(-1, np.prod(x_size[1:]))) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_flatten(flatten) Explanation: Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation def fully_conn(x_tensor, num_outputs): Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. x_size = x_tensor.get_shape().as_list()[1:] W = tf.Variable(tf.truncated_normal(x_size + [num_outputs], stddev=.05)) b = tf.Variable(tf.zeros(num_outputs)) x = tf.add(tf.matmul(x_tensor, W), b) x = tf.nn.relu(x) return x DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_fully_conn(fully_conn) Explanation: Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation def output(x_tensor, num_outputs): Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. x_size = x_tensor.get_shape().as_list()[1:] W = tf.Variable(tf.truncated_normal(x_size + [num_outputs], stddev=.05)) b = tf.Variable(tf.zeros(num_outputs)) x = tf.add(tf.matmul(x_tensor, W), b) return x DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_output(output) Explanation: Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this. End of explanation def conv_net(x, keep_prob): Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. 
: return: Tensor that represents logits # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) x = conv2d_maxpool(x, conv_num_outputs=64, conv_ksize=2, conv_strides=(1, 1), pool_ksize=(2, 2), pool_strides=(2, 2)) x = tf.layers.dropout(x, keep_prob) # x = conv2d_maxpool(x, conv_num_outputs=128, conv_ksize=3, # conv_strides=(1, 1), pool_ksize=(2, 2), pool_strides=(2, 2)) # x = conv2d_maxpool(x, conv_num_outputs=256, conv_ksize=3, # conv_strides=(1, 1), pool_ksize=(2, 2), pool_strides=(2, 2)) # TODO: Apply a Flatten Layer # Function Definition from Above: # flatten(x_tensor) x = flatten(x) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) x = fully_conn(x, 512) x = tf.layers.dropout(x, keep_prob) # x = fully_conn(x, 128) # x = tf.layers.dropout(x, keep_prob) # x = fully_conn(x, 64) # x = tf.layers.dropout(x, keep_prob) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) x = output(x, 10) # TODO: return output return x DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net) Explanation: Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Fully Connected Layers Apply an Output Layer Return the output Apply TensorFlow's Dropout to one or more layers in the model using keep_prob. End of explanation def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data session.run(optimizer, feed_dict={keep_prob: keep_probability, x: feature_batch, y: label_batch}) pass DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_train_nn(train_neural_network) Explanation: Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. 
The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be called for each batch, so tf.global_variables_initializer() has already been called. Note: Nothing needs to be returned. This function is only optimizing the neural network. End of explanation def print_stats(session, feature_batch, label_batch, cost, accuracy): Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy: TensorFlow accuracy function # print(feature_batch) loss, acc = session.run([cost, accuracy], feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.}) print("Training Loss= " + \ "{:.6f}".format(loss) + ", Training Accuracy= " + \ "{:.5f}".format(acc)) valid_loss, valid_acc = session.run([cost, accuracy], feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.}) print("Validation Loss= " + \ "{:.6f}".format(valid_loss) + ", Validation Accuracy= " + \ "{:.5f}".format(valid_acc)) # batch_cost = session.run(cost, feed_dict={keep_probability: 1, # x: feature_batch, y: label_batch}) # batch_accuracy = session.run(accuracy, feed_dict={keep_probability: 1, # x: feature_batch, y: label_batch}) # valid_cost = session.run(cost, feed_dict={keep_probability: 1, # x: valid_features, y: valid_labels}) # valid_accuracy = session.run(accuracy, feed_dict={keep_probability: 1, # x: valid_features, y: valid_labels}) # print('Training Cost: {}'.format(batch_cost)) # print('Training Accuracy: {}'.format(batch_accuracy)) # print('Validation Cost: {}'.format(valid_cost)) # print('Validation Accuracy: {}'.format(valid_accuracy)) # print('Accuracy: {}'.format(accuracy)) pass Explanation: Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy. End of explanation # TODO: Tune Parameters epochs = 10 batch_size = 256 keep_probability = .5 Explanation: Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or start overfitting * Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ... * Set keep_probability to the probability of keeping a node using dropout End of explanation DON'T MODIFY ANYTHING IN THIS CELL print('Checking the Training on a Single Batch...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) Explanation: Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batches = 5 for batch_i in range(1, n_batches + 1): for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) # Save Model saver = tf.train.Saver() save_path = saver.save(sess, save_model_path) Explanation: Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches. End of explanation DON'T MODIFY ANYTHING IN THIS CELL %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): Test the saved model against the test dataset test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + '.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model() Explanation: Checkpoint The model has been saved to disk. Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. End of explanation
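A hedged back-of-envelope check of the tensor shapes produced by the conv_net defined above, assuming the 'SAME' padding and the layer sizes actually used in this notebook (one 2x2 max pool with stride 2 and 64 convolution outputs):

```
height = width = 32          # CIFAR-10 image size
pool_stride = 2              # pool_strides=(2, 2) in conv2d_maxpool
channels = 64                # conv_num_outputs=64 in conv_net
height //= pool_stride       # 32 -> 16
width //= pool_stride        # 32 -> 16
flattened_size = height * width * channels
print(flattened_size)        # 16384 values per image feeding fully_conn(x, 512)
```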
1,615
Given the following text description, write Python code to implement the functionality described below step by step Description: Feature learning and write output Step1: Classification Step2: Sort results by accuracy of all features ('All' - Column 2) Step3: Confusion matrix According to the results above, the best classifier = LDA and the best transformation = LDA. Step4: Export the confusion matrix to .csv because it is too large (137 x 137) to visualise. Step5: Use the figure functionality to zoom in on the confusion matrix.
Python Code: print "mapping..." data_list, pcadata_list, ldadata_list, nmfdata_list, ssnmfdata_list, classlabs, audiolabs = mapper.map_and_average_frames(min_variance=0.99) mapper.write_output(data_list, pcadata_list, ldadata_list, nmfdata_list, ssnmfdata_list, classlabs, audiolabs) Explanation: Feature learning and write output End of explanation df_results = classification.classify_for_filenames(file_list=mapper.OUTPUT_FILES) Explanation: Classification End of explanation df_results_sorted = df_results.sort_values(2, ascending=False, inplace=False) df_results_sorted.head() print df_results_sorted.to_latex(index=False) Explanation: Sort results by accuracy of all features ('All' - Column 2) End of explanation CF, labels = classification.confusion_matrix_for_dataset(mapper.OUTPUT_FILES[0], classifier='LDA') Explanation: Confusion matrix According to results above, best classifier = LDA and best transformation = LDA. End of explanation np.savetxt('../data/confusion_matrix_labels.csv', labels, fmt='%s') np.savetxt('../data/confusion_matrix.csv', CF, fmt='%10.5f') Explanation: Export the confusion matrix into .csv because it is too large (137 x 137) to visualise. End of explanation %matplotlib notebook plt.figure(figsize=(15, 15)) classification.plot_CF(CF, labels=labels) Explanation: Use the figure functionality to zoom in the confusion matrix. End of explanation
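As a follow-up sketch (not part of the original notebook), the exported files can be reloaded to summarise the 137-class confusion matrix numerically; this assumes the matrix holds raw counts with true classes along the rows and that each label is a single token:

```
import numpy as np

CF = np.loadtxt('../data/confusion_matrix.csv')                      # path used above
labels = np.loadtxt('../data/confusion_matrix_labels.csv', dtype=str)

overall_accuracy = np.trace(CF) / CF.sum()
per_class_accuracy = np.diag(CF) / CF.sum(axis=1)
most_confused = labels[np.argsort(per_class_accuracy)[:10]]          # 10 hardest classes
print(overall_accuracy)
print(most_confused)
```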
1,616
Given the following text description, write Python code to implement the functionality described below step by step Description: Stochastic depth Dropout proved to be a working tool that improves the stability of a neural network. Essentially, dropout shuts down some neurons of a specific layer. Gao Huang, Yu Sun and Zhuang Liu, in the article "Deep Networks with Stochastic Depth", went further and attempted to shut down whole blocks of layers. In this notebook, we will investigate whether stochastic depth improves the accuracy of neural networks. Pay attention to the file if you want to know how the Stochastic ResNet is implemented. Step1: In our experiments we will work with the MNIST dataset Step2: Firstly, let us define the shape of the inputs of our model, the loss function and an optimizer Step3: Secondly, we create pipelines for train and test Simple ResNet model Step4: The same thing for the Stochastic ResNet model Step5: Let's train our models Step6: Show test accuracy for all iterations
Python Code: import sys import matplotlib.pyplot as plt from tqdm import tqdm_notebook as tqn %matplotlib inline sys.path.append('../../..') sys.path.append('../../utils') import utils from resnet_with_stochastic_depth import StochasticResNet from batchflow import B,V,F from batchflow.opensets import MNIST from batchflow.models.tf import ResNet50 Explanation: Stochastic depth Dropout proved to be a working tool that improves the stability of a neural network. Essentially, dropout shuts down some neurons of a specific layer. Gao Huang, Yu Sun, Zhuang Liu What in the article "Deep Networks with Stochastic Depth" went into further and attempted to shut down whole blocks of layers. In this notebook, we will investigate whether the stochastic depth improves accuracy of neural networks. Pay attention to the file if you want to know how the Stochastic ResNet is implemented. End of explanation dset = MNIST() Explanation: In our expements we will work with MNIST dataset End of explanation ResNet_config = { 'inputs': {'images': {'shape': (28, 28, 1)}, 'labels': {'classes': (10), 'transform': 'ohe', 'dtype': 'int64', 'name': 'targets'}}, 'input_block/inputs': 'images', 'loss': 'softmax_cross_entropy', 'optimizer': 'Adam', 'output': dict(ops=['accuracy']) } Stochastic_config = {**ResNet_config} Explanation: Firstly, let us define the shape of inputs of our model, loss function and an optimizer: End of explanation res_train_ppl = (dset.train.p .init_model('dynamic', ResNet50, 'resnet', config=ResNet_config) .train_model('resnet', feed_dict={'images': B('images'), 'labels': B('labels')})) res_test_ppl = (dset.test.p .init_variable('resacc', init_on_each_run=list) .import_model('resnet', res_train_ppl) .predict_model('resnet', fetches='output_accuracy', feed_dict={'images': B('images'), 'labels': B('labels')}, save_to=V('resacc'), mode='a')) Explanation: Secondly, we create pipelines for train and test Simple ResNet model End of explanation stochastic_train_ppl = (dset.train.p .init_model('dynamic', StochasticResNet, 'stochastic', config=Stochastic_config) .init_variable('stochasticacc', init_on_each_run=list) .train_model('stochastic', feed_dict={'images': B('images'), 'labels': B('labels')})) stochastic_test_ppl = (dset.test.p .init_variable('stochasticacc', init_on_each_run=list) .import_model('stochastic', stochastic_train_ppl) .predict_model('stochastic', fetches='output_accuracy', feed_dict={'images': B('images'), 'labels': B('labels')}, save_to=V('stochasticacc'), mode='a')) Explanation: The same thing for Stochastic ResNet model End of explanation for i in tqn(range(1000)): res_train_ppl.next_batch(400, n_epochs=None, shuffle=True) res_test_ppl.next_batch(400, n_epochs=None, shuffle=True) stochastic_train_ppl.next_batch(400, n_epochs=None, shuffle=True) stochastic_test_ppl.next_batch(400, n_epochs=None, shuffle=True) Explanation: Let's train our models End of explanation resnet_loss = res_test_ppl.get_variable('resacc') stochastic_loss = stochastic_test_ppl.get_variable('stochasticacc') utils.draw(resnet_loss, 'ResNet', stochastic_loss, 'Stochastic', window=20, type_data='accuracy') Explanation: Show test accuracy for all iterations End of explanation
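For intuition, here is a minimal NumPy sketch of the stochastic-depth rule described above (drop a whole residual branch with probability 1 - p during training, scale it by p at test time); it is an illustration only and not the batchflow-based StochasticResNet imported in this notebook:

```
import numpy as np

def stochastic_residual_block(x, block_fn, survival_p, training, rng=np.random):
    # block_fn is the residual branch; the identity shortcut is always kept
    if training:
        if rng.rand() < survival_p:
            return x + block_fn(x)          # branch survives this pass
        return x                            # branch dropped entirely
    return x + survival_p * block_fn(x)     # expected contribution at inference
```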
1,617
Given the following text description, write Python code to implement the functionality described below step by step Description: Multiple-beam interference. Interference filters. The following notebook explains the irradiance obtained in transmission and reflection when a beam hits a thin plane-parallel plate, taking into account the multiple internal reflections produced inside the plate. It is assumed that the student already knows the case of interference between the first two reflected or transmitted waves Statement of the problem Let us consider a plane-parallel plate of thickness h and refractive index n, surrounded by air. The treatment with only the first 2 reflected or transmitted waves is a valid approximation when the reflection coefficients are low. Otherwise, it becomes necessary to consider all the internal reflections produced in the plate and, consequently, all the transmitted or reflected waves. We are going to see what happens to the transmitted and reflected irradiance in this case. What we are interested in is whether the position of the maxima/minima obtained in the two-wave interference case changes, and whether the contrast changes. Step1: Calculation of the total field in reflection and in transmission The previous figure shows the successive internal reflections in the plate that generate the waves we want to make interfere. The procedure is analogous to what was already studied with 2 waves. Placing a lens and observing in its focal plane, we will obtain interference rings. We will call the reflection coefficient at the air-plate transition r, and the transmission coefficient t, while we will call r' and t' the coefficients for the plate-air transition. With this in mind, we can see that the waves that interfere in transmission are, $$E_{t1} = E_0 t t' e^{i \omega t}$$ $$E_{t2} = E_0 t t' r'^2 e^{i[ \omega t - \delta_G ]}$$ $$E_{t3} = E_0 t t' r'^4 e^{i[ \omega t - 2 \delta_G ]}$$ . . . $$E_{tN} = E_0 t t' r'^{2(N-1)} e^{i[ \omega t - (N-1) \delta_G ]}$$ where $\delta_G = \frac{4 \pi}{\lambda} n h \cos(\theta_t)$ is the phase difference between successive waves due to the path difference (the phase shift due to the reflections is accounted for in the reflection coefficients). Likewise, the waves that interfere in reflection are, $$E_{r1} = E_0 r e^{i \omega t}$$ $$E_{r2} = E_0 t t' r' e^{i[ \omega t - \delta_G ]}$$ $$E_{r3} = E_0 t t' r'^3 e^{i[ \omega t - 2 \delta_G ]}$$ . . . $$E_{rN} = E_0 t t' r'^{(2N-3)} e^{i[ \omega t - (N-1) \delta_G ]}$$ If the plate is long enough or the angle of incidence is not too large, we will have many interfering waves ($N \rightarrow \infty$). Summing all of them, we obtain the total field in transmission or reflection.
Esta suma puede realizarse y se obtiene, $$E_t = E_0 e^{i \omega t} \left[\frac{t t'}{1 - r^2 e^{-i\delta_G}} \right]$$ $$E_r = E_0 e^{i \omega t} \left[\frac{r(1 - e^{-i\delta_G})}{1 - r^2 e^{-i\delta_G}} \right]$$ donde en la última expresión se ha utilizado que $r = r'$ y que $t t' = 1 - r^2$ (por conservarción de la energía y dado que no consideramos absorción en el medio) Cálculo de la irradiancia De las expresiones anteriores llegamos a que la irradiancia en transmisión/reflexión es igual a, $$I_t = I_0 \left[\frac{(t t')^2}{1 + r^4 - 2 r^2 cos(\delta_G)} \right]$$ $$I_r = I_0 \left[\frac{2 r^2 (1 - cos(\delta_G))}{1 + r^4 - 2 r^2 cos(\delta_G)} \right]$$ Las ecuaciones anteriores se pueden transformar a expresiones más cómodas considerando que $cos \delta_G = 1 - 2 sen^2 (\delta_G /2) \;\;\;\;\;$ y definiendo un nuevo coeficiente denominado coeficiente de fineza que tiene en cuenta cuánto refleja cada cara de la lámina, $$F = \left( \frac{2 r}{1 -r^2} \right)$$ Así, $$I_t = \frac{I_0}{1 + F sen^2(\delta_G/2)}$$ $$I_r = I_0 \left[\frac{F sen^2(\delta_G/2)}{1 +F sen^2(\delta_G/2)} \right]$$ A la función $A(\delta) = \frac{1}{1 + F sen^2(\delta/2)} \;\;\;\;\;$ se le denomina función de Airy Análisis de las expresiones ¿Cuánto vale I_0 - I_t? $$I_0 - I_t = I_0 - \frac{I_0}{1 + F sen^2(\delta_G/2)} = I_0 \left[\frac{F sen^2(\delta_G/2)}{1 +F sen^2(\delta_G/2)} \right] = I_r$$ Por tanto, $I_0 = I_i + I_t \;\;\;\;$. Es decir, cuando la irradiancia en reflexión es máxima, la irradiancia en transmisión es mínima y viceversa. ¿Cuándo obtenemos máximos en $I_t$? Los máximos de la irradiancia transmitida se obtendrán cuando en su expresión, el denominador sea mínimo. Esto ocurre cuando $sen^2(\delta_G / 2) = 0 \;\;\;\;$, es decir, cuando $\delta_G = 2 m \pi \;\;\;$. Obtenemos pues, la misma condición de máximos en transmisión (y mínimos en reflexión por tanto) que en la interferencia de las 2 primeras ondas transmitidas/reflejadas. ¿Cuál es la forma de la función $I_t / I_0$?. Depende del valor de $F$. Vamos a dibujarla para varios valores del coeficiente de fineza Step2: Como vemos en la figura superior, cuanto mayor sea el valor del coeficiente de fineza, más estrechos son los máximos de $I_t$ lo que se traduce en anillos más estrechos en el patrón de interferencia que obtenemos a la salida de la lámina. Además el contraste es a su vez mayor. Hay que recordar que el coeficiente de fineza es mayor cuanto mayor sea la reflectividad en cada cara de la lámina. Se puede demostrar que la anchura de cada pico es igual a $\Delta \delta_G = 4/\sqrt{F}$ La figura superior muestra la irradiancia en función del desfase geométrico $\delta_G$. Pero éste a su vez depende de varios factores $\delta_G = 2 k n e cos(\theta_t)$. Si consideramos incidencia normal, $cos(\theta_t) = 1 \;\;\;$ y vemos que $\delta_G$ depende de la longitud de onda de la radiación incidente a través del vector de ondas $k = 2 \pi / \lambda $. Podemos pues dibujar la irradiancia total en función de $\lambda$ (para incidencia normal) Step3: Lo que vemos en la anterior figura es que si $F$ es suficientemente grande, podemos utilizar la lámina como un filtro espectral, ya que la transmitancia cae a valores muy cercanos a cero fuera de los picos que dan los máximos. Para aumentar $F$, se suelen usar recubrimientos metálicos (entonces los desfases debido a las reflexiones no son ya 0 ó $\pi$). 
Si queremos obtener máxima transmitancia para $\lambda_0$, el espesor habrá de ser tal que se cumpla, $$\frac{4 \pi}{\lambda_0} n e = 2 m \pi \implies e = \frac{m \lambda_0}{2 n}$$ Otro punto a destactar es que si se cumple la condición de máximo de transmitancia $\delta_G = 2 m \pi \;$ para una longitud de onda $\lambda_0 \;\;\;$ entonces también lo tendremos para las longitudes de onda $\lambda = \frac{\lambda_0}{m},, m = 2,3,... \;\;\;\;$ Normalmente, estas otras longitudes de onda se encuentran bastante alejadas y pueden ser filtradas añadiendo algún medio que las absorba (por ej. colorantes). Por último vamos a ver una figura de cómo se verían los anillos en el plano focal de una lente situada después de nuestra lámina planoparalela. Para observar cómo cambia la anchura de los anillos con el valor de $F$, cambiar su valor (en la parte superior del código) y volver a ejecutar la celda.
Python Code: from IPython.core.display import Image Image("http://upload.wikimedia.org/wikipedia/commons/thumb/8/89/Multiple_beam_interference.png/580px-Multiple_beam_interference.png") Explanation: Interferencia por haces múltiples. Filtros interferenciales. El siguiente notebook explica la irradiancia obtenida en transmisión y reflexión cuando un haz incide en una lámina delgada planoparalela considerando las múltiples reflexiones internas que se producen en dicha lámina. Se asume que el estudiante conoce el caso de interferencia por las dos primeras ondas reflejadas o transmitidas Planteamiento del problema Consideremos una lámina planoparalela de espesor h e índice de refracción n, rodeada de aire. El tratamiento con las 2 primeras ondas reflejadas o transmitidas, es una aproximación válida cuando los coeficientes de reflexión son bajos. En caso contrario, se hace necesario considerar todas las reflexiones internas producidas en la lámina, y consecuentemente, todas las ondas transmitidas o reflejadas. Vamos a ver qué ocurre con la irradiancia transmitida y reflejada en este caso. Lo que nos interesa es ver si cambia la posición de los máximos/mínimos obtenidos en el caso de la interferencia con 2 ondas. cambia el contraste. End of explanation # Programa para dibujar la irradiancia transmitida. Filtros interferenciales. import numpy as np from numpy import * from matplotlib.pyplot import * style.use('fivethirtyeight') %matplotlib inline import matplotlib matplotlib.rcParams.update({'font.size': 16}) fig = figure(figsize=(14,7)) deltaG = np.arange(0,8*pi,8*pi/200) # Creamos un vector de desfases geométricos F = np.array([0.2,1,50,200]) for i in np.arange(len(F)): It = 1.0/(1.0 + F[i]*sin(deltaG/2)**2) texto = 'F =' + str(F[i]) plot(deltaG,It,label=texto) xlabel(r'$\delta_G$') ylabel(r'$I_t/I_0$') legend() Explanation: Cálculo del campo total en reflexión y en transmisión En la figura anterior se muestran las sucesivas reflexiones internas en la lámina que generan las ondas que queremos hacer interferir. El procedimiento sería análogo a lo ya estudiado con 2 ondas. Colocando una lente y observando en su plano focal obtendremos anillos de interferencia. Llamaremos al coeficiente de reflexión en la transición aire-lámina r, y al de transmisión t, mientras que llamaremos r' y t' a los coeficientes para la transición lámina-aire. Con esto en mente, podemos ver que las ondas que interfieren en transmisión son, $$E_{t1} = E_0 t t' e^{i \omega t}$$ $$E_{t2} = E_0 t t' r'^2 e^{i[ \omega t - \delta_G ]}$$ $$E_{t3} = E_0 t t' r'^4 e^{i[ \omega t - 2 \delta_G ]}$$ . . . $$E_{tN} = E_0 t t' r'^{2(N-1)} e^{i[ \omega t - (N-1) \delta_G ]}$$ donde $\delta_G = \frac{4 \pi}{\lambda} n h cos(\theta_t)$ es el desfase entre ondas sucesivas debido a la diferencia de caminos (el desfase debido a las reflexiones se tiene en cuenta en los coeficientes de reflexión). Igualmente, las ondas que interfieren en reflexión son, $$E_{r1} = E_0 r e^{i \omega t}$$ $$E_{r2} = E_0 t t' r' e^{i[ \omega t - \delta_G ]}$$ $$E_{r3} = E_0 t t' r'^3 e^{i[ \omega t - 2 \delta_G ]}$$ . . . $$E_{rN} = E_0 t t' r'^{(2N-3)} e^{i[ \omega t - (N-1) \delta_G ]}$$ Si la lámina es suficientemente larga o el ángulo de incidencia no muy grande, tendremos muchas ondas interfiriendo ($N \rightarrow \infty$). Sumando todas, tendremos el campo total en transmisión o reflexión. 
Esta suma puede realizarse y se obtiene, $$E_t = E_0 e^{i \omega t} \left[\frac{t t'}{1 - r^2 e^{-i\delta_G}} \right]$$ $$E_r = E_0 e^{i \omega t} \left[\frac{r(1 - e^{-i\delta_G})}{1 - r^2 e^{-i\delta_G}} \right]$$ donde en la última expresión se ha utilizado que $r = r'$ y que $t t' = 1 - r^2$ (por conservarción de la energía y dado que no consideramos absorción en el medio) Cálculo de la irradiancia De las expresiones anteriores llegamos a que la irradiancia en transmisión/reflexión es igual a, $$I_t = I_0 \left[\frac{(t t')^2}{1 + r^4 - 2 r^2 cos(\delta_G)} \right]$$ $$I_r = I_0 \left[\frac{2 r^2 (1 - cos(\delta_G))}{1 + r^4 - 2 r^2 cos(\delta_G)} \right]$$ Las ecuaciones anteriores se pueden transformar a expresiones más cómodas considerando que $cos \delta_G = 1 - 2 sen^2 (\delta_G /2) \;\;\;\;\;$ y definiendo un nuevo coeficiente denominado coeficiente de fineza que tiene en cuenta cuánto refleja cada cara de la lámina, $$F = \left( \frac{2 r}{1 -r^2} \right)$$ Así, $$I_t = \frac{I_0}{1 + F sen^2(\delta_G/2)}$$ $$I_r = I_0 \left[\frac{F sen^2(\delta_G/2)}{1 +F sen^2(\delta_G/2)} \right]$$ A la función $A(\delta) = \frac{1}{1 + F sen^2(\delta/2)} \;\;\;\;\;$ se le denomina función de Airy Análisis de las expresiones ¿Cuánto vale I_0 - I_t? $$I_0 - I_t = I_0 - \frac{I_0}{1 + F sen^2(\delta_G/2)} = I_0 \left[\frac{F sen^2(\delta_G/2)}{1 +F sen^2(\delta_G/2)} \right] = I_r$$ Por tanto, $I_0 = I_i + I_t \;\;\;\;$. Es decir, cuando la irradiancia en reflexión es máxima, la irradiancia en transmisión es mínima y viceversa. ¿Cuándo obtenemos máximos en $I_t$? Los máximos de la irradiancia transmitida se obtendrán cuando en su expresión, el denominador sea mínimo. Esto ocurre cuando $sen^2(\delta_G / 2) = 0 \;\;\;\;$, es decir, cuando $\delta_G = 2 m \pi \;\;\;$. Obtenemos pues, la misma condición de máximos en transmisión (y mínimos en reflexión por tanto) que en la interferencia de las 2 primeras ondas transmitidas/reflejadas. ¿Cuál es la forma de la función $I_t / I_0$?. Depende del valor de $F$. Vamos a dibujarla para varios valores del coeficiente de fineza End of explanation # Programa para dibujar la irradiancia transmitida en función de la longitud de onda. Filtros interferenciales. fig = figure(figsize=(14,7)) Lambda = np.linspace(400,700,200) # Creamos un vector de longitudes de onda en el visible (en nm) F = np.array([0.2,1,50,200]) #mismo vector de coeficientes de fineza que en el caso anterior n = 1.38 # índice de refracción de la lámina (escogemos la del MgF2) e = 599 # escogemos el espesor de la lámina (en nm) deltaG = (4.0*pi/Lambda)*n*e for i in np.arange(len(F)): It = 1.0/(1.0 + F[i]*sin(deltaG/2)**2) texto = 'F =' + str(F[i]) plot(Lambda,It,label=texto) xlabel(r'$\lambda$ (nm)') ylabel(r'$I_t/I_0$') legend() Explanation: Como vemos en la figura superior, cuanto mayor sea el valor del coeficiente de fineza, más estrechos son los máximos de $I_t$ lo que se traduce en anillos más estrechos en el patrón de interferencia que obtenemos a la salida de la lámina. Además el contraste es a su vez mayor. Hay que recordar que el coeficiente de fineza es mayor cuanto mayor sea la reflectividad en cada cara de la lámina. Se puede demostrar que la anchura de cada pico es igual a $\Delta \delta_G = 4/\sqrt{F}$ La figura superior muestra la irradiancia en función del desfase geométrico $\delta_G$. Pero éste a su vez depende de varios factores $\delta_G = 2 k n e cos(\theta_t)$. 
Si consideramos incidencia normal, $cos(\theta_t) = 1 \;\;\;$ y vemos que $\delta_G$ depende de la longitud de onda de la radiación incidente a través del vector de ondas $k = 2 \pi / \lambda $. Podemos pues dibujar la irradiancia total en función de $\lambda$ (para incidencia normal) End of explanation # MODIFICAR ESTE VALOR PARA VER COMO CAMBIA LA FIGURA F = 20 # coeficiente de fineza #################### fig = figure(figsize=(10,10)) Lambda =590 #(nm) n = 1.38 # índice de refracción de la lámina (escogemos la del MgF2) e = 3230 # escogemos el espesor de la lámina (en nm) focal = 30 #(mm) x = np.linspace(-30,30,500) [X,Y] = meshgrid(x,x) rho = np.sqrt(X**2 + Y**2) #(mm) rhof = rho/focal theta_i = np.arctan2(focal,rho) theta_t = np.arcsin((1.0/n)*sin(theta_i)) deltaG = (4.0*pi/Lambda)*n*e*cos(theta_t) I_t = 1.0/(1.0 + F*sin(deltaG/2)**2) #plot(I_t[:,100]) pcolormesh(X,Y,I_t,cmap=cm.hot); Explanation: Lo que vemos en la anterior figura es que si $F$ es suficientemente grande, podemos utilizar la lámina como un filtro espectral, ya que la transmitancia cae a valores muy cercanos a cero fuera de los picos que dan los máximos. Para aumentar $F$, se suelen usar recubrimientos metálicos (entonces los desfases debido a las reflexiones no son ya 0 ó $\pi$). Si queremos obtener máxima transmitancia para $\lambda_0$, el espesor habrá de ser tal que se cumpla, $$\frac{4 \pi}{\lambda_0} n e = 2 m \pi \implies e = \frac{m \lambda_0}{2 n}$$ Otro punto a destactar es que si se cumple la condición de máximo de transmitancia $\delta_G = 2 m \pi \;$ para una longitud de onda $\lambda_0 \;\;\;$ entonces también lo tendremos para las longitudes de onda $\lambda = \frac{\lambda_0}{m},, m = 2,3,... \;\;\;\;$ Normalmente, estas otras longitudes de onda se encuentran bastante alejadas y pueden ser filtradas añadiendo algún medio que las absorba (por ej. colorantes). Por último vamos a ver una figura de cómo se verían los anillos en el plano focal de una lente situada después de nuestra lámina planoparalela. Para observar cómo cambia la anchura de los anillos con el valor de $F$, cambiar su valor (en la parte superior del código) y volver a ejecutar la celda. End of explanation
1,618
Given the following text description, write Python code to implement the functionality described below step by step Description: CPU Acceleration of Mandelbrot Generation In this example we use numba to accelerate the generation of the Mandelbrot set. The numba package allows us to compile python bytecode directly to machine instructions. It uses the LLVM compiler under the hood to compile optimized native code on the fly. Step2: Recall that the Mandelbrot set is the set of complex numbers $c$ for which the sequence $z_n$ stays bounded, where the sequence start from $z_0 = 0$ and is generated from the map $$ z_{n+1} = z_n^2 + c.$$ First we'll make a function to calculate how long before the sequence $z \rightarrow z^2 + c$ diverges. As the condition for divergence, we'll check to see when $|z|^2 > 4$. We will limit the check to some number max_iters, perhaps 255. Step3: Next we make a function to create the fracal. It will fill a two-dimensional integer array data with the number of iterations before the sequence diverged. Points inside the Mandelbrot set will have the value max_iters. Step4: Now we'll generate a fractal. We'll generate an image 1536 by 1024, covering $-2 \le x\le +1$ and $-1 \le y \le +1$. We also put timer commands around the function call so we can see how long it took. Step5: We can make this recalculate interactively by creating a function that returns a plot. Since we want to zoom over many orders of magnitude, the function arguments will be the center (x,y) and base-10 logarithm of the scale. Step6: Using IPython widgets and the ”interact_manual” command, we can make this more interactive. Note that most of the delay is the time to send the generate the graphical image and send it from the server. As you zoom in, round-off error in the floating point precision of the math will cause numerical artifacts.
Python Code: import numpy as np import bokeh.plotting as bk bk.output_notebook() from numba import jit from timeit import default_timer as timer from IPython.html.widgets import interact, interact_manual, fixed, FloatText Explanation: CPU Acceleration of Mandelbrot Generation In this example we use numba to accelerate the generation of the Mandelbrot set. The numba package allows us to compile python bytecode directly to machine instructions. It uses the LLVM compiler under the hood to compile optimized native code on the fly. End of explanation @jit(nopython=True) def mandel(x, y, max_iters): Return the number of iterations for the complex sequence z -> z**2 + c to exceed 2.0, where c = x + iy. c = complex(x, y) z = 0j for i in range(max_iters): z = z**2 + c if (z.real * z.real + z.imag * z.imag > 4.0): return i return max_iters Explanation: Recall that the Mandelbrot set is the set of complex numbers $c$ for which the sequence $z_n$ stays bounded, where the sequence start from $z_0 = 0$ and is generated from the map $$ z_{n+1} = z_n^2 + c.$$ First we'll make a function to calculate how long before the sequence $z \rightarrow z^2 + c$ diverges. As the condition for divergence, we'll check to see when $|z|^2 > 4$. We will limit the check to some number max_iters, perhaps 255. End of explanation @jit(nopython=True) def make_fractal(xmin, xmax, ymin, ymax, data, max_iters): height, width = data.shape dx = (xmax - xmin) / width dy = (ymax - ymin) / height for i in range(width): x = xmin + dx * (i + 0.5) for j in range(height): y = ymin + dy * (j + 0.5) data[j,i] = mandel(x, y, max_iters) return data Explanation: Next we make a function to create the fracal. It will fill a two-dimensional integer array data with the number of iterations before the sequence diverged. Points inside the Mandelbrot set will have the value max_iters. End of explanation N = 768, 512 data = np.zeros(N, np.uint8) xmin, xmax, ymin, ymax = -2.0, 1.0, -1.0, 1.0 start = timer() make_fractal(xmin, xmax, ymin, ymax, data, 255) end = timer() print("Generated fractal image in {time:.3f} ms".format(time = 1000 * (end - start))) fig = bk.figure(x_range=[xmin, xmax], y_range=[ymin, ymax], width=768, height=512) fig.image(image=[data], x=[xmin], y=[ymin], dw=[xmax-xmin], dh=[ymax-ymin], palette="YlOrBr9") bk.show(fig) Explanation: Now we'll generate a fractal. We'll generate an image 1536 by 1024, covering $-2 \le x\le +1$ and $-1 \le y \le +1$. We also put timer commands around the function call so we can see how long it took. End of explanation def calculate_plot(x, y, logscale): width = 3 * 10 ** logscale height = 2 * 10 ** logscale xmin, xmax = x - width/2, x + width/2 ymin, ymax = y - height/2, y + height/2 start = timer() make_fractal(xmin, xmax, ymin, ymax, data, 255) end = timer() print("Generated fractal image in {time:.3f} ms".format(time = 1000 * (end - start))) fig = bk.figure(x_range=[xmin, xmax], y_range=[ymin, ymax], width=768, height=512) fig.image(image=[data], x=[xmin], y=[ymin], dw=[xmax-xmin], dh=[ymax-ymin], palette="YlOrBr9") bk.show(fig) calculate_plot(-1.4,0,-2) calculate_plot(-1.405,0,-7) Explanation: We can make this recalculate interactively by creating a function that returns a plot. Since we want to zoom over many orders of magnitude, the function arguments will be the center (x,y) and base-10 logarithm of the scale. 
End of explanation interact_manual(calculate_plot, x=FloatText(-0.003001005), y=FloatText(0.64400092), logscale=(-8,0,0.1)) Explanation: Using IPython widgets and the ”interact_manual” command, we can make this more interactive. Note that most of the delay is the time to send the generate the graphical image and send it from the server. As you zoom in, round-off error in the floating point precision of the math will cause numerical artifacts. End of explanation
1,619
Given the following text description, write Python code to implement the functionality described below step by step Description: Image Augmentation Image Augmentation augments datasets (especially small datasets) to train model. The way to do image augmentation is to transform images by different ways. In this notebook we demonstrate how to do image augmentation using Analytics ZOO APIs. Step1: Create LocalImageSet Step2: Create DistributedImageSet Step3: Transform images Step4: Brightness Adjust the image brightness Step5: Hue Adjust image hue Step6: Saturation Adjust image saturation Step7: ChannelOrder Random change the channel of an image Step8: ColorJitter Random adjust brightness, contrast, hue, saturation Step9: Resize Resize the roi(region of interest) according to scale Step10: AspectScale Resize the image, keep the aspect ratio. scale according to the short edge Step11: RandomAspectScale Resize the image by randomly choosing a scale Step12: ChannelNormalize Image channel normalize Step13: PixelNormalize Pixel level normalizer, data(Pixel) = data(Pixel) - mean(Pixels) Step14: CenterCrop Crop a cropWidth x cropHeight patch from center of image. Step15: RandomCrop Random crop a cropWidth x cropHeight patch from an image. Step16: FixedCrop Crop a fixed area of image Step17: Filler Fill part of image with certain pixel value Step18: Expand Expand image, fill the blank part with the meanR, meanG, meanB Step19: HFlip Flip the image horizontally
Python Code: from zoo.common.nncontext import init_nncontext from zoo.feature.image import * import cv2 import numpy as np from IPython.display import Image, display sc = init_nncontext("Image Augmentation Example") Explanation: Image Augmentation Image Augmentation augments datasets (especially small datasets) to train model. The way to do image augmentation is to transform images by different ways. In this notebook we demonstrate how to do image augmentation using Analytics ZOO APIs. End of explanation # create LocalImageSet from an image local_image_set = ImageSet.read(os.getenv("ANALYTICS_ZOO_HOME")+"/apps/image-augmentation/image/test.jpg") # create LocalImageSet from an image folder local_image_set = ImageSet.read(os.getenv("ANALYTICS_ZOO_HOME")+"/apps/image-augmentation/image/") # create LocalImageSet from list of images image = cv2.imread(os.getenv("ANALYTICS_ZOO_HOME")+"/apps/image-augmentation/image/test.jpg") local_image_set = LocalImageSet([image]) print(local_image_set.get_image()) print('isDistributed: ', local_image_set.is_distributed(), ', isLocal: ', local_image_set.is_local()) Explanation: Create LocalImageSet End of explanation # create DistributedImageSet from an image distributed_image_set = ImageSet.read(os.getenv("ANALYTICS_ZOO_HOME")+"/apps/image-augmentation/image/test.jpg", sc, 2) # create DistributedImageSet from an image folder distributed_image_set = ImageSet.read(os.getenv("ANALYTICS_ZOO_HOME")+"/apps/image-augmentation/image/", sc, 2) # create LocalImageSet from image rdd image = cv2.imread(os.getenv("ANALYTICS_ZOO_HOME")+"/apps/image-augmentation/image/test.jpg") image_rdd = sc.parallelize([image], 2) label_rdd = sc.parallelize([np.array([1.0])], 2) distributed_image_set = DistributedImageSet(image_rdd, label_rdd) images_rdd = distributed_image_set.get_image() label_rdd = distributed_image_set.get_label() print(images_rdd) print(label_rdd) print('isDistributed: ', distributed_image_set.is_distributed(), ', isLocal: ', distributed_image_set.is_local()) print('total images:', images_rdd.count()) Explanation: Create DistributedImageSet End of explanation path = os.getenv("ANALYTICS_ZOO_HOME")+"/apps/image-augmentation/image/test.jpg" def transform_display(transformer, image_set): out = transformer(image_set) cv2.imwrite('/tmp/tmp.jpg', out.get_image(to_chw=False)[0]) display(Image(filename='/tmp/tmp.jpg')) Explanation: Transform images End of explanation brightness = ImageBrightness(0.0, 32.0) image_set = ImageSet.read(path) transform_display(brightness, image_set) Explanation: Brightness Adjust the image brightness End of explanation transformer = ImageHue(-18.0, 18.0) image_set = ImageSet.read(path) transform_display(transformer, image_set) Explanation: Hue Adjust image hue End of explanation transformer = ImageSaturation(10.0, 20.0) image_set= ImageSet.read(path) transform_display(transformer, image_set) Explanation: Saturation Adjust image saturation End of explanation transformer = ImageChannelOrder() image_set = ImageSet.read(path) transform_display(transformer, image_set) Explanation: ChannelOrder Random change the channel of an image End of explanation transformer = ImageColorJitter() image_set = ImageSet.read(path) transform_display(transformer, image_set) Explanation: ColorJitter Random adjust brightness, contrast, hue, saturation End of explanation transformer = ImageResize(300, 300) image_set = ImageSet.read(path) transform_display(transformer, image_set) Explanation: Resize Resize the roi(region of interest) according to scale End of explanation 
transformer = ImageAspectScale(200, max_size = 3000) image_set = ImageSet.read(path) transform_display(transformer, image_set) Explanation: AspectScale Resize the image, keep the aspect ratio. scale according to the short edge End of explanation transformer = ImageRandomAspectScale([100, 300], max_size = 3000) image_set = ImageSet.read(path) transform_display(transformer, image_set) Explanation: RandomAspectScale Resize the image by randomly choosing a scale End of explanation transformer = ImageChannelNormalize(20.0, 30.0, 40.0, 2.0, 3.0, 4.0) image_set = ImageSet.read(path) transform_display(transformer, image_set) Explanation: ChannelNormalize Image channel normalize End of explanation %%time print("PixelNormalize takes nearly one and a half minutes. Please wait a moment.") means = [2.0] * 3 * 500 * 375 transformer = ImagePixelNormalize(means) image_set = ImageSet.read(path) transform_display(transformer, image_set) Explanation: PixelNormalize Pixel level normalizer, data(Pixel) = data(Pixel) - mean(Pixels) End of explanation transformer = ImageCenterCrop(200, 200) image_set = ImageSet.read(path) transform_display(transformer, image_set) Explanation: CenterCrop Crop a cropWidth x cropHeight patch from center of image. End of explanation transformer = ImageRandomCrop(200, 200) image_set = ImageSet.read(path) transform_display(transformer, image_set) Explanation: RandomCrop Random crop a cropWidth x cropHeight patch from an image. End of explanation transformer = ImageFixedCrop(0.0, 0.0, 200.0, 200.0, False) image_set = ImageSet.read(path) transform_display(transformer, image_set) Explanation: FixedCrop Crop a fixed area of image End of explanation transformer = ImageFiller(0.0, 0.0, 0.5, 0.5, 255) image_set = ImageSet.read(path) transform_display(transformer, image_set) Explanation: Filler Fill part of image with certain pixel value End of explanation transformer = ImageExpand(means_r=123, means_g=117, means_b=104, max_expand_ratio=2.0) image_set = ImageSet.read(path) transform_display(transformer, image_set) Explanation: Expand Expand image, fill the blank part with the meanR, meanG, meanB End of explanation transformer = ImageHFlip() image_set = ImageSet.read(path) transform_display(transformer, image_set) Explanation: HFlip Flip the image horizontally End of explanation
1,620
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Land MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Description Is Required Step7: 1.4. Land Atmosphere Flux Exchanges Is Required Step8: 1.5. Atmospheric Coupling Treatment Is Required Step9: 1.6. Land Cover Is Required Step10: 1.7. Land Cover Change Is Required Step11: 1.8. Tiling Is Required Step12: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required Step13: 2.2. Water Is Required Step14: 2.3. Carbon Is Required Step15: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required Step16: 3.2. Time Step Is Required Step17: 3.3. Timestepping Method Is Required Step18: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required Step19: 4.2. Code Version Is Required Step20: 4.3. Code Languages Is Required Step21: 5. Grid Land surface grid 5.1. Overview Is Required Step22: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. Description Is Required Step23: 6.2. Matches Atmosphere Grid Is Required Step24: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required Step25: 7.2. Total Depth Is Required Step26: 8. Soil Land surface soil 8.1. Overview Is Required Step27: 8.2. Heat Water Coupling Is Required Step28: 8.3. Number Of Soil layers Is Required Step29: 8.4. Prognostic Variables Is Required Step30: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required Step31: 9.2. Structure Is Required Step32: 9.3. Texture Is Required Step33: 9.4. Organic Matter Is Required Step34: 9.5. Albedo Is Required Step35: 9.6. Water Table Is Required Step36: 9.7. Continuously Varying Soil Depth Is Required Step37: 9.8. Soil Depth Is Required Step38: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required Step39: 10.2. Functions Is Required Step40: 10.3. Direct Diffuse Is Required Step41: 10.4. Number Of Wavelength Bands Is Required Step42: 11. 
Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required Step43: 11.2. Time Step Is Required Step44: 11.3. Tiling Is Required Step45: 11.4. Vertical Discretisation Is Required Step46: 11.5. Number Of Ground Water Layers Is Required Step47: 11.6. Lateral Connectivity Is Required Step48: 11.7. Method Is Required Step49: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required Step50: 12.2. Ice Storage Method Is Required Step51: 12.3. Permafrost Is Required Step52: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required Step53: 13.2. Types Is Required Step54: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required Step55: 14.2. Time Step Is Required Step56: 14.3. Tiling Is Required Step57: 14.4. Vertical Discretisation Is Required Step58: 14.5. Heat Storage Is Required Step59: 14.6. Processes Is Required Step60: 15. Snow Land surface snow 15.1. Overview Is Required Step61: 15.2. Tiling Is Required Step62: 15.3. Number Of Snow Layers Is Required Step63: 15.4. Density Is Required Step64: 15.5. Water Equivalent Is Required Step65: 15.6. Heat Content Is Required Step66: 15.7. Temperature Is Required Step67: 15.8. Liquid Water Content Is Required Step68: 15.9. Snow Cover Fractions Is Required Step69: 15.10. Processes Is Required Step70: 15.11. Prognostic Variables Is Required Step71: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required Step72: 16.2. Functions Is Required Step73: 17. Vegetation Land surface vegetation 17.1. Overview Is Required Step74: 17.2. Time Step Is Required Step75: 17.3. Dynamic Vegetation Is Required Step76: 17.4. Tiling Is Required Step77: 17.5. Vegetation Representation Is Required Step78: 17.6. Vegetation Types Is Required Step79: 17.7. Biome Types Is Required Step80: 17.8. Vegetation Time Variation Is Required Step81: 17.9. Vegetation Map Is Required Step82: 17.10. Interception Is Required Step83: 17.11. Phenology Is Required Step84: 17.12. Phenology Description Is Required Step85: 17.13. Leaf Area Index Is Required Step86: 17.14. Leaf Area Index Description Is Required Step87: 17.15. Biomass Is Required Step88: 17.16. Biomass Description Is Required Step89: 17.17. Biogeography Is Required Step90: 17.18. Biogeography Description Is Required Step91: 17.19. Stomatal Resistance Is Required Step92: 17.20. Stomatal Resistance Description Is Required Step93: 17.21. Prognostic Variables Is Required Step94: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required Step95: 18.2. Tiling Is Required Step96: 18.3. Number Of Surface Temperatures Is Required Step97: 18.4. Evaporation Is Required Step98: 18.5. Processes Is Required Step99: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required Step100: 19.2. Tiling Is Required Step101: 19.3. Time Step Is Required Step102: 19.4. Anthropogenic Carbon Is Required Step103: 19.5. Prognostic Variables Is Required Step104: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required Step105: 20.2. Carbon Pools Is Required Step106: 20.3. Forest Stand Dynamics Is Required Step107: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required Step108: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required Step109: 22.2. Growth Respiration Is Required Step110: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required Step111: 23.2. Allocation Bins Is Required Step112: 23.3. 
Allocation Fractions Is Required Step113: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required Step114: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required Step115: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required Step116: 26.2. Carbon Pools Is Required Step117: 26.3. Decomposition Is Required Step118: 26.4. Method Is Required Step119: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required Step120: 27.2. Carbon Pools Is Required Step121: 27.3. Decomposition Is Required Step122: 27.4. Method Is Required Step123: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required Step124: 28.2. Emitted Greenhouse Gases Is Required Step125: 28.3. Decomposition Is Required Step126: 28.4. Impact On Soil Properties Is Required Step127: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required Step128: 29.2. Tiling Is Required Step129: 29.3. Time Step Is Required Step130: 29.4. Prognostic Variables Is Required Step131: 30. River Routing Land surface river routing 30.1. Overview Is Required Step132: 30.2. Tiling Is Required Step133: 30.3. Time Step Is Required Step134: 30.4. Grid Inherited From Land Surface Is Required Step135: 30.5. Grid Description Is Required Step136: 30.6. Number Of Reservoirs Is Required Step137: 30.7. Water Re Evaporation Is Required Step138: 30.8. Coupled To Atmosphere Is Required Step139: 30.9. Coupled To Land Is Required Step140: 30.10. Quantities Exchanged With Atmosphere Is Required Step141: 30.11. Basin Flow Direction Map Is Required Step142: 30.12. Flooding Is Required Step143: 30.13. Prognostic Variables Is Required Step144: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required Step145: 31.2. Quantities Transported Is Required Step146: 32. Lakes Land surface lakes 32.1. Overview Is Required Step147: 32.2. Coupling With Rivers Is Required Step148: 32.3. Time Step Is Required Step149: 32.4. Quantities Exchanged With Rivers Is Required Step150: 32.5. Vertical Grid Is Required Step151: 32.6. Prognostic Variables Is Required Step152: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required Step153: 33.2. Albedo Is Required Step154: 33.3. Dynamics Is Required Step155: 33.4. Dynamic Lake Extent Is Required Step156: 33.5. Endorheic Basins Is Required Step157: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-1', 'land') Explanation: ES-DOC CMIP6 Model Properties - Land MIP Era: CMIP6 Institute: NIWA Source ID: SANDBOX-1 Topic: Land Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. Properties: 154 (96 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:30 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code (e.g. MOSES2.2) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.3. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "water" # "energy" # "carbon" # "nitrogen" # "phospherous" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Land Atmosphere Flux Exchanges Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Fluxes exchanged with the atmopshere. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.5. Atmospheric Coupling Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bare soil" # "urban" # "lake" # "land ice" # "lake ice" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.6. Land Cover Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Types of land cover defined in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover_change') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.7. Land Cover Change Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how land cover change is managed (e.g. the use of net or gross transitions) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.8. Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.energy') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.water') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. 
Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.2. Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. 
Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.5. Albedo Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil albedo map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.water_table') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.6. Water Table Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil water table map, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 9.7. Continuously Varying Soil Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the soil properties vary continuously with depth? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.8. Soil Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil depth map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow free albedo prognostic? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "soil humidity" # "vegetation state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, describe the dependancies on snow free albedo calculations End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "distinction between direct and diffuse albedo" # "no distinction between direct and diffuse albedo" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.3. Direct Diffuse Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe the distinction between direct and diffuse albedo End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 10.4. Number Of Wavelength Bands Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, enter the number of wavelength bands used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the soil hydrological model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river soil hydrology in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.6. Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General describe how drainage is included in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Gravity drainage" # "Horton mechanism" # "topmodel-based" # "Dunne mechanism" # "Lateral subsurface flow" # "Baseflow from groundwater" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Different types of runoff represented by the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how heat treatment properties are defined End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil heat scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.heat_treatment.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.5. 
Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.6. Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.7. Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.10. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Snow related processes in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.11. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "prescribed" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of snow-covered land albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.snow.snow_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "snow age" # "snow density" # "snow grain type" # "aerosol deposition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N *If prognostic, * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Vegetation Land surface vegetation 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vegetation in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.6. Vegetation Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of vegetation types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biome_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "evergreen needleleaf forest" # "evergreen broadleaf forest" # "deciduous needleleaf forest" # "deciduous broadleaf forest" # "mixed forest" # "woodland" # "wooded grassland" # "closed shrubland" # "opne shrubland" # "grassland" # "cropland" # "wetlands" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.7. 
Biome Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of biome types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_time_variation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed (not varying)" # "prescribed (varying from files)" # "dynamical (varying from simulation)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.8. Vegetation Time Variation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How the vegetation fractions in each tile are varying with time End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.9. Vegetation Map Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.10. Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.14. Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.15. 
Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Treatment of vegetation biomass * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.19. Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 18.3. Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of carbon cycle in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the carbon cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of carbon cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "grand slam protocol" # "residence time" # "decay time" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.4. Anthropogenic Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Describe the treament of the anthropogenic carbon pool End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the carbon scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.3. Forest Stand Dynamics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of forest stand dyanmics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for maintainence respiration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Growth Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for growth respiration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the allocation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "leaves + stems + roots" # "leaves + stems + roots (leafy + woody)" # "leaves + fine roots + coarse roots + stems" # "whole plant (no distinction)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. 
Allocation Bins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify distinct carbon bins used in allocation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the notrogen cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.6. Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.10. Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.2. 
Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.2. Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation
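Explanation: For reference, the cells above all follow the same fill-in pattern. The sketch below shows how the four property types used in this questionnaire (STRING, ENUM, INTEGER, BOOLEAN) are typically completed; every value shown is a placeholder assumption rather than real model documentation, and the repeated set_value calls for the 1.N ENUM case are only meant to illustrate the "VALUE(S)" convention noted in those cells.
End of explanation
# Illustrative sketch only -- the values below are placeholder assumptions, not a real model.

# STRING property (e.g. 34.1 Description): free text
DOC.set_id('cmip6.land.lakes.wetlands.description')
DOC.set_value("Wetlands treated as a sub-grid lake tile with prescribed extent")  # assumed text

# ENUM property with cardinality 1.N (e.g. 33.3 Dynamics): one call per selected choice (assumed convention)
DOC.set_id('cmip6.land.lakes.method.dynamics')
DOC.set_value("vertical")
DOC.set_value("horizontal")

# INTEGER property (e.g. 32.3 Time Step): unquoted number
DOC.set_id('cmip6.land.lakes.time_step')
DOC.set_value(1800)  # assumed 30-minute lake time step

# BOOLEAN property (e.g. 33.4 Dynamic Lake Extent)
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
DOC.set_value(False)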
1,621
Given the following text description, write Python code to implement the functionality described below step by step Description: Using critical sections A critical section is a region of code that should not run in parallel. For example, the increment of a variable is not an atomic operation, so it should be performed using mutual exclusion. What happens when mutual exclusion is not used in critical sections? Using threads All Python’s built-in data structures (such as lists, dictionaries, etc.) are thread-safe. However, user-defined data structures, or simpler types like integers and floats, should not be accessed concurrently.
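A quick way to see why the bare increment is unsafe is to disassemble it: counter += 1 compiles to a separate load, add and store step, and a thread switch between those steps is what loses updates. The snippet below is an optional sketch; the exact opcode names vary by Python version.
import dis

counter = 0

def increment():
    global counter
    counter += 1

# Shows a load / in-place add / store sequence (opcode names vary by Python version);
# a thread switch between these steps can lose an update.
dis.dis(increment)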
Python Code: # Two threads that have a critical section executed in parallel without mutual exclusion. # This code does not work! import threading import time counter = 10 def task_1(): global counter for i in range(10**6): counter += 1 def task_2(): global counter for i in range(10**6+1): counter -= 1 thread_1 = threading.Thread(target=task_1) thread_2 = threading.Thread(target=task_2) thread_1.start() thread_2.start() print("(Both threads started)") thread_1.join() thread_2.join() print("\nBoth threads finished") print('counter =', counter) Explanation: Using critical sections A critical section is a region of code that should not run in parallel. For example, the increment of a variable is not considered an atomic operation, so, it should be performed using mutual exclusion. What happens when mutial exclusion is not used in critical sections? Using threads All Python’s built-in data structures (such as lists, dictionaries, etc.) are thread-safe. However, other user's data structures implemented by users, or simpler types like integers and floats, should not be accesed concurrently. End of explanation # Two threads that have a critical section executed sequentially. import threading import time lock = threading.Lock() counter = 10 def task_1(): global counter for i in range(10**6): with lock: counter += 1 def task_2(): global counter for i in range(10**6+1): with lock: counter -= 1 thread_1 = threading.Thread(target=task_1) thread_2 = threading.Thread(target=task_2) now = time.perf_counter() # Real time (not only user time) thread_1.start() thread_2.start() print("Both threads started") thread_1.join() thread_2.join() print("Both threads finished") elapsed = time.perf_counter() - now print(f"elapsed {elapsed:0.2f} seconds") print('counter =', counter) Explanation: The same example, using mutual exclusion (using a lock): End of explanation # Two processes that have a critical section executed sequentially import multiprocessing import time import ctypes def task_1(lock, counter): for i in range(10000): with lock: counter.value += 1 def task_2(lock, counter): for i in range(10001): with lock: counter.value -= 1 lock = multiprocessing.Lock() manager = multiprocessing.Manager() counter = manager.Value(ctypes.c_int, 10) process_1 = multiprocessing.Process(target=task_1, args=(lock, counter)) process_2 = multiprocessing.Process(target=task_2, args=(lock, counter)) now = time.perf_counter() process_1.start() process_2.start() print("Both tasks started") process_1.join() process_2.join() print("Both tasks finished") elapsed = time.perf_counter() - now print(f"elapsed {elapsed:0.2f} seconds") print('counter =', counter.value) Explanation: Notice that both tasks are CPU-bound. This means that using threading has not any wall time advantage compared to an iterative implementation of both taks. 
Using processes
End of explanation
import asyncio

counter = 10

async def task_1():
    global counter
    for i in range(10):
        print("o", end='', flush=True)
        counter += 1
        await task_2()

async def task_2():
    global counter
    print("O", end='', flush=True)
    counter -= 1

await task_1()
print('\ncounter =', counter)

import asyncio
import time

counter = 10

async def task_1():
    global counter
    for i in range(10**6):
        counter += 1
        await task_2()

async def task_2():
    global counter
    counter -= 1

now = time.perf_counter()
await task_1()
elapsed = time.perf_counter() - now
print(f"\nelapsed {elapsed:0.2f} seconds")
print('counter =', counter)
Explanation: Unlike threading, multiprocessing is suitable for reducing the running times in the case of CPU-bound problems.
Using coroutines
Like threads, coroutines should only be used when they must wait (typically for an I/O transaction). Otherwise, use multiprocessing.
End of explanation
import time

counter = 10

def task():
    global counter
    for i in range(10**6):
        counter += 1
        counter -= 1

now = time.perf_counter()
task()
elapsed = time.perf_counter() - now
print(f"\nelapsed {elapsed:0.2f} seconds")
print('counter =', counter)
Explanation: Coroutines are faster than threads, but not faster than the one-loop version of the task.
End of explanation
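Explanation: To complement the CPU-bound comparison above, the sketch below illustrates the I/O-bound case where coroutines do pay off. asyncio.sleep stands in for a real I/O wait, and the exact timing will vary.
End of explanation
# Sketch of the I/O-bound case: ten tasks that each "wait" one second finish in
# roughly one second when run concurrently, instead of ten seconds one after another.
import asyncio
import time

async def io_task(delay):
    await asyncio.sleep(delay)  # stands in for a real I/O wait
    return delay

now = time.perf_counter()
results = await asyncio.gather(*(io_task(1.0) for _ in range(10)))  # top-level await, as in the cells above
elapsed = time.perf_counter() - now
print(f"{len(results)} tasks finished in {elapsed:0.2f} seconds")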
1,622
Given the following text description, write Python code to implement the functionality described below step by step Description: Visualizing a Gensim model To illustrate how to use pyLDAvis's gensim helper functions we will create a model from the 20 Newsgroups corpus. Minimal preprocessing is done and so the model is not the best; the goal of this notebook is to demonstrate the helper functions. Downloading the data
Python Code: %%bash mkdir -p data pushd data if [ -d "20news-bydate-train" ] then echo "The data has already been downloaded..." else wget http://qwone.com/%7Ejason/20Newsgroups/20news-bydate.tar.gz tar xfv 20news-bydate.tar.gz rm 20news-bydate.tar.gz fi echo "Lets take a look at the groups..." ls 20news-bydate-train/ popd Explanation: Visualizing a Gensim model To illustrate how to use pyLDAvis's gensim helper funtions we will create a model from the 20 Newsgroup corpus. Minimal preprocessing is done and so the model is not the best, the goal of this notebook is to demonstrate the the helper functions. Downloading the data End of explanation ls -lah data/20news-bydate-train/sci.space | tail -n 5 Explanation: Exploring the dataset Each group dir has a set of files: End of explanation !head data/20news-bydate-train/sci.space/61422 -n 20 Explanation: Lets take a peak at one email: End of explanation from glob import glob import re import string import funcy as fp from gensim import models from gensim.corpora import Dictionary, MmCorpus import nltk import pandas as pd # quick and dirty.... EMAIL_REGEX = re.compile(r"[a-z0-9\.\+_-]+@[a-z0-9\._-]+\.[a-z]*") FILTER_REGEX = re.compile(r"[^a-z '#]") TOKEN_MAPPINGS = [(EMAIL_REGEX, "#email"), (FILTER_REGEX, ' ')] def tokenize_line(line): res = line.lower() for regexp, replacement in TOKEN_MAPPINGS: res = regexp.sub(replacement, res) return res.split() def tokenize(lines, token_size_filter=2): tokens = fp.mapcat(tokenize_line, lines) return [t for t in tokens if len(t) > token_size_filter] def load_doc(filename): group, doc_id = filename.split('/')[-2:] with open(filename, errors='ignore') as f: doc = f.readlines() return {'group': group, 'doc': doc, 'tokens': tokenize(doc), 'id': doc_id} docs = pd.DataFrame(list(map(load_doc, glob('data/20news-bydate-train/*/*')))).set_index(['group','id']) docs.head() Explanation: Loading the tokenizing the corpus End of explanation def nltk_stopwords(): return set(nltk.corpus.stopwords.words('english')) def prep_corpus(docs, additional_stopwords=set(), no_below=5, no_above=0.5): print('Building dictionary...') dictionary = Dictionary(docs) stopwords = nltk_stopwords().union(additional_stopwords) stopword_ids = map(dictionary.token2id.get, stopwords) dictionary.filter_tokens(stopword_ids) dictionary.compactify() dictionary.filter_extremes(no_below=no_below, no_above=no_above, keep_n=None) dictionary.compactify() print('Building corpus...') corpus = [dictionary.doc2bow(doc) for doc in docs] return dictionary, corpus dictionary, corpus = prep_corpus(docs['tokens']) MmCorpus.serialize('newsgroups.mm', corpus) dictionary.save('newsgroups.dict') Explanation: Creating the dictionary, and bag of words corpus End of explanation %%time lda = models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=50, passes=10) lda.save('newsgroups_50_lda.model') Explanation: Fitting the LDA model End of explanation import pyLDAvis.gensim as gensimvis import pyLDAvis vis_data = gensimvis.prepare(lda, corpus, dictionary) pyLDAvis.display(vis_data) Explanation: Visualizing the model with pyLDAvis Okay, the moment we have all been waiting for is finally here! You'll notice in the visualizaiton that we have a few junk topics that would probably disappear after better preprocessing of the corpus. This is left as an exercises to the reader. :) End of explanation %%time # The optional parameter T here indicates that HDP should find no more than 50 topics # if there exists any. 
hdp = models.hdpmodel.HdpModel(corpus, dictionary, T=50)
hdp.save('newsgroups_hdp.model')
Explanation: Fitting the HDP model
Just as we visualized the LDA model with pyLDAvis, we can visualize gensim HDP models in the same manner. The difference between HDP and LDA is that HDP is a non-parametric method, which means you don't need to specify the number of topics: HDP will fit as many topics as it can and find the optimal number of topics by itself.
End of explanation
vis_data = gensimvis.prepare(hdp, corpus, dictionary)
pyLDAvis.display(vis_data)
Explanation: Visualizing the HDP model with pyLDAvis
As for the LDA model, you only need to give your model, the corpus and the associated dictionary to prepare the visualization.
End of explanation
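Explanation: If you want to keep or share the result outside the notebook, pyLDAvis can also write the prepared data to a standalone HTML file; the file name below is an arbitrary choice.
End of explanation
# Optional: write the prepared visualization to a standalone HTML file that can be
# opened in a browser without a running notebook.
pyLDAvis.save_html(vis_data, 'newsgroups_hdp_ldavis.html')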
1,623
Given the following text description, write Python code to implement the functionality described below step by step Description: Key Requirements for the iRF scikit-learn implementation The following is a documentation of the main requirements for the iRF implementation Pseudocode iRF implementation Inputs: * $D = \{(X_{i}, Y_{i})\}$, $X_{i} \in \mathbb{R}^{p}$, $Y_{i} \in \left\{0, 1\right\}$; $C \in \{0, 1\}$; $B$; $K$ Step 0: Setup Import required libraries and set up the seed value for reproducibility Keep all custom functions in utils/utils.py
Python Code: # Setup %matplotlib inline import matplotlib.pyplot as plt from sklearn.datasets import load_iris from sklearn.cross_validation import train_test_split from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import confusion_matrix from sklearn.datasets import load_iris from sklearn import tree import numpy as np # Define a function to draw the decision trees in IPython # Adapted from: http://scikit-learn.org/stable/modules/tree.html from IPython.display import display, Image import pydotplus # Custom util functions from utils import utils # Set seed for reproducibility np.random.seed(1015) Explanation: Key Requirements for the iRF scikit-learn implementation The following is a documentation of the main requirements for the iRF implementation Pseudocode iRF implementation Inputs: * D = {($X_{i}$, $Y_{i}$), $X_{i} \in \mathbb{R}$, $Y_{i} \in \left {0, 1 \right }$ p , Y i ∈ {0, 1}},C ∈ {0, 1}, B, K Step 0: Setup Import required libraries and set up the seed value for reproducibility Keep all custom functions in utils/utils.py End of explanation # Load the iris data iris = load_iris() # Create the train-test datasets X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target) np.random.seed(1039) # Just fit a simple random forest classifier with 2 decision trees rf = RandomForestClassifier(n_estimators = 2) rf.fit(X = X_train, y = y_train) # Now plot the trees individually for dtree in rf.estimators_: dot_data = tree.export_graphviz(dtree , out_file = None , filled = True , rounded = True , special_characters = True) graph = pydotplus.graph_from_dot_data(dot_data) img = Image(graph.create_png()) display(img) utils.draw_tree(inp_tree = dtree) Explanation: Step 1: Fit the Initial Random Forest Just fit every feature with equal weights per the usual random forest code e.g. DecisionForestClassifier in scikit-learn End of explanation importances = rf.feature_importances_ std = np.std([dtree.feature_importances_ for dtree in rf.estimators_] , axis=0) indices = np.argsort(importances)[::-1] # Check that the feature importances are standardized to 1 print(sum(importances)) Explanation: Step 2: Get the Gini Importance of Weights For the first random forest we just need to get the Gini Importance of Weights Step 2.1 Get them numerically - most important End of explanation # Print the feature ranking print("Feature ranking:") for f in range(X_train.shape[1]): print("%d. 
feature %d (%f)" % (f + 1, indices[f], importances[indices[f]])) # Plot the feature importances of the forest plt.figure() plt.title("Feature importances") plt.bar(range(X_train.shape[1]), importances[indices], color="r", yerr=std[indices], align="center") plt.xticks(range(X_train.shape[1]), indices) plt.xlim([-1, X_train.shape[1]]) plt.show() Explanation: Step 2.2 Display Feature Importances Graphically (just for interest) End of explanation feature_names = ["X" + str(i) for i in range(X_train.shape[1])] target_vals = list(np.sort(np.unique(y_train))) target_names = ["y" + str(i) for i in target_vals] print(feature_names) print(target_names) # Get the second tree - just for testing estimator = rf.estimators_[1] # Using those arrays, we can parse the tree structure: n_nodes = estimator.tree_.node_count children_left = estimator.tree_.children_left children_right = estimator.tree_.children_right feature = estimator.tree_.feature #print(feature) features_all = [feature_names[i] for i in feature] #print("feature_names", feature_names, sep = ":\n") threshold = estimator.tree_.threshold # The tree structure can be traversed to compute various properties such # as the depth of each node and whether or not it is a leaf. node_depth = np.zeros(shape=n_nodes, dtype = "int64") is_leaves = np.zeros(shape=n_nodes, dtype=bool) #nodes = np.empty(shape=n_nodes, dtype = "int64") nodes = [] used_feature_names = [] stack = [(0, -1)] # seed is the root node id and its parent depth while len(stack) > 0: node_id, parent_depth = stack.pop() node_depth[node_id] = parent_depth + 1 #np.append(arr=nodes, values=node_id) nodes.append(node_id) used_feature_names.append(features_all[node_id]) # print(feature[node_id]) # If we have a test node if (children_left[node_id] != children_right[node_id]): stack.append((children_left[node_id], parent_depth + 1)) stack.append((children_right[node_id], parent_depth + 1)) else: is_leaves[node_id] = True print("nodes", np.asarray(a = nodes, dtype = "int64"), sep = ":\n") print("node_depth", node_depth, sep = ":\n") print("leaf_node", is_leaves, sep = ":\n") print("feature_names", used_feature_names, sep = ":\n") print("feature", feature, sep = ":\n") Explanation: Step 3: For each Tree get core leaf node features For each decision tree in the classifier, get: The list of leaf nodes Depth of the leaf node Leaf node predicted class i.e. {0, 1} Probability of predicting class in leaf node Number of observations in the leaf node i.e. 
weight of node
Name the Features
End of explanation
from sklearn.tree import _tree  # needed for the _tree.TREE_LEAF sentinel used below

def _get_tree_paths(tree, node_id = 0, depth = 0):
    """Returns all paths through the tree as list of node_ids"""
    if node_id == _tree.TREE_LEAF:
        raise ValueError("Invalid node_id %s" % _tree.TREE_LEAF)

    left_child = tree.children_left[node_id]
    right_child = tree.children_right[node_id]

    if left_child != _tree.TREE_LEAF:
        left_paths = _get_tree_paths(tree, left_child, depth=depth + 1)
        right_paths = _get_tree_paths(tree, right_child, depth=depth + 1)

        for path in left_paths:
            path.append(node_id)
        for path in right_paths:
            path.append(node_id)
        paths = left_paths + right_paths
    else:
        paths = [[node_id]]
    return paths

leaf_node_paths = dict()
leaf_to_path = dict()

for idx, dtree in enumerate(rf.estimators_):
    # leaf_to_path = {}
    node_paths = _get_tree_paths(tree = dtree.tree_, node_id = 0, depth = 0)
    # _get_tree_paths builds each path from leaf to root; reverse so paths run root -> leaf,
    # as in the treeinterpreter source referenced below
    for path in node_paths:
        path.reverse()
    leaf_node_paths[idx] = node_paths

    # map leaves to paths (the last node of a reversed path is the leaf)
    for path in node_paths:
        leaf_to_path[path[-1]] = path

leaf_node_paths
Explanation: Step 4: For each tree get the paths to the leaf node from root node
For each decision tree in the classifier, get:
Full path sequence to all leaf nodes i.e. SEQUENCE of all features that led to a leaf node
Path to all leaf nodes i.e. SET of all features that led to a leaf node i.e. remove duplicate features
Get the node_ids and the feature_ids at each node_id
Get the feature SET associated with each node along a path
Get the tree paths
The following code is adapted from: https://github.com/andosa/treeinterpreter/blob/master/treeinterpreter/treeinterpreter.py#L12-L33
End of explanation
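Step 4 above also calls for the SET of unique features used along each root-to-leaf path, which the code so far does not compute. The short sketch below is an illustrative addition (not part of the original notebook): it reuses feature_names and leaf_node_paths from the cells above, and it skips leaf nodes, whose feature index is negative in scikit-learn's tree structure.
# Sketch: derive the unique feature-name set along every root-to-leaf path of every tree
path_feature_sets = dict()
for idx, dtree in enumerate(rf.estimators_):
    node_features = dtree.tree_.feature  # feature index per node; negative for leaves
    path_feature_sets[idx] = [
        set(feature_names[node_features[n]] for n in path if node_features[n] >= 0)
        for path in leaf_node_paths[idx]
    ]
path_feature_sets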
1,624
Given the following text description, write Python code to implement the functionality described below step by step Description: <img src=images/continuum_analytics_b&w.png align="left" width="15%" style="margin-right Step1: <hr/> But as data grows and systems become more complex, moving data and querying data become more difficult. Python already has excellent tools for data that fits in memory, but we want to hook up to data that is inconvenient. From now on, we're going to assume one of the following Step2: <hr/> URI Strings Odo refers to foreign data either with a Python object like a sqlalchemy.Table object for a SQL table, or with a string URI, like postgresql Step3: What kind of object did you get receive as output? Call type on your result. Step4: <hr/> How it works Odo is a network of fast pairwise conversions between pairs of formats. We when we migrate between two formats we traverse a path of pairwise conversions. We visualize that network below Step5: <hr/> <img src="images/continuum_analytics_logo.png" alt="Continuum Logo", align="right", width="30%">, Blaze Blaze translates a subset of numpy/pandas syntax into database queries. It hides away the database. On simple datasets, like CSV files, Blaze acts like Pandas with slightly different syntax. In this case Blaze is just using Pandas. <hr/> Pandas example Step6: <hr/> Blaze example Step7: <hr/> Foreign Data Blaze does different things under-the-hood on different kinds of data CSV files Step8: <hr /> Work happens on the database If we were using pandas we would read the table into pandas, then use pandas' fast in-memory algorithms for computation. Here we translate your query into SQL and then send that query to the database to do the work. Pandas $\leftarrow_\textrm{data}$ SQL, then Pandas computes Blaze $\rightarrow_\textrm{query}$ SQL, then database computes If we want to dive into the internal API we can inspect the query that Blaze transmits. <hr /> Step9: <hr /> Exercises Now we load the Lahman baseball database and perform similar queries Step10: <hr /> Example Step11: <hr/> Store Results By default Blaze only shows us the first ten lines of a result. This provides a more interactive feel and stops us from accidentally crushing our system. Sometimes we do want to compute all of the results and store them someplace. Blaze expressions are valid sources for odo. So we can store our results in any format. <hr/> Exercise
Python Code: import pandas as pd df = pd.read_csv('data/iris.csv') df.head() df.groupby(df.Species).PetalLength.mean() # Average petal length per species Explanation: <img src=images/continuum_analytics_b&w.png align="left" width="15%" style="margin-right:15%"> <h1 align='center'>Introduction to Blaze</h1> In this tutorial we'll learn how to use Blaze to discover, migrate, and query data living in other databases. Generally this tutorial will have the following format odo - Move data to database blaze - Query data in database Goal: Accessible, Interactive, Analytic Queries NumPy and Pandas provide accessible, interactive, analytic queries; this is valuable. End of explanation from odo import odo import numpy as np import pandas as pd odo("data/iris.csv", pd.DataFrame) Explanation: <hr/> But as data grows and systems become more complex, moving data and querying data become more difficult. Python already has excellent tools for data that fits in memory, but we want to hook up to data that is inconvenient. From now on, we're going to assume one of the following: You have an inconvenient amount of data That data should live someplace other than your computer <hr/> Databases and Python When in-memory arrays/dataframes cease to be an option, we turn to databases. These live outside of the Python process and so might be less convenient. The open source Python ecosystem includes libraries to interact with these databases and with foreign data in general. Examples: SQL - sqlalchemy Hive/Cassandra - pyhive Impala - impyla RedShift - redshift-sqlalchemy ... MongoDB - pymongo HBase - happybase Spark - pyspark SSH - paramiko HDFS - pywebhdfs Amazon S3 - boto Today we're going to use some of these indirectly with odo (was into) and Blaze. We'll try to point out these libraries as we automate them so that, if you'd like, you can use them independently. <hr /> <img src="images/continuum_analytics_logo.png" alt="Continuum Logo", align="right", width="30%">, odo (formerly into) Odo migrates data between formats and locations. Before we can use a database we need to move data into it. The odo project provides a single consistent interface to move data between formats and between locations. We'll start with local data and eventually move out to remote data. odo docs <hr/> Examples Odo moves data into a target from a source ```python odo(source, target) ``` The target and source can be either a Python object or a string URI. The following are all valid calls to into ```python odo('iris.csv', pd.DataFrame) # Load CSV file into new DataFrame odo(my_df, 'iris.json') # Write DataFrame into JSON file odo('iris.csv', 'iris.json') # Migrate data from CSV to JSON ``` <hr/> Exercise Use odo to load the iris.csv file into a Python list, a np.ndarray, and a pd.DataFrame End of explanation odo("data/iris.csv", "sqlite:///my.db::iris") Explanation: <hr/> URI Strings Odo refers to foreign data either with a Python object like a sqlalchemy.Table object for a SQL table, or with a string URI, like postgresql://hostname::tablename. URI's often take on the following form protocol://path-to-resource::path-within-resource Where path-to-resource might point to a file, a database hostname, etc. while path-within-resource might refer to a datapath or table name. Note the two main separators :// separates the protocol on the left (sqlite, mongodb, ssh, hdfs, hive, ...) :: separates the path within the database on the right (e.g. 
tablename) odo docs on uri strings <hr/> Examples Here are some example URIs myfile.json myfiles.*.csv' postgresql://hostname::tablename mongodb://hostname/db::collection ssh://user@host:/path/to/myfile.csv hdfs://user@host:/path/to/*.csv <hr /> Exercise Migrate your CSV file into a table named iris in a new SQLite database at sqlite:///my.db. Remember to use the :: separator and to separate your database name from your table name. odo docs on SQL End of explanation type(_) Explanation: What kind of object did you get receive as output? Call type on your result. End of explanation odo('s3://nyqpug/tips.csv', pd.DataFrame) Explanation: <hr/> How it works Odo is a network of fast pairwise conversions between pairs of formats. We when we migrate between two formats we traverse a path of pairwise conversions. We visualize that network below: Each node represents a data format. Each directed edge represents a function to transform data between two formats. A single call to into may traverse multiple edges and multiple intermediate formats. Red nodes support larger-than-memory data. A single call to into may traverse several intermediate formats calling on several conversion functions. For example, we when migrate a CSV file to a Mongo database we might take the following route: Load in to a DataFrame (pandas.read_csv) Convert to np.recarray (DataFrame.to_records) Then to a Python Iterator (np.ndarray.tolist) Finally to Mongo (pymongo.Collection.insert) Alternatively we could write a special function that uses MongoDB's native CSV loader and shortcut this entire process with a direct edge CSV -&gt; Mongo. These functions are chosen because they are fast, often far faster than converting through a central serialization format. This picture is actually from an older version of odo, when the graph was still small enough to visualize pleasantly. See odo docs for a more updated version. <hr/> Remote Data We can interact with remote data in three locations On Amazon's S3 (this will be quick) On a remote machine via ssh On the Hadoop File System (HDFS) For most of this we'll wait until we've seen Blaze, briefly we'll use S3. S3 For now, we quickly grab a file from Amazon's S3. This example depends on boto to interact with S3. conda install boto odo docs on aws End of explanation import pandas as pd df = pd.read_csv('data/iris.csv') df.head(5) df.Species.unique() df.Species.drop_duplicates() Explanation: <hr/> <img src="images/continuum_analytics_logo.png" alt="Continuum Logo", align="right", width="30%">, Blaze Blaze translates a subset of numpy/pandas syntax into database queries. It hides away the database. On simple datasets, like CSV files, Blaze acts like Pandas with slightly different syntax. In this case Blaze is just using Pandas. <hr/> Pandas example End of explanation import blaze as bz d = bz.Data('data/iris.csv') d.head(5) d.Species.distinct() Explanation: <hr/> Blaze example End of explanation db = bz.Data('sqlite:///my.db') #db.iris #db.iris.head() db.iris.Species.distinct() db.iris[db.iris.Species == 'versicolor'][['Species', 'SepalLength']] Explanation: <hr/> Foreign Data Blaze does different things under-the-hood on different kinds of data CSV files: Pandas DataFrames (or iterators of DataFrames) SQL tables: SQLAlchemy. Mongo collections: PyMongo ... SQL We'll play with SQL a lot during this tutorial. Blaze translates your query to SQLAlchemy. SQLAlchemy then translates to the SQL dialect of your database, your database then executes that query intelligently. 
Blaze $\rightarrow$ SQLAlchemy $\rightarrow$ SQL $\rightarrow$ Database computation This translation process lets analysts interact with a familiar interface while leveraging a potentially powerful database. To keep things local we'll use SQLite, but this works with any database with a SQLAlchemy dialect. Examples in this section use the iris dataset. Exercises use the Lahman Baseball statistics database, year 2013. If you have not downloaded this dataset you could do so here - https://github.com/jknecht/baseball-archive-sqlite/raw/master/lahman2013.sqlite. <hr/> Examples Lets dive into Blaze Syntax. For simple queries it looks and feels similar to Pandas End of explanation # Inspect SQL query query = db.iris[db.iris.Species == 'versicolor'][['Species', 'SepalLength']] print bz.compute(query) query = bz.by(db.iris.Species, longest=db.iris.PetalLength.max(), shortest=db.iris.PetalLength.min()) print bz.compute(query) Explanation: <hr /> Work happens on the database If we were using pandas we would read the table into pandas, then use pandas' fast in-memory algorithms for computation. Here we translate your query into SQL and then send that query to the database to do the work. Pandas $\leftarrow_\textrm{data}$ SQL, then Pandas computes Blaze $\rightarrow_\textrm{query}$ SQL, then database computes If we want to dive into the internal API we can inspect the query that Blaze transmits. <hr /> End of explanation # db = bz.Data('postgresql://postgres:postgres@ec2-54-159-160-163.compute-1.amazonaws.com') # Use Postgres if you don't have the sqlite file db = bz.Data('sqlite:///data/lahman2013.sqlite') db.dshape # View the Salaries table # What are the distinct teamIDs in the Salaries table? # What is the minimum and maximum yearID in the Sarlaries table? # For the Oakland Athletics (teamID OAK), pick out the playerID, salary, and yearID columns # Sort that result by salary. # Use the ascending=False keyword argument to the sort function to find the highest paid players Explanation: <hr /> Exercises Now we load the Lahman baseball database and perform similar queries End of explanation import pandas as pd iris = pd.read_csv('data/iris.csv') iris.groupby('Species').PetalLength.min() iris = bz.Data('sqlite:///my.db::iris') bz.by(iris.Species, largest=iris.PetalLength.max(), smallest=iris.PetalLength.min()) print(_) Explanation: <hr /> Example: Split-apply-combine In Pandas we perform computations on a per-group basis with the groupby operator. In Blaze our syntax is slightly different, using instead the by function. End of explanation result = bz.by(db.Salaries.teamID, avg=db.Salaries.salary.mean(), max=db.Salaries.salary.max(), ratio=db.Salaries.salary.max() / db.Salaries.salary.min() ).sort('ratio', ascending=False) odo(result, list)[:10] Explanation: <hr/> Store Results By default Blaze only shows us the first ten lines of a result. This provides a more interactive feel and stops us from accidentally crushing our system. Sometimes we do want to compute all of the results and store them someplace. Blaze expressions are valid sources for odo. So we can store our results in any format. <hr/> Exercise: Storage The solution to the first split-apply-combine problem is below. Store that result in a list, a CSV file, and in a new SQL table in our database (use a uri like sqlite://... to specify the SQL table.) End of explanation
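As one possible way to finish the storage exercise above (a sketch rather than an official solution; the file and table names below are arbitrary examples), the same Blaze expression can be handed to odo with different targets:
odo(result, list)                                # a Python list in memory
odo(result, 'salary_summary.csv')                # a CSV file
odo(result, 'sqlite:///my.db::salary_summary')   # a new table in the SQLite database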
1,625
Given the following text description, write Python code to implement the functionality described below step by step Description: Nonparametric estimatio of Doppler function Step1: Doppler function $$r\left(x\right)=\sqrt{x\left(1-x\right)}\sin\left(\frac{1.2\pi}{x+.05}\right),\quad x\in\left[0,1\right]$$ Step2: Derivative of Doppler function Step3: Left and right truncated exponentials Right truncated Step4: Draw the densitites Step5: Kernels Truncated (Uniform) Step6: Nadaraya-Watson (NW) or local constant estimator Local weighting For each observed data $X$ ($N$-vector) and grid $U$ ($M$-vector) this function returns $N\times M$-matrix of weights Step7: Nadaraya-Watson (NW) $$\hat{m}\left(x\right)=\frac{\sum_{i=1}^{n}k\left(\frac{X_{i}-x}{h}\right)Y_{i}}{\sum_{i=1}^{n}k\left(\frac{X_{i}-x}{h}\right)}$$ Step8: Generate data $$Y_{i}=m\left(X_{i}\right)+\epsilon_{i},\quad\epsilon_{i}\sim NID\left(0,\sigma=0.1\right)$$ Step9: Perform estimation and plot the results Step10: Bias correction For bias computation we take the density of $X$ and two derivatives of conditional mean $m(x)$ as known. In practice, they have to be estimated. Step11: Local Linear (LL) estimator $$\left(\begin{array}{c} \hat{\alpha}\left(x\right)\ \hat{\beta}\left(x\right) \end{array}\right)=\left(\sum_{i=1}^{n}k_{i}\left(x\right)Z_{i}\left(x\right)Z_{i}\left(x\right)^{\prime}\right)^{-1}\sum_{i=1}^{n}k_{i}\left(x\right)Z_{i}\left(x\right)Y_{i}$$ $$\left(\begin{array}{c} \hat{\alpha}\left(x\right)\ \hat{\beta}\left(x\right) \end{array}\right) =\left(Z\left(x\right)^{\prime}K\left(x\right)Z\left(x\right)\right)^{-1}Z\left(x\right)^{\prime}K\left(x\right)Y$$ $K(x)$ - $N\times N$ $Z(x)$ - $N\times 2$ $Y$ - $N\times 1$ Step12: Perform estimation and plot the results Step13: Comparison for different DGP of X Step14: Conditional variance and confidence intervals Leave-one-out errors Step15: Estimate variance Step16: Plot the results Step17: Bandwidth selection Cross-validation criterion $$\tilde{e}{i}\left(h\right)=Y{i}-\tilde{m}{-i}\left(X{i},h\right)$$ $$CV\left(h\right)=\frac{1}{n}\sum_{i=1}^{n}\tilde{e}_{i}\left(h\right)^{2}$$ $$\hat{h}=\arg\min_{h\geq h_{l}}CV\left(h\right)$$ Step18: Plot the (optimized) fit
Python Code: import numpy as np import matplotlib.pyplot as plt import seaborn as sns import scipy.stats as ss import sympy as sp sns.set_context('notebook') %matplotlib inline Explanation: Nonparametric estimatio of Doppler function End of explanation x = np.linspace(.01, .99, num=1e3) doppler = lambda x : np.sqrt(x * (1 - x)) * np.sin(1.2 * np.pi / (x + .05)) plt.plot(x, doppler(x)) plt.show() Explanation: Doppler function $$r\left(x\right)=\sqrt{x\left(1-x\right)}\sin\left(\frac{1.2\pi}{x+.05}\right),\quad x\in\left[0,1\right]$$ End of explanation from sympy.utilities.lambdify import lambdify from IPython.display import display, Math, Latex u = sp.Symbol('u') sym_doppler = lambda x : (x * (1 - x))**.5 * sp.sin(1.2 * sp.pi / (x + .05)) d_doppler = sym_doppler(u).diff() dd_doppler = sym_doppler(u).diff(n=2) display(Math(sp.latex(d_doppler))) d_doppler = np.vectorize(lambdify(u, d_doppler)) dd_doppler = np.vectorize(lambdify(u, dd_doppler)) plt.plot(x, d_doppler(x)) plt.show() Explanation: Derivative of Doppler function End of explanation def f_rtexp(x, lmbd=1, b=1): return np.exp(-x / lmbd) / lmbd / (1 - np.exp(-b / lmbd)) def f_ltexp(x, lmbd=1, b=1): return np.exp(x / lmbd) / lmbd / (np.exp(b / lmbd) - 1) def right_trunc_exp(lmbd=1, b=1, size=1000): X = np.sort(np.random.rand(size)) return - lmbd * np.log(1 - X * (1 - np.exp(-b / lmbd))) def left_trunc_exp(lmbd=1, b=1, size=1000): X = np.sort(np.random.rand(size)) return lmbd * np.log(1 - X * (1 - np.exp(b / lmbd))) # Equivalent using SciPy: # Y = ss.truncexpon.rvs(1, size=1000) lmbd = .2 Y1 = right_trunc_exp(lmbd=lmbd) Y2 = left_trunc_exp(lmbd=lmbd) density1 = ss.gaussian_kde(Y1) density2 = ss.gaussian_kde(Y2) U = np.linspace(0, 1, num=1e3) Explanation: Left and right truncated exponentials Right truncated: $$f\left(x\right)=\frac{e^{-x/\lambda}/\lambda}{1-e^{-b/\lambda}},\quad F\left(x\right)=\frac{1-e^{-x/\lambda}}{1-e^{-b/\lambda}},\quad F^{-1}\left(x\right)=-\lambda\log\left(1-x\left(1-e^{-b/\lambda}\right)\right)$$ Left truncated: $$f\left(x\right)=\frac{e^{x/\lambda}/\lambda}{e^{b/\lambda}-1},\quad F\left(x\right)=\frac{1-e^{x/\lambda}}{1-e^{b/\lambda}},\quad F^{-1}\left(x\right)=\lambda\log\left(1-x\left(1-e^{b/\lambda}\right)\right)$$ End of explanation fig = plt.figure(figsize=(15, 5)) plt.subplot(1, 2, 1) plt.hist(Y1, normed=True, bins=20, label='Histogram') plt.plot(U, f_rtexp(U, lmbd=lmbd), lw=4, color=[0, 0, 0], label='True density') plt.plot(U, density1(U), lw=4, color='red', label='Kernel density') plt.legend() plt.title('Right truncated') plt.subplot(1, 2, 2) plt.hist(Y2, normed=True, bins=20, label='Histogram') plt.plot(U, f_ltexp(U, lmbd=lmbd), lw=4, color=[0, 0, 0], label='True density') plt.plot(U, density2(U), lw=4, color='red', label='Kernel density') plt.legend() plt.title('Left truncated') plt.show() Explanation: Draw the densitites End of explanation def indicator(x): return np.asfarray((np.abs(x) <= 1.) & (np.abs(x) >= 0.)) def kernel(x, ktype='Truncated'): if ktype == 'Truncated': return .5 * indicator(x) if ktype == 'Epanechnikov': return 3./4. * (1 - x**2) * indicator(x) if ktype == 'Biweight': return 15./16. * (1 - x**2)**2 * indicator(x) if ktype == 'Triweight': return 35./36. * (1 - x**2)**3 * indicator(x) if ktype == 'Gaussian': return 1./np.sqrt(2. * np.pi) * np.exp(- .5 * x**2) def roughness(ktype='Truncated'): if ktype == 'Truncated': return 1./2. if ktype == 'Epanechnikov': return 3./5. if ktype == 'Biweight': return 5./7. if ktype == 'Triweight': return 350./429. 
if ktype == 'Gaussian': return np.pi**(-.5)/2. def sigmak(ktype='Truncated'): if ktype == 'Truncated': return 1./3. if ktype == 'Epanechnikov': return 1./5. if ktype == 'Biweight': return 1./7. if ktype == 'Triweight': return 1./9. if ktype == 'Gaussian': return 1. x = np.linspace(0., 2., 100) names = ['Truncated', 'Epanechnikov', 'Biweight', 'Triweight', 'Gaussian'] for name in names: plt.plot(x, kernel(x, ktype=name), label=name, lw=2) plt.legend() plt.show() Explanation: Kernels Truncated (Uniform): $k_{0}\left(u\right)=\frac{1}{2}1\left(\left|u\right|\leq1\right)$ Epanechnikov: $k_{1}\left(u\right)=\frac{3}{4}\left(1-u^{2}\right)1\left(\left|u\right|\leq1\right)$ Biweight: $k_{2}\left(u\right)=\frac{15}{16}\left(1-u^{2}\right)^{2}1\left(\left|u\right|\leq1\right)$ Triweight: $k_{2}\left(u\right)=\frac{35}{36}\left(1-u^{2}\right)^{3}1\left(\left|u\right|\leq1\right)$ Gaussian: $k_{\phi}\left(u\right)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}u^2\right)$ End of explanation def weight(U, X, h=.1, ktype='Truncated'): # X - N-array # U - M-array # XmU - M*N-array XmU = (X - np.atleast_2d(U).T) / h # K - M*N-array K = kernel(XmU, ktype) # K.sum(1) - M-array # K.T - N*M-array # K.T / K.sum(1) - N*M-array return (K.T / K.sum(1)).T Explanation: Nadaraya-Watson (NW) or local constant estimator Local weighting For each observed data $X$ ($N$-vector) and grid $U$ ($M$-vector) this function returns $N\times M$-matrix of weights End of explanation def NW(U, X, Y, h=.1, ktype='Truncated'): return np.dot(weight(U, X, h, ktype), Y) Explanation: Nadaraya-Watson (NW) $$\hat{m}\left(x\right)=\frac{\sum_{i=1}^{n}k\left(\frac{X_{i}-x}{h}\right)Y_{i}}{\sum_{i=1}^{n}k\left(\frac{X_{i}-x}{h}\right)}$$ End of explanation def generate_data(N=1000, M=500, lmbd=1, trunc='left'): if trunc == 'left': X = left_trunc_exp(lmbd=lmbd, size=N) if trunc == 'right': X = right_trunc_exp(lmbd=lmbd, size=N) e = np.random.normal(0, .1, N) Y = doppler(X) + e U = np.linspace(.01, .99, M) return X, Y, U Explanation: Generate data $$Y_{i}=m\left(X_{i}\right)+\epsilon_{i},\quad\epsilon_{i}\sim NID\left(0,\sigma=0.1\right)$$ End of explanation X, Y, U = generate_data() # Nadaraya-Watson estimator Yhat = NW(U, X, Y, h=.05, ktype='Truncated') fig = plt.figure(figsize=(10, 6)) plt.plot(U, doppler(U), lw=2, color='blue', label='True') plt.plot(U, Yhat, lw=2, color='red', label='Fitted') plt.scatter(X, Y, s=15, lw=.5, facecolor='none', label='Realized') plt.xlim([0, 1]) plt.xlabel('X') plt.ylabel('Y') plt.legend() plt.show() Explanation: Perform estimation and plot the results End of explanation def fx(x, lmbd=1, b=1): return sp.exp(-x / lmbd) / lmbd / (1 - sp.exp(-b / lmbd)) dfx = fx(u).diff() fx = np.vectorize(lambdify(u, fx(u))) dfx = np.vectorize(lambdify(u, dfx)) def bias(U, etype='NW', h=.05, ktype='Gaussian'): if etype == 'NW': bias = .5 * dd_doppler(U) + d_doppler(U) * dfx(U) / fx(U) if etype == 'LL': bias = .5 * dd_doppler(U) * fx(U) return bias * h**2 * sigmak(ktype) h = .05 ktype = 'Gaussian' fig = plt.figure(figsize=(15, 6)) X, Y, U = generate_data() Yhat = NW(X, X, Y, h=h, ktype=ktype) Ynobias = Yhat - bias(X, etype='NW', h=h, ktype=ktype) plt.plot(X, doppler(X), lw=2, color='blue', label='True') plt.plot(X, Yhat, lw=2, color='red', label='Fitted') plt.scatter(X, Y, s=15, lw=.5, facecolor='none', label='Realized') plt.plot(X, Ynobias, lw=2, color='green', label='No Bias') plt.xlim([0, 1]) plt.xlabel('X') plt.ylabel('Y') plt.legend() plt.show() Explanation: Bias correction For bias computation we take the density of $X$ and 
two derivatives of conditional mean $m(x)$ as known. In practice, they have to be estimated. End of explanation def LL(U, X, Y, h=.1, ktype='Truncated'): # X - N-array # U - M-array # K - M*N-array W = weight(U, X, h, ktype) alpha = np.empty(U.shape[0]) beta = np.empty(U.shape[0]) for i in range(U.shape[0]): # N*N-array K = np.diag(W[i]) # N-array Z1 = (X - U[i]) / h Z0 = np.ones(Z1.shape) # 2*N-array Z = np.vstack([Z0, Z1]).T # 2*2-array A = np.dot(Z.T, np.dot(K, Z)) # 2-array B = np.dot(Z.T, np.dot(K, Y)) # 2-array coef = np.dot(np.linalg.inv(A), B) alpha[i] = coef[0] beta[i] = coef[1] return alpha, beta Explanation: Local Linear (LL) estimator $$\left(\begin{array}{c} \hat{\alpha}\left(x\right)\ \hat{\beta}\left(x\right) \end{array}\right)=\left(\sum_{i=1}^{n}k_{i}\left(x\right)Z_{i}\left(x\right)Z_{i}\left(x\right)^{\prime}\right)^{-1}\sum_{i=1}^{n}k_{i}\left(x\right)Z_{i}\left(x\right)Y_{i}$$ $$\left(\begin{array}{c} \hat{\alpha}\left(x\right)\ \hat{\beta}\left(x\right) \end{array}\right) =\left(Z\left(x\right)^{\prime}K\left(x\right)Z\left(x\right)\right)^{-1}Z\left(x\right)^{\prime}K\left(x\right)Y$$ $K(x)$ - $N\times N$ $Z(x)$ - $N\times 2$ $Y$ - $N\times 1$ End of explanation X, Y, U = generate_data() Yhat, dYhat = LL(U, X, Y, h=.05, ktype='Gaussian') fig = plt.figure(figsize=(15, 6)) plt.subplot(1, 2, 1) plt.plot(U, doppler(U), lw=2, color='blue', label='True') plt.plot(U, Yhat, lw=2, color='red', label='Fitted') plt.scatter(X, Y, s=15, lw=.5, facecolor='none', label='Realized') plt.xlim([0, 1]) plt.xlabel('X') plt.ylabel('Y') plt.legend() plt.title('Doppler function') plt.subplot(1, 2, 2) plt.plot(U, d_doppler(U), lw=2, color='blue', label='True') plt.plot(U, dYhat, lw=2, color='red', label='Fitted') plt.xlim([0, 1]) plt.xlabel('X') plt.ylabel('Y') plt.legend() plt.title('Doppler function derivative') plt.show() Explanation: Perform estimation and plot the results End of explanation X1, Y1, U = generate_data(lmbd=.1, trunc='left') X2, Y2, U = generate_data(lmbd=.1, trunc='right') ktype = 'Gaussian' h = .05 Y1hat = NW(U, X1, Y1, h=h, ktype=ktype) Y2hat = NW(U, X2, Y2, h=h, ktype=ktype) fig = plt.figure(figsize=(15, 10)) plt.subplot(2, 2, 1) plt.hist(X1, normed=True, bins=20, label='Histogram') plt.ylabel('X1') plt.subplot(2, 2, 2) plt.hist(X2, normed=True, bins=20, label='Histogram') plt.ylabel('X2') plt.subplot(2, 2, 3) plt.plot(U, doppler(U), lw=2, color='blue', label='True') plt.plot(U, Y1hat, lw=2, color='red', label='Fitted') plt.scatter(X1, Y1, s=15, lw=.5, facecolor='none', label='Realized') plt.xlim([0, 1]) plt.xlabel('X1') plt.ylabel('Y1') plt.legend() plt.subplot(2, 2, 4) plt.plot(U, doppler(U), lw=2, color='blue', label='True') plt.plot(U, Y2hat, lw=2, color='red', label='Fitted') plt.scatter(X2, Y2, s=15, lw=.5, facecolor='none', label='Realized') plt.xlim([0, 1]) plt.xlabel('X2') plt.ylabel('Y2') plt.legend() plt.show() Explanation: Comparison for different DGP of X End of explanation def error(Y, X, h, ktype): ehat = np.empty(X.shape) for i in range(X.shape[0]): ehat[i] = Y[i] - NW(X[i], np.delete(X, i), np.delete(Y, i), h=h, ktype=ktype) return np.array(ehat) Explanation: Conditional variance and confidence intervals Leave-one-out errors End of explanation N = 500 X, Y, U = generate_data(N=N, lmbd=.2) h = .05 ktype = 'Epanechnikov' Yhat = NW(U, X, Y, h=h, ktype=ktype) ehat = error(Y, X, h, ktype) sigma2hat = NW(U, X, ehat**2, h=.1, ktype=ktype) fxhat = ss.gaussian_kde(X)(U) V2hat = roughness(ktype) * sigma2hat / fxhat / N / h shat = V2hat**.5 Explanation: Estimate 
variance End of explanation fig = plt.figure(figsize = (10, 10)) plt.subplot(3, 1, 1) plt.scatter(X, Y, s=15, lw=.5, facecolor='none', label='Realized') #plt.plot(U, doppler(U), lw=2, color='blue', label='True') plt.fill_between(U, Yhat - 2*shat, Yhat + 2*shat, lw=0, color='red', alpha=.2, label='+2s') plt.plot(U, Yhat, lw=2, color='red', label='Fitted') plt.ylabel('Y') plt.legend() plt.xlim([0, 1]) ylim = plt.gca().get_ylim() plt.title('Data') plt.subplot(3, 1, 2) plt.scatter(X, ehat, s=15, lw=.5, facecolor='none', label='Errors') plt.axhline(color='black') plt.ylim(ylim) plt.xlim([0, 1]) plt.title('Errors') plt.subplot(3, 1, 3) plt.plot(U, sigma2hat**.5, lw=2, color='red', label='Estimate') plt.plot(U, .1 * np.ones(U.shape), lw=2, color='blue', label='True') plt.ylim([0, .4]) plt.xlim([0, 1]) plt.legend() plt.xlabel('X') plt.title('Conditional variance') plt.tight_layout() plt.show() Explanation: Plot the results End of explanation N = 500 X, Y, U = generate_data(N=N) ktype = 'Gaussian' H = np.linspace(.001, .05, 100) CV = np.array([]) for h in H: ehat = error(Y, X, h, ktype) CV = np.append(CV, np.mean(ehat**2)) h = H[CV.argmin()] Yhat = NW(U, X, Y, h=h, ktype=ktype) ehat = error(Y, X, h, ktype) sigma2hat = NW(U, X, ehat ** 2, h=h, ktype=ktype) fxhat = ss.gaussian_kde(X)(U) V2hat = roughness(ktype) * sigma2hat / fxhat / N / h shat = V2hat**.5 plt.figure(figsize=(10, 5)) plt.plot(H, CV) plt.scatter(h, CV.min(), facecolor='none', lw=2, s=100) plt.xlim([H.min(), H.max()]) plt.xlabel('Bandwidth, h') plt.ylabel('cross-validation, CV') plt.show() Explanation: Bandwidth selection Cross-validation criterion $$\tilde{e}{i}\left(h\right)=Y{i}-\tilde{m}{-i}\left(X{i},h\right)$$ $$CV\left(h\right)=\frac{1}{n}\sum_{i=1}^{n}\tilde{e}_{i}\left(h\right)^{2}$$ $$\hat{h}=\arg\min_{h\geq h_{l}}CV\left(h\right)$$ End of explanation plt.figure(figsize=(10, 5)) #plt.plot(U, doppler(U), lw=2, color='blue', label='True') plt.fill_between(U, Yhat - 2*shat, Yhat + 2*shat, lw=0, color='red', alpha=.2, label='+2s') plt.plot(U, Yhat, lw=2, color='red', label='Fitted') plt.scatter(X, Y, s=15, lw=.5, facecolor='none', label='Realized') plt.xlim([0, 1]) plt.xlabel('X') plt.ylabel('Y') plt.legend() plt.show() Explanation: Plot the (optimized) fit End of explanation
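Because the data here are simulated and the true regression function is known, one extra check worth sketching (an addition, not part of the original notebook) is to quantify the fit at the cross-validated bandwidth and, for comparison, refit with the local linear estimator defined earlier at the same bandwidth. The comparison is only indicative: the bandwidth was tuned for the Nadaraya-Watson estimator, not for the local linear one.
# Sketch: RMSE of the fits against the known Doppler function at the CV bandwidth
rmse_nw = np.sqrt(np.mean((Yhat - doppler(U))**2))
Yhat_ll, _ = LL(U, X, Y, h=h, ktype=ktype)
rmse_ll = np.sqrt(np.mean((Yhat_ll - doppler(U))**2))
print('CV bandwidth h = {:.4f}: RMSE (NW) = {:.4f}, RMSE (LL) = {:.4f}'.format(h, rmse_nw, rmse_ll))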
1,626
Given the following text description, write Python code to implement the functionality described below step by step Description: This demo shows how to use the Group Bayesian Representational Similarity Analysis (GBRSA) method in brainiak with a simulated dataset. Note that although the name has "group", it is also suitable for analyzing data of a single participant When you apply this tool to real fMRI data, it is required that the data of each participant to be motion corrected. If multiple runs are acquired for each participant, they should be spatially aligned. You might want to do slice-timing correction. You will need to have the mask of the Region of Interest (ROI) ready (defined anatomically or by independent tasks, which is up to you). nilearn provides tools to extract signal from mask. You can refer to http Step1: You might want to keep a log of the output. Step2: We want to simulate some data in which each voxel responds to different task conditions differently, but following a common covariance structure Load an example design matrix. The user should prepare their design matrix with their favorate software, such as using 3ddeconvolve of AFNI, or using SPM or FSL. The design matrix reflects your belief of how fMRI signal should respond to a task (if a voxel does respond). The common assumption is that a neural event that you are interested in will elicit a slow hemodynamic response in some voxels. The response peaks around 4-6 seconds after the event onset and dies down more than 12 seconds after the event. Therefore, typically you convolve a time series A, composed of delta (stem) functions reflecting the time of each neural event belonging to the same category (e.g. all trials in which a participant sees a face), with a hemodynamic response function B, to form the hypothetic response of any voxel to such type of neural event. For each type of event, such a convoluted time course can be generated. These time courses, put together, are called design matrix, reflecting what we believe a temporal signal would look like, if it exists in any voxel. Our goal is to figure out how the (spatial) response pattern of a population of voxels (in an Region of Interest, ROI) are similar or disimilar to different types of tasks (e.g., watching face vs. house, watching different categories of animals, different conditions of a cognitive task). So we need the design matrix in order to estimate the similarity matrix we are interested. We can use the utility called ReadDesign from brainiak.utils to read a design matrix generated from AFNI. For design matrix saved as Matlab data file by SPM or or other toolbox, you can use scipy.io.loadmat('YOURFILENAME') and extract the design matrix from the dictionary returned. Basically, the Bayesian RSA in this toolkit just needs a numpy array which is in size of {time points} * {condition} You can also generate design matrix using the function gen_design which is in brainiak.utils. It takes in (names of) event timing files in AFNI or FSL format (denoting onsets, duration, and weight for each event belongning to the same condition) and outputs the design matrix as numpy array. In typical fMRI analysis, some nuisance regressors such as head motion, baseline time series and slow drift are also entered into regression. In using our method, you should not include such regressors into the design matrix, because the spatial spread of such nuisance regressors might be quite different from the spatial spread of task related signal. 
Including such nuisance regressors in design matrix might influence the pseudo-SNR map, which in turn influence the estimation of the shared covariance matrix. But you may include motion time course in the nuisance parameter. We concatenate the design matrix by 2 to 3 times, mimicking 2 to 3 runs of identical timing Note that different subjects do not have to have the same number of voxels or time points. The timing of the task conditions of them can also differ. The simulation below reflects this Step3: simulate data Step4: Then, we simulate signals, assuming the magnitude of response to each condition follows a common covariance matrix. Note that Group Bayesian Representational Similarity Analysis (GBRSA) does not impose Gaussian Process prior on log(SNR) as BRSA does, for two reasons Step5: In the following, pseudo-SNR is generated from a Gaussian Process defined on a "square" ROI, just for simplicity of code Notice that GBRSA does not make assumption of smoothness of SNR, so it won't utilize this fact. Step6: The reason that the pseudo-SNRs in the example voxels are not too small, while the signal looks much smaller is because we happen to have low amplitudes in our design matrix. The true SNR depends on both the amplitudes in design matrix and the pseudo-SNR. Therefore, be aware that pseudo-SNR does not directly reflects how much signal the data have, but rather a map indicating the relative strength of signal in differerent voxels. When you have multiple runs, the noise won't be correlated between runs. Therefore, you should tell BRSA when is the onset of each scan. Note that the data (variable Y above) you feed to BRSA is the concatenation of data from all runs along the time dimension, as a 2-D matrix of time x space Step7: Fit Group Bayesian RSA to our simulated data The nuisance regressors in typical fMRI analysis (such as head motion signal) are replaced by principal components estimated from residuals after subtracting task-related response. n_nureg tells the model how many principal components to keep from the residual as nuisance regressors, in order to account for spatial correlation in noise. When it is set to None and auto_nuisance=True, this number will be estimated automatically by an algorithm of Gavish & Dohono 2014. If you prefer not using this approach based on principal components of residuals, you can set auto_nuisance=False, and optionally provide your own nuisance regressors as a list (one numpy array per subject) as nuisance argument to GBRSA.fit(). In practice, we find that the result is much better with auto_nuisance=True. The idea of modeling the spatial noise correlation with the principal component decomposition of the residual noise is similar to that in GLMdenoise (http Step8: We can have a look at the estimated similarity in matrix gbrsa.C_. We can also compare the ideal covariance above with the one recovered, gbrsa.U_ Step9: In contrast, we can have a look of the similarity matrix based on Pearson correlation between point estimates of betas of different conditions. This is what vanila RSA might give Step10: We can make a comparison between the estimated SNR map and the true SNR map Step11: We can also look at how SNRs are recovered. Step12: We can also examine the relation between recovered betas and true betas Step13: "Decoding" from new data Now we generate a new data set, assuming signal is the same but noise is regenerated. We want to use the transform() function of gbrsa to estimate the "design matrix" in this new dataset. 
We keep the signal the same as in the training data, but generate new noise. Note that we did this purely for simplicity of simulation. It is totally fine, and encouraged, for the event timing to differ between your training and testing data; you just need to capture it in your design matrix Step14: Model selection by cross-validation Step15: The full model performs better on testing data that has the same signal and noise properties as the training data. Below, we fit the model to data containing only noise and test how it performs on data with signal. Step16: We can see that the difference is smaller, but the full model generally performs slightly worse because of overfitting. This is expected. So, after fitting a model to your data, you should also check the cross-validated log likelihood on separate runs from the same group of participants, and make sure your model is at least better than a null model before you trust your similarity matrix. Another diagnostic of a bad model fit is very small diagonal values in the shared covariance structure U_, shown below
Python Code: %matplotlib inline import scipy.stats import scipy.spatial.distance as spdist import numpy as np from brainiak.reprsimil.brsa import GBRSA import brainiak.utils.utils as utils import matplotlib.pyplot as plt import matplotlib as mpl import logging np.random.seed(10) import copy Explanation: This demo shows how to use the Group Bayesian Representational Similarity Analysis (GBRSA) method in brainiak with a simulated dataset. Note that although the name has "group", it is also suitable for analyzing data of a single participant When you apply this tool to real fMRI data, it is required that the data of each participant to be motion corrected. If multiple runs are acquired for each participant, they should be spatially aligned. You might want to do slice-timing correction. You will need to have the mask of the Region of Interest (ROI) ready (defined anatomically or by independent tasks, which is up to you). nilearn provides tools to extract signal from mask. You can refer to http://nilearn.github.io/manipulating_images/manipulating_images.html When analyzing an ROI of hundreds to thousands voxels, it is expected to be faster than the non-group version BRSA (refer to the other example). The reason is that GBRSA marginalize the SNR and AR(1) coefficient parameters of each voxel by numerical integration, thus eliminating hundreds to thousands of free parameters and reducing computation. However, if you are doing searchlight analysis with tens of voxels in each searchlight, it is possible that BRSA is faster. GBRSA and BRSA might not return exactly the same result. Which one is more accurate might depend on the parameter choice, as well as the property of data. Please note that the model assumes that the covariance matrix U which all $\beta_i$ follow describe a multi-variate Gaussian distribution that is zero-meaned. This assumption does not imply that there must be both positive and negative responses across voxels. However, it means that (Group) Bayesian RSA treats the task-evoked activity against baseline BOLD level as signal, while in other RSA tools the deviation of task-evoked activity in each voxel from the average task-evoked activity level across voxels may be considered as signal of interest. Due to this assumption in (G)BRSA, relatively high degree of similarity may be expected when the activity patterns of two task conditions share a strong sensory driven components. When two task conditions elicit exactly the same activity pattern but only differ in their global magnitudes, under the assumption in (G)BRSA, their similarity is 1; under the assumption that only deviation of pattern from average patterns is signal of interest (which is currently not supported by (G)BRSA), their similarity would be -1 because the deviations of the two patterns from their average pattern are exactly opposite. Load some package which we will use in this demo. If you see error related to loading any package, you can install that package. For example, if you use Anaconda, you can use "conda install matplotlib" to install matplotlib. End of explanation logging.basicConfig( level=logging.DEBUG, filename='gbrsa_example.log', format='%(relativeCreated)6d %(threadName)s %(message)s') Explanation: You might want to keep a log of the output. 
End of explanation n_subj = 5 n_run = np.random.random_integers(2, 4, n_subj) ROI_edge = np.random.random_integers(20, 40, n_subj) # We simulate "ROI" of a square shape design = [None] * n_subj for subj in range(n_subj): design[subj] = utils.ReadDesign(fname="example_design.1D") design[subj].n_TR = design[subj].n_TR * n_run[subj] design[subj].design_task = np.tile(design[subj].design_task[:,:-1], [n_run[subj], 1]) # The last "condition" in design matrix # codes for trials subjects made an error. # We ignore it here. n_C = np.size(design[0].design_task, axis=1) # The total number of conditions. n_V = [int(roi_e**2) for roi_e in ROI_edge] # The total number of simulated voxels n_T = [d.n_TR for d in design] # The total number of time points, # after concatenating all fMRI runs fig = plt.figure(num=None, figsize=(12, 3), dpi=150, facecolor='w', edgecolor='k') plt.plot(design[0].design_task) plt.ylim([-0.2, 0.4]) plt.title('hypothetic fMRI response time courses ' 'of all conditions for one subject\n' '(design matrix)') plt.xlabel('time') plt.show() Explanation: We want to simulate some data in which each voxel responds to different task conditions differently, but following a common covariance structure Load an example design matrix. The user should prepare their design matrix with their favorate software, such as using 3ddeconvolve of AFNI, or using SPM or FSL. The design matrix reflects your belief of how fMRI signal should respond to a task (if a voxel does respond). The common assumption is that a neural event that you are interested in will elicit a slow hemodynamic response in some voxels. The response peaks around 4-6 seconds after the event onset and dies down more than 12 seconds after the event. Therefore, typically you convolve a time series A, composed of delta (stem) functions reflecting the time of each neural event belonging to the same category (e.g. all trials in which a participant sees a face), with a hemodynamic response function B, to form the hypothetic response of any voxel to such type of neural event. For each type of event, such a convoluted time course can be generated. These time courses, put together, are called design matrix, reflecting what we believe a temporal signal would look like, if it exists in any voxel. Our goal is to figure out how the (spatial) response pattern of a population of voxels (in an Region of Interest, ROI) are similar or disimilar to different types of tasks (e.g., watching face vs. house, watching different categories of animals, different conditions of a cognitive task). So we need the design matrix in order to estimate the similarity matrix we are interested. We can use the utility called ReadDesign from brainiak.utils to read a design matrix generated from AFNI. For design matrix saved as Matlab data file by SPM or or other toolbox, you can use scipy.io.loadmat('YOURFILENAME') and extract the design matrix from the dictionary returned. Basically, the Bayesian RSA in this toolkit just needs a numpy array which is in size of {time points} * {condition} You can also generate design matrix using the function gen_design which is in brainiak.utils. It takes in (names of) event timing files in AFNI or FSL format (denoting onsets, duration, and weight for each event belongning to the same condition) and outputs the design matrix as numpy array. In typical fMRI analysis, some nuisance regressors such as head motion, baseline time series and slow drift are also entered into regression. 
In using our method, you should not include such regressors into the design matrix, because the spatial spread of such nuisance regressors might be quite different from the spatial spread of task related signal. Including such nuisance regressors in design matrix might influence the pseudo-SNR map, which in turn influence the estimation of the shared covariance matrix. But you may include motion time course in the nuisance parameter. We concatenate the design matrix by 2 to 3 times, mimicking 2 to 3 runs of identical timing Note that different subjects do not have to have the same number of voxels or time points. The timing of the task conditions of them can also differ. The simulation below reflects this End of explanation noise_bot = 0.5 noise_top = 1.5 noise_level = [None] * n_subj noise = [None] * n_subj rho1 = [None] * n_subj for subj in range(n_subj): noise_level[subj] = np.random.rand(n_V[subj]) * \ (noise_top - noise_bot) + noise_bot # The standard deviation of the noise is in the range of [noise_bot, noise_top] # In fact, we simulate autocorrelated noise with AR(1) model. So the noise_level reflects # the independent additive noise at each time point (the "fresh" noise) # AR(1) coefficient rho1_top = 0.8 rho1_bot = -0.2 for subj in range(n_subj): rho1[subj] = np.random.rand(n_V[subj]) \ * (rho1_top - rho1_bot) + rho1_bot noise_smooth_width = 10.0 dist2 = [None] * n_subj for subj in range(n_subj): coords = np.mgrid[0:ROI_edge[subj], 0:ROI_edge[subj], 0:1] coords_flat = np.reshape(coords,[3, n_V[subj]]).T dist2[subj] = spdist.squareform(spdist.pdist(coords_flat, 'sqeuclidean')) # generating noise K_noise = noise_level[subj][:, np.newaxis] \ * (np.exp(-dist2[subj] / noise_smooth_width**2 / 2.0) \ + np.eye(n_V[subj]) * 0.1) * noise_level[subj] # We make spatially correlated noise by generating # noise at each time point from a Gaussian Process # defined over the coordinates. L_noise = np.linalg.cholesky(K_noise) noise[subj] = np.zeros([n_T[subj], n_V[subj]]) noise[subj][0, :] = np.dot(L_noise, np.random.randn(n_V[subj]))\ / np.sqrt(1 - rho1[subj]**2) for i_t in range(1, n_T[subj]): noise[subj][i_t, :] = noise[subj][i_t - 1, :] * rho1[subj] \ + np.dot(L_noise,np.random.randn(n_V[subj])) # For each voxel, the noise follows AR(1) process: # fresh noise plus a dampened version of noise at # the previous time point. # In this simulation, we also introduced spatial smoothness resembling a Gaussian Process. # Notice that we simulated in this way only to introduce spatial noise correlation. # This does not represent the assumption of the form of spatial noise correlation in the model. # Instead, the model is designed to capture structured noise correlation manifested # as a few spatial maps each modulated by a time course, which appears as spatial noise correlation. 
plt.pcolor(K_noise) plt.colorbar() plt.xlim([0, ROI_edge[-1] * ROI_edge[-1]]) plt.ylim([0, ROI_edge[-1] * ROI_edge[-1]]) plt.title('Spatial covariance matrix of noise\n of the last participant') plt.show() fig = plt.figure(num=None, figsize=(12, 2), dpi=150, facecolor='w', edgecolor='k') plt.plot(noise[-1][:, 0]) plt.title('noise in an example voxel') plt.show() Explanation: simulate data: noise + signal First, we start with noise, which is Gaussian Process in space and AR(1) in time End of explanation # ideal covariance matrix ideal_cov = np.zeros([n_C, n_C]) ideal_cov = np.eye(n_C) * 0.6 ideal_cov[8:12, 8:12] = 0.6 for cond in range(8, 12): ideal_cov[cond,cond] = 1 fig = plt.figure(num=None, figsize=(4, 4), dpi=100) plt.pcolor(ideal_cov) plt.colorbar() plt.xlim([0, 16]) plt.ylim([0, 16]) ax = plt.gca() ax.set_aspect(1) plt.title('ideal covariance matrix') plt.show() std_diag = np.diag(ideal_cov)**0.5 ideal_corr = ideal_cov / std_diag / std_diag[:, None] fig = plt.figure(num=None, figsize=(4, 4), dpi=100) plt.pcolor(ideal_corr) plt.colorbar() plt.xlim([0, 16]) plt.ylim([0, 16]) ax = plt.gca() ax.set_aspect(1) plt.title('ideal correlation matrix') plt.show() Explanation: Then, we simulate signals, assuming the magnitude of response to each condition follows a common covariance matrix. Note that Group Bayesian Representational Similarity Analysis (GBRSA) does not impose Gaussian Process prior on log(SNR) as BRSA does, for two reasons: (1) computational speed, (2) we numerically marginalize SNR for each voxel in GBRSA Let's keep in mind of the pattern of the ideal covariance / correlation below and see how well BRSA can recover their patterns. End of explanation L_full = np.linalg.cholesky(ideal_cov) # generating signal snr_level = np.random.rand(n_subj) * 0.6 + 0.4 # Notice that accurately speaking this is not SNR. # The magnitude of signal depends not only on beta but also on x. # (noise_level*snr_level)**2 is the factor multiplied # with ideal_cov to form the covariance matrix from which # the response amplitudes (beta) of a voxel are drawn from. tau = np.random.rand(n_subj) * 0.8 + 0.2 # magnitude of Gaussian Process from which the log(SNR) is drawn smooth_width = np.random.rand(n_subj) * 5.0 + 3.0 # spatial length scale of the Gaussian Process, unit: voxel inten_kernel = np.random.rand(n_subj) * 4.0 + 2.0 # intensity length scale of the Gaussian Process # Slightly counter-intuitively, if this parameter is very large, # say, much larger than the range of intensities of the voxels, # then the smoothness has much small dependency on the intensity. 
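# (Illustrative addition, not part of the original demo.) Print the per-subject simulation
# hyperparameters drawn above, so the SNR maps plotted later can be related back to them.
for subj in range(n_subj):
    print('subject {}: snr_level = {:.2f}, tau = {:.2f}, smooth_width = {:.2f}, inten_kernel = {:.2f}'
          .format(subj, snr_level[subj], tau[subj], smooth_width[subj], inten_kernel[subj]))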
Y = [None] * n_subj snr = [None] * n_subj signal = [None] * n_subj betas_simulated = [None] * n_subj inten = [None] * n_subj for subj in range(n_subj): inten[subj] = np.random.rand(n_V[subj]) * 20.0 # For simplicity, we just assume that the intensity # of all voxels are uniform distributed between 0 and 20 # parameters of Gaussian process to generate pseuso SNR # For curious user, you can also try the following commond # to see what an example snr map might look like if the intensity # grows linearly in one spatial direction inten_tile = np.tile(inten[subj], [n_V[subj], 1]) inten_diff2 = (inten_tile - inten_tile.T)**2 K = np.exp(-dist2[subj] / smooth_width[subj]**2 / 2.0 - inten_diff2 / inten_kernel[subj]**2 / 2.0) * tau[subj]**2 \ + np.eye(n_V[subj]) * tau[subj]**2 * 0.001 # A tiny amount is added to the diagonal of # the GP covariance matrix to make sure it can be inverted L = np.linalg.cholesky(K) snr[subj] = np.exp(np.dot(L, np.random.randn(n_V[subj]))) * snr_level[subj] sqrt_v = noise_level[subj] * snr[subj] betas_simulated[subj] = np.dot(L_full, np.random.randn(n_C, n_V[subj])) * sqrt_v signal[subj] = np.dot(design[subj].design_task, betas_simulated[subj]) Y[subj] = signal[subj] + noise[subj] + inten[subj] # The data to be fed to the program. fig = plt.figure(num=None, figsize=(4, 4), dpi=100) plt.pcolor(np.reshape(snr[0], [ROI_edge[0], ROI_edge[0]])) plt.colorbar() ax = plt.gca() ax.set_aspect(1) plt.title('pseudo-SNR in a square "ROI" \nof participant 0') plt.show() snr_all = np.concatenate(snr) idx = np.argmin(np.abs(snr_all - np.median(snr_all))) median_subj = np.min(np.where(idx - np.cumsum(n_V) < 0)) idx = idx - np.cumsum(np.concatenate([[0], n_V]))[median_subj] # choose a voxel of medium level SNR. fig = plt.figure(num=None, figsize=(12, 4), dpi=150, facecolor='w', edgecolor='k') noise_plot, = plt.plot(noise[median_subj][:,idx],'g') signal_plot, = plt.plot(signal[median_subj][:,idx],'b') plt.legend([noise_plot, signal_plot], ['noise', 'signal']) plt.title('simulated data in an example voxel' ' with pseudo-SNR of {} in participant {}'.format(snr[median_subj][idx], median_subj)) plt.xlabel('time') plt.show() fig = plt.figure(num=None, figsize=(12, 4), dpi=150, facecolor='w', edgecolor='k') data_plot, = plt.plot(Y[median_subj][:,idx],'r') plt.legend([data_plot], ['observed data of the voxel']) plt.xlabel('time') plt.show() idx = np.argmin(np.abs(snr_all - np.max(snr_all))) highest_subj = np.min(np.where(idx - np.cumsum(n_V) < 0)) idx = idx - np.cumsum(np.concatenate([[0], n_V]))[highest_subj] # display the voxel of the highest level SNR. fig = plt.figure(num=None, figsize=(12, 4), dpi=150, facecolor='w', edgecolor='k') noise_plot, = plt.plot(noise[highest_subj][:,idx],'g') signal_plot, = plt.plot(signal[highest_subj][:,idx],'b') plt.legend([noise_plot, signal_plot], ['noise', 'signal']) plt.title('simulated data in the voxel with the highest' ' pseudo-SNR of {} in subject {}'.format(snr[highest_subj][idx], highest_subj)) plt.xlabel('time') plt.show() fig = plt.figure(num=None, figsize=(12, 4), dpi=150, facecolor='w', edgecolor='k') data_plot, = plt.plot(Y[highest_subj][:,idx],'r') plt.legend([data_plot], ['observed data of the voxel']) plt.xlabel('time') plt.show() Explanation: In the following, pseudo-SNR is generated from a Gaussian Process defined on a "square" ROI, just for simplicity of code Notice that GBRSA does not make assumption of smoothness of SNR, so it won't utilize this fact. 
End of explanation scan_onsets = [np.int32(np.linspace(0, design[i].n_TR,num=n_run[i] + 1)[: -1]) for i in range(n_subj)] print('scan onsets: {}'.format(scan_onsets)) Explanation: The reason that the pseudo-SNRs in the example voxels are not too small, while the signal looks much smaller is because we happen to have low amplitudes in our design matrix. The true SNR depends on both the amplitudes in design matrix and the pseudo-SNR. Therefore, be aware that pseudo-SNR does not directly reflects how much signal the data have, but rather a map indicating the relative strength of signal in differerent voxels. When you have multiple runs, the noise won't be correlated between runs. Therefore, you should tell BRSA when is the onset of each scan. Note that the data (variable Y above) you feed to BRSA is the concatenation of data from all runs along the time dimension, as a 2-D matrix of time x space End of explanation gbrsa = GBRSA() # Initiate an instance gbrsa.fit(X=Y, design=[d.design_task for d in design],scan_onsets=scan_onsets) # The data to fit should be given to the argument X. # Design matrix goes to design. And so on. Explanation: Fit Group Bayesian RSA to our simulated data The nuisance regressors in typical fMRI analysis (such as head motion signal) are replaced by principal components estimated from residuals after subtracting task-related response. n_nureg tells the model how many principal components to keep from the residual as nuisance regressors, in order to account for spatial correlation in noise. When it is set to None and auto_nuisance=True, this number will be estimated automatically by an algorithm of Gavish & Dohono 2014. If you prefer not using this approach based on principal components of residuals, you can set auto_nuisance=False, and optionally provide your own nuisance regressors as a list (one numpy array per subject) as nuisance argument to GBRSA.fit(). In practice, we find that the result is much better with auto_nuisance=True. The idea of modeling the spatial noise correlation with the principal component decomposition of the residual noise is similar to that in GLMdenoise (http://kendrickkay.net/GLMdenoise/). Apparently one can imagine that the choice of the number of principal components used as nuisance regressors can influence the result. If you just choose 1 or 2, perhaps only the global drift would be captured. But including too many nuisance regressors would slow the fitting speed and might have risk of overfitting. Among all the algorithms we have tested with simulation data, the Gavish & Donoho algorithm appears the most robust and the estimate is closest to the true simulated number. But it does have a tendency to under-estimate the number of components, which is one limitation in (G)BRSA module. End of explanation fig = plt.figure(num=None, figsize=(4, 4), dpi=100) plt.pcolor(gbrsa.C_, vmin=-0.1, vmax=1) plt.xlim([0, 16]) plt.ylim([0, 16]) plt.colorbar() ax = plt.gca() ax.set_aspect(1) plt.title('Estimated correlation structure\n shared between voxels\n' 'This constitutes the output of Bayesian RSA\n') plt.show() fig = plt.figure(num=None, figsize=(4, 4), dpi=100) plt.pcolor(gbrsa.U_) plt.xlim([0, 16]) plt.ylim([0, 16]) plt.colorbar() ax = plt.gca() ax.set_aspect(1) plt.title('Estimated covariance structure\n shared between voxels\n') plt.show() Explanation: We can have a look at the estimated similarity in matrix gbrsa.C_. 
We can also compare the ideal covariance above with the one recovered, gbrsa.U_ End of explanation sum_point_corr = np.zeros((n_C, n_C)) sum_point_cov = np.zeros((n_C, n_C)) betas_point = [None] * n_subj for subj in range(n_subj): regressor = np.insert(design[subj].design_task, 0, 1, axis=1) betas_point[subj] = np.linalg.lstsq(regressor, Y[subj])[0] point_corr = np.corrcoef(betas_point[subj][1:, :]) point_cov = np.cov(betas_point[subj][1:, :]) sum_point_corr += point_corr sum_point_cov += point_cov if subj == 0: fig = plt.figure(num=None, figsize=(4, 4), dpi=100) plt.pcolor(point_corr, vmin=-0.1, vmax=1) plt.xlim([0, 16]) plt.ylim([0, 16]) plt.colorbar() ax = plt.gca() ax.set_aspect(1) plt.title('Correlation structure estimated\n' 'based on point estimates of betas\n' 'for subject {}'.format(subj)) plt.show() fig = plt.figure(num=None, figsize=(4, 4), dpi=100) plt.pcolor(point_cov) plt.xlim([0, 16]) plt.ylim([0, 16]) plt.colorbar() ax = plt.gca() ax.set_aspect(1) plt.title('Covariance structure of\n' 'point estimates of betas\n' 'for subject {}'.format(subj)) plt.show() fig = plt.figure(num=None, figsize=(4, 4), dpi=100) plt.pcolor(sum_point_corr / n_subj, vmin=-0.1, vmax=1) plt.xlim([0, 16]) plt.ylim([0, 16]) plt.colorbar() ax = plt.gca() ax.set_aspect(1) plt.title('Correlation structure estimated\n' 'based on point estimates of betas\n' 'averaged over subjects') plt.show() fig = plt.figure(num=None, figsize=(4, 4), dpi=100) plt.pcolor(sum_point_cov / n_subj) plt.xlim([0, 16]) plt.ylim([0, 16]) plt.colorbar() ax = plt.gca() ax.set_aspect(1) plt.title('Covariance structure of\n' 'point estimates of betas\n' 'averaged over subjects') plt.show() Explanation: In contrast, we can have a look of the similarity matrix based on Pearson correlation between point estimates of betas of different conditions. This is what vanila RSA might give End of explanation subj = highest_subj fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5)) vmax = np.max([np.max(gbrsa.nSNR_[s]) for s in range(n_subj)]) for s in range(n_subj): im = axes[s].pcolor(np.reshape(gbrsa.nSNR_[s], [ROI_edge[s], ROI_edge[s]]), vmin=0,vmax=vmax) axes[s].set_aspect(1) fig.colorbar(im, ax=axes.ravel().tolist(), shrink=0.75) plt.suptitle('estimated pseudo-SNR',fontsize="xx-large" ) plt.show() fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5)) vmax = np.max([np.max(snr[s]) for s in range(n_subj)]) for s in range(n_subj): im = axes[s].pcolor(np.reshape(snr[s], [ROI_edge[s], ROI_edge[s]]), vmin=0,vmax=vmax) axes[s].set_aspect(1) fig.colorbar(im, ax=axes.ravel().tolist(), shrink=0.75) plt.suptitle('simulated pseudo-SNR',fontsize="xx-large" ) plt.show() RMS_GBRSA = np.mean((gbrsa.C_ - ideal_corr)**2)**0.5 RMS_RSA = np.mean((point_corr - ideal_corr)**2)**0.5 print('RMS error of group Bayesian RSA: {}'.format(RMS_GBRSA)) print('RMS error of standard RSA: {}'.format(RMS_RSA)) Explanation: We can make a comparison between the estimated SNR map and the true SNR map End of explanation fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5)) for s in range(n_subj): im = axes[s].scatter(np.log(snr[s]) - np.mean(np.log(snr[s])), np.log(gbrsa.nSNR_[s])) if s == 0: axes[s].set_ylabel('recovered log pseudo-SNR',fontsize='xx-large') if s == int(n_subj/2): axes[s].set_xlabel('true normalized log SNR',fontsize='xx-large') axes[s].set_aspect(1) plt.suptitle('estimated vs. 
simulated normalized log SNR',fontsize="xx-large" ) plt.show() fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5)) for s in range(n_subj): im = axes[s].scatter(snr[s], gbrsa.nSNR_[s]) if s == 0: axes[s].set_ylabel('recovered pseudo-SNR',fontsize='xx-large') if s == int(n_subj/2): axes[s].set_xlabel('true normalized SNR',fontsize='xx-large') axes[s].set_aspect(1) plt.suptitle('estimated vs. simulated SNR',fontsize="xx-large" ) plt.show() Explanation: We can also look at how SNRs are recovered. End of explanation fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5)) for s in range(n_subj): im = axes[s].scatter(betas_simulated[s] , gbrsa.beta_[s]) if s == 0: axes[s].set_ylabel('recovered betas by GBRSA',fontsize='xx-large') if s == int(n_subj/2): axes[s].set_xlabel('true betas',fontsize='xx-large') axes[s].set_aspect(1) plt.suptitle('estimated vs. simulated betas, \nby GBRSA',fontsize="xx-large" ) plt.show() fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5)) for s in range(n_subj): im = axes[s].scatter(betas_simulated[s] , betas_point[s][1:, :]) if s == 0: axes[s].set_ylabel('recovered betas by simple regression',fontsize='xx-large') if s == int(n_subj/2): axes[s].set_xlabel('true betas',fontsize='xx-large') axes[s].set_aspect(1) plt.suptitle('estimated vs. simulated betas, \nby simple regression',fontsize="xx-large" ) plt.show() Explanation: We can also examine the relation between recovered betas and true betas End of explanation noise_new = [None] * n_subj Y_new = [None] * n_subj for subj in range(n_subj): # generating noise K_noise = noise_level[subj][:, np.newaxis] \ * (np.exp(-dist2[subj] / noise_smooth_width**2 / 2.0) \ + np.eye(n_V[subj]) * 0.1) * noise_level[subj] # We make spatially correlated noise by generating # noise at each time point from a Gaussian Process # defined over the coordinates. L_noise = np.linalg.cholesky(K_noise) noise_new[subj] = np.zeros([n_T[subj], n_V[subj]]) noise_new[subj][0, :] = np.dot(L_noise, np.random.randn(n_V[subj]))\ / np.sqrt(1 - rho1[subj]**2) for i_t in range(1, n_T[subj]): noise_new[subj][i_t, :] = noise_new[subj][i_t - 1, :] * rho1[subj] \ + np.dot(L_noise,np.random.randn(n_V[subj])) Y_new[subj] = signal[subj] + noise_new[subj] + inten[subj] ts, ts0 = gbrsa.transform(Y_new,scan_onsets=scan_onsets) # ts is the estimated task-related time course, with each column corresponding to the task condition of the same # column in design matrix. # ts0 is the estimated time courses that have the same spatial spread as those in the training data (X0). # It is possible some task related signal is still in X0 or ts0, but not captured by the design matrix. 
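# Editor's added sketch (not in the original notebook): a quick shape check makes the
# output of transform() concrete. For GBRSA, ts and ts0 are lists with one entry per
# subject; ts[s] should be time x conditions (matching the design matrix columns) and
# ts0[s] time x the number of recovered nuisance components.
for s_chk in range(n_subj):
    print('subject {}: ts shape {}, ts0 shape {}'.format(s_chk, ts[s_chk].shape, ts0[s_chk].shape))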
fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5)) for s in range(n_subj): recovered_plot, = axes[s].plot(ts[s][:200, 8], 'b') design_plot, = axes[s].plot(design[s].design_task[:200, 8], 'g') if s == int(n_subj/2): axes[s].set_xlabel('time',fontsize='xx-large') fig.legend([design_plot, recovered_plot], ['design matrix for one condition', 'recovered time course for the condition'], fontsize='xx-large') plt.show() # We did not plot the whole time series for the purpose of seeing closely how much the two # time series overlap fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5)) for s in range(n_subj): c = np.corrcoef(design[s].design_task.T, ts[s].T) im = axes[s].pcolor(c[0:16, 16:],vmin=-0.5,vmax=1) axes[s].set_aspect(1) if s == int(n_subj/2): axes[s].set_xlabel('recovered time course',fontsize='xx-large') if s == 0: axes[s].set_ylabel('true design matrix',fontsize='xx-large') fig.suptitle('correlation between true design matrix \nand the recovered task-related activity') fig.colorbar(im, ax=axes.ravel().tolist(), shrink=0.75) plt.show() print('average SNR level:', snr_level) print('Apparently how much the recovered time course resembles the true design matrix depends on SNR') Explanation: "Decoding" from new data Now we generate a new data set, assuming signal is the same but noise is regenerated. We want to use the transform() function of gbrsa to estimate the "design matrix" in this new dataset. We keep the signal the same as in training data, but generate new noise. Note that we did this purely for simplicity of simulation. It is totally fine and encouraged for the event timing to be different in your training and testing data. You just need to capture them in your design matrix End of explanation width = 0.35 [score, score_null] = gbrsa.score(X=Y_new, design=[d.design_task for d in design], scan_onsets=scan_onsets) plt.bar(np.arange(n_subj),np.asarray(score)-np.asarray(score_null), width=width) plt.ylim(0, np.max([np.asarray(score)-np.asarray(score_null)])+100) plt.ylabel('cross-validated log likelihood') plt.xlabel('partipants') plt.title('Difference between cross-validated log likelihoods\n of full model and null model\non new data containing signal') plt.show() Y_nosignal = [noise_new[s] + inten[s] for s in range(n_subj)] [score_noise, score_null_noise] = gbrsa.score(X=Y_nosignal, design=[d.design_task for d in design], scan_onsets=scan_onsets) plt.bar(np.arange(n_subj),np.asarray(score_noise)-np.asarray(score_null_noise), width=width) plt.ylim(np.min([np.asarray(score_noise)-np.asarray(score_null_noise)])-100, 0) plt.ylabel('cross-validated log likelihood') plt.xlabel('partipants') plt.title('Difference between cross-validated log likelihoods\n of full model and null model\non pure noise') plt.show() Explanation: Model selection by cross-validataion: Similar to BRSA, you can compare different models by cross-validating the parameters of one model learnt from some training data on some testing data. GBRSA provides a score() function, which returns you a pair of cross-validated log likelihood for testing data. The first returned item is a numpy array of the cross-validated log likelihood of the model you have specified, for the testing data of all the subjects. The second is a numpy arrary of those of a null model which assumes everything else the same except that there is no task-related activity. Notice that comparing the score of your model of interest against its corresponding null model is not the only way to compare models. 
You might also want to compare against a model using the same set of design matrix, but a different rank (especially rank 1, which means all task conditions have the same response pattern, only differing in their magnitude). In general, in the context of GBRSA, a model means the timing of each event and the way these events are grouped, together with other trivial parameters such as the rank of the covariance matrix and the number of nuisance regressors. All these parameters can influence model performance. In future, we will provide interface to evaluate the predictive power for the data by different predefined similarity matrix or covariance matrix. End of explanation gbrsa_noise = GBRSA(n_iter=40) gbrsa_noise.fit(X=[noise[s] + inten[s] for s in range(n_subj)], design=[d.design_task for d in design],scan_onsets=scan_onsets) Y_nosignal = [noise_new[s] + inten[s] for s in range(n_subj)] [score_noise, score_null_noise] = gbrsa_noise.score(X=Y_nosignal, design=[d.design_task for d in design], scan_onsets=scan_onsets) plt.bar(np.arange(n_subj),np.asarray(score_noise)-np.asarray(score_null_noise), width=width) plt.ylim(np.min([np.asarray(score_noise)-np.asarray(score_null_noise)])-100, np.max([np.asarray(score_noise)-np.asarray(score_null_noise)])+100) plt.ylabel('cross-validated log likelihood') plt.xlabel('partipants') plt.title('Difference between cross-validated log likelihoods\n of full model and null model\ntrained on pure noise') plt.show() Explanation: Full model performs better on testing data that has the same property of signal and noise with training data. Below, we fit the model to data containing only noise and test how it performs on data with signal. End of explanation plt.pcolor(gbrsa_noise.U_) plt.colorbar() ax = plt.gca() ax.set_aspect(1) plt.title('covariance matrix of task conditions estimated from pure noise') Explanation: We can see that the difference is smaller but full model generally performs slightly worse, because of overfitting. This is expected. So, after fitting a model to your data, you should also check cross-validated log likelihood on separate runs from the same group of participants, and make sure your model is at least better than a null model before you trust your similarity matrix. Another diagnostic of bad model to your data is very small diagonal values in the shared covariance structure U_ Shown below: End of explanation
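Editor's supplementary sketch (not part of the original notebook): the template below condenses the GBRSA workflow demonstrated in this record, namely fitting on training runs, reading off the shared similarity matrix, cross-validating against the null model on held-out runs, and decoding the held-out time courses, using only the methods already shown (fit, C_, score, transform). The names Y_train, Y_test, design_list, onsets_train and onsets_test are placeholders standing in for the per-subject lists constructed earlier, so treat this as a hedged outline rather than a drop-in cell.
# Hypothetical variable names: Y_train/Y_test are per-subject lists of time-by-voxel arrays,
# design_list the matching list of design matrices, onsets_* the corresponding scan onsets.
gbrsa_demo = GBRSA()
gbrsa_demo.fit(X=Y_train, design=design_list, scan_onsets=onsets_train)
similarity = gbrsa_demo.C_  # conditions x conditions correlation shared across voxels
score_full, score_null = gbrsa_demo.score(X=Y_test, design=design_list,
                                          scan_onsets=onsets_test)
ts_test, ts0_test = gbrsa_demo.transform(Y_test, scan_onsets=onsets_test)
for s in range(len(Y_test)):
    print('subject {}: cross-validated log likelihood advantage over null = {:.1f}'.format(
        s, score_full[s] - score_null[s]))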
1,627
Given the following text description, write Python code to implement the functionality described below step by step Description: Visualization 1 Step1: Scatter plots Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot. Generate random data using np.random.randn. Style the markers (color, size, shape, alpha) appropriately. Include an x and y label and title. Step2: Histogram Learn how to use Matplotlib's plt.hist function to make a 1d histogram. Generate random data using np.random.randn. Figure out how to set the number of histogram bins and other style options. Include an x and y label and title.
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np Explanation: Visualization 1: Matplotlib Basics Exercises End of explanation plt.scatter(np.random.randn(100), np.random.randn(100), c='g', s=50, marker='+', alpha=0.7) plt.xlabel('Random x values') plt.ylabel('Random y values') plt.title('Randomness Fun!') Explanation: Scatter plots Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot. Generate random data using np.random.randn. Style the markers (color, size, shape, alpha) appropriately. Include an x and y label and title. End of explanation plt.hist(np.random.randn(100), bins=5, log=True, orientation='horizontal') plt.xlabel('Logarithmic Probability') plt.ylabel('Random Number') plt.title('Probability of Random Numbers') Explanation: Histogram Learn how to use Matplotlib's plt.hist function to make a 1d histogram. Generate random data using np.random.randn. Figure out how to set the number of histogram bins and other style options. Include an x and y label and title. End of explanation
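Editor's supplementary sketch (an addition, not part of the original exercises): the snippet below puts the two exercises side by side in one figure and spells out a few more of the styling options the description asks about (bin count, colors, edge color, alpha); everything used here is standard matplotlib and numpy, both already imported above.
import numpy as np
import matplotlib.pyplot as plt

# Random data shared by both panels
x = np.random.randn(200)
y = np.random.randn(200)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
# Scatter plot: color, size, marker shape and transparency set explicitly
ax1.scatter(x, y, c='purple', s=40, marker='o', alpha=0.5)
ax1.set_xlabel('Random x values')
ax1.set_ylabel('Random y values')
ax1.set_title('Scatter of 200 normal samples')
# Histogram: bin count, fill color, edge color and transparency set explicitly
ax2.hist(x, bins=20, color='steelblue', edgecolor='black', alpha=0.8)
ax2.set_xlabel('Value')
ax2.set_ylabel('Count')
ax2.set_title('Histogram with 20 bins')
plt.tight_layout()
plt.show()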
1,628
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Chemistry Scheme Scope Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Form Is Required Step9: 1.6. Number Of Tracers Is Required Step10: 1.7. Family Approach Is Required Step11: 1.8. Coupling With Chemical Reactivity Is Required Step12: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required Step13: 2.2. Code Version Is Required Step14: 2.3. Code Languages Is Required Step15: 3. Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required Step16: 3.2. Split Operator Advection Timestep Is Required Step17: 3.3. Split Operator Physical Timestep Is Required Step18: 3.4. Split Operator Chemistry Timestep Is Required Step19: 3.5. Split Operator Alternate Order Is Required Step20: 3.6. Integrated Timestep Is Required Step21: 3.7. Integrated Scheme Type Is Required Step22: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required Step23: 4.2. Convection Is Required Step24: 4.3. Precipitation Is Required Step25: 4.4. Emissions Is Required Step26: 4.5. Deposition Is Required Step27: 4.6. Gas Phase Chemistry Is Required Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required Step30: 4.9. Photo Chemistry Is Required Step31: 4.10. Aerosols Is Required Step32: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required Step33: 5.2. Global Mean Metrics Used Is Required Step34: 5.3. Regional Metrics Used Is Required Step35: 5.4. Trend Metrics Used Is Required Step36: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required Step37: 6.2. Matches Atmosphere Grid Is Required Step38: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required Step39: 7.2. Canonical Horizontal Resolution Is Required Step40: 7.3. Number Of Horizontal Gridpoints Is Required Step41: 7.4. Number Of Vertical Levels Is Required Step42: 7.5. Is Adaptive Grid Is Required Step43: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required Step44: 8.2. Use Atmospheric Transport Is Required Step45: 8.3. Transport Details Is Required Step46: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required Step47: 10. 
Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required Step48: 10.2. Method Is Required Step49: 10.3. Prescribed Climatology Emitted Species Is Required Step50: 10.4. Prescribed Spatially Uniform Emitted Species Is Required Step51: 10.5. Interactive Emitted Species Is Required Step52: 10.6. Other Emitted Species Is Required Step53: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required Step54: 11.2. Method Is Required Step55: 11.3. Prescribed Climatology Emitted Species Is Required Step56: 11.4. Prescribed Spatially Uniform Emitted Species Is Required Step57: 11.5. Interactive Emitted Species Is Required Step58: 11.6. Other Emitted Species Is Required Step59: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required Step60: 12.2. Prescribed Upper Boundary Is Required Step61: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required Step62: 13.2. Species Is Required Step63: 13.3. Number Of Bimolecular Reactions Is Required Step64: 13.4. Number Of Termolecular Reactions Is Required Step65: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required Step66: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required Step67: 13.7. Number Of Advected Species Is Required Step68: 13.8. Number Of Steady State Species Is Required Step69: 13.9. Interactive Dry Deposition Is Required Step70: 13.10. Wet Deposition Is Required Step71: 13.11. Wet Oxidation Is Required Step72: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required Step73: 14.2. Gas Phase Species Is Required Step74: 14.3. Aerosol Species Is Required Step75: 14.4. Number Of Steady State Species Is Required Step76: 14.5. Sedimentation Is Required Step77: 14.6. Coagulation Is Required Step78: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required Step79: 15.2. Gas Phase Species Is Required Step80: 15.3. Aerosol Species Is Required Step81: 15.4. Number Of Steady State Species Is Required Step82: 15.5. Interactive Dry Deposition Is Required Step83: 15.6. Coagulation Is Required Step84: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required Step85: 16.2. Number Of Reactions Is Required Step86: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required Step87: 17.2. Environmental Conditions Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-2', 'atmoschem') Explanation: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era: CMIP6 Institute: CSIRO-BOM Source ID: SANDBOX-2 Topic: Atmoschem Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. Properties: 84 (39 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:56 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmospheric chemistry model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmospheric chemistry model code. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Chemistry Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. 
Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/mixing ratio for gas" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Form of prognostic variables in the atmospheric chemistry component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of advected tracers in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry calculations (not advection) generalized into families of species? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.8. Coupling With Chemical Reactivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Operator splitting" # "Integrated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. 
Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the evolution of a given variable End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemical species advection (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for physics (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Split Operator Chemistry Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemistry (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.5. Split Operator Alternate Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.6. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the atmospheric chemistry model (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3.7. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.2. Convection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Precipitation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.4. Emissions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.5. Deposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.6. Gas Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.9. Photo Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.10. Aerosols Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the atmopsheric chemistry grid End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 * Does the atmospheric chemistry grid match the atmosphere grid?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 7.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview of transport implementation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.2. Use Atmospheric Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is transport handled by the atmosphere, rather than within atmospheric cehmistry? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.transport_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Transport Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If transport is handled within the atmospheric chemistry scheme, describe it. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric chemistry emissions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Soil" # "Sea surface" # "Anthropogenic" # "Biomass burning" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via any other method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Aircraft" # "Biomass burning" # "Lightning" # "Volcanos" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. 
Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an &quot;other method&quot; End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview gas phase atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HOx" # "NOy" # "Ox" # "Cly" # "HSOx" # "Bry" # "VOCs" # "isoprene" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Species included in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.3. Number Of Bimolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of bi-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.4. Number Of Termolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of ter-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.7. Number Of Advected Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of advected species in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.8. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.9. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.10. Wet Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.11. Wet Oxidation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview stratospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Cly" # "Bry" # "NOy" # TODO - please enter value(s) Explanation: 14.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Gas phase species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule))" # TODO - please enter value(s) Explanation: 14.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.5. Sedimentation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview tropospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of gas phase species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon/soot" # "Polar stratospheric ice" # "Secondary organic aerosols" # "Particulate organic matter" # TODO - please enter value(s) Explanation: 15.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.5. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the tropospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric photo chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 16.2. Number Of Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the photo-chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline (clear sky)" # "Offline (with clouds)" # "Online" # TODO - please enter value(s) Explanation: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Photolysis scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.2. Environmental Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.) End of explanation
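As an illustration only — the values below are placeholders picked from the valid choices listed above, not documentation of any real model — a completed photolysis entry could look like the following, assuming DOC is the pyesdoc document object initialised earlier in the notebook:

# Hypothetical completed entry for the photolysis properties (placeholder values).
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
DOC.set_value("Offline (with clouds)")  # one of the ENUM choices listed in 17.1
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
DOC.set_value("Pressure- and temperature-sensitive cross-sections follow modelled conditions")  # free-text STRING property (17.2)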
1,629
Given the following text description, write Python code to implement the functionality described below step by step Description: Spark Machine Learning Pipeline This coursework is about implementing and applying Spark Machine Learning Pipelines, and evaluating them with respect to preprocessing, parametrisation, and scaling. 1. Data set initial analysis and summary of pipeline task. (20%) 1.1. Summary of machine learning pipeline Step 1. Step 2. Step 3. Step 4. 1.2. Loading data to RDD Step1: 1.3. Descriptive Statistics Step2: 1.4. Data Cleaning Step3: 2. Implementation of machine learning pipeline. (25%) Implement a machine learning pipeline in Spark, including feature extractors, transformers, and/or selectors. Test that your pipeline it is correctly implemented and explain your choice of processing steps, learning algorithms, and parameter settings. Step4: 3. Evaluation and test of model. (20%) Evaluate the performance of your pipeline using training and test set (don’t use CV but pyspark.ml.tuning.TrainValidationSplit). 3.1. Evaluate performance of machine learning pipeline on training data and test data. Step5: 4. Model fine-tuning. (35%) Implement a parameter grid (using pyspark.ml.tuning.ParamGridBuilder[source]), varying at least one feature preprocessing step, one machine learning parameter, and the training set size. Document the training and test performance and the time taken for training and testing. Comment on your findings. 4.1. Training set size evaluation Step6: 4.2. Machine Learning Model Hyperparameter search
Python Code: # import dependencies for creating a data frame from pyspark.sql import SparkSession from pyspark.sql import Row from pyspark.sql.types import * import csv # Create SparkSession spark = SparkSession.builder.getOrCreate() # create RDD from csv files trainRDD = spark.read.csv("hdfs://saltdean/data/data/santander-products/train_ver2.csv", header=True, mode="DROPMALFORMED", schema=schema) # alternatively... # create RDD from csv files trainRDD = sc.textFile("hdfs://saltdean/data/data/santander-products/train_ver2.csv") trainRDD = trainRDD.mapPartitions(lambda x: csv.reader(x)) # alternatively... from https://spark.apache.org/docs/latest/sql-programming-guide.html#programmatically-specifying-the-schema # create RDD from csv files lines = sc.textFile("hdfs://saltdean/data/data/santander-products/train_ver2.csv") elements = lines.map(lambda l: l.split(",")) # Each line is converted to a tuple. clients = elements.map(lambda p: (p[0], p[1].strip(),p[2],...)) # The schema is encoded in a string. schemaString = "name age ..." fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()] schema = StructType(fields) # Apply the schema to the RDD and register the DataFrame to be used with Spark SQL. trainRDD = spark.createDataFrame(clients, schema) trainRDD.createOrReplaceTempView('trainingset') # alternatively, as seen in tutorial 8: lines = sc.textFile("hdfs://saltdean/data/data/santander-products/train_ver2.csv") parts = lines.map(lambda l: l.split(",")) trainRDD = parts.map(lambda p: Row(userId=int(p[0]), movieId=int(p[1]), rating=float(p[2]), timestamp=int(p[3]))) # Create DataFrame and register it to be used with Spark SQL. trainData = spark.createDataFrame(trainRDD) trainData.createOrReplaceTempView('Clients') # For testing print(trainData.describe()) # columns info print(trainData.count()) # number of instances Explanation: Spark Machine Learning Pipeline This coursework is about implementing and applying Spark Machine Learning Pipelines, and evaluating them with respect to preprocessing, parametrisation, and scaling. 1. Data set initial analysis and summary of pipeline task. (20%) 1.1. Summary of machine learning pipeline Step 1. Step 2. Step 3. Step 4. 1.2. Loading data to RDD End of explanation # Code modified from # https://www.kaggle.com/apryor6/santander-product-recommendation/detailed-cleaning-visualization-python/notebook # import dependencies import numpy as np import pandas as pd # create dataframe 'df' limit_rows = 7000000 df = pd.read_csv("hdfs://saltdean/data/data/santander-products/train_ver2.csv", dtype={"sexo":str, "ult_fec_cli_1t":str, "indext":str}, nrows=limit_rows) unique_ids = pd.Series(df["ncodpers"].unique()) limit_people = 1.2e4 unique_id = unique_ids.sample(n=limit_people) df = df[df.ncodpers.isin(unique_id)] df.count() # number of instances df.describe() Explanation: 1.3. 
Descriptive Statistics End of explanation # find missing values df.isnull().any() # Remove age outliers and nan from dataframe df.loc[df.age < 18,"age"] = df.loc[(df.age >= 18) & (df.age <= 30),"age"].mean(skipna=True) # replace outliers with the mean df.loc[df.age > 100,"age"] = df.loc[(df.age >= 30) & (df.age <= 100),"age"].mean(skipna=True) # replace outliers with the mean df["age"].fillna(df["age"].mean(),inplace=True) # replace nan with mean df["age"] = df["age"].astype(int) # Replace missing values df.loc[df["ind_nuevo"].isnull(),"ind_nuevo"] = 1 # new customers id '1' df.loc[df.antiguedad.isnull(),"antiguedad"] = df.antiguedad.min() df.loc[df.antiguedad <0, "antiguedad"] = 0 # new customer antiguedad '0' df.loc[df.indrel.isnull(),"indrel"] = 1 dates=df.loc[:,"fecha_alta"].sort_values().reset_index() median_date = int(np.median(dates.index.values)) df.loc[df.fecha_alta.isnull(),"fecha_alta"] = dates.loc[median_date,"fecha_alta"] # fill join date missing values df.loc[df.ind_actividad_cliente.isnull(),"ind_actividad_cliente"] = \ df["ind_actividad_cliente"].median() # fill in missing customer activity df.loc[df.nomprov.isnull(),"nomprov"] = "UNKNOWN" # fill missing city of residence with UNKNOWN df.loc[df.indfall.isnull(),"indfall"] = "N" # missing deceased index set to N df.loc[df.tiprel_1mes.isnull(),"tiprel_1mes"] = "A" # customer status, if missing = active df.tiprel_1mes = df.tiprel_1mes.astype("category") # customer status as categorical # Customer type normalization as categorical variable map_dict = { 1.0:"1", "1.0":"1", "1":"1", "3.0":"3", "P":"P", 3.0:"3", 2.0:"2", "3":"3", "2.0":"2", "4.0":"4", "4":"4", "2":"2"} df.indrel_1mes.fillna("P",inplace=True) df.indrel_1mes = df.indrel_1mes.apply(lambda x: map_dict.get(x,x)) df.indrel_1mes = df.indrel_1mes.astype("category") # Replace missing values in target features with 0 # target features = boolean indicator as to whether or not that product was owned that month df.loc[df.ind_nomina_ult1.isnull(), "ind_nomina_ult1"] = 0 df.loc[df.ind_nom_pens_ult1.isnull(), "ind_nom_pens_ult1"] = 0 # Eliminate entries with nan values in given variables, e.g.: print("Total number of entries before removing nan= ", df.count()) df.renta.isnull().sum() df.dropna(subset=["renta","indfall","tiprel_1mes","indrel_1mes"], inplace=True) # pandas dropna (the Spark-style df.na.drop fails on a pandas DataFrame) df.renta.isnull().sum() print("Total number of entries after removing nan= ", df.count()) # Eliminate redundant variables df.drop(["tipodom","cod_prov"],axis=1,inplace=True) # check all missing values are gone df.isnull().any() # Convert target features column into integers feature_cols = df.iloc[:1,].filter(regex="ind_+.*ult.*").columns.values for col in feature_cols: df[col] = df[col].astype(int) Explanation: 1.4.
Data Cleaning End of explanation # code modified from Spark documentation at: # https://spark.apache.org/docs/2.1.0/ml-classification-regression.html#random-forest-classifier # and DataBricks at: # https://docs.databricks.com/spark/latest/mllib/binary-classification-mllib-pipelines.html # imports dependencies for Random Forest pipeline from pyspark.ml import Pipeline from pyspark.ml.classification import RandomForestClassifier from pyspark.ml.feature import IndexToString, StringIndexer, VectorIndexer, OneHotEncoder, VectorAssembler # stages in the Pipeline stages = [] # One-Hot Encoding categoricalColumns = ["a", "b", "c", "d", "e", "f", "g", "j"] for categoricalCol in categoricalColumns: stringIndexer = StringIndexer(inputCol=categoricalCol, outputCol=categoricalCol+"Index") # Category Indexing with StringIndexer encoder = OneHotEncoder(inputCol=categoricalCol+"Index", outputCol=categoricalCol+"classVec") # Use OneHotEncoder to convert categorical variables into binary SparseVectors stages += [stringIndexer, encoder] # Add stages to the pipeline # Convert labels into label indices using the StringIndexer label_stringIdx = StringIndexer(inputCol = "add here target column in csv file", outputCol = "labels") stages += [label_stringIdx] # Add stage to the pipeline # Transform all features into a vector using VectorAssembler numericCols = ["m", "n", "o", "p", "q", "r"] assemblerInputs = list(map(lambda c: c + "classVec", categoricalColumns)) + numericCols assembler = VectorAssembler(inputCols=assemblerInputs, outputCol="features") stages += [assembler] # Add stage to the pipeline # Train a RandomForest model. rf = RandomForestClassifier(labelCol="labels", featuresCol="features", numTrees=100, # Number of trees in the random forest impurity='entropy', # Criterion used for information gain calculation featureSubsetStrategy="auto", predictionCol="prediction", maxDepth=5, maxBins=32, minInstancesPerNode=1, minInfoGain=0.0, subsamplingRate=1.0) stages += [rf] # Add stage to the pipeline # Machine Learning Pipeline pipeline = Pipeline(stages=stages) Explanation: 2. Implementation of machine learning pipeline. (25%) Implement a machine learning pipeline in Spark, including feature extractors, transformers, and/or selectors. Test that your pipeline is correctly implemented and explain your choice of processing steps, learning algorithms, and parameter settings.
End of explanation # imports dependencies from pyspark.ml.tuning import CrossValidator, ParamGridBuilder, TrainValidationSplit from pyspark.ml.evaluation import MulticlassClassificationEvaluator # Split data into training set and testing set [trainData, testData] = trainData.randomSplit([0.8, 0.2], seed = 100) # Train model in pipeline rfModel = pipeline.fit(trainData) # Make predictions for training set and compute training set accuracy predictions = rfModel.transform(trainData) evaluator = MulticlassClassificationEvaluator(labelCol="labels", predictionCol="prediction", metricName="accuracy") accuracy = evaluator.evaluate(predictions) print("Training Error = %g" % (1.0 - accuracy)) print(rfModel.stages[0]) # summary of the first fitted pipeline stage # The fitted pipeline model already applies the feature transformations to the test set # Make predictions for test set and compute test error test_predictions = rfModel.transform(testData) test_accuracy = evaluator.evaluate(test_predictions) print("Test Error = %g" % (1.0 - test_accuracy)) Explanation: 3. Evaluation and test of model. (20%) Evaluate the performance of your pipeline using training and test set (don’t use CV but pyspark.ml.tuning.TrainValidationSplit). 3.1. Evaluate performance of machine learning pipeline on training data and test data. End of explanation print('Training set size evaluation') # size of different training set to be evaluated, and split of training set sizes = [0.5, 0.1, 0.05, 0.01, 0.001] data = trainData.randomSplit(sizes, seed = 100) print('\n=== training set of size 100%') # Train model in pipeline tempModel = pipeline.fit(trainData) # Make predictions for training set and compute training set accuracy tempPredictions = tempModel.transform(trainData) tempAccuracy = evaluator.evaluate(tempPredictions) print("Classification Error = %g" % (1.0 - tempAccuracy)) for frac, subset in zip(sizes, data): print('\n=== training set of size reduced to %g' % frac) # Train model in pipeline tempModel = pipeline.fit(subset) # Make predictions for the reduced training set and compute its accuracy tempPredictions = tempModel.transform(subset) tempAccuracy = evaluator.evaluate(tempPredictions) print("Classification Error = %g" % (1.0 - tempAccuracy)) Explanation: 4. Model fine-tuning. (35%) Implement a parameter grid (using pyspark.ml.tuning.ParamGridBuilder), varying at least one feature preprocessing step, one machine learning parameter, and the training set size. Document the training and test performance and the time taken for training and testing. Comment on your findings. 4.1.
Training set size evaluation End of explanation # Define hyperparameters and their values to search and evaluate paramGrid = ParamGridBuilder() \ .addGrid(rf.numTrees, [10,20,50,100,200,500,1000,5000]) \ .addGrid(rf.minInstancesPerNode, [1,2,4,6,8,10]) \ .addGrid(rf.maxDepth, [2,5,10,20,50]).build() # Grid Search and Cross Validation crossVal = CrossValidator(estimator=pipeline, estimatorParamMaps=paramGrid, evaluator=evaluator) print('starting Hyperparameter Grid Search with cross-validation') rfCrosVal = crossVal.fit(trainData) print('Grid Search has finished') print(rfCrosVal.bestModel.stages[-1]) # the fitted random forest from the best pipeline paramMap = list(zip(rfCrosVal.getEstimatorParamMaps(),rfCrosVal.avgMetrics)) paramMax = max(paramMap, key=lambda x: x[1]) print(paramMax) # Evaluate the model with test data cvtest_predictions = rfCrosVal.transform(testData) cvtest_accuracy = evaluator.evaluate(cvtest_predictions) print("Test Error = %g" % (1.0 - cvtest_accuracy)) Explanation: 4.2. Machine Learning Model Hyperparameter search End of explanation
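Section 3 above asks for pyspark.ml.tuning.TrainValidationSplit (rather than cross-validation) and section 4 asks for the time taken for training and testing, but neither appears in the code. A minimal sketch of that evaluation, assuming the pipeline, paramGrid, evaluator, trainData and testData objects defined above are in scope:

import time
from pyspark.ml.tuning import TrainValidationSplit

# Tune the whole pipeline; hold out 25% of trainData for validation inside the tuner.
tvs = TrainValidationSplit(estimator=pipeline, estimatorParamMaps=paramGrid,
                           evaluator=evaluator, trainRatio=0.75)

start = time.time()
tvsModel = tvs.fit(trainData)          # the pipeline stages handle feature assembly
train_time = time.time() - start

start = time.time()
tvs_accuracy = evaluator.evaluate(tvsModel.transform(testData))
test_time = time.time() - start

print("Test Error = %g (training took %.1f s, testing %.1f s)"
      % (1.0 - tvs_accuracy, train_time, test_time))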
1,630
Given the following text description, write Python code to implement the functionality described below step by step Description: Access Ensembl BioMart using biomart module We use rpy2 and R magics in IPython Notebook to utilize the powerful biomaRt package in R. Usage Step1: Tutorial What marts are available? Current build (currently not working...) Step2: Sometimes you need to specify a particular genome build (e.g., GTEx v6 used GENCODE v19, which was based on GRCh37.p13 = Ensembl 74) Step3: What datasets are available? Step4: Select a mart & dataset Step5: For an old archive, you can even specify the archive version when calling useMart, e.g., Step6: We will use mart build v74 as our example Step7: What attributes and filters can I use? Attributes are the identifiers that you want to retrieve. For example HGNC gene ID, chromosome name, Ensembl transcript ID. Filters are the identifiers that you supply in a query. Some but not all of the filter names may be the same as the attribute names. Values are the filter identifiers themselves. For example the values of the filter “HGNC symbol” could be 3 genes “TP53”, “SRY” and “KIAA1199”. Step8: You can search for specific attributes by running grep() on the name. For example, if you’re looking for Affymetrix microarray probeset IDs Step9: Demo All genes on Y chromosome Query in R Step10: Accessible in Python Step11: Annotate a gene list
Python Code: import pandas as pd %load_ext rpy2.ipython %%R library(biomaRt) %load_ext version_information %version_information pandas, rpy2 Explanation: Access Ensembl BioMart using biomart module We use rpy2 and R magics in IPython Notebook to utilize the powerful biomaRt package in R. Usage: Run Setup Select a mart & dataset Demo All genes on Y chromosome Annotate a gene list Ref: Blog post: Some basics of biomaRt Table of Assemblies: http://www.ensembl.org/info/website/archives/assembly.html Setup End of explanation %%R marts = listMarts() head(marts) Explanation: Tutorial What marts are available? Current build (currently not working...): End of explanation %%R marts.v74 = listMarts(host="dec2013.archive.ensembl.org") head(marts.v74) Explanation: Sometimes you need to specify a particular genome build (e.g., GTEx v6 used GENCODE v19, which was based on GRCh37.p13 = Ensembl 74): End of explanation %%R datasets = listDatasets(useMart("ensembl")) head(datasets) Explanation: What datasets are available? End of explanation %%R mart.hsa = useMart("ensembl", "hsapiens_gene_ensembl") Explanation: Select a mart & dataset End of explanation %%R mart74.hsa = useMart("ENSEMBL_MART_ENSEMBL", "hsapiens_gene_ensembl", host="dec2013.archive.ensembl.org") Explanation: For an old archive, you can even specify the archive version when calling useMart, e.g., End of explanation %%R mart.hsa = mart74.hsa Explanation: We will use mart build v74 as our example End of explanation %%R attributes <- listAttributes(mart.hsa) head(attributes) %%R filters <- listFilters(mart.hsa) head(filters) Explanation: What attributes and filters can I use? Attributes are the identifiers that you want to retrieve. For example HGNC gene ID, chromosome name, Ensembl transcript ID. Filters are the identifiers that you supply in a query. Some but not all of the filter names may be the same as the attribute names. Values are the filter identifiers themselves. For example the values of the filter “HGNC symbol” could be 3 genes “TP53”, “SRY” and “KIAA1199”. End of explanation %%R head(attributes[grep("affy", attributes$name),]) Explanation: You can search for specific attributes by running grep() on the name. For example, if you’re looking for Affymetrix microarray probeset IDs: End of explanation %%R -o df df = getBM(attributes=c("ensembl_gene_id", "hgnc_symbol", "chromosome_name"), filters="chromosome_name", values="Y", mart=mart.hsa) head(df) Explanation: Demo All genes on Y chromosome Query in R: End of explanation df.head() Explanation: Accessible in Python: End of explanation genes = ["ENSG00000135245", "ENSG00000240758", "ENSG00000225490"] %%R -i genes -o df df = getBM(attributes=c("ensembl_gene_id", "hgnc_symbol", "external_gene_id", "chromosome_name", "gene_biotype", "description"), filters="ensembl_gene_id", values=genes, mart=mart.hsa) df df Explanation: Annotate a gene list End of explanation
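The %%R cells above hand results back to Python with the -o flag; the same query can also be issued from plain Python through rpy2's importr interface. A rough sketch, reusing the host and dataset names from the cells above (the pandas conversion call differs slightly between rpy2 versions, so treat it as an approximation):

from rpy2.robjects.packages import importr
from rpy2.robjects import StrVector, pandas2ri

biomaRt = importr("biomaRt")

# Same archived Ensembl 74 mart used in the %%R cells above.
mart74 = biomaRt.useMart("ENSEMBL_MART_ENSEMBL", "hsapiens_gene_ensembl",
                         host="dec2013.archive.ensembl.org")

res = biomaRt.getBM(attributes=StrVector(["ensembl_gene_id", "hgnc_symbol", "chromosome_name"]),
                    filters="chromosome_name", values="Y", mart=mart74)

df_y = pandas2ri.rpy2py(res)  # rpy2 >= 3.x; older releases used pandas2ri.ri2py
print(df_y.head())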
1,631
Given the following text description, write Python code to implement the functionality described below step by step Description: 누적 분포 함수와 확률 밀도 함수 누적 분포 함수(cumulative distribution function)와 확률 밀도 함수(probabiligy density function)는 확률 변수의 분포 즉, 확률 분포를 수학적으로 정의하기 위한 수식이다. 확률 분포의 묘사 확률의 정의에서 확률은 사건(event)이라는 표본의 집합에 대해 할당된 숫자라고 하였다. 데이터 분석을 하려면 확률이 구체적으로 어떻게 할당되었는지를 묘사(describe)하거 전달(communicate)해야 할 필요가 있다. 어떤 사건에 어느 정도의 확률이 할당되었는지를 묘사한 것을 확률 분포(distribution)라고 한다. 확률 분포를 묘사하기 위해서는 모든 사건(event)들을 하나 하나 제시하고 거기에 할당된 숫자를 보여야 하기 때문에 확률 분포의 묘사는 결코 쉽지 않은 작업이다. 그러나 확률 변수를 이용하면 이러한 묘사 작업이 좀 더 쉬워진다. 왜냐하면 사건(event)이 구간(interval)이 되고 이 구산을 지정하는데는 시작점과 끝점이라는 두 개의 숫자만 있으면 되기 때문이다. [[school_notebook Step1: 시계 바늘 확률 문제의 경우를 예로 들어보자. 이 경우에는 각도가 0도부터 360까지이지만 음의 무한대를 시작점으로 해도 상관없다. $$ F(0) = P({ -\infty {}^{\circ} \leq \theta < 0 {}^{\circ} }) = 0 $$ $$ F(10) = P({ -\infty {}^{\circ} \leq \theta < 10 {}^{\circ} }) = \dfrac{1}{36} $$ $$ F(20) = P({ -\infty {}^{\circ} \leq \theta < 20 {}^{\circ} }) = \dfrac{2}{36} $$ $$ \vdots $$ $$ F(350) = P({ -\infty {}^{\circ} \leq \theta < 350 {}^{\circ} }) = \dfrac{35}{36} $$ $$ F(360) = P({ -\infty {}^{\circ} \leq \theta < 360 {}^{\circ} }) = 1 $$ $$ F(370) = P({ -\infty {}^{\circ} \leq \theta < 370 {}^{\circ} }) = 1 $$ $$ F(380) = P({ -\infty {}^{\circ} \leq \theta < 380 {}^{\circ} }) = 1 $$ $$ \vdots $$ 이를 NumPy와 matplotlib를 사용하여 그래프로 그래면 다음과 같다. Step2: 누적 밀도 함수 즉 cdf는 다음과 같은 특징을 가진다. $F(-\infty) = 0$ $F(+\infty) = 1$ $F(x) \geq F(y) \;\; \text{ if } \;\; x > y $ 확률 밀도 함수 누적 분포 함수는 확률 분포를 함수라는 편리한 상태로 바꾸어 주었다. 누적 분포 함수는 확률이 어느 사건(event)에 어느 정도 분포되어 있는지 수학적으로 명확하게 표현해 준다. 그러나 누적 분포 함수가 표현하는 사건이 음수 무한대를 시작점으로 하고 변수 $x$를 끝점으로 하는 구간이다보니 분포의 형상을 직관적으로 이해하기는 힘든 단점이 있다. 다시 말해서 어떤 확률 변수 값이 더 자주 나오는지에 대한 정보를 알기 힘들다는 점이다. 이를 알기 위해서는 확률 변수가 나올 수 있는 전체 구간 ($-\infty$ ~ $\infty$)을 아주 작은 폭을 가지는 구간들로 나눈 다음 각 구간의 확률을 살펴보는 것이 편리하다. 다만 이렇게 되면 구간의 폭(width)을 어느 정도로 정의해야 하는지에 대한 추가적인 약속이 필요하기 때문에 실효성이 떨어진다. 이러한 단점을 보완하기 위해 생각한 것이 절대적인 확률이 아닌 상대적인 확률 분포 형태만을 보기 위한 확률 밀도 함수(probability density function)이다. 누적 확률 분포 그래프의 x축의 오른쪽으로 이동하면서 크기의 변화를 살펴보자.만약 특정한 $x$값 근처의 구간에 확률이 배정되지 않았다면 누적 분포 함수는 그 구간을 지나도 증가하지 않는다. 즉, 기울기가 0이다. 왜냐하면 $x$ 값이 커졌다(x축의 오른쪽으로 이동하였다)는 것은 앞의 구간을 포함하는 더 큰 구간(사건)에 대한 확률을 묘사하고 있는 것인데 추가적으로 포함된 신규 구간에 확률이 없다면 그 신규 구간을 포함한 구간이나 포함하지 않은 구간이나 배정된 확률이 같기 때문이다. 누적 분포 함수의 기울기가 0이 아닌 경우는 추가적으로 포함된 구간에 0이 아닌 확률이 할당되어 있는 경우이다. 만약 더 많은 확률이 할당되었다면 누적 분포 함수는 그 구간을 지나면서 더 빠른 속도로 증가할 것이다. 다시 말해서 함수의 기울기가 커진다. 이러한 방식으로 누적 분포의 기울기의 크기를 보면 각 위치에 배정된 확률의 상대적 크기를 알 수 있다. 기울기를 구하는 수학적 연산이 미분(differentiation)이므로 확률 밀도 함수는 누적 분포 함수의 미분으로 정의한다. $$ \dfrac{dF(x)}{dx} = f(x) $$ 이를 적분으로 나타내면 다음과 같다. $$ F(x) = \int_{-\infty}^{x} f(u) du $$ 확률 밀도 함수는 특정 확률 변수 구간의 확률이 다른 구간에 비해 상대적으로 얼마나 높은가를 나타내는 것이며 그 값 자체가 확률은 아니다라는 점을 명심해야 한다. 확률 밀도 함수는 다음과 같은 특징을 가진다. $-\infty$ 부터 $\infty$ 까지 적분하면 그 값은 1이 된다. $$ \int_{-\infty}^{\infty} f(u)du = 1$$ 확률 밀도 함수는 0보다 같거나 크다. $$ f(x) \geq 0 $$ 앞서 보인 시계 바늘 문제에서 확률 밀도함수를 구하면 다음과 같다. Step3: 확률 질량 함수 이산 확률 분포는 확률 밀도 함수를 정의할 수 없는 대신 확률 질량 함수가 존재한다. 확률 질량 함수(probability mass funtion)는 이산 확률 변수의 가능한 값 하나 하나에 대해 확률을 정의한 함수이다. 예를 들어 6면체인 주사위를 던져서 나올 수 있는 값은 1부터 6까지의 이산적인 값을 가지는데 이러한 이산 확률 변수는 예를 들어 다음과 같은 확률 질량 함수를 가질 수 있다. 이 경우에는 공정하지 않은(unfair) 주사위의 확률 분포를 보이고 있다. Step4: 위의 확률 질량 함수는 주사위 눈금 1이 나오지 않고 6이 비정상적으로 많이 나오게 만든 비정상적인 주사위(unfair dice)를 묘사한다. 이 확률 변수에 대해 각 값을 누적하여 더하면 이산 확률 변수의 누적 분포 함수(cumulative distribution function)를 구할 수 있다.
Python Code: %%tikz \filldraw [fill=white] (0,0) circle [radius=1cm]; \foreach \angle in {60,30,...,-270} { \draw[line width=1pt] (\angle:0.9cm) -- (\angle:1cm); } \draw (0,0) -- (90:0.8cm); Explanation: 누적 분포 함수와 확률 밀도 함수 누적 분포 함수(cumulative distribution function)와 확률 밀도 함수(probabiligy density function)는 확률 변수의 분포 즉, 확률 분포를 수학적으로 정의하기 위한 수식이다. 확률 분포의 묘사 확률의 정의에서 확률은 사건(event)이라는 표본의 집합에 대해 할당된 숫자라고 하였다. 데이터 분석을 하려면 확률이 구체적으로 어떻게 할당되었는지를 묘사(describe)하거 전달(communicate)해야 할 필요가 있다. 어떤 사건에 어느 정도의 확률이 할당되었는지를 묘사한 것을 확률 분포(distribution)라고 한다. 확률 분포를 묘사하기 위해서는 모든 사건(event)들을 하나 하나 제시하고 거기에 할당된 숫자를 보여야 하기 때문에 확률 분포의 묘사는 결코 쉽지 않은 작업이다. 그러나 확률 변수를 이용하면 이러한 묘사 작업이 좀 더 쉬워진다. 왜냐하면 사건(event)이 구간(interval)이 되고 이 구산을 지정하는데는 시작점과 끝점이라는 두 개의 숫자만 있으면 되기 때문이다. [[school_notebook:4bcfe70a64de40ec945639236b0e911d]] 그러나 사건(event) 즉, 구간(interval) 하나를 정의하기 위해 숫자가 하나가 아닌 두 개가 필요하다는 점은 아무래도 불편하다. 숫자 하나만으로 사건 즉, 구간을 정의할 수 있는 방법은 없을까? 이를 해결하기 위한 아이디어 중 하나는 구간의 시작을 나타내는 숫자를 모두 같은 숫자인 음수 무한대($-\infty$)로 통일하는 것이다. 여러가지 구간들 중에서 시작점이 음수 무한대인 구간만 사용하는 것이라고 볼 수 있다. $${ -\infty \leq X < -1 } $$ $${ -\infty \leq X < 0 } $$ $${ -\infty \leq X < 1 } $$ $${ -\infty \leq X < 2 } $$ $$ \vdots $$ $$ { -\infty \leq X < x } $$ $$ \vdots $$ 물론 이러한 구간들은 시그마 필드를 구성하는 전체 사건(event)들 중 일부에 지나지 않는다. 그러나 확률 공간과 시그마 필드의 정의를 이용하면 이러한 구간들로부터 시작점이 음수 무한대가 아닌 다른 구간들을 생성할 수 있다. 또한 새로 생성된 구간들에 대한 확률값도 확률의 정의에 따라 계산할 수 있다. 누적 확률 분포 위와 같은 방법으로 서술된 확률 분포를 누적 분포 함수 (cumulative distribution function) 또는 누적 확률 분포라고 하고 약자로 cdf라고 쓴다. 일반적으로 cdf는 대문자를 사용하여 $F(x)$와 같은 기호로 표시하며 이 때 독립 변수 $x$는 범위의 끝을 뜻한다. 범위의 시작은 음의 무한대(negative infinity, $-\infty$)이다. 확률 변수 $X$에 대한 누적 확률 분포 $F(x)$의 수학적 정의는 다음과 같다. $$ F(x) = P({X < x}) = P(X < x)$$ 몇가지 누적 확률 분포 표시의 예를 들면 다음과 같다. $$ \vdots $$ * $F(-1)$ : 확률 변수가 $-\infty$이상 -1 미만인 구간 내에 존재할 확률 즉, $P( { -\infty \leq X < -1 })$ * $F(0)$ : 확률 변수가 $-\infty$이상 0 미만인 구간 내에 존재할 확률 즉, $P( { -\infty \leq X < 0 })$ * $F(1)$ : 확률 변수가 $-\infty$이상 1 미만인 구간 내에 존재할 확률 즉, $P( { -\infty \leq X < 1 })$ $$ \vdots $$ * $F(10)$ : 확률 변수가 $-\infty$이상 10 미만인 구간 내에 존재할 확률 즉, $P( { -\infty \leq X < 10 })$ $$ \vdots $$ End of explanation t = np.linspace(-100, 500, 100) F = t / 360 F[t < 0] = 0 F[t > 360] = 1 plt.plot(t, F) plt.ylim(-0.1, 1.1) plt.xticks([0, 180, 360]); plt.title("Cumulative Distribution Function"); plt.xlabel("$x$ (deg.)"); plt.ylabel("$F(x)$"); Explanation: 시계 바늘 확률 문제의 경우를 예로 들어보자. 이 경우에는 각도가 0도부터 360까지이지만 음의 무한대를 시작점으로 해도 상관없다. $$ F(0) = P({ -\infty {}^{\circ} \leq \theta < 0 {}^{\circ} }) = 0 $$ $$ F(10) = P({ -\infty {}^{\circ} \leq \theta < 10 {}^{\circ} }) = \dfrac{1}{36} $$ $$ F(20) = P({ -\infty {}^{\circ} \leq \theta < 20 {}^{\circ} }) = \dfrac{2}{36} $$ $$ \vdots $$ $$ F(350) = P({ -\infty {}^{\circ} \leq \theta < 350 {}^{\circ} }) = \dfrac{35}{36} $$ $$ F(360) = P({ -\infty {}^{\circ} \leq \theta < 360 {}^{\circ} }) = 1 $$ $$ F(370) = P({ -\infty {}^{\circ} \leq \theta < 370 {}^{\circ} }) = 1 $$ $$ F(380) = P({ -\infty {}^{\circ} \leq \theta < 380 {}^{\circ} }) = 1 $$ $$ \vdots $$ 이를 NumPy와 matplotlib를 사용하여 그래프로 그래면 다음과 같다. End of explanation t = np.linspace(-100, 500, 1000) F = t / 360 F[t < 0] = 0 F[t > 360] = 1 f = np.gradient(F) # 수치미분 plt.plot(t, f) plt.ylim(-0.0001, f.max()*1.1) plt.xticks([0, 180, 360]); plt.title("Probability Density Function"); plt.xlabel("$x$ (deg.)"); plt.ylabel("$f(x)$"); Explanation: 누적 밀도 함수 즉 cdf는 다음과 같은 특징을 가진다. $F(-\infty) = 0$ $F(+\infty) = 1$ $F(x) \geq F(y) \;\; \text{ if } \;\; x > y $ 확률 밀도 함수 누적 분포 함수는 확률 분포를 함수라는 편리한 상태로 바꾸어 주었다. 
누적 분포 함수는 확률이 어느 사건(event)에 어느 정도 분포되어 있는지 수학적으로 명확하게 표현해 준다. 그러나 누적 분포 함수가 표현하는 사건이 음수 무한대를 시작점으로 하고 변수 $x$를 끝점으로 하는 구간이다보니 분포의 형상을 직관적으로 이해하기는 힘든 단점이 있다. 다시 말해서 어떤 확률 변수 값이 더 자주 나오는지에 대한 정보를 알기 힘들다는 점이다. 이를 알기 위해서는 확률 변수가 나올 수 있는 전체 구간 ($-\infty$ ~ $\infty$)을 아주 작은 폭을 가지는 구간들로 나눈 다음 각 구간의 확률을 살펴보는 것이 편리하다. 다만 이렇게 되면 구간의 폭(width)을 어느 정도로 정의해야 하는지에 대한 추가적인 약속이 필요하기 때문에 실효성이 떨어진다. 이러한 단점을 보완하기 위해 생각한 것이 절대적인 확률이 아닌 상대적인 확률 분포 형태만을 보기 위한 확률 밀도 함수(probability density function)이다. 누적 확률 분포 그래프의 x축의 오른쪽으로 이동하면서 크기의 변화를 살펴보자.만약 특정한 $x$값 근처의 구간에 확률이 배정되지 않았다면 누적 분포 함수는 그 구간을 지나도 증가하지 않는다. 즉, 기울기가 0이다. 왜냐하면 $x$ 값이 커졌다(x축의 오른쪽으로 이동하였다)는 것은 앞의 구간을 포함하는 더 큰 구간(사건)에 대한 확률을 묘사하고 있는 것인데 추가적으로 포함된 신규 구간에 확률이 없다면 그 신규 구간을 포함한 구간이나 포함하지 않은 구간이나 배정된 확률이 같기 때문이다. 누적 분포 함수의 기울기가 0이 아닌 경우는 추가적으로 포함된 구간에 0이 아닌 확률이 할당되어 있는 경우이다. 만약 더 많은 확률이 할당되었다면 누적 분포 함수는 그 구간을 지나면서 더 빠른 속도로 증가할 것이다. 다시 말해서 함수의 기울기가 커진다. 이러한 방식으로 누적 분포의 기울기의 크기를 보면 각 위치에 배정된 확률의 상대적 크기를 알 수 있다. 기울기를 구하는 수학적 연산이 미분(differentiation)이므로 확률 밀도 함수는 누적 분포 함수의 미분으로 정의한다. $$ \dfrac{dF(x)}{dx} = f(x) $$ 이를 적분으로 나타내면 다음과 같다. $$ F(x) = \int_{-\infty}^{x} f(u) du $$ 확률 밀도 함수는 특정 확률 변수 구간의 확률이 다른 구간에 비해 상대적으로 얼마나 높은가를 나타내는 것이며 그 값 자체가 확률은 아니다라는 점을 명심해야 한다. 확률 밀도 함수는 다음과 같은 특징을 가진다. $-\infty$ 부터 $\infty$ 까지 적분하면 그 값은 1이 된다. $$ \int_{-\infty}^{\infty} f(u)du = 1$$ 확률 밀도 함수는 0보다 같거나 크다. $$ f(x) \geq 0 $$ 앞서 보인 시계 바늘 문제에서 확률 밀도함수를 구하면 다음과 같다. End of explanation x = np.arange(1,7) y = np.array([0.0, 0.1, 0.1, 0.2, 0.2, 0.4]) plt.stem(x, y); plt.xlim(0, 7); plt.ylim(-0.01, 0.5); Explanation: 확률 질량 함수 이산 확률 분포는 확률 밀도 함수를 정의할 수 없는 대신 확률 질량 함수가 존재한다. 확률 질량 함수(probability mass funtion)는 이산 확률 변수의 가능한 값 하나 하나에 대해 확률을 정의한 함수이다. 예를 들어 6면체인 주사위를 던져서 나올 수 있는 값은 1부터 6까지의 이산적인 값을 가지는데 이러한 이산 확률 변수는 예를 들어 다음과 같은 확률 질량 함수를 가질 수 있다. 이 경우에는 공정하지 않은(unfair) 주사위의 확률 분포를 보이고 있다. End of explanation x = np.arange(1,7) y = np.array([0.0, 0.1, 0.1, 0.2, 0.2, 0.4]) z = np.cumsum(y) plt.step(x, z); plt.xlim(0, 7); plt.ylim(-0.01, 1.1); Explanation: 위의 확률 질량 함수는 주사위 눈금 1이 나오지 않고 6이 비정상적으로 많이 나오게 만든 비정상적인 주사위(unfair dice)를 묘사한다. 이 확률 변수에 대해 각 값을 누적하여 더하면 이산 확률 변수의 누적 분포 함수(cumulative distribution function)를 구할 수 있다. End of explanation
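The np.gradient construction above recovers the density numerically from the cumulative distribution. As a cross-check added here for illustration (not part of the original notebook), scipy's uniform distribution on [0, 360) gives the same cdf/pdf pair in closed form, and the density integrates back to 1:

import numpy as np
from scipy import stats

clock = stats.uniform(loc=0, scale=360)   # clock-hand angle, uniform on [0, 360)

t = np.linspace(-100, 500, 1000)
F = clock.cdf(t)    # matches the piecewise F(x) plotted above
f = clock.pdf(t)    # constant 1/360 inside the interval, 0 outside

print(clock.cdf(180))    # 0.5 = P(angle < 180 degrees)
print(np.trapz(f, t))    # ~1.0: the density integrates to one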
1,632
Given the following text description, write Python code to implement the functionality described below step by step Description: Contextual Bandits (incomplete) Step1: Query by Committee Step2: Stochastic Gradient Descent Step3: Random selection of data points at each iteration. Step4: SVM with Random Sampling Step5: Contextual Bandits We implement a contextual bandit algorithm for active learning, suggested by <a href="https Step6: Each cluster has a context vector containing 4 pieces of information Step7: We'll use Thompson Sampling with linear payoff and with Gaussian prior and likelihood. The algorithm is described in <a href="http Step8: Initially, we choose 100 random points to sample.
Python Code: import numpy as np import pandas as pd import pickle import seaborn as sns from pandas import DataFrame, Index from sklearn import metrics from sklearn.linear_model import SGDClassifier from sklearn.svm import SVC from sklearn.kernel_approximation import RBFSampler, Nystroem from sklearn.linear_model import PassiveAggressiveClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.cluster import MiniBatchKMeans from sklearn.utils import shuffle from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.model_selection import KFold from scipy.spatial.distance import cosine from IPython.core.display import HTML from mclearn import * %matplotlib inline sns.set_palette("husl", 7) HTML(open("styles/stylesheet.css", "r").read()) # read in the data sdss = pd.io.parsers.read_csv("data/sdss_dr7_photometry.csv.gz", compression="gzip", index_col=["ra", "dec"]) # save the names of the 11 feature vectors and the target column feature_names = ["psfMag_u", "psfMag_g", "psfMag_r", "psfMag_i", "psfMag_z", "petroMag_u", "petroMag_g", "petroMag_r", "petroMag_i", "petroMag_z", "petroRad_r"] target_name = "class" X_train, X_test, y_train, y_test = train_test_split(np.array(sdss[feature_names]), np.array(sdss['class']), train_size=100000, test_size=30000) # shuffle the data X_train, y_train = shuffle(X_train, y_train) X_test, y_test = shuffle(X_test, y_test) Explanation: Contextual Bandits (incomplete) End of explanation accuracies = [] predictions = [[] for i in range(10)] forests = [None] * 11 # initially, pick 100 random points to query X_train_cur, y_train_cur = X_train[:100], y_train[:100] X_train_pool, y_train_pool = X_train[100:], y_train[100:] # find the accuracy rate, given the current training example forests[-1] = RandomForestClassifier(n_jobs=-1, class_weight='auto', random_state=5) forests[-1].fit(X_train_cur, y_train_cur) y_pred_test = forests[-1].predict(X_test) confusion_test = metrics.confusion_matrix(y_test, y_pred_test) accuracies.append(balanced_accuracy_expected(confusion_test)) # query by committee to pick the next point to sample kfold = KFold(len(y_train_cur), n_folds=10, shuffle=True) for i, (train_index, test_index) in enumerate(kfold): forests[i] = RandomForestClassifier(n_jobs=-1, class_weight='auto', random_state=5) forests[i].fit(X_train_cur[train_index], y_train_cur[train_index]) predictions[i] = forests[i].predict(X_train_pool) Explanation: Query by Committee End of explanation # normalise features to have mean 0 and variance 1 scaler = StandardScaler() scaler.fit(X_train) # fit only on training data X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) # approximates feature map of an RBF kernel by Monte Carlo approximation of its Fourier transform. 
rbf_feature = RBFSampler(n_components=200, gamma=0.3, random_state=1) X_train_rbf = rbf_feature.fit_transform(X_train) X_test_rbf = rbf_feature.transform(X_test) Explanation: Stochastic Gradient Descent End of explanation benchmark_sgd = SGDClassifier(loss="hinge", alpha=0.000001, penalty="l1", n_iter=10, n_jobs=-1, class_weight='auto', fit_intercept=True, random_state=1) benchmark_sgd.fit(X_train_rbf[:100], y_train[:100]) benchmark_y_pred = benchmark_sgd.predict(X_test_rbf) benchmark_confusion = metrics.confusion_matrix(y_test, benchmark_y_pred) benchmark_learning_curve = [] sample_sizes = np.concatenate((np.arange(100, 1000, 100), np.arange(1000, 10000, 1000), np.arange(10000, 100000, 10000), np.arange(100000, 1000000, 100000), np.arange(1000000, len(X_train), 500000), [len(X_train)])) benchmark_learning_curve.append(balanced_accuracy_expected(benchmark_confusion)) classes = np.unique(y_train) for i, j in zip(sample_sizes[:-1], sample_sizes[1:]): for _ in range(10): X_train_partial, y_train_partial = shuffle(X_train_rbf[i:j], y_train[i:j]) benchmark_sgd.partial_fit(X_train_partial, y_train_partial, classes=classes) benchmark_y_pred = benchmark_sgd.predict(X_test_rbf) benchmark_confusion = metrics.confusion_matrix(y_test, benchmark_y_pred) benchmark_learning_curve.append(balanced_accuracy_expected(benchmark_confusion)) # save output for later re-use with open('results/sdss_active_learning/sgd_benchmark.pickle', 'wb') as f: pickle.dump((benchmark_sgd, sample_sizes, benchmark_learning_curve), f, pickle.HIGHEST_PROTOCOL) plot_learning_curve(sample_sizes, benchmark_learning_curve, "Benchmark Learning Curve (Random Selection)") Explanation: Random selection of data points at each iteration. End of explanation svm_random = SVC(kernel='rbf', random_state=7, cache_size=2000, class_weight='auto') svm_random.fit(X_train[:100], y_train[:100]) svm_y_pred = svm_random.predict(X_test) svm_confusion = metrics.confusion_matrix(y_test, svm_y_pred) svm_learning_curve = [] sample_sizes = np.concatenate((np.arange(200, 1000, 100), np.arange(1000, 20000, 1000))) svm_learning_curve.append(balanced_accuracy_expected(svm_confusion)) previous_h = svm_random.predict(X_train) rewards = [] for i in sample_sizes: svm_random.fit(X_train[:i], y_train[:i]) svm_y_pred = svm_random.predict(X_test) svm_confusion = metrics.confusion_matrix(y_test, svm_y_pred) svm_learning_curve.append(balanced_accuracy_expected(svm_confusion)) current_h = svm_random.predict(X_train) reward = 0 for i, j in zip(current_h, previous_h): reward += 1 if i != j else 0 reward = reward / len(current_h) previous_h = current_h rewards.append(reward) # save output for later re-use with open('results/sdss_active_learning/sgd_svm_random.pickle', 'wb') as f: pickle.dump((sample_sizes, svm_learning_curve, rewards), f, pickle.HIGHEST_PROTOCOL) log_rewards = np.log(rewards) beta, intercept = np.polyfit(sample_sizes, log_rewards, 1) alpha = np.exp(intercept) plt.plot(sample_sizes, rewards) plt.plot(sample_sizes, alpha * np.exp(beta * sample_sizes)) plot_learning_curve(sample_sizes, svm_learning_curve, "SVM Learning Curve (Random Selection)") Explanation: SVM with Random Sampling End of explanation n_clusters = 100 kmeans = MiniBatchKMeans(n_clusters=n_clusters, init_size=100*n_clusters, random_state=2) X_train_transformed = kmeans.fit_transform(X_train) Explanation: Contextual Bandits We implement a contextual bandit algorithm for active learning, suggested by <a href="https://hal.archives-ouvertes.fr/hal-01069802" target="_blank">Bouneffouf et al (2014)</a>. 
End of explanation unlabelled_points = set(range(0, len(X_train))) empty_clusters = set() cluster_sizes = [len(np.flatnonzero(kmeans.labels_ == i)) for i in range(n_clusters)] cluster_points = [list(np.flatnonzero(kmeans.labels_ == i)) for i in range(n_clusters)] no_labelled = [0 for i in range(n_clusters)] prop_labelled = [0 for i in range(n_clusters)] d_means = [] d_var = [] for i in range(n_clusters): distance, distance_squared, count = 0, 0, 0 for j, p1 in enumerate(cluster_points[i]): for p2 in cluster_points[i][j+1:]: d = np.fabs(X_train_transformed[p1][i] - X_train_transformed[p2][i]) distance += d distance_squared += d**2 count += 1 if cluster_sizes[i] > 1: d_means.append(distance / count) d_var.append((distance_squared / count) - (distance / count)**2) else: d_means.append(0) d_var.append(0) context = np.array([list(x)for x in zip(d_means, d_var, cluster_sizes, prop_labelled)]) Explanation: Each cluster has a context vector containing 4 pieces of information: The mean distance between individual points in the cluster. The variance of the distance between individual points in the cluster. The number of points in the cluster. The proportion of points that have been labelled in the cluster. End of explanation context_size = 4 B = np.eye(context_size) mu = np.array([0] * context_size) f = np.array([0] * context_size) v_squared = 0.25 Explanation: We'll use Thompson Sampling with linear payoff and with Gaussian prior and likelihood. The algorithm is described in <a href="http://arxiv.org/abs/1209.3352" target="_blank">Argawal et al (2013)</a>. End of explanation active_sgd = SVC(kernel='rbf', random_state=7, cache_size=2000, class_weight='auto') #active_sgd = SGDClassifier(loss="hinge", alpha=0.000001, penalty="l1", n_iter=10, n_jobs=-1, # class_weight='auto', fit_intercept=True, random_state=1) X_train_cur, y_train_cur = X_train[:100], y_train[:100] active_sgd.fit(X_train_cur, y_train_cur) # update context for i in np.arange(0, 100): this_cluster = kmeans.labels_[i] cluster_points[this_cluster].remove(i) unlabelled_points.remove(i) if not cluster_points[this_cluster]: empty_clusters.add(this_cluster) no_labelled[this_cluster] += 1 context[this_cluster][3] = no_labelled[this_cluster] / cluster_sizes[this_cluster] # initial prediction active_y_pred = active_sgd.predict(X_test) active_confusion = metrics.confusion_matrix(y_test, active_y_pred) active_learning_curve = [] active_learning_curve.append(balanced_accuracy_expected(active_confusion)) classes = np.unique(y_train) # compute the current hypothesis previous_h = active_sgd.predict(X_train) active_steps = [100] no_choices = 1 rewards = [] for i in range(2000 // no_choices): mu_sample = np.random.multivariate_normal(mu, v_squared * np.linalg.inv(B)) reward_sample = [np.dot(c, mu_sample) for c in context] chosen_arm = np.argmax(reward_sample) while chosen_arm in empty_clusters: reward_sample[chosen_arm] = float('-inf') chosen_arm = np.argmax(reward_sample) # select a random point in the cluster query = np.random.choice(cluster_points[chosen_arm], min(len(cluster_points[chosen_arm]), no_choices), replace=False) # update context for q in query: cluster_points[chosen_arm].remove(q) unlabelled_points.remove(q) if not cluster_points[chosen_arm]: empty_clusters.add(chosen_arm) no_labelled[chosen_arm] += len(query) context[chosen_arm][3] = no_labelled[chosen_arm] / cluster_sizes[chosen_arm] active_steps.append(active_steps[-1] + len(query)) # run stochastic gradient descent #active_sgd.partial_fit(X_train_rbf[query], y_train[query], 
classes=classes) X_train_cur = np.vstack((X_train_cur, X_train[query])) y_train_cur = np.concatenate((y_train_cur, y_train[query])) active_sgd = SVC(kernel='rbf', random_state=7, cache_size=2000, class_weight='auto') active_sgd.fit(X_train_cur, y_train_cur) active_y_pred = active_sgd.predict(X_test) active_confusion = metrics.confusion_matrix(y_test, active_y_pred) active_learning_curve.append(balanced_accuracy_expected(active_confusion)) # compute the reward from choosing such arm current_h = active_sgd.predict(X_train) reward = 0 for i, j in zip(current_h, previous_h): reward += 1 if i != j else 0 reward = reward / len(current_h) reward = reward / (alpha * np.exp(beta * len(y_train_cur))) previous_h = current_h rewards.append(reward) # compute posterior distribution B = B + np.outer(context[chosen_arm], context[chosen_arm]) f = f + reward * context[chosen_arm] mu = np.dot(np.linalg.inv(B), f) plot_learning_curve(active_steps, active_learning_curve, "SVM Learning Curve (Active Learning)") Explanation: Initially, we choose 100 random points to sample. End of explanation
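The B, f and mu updates at the bottom of the loop above are the Thompson Sampling with linear payoff recursion from Agrawal et al. (2013): B = B + outer(x, x), f = f + reward * x, mu = inv(B) f. A compact restatement of just that bandit logic — written here as an illustrative sketch with hypothetical names, not as code from the original notebook — makes the algorithm easier to follow and to reuse:

import numpy as np

class LinearThompsonSampler:
    """Thompson Sampling with linear payoff and a Gaussian prior/likelihood (Agrawal et al., 2013)."""
    def __init__(self, context_size, v_squared=0.25):
        self.B = np.eye(context_size)    # prior precision matrix
        self.f = np.zeros(context_size)
        self.v_squared = v_squared

    def choose(self, contexts, allowed):
        mu = np.linalg.solve(self.B, self.f)               # posterior mean B^-1 f
        mu_sample = np.random.multivariate_normal(mu, self.v_squared * np.linalg.inv(self.B))
        scores = contexts @ mu_sample                      # expected reward under the sampled parameter
        scores = np.where(allowed, scores, -np.inf)        # skip exhausted clusters
        return int(np.argmax(scores))

    def update(self, context, reward):
        self.B += np.outer(context, context)
        self.f += reward * context

# Hypothetical usage against the cluster contexts built above:
# sampler = LinearThompsonSampler(context_size=4)
# allowed = np.array([i not in empty_clusters for i in range(n_clusters)])
# arm = sampler.choose(context, allowed)
# sampler.update(context[arm], reward)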
1,633
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2019 The TensorFlow Authors. Step1: Keras を使ったマルチワーカートレーニング <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https Step2: TensorFlow をインポートする前に、環境にいくつかの変更を加えます。 すべての GPU を無効にします。 これにより、すべてのワーカーが同じ GPU を使用しようとすることによって発生するエラーが防止されます。 実際のアプリケーションでは、各ワーカーは異なるマシン上にあります。 Step3: TF_CONFIG 環境変数をリセットします(これについては後で詳しく説明します)。 Step4: 現在のディレクトリが Python のパス上にあることを確認してください。これにより、ノートブックは %%writefile で書き込まれたファイルを後でインポートできるようになります。 Step5: 次に TensorFlow をインポートします。 Step6: データセットとモデルの定義 次に、単純なモデルとデータセットの設定を使用してmnist.pyファイルを作成します。 この Python ファイルは、このチュートリアルのワーカープロセスによって使用されます。 Step7: シングルワーカーでのモデルのトレーニング まず、少数のエポックでモデルをトレーニングし、シングルワーカーで結果を観察して、すべてが正しく機能していることを確認します。エポックが進むにつれ、損失が下降し、精度が 1.0 に近づくはずです。 Step8: マルチワーカー構成 では、マルチワーカートレーニングの世界を覗いてみましょう。 ジョブとタスクのクラスタ TensorFlow では、分散トレーニングには、いくつかのジョブが含まれる 'cluster' があり、各ジョブには 1 つ以上の 'task' が含まれることがあります。 それぞれに異なる役割をもつ複数のマシンでトレーニングするには TF_CONFIG 環境変数が必要です。TF_CONFIG は JSON 文字列で、クラスタの一部である各ワーカーのクラスタ構成を指定するために使用されます。 TF_CONFIG 変数には、'cluster' と 'task' の 2 つのコンポーネントがあります。 'cluster' はすべてのワーカーに共通し、トレーニングクラスタに関する情報を、'worker' または 'chief' などのさまざまなジョブの種類で構成される dict として提供します。 tf.distribute.MultiWorkerMirroredStrategy によるマルチワーカートレーニングでは通常、'worker' が通常行うことのほかにチェックポイントの保存や TensorBoard 用のサマリーファイルの書き込みといった役割を果たす 1 つの 'worker' があります。こういった 'worker' はチーフワーカー(ジョブ名は 'chief')と呼ばれます。 通例、'chief' には 'index' 0 が指定されます(実際、tf.distribute.Strategy はそのように実装されています)。 'task' は現在のタスクの情報を提供し、ワーカーごとに異なります。タスクはそのワーカーの 'type' と 'index' を指定します。 以下に構成例を示します。 Step9: これは、JSON 文字列としてシリアル化された同じTF_CONFIGです。 Step10: tf_config は Python 単なるローカル変数です。トレーニング構成で使用するには、この dict を JSON としてシリアル化し、TF_CONFIG 環境変数に配置する必要があります。 上記の構成例では、タスク 'type' を 'worker' に設定し、タスク 'index' を 0 に設定しています。そのため、このマシンが最初のワーカーとなります。'chief' ワーカーとして指定されることになるため、ほかのワーカーよりも多くの作業を行います。 注意 Step11: すると、サブプロセスからその環境変数にアクセスできます。 Step12: 次のセクションでは、似たような方法で、TF_CONFIG をワーカーのサブプロセスに渡します。実際に行う場合はこのようにしてジョブを起動することはありませんが、この例では十分です。 適切なストラテジーを選択する TensorFlow では、以下の 2 つの分散型トレーニングがあります。 同期トレーニング Step13: 注意 Step14: 注意 Step15: 上記のコードスニペットでは、Dataset.batchに渡されるglobal_batch_sizeがper_worker_batch_size * num_workersに設定されていることに注意してください。これにより、ワーカーの数に関係なく、各ワーカーがper_worker_batch_sizeの例のバッチを処理するようになります。 現在のディレクトリには、両方の Python ファイルが含まれています。 Step16: json はTF_CONFIG}をシリアル化し、環境変数に追加します。 Step17: これで、main.pyを実行し、TF_CONFIGを使用するワーカープロセスを起動できます。 Step18: 上記のコマンドについて注意すべき点がいくつかあります。 ノートブック 「マジック」 である%%bashを使用して、いくつかの bash コマンドを実行します。 このワーカーは終了しないため、--bgフラグを使用してbashプロセスをバックグラウンドで実行します。 このワーカーは始める前にすべてのワーカーを待ちます。 バックグラウンドのワーカープロセスはこのノートブックに出力を出力しないため、&amp;&gt; で出力をファイルにリダイレクトし、何が起こったかを検査できます。 プロセスが開始するまで数秒待ちます。 Step19: これまでにワーカーのログファイルに出力されたものを検査します。 Step20: ログファイルの最後の行は Started server with target Step21: 2番目のワーカーを起動します。すべてのワーカーがアクティブであるため、これによりトレーニングが開始されます(したがって、このプロセスをバックグラウンドで実行する必要はありません)。 Step22: 最初のワーカーにより書き込まれたログを再確認すると、そのモデルのトレーニングに参加していることがわかります。 Step23: 当然ながら、これはこのチュートリアルの最初に実行したテストよりも実行速度が劣っています。 単一のマシンで複数のワーカーを実行しても、オーバーヘッドが追加されるだけです。 ここではトレーニングの時間を改善することではなく、マルチワーカートレーニングの例を紹介することを目的としています。 Step24: マルチワーカートレーニングの詳細 ここまで、基本的なマルチワーカーのセットアップの実行を見てきました。 このチュートリアルの残りの部分では、実際のユースケースで役立つ重要な要素について詳しく説明します。 データセットのシャーディング マルチワーカートレーニングでは、コンバージェンスとパフォーマンスを確保するために、データセットのシャーディングが必要です。 前のセクションの例は、tf.distribute.Strategy API により提供されるデフォルトの自動シャーディングに依存しています。tf.data.experimental.DistributeOptions の tf.data.experimental.AutoShardPolicy を設定することで、シャーディングを制御できます。 自動シャーディングの詳細については、分散入力ガイドをご覧ください。 
自動シャーディングをオフにして、各レプリカがすべての例を処理する方法の簡単な例を次に示します(推奨されません)。 Step25: 評価 validation_data を Model.fit に渡すと、エポックごとにトレーニングと評価が交互に行われるようになります。validation_data を取る評価は同じセットのワーカー間で分散されているため、評価結果はすべてのワーカーが使用できるように集計されます。 トレーニングと同様に、評価データセットもファイルレベルで自動的にシャーディングされます。評価データセットにグローバルバッチサイズを設定し、validation_steps を設定する必要があります。 評価ではデータセットを繰り返すことも推奨されます。 Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. 性能 これで、MultiWorkerMirroredStrategy を使ってマルチワーカーで実行するようにセットアップされた Keras モデルの準備ができました。 マルチワーカートレーニングのパフォーマンスを調整するには、次を行うことができます。 tf.distribute.MultiWorkerMirroredStrategy には複数の集合体通信実装が用意されています。 RING は、クロスホスト通信レイヤーとして、gRPC を使用したリング状の集合体を実装します。 NCCL は NVIDIA Collective Communication Library を使用して集合体を実装します。 AUTO は、選択をランタイムに任せます。 集合体の最適な実装は、GPU の数、GPU の種類、およびクラスタ内のネットワーク相互接続によって異なります。自動選択をオーバーライドするには、MultiWorkerMirroredStrategy のコンストラクタの communication_options パラメータを以下のようにして指定します。 python communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CollectiveCommunication.NCCL) 可能であれば、変数を tf.float にキャストします。 公式の ResNet モデルには、どのようにしてこれを行うかの例が示されています。 フォールトトレランス 同期トレーニングでは、ワーカーが 1 つでも失敗し、障害復旧の仕組みが存在しない場合、クラスタは失敗します。 Keras をtf.distribute.Strategyで使用する場合、ワーカーが停止した場合や不安定である際に、フォールトトラレンスが機能するというメリットがあります。この機能は、指定された分散ファイルシステムにトレーニングの状態を保存するため、失敗、または、中断されたインスタンスを再開する場合に、トレーニングの状態が復旧されます。 When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed. 注意 Step26: これで、保存の準備ができました。 Step27: 前述したように、後でモデルを読み込む場合、チーフが保存した場所にあるモデルのみを使用するべきなので、非チーフワーカーが保存した一時的なモデルは削除します。 Step28: 読み込む際に便利な tf.keras.models.load_model API を使用して、以降の作業に続けることにします。 ここでは、単一ワーカーのみを使用してトレーニングを読み込んで続けると仮定します。この場合、別の strategy.scope() 内で tf.keras.models.load_model を呼び出しません(前に定義したように、strategy = tf.distribute.MultiWorkerMirroredStrategy() です)。 Step29: チェックポイントの保存と復元 一方、チェックポイントを作成すれば、モデルの重みを保存し、モデル全体を保存せずともそれらを復元することが可能です。 ここでは、モデルをトラッキングする tf.train.Checkpoint を 1 つ作成します。これは tf.train.CheckpointManager によって管理されるため、最新のチェックポイントのみが保存されます。 Step30: CheckpointManager の準備ができたら、チェックポイントを保存し、チーフ以外のワーカーが保存したチェックポイントを削除します。 Step31: これで、復元する必要があれば、便利なtf.train.latest_checkpoint関数を使用して、保存された最新のチェックポイントを見つけることができるようになりました。チェックポイントが復元されると、トレーニングを続行することができます。 Step32: BackupAndRestore コールバック tf.keras.callbacks.BackupAndRestore コールバックはフォールトトレランス機能を提供します。この機能はモデルと現在のエポック番号を一時チェックポイントファイルをbackup_dir引数でバックアップし、BackupAndRestoreでコールバックします。これは、各エポックの終了時に実行されます。 ジョブが中断されて再開されると、コールバックは最新のチェックポイントを復元するため、中断されたエポックの始めからトレーニングを続行することができます。未完了のエポックで中断前に実行された部分のトレーニングは破棄されるため、モデルの最終状態に影響することはありません。 これを使用するには、Model.fit 呼び出し時に、 Model.fit のインスタンスを指定します。 MultiWorkerMirroredStrategy では、ワーカーが中断されると、そのワーカーが再開するまでクラスタ全体が一時停止されます。そのワーカーが再開するとほかのワーカーも再開します。中断したワーカーがクラスタに参加し直すと、各ワーカーは以前に保存されたチェックポイントファイルを読み取って以前の状態を復元するため、クラスタの同期状態が戻ります。そして、トレーニングが続行されます。 BackupAndRestore コールバックは、CheckpointManager を使用して、トレーニングの状態を保存・復元します。これには、既存のチェックポイントを最新のものと併せて追跡するチェックポイントと呼ばれるファイルが生成されます。このため、ほかのチェックポイントの保存に backup_dir を再利用しないようにし、名前の競合を回避する必要があります。 現在、BackupAndRestore コールバックは、ストラテジーなしのシングルワーカートレーニング(MirroredStrategy)と MultiWorkerMirroredStrategy によるマルチワーカートレーニングをサポートしています。 以下に、マルチワーカートレーニングとシングルワーカートレーニングの 2 つの例を示します。
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2019 The TensorFlow Authors. End of explanation import json import os import sys Explanation: Keras を使ったマルチワーカートレーニング <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で実行</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/distribute/multi_worker_with_keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/distribute/multi_worker_with_keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/distribute/multi_worker_with_keras.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td> </table> 概要 このチュートリアルでは、tf.distribute.Strategy API、具体的にはtf.distribute.MultiWorkerMirroredStrategy クラスを使用して、Keras モデルと Model.fit API によるマルチワーカー分散型トレーニングを実演します。このストラテジーの助けにより、シングルワーカーで実行するように設計された Keras モデルは、最小限のコード変更で複数のワーカーでシームレスに機能することができます。 tf.distribute.Strategy API をさらに学習したい方は、TensorFlow での分散トレーニングガイドで TensorFlow がサポートする分散ストラテジーの概要をご覧ください。 Keras とカスタムループで MultiWorkerMirroredStrategy を使用する方法を学習する場合は、Keras と MultiWorkerMirroredStrategy によるカスタムトレーニングループをご覧ください。 このチュートリアルの目的は、2 つのワーカーを使った最小限のマルチワーカーの例を紹介することです。 セットアップ まず、必要なものをインポートします。 End of explanation os.environ["CUDA_VISIBLE_DEVICES"] = "-1" Explanation: TensorFlow をインポートする前に、環境にいくつかの変更を加えます。 すべての GPU を無効にします。 これにより、すべてのワーカーが同じ GPU を使用しようとすることによって発生するエラーが防止されます。 実際のアプリケーションでは、各ワーカーは異なるマシン上にあります。 End of explanation os.environ.pop('TF_CONFIG', None) Explanation: TF_CONFIG 環境変数をリセットします(これについては後で詳しく説明します)。 End of explanation if '.' not in sys.path: sys.path.insert(0, '.') Explanation: 現在のディレクトリが Python のパス上にあることを確認してください。これにより、ノートブックは %%writefile で書き込まれたファイルを後でインポートできるようになります。 End of explanation import tensorflow as tf Explanation: 次に TensorFlow をインポートします。 End of explanation %%writefile mnist_setup.py import os import tensorflow as tf import numpy as np def mnist_dataset(batch_size): (x_train, y_train), _ = tf.keras.datasets.mnist.load_data() # The `x` arrays are in uint8 and have values in the [0, 255] range. # You need to convert them to float32 with values in the [0, 1] range. 
x_train = x_train / np.float32(255) y_train = y_train.astype(np.int64) train_dataset = tf.data.Dataset.from_tensor_slices( (x_train, y_train)).shuffle(60000).repeat().batch(batch_size) return train_dataset def build_and_compile_cnn_model(): model = tf.keras.Sequential([ tf.keras.layers.InputLayer(input_shape=(28, 28)), tf.keras.layers.Reshape(target_shape=(28, 28, 1)), tf.keras.layers.Conv2D(32, 3, activation='relu'), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(10) ]) model.compile( loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer=tf.keras.optimizers.SGD(learning_rate=0.001), metrics=['accuracy']) return model Explanation: データセットとモデルの定義 次に、単純なモデルとデータセットの設定を使用してmnist.pyファイルを作成します。 この Python ファイルは、このチュートリアルのワーカープロセスによって使用されます。 End of explanation import mnist_setup batch_size = 64 single_worker_dataset = mnist_setup.mnist_dataset(batch_size) single_worker_model = mnist_setup.build_and_compile_cnn_model() single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70) Explanation: シングルワーカーでのモデルのトレーニング まず、少数のエポックでモデルをトレーニングし、シングルワーカーで結果を観察して、すべてが正しく機能していることを確認します。エポックが進むにつれ、損失が下降し、精度が 1.0 に近づくはずです。 End of explanation tf_config = { 'cluster': { 'worker': ['localhost:12345', 'localhost:23456'] }, 'task': {'type': 'worker', 'index': 0} } Explanation: マルチワーカー構成 では、マルチワーカートレーニングの世界を覗いてみましょう。 ジョブとタスクのクラスタ TensorFlow では、分散トレーニングには、いくつかのジョブが含まれる 'cluster' があり、各ジョブには 1 つ以上の 'task' が含まれることがあります。 それぞれに異なる役割をもつ複数のマシンでトレーニングするには TF_CONFIG 環境変数が必要です。TF_CONFIG は JSON 文字列で、クラスタの一部である各ワーカーのクラスタ構成を指定するために使用されます。 TF_CONFIG 変数には、'cluster' と 'task' の 2 つのコンポーネントがあります。 'cluster' はすべてのワーカーに共通し、トレーニングクラスタに関する情報を、'worker' または 'chief' などのさまざまなジョブの種類で構成される dict として提供します。 tf.distribute.MultiWorkerMirroredStrategy によるマルチワーカートレーニングでは通常、'worker' が通常行うことのほかにチェックポイントの保存や TensorBoard 用のサマリーファイルの書き込みといった役割を果たす 1 つの 'worker' があります。こういった 'worker' はチーフワーカー(ジョブ名は 'chief')と呼ばれます。 通例、'chief' には 'index' 0 が指定されます(実際、tf.distribute.Strategy はそのように実装されています)。 'task' は現在のタスクの情報を提供し、ワーカーごとに異なります。タスクはそのワーカーの 'type' と 'index' を指定します。 以下に構成例を示します。 End of explanation json.dumps(tf_config) Explanation: これは、JSON 文字列としてシリアル化された同じTF_CONFIGです。 End of explanation os.environ['GREETINGS'] = 'Hello TensorFlow!' 
Explanation: tf_config は Python 単なるローカル変数です。トレーニング構成で使用するには、この dict を JSON としてシリアル化し、TF_CONFIG 環境変数に配置する必要があります。 上記の構成例では、タスク 'type' を 'worker' に設定し、タスク 'index' を 0 に設定しています。そのため、このマシンが最初のワーカーとなります。'chief' ワーカーとして指定されることになるため、ほかのワーカーよりも多くの作業を行います。 注意: 他のマシンにも TF_CONFIG 環境変数を設定し、同じ 'cluster' dict が必要となりますが、それらのマシンの役割に応じた異なるタスク 'type' またはタスク 'index' が必要となります。 説明の目的により、このチュートリアルではある localhost の 2 つのワーカーでどのようにTF_CONFIG 変数をセットアップできるかを示しています。 実際には、外部 IP aドレス/ポートに複数のワーカーを作成して、ワーカーごとに適宜 TF_CONFIG 変数を設定する必要があります。 このチュートリアルでは、2 つのワーカーを使用します。 最初の ('chief') ワーカーの TF_CONFIG は上記に示す通りです。 2 つ目のワーカーでは、tf_config['task']['index']=1 を設定します。 ノートブックの環境変数とサブプロセス サブプロセスは、親プロセスの環境変数を継承します。 たとえば、この Jupyter ノートブックのプロセスでは、環境変数を次のように設定できます。 End of explanation %%bash echo ${GREETINGS} Explanation: すると、サブプロセスからその環境変数にアクセスできます。 End of explanation strategy = tf.distribute.MultiWorkerMirroredStrategy() Explanation: 次のセクションでは、似たような方法で、TF_CONFIG をワーカーのサブプロセスに渡します。実際に行う場合はこのようにしてジョブを起動することはありませんが、この例では十分です。 適切なストラテジーを選択する TensorFlow では、以下の 2 つの分散型トレーニングがあります。 同期トレーニング: トレーニングのステップがワーカーとレプリカ間で同期されます。 非同期トレーニング: トレーニングステップが厳密に同期されません(パラメータサーバートレーニングなど)。 このチュートリアルでは、tf.distribute.MultiWorkerMirroredStrategy のインスタンスを使用して、同期マルチワーカートレーニングを実行する方法を示します。 MultiWorkerMirroredStrategyは、すべてのワーカーの各デバイスにあるモデルのレイヤーにすべての変数のコピーを作成します。集合通信に使用する TensorFlow 演算子CollectiveOpsを使用して勾配を集め、変数の同期を維持します。このストラテジーの詳細は、tf.distribute.Strategyガイドで説明されています。 End of explanation with strategy.scope(): # Model building/compiling need to be within `strategy.scope()`. multi_worker_model = mnist_setup.build_and_compile_cnn_model() Explanation: 注意: MultiWorkerMirroredStrategyが呼び出されると、TF_CONFIGが解析され、TensorFlow の GRPC サーバーが開始します。そのため、TF_CONFIG環境変数は、tf.distribute.Strategyインスタンスが作成される前に設定しておく必要があります。TF_CONFIGはまだ設定されていないため、上記の戦略は実質的にシングルワーカーのトレーニングです。 MultiWorkerMirroredStrategy は、tf.distribute.experimental.CommunicationOptions パラメータを介して複数の実装を提供します。1) RING は gRPC をクロスホスト通信レイヤーとして使用して、リング状の集合体を実装します。2) NCCL は NVIDIA Collective Communication Library を使用して集合体を実装します。3) AUTO はその選択をランタイムに任せます。集合体の最適な実装は GPU の数と種類、およびクラスタ内のネットワーク相互接続によって異なります。 モデルのトレーニング tf.kerasにtf.distribute.Strategy API を統合したため、トレーニングをマルチワーカーに分散するには、モデルビルディングとmodel.compile()呼び出しをstrategy.scope()内に収めるように変更することだけが必要となりました。この分散ストラテジーのスコープは、どこでどのように変数が作成されるかを指定し、MultiWorkerMirroredStrategyの場合、作成される変数はMirroredVariableで、各ワーカーに複製されます。 End of explanation %%writefile main.py import os import json import tensorflow as tf import mnist_setup per_worker_batch_size = 64 tf_config = json.loads(os.environ['TF_CONFIG']) num_workers = len(tf_config['cluster']['worker']) strategy = tf.distribute.MultiWorkerMirroredStrategy() global_batch_size = per_worker_batch_size * num_workers multi_worker_dataset = mnist_setup.mnist_dataset(global_batch_size) with strategy.scope(): # Model building/compiling need to be within `strategy.scope()`. 
multi_worker_model = mnist_setup.build_and_compile_cnn_model() multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70) Explanation: 注意: 現在のところ、MultiWorkerMirroredStrategy には、TensorFlow 演算子をストラテジーのインスタンスが作成された後に作成する必要があるという制限があります。RuntimeError: Collective ops must be configured at program startup が表示される場合は、プログラムのはじめに MultiWorkerMirroredStrategy のインスタンスを作成するようにし、演算子を作成するコードをストラテジーがインスタンス化される後に配置するようにしてください。 MultiWorkerMirroredStrategyで実際に実行するには、ワーカープロセスを実行し、TF_CONFIGをそれらに渡す必要があります。 前に記述したmnist_setup.pyファイルと同様に、各ワーカーが実行するmain.pyは次のとおりです。 End of explanation %%bash ls *.py Explanation: 上記のコードスニペットでは、Dataset.batchに渡されるglobal_batch_sizeがper_worker_batch_size * num_workersに設定されていることに注意してください。これにより、ワーカーの数に関係なく、各ワーカーがper_worker_batch_sizeの例のバッチを処理するようになります。 現在のディレクトリには、両方の Python ファイルが含まれています。 End of explanation os.environ['TF_CONFIG'] = json.dumps(tf_config) Explanation: json はTF_CONFIG}をシリアル化し、環境変数に追加します。 End of explanation # first kill any previous runs %killbgscripts %%bash --bg python main.py &> job_0.log Explanation: これで、main.pyを実行し、TF_CONFIGを使用するワーカープロセスを起動できます。 End of explanation import time time.sleep(10) Explanation: 上記のコマンドについて注意すべき点がいくつかあります。 ノートブック 「マジック」 である%%bashを使用して、いくつかの bash コマンドを実行します。 このワーカーは終了しないため、--bgフラグを使用してbashプロセスをバックグラウンドで実行します。 このワーカーは始める前にすべてのワーカーを待ちます。 バックグラウンドのワーカープロセスはこのノートブックに出力を出力しないため、&amp;&gt; で出力をファイルにリダイレクトし、何が起こったかを検査できます。 プロセスが開始するまで数秒待ちます。 End of explanation %%bash cat job_0.log Explanation: これまでにワーカーのログファイルに出力されたものを検査します。 End of explanation tf_config['task']['index'] = 1 os.environ['TF_CONFIG'] = json.dumps(tf_config) Explanation: ログファイルの最後の行は Started server with target: grpc://localhost:12345であるはずです。最初のワーカーは準備が整い、他のすべてのワーカーの準備が整うのを待っています。 2番目のワーカーのプロセスを始めるようにtf_configを更新します。 End of explanation %%bash python main.py Explanation: 2番目のワーカーを起動します。すべてのワーカーがアクティブであるため、これによりトレーニングが開始されます(したがって、このプロセスをバックグラウンドで実行する必要はありません)。 End of explanation %%bash cat job_0.log Explanation: 最初のワーカーにより書き込まれたログを再確認すると、そのモデルのトレーニングに参加していることがわかります。 End of explanation # Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section. os.environ.pop('TF_CONFIG', None) %killbgscripts Explanation: 当然ながら、これはこのチュートリアルの最初に実行したテストよりも実行速度が劣っています。 単一のマシンで複数のワーカーを実行しても、オーバーヘッドが追加されるだけです。 ここではトレーニングの時間を改善することではなく、マルチワーカートレーニングの例を紹介することを目的としています。 End of explanation options = tf.data.Options() options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF global_batch_size = 64 multi_worker_dataset = mnist_setup.mnist_dataset(batch_size=64) dataset_no_auto_shard = multi_worker_dataset.with_options(options) Explanation: マルチワーカートレーニングの詳細 ここまで、基本的なマルチワーカーのセットアップの実行を見てきました。 このチュートリアルの残りの部分では、実際のユースケースで役立つ重要な要素について詳しく説明します。 データセットのシャーディング マルチワーカートレーニングでは、コンバージェンスとパフォーマンスを確保するために、データセットのシャーディングが必要です。 前のセクションの例は、tf.distribute.Strategy API により提供されるデフォルトの自動シャーディングに依存しています。tf.data.experimental.DistributeOptions の tf.data.experimental.AutoShardPolicy を設定することで、シャーディングを制御できます。 自動シャーディングの詳細については、分散入力ガイドをご覧ください。 自動シャーディングをオフにして、各レプリカがすべての例を処理する方法の簡単な例を次に示します(推奨されません)。 End of explanation model_path = '/tmp/keras-model' def _is_chief(task_type, task_id): # Note: there are two possible `TF_CONFIG` configuration. # 1) In addition to `worker` tasks, a `chief` task type is use; # in this case, this function should be modified to # `return task_type == 'chief'`. # 2) Only `worker` task type is used; in this case, worker 0 is # regarded as the chief. The implementation demonstrated here # is for this case. 
# For the purpose of this Colab section, the `task_type is None` case # is added because it is effectively run with only a single worker. return (task_type == 'worker' and task_id == 0) or task_type is None def _get_temp_dir(dirpath, task_id): base_dirpath = 'workertemp_' + str(task_id) temp_dir = os.path.join(dirpath, base_dirpath) tf.io.gfile.makedirs(temp_dir) return temp_dir def write_filepath(filepath, task_type, task_id): dirpath = os.path.dirname(filepath) base = os.path.basename(filepath) if not _is_chief(task_type, task_id): dirpath = _get_temp_dir(dirpath, task_id) return os.path.join(dirpath, base) task_type, task_id = (strategy.cluster_resolver.task_type, strategy.cluster_resolver.task_id) write_model_path = write_filepath(model_path, task_type, task_id) Explanation: 評価 validation_data を Model.fit に渡すと、エポックごとにトレーニングと評価が交互に行われるようになります。validation_data を取る評価は同じセットのワーカー間で分散されているため、評価結果はすべてのワーカーが使用できるように集計されます。 トレーニングと同様に、評価データセットもファイルレベルで自動的にシャーディングされます。評価データセットにグローバルバッチサイズを設定し、validation_steps を設定する必要があります。 評価ではデータセットを繰り返すことも推奨されます。 Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. 性能 これで、MultiWorkerMirroredStrategy を使ってマルチワーカーで実行するようにセットアップされた Keras モデルの準備ができました。 マルチワーカートレーニングのパフォーマンスを調整するには、次を行うことができます。 tf.distribute.MultiWorkerMirroredStrategy には複数の集合体通信実装が用意されています。 RING は、クロスホスト通信レイヤーとして、gRPC を使用したリング状の集合体を実装します。 NCCL は NVIDIA Collective Communication Library を使用して集合体を実装します。 AUTO は、選択をランタイムに任せます。 集合体の最適な実装は、GPU の数、GPU の種類、およびクラスタ内のネットワーク相互接続によって異なります。自動選択をオーバーライドするには、MultiWorkerMirroredStrategy のコンストラクタの communication_options パラメータを以下のようにして指定します。 python communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CollectiveCommunication.NCCL) 可能であれば、変数を tf.float にキャストします。 公式の ResNet モデルには、どのようにしてこれを行うかの例が示されています。 フォールトトレランス 同期トレーニングでは、ワーカーが 1 つでも失敗し、障害復旧の仕組みが存在しない場合、クラスタは失敗します。 Keras をtf.distribute.Strategyで使用する場合、ワーカーが停止した場合や不安定である際に、フォールトトラレンスが機能するというメリットがあります。この機能は、指定された分散ファイルシステムにトレーニングの状態を保存するため、失敗、または、中断されたインスタンスを再開する場合に、トレーニングの状態が復旧されます。 When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed. 
Note: Previously, the ModelCheckpoint callback provided a mechanism to restore the training state when a job that failed during multi-worker training was restarted. The newly introduced BackupAndRestore callback adds this support to single-worker training as well, in order to provide a consistent experience, and the fault-tolerance functionality has been removed from the existing ModelCheckpoint callback. Going forward, applications that depend on this behavior should migrate to the new BackupAndRestore callback.
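For reference, the migration target is the callback used later in this tutorial, for example:
callbacks = [tf.keras.callbacks.BackupAndRestore(backup_dir='/tmp/backup')]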
callbacks = [tf.keras.callbacks.BackupAndRestore(backup_dir='/tmp/backup')] with strategy.scope(): multi_worker_model = mnist_setup.build_and_compile_cnn_model() multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70, callbacks=callbacks) Explanation: BackupAndRestore コールバック tf.keras.callbacks.BackupAndRestore コールバックはフォールトトレランス機能を提供します。この機能はモデルと現在のエポック番号を一時チェックポイントファイルをbackup_dir引数でバックアップし、BackupAndRestoreでコールバックします。これは、各エポックの終了時に実行されます。 ジョブが中断されて再開されると、コールバックは最新のチェックポイントを復元するため、中断されたエポックの始めからトレーニングを続行することができます。未完了のエポックで中断前に実行された部分のトレーニングは破棄されるため、モデルの最終状態に影響することはありません。 これを使用するには、Model.fit 呼び出し時に、 Model.fit のインスタンスを指定します。 MultiWorkerMirroredStrategy では、ワーカーが中断されると、そのワーカーが再開するまでクラスタ全体が一時停止されます。そのワーカーが再開するとほかのワーカーも再開します。中断したワーカーがクラスタに参加し直すと、各ワーカーは以前に保存されたチェックポイントファイルを読み取って以前の状態を復元するため、クラスタの同期状態が戻ります。そして、トレーニングが続行されます。 BackupAndRestore コールバックは、CheckpointManager を使用して、トレーニングの状態を保存・復元します。これには、既存のチェックポイントを最新のものと併せて追跡するチェックポイントと呼ばれるファイルが生成されます。このため、ほかのチェックポイントの保存に backup_dir を再利用しないようにし、名前の競合を回避する必要があります。 現在、BackupAndRestore コールバックは、ストラテジーなしのシングルワーカートレーニング(MirroredStrategy)と MultiWorkerMirroredStrategy によるマルチワーカートレーニングをサポートしています。 以下に、マルチワーカートレーニングとシングルワーカートレーニングの 2 つの例を示します。 End of explanation
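# BackupAndRestore saves a temporary checkpoint under `backup_dir` at the end of
# each epoch; if a worker is interrupted and restarted, training resumes from the
# beginning of the interrupted epoch instead of starting over.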
1,634
Given the following text description, write Python code to implement the functionality described below step by step Description: Scheduler for quantum gates and instructions Author Step1: Gate schedule Let's first define a quantum circuit Step2: This is a rather boring circuit, but it is useful as a demonstration for the scheduler. We now define a scheduler and schedule the execution of gates in the circuit Step3: This result shows the scheduled starting time for each gate. In the first cycle we execute an iSWAP on qubit 0 and 1 and an X gate on qubit 0; In the second cycle we execute one X gate on qubit 2 and one Y gate on qubit 0; In the last cycle, we execute a single X gate on qubit 0. As printed bellow Step4: We can also schedule the gate follows the rule "as late as possible" Step5: The only difference is that the "iSWAP" gate and the X gate on qubit 2 are shifted by one cycle. Instruction/Pulse schedule Often different quantum gates will have different execution time. To consider this, we define a list of quantum instructions, where X gate has the execution time 1 while the iSWAP gate takes the time 3.5 Step6: The scheduled execution time for each gate can no longer be assigned to gate cycles. But we can see this through the noisy circuit simulator of qutip, where the circuit is compiled to control signals Step7: The green and orange pulses represent rotations along the X and Z axis. The green pulse is the iSWAP gate, which is executed simultaneously with a few other single-qubit rotations on qubit 0. Considering commuting gates We consider the following circuit Step8: At first sight, it might look like no gates can be run in parallel. However, the two CNOT gates actually commute and if we permute them, we can run one CNOT together with the last Hadamard gate. Step9: Random shuffle The scheduling algorithm is heuristic and hence cannot always find the optimal result. Therefore randomness can be added to the scheduling process by the parameters random_shuffle and repeat_num.
Python Code: # imports import qutip from qutip_qip.circuit import QubitCircuit from qutip_qip.compiler import Scheduler from qutip_qip.compiler import Instruction from qutip_qip.device import LinearSpinChain Explanation: Scheduler for quantum gates and instructions Author: Boxi Li (etamin1201@gmail.com) The finite coherence time of physical qubits is one of the major factors that limits the performance of quantum computation. In principle, the execution time of the circuit has to be much shorter than the coherence time. One way to reduce this execution time is to run several gates in the circuit parallelly. This can spare a lot of time if e.g. the same single-qubit gate is applied to all qubits, as in the Grover algorithm. A scheduler for a quantum computer, similar to its classical counterpart, schedules a quantum circuit to minimize the execution time. It determines the order in which the gates are executed. As a simple rule, the scheduler allows gates to be executed in parallel if they do not address the same qubits. Further hardware constraints can be included, but here we only focus on this simple criterion by considering gates that do not address the same qubits. The non-trivial part of a scheduler is that it has to consider the possible permutations of commuting quantum gates. Hence, exploring various possibilities for permutation while following physical constraints of the hardware is the main challenging task for the scheduler. We first show how we can schedule gate execution in quantum circuits using the built-in tools in qutip_qip and then the scheduling of compiled control pulses. In the end, we also show a simple example where the permutation of commuting gates matters in the scheduling and how to handle such situations. End of explanation circuit = QubitCircuit(3) circuit.add_gate("X", 0) circuit.add_gate("ISWAP", targets=[1,2]) circuit.add_gate("X", 2) circuit.add_gate("Y", 0) circuit.add_gate("X", 0) circuit Explanation: Gate schedule Let's first define a quantum circuit End of explanation scheduler = Scheduler("ASAP") # schedule as soon as possible scheduled_time = scheduler.schedule(circuit) scheduled_time Explanation: This is a rather boring circuit, but it is useful as a demonstration for the scheduler. We now define a scheduler and schedule the execution of gates in the circuit End of explanation cycle_list = [[] for i in range(max(scheduled_time) + 1)] for i, time in enumerate(scheduled_time): gate = circuit.gates[i] cycle_list[time].append(gate.name + str(gate.targets)) for cycle in cycle_list: print(cycle) Explanation: This result shows the scheduled starting time for each gate. In the first cycle we execute an iSWAP on qubit 0 and 1 and an X gate on qubit 0; In the second cycle we execute one X gate on qubit 2 and one Y gate on qubit 0; In the last cycle, we execute a single X gate on qubit 0. 
As printed below:
1,635
Given the following text description, write Python code to implement the functionality described below step by step Description: Index - Back Step1: Building a Custom Widget - Hello World The widget framework is built on top of the Comm framework (short for communication). The Comm framework is a framework that allows the kernel to send/receive JSON messages to/from the front end (as seen below). To create a custom widget, you need to define the widget both in the browser and in the python kernel. Building a Custom Widget To get started, you'll create a simple hello world widget. Later you'll build on this foundation to make more complex widgets. Python Kernel DOMWidget and Widget To define a widget, you must inherit from the Widget or DOMWidget base class. If you intend for your widget to be displayed in the Jupyter notebook, you'll want to inherit from the DOMWidget. The DOMWidget class itself inherits from the Widget class. The Widget class is useful for cases in which the Widget is not meant to be displayed directly in the notebook, but instead as a child of another rendering environment. For example, if you wanted to create a three.js widget (a popular WebGL library), you would implement the rendering window as a DOMWidget and any 3D objects or lights meant to be rendered in that window as Widgets. _view_name Inheriting from the DOMWidget does not tell the widget framework what front end widget to associate with your back end widget. Instead, you must tell it yourself by defining specially named trait attributes, _view_name and _view_module (as seen below) and optionally _model_name and _model_module. Step2: sync=True traitlets Traitlets is an IPython library for defining type-safe properties on configurable objects. For this tutorial you do not need to worry about the configurable piece of the traitlets machinery. The sync=True keyword argument tells the widget framework to handle synchronizing that value to the browser. Without sync=True, the browser would have no knowledge of _view_name or _view_module. Other traitlet types Unicode, used for _view_name, is not the only Traitlet type, there are many more some of which are listed below Step3: Define the view Next, define your widget view class. Inherit from the DOMWidgetView by using the .extend method. Step4: Render method Lastly, override the base render method of the view to define custom rendering logic. A handle to the widget's default DOM element can be acquired via this.el. The el property is the DOM element associated with the view. Step5: Test You should be able to display your widget just like any other widget now. Step6: Making the widget stateful There is not much that you can do with the above example that you can't do with the IPython display framework. To change this, you will make the widget stateful. Instead of displaying a static "hello world" message, it will display a string set by the back end. First you need to add a traitlet in the back end. Use the name of value to stay consistent with the rest of the widget framework and to allow your widget to be used with interact. Step7: Accessing the model from the view To access the model associated with a view instance, use the model property of the view. get and set methods are used to interact with the Backbone model. get is trivial, however you have to be careful when using set. After calling the model set you need call the view's touch method. This associates the set operation with a particular view so output will be routed to the correct cell. 
The model also has an on method, which allows you to listen to events triggered by the model (like value changes). Rendering model contents By replacing the string literal with a call to model.get, the view will now display the value of the back end upon display. However, it will not update itself to a new value when the value changes. Step8: Dynamic updates To get the view to update itself dynamically, register a function to update the view's value when the model's value property changes. This can be done using the model.on method. The on method takes three parameters, an event name, callback handle, and callback context. The Backbone event named change will fire whenever the model changes. By appending Step9: Test
Python Code: from __future__ import print_function Explanation: Index - Back End of explanation import ipywidgets as widgets from traitlets import Unicode, validate class HelloWidget(widgets.DOMWidget): _view_name = Unicode('HelloView').tag(sync=True) _view_module = Unicode('hello').tag(sync=True) Explanation: Building a Custom Widget - Hello World The widget framework is built on top of the Comm framework (short for communication). The Comm framework is a framework that allows the kernel to send/receive JSON messages to/from the front end (as seen below). To create a custom widget, you need to define the widget both in the browser and in the python kernel. Building a Custom Widget To get started, you'll create a simple hello world widget. Later you'll build on this foundation to make more complex widgets. Python Kernel DOMWidget and Widget To define a widget, you must inherit from the Widget or DOMWidget base class. If you intend for your widget to be displayed in the Jupyter notebook, you'll want to inherit from the DOMWidget. The DOMWidget class itself inherits from the Widget class. The Widget class is useful for cases in which the Widget is not meant to be displayed directly in the notebook, but instead as a child of another rendering environment. For example, if you wanted to create a three.js widget (a popular WebGL library), you would implement the rendering window as a DOMWidget and any 3D objects or lights meant to be rendered in that window as Widgets. _view_name Inheriting from the DOMWidget does not tell the widget framework what front end widget to associate with your back end widget. Instead, you must tell it yourself by defining specially named trait attributes, _view_name and _view_module (as seen below) and optionally _model_name and _model_module. End of explanation %%javascript define('hello', ["jupyter-js-widgets"], function(widgets) { }); Explanation: sync=True traitlets Traitlets is an IPython library for defining type-safe properties on configurable objects. For this tutorial you do not need to worry about the configurable piece of the traitlets machinery. The sync=True keyword argument tells the widget framework to handle synchronizing that value to the browser. Without sync=True, the browser would have no knowledge of _view_name or _view_module. Other traitlet types Unicode, used for _view_name, is not the only Traitlet type, there are many more some of which are listed below: Any Bool Bytes CBool CBytes CComplex CFloat CInt CLong CRegExp CUnicode CaselessStrEnum Complex Dict DottedObjectName Enum Float FunctionType Instance InstanceType Int List Long Set TCPAddress Tuple Type Unicode Union Not all of these traitlets can be synchronized across the network, only the JSON-able traits and Widget instances will be synchronized. Front end (JavaScript) Models and views The IPython widget framework front end relies heavily on Backbone.js. Backbone.js is an MVC (model view controller) framework. Widgets defined in the back end are automatically synchronized with generic Backbone.js models in the front end. The traitlets are added to the front end instance automatically on first state push. The _view_name trait that you defined earlier is used by the widget framework to create the corresponding Backbone.js view and link that view to the model. Import jupyter-js-widgets You first need to import the jupyter-js-widgets module. To import modules, use the define method of require.js (as seen below). 
End of explanation %%javascript require.undef('hello'); define('hello', ["jupyter-js-widgets"], function(widgets) { // Define the HelloView var HelloView = widgets.DOMWidgetView.extend({ }); return { HelloView: HelloView } }); Explanation: Define the view Next, define your widget view class. Inherit from the DOMWidgetView by using the .extend method. End of explanation %%javascript require.undef('hello'); define('hello', ["jupyter-js-widgets"], function(widgets) { var HelloView = widgets.DOMWidgetView.extend({ // Render the view. render: function() { this.el.textContent = 'Hello World!'; }, }); return { HelloView: HelloView }; }); Explanation: Render method Lastly, override the base render method of the view to define custom rendering logic. A handle to the widget's default DOM element can be acquired via this.el. The el property is the DOM element associated with the view. End of explanation HelloWidget() Explanation: Test You should be able to display your widget just like any other widget now. End of explanation class HelloWidget(widgets.DOMWidget): _view_name = Unicode('HelloView').tag(sync=True) _view_module = Unicode('hello').tag(sync=True) value = Unicode('Hello World!').tag(sync=True) Explanation: Making the widget stateful There is not much that you can do with the above example that you can't do with the IPython display framework. To change this, you will make the widget stateful. Instead of displaying a static "hello world" message, it will display a string set by the back end. First you need to add a traitlet in the back end. Use the name of value to stay consistent with the rest of the widget framework and to allow your widget to be used with interact. End of explanation %%javascript require.undef('hello'); define('hello', ["jupyter-js-widgets"], function(widgets) { var HelloView = widgets.DOMWidgetView.extend({ render: function() { this.el.textContent = this.model.get('value'); }, }); return { HelloView : HelloView }; }); Explanation: Accessing the model from the view To access the model associated with a view instance, use the model property of the view. get and set methods are used to interact with the Backbone model. get is trivial, however you have to be careful when using set. After calling the model set you need call the view's touch method. This associates the set operation with a particular view so output will be routed to the correct cell. The model also has an on method, which allows you to listen to events triggered by the model (like value changes). Rendering model contents By replacing the string literal with a call to model.get, the view will now display the value of the back end upon display. However, it will not update itself to a new value when the value changes. End of explanation %%javascript require.undef('hello'); define('hello', ["jupyter-js-widgets"], function(widgets) { var HelloView = widgets.DOMWidgetView.extend({ render: function() { this.value_changed(); this.model.on('change:value', this.value_changed, this); }, value_changed: function() { this.el.textContent = this.model.get('value'); }, }); return { HelloView : HelloView }; }); Explanation: Dynamic updates To get the view to update itself dynamically, register a function to update the view's value when the model's value property changes. This can be done using the model.on method. The on method takes three parameters, an event name, callback handle, and callback context. The Backbone event named change will fire whenever the model changes. 
By appending :value to it, you tell Backbone to only listen to the change event of the value property (as seen below). End of explanation w = HelloWidget() w w.value = 'test' Explanation: Test End of explanation
1,636
Given the following text description, write Python code to implement the functionality described below step by step Description: Simulate binaural hearing when the stimulus is rotated around a ring of speakers. Step1: First, some code to render a mouse's head and a ring of speakers Step2: Virtual sources Virtual sources are sounds that come from a location between two speakers. We will synthesize their sound using the two nearest speakers. Next, let's find the nearest two speakers for an arbitrary virtual angle. Subsequent code requires that the first speaker returned be the closest, so be extra careful! Important note Step3: Synthetic speaker amplitudes for virtual sources Next, we want to generate the amplitude that each of our two closest speakers should be driven at. In free space, the sound that reaches the ear will fall off in amplitude by 1/distance to the ear. We'll use the center of the head as our target point and will match the synthesized amplitude to what the virtual source would have created at that point. (We don't try to match phase!). More complicatedly, speaking of phase, we need to be aware of the fact that because the two speakers are different distances from the ear, it's possible that their signal will combine destructively. So ideally, we'd scale them to make sure that their sum has the right amplitude. In order to do this, we will use 3 methods. - "Closest" Step4: Calculate actual distances to ears The speakers are being scaled to aim at the center of the head. The ears are off center of this. The cosine rule for triangles helps us again here to find out the distance to each ear if we know the distance and angle to a source and the interaural distance Step5: To simulate, we need the wave equation This describes the signal which is detected at some distance from a spherically propagating perfect sound source. The amplitude at unit distance (the units here are m, based on the definition of the speed of sound) is <amp>. We do everything at time <t>=0, because all the phases and so on scale by this. Step6: Make a helper function to run the simulation Step7: And another one to plot results Step8: Let's do it! Simulate the situation where the mouse is offset from the center of the ring, but still inside.
Python Code: #%% import numpy as np import matplotlib.pyplot as plt import matplotlib.patches as mpatches from matplotlib.collections import PatchCollection Explanation: Simulate binaural hearing when the stimulus is rotated around a ring of speakers. End of explanation # MEASURE SOURCE ANGLE RELATIVE TO NOSE. POSITIVE IS CLOCKWISE WHEN LOOKING DOWN def render_ring(num_speakers=16, radius=0.5, speaker_radius=0.025, ax=None): if not ax: fig, ax = plt.subplots(1,1) ax.set_xlim(-2*radius,2*radius) ax.set_ylim(-2*radius,2*radius) ax.set_aspect('equal') SpeakerAngles = np.linspace(0, np.pi*2, num=num_speakers, endpoint=False) for n in range(num_speakers): center = (np.sin(SpeakerAngles[n])*radius, np.cos(SpeakerAngles[n])*radius ) ax.add_patch(mpatches.Circle(center,speaker_radius)) return ax def render_mouse(interaural_distance=0.0086, ax=None, ear_diameter=0.008, scale=3, xpos=0): if not ax: fig, ax = plt.subplots(1,1) ax.set_xlim(-2*radius,2*radius) ax.set_ylim(-2*radius,2*radius) ax.set_aspect('equal') x = interaural_distance/2*scale eye_y=1*interaural_distance*scale eye_r=interaural_distance/4*scale ear_height=1.25*ear_diameter*scale ear_width=1.25*ear_diameter/2*scale head_ytop=2.5*interaural_distance*scale head_ybot=0.75*interaural_distance*scale ax.add_patch(mpatches.Circle((x+xpos,eye_y),eye_r,color='black')) ax.add_patch(mpatches.Circle((-x+xpos,eye_y),eye_r,color='black')) ax.add_patch(mpatches.Polygon([[-x*1.5+xpos,-head_ybot],[0+xpos,head_ytop],[x*1.5+xpos,-head_ybot]], color='gray')) ax.add_patch(mpatches.Ellipse((x+xpos,0),ear_width,ear_height,30,color='slateblue')) ax.add_patch(mpatches.Ellipse((-x+xpos,0),ear_width,ear_height,-30,color='slateblue')) return ax # Test it out fig = plt.figure(figsize=(9,3)) gs = fig.add_gridspec(1,3, hspace=0.3, wspace=0.3) ax1 = fig.add_subplot(gs[0:,0]) ax2 = fig.add_subplot(gs[0,1]) ax3 = fig.add_subplot(gs[0,2]) NumSpeakers = 16 Radius = 0.5 HeadPos = 0 ax1 = render_mouse(ax=ax1, xpos=HeadPos) ax1 = render_ring(num_speakers=NumSpeakers, radius=Radius, ax=ax1) ax1.autoscale() ax1.set_aspect('equal') ax1.set_title('Interaural distance 8.6 mm\nHead centered at {} m\n' '{} Speakers at {} m'.format(HeadPos, NumSpeakers, Radius)) HeadPos = 0.6 ax2 = render_mouse(ax=ax2, xpos=HeadPos) ax2 = render_ring(num_speakers=NumSpeakers, radius=Radius, ax=ax2) ax2.autoscale() ax2.set_aspect('equal') ax2.set_title('Interaural distance 8.6 mm\nHead centered at {} m\n' '{} Speakers at {} m'.format(HeadPos, NumSpeakers, Radius)) NumSpeakers = 64 HeadPos = 0.4 ax3 = render_mouse(ax=ax3, xpos=HeadPos) ax3 = render_ring(num_speakers=NumSpeakers, radius=Radius, ax=ax3, speaker_radius=0.025/2) ax3.autoscale() ax3.set_aspect('equal') ax3.set_title('Interaural distance 8.6 mm\nHead centered at {} m\n' '{} Speakers at {} m'.format(HeadPos, NumSpeakers, Radius)) Explanation: First, some code to render a mouse's head and a ring of speakers End of explanation def virtual_source_angle(virtual_angle, num_speakers, radius): speaker_angles = np.linspace(0, 2*np.pi, num_speakers, endpoint=False) speaker_angles = np.expand_dims(speaker_angles,axis=0) virtual_angle = np.mod(virtual_angle, 2*np.pi) # make angles between 0 and 2pi dist_mat = np.minimum(np.abs(virtual_angle - speaker_angles), np.abs(2*np.pi - (virtual_angle - speaker_angles)) ) ClosestSourceAngle = np.take(speaker_angles, np.argmin(dist_mat, axis=1)) if (virtual_angle.ndim > 0): ClosestSourceAngle = np.expand_dims(ClosestSourceAngle,1) dist_mat2 = dist_mat for rowidx, row in enumerate(dist_mat): dist_mat2[rowidx, 
row.argmin()] = np.inf NextClosestSourceAngle = np.take(speaker_angles, np.argmin(dist_mat2, axis=1)) if (virtual_angle.ndim > 0): NextClosestSourceAngle = np.expand_dims(NextClosestSourceAngle,1) return [ClosestSourceAngle, NextClosestSourceAngle] Explanation: Virtual sources Virtual sources are sounds that come from a location between two speakers. We will synthesize their sound using the two nearest speakers. Next, let's find the nearest two speakers for an arbitrary virtual angle. Subsequent code requires that the first speaker returned be the closest, so be extra careful! Important note: Our reference for virtual source angles is "north" (in front of the nose), going clockwise. End of explanation def cos_triangle_rule(x1, x2, phi): # Really useful formula for finding the length of a vector difference when you # know the lengths of the two arguments and the angle between them. # In other words, return ||A - B||, where phi is the angle between them, # and ||A|| = x1 and ||B|| = x2. return np.sqrt(x1**2 + x2**2 - 2*x1*x2*np.cos(phi)) def virtual_source_amplitude(virtual_angle, radius, source1_angle, source2_angle, freq, head_x=0, speed_of_sound=343.0, synthesis='phasor'): # We assume that the virtual source is on a ring with the given radius, and # that the head is positioned at the vertical center, and horizontally displaced # by head_x. The amplitude is equally split between the two given sources # based on their relative distance from the virtual source. # NOTE: source angles are measured clockwise from vertical axis, but head is displaced # in the horizontal axis # Finally, we assume the virtual amplitude is 1. Everything scales with that virtual_angle = np.mod(virtual_angle, 2*np.pi) # make angles between 0 and 2pi source1_angle = np.mod(source1_angle, 2*np.pi) source2_angle = np.mod(source2_angle, 2*np.pi) virtual_distance = cos_triangle_rule(radius, head_x, np.pi/2 - virtual_angle) source1_distance = cos_triangle_rule(radius, head_x, np.pi/2 - source1_angle) source2_distance = cos_triangle_rule(radius, head_x, np.pi/2 - source2_angle) dAngle = np.minimum(np.abs(source2_angle-source1_angle), # Angle between sources. Note that this assumes np.abs(2*np.pi - (source2_angle-source1_angle))) # they are adjacent! angularDistance = np.minimum(np.abs(virtual_angle - source1_angle), # Take into account wrapping np.abs(2*np.pi - (virtual_angle - source1_angle))) source1_relative_angle = 1 - angularDistance/dAngle # Want amplitude to be large (close to 1) source2_relative_angle = 1 - source1_relative_angle # for closest source, source1 if synthesis=='phasor': omega = 2*np.pi*freq k = omega / speed_of_sound phi_v = -k*virtual_distance # This is the phase angle of the sound from virtual source. phi_1 = -k*source1_distance phi_2 = -k*source2_distance # If we wanted to match phase and amplitude of the virtual signal, it's easy # Define a triangle by the three sounds in phase space. # Use the law of sines find out proper amplitudes. scale = (1/virtual_distance) / np.sin(phi_2 - phi_1) source1_amp = source1_distance * np.sin(phi_2 - phi_v) * scale source2_amp = source2_distance * np.sin(phi_v - phi_1) * scale # The downside of this is that at higher frequencies, we'll end up cycling # our amplitudes really fast to match the phase. Instead, let's just # smoothly interpolate our phase from one speaker to the next # TBD. 
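        # Masking each row's current minimum with np.inf means the argmin over
        # dist_mat2 below picks out the second-closest speaker for every virtual angle.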
        # Fix the situation where the two speakers are at the same distance
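        # (When the speakers are equidistant from the head center, phi_1 == phi_2 and
        # the sin(phi_2 - phi_1) factor above is zero, so the law-of-sines scaling is
        # undefined; fall back to the plain angular-interpolation weights instead.)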
VirtualAmp = 1 # This is a fixed assumption LR_Distances = [left_ear_distance(VirtualAngles, radius, head_pos), right_ear_distance(VirtualAngles, radius, head_pos)] DesiredSounds = [np.abs(wave_eq(VirtualAmp, LR_Distances[0], frequencies)), np.abs(wave_eq(VirtualAmp, LR_Distances[1], frequencies))] SynthAngle = virtual_source_angle(VirtualAngles, num_speakers, radius) SynthAmp = virtual_source_amplitude(VirtualAngles, radius, SynthAngle[0], SynthAngle[1], frequencies, head_pos, synthesis=synthesis) LeftDistances = [left_ear_distance(s, radius, head_pos) for s in SynthAngle] RightDistances = [right_ear_distance(s, radius, head_pos) for s in SynthAngle] SynthesizedSounds = [ np.abs(wave_eq(SynthAmp[0], LeftDistances[0], frequencies) + \ wave_eq(SynthAmp[1], LeftDistances[1], frequencies)), np.abs(wave_eq(SynthAmp[0], RightDistances[0], frequencies) + \ wave_eq(SynthAmp[1], RightDistances[1], frequencies)) ] SynthesizedSoundAtCenter = np.abs(wave_eq(SynthAmp[0], LR_Distances[0], frequencies) + \ wave_eq(SynthAmp[1], LR_Distances[1], frequencies)) return VirtualAngles, LR_Distances, DesiredSounds, SynthAngle, SynthAmp, \ [LeftDistances, RightDistances], SynthesizedSounds, SynthesizedSoundAtCenter Explanation: Make a helper function to run the simulation End of explanation def plot_results(virtual_angles, frequencies, radius, head_x, num_speakers, desired_sounds, synthesized_sounds, synthesis='phasor', speaker_radius=0.05/2, # 5 cm diameter speakers mouse_scale=3, interaural_distance=0.0086): # 8.6 mm fig = plt.figure(figsize=(20,12)) gs = fig.add_gridspec(2,5, hspace=0.3, wspace=0.3) ax1 = fig.add_subplot(gs[0:,0]) ax2 = fig.add_subplot(gs[0,1:3]) ax3 = fig.add_subplot(gs[1,1:3]) ax4 = fig.add_subplot(gs[0,3:]) ax5 = fig.add_subplot(gs[1,3:]) ax1 = render_mouse(interaural_distance=interaural_distance, ax=ax1, scale=mouse_scale, xpos=head_x) ax1 = render_ring(num_speakers=num_speakers, radius=radius, ax=ax1, speaker_radius=speaker_radius) ax1.autoscale() ax1.set_aspect('equal') ax1.set_title('Interaural distance 8.6 mm\nHead centered at {} m\n' '{} Speakers at {} m'.format(head_x, num_speakers, radius)) vmax = 1.25 * np.max(desired_sounds[1]) vmin = np.min(desired_sounds[1]) if (vmin < 0): vmin = 1.25 * vmin else: vmin = 0.5 * vmin RightEarSignal = ax2.imshow((synthesized_sounds[1]).T, origin='lower', interpolation=None, aspect='auto', vmin=vmin, vmax=vmax, extent=[virtual_angles[0,0],virtual_angles[-1,0], frequencies[0,0],frequencies[0,-1]]) ax2.set_yscale('log') ax2.set_ylabel('Virtual Source Frequency') RightEarSignal.cmap.set_over('red') RightEarSignal.cmap.set_over('pink') fig.colorbar(RightEarSignal, ax=ax2, extend='max') ax2.set_title('Phasor Synthesis Sound at Right Ear') DesiredRightEar = ax3.imshow(desired_sounds[1].T, origin='lower', interpolation=None, aspect='auto', vmin=vmin, vmax=vmax, extent=[virtual_angles[0,0],virtual_angles[-1,0], frequencies[0,0],frequencies[0,-1]]) ax3.set_xlabel('Virtual Source Angle') ax3.set_yscale('log') ax3.set_ylabel('Virtual Source Frequency') fig.colorbar(DesiredRightEar, ax=ax3, extend='both') ax3.set_title('Desired Sound at Right Ear') # Next, plot ILDs if head_x > radius: first = 0 second = 1 label = '(Left - Right)' else: first = 1 second = 0 label = '(Right - Left)' vmax = 1.25 * np.max(desired_sounds[first] - desired_sounds[second]) vmin = 1.25 * np.min(desired_sounds[first] - desired_sounds[second]) ILD = ax4.imshow((synthesized_sounds[first] - synthesized_sounds[second]).T, origin='lower', interpolation=None, aspect='auto', vmin=vmin, 
                     vmax=vmax, extent=[virtual_angles[0,0],virtual_angles[-1,0],
1,637
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have the tensors:
Problem: import numpy as np import pandas as pd import torch ids, x = load_data() ids = torch.argmax(ids, 1, True) idx = ids.repeat(1, 2).view(70, 1, 2) result = torch.gather(x, 1, idx) result = result.squeeze(1)
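# How this solution works: torch.argmax picks, for each of the 70 rows of `ids`,
# the index of its largest entry; repeating and viewing that index tensor to shape
# (70, 1, 2) lets torch.gather pull out the matching (1, 2) slice along dim 1 of
# `x`, and squeeze(1) leaves the final (70, 2) result.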
1,638
Given the following text description, write Python code to implement the functionality described below step by step Description: Spatial Weights Spatial weights are mathematical structures used to represent spatial relationships. They characterize the relationship of each observation to every other observation using some concept of proximity or closeness that depends on the weight type. They can be build in PySAL from shapefiles, as well as some types of files. Step1: There are functions to construct weights directly from a file path. Step2: Weight Types Contiguity Step3: All weights objects have a few traits that you can use to work with the weights object, as well as to get information about the weights object. To get the neighbors & weights around an observation, use the observation's index on the weights object, like a dictionary Step4: By default, the weights and the pandas dataframe will use the same index. So, we can view the observation and its neighbors in the dataframe by putting the observation's index and its neighbors' indexes together in one list Step5: and grabbing those elements from the dataframe Step6: A full, dense matrix describing all of the pairwise relationships is constructed using the .full method, or when pysal.full is called on a weights object Step7: Note that this matrix is binary, in that its elements are either zero or one, since an observation is either a neighbor or it is not a neighbor. However, many common use cases of spatial weights require that the matrix is row-standardized. This is done simply in PySAL using the .transform attribute Step8: Now, if we build a new full matrix, its rows should sum to one Step9: Since weight matrices are typically very sparse, there is also a sparse weights matrix constructor Step10: By default, PySAL assigns each observation an index according to the order in which the observation was read in. This means that, by default, all of the observations in the weights object are indexed by table order. If you have an alternative ID variable, you can pass that into the weights constructor. For example, the NAT.shp dataset has a possible alternative ID Variable, a FIPS code. Step11: The observation we were discussing above is in the fifth row Step12: Now, Pend Oreille county has a different index Step13: Note that a KeyError in Python usually means that some index, here 4, was not found in the collection being searched, the IDs in the queen weights object. This makes sense, since we explicitly passed an idVariable argument, and nothing has a FIPS code of 4. Instead, if we use the observation's FIPS code Step14: We get what we need. In addition, we have to now query the dataframe using the FIPS code to find our neighbors. But, this is relatively easy to do, since pandas will parse the query by looking into python objects, if told to. First, let us store the neighbors of our target county Step15: Then, we can use this list in .query Step16: Note that we have to use @ before the name in order to show that we're referring to a python object and not a column in the dataframe. Step17: Of course, we could also reindex the dataframe to use the same index as our weights Step18: Now that both are using the same weights, we can use the .loc indexer again Step19: Rook Weights Rook weights are another type of contiguity weight, but consider observations as neighboring only when they share an edge. 
The rook neighbors of an observation may be different than its queen neighbors, depending on how the observation and its nearby polygons are configured. We can construct this in the same way as the queen weights, using the special rook_from_shapefile function Step20: These weights function exactly like the Queen weights, and are only distinguished by what they consider "neighbors." Bishop Weights In theory, a "Bishop" weighting scheme is one that arises when only polygons that share vertexes are considered to be neighboring. But, since Queen contiguigy requires either an edge or a vertex and Rook contiguity requires only shared edges, the following relationship is true Step21: Thus, the vast majority of counties have no bishop neighbors. But, a few do. A simple way to see these observations in the dataframe is to find all elements of the dataframe that are not "islands," the term for an observation with no neighbors Step22: Distance There are many other kinds of weighting functions in PySAL. Another separate type use a continuous measure of distance to define neighborhoods. To use these measures, we first must extract the polygons' centroids. For each polygon poly in dataframe.geometry, we want poly.centroid. So, one way to do this is to make a list of all of the centroids Step23: If we were working with point data, this step would be unncessary. KnnW If we wanted to consider only the k-nearest neighbors to an observation's centroid, we could use the knnW function in PySAL. This specific type of distance weights requires that we first build a KDTree, a special representation for spatial point data. Fortunately, this is built in to PySAL Step24: Then, we can use this to build a spatial weights object where only the closest k observations are considered "neighbors." In this example, let's do the closest 5 Step25: So, all observations have exactly 5 neighbors. Sometimes, these neighbors are actually different observations than the ones identified by contiguity neighbors. For example, Pend Oreille gets a new neighbor, Kootenai county Step26: Kernel W Kernel Weights are continuous distance-based weights that use kernel densities to provide an indication of neighborliness. Typically, they estimate a bandwidth, which is a parameter governing how far out observations should be considered neighboring. Then, using this bandwidth, they evaluate a continuous kernel function to provide a weight between 0 and 1. Many different choices of kernel functions are supported, and bandwidth can be estimated at each point or over the entire map. For example, if we wanted to use a single estimated bandwidth for the entire map and weight according to a gaussian kernel
Python Code: import pysal as ps import numpy as np Explanation: Spatial Weights Spatial weights are mathematical structures used to represent spatial relationships. They characterize the relationship of each observation to every other observation using some concept of proximity or closeness that depends on the weight type. They can be build in PySAL from shapefiles, as well as some types of files. End of explanation shp_path = ps.examples.get_path('NAT.shp') Explanation: There are functions to construct weights directly from a file path. End of explanation qW = ps.queen_from_shapefile(shp_path) dataframe = ps.pdio.read_files(shp_path) qW Explanation: Weight Types Contiguity: Queen Weights A commonly-used type of weight is a queen contigutiy weight, which reflects adjacency relationships as a binary indicator variable denoting whether or not a polygon shares an edge or a verted each another polygon. These weights are symmetric, in that when polygon $A$ neighbors polygon $B$, both $w_{AB} = 1$ and $w_{BA} = 1$. To construct queen weights from a shapefile, use the queen_from_shapefile function: End of explanation qW[4] #neighbors & weights of the 5th observation Explanation: All weights objects have a few traits that you can use to work with the weights object, as well as to get information about the weights object. To get the neighbors & weights around an observation, use the observation's index on the weights object, like a dictionary: End of explanation self_and_neighbors = [4] self_and_neighbors.extend(qW.neighbors[4]) print(self_and_neighbors) Explanation: By default, the weights and the pandas dataframe will use the same index. So, we can view the observation and its neighbors in the dataframe by putting the observation's index and its neighbors' indexes together in one list: End of explanation dataframe.loc[self_and_neighbors] Explanation: and grabbing those elements from the dataframe: End of explanation Wmatrix, ids = qW.full() #Wmatrix, ids = ps.full(qW) Wmatrix Explanation: A full, dense matrix describing all of the pairwise relationships is constructed using the .full method, or when pysal.full is called on a weights object: End of explanation qW.transform = 'r' Explanation: Note that this matrix is binary, in that its elements are either zero or one, since an observation is either a neighbor or it is not a neighbor. However, many common use cases of spatial weights require that the matrix is row-standardized. This is done simply in PySAL using the .transform attribute End of explanation Wmatrix, ids = qW.full() Wmatrix.sum(axis=1) #numpy axes are 0:column, 1:row, 2:facet, into higher dimensions Explanation: Now, if we build a new full matrix, its rows should sum to one: End of explanation qW.sparse Explanation: Since weight matrices are typically very sparse, there is also a sparse weights matrix constructor: End of explanation dataframe.head() Explanation: By default, PySAL assigns each observation an index according to the order in which the observation was read in. This means that, by default, all of the observations in the weights object are indexed by table order. If you have an alternative ID variable, you can pass that into the weights constructor. For example, the NAT.shp dataset has a possible alternative ID Variable, a FIPS code. End of explanation qW = ps.queen_from_shapefile(shp_path, idVariable='FIPS') Explanation: The observation we were discussing above is in the fifth row: Pend Oreille county, Washington. Note that its FIPS code is 53051. 
Then, instead of indexing the weights and the dataframe just based on read-order, use the FIPS code as an index: End of explanation qW[4] #fails, since no FIPS is 4. Explanation: Now, Pend Oreille county has a different index: End of explanation qW['53051'] Explanation: Note that a KeyError in Python usually means that some index, here 4, was not found in the collection being searched, the IDs in the queen weights object. This makes sense, since we explicitly passed an idVariable argument, and nothing has a FIPS code of 4. Instead, if we use the observation's FIPS code: End of explanation self_and_neighbors = ['53051'] self_and_neighbors.extend(qW.neighbors['53051']) Explanation: We get what we need. In addition, we have to now query the dataframe using the FIPS code to find our neighbors. But, this is relatively easy to do, since pandas will parse the query by looking into python objects, if told to. First, let us store the neighbors of our target county: End of explanation dataframe.query('FIPS in @self_and_neighbors') Explanation: Then, we can use this list in .query: End of explanation #dataframe.query('FIPS in neighs') will fail because there is no column called 'neighs' Explanation: Note that we have to use @ before the name in order to show that we're referring to a python object and not a column in the dataframe. End of explanation fips_frame = dataframe.set_index(dataframe.FIPS) fips_frame.head() Explanation: Of course, we could also reindex the dataframe to use the same index as our weights: End of explanation fips_frame.loc[self_and_neighbors] Explanation: Now that both are using the same weights, we can use the .loc indexer again: End of explanation rW = ps.rook_from_shapefile(shp_path, idVariable='FIPS') rW['53051'] Explanation: Rook Weights Rook weights are another type of contiguity weight, but consider observations as neighboring only when they share an edge. The rook neighbors of an observation may be different than its queen neighbors, depending on how the observation and its nearby polygons are configured. We can construct this in the same way as the queen weights, using the special rook_from_shapefile function End of explanation bW = ps.w_difference(qW, rW, constrained=False, silent_island_warning=True) #silence because there will be a lot of warnings bW.histogram Explanation: These weights function exactly like the Queen weights, and are only distinguished by what they consider "neighbors." Bishop Weights In theory, a "Bishop" weighting scheme is one that arises when only polygons that share vertexes are considered to be neighboring. But, since Queen contiguigy requires either an edge or a vertex and Rook contiguity requires only shared edges, the following relationship is true: $$ \mathcal{Q} = \mathcal{R} \cup \mathcal{B} $$ where $\mathcal{Q}$ is the set of neighbor pairs via queen contiguity, $\mathcal{R}$ is the set of neighbor pairs via Rook contiguity, and $\mathcal{B}$ via Bishop contiguity. Thus: $$ \mathcal{Q} \setminus \mathcal{R} = \mathcal{B}$$ Bishop weights entail all Queen neighbor pairs that are not also Rook neighbors. PySAL does not have a dedicated bishop weights constructor, but you can construct very easily using the w_difference function. This function is one of a family of tools to work with weights, all defined in ps.weights, that conduct these types of set operations between weight objects. End of explanation islands = bW.islands dataframe.query('FIPS not in @islands') Explanation: Thus, the vast majority of counties have no bishop neighbors. 
But, a few do. A simple way to see these observations in the dataframe is to find all elements of the dataframe that are not "islands," the term for an observation with no neighbors: End of explanation centroids = [list(poly.centroid) for poly in dataframe.geometry] centroids[0:5] #let's look at the first five Explanation: Distance There are many other kinds of weighting functions in PySAL. Another separate type uses a continuous measure of distance to define neighborhoods. To use these measures, we first must extract the polygons' centroids. For each polygon poly in dataframe.geometry, we want poly.centroid. So, one way to do this is to make a list of all of the centroids: End of explanation kdtree = ps.cg.KDTree(centroids) Explanation: If we were working with point data, this step would be unnecessary. KnnW If we wanted to consider only the k-nearest neighbors to an observation's centroid, we could use the knnW function in PySAL. This specific type of distance weights requires that we first build a KDTree, a special representation for spatial point data. Fortunately, this is built into PySAL: End of explanation nn5 = ps.knnW(kdtree, k=5) nn5.histogram Explanation: Then, we can use this to build a spatial weights object where only the closest k observations are considered "neighbors." In this example, let's do the closest 5: End of explanation nn5[4] dataframe.loc[nn5.neighbors[4] + [4]] fips_frame.loc[qW.neighbors['53051'] + ['53051']] Explanation: So, all observations have exactly 5 neighbors. Sometimes, these neighbors are actually different observations than the ones identified by contiguity neighbors. For example, Pend Oreille gets a new neighbor, Kootenai county: End of explanation kernelW = ps.Kernel(centroids, fixed=True, function='gaussian') #ps.Kernel(centroids, fixed=False, function='gaussian') #same kernel, but bandwidth changes at each observation dataframe.loc[kernelW.neighbors[4] + [4]] Explanation: Kernel W Kernel Weights are continuous distance-based weights that use kernel densities to provide an indication of neighborliness. Typically, they estimate a bandwidth, which is a parameter governing how far out observations should be considered neighboring. Then, using this bandwidth, they evaluate a continuous kernel function to provide a weight between 0 and 1. Many different choices of kernel functions are supported, and bandwidth can be estimated at each point or over the entire map. For example, if we wanted to use a single estimated bandwidth for the entire map and weight according to a Gaussian kernel: End of explanation
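As a small added sketch (not part of the original notebook), the contiguity-based and distance-based neighbor sets built above can be compared directly. The only wrinkle is that qW is keyed by FIPS codes while nn5 is keyed by read order, so the knn ids are translated to FIPS codes through the dataframe first:
# Compare queen vs. k-nearest-neighbor neighbors for Pend Oreille county (read-order id 4, FIPS '53051').
knn_fips = set(dataframe.loc[nn5.neighbors[4], 'FIPS'])
queen_fips = set(qW.neighbors['53051'])
print("neighbors under both schemes:", knn_fips & queen_fips)
print("knn-only neighbors:", knn_fips - queen_fips)
print("queen-only neighbors:", queen_fips - knn_fips)
Set differences like these make it easy to spot cases such as Kootenai county, which the text above points out only shows up as a distance-based neighbor.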
1,639
Given the following text description, write Python code to implement the functionality described below step by step Description: Poincare Map This example shows how to calculate a simple Poincare Map with REBOUND. A Poincare Map (or sometimes called a Poincare Section) can be helpful to understand dynamical systems. Step1: We first create the initial conditions for our map. The most interesting Poincare maps exist near resonance, so we have to find a system near a resonance. The easiest way to get planets into resonance is migration. So that's what we'll do. Initially we set up a simulation in which the planets are placed just outside the 2:1 mean motion resonance. Step2: We then define a simple migration force that will act on the outer planet. We implement it in python. This is relatively slow, but we only need to migrate the planet for a short time. Step3: Next, we link the additional migration forces to our REBOUND simulation and get the pointer to the particle array. Step4: Then, we just integrate the system for 3000 time units, about 500 years in units where $G=1$. Step5: Then we save the simulation to a binary file. We'll be reusing it a lot later to create the initial conditions, and it is faster to load it from file than to migrate the planets into resonance each time. Step6: To create the Poincare map, we first define which hypersurface we want to look at. Here, we choose the pericenter of the outer planet. Step7: We will also need a helper function that ensures our resonant angle is in the range $[-\pi:\pi]$. Step8: The following function generates the Poincare Map for one set of initial conditions. We first load the resonant system from the binary file we created earlier. We then randomly perturb the velocity of one of the particles. If we perturb the velocity enough, the planets will no longer be in resonance. We also initialize shadow particles to calculate the MEGNO, a fast chaos indicator. Step9: For this example we'll run 10 initial conditions. Some of them will be in resonance, some others won't be. We run them in parallel using the InterruptiblePool that comes with REBOUND. Step10: Now we can finally plot the Poincare Map. We color the points by the MEGNO value of the particular simulation. A value close to 2 corresponds to quasi-periodic orbits; a large value indicates chaotic motion.
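One clarifying note added here for reference (derived from the expressions used in the runone function below): the resonant angle that gets plotted is the standard 2:1 combination $\phi = \lambda_1 - 2\lambda_2 + \varpi_2$, where $\lambda_1, \lambda_2$ are the mean longitudes of the inner and outer planet and $\varpi_2 = \omega_2 + \Omega_2$ is the outer planet's longitude of pericenter; if the pair is locked in the resonance, $\phi$ librates instead of circulating.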
Python Code: import rebound import numpy as np Explanation: Poincare Map This example shows how to calculate a simple Poincare Map with REBOUND. A Poincare Map (or sometimes calles Poincare Section) can be helpful to understand dynamical systems. End of explanation sim = rebound.Simulation() sim.add(m=1.) sim.add(m=1e-3,a=1,e=0.001) sim.add(m=0.,a=1.65) sim.move_to_com() Explanation: We first create the initial conditions for our map. The most interesting Poincare maps exist near resonance, so we have to find a system near a resonance. The easiest way to get planets into resonance is migration. So that's what we'll do. Initially we setup a simulation in which the planets are placed just outside the 2:1 mean motion resonance. End of explanation def migrationForce(reb_sim): tau = 40000. ps[2].ax -= ps[2].vx/tau ps[2].ay -= ps[2].vy/tau ps[2].az -= ps[2].vz/tau Explanation: We then define a simple migration force that will act on the outer planet. We implement it in python. This is relatively slow, but we only need to migrate the planet for a short time. End of explanation sim.additional_forces = migrationForce ps = sim.particles Explanation: Next, we link the additional migration forces to our REBOUND simulation and get the pointer to the particle array. End of explanation sim.integrate(3000.) Explanation: Then, we just integrate the system for 3000 time units, about 500 years in units where $G=1$. End of explanation sim.save("resonant_system.bin") Explanation: Then we save the simulation to a binary file. We'll be reusing it a lot later to create the initial conditions and it is faster to load it from file than to migrate the planets into resonance each time. End of explanation def hyper(sim): ps = sim.particles dx = ps[2].x -ps[0].x dy = ps[2].y -ps[0].y dvx = ps[2].vx-ps[0].vx dvy = ps[2].vy-ps[0].vy return dx*dvx + dy*dvy Explanation: To create the poincare map, we first define which hyper surface we want to look at. Here, we choose the pericenter of the outer planet. End of explanation def mod2pi(x): if x>np.pi: return mod2pi(x-2.*np.pi) if x<-np.pi: return mod2pi(x+2.*np.pi) return x Explanation: We will also need a helper function that ensures our resonant angle is in the range $[-\pi:\pi]$. End of explanation def runone(args): i = args # integer numbering the run N_points_max = 2000 # maximum number of point in our Poincare Section N_points = 0 poincare_map = np.zeros((N_points_max,2)) # setting up simulation from binary file sim = rebound.Simulation.from_file("resonant_system.bin") vx = 0.97+0.06*(float(i)/float(Nsim)) sim.particles[2].vx *= vx sim.t = 0. # reset time to 0 # Integrate simulation in small intervals # After each interval check if we crossed the # hypersurface. If so, bisect until we hit the # hypersurface exactly up to a precision # of dt_epsilon dt = 0.13 dt_epsilon = 0.001 sign = hyper(sim) while sim.t<15000. and N_points < N_points_max: oldt = sim.t olddt = sim.dt sim.integrate(oldt+dt) nsign = hyper(sim) if sign*nsign < 0.: # Hyper surface crossed. leftt = oldt rightt = sim.t sim.dt = -olddt while (rightt-leftt > dt_epsilon): # Bisection. midt = (leftt+rightt)/2. sim.integrate(midt) msign = hyper(sim) if msign*sign > 0.: leftt = midt sim.dt = 0.3*olddt else: rightt = midt sim.dt = -0.3*olddt # Hyper surface found up to precision of dt_epsilon. # Calculate orbital elements o = sim.calculate_orbits() # Check if we cross hypersurface in one direction or the other. if o[1].r<o[1].a: # Calculate resonant angle phi and its time derivative tp = np.pi*2. 
phi = mod2pi(o[0].l-2.*o[1].l+o[1].omega+o[1].Omega) phid = (tp/o[0].P-2.*tp/o[1].P)/(tp/o[0].P) # Store value for map poincare_map[N_points] = [phi,phid] N_points += 1 sim.dt = olddt sim.integrate(oldt+dt) sign = nsign # Rerun to calculate Megno sim = rebound.Simulation.from_file("resonant_system.bin") vx = 0.97+0.06*(float(i)/float(Nsim)) sim.particles[2].vx *= vx sim.t = 0. # reset time to 0 sim.init_megno(1e-16) # add variational (shadow) particles and calculate MEGNO sim.integrate(15000.) return (poincare_map, sim.calculate_megno(),vx) Explanation: The following function generate the Poincare Map for one set of initial conditions. We first load the resonant system from the binary file we created earlier. We then randomly perturb the velocity of one of the particles. If we perturb the velocity enough, the planets will not be in resonant anymore. We also initialize shadow particles to calculate the MEGNO, a fast chaos indicator. End of explanation Nsim = 10 pool = rebound.InterruptiblePool() res = pool.map(runone,range(Nsim)) Explanation: For this example we'll run 10 initial conditions. Some of them will be in resonance, some other won't be. We run them in parallel using the InterruptiblePool that comes with REBOUND. End of explanation %matplotlib inline import matplotlib.pyplot as plt fig = plt.figure(figsize=(14,8)) ax = plt.subplot(111) ax.set_xlabel("$\phi$"); ax.set_ylabel("$\dot{\phi}$") ax.set_xlim([-np.pi,np.pi]); ax.set_ylim([-0.06,0.1]) cm = plt.cm.get_cmap('brg') for m, megno, vx in res: c = np.empty(len(m[:,0])); c.fill(megno) p = ax.scatter(m[:,0],m[:,1],marker=".",c=c, vmin=1.4, vmax=3, s=25,edgecolor='none', cmap=cm) cb = plt.colorbar(p, ax=ax) cb.set_label("MEGNO $<Y>$") Explanation: Now we can finally plot the Poincare Map. We color the points by the MEGNO value of the particular simulation. A value close to 2 corresponds to quasi-periodic orbits, a large value indicate chaotic motion. End of explanation
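As an optional post-processing sketch (not part of the original example; the cutoff of 2.5 is only a rule of thumb), the runs can be split into quasi-periodic and chaotic ones by their MEGNO values:
# Classify each run by its MEGNO value: ~2 means quasi-periodic, much larger means chaotic.
threshold = 2.5
regular_vx = [vx for _, megno, vx in res if megno < threshold]
chaotic_vx = [vx for _, megno, vx in res if megno >= threshold]
print("quasi-periodic vx factors:", regular_vx)
print("chaotic vx factors:", chaotic_vx)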
1,640
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"). Neural Machine Translation with Attention <table class="tfo-notebook-buttons" align="left"><td> <a target="_blank" href="https Step1: Download and prepare the dataset We'll use a language dataset provided by http Step2: Limit the size of the dataset to experiment faster (optional) Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data) Step3: Create a tf.data dataset Step4: Write the encoder and decoder model Here, we'll implement an encoder-decoder model with attention which you can read about in the TensorFlow Neural Machine Translation (seq2seq) tutorial. This example uses a more recent set of APIs. This notebook implements the attention equations from the seq2seq tutorial. The following diagram shows that each input words is assigned a weight by the attention mechanism which is then used by the decoder to predict the next word in the sentence. <img src="https Step5: Define the optimizer and the loss function Step6: Training Pass the input through the encoder which return encoder output and the encoder hidden state. The encoder output, encoder hidden state and the decoder input (which is the start token) is passed to the decoder. The decoder returns the predictions and the decoder hidden state. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss. Use teacher forcing to decide the next input to the decoder. Teacher forcing is the technique where the target word is passed as the next input to the decoder. The final step is to calculate the gradients and apply it to the optimizer and backpropagate. Step7: Translate The evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output. Stop predicting when the model predicts the end token. And store the attention weights for every time step. Note
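A quick illustration, added here as a sketch, of what the sentence cleaning described in Step1 produces (preprocess_sentence is defined in the code below):
# Example of the preprocessing: lowercase, pad punctuation with spaces, add start/end tokens.
print(preprocess_sentence(u"¿Puedo tomar prestado este libro?"))
# expected to print roughly: <start> ¿ puedo tomar prestado este libro ? <end>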
Python Code: from __future__ import absolute_import, division, print_function # Import TensorFlow >= 1.9 and enable eager execution import tensorflow as tf tf.enable_eager_execution() import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split import unicodedata import re import numpy as np import os import time print(tf.__version__) Explanation: Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"). Neural Machine Translation with Attention <table class="tfo-notebook-buttons" align="left"><td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td><td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table> This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation using tf.keras and eager execution. This is an advanced example that assumes some knowledge of sequence to sequence models. After training the model in this notebook, you will be able to input a Spanish sentence, such as "¿todavia estan en casa?", and return the English translation: "are you still at home?" The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence has the model's attention while translating: <img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot"> Note: This example takes approximately 10 mintues to run on a single P100 GPU. End of explanation # Download the file path_to_zip = tf.keras.utils.get_file( 'spa-eng.zip', origin='http://download.tensorflow.org/data/spa-eng.zip', extract=True) path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt" # Converts the unicode file to ascii def unicode_to_ascii(s): return ''.join(c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn') def preprocess_sentence(w): w = unicode_to_ascii(w.lower().strip()) # creating a space between a word and the punctuation following it # eg: "he is a boy." => "he is a boy ." # Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation w = re.sub(r"([?.!,¿])", r" \1 ", w) w = re.sub(r'[" "]+', " ", w) # replacing everything with space except (a-z, A-Z, ".", "?", "!", ",") w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w) w = w.rstrip().strip() # adding a start and an end token to the sentence # so that the model know when to start and stop predicting. w = '<start> ' + w + ' <end>' return w # 1. Remove the accents # 2. Clean the sentences # 3. Return word pairs in the format: [ENGLISH, SPANISH] def create_dataset(path, num_examples): lines = open(path, encoding='UTF-8').read().strip().split('\n') word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]] return word_pairs # This class creates a word -> index mapping (e.g,. 
"dad" -> 5) and vice-versa # (e.g., 5 -> "dad") for each language, class LanguageIndex(): def __init__(self, lang): self.lang = lang self.word2idx = {} self.idx2word = {} self.vocab = set() self.create_index() def create_index(self): for phrase in self.lang: self.vocab.update(phrase.split(' ')) self.vocab = sorted(self.vocab) self.word2idx['<pad>'] = 0 for index, word in enumerate(self.vocab): self.word2idx[word] = index + 1 for word, index in self.word2idx.items(): self.idx2word[index] = word def max_length(tensor): return max(len(t) for t in tensor) def load_dataset(path, num_examples): # creating cleaned input, output pairs pairs = create_dataset(path, num_examples) # index language using the class defined above inp_lang = LanguageIndex(sp for en, sp in pairs) targ_lang = LanguageIndex(en for en, sp in pairs) # Vectorize the input and target languages # Spanish sentences input_tensor = [[inp_lang.word2idx[s] for s in sp.split(' ')] for en, sp in pairs] # English sentences target_tensor = [[targ_lang.word2idx[s] for s in en.split(' ')] for en, sp in pairs] # Calculate max_length of input and output tensor # Here, we'll set those to the longest sentence in the dataset max_length_inp, max_length_tar = max_length(input_tensor), max_length(target_tensor) # Padding the input and output tensor to the maximum length input_tensor = tf.keras.preprocessing.sequence.pad_sequences(input_tensor, maxlen=max_length_inp, padding='post') target_tensor = tf.keras.preprocessing.sequence.pad_sequences(target_tensor, maxlen=max_length_tar, padding='post') return input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_tar Explanation: Download and prepare the dataset We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format: May I borrow this book? ¿Puedo tomar prestado este libro? There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data: Add a start and end token to each sentence. Clean the sentences by removing special characters. Create a word index and reverse word index (dictionaries mapping from word → id and id → word). Pad each sentence to a maximum length. End of explanation # Try experimenting with the size of that dataset num_examples = 30000 input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_targ = load_dataset(path_to_file, num_examples) # Creating training and validation sets using an 80-20 split input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2) # Show length len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val) Explanation: Limit the size of the dataset to experiment faster (optional) Training on the complete dataset of >100,000 sentences will take a long time. 
To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data): End of explanation BUFFER_SIZE = len(input_tensor_train) BATCH_SIZE = 64 N_BATCH = BUFFER_SIZE//BATCH_SIZE embedding_dim = 256 units = 1024 vocab_inp_size = len(inp_lang.word2idx) vocab_tar_size = len(targ_lang.word2idx) dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE) dataset = dataset.apply(tf.contrib.data.batch_and_drop_remainder(BATCH_SIZE)) Explanation: Create a tf.data dataset End of explanation def gru(units): # If you have a GPU, we recommend using CuDNNGRU(provides a 3x speedup than GRU) # the code automatically does that. if tf.test.is_gpu_available(): return tf.keras.layers.CuDNNGRU(units, return_sequences=True, return_state=True, recurrent_initializer='glorot_uniform') else: return tf.keras.layers.GRU(units, return_sequences=True, return_state=True, recurrent_activation='sigmoid', recurrent_initializer='glorot_uniform') class Encoder(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz): super(Encoder, self).__init__() self.batch_sz = batch_sz self.enc_units = enc_units self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = gru(self.enc_units) def call(self, x, hidden): x = self.embedding(x) output, state = self.gru(x, initial_state = hidden) return output, state def initialize_hidden_state(self): return tf.zeros((self.batch_sz, self.enc_units)) class Decoder(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz): super(Decoder, self).__init__() self.batch_sz = batch_sz self.dec_units = dec_units self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = gru(self.dec_units) self.fc = tf.keras.layers.Dense(vocab_size) # used for attention self.W1 = tf.keras.layers.Dense(self.dec_units) self.W2 = tf.keras.layers.Dense(self.dec_units) self.V = tf.keras.layers.Dense(1) def call(self, x, hidden, enc_output): # enc_output shape == (batch_size, max_length, hidden_size) # hidden shape == (batch_size, hidden size) # hidden_with_time_axis shape == (batch_size, 1, hidden size) # we are doing this to perform addition to calculate the score hidden_with_time_axis = tf.expand_dims(hidden, 1) # score shape == (batch_size, max_length, hidden_size) score = tf.nn.tanh(self.W1(enc_output) + self.W2(hidden_with_time_axis)) # attention_weights shape == (batch_size, max_length, 1) # we get 1 at the last axis because we are applying score to self.V attention_weights = tf.nn.softmax(self.V(score), axis=1) # context_vector shape after sum == (batch_size, hidden_size) context_vector = attention_weights * enc_output context_vector = tf.reduce_sum(context_vector, axis=1) # x shape after passing through embedding == (batch_size, 1, embedding_dim) x = self.embedding(x) # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size) x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1) # passing the concatenated vector to the GRU output, state = self.gru(x) # output shape == (batch_size * max_length, hidden_size) output = tf.reshape(output, (-1, output.shape[2])) # output shape == (batch_size * max_length, vocab) x = self.fc(output) return x, state, attention_weights def initialize_hidden_state(self): return tf.zeros((self.batch_sz, self.dec_units)) encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE) decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE) 
Explanation: Write the encoder and decoder model Here, we'll implement an encoder-decoder model with attention which you can read about in the TensorFlow Neural Machine Translation (seq2seq) tutorial. This example uses a more recent set of APIs. This notebook implements the attention equations from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism which is then used by the decoder to predict the next word in the sentence. <img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism"> The input is put through an encoder model which gives us the encoder output of shape (batch_size, max_length, hidden_size) and the encoder hidden state of shape (batch_size, hidden_size). Here are the equations that are implemented: <img src="https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800"> <img src="https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800"> We're using Bahdanau attention. Let's decide on notation before writing the simplified form: FC = Fully connected (dense) layer EO = Encoder output H = hidden state X = input to the decoder And the pseudo-code: score = FC(tanh(FC(EO) + FC(H))) attention weights = softmax(score, axis = 1). Softmax by default is applied on the last axis but here we want to apply it on the 1st axis, since the shape of score is (batch_size, max_length, hidden_size). Max_length is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis. context vector = sum(attention weights * EO, axis = 1). Same reason as above for choosing axis as 1. embedding output = The input to the decoder X is passed through an embedding layer.
merged vector = concat(embedding output, context vector) This merged vector is then given to the GRU The shapes of all the vectors at each step have been specified in the comments in the code: End of explanation optimizer = tf.train.AdamOptimizer() def loss_function(real, pred): mask = 1 - np.equal(real, 0) loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask return tf.reduce_mean(loss_) Explanation: Define the optimizer and the loss function End of explanation EPOCHS = 10 for epoch in range(EPOCHS): start = time.time() hidden = encoder.initialize_hidden_state() total_loss = 0 for (batch, (inp, targ)) in enumerate(dataset): loss = 0 with tf.GradientTape() as tape: enc_output, enc_hidden = encoder(inp, hidden) dec_hidden = enc_hidden dec_input = tf.expand_dims([targ_lang.word2idx['<start>']] * BATCH_SIZE, 1) # Teacher forcing - feeding the target as the next input for t in range(1, targ.shape[1]): # passing enc_output to the decoder predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output) loss += loss_function(targ[:, t], predictions) # using teacher forcing dec_input = tf.expand_dims(targ[:, t], 1) batch_loss = (loss / int(targ.shape[1])) total_loss += batch_loss variables = encoder.variables + decoder.variables gradients = tape.gradient(loss, variables) optimizer.apply_gradients(zip(gradients, variables), tf.train.get_or_create_global_step()) if batch % 100 == 0: print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1, batch, batch_loss.numpy())) print('Epoch {} Loss {:.4f}'.format(epoch + 1, total_loss / N_BATCH)) print('Time taken for 1 epoch {} sec\n'.format(time.time() - start)) Explanation: Training Pass the input through the encoder which return encoder output and the encoder hidden state. The encoder output, encoder hidden state and the decoder input (which is the start token) is passed to the decoder. The decoder returns the predictions and the decoder hidden state. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss. Use teacher forcing to decide the next input to the decoder. Teacher forcing is the technique where the target word is passed as the next input to the decoder. The final step is to calculate the gradients and apply it to the optimizer and backpropagate. 
End of explanation def evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ): attention_plot = np.zeros((max_length_targ, max_length_inp)) sentence = preprocess_sentence(sentence) inputs = [inp_lang.word2idx[i] for i in sentence.split(' ')] inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs], maxlen=max_length_inp, padding='post') inputs = tf.convert_to_tensor(inputs) result = '' hidden = [tf.zeros((1, units))] enc_out, enc_hidden = encoder(inputs, hidden) dec_hidden = enc_hidden dec_input = tf.expand_dims([targ_lang.word2idx['<start>']], 0) for t in range(max_length_targ): predictions, dec_hidden, attention_weights = decoder(dec_input, dec_hidden, enc_out) # storing the attention weigths to plot later on attention_weights = tf.reshape(attention_weights, (-1, )) attention_plot[t] = attention_weights.numpy() predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy() result += targ_lang.idx2word[predicted_id] + ' ' if targ_lang.idx2word[predicted_id] == '<end>': return result, sentence, attention_plot # the predicted ID is fed back into the model dec_input = tf.expand_dims([predicted_id], 0) return result, sentence, attention_plot # function for plotting the attention weights def plot_attention(attention, sentence, predicted_sentence): fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(1, 1, 1) ax.matshow(attention, cmap='viridis') fontdict = {'fontsize': 14} ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90) ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict) plt.show() def translate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ): result, sentence, attention_plot = evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ) print('Input: {}'.format(sentence)) print('Predicted translation: {}'.format(result)) attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))] plot_attention(attention_plot, sentence.split(' '), result.split(' ')) translate('hace mucho frio aqui.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ) translate('esta es mi vida.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ) translate('¿todavia estan en casa?', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ) # wrong translation translate('trata de averiguarlo.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ) Explanation: Translate The evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output. Stop predicting when the model predicts the end token. And store the attention weights for every time step. Note: The encoder output is calculated only once for one input. End of explanation
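A small optional variation (not from the original notebook): the evaluate loop above samples the next token with tf.multinomial; a common alternative is greedy decoding, which simply takes the most likely token at each step. Only the line that picks predicted_id changes:
# Greedy alternative to the sampling step inside evaluate(): take the argmax of the logits.
predicted_id = int(tf.argmax(predictions[0]).numpy())
result += targ_lang.idx2word[predicted_id] + ' '
Sampling tends to give more varied translations, while greedy decoding is deterministic for a fixed model.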
1,641
Given the following text description, write Python code to implement the functionality described below step by step Description: Introductory Notebook to mpnum mpnum implements matrix product arrays (MPA), which are efficient parameterizations of certain multi-partite arrays. Special cases of the MPA structure, which are omnipresent in many-body quantum physics, are matrix product states (MPS) and matrix product operators (MPO) with one and two array indices per site, respectively. In the applied math community, matrix product states are also known as tensor trains (TT). The main class implementing an MPA with arbitrary number of array indices (or "physical legs") is mpnum.MPArray. Step1: MPA and MPS basics A convenient example to deal with is a random MPA. First, we create a fixed seed, then a random MPA Step2: The MPA is an instance of the MPArray class Step3: Number of sites Step4: Number of physical legs at each site (=number of array indices at each site) Step5: Because the MPA has one physical leg per site, we have created a matrix product state (i.e. a tensor train). In the graphical notation, this MPS looks like this Note that mpnum internally stores the local tensors of the matrix product representation on the right hand side. We see below how to obtain the "dense" tensor from an MPArray. Dimension of each physical leg Step6: Note that the number and dimension of the physical legs at each site can differ (altough this is rarely used in practice). Representation ranks (aka compression ranks) between each pair of sites Step7: In physics, the representation ranks are usually called the bond dimensions of the representation. Dummy bonds before and after the chain are omitted in mpa.ranks. (Currently, mpnum only implements open boundary conditions.) Above, we have specified normalized=True. Therefore, we have created an MPA with $\ell_2$-norm 1. In case the MPA does not represent a vector but has more physical legs, it is nonetheless treated as a vector. Hence, for operators <code>mp.norm</code> implements the Frobenius norm. Step8: Convert to a dense array, which should be used with care due because the memory used increases exponentially with the number of sites Step9: The resulting full array has one index for each physical leg. Now convert the full array back to an MPA Step10: We have obtained an MPA with length 1. This is not what we expected. The reason is that by default, all legs are placed on a single site (also notice the difference between mpa2.shape here and mpa.shape from above) Step11: We obtain the desired result by specifying the number of legs per site we want Step12: Finally, we can compute the norm distance between the two MPAs. (Again, the Frobenius norm is used.) Step13: Since this is an often used operation and allows for additional optimization (not implemented currently), it is advisable to use the specific <code>mp.normdist</code> for this Step14: Sums, differences and scalar multiplication of MPAs is done with the normal operators Step15: Multiplication with a scalar leaves the bond dimension unchanged Step16: The bond dimensions of a sum (or difference) are given by the sums of the bond dimensions Step17: MPO basics First, we create a random MPA with two physical legs per site Step18: In graphical notation, mpo looks like this It's basic properties are Step19: Each site has two physical legs, one with dimension 3 and one with dimension 2. This corresponds to a non-square full array. 
Step20: Now convert the mpo to a full array Step21: We refer to this arangement of axes as local form, since indices which correspond to the same site are neighboring. This is a natural form for the MPO representation. However, for some operations it is necessary to have row and column indices grouped together -- we refer to this as global form Step22: This gives the expected result. Note that it is crucial to specify the correct number of sites, otherwise we do not get what we want Step23: As an alternative, there is the following shorthand Step24: An array in global form can be converted into matrix-product form with the following API Step25: MPO-MPS product and arbitrary MPA-MPA products We can now compute the matrix-vector product of mpa from above (which is an MPS) and mpo. Step26: The result is a new MPS, with local dimension changed by mpo and looks like this Step27: Note that in any case, the ranks of the output of mp.dot are the products of the original ranks Step28: Now we compute the same product using the full arrays arr and mpo_arr Step29: As you can see, we need to reshape the result prod3_vec before we can convert it back to an MPA Step30: Now we can compare the two results Step31: We can also compare by converting prod to a full array Step32: Converting full operators to MPOs While MPO algorithms avoid using full operators in general, we will need to convert a term acting on only two sites to an MPO in order to continue with MPO operations; i.e. we will need to convert a full array to an MPO. First, we define a full operator Step33: This operator is the so-called controlled Z gate Step34: Now we can create an MPO, being careful to specify the correct number of legs per site Step35: To test it, we apply the operator to the state which has both qubits in state e2 Step36: Reshape and convert to an MPS Step37: Now we can compute the matrix-vector product Step38: The output is as expected Step39: However, the result is not what we want Step40: The reason is easy to see Step41: Keep in mind that we have to use to_array_global before the reshape. Using to_array would not provide us the matrix which we have applied to the state with mp.dot. Instead, it will exactly return the input Step42: Again, from_array_global is just the shorthand for the following Step43: As you can see, in the explicit version you must submit both the correct number of sites and the correct number of physical legs per site. Therefore, the function MPArray.from_array_global simplifies the conversion. Creating MPAs from Kronecker products It is a frequent task to create an MPS which represents the product state of $\vert 0 \rangle$ on each qubit. If the chain is very long, we cannot create the full array with np.kron and use MPArray.from_array afterwards because the array would be too large. In the following, we describe how to efficiently construct an MPA representation of a Kronecker product of vectors. The same methods can be used to efficiently construct MPA representations of Kronecker products of operators or tensors with three or more indices. First, we need the state on a single site Step44: Then we can use from_kron to directly create an MPS representation of the Kronecker product Step45: This works well for large numbers of sites because the needed memory scales linearly with the number of sites Step46: An even more pythonic solution is the use of iterators in this example Step47: Do not call .to_array() on this state! 
The bond dimension of the state is 1, because it is a product state Step48: We can also create a single-site MPS Step49: After that, we can use mp.chain to create Kronecker products of the MPS directly Step50: It returns the same result as before Step51: We can also use mp.chain on the three-site MPS Step52: Note that mp.chain interprets the factors in the tensor product as distinct sites. Hence, the factors do not need to be of the same length or even have the same number of indices. In contrast, there is also mp.localouter, which computes the tensor product of MPArrays with the same number of sites Step53: Compression A typical matrix product based numerical algorithm performs many additions or multiplications of MPAs. As mentioned above, both operations increase the rank. If we let the bond dimension grow, the amount of memory we need grows with the number of operations we perform. To avoid this problem, we have to find an MPA with a smaller rank which is a good approximation to the original MPA. We start by creating an MPO representation of the identity matrix on 6 sites with local dimension 3 Step54: As it is a tensor product operator, it has rank 1 Step55: However, addition increases the rank Step56: Matrix multiplication multiplies the individual ranks Step57: (NB Step58: Calling compress on an MPA replaces the MPA in place with a version with smaller bond dimension. Overlap gives the absolute value of the (Hilbert-Schmidt) inner product between the original state and the output Step59: Instead of in-place compression, we can also obtain a compressed copy Step60: SVD compression can also be told to meet a certain truncation error (see the documentation of mp.MPArray.compress for details). Step61: We can also use variational compression instead of SVD compression Step62: As a reminder, it is always advisable to check whether the overlap between the input state and the compression is large enough. In an involved algorithm, it can be useful to store the compression error at each invocation of compression. MPO sum of local terms A frequent task is to compute the MPO representation of a local Hamiltonian, i.e. of an operator of the form $$ H = \sum_{i=1}^{n-1} h_{i, i+1} $$ where $h_{i, i+1}$ acts only on sites $i$ and $i + 1$. This means that $h_{i, i+1} = \mathbb 1_{i - 1} \otimes h'{i, i+1} \otimes \mathbb 1{n - w + 1}$ where $\mathbb 1_k$ is the identity matrix on $k$ sites and $w = 2$ is the width of $h'_{i, i+1}$. We show how to obtain an MPO representation of such a Hamiltonian. First of all, we need to define the local terms. For simplicity, we choose $h'_{i, i+1} = \sigma_Z \otimes \sigma_Z$ independently of $i$. Step63: First, we have to convert the local term h to an MPO Step64: h_mpo has rank 4 even though h is a tensor product. This is far from optimal. We improve things as follows Step65: The most simple way is to implement the formula from above with MPOs Step66: Next, we compute the sum of all the local terms and check the bond dimension of the result Step67: The ranks are explained by the ranks of the local terms Step68: We just have to add the ranks at each position. mpnum provides a function which constructs H from h_mpo, with an output MPO with smaller rank by taking into account the trivial action on some sites Step69: Without additional arguments, mp.local_sum() just adds the local terms with the first term starting on site 0, the second on site 1 and so on. 
In addition, the length of the chain is chosen such that the last site of the chain coincides with the last site of the last local term. Other constructions can be obtained by prodividing additional arguments. We can check that the two Hamiltonians are equal Step70: Of course, this means that we could just compress H Step71: We can also check the minimal bond dimension which can be achieved with SVD compression with small error Step72: MPS, MPOs and PMPS We can represent vectors (e.g. pure quantum states) as MPS, we can represent arbitrary matrices as MPO and we can represent positive semidefinite matrices as purifying matrix product states (PMPS). For mixed quantum states, we can thus choose between the MPO and PMPS representations. As mentioned in the introduction, MPS and MPOs are handled as MPAs with one and two physical legs per site. In addition, PMPS are handled as MPAs with two physical legs per site, where the first leg is the "system" site and the second leg is the corresponding "ancilla" site. From MPS and PMPS representations, we can easily obtain MPO representations. mpnum provides routines for this Step73: As expected, the rank of mps_mpo is the square of the rank of mps. Now we create a PMPS with system site dimension 2 and ancilla site dimension 3 Step74: Again, the rank is squared, as expected. We can verify that the first physical leg of each site of pmps is indeed the system site by checking the shape of pmps_mpo Step75: Local reduced states For state tomography applications, we frequently need the local reduced states of an MPS, MPO or PMPS. We provide the following functions for this task
Python Code: import numpy as np import numpy.linalg as la import mpnum as mp Explanation: Introductory Notebook to mpnum mpnum implements matrix product arrays (MPA), which are efficient parameterizations of certain multi-partite arrays. Special cases of the MPA structure, which are omnipresent in many-body quantum physics, are matrix product states (MPS) and matrix product operators (MPO) with one and two array indices per site, respectively. In the applied math community, matrix product states are also known as tensor trains (TT). The main class implementing an MPA with arbitrary number of array indices (or "physical legs") is mpnum.MPArray. End of explanation rng = np.random.RandomState(seed=42) mpa = mp.random_mpa(sites=4, ldim=2, rank=3, randstate=rng, normalized=True) Explanation: MPA and MPS basics A convenient example to deal with is a random MPA. First, we create a fixed seed, then a random MPA: End of explanation mpa Explanation: The MPA is an instance of the MPArray class: End of explanation len(mpa) Explanation: Number of sites: End of explanation mpa.ndims Explanation: Number of physical legs at each site (=number of array indices at each site): End of explanation mpa.shape Explanation: Because the MPA has one physical leg per site, we have created a matrix product state (i.e. a tensor train). In the graphical notation, this MPS looks like this Note that mpnum internally stores the local tensors of the matrix product representation on the right hand side. We see below how to obtain the "dense" tensor from an MPArray. Dimension of each physical leg: End of explanation mpa.ranks Explanation: Note that the number and dimension of the physical legs at each site can differ (altough this is rarely used in practice). Representation ranks (aka compression ranks) between each pair of sites: End of explanation mp.norm(mpa) Explanation: In physics, the representation ranks are usually called the bond dimensions of the representation. Dummy bonds before and after the chain are omitted in mpa.ranks. (Currently, mpnum only implements open boundary conditions.) Above, we have specified normalized=True. Therefore, we have created an MPA with $\ell_2$-norm 1. In case the MPA does not represent a vector but has more physical legs, it is nonetheless treated as a vector. Hence, for operators <code>mp.norm</code> implements the Frobenius norm. End of explanation arr = mpa.to_array() arr.shape Explanation: Convert to a dense array, which should be used with care due because the memory used increases exponentially with the number of sites: End of explanation mpa2 = mp.MPArray.from_array(arr) len(mpa2) Explanation: The resulting full array has one index for each physical leg. Now convert the full array back to an MPA: End of explanation mpa2.shape mpa.shape Explanation: We have obtained an MPA with length 1. This is not what we expected. The reason is that by default, all legs are placed on a single site (also notice the difference between mpa2.shape here and mpa.shape from above): End of explanation mpa2 = mp.MPArray.from_array(arr, ndims=1) len(mpa2) Explanation: We obtain the desired result by specifying the number of legs per site we want: End of explanation mp.norm(mpa - mpa2) Explanation: Finally, we can compute the norm distance between the two MPAs. (Again, the Frobenius norm is used.) 
End of explanation mp.normdist(mpa, mpa2) Explanation: Since this is an often used operation and allows for additional optimization (not implemented currently), it is advisable to use the specific <code>mp.normdist</code> for this: End of explanation mp.norm(3 * mpa) mp.norm(mpa + 0.5 * mpa) mp.norm(mpa - 1.5 * mpa) Explanation: Sums, differences and scalar multiplication of MPAs is done with the normal operators: End of explanation mpa.ranks (3 * mpa).ranks Explanation: Multiplication with a scalar leaves the bond dimension unchanged: End of explanation mpa2 = mp.random_mpa(sites=4, ldim=2, rank=2, randstate=rng) mpa2.ranks (mpa + mpa2).ranks Explanation: The bond dimensions of a sum (or difference) are given by the sums of the bond dimensions: End of explanation mpo = mp.random_mpa(sites=4, ldim=(3, 2), rank=3, randstate=rng, normalized=True) Explanation: MPO basics First, we create a random MPA with two physical legs per site: End of explanation [len(mpo), mpo.ndims, mpo.ranks] Explanation: In graphical notation, mpo looks like this It's basic properties are: End of explanation mpo.shape Explanation: Each site has two physical legs, one with dimension 3 and one with dimension 2. This corresponds to a non-square full array. End of explanation mpo_arr = mpo.to_array() mpo_arr.shape Explanation: Now convert the mpo to a full array: End of explanation from mpnum.utils.array_transforms import local_to_global mpo_arr = mpo.to_array() mpo_arr = local_to_global(mpo_arr, sites=len(mpo)) mpo_arr.shape Explanation: We refer to this arangement of axes as local form, since indices which correspond to the same site are neighboring. This is a natural form for the MPO representation. However, for some operations it is necessary to have row and column indices grouped together -- we refer to this as global form: End of explanation mpo_arr = mpo.to_array() mpo_arr = local_to_global(mpo_arr, sites=2) mpo_arr.shape Explanation: This gives the expected result. Note that it is crucial to specify the correct number of sites, otherwise we do not get what we want: End of explanation mpo_arr = mpo.to_array_global() mpo_arr.shape Explanation: As an alternative, there is the following shorthand: End of explanation mpo2 = mp.MPArray.from_array_global(mpo_arr, ndims=2) mp.normdist(mpo, mpo2) Explanation: An array in global form can be converted into matrix-product form with the following API: End of explanation mpa.shape mpo.shape prod = mp.dot(mpo, mpa, axes=(-1, 0)) prod.shape Explanation: MPO-MPS product and arbitrary MPA-MPA products We can now compute the matrix-vector product of mpa from above (which is an MPS) and mpo. End of explanation prod2 = mp.dot(mpa, mpo, axes=(0, 1)) mp.normdist(prod, prod2) Explanation: The result is a new MPS, with local dimension changed by mpo and looks like this: The axes argument is optional and defaults to axes=(-1, 0) -- i.e. contracting, at each site, the last pyhsical index of the first factor with the first physical index of the second factor. More specifically, the axes argument specifies which physical legs should be contracted: axes[0] specifies the physical in the first argument, and axes[1] specifies the physical leg in the second argument. 
This means that the same product can be achieved with End of explanation mpo.ranks, mpa.ranks, prod.ranks Explanation: Note that in any case, the ranks of the output of mp.dot are the products of the original ranks: End of explanation arr_vec = arr.ravel() mpo_arr = mpo.to_array_global() mpo_arr_matrix = mpo_arr.reshape((81, 16)) prod3_vec = np.dot(mpo_arr_matrix, arr_vec) prod3_vec.shape Explanation: Now we compute the same product using the full arrays arr and mpo_arr: End of explanation prod3_arr = prod3_vec.reshape((3, 3, 3, 3)) prod3 = mp.MPArray.from_array(prod3_arr, ndims=1) prod3.shape Explanation: As you can see, we need to reshape the result prod3_vec before we can convert it back to an MPA: End of explanation mp.normdist(prod, prod3) Explanation: Now we can compare the two results: End of explanation prod_arr = prod.to_array() la.norm((prod3_arr - prod_arr).reshape(81)) Explanation: We can also compare by converting prod to a full array: End of explanation CZ = np.array([[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 0., 0., 0., -1.]]) Explanation: Converting full operators to MPOs While MPO algorithms avoid using full operators in general, we will need to convert a term acting on only two sites to an MPO in order to continue with MPO operations; i.e. we will need to convert a full array to an MPO. First, we define a full operator: End of explanation CZ_arr = CZ.reshape((2, 2, 2, 2)) Explanation: This operator is the so-called controlled Z gate: Apply Z on the second qubit if the first qubit is in state e2. To convert it to an MPO, we have to reshape: End of explanation CZ_mpo = mp.MPArray.from_array_global(CZ_arr, ndims=2) Explanation: Now we can create an MPO, being careful to specify the correct number of legs per site: End of explanation vec = np.kron([0, 1], [0, 1]) vec Explanation: To test it, we apply the operator to the state which has both qubits in state e2: End of explanation vec_arr = vec.reshape([2, 2]) mps = mp.MPArray.from_array(vec_arr, ndims=1) Explanation: Reshape and convert to an MPS: End of explanation out = mp.dot(CZ_mpo, mps) out.to_array().ravel() Explanation: Now we can compute the matrix-vector product: End of explanation CZ_mpo2 = mp.MPArray.from_array(CZ_arr, ndims=2) Explanation: The output is as expected: We have acquired a minus sign. We have to be careful to use from_array_global and not from_array for CZ_mpo, because the CZ_arr is in global form. Here, all physical legs have the same dimension, so we can use from_array without error: End of explanation out2 = mp.dot(CZ_mpo2, mps) out2.to_array().ravel() Explanation: However, the result is not what we want: End of explanation CZ_mpo2.to_array_global().reshape(4, 4) Explanation: The reason is easy to see: We have applied the following matrix to our state: End of explanation CZ_mpo2.to_array().reshape(4, 4) Explanation: Keep in mind that we have to use to_array_global before the reshape. Using to_array would not provide us the matrix which we have applied to the state with mp.dot. Instead, it will exactly return the input: End of explanation from mpnum.utils.array_transforms import global_to_local CZ_mpo3 = mp.MPArray.from_array(global_to_local(CZ_arr, sites=2), ndims=2) mp.normdist(CZ_mpo, CZ_mpo3) Explanation: Again, from_array_global is just the shorthand for the following: End of explanation e1 = np.array([1, 0]) e1 Explanation: As you can see, in the explicit version you must submit both the correct number of sites and the correct number of physical legs per site. 
Therefore, the function MPArray.from_array_global simplifies the conversion. Creating MPAs from Kronecker products It is a frequent task to create an MPS which represents the product state of $\vert 0 \rangle$ on each qubit. If the chain is very long, we cannot create the full array with np.kron and use MPArray.from_array afterwards because the array would be too large. In the following, we describe how to efficiently construct an MPA representation of a Kronecker product of vectors. The same methods can be used to efficiently construct MPA representations of Kronecker products of operators or tensors with three or more indices. First, we need the state on a single site: End of explanation mps = mp.MPArray.from_kron([e1, e1, e1]) mps.to_array().ravel() Explanation: Then we can use from_kron to directly create an MPS representation of the Kronecker product: End of explanation mps = mp.MPArray.from_kron([e1] * 2000) len(mps) Explanation: This works well for large numbers of sites because the needed memory scales linearly with the number of sites: End of explanation from itertools import repeat mps = mp.MPArray.from_kron(repeat(e1, 2000)) len(mps) Explanation: An even more pythonic solution is the use of iterators in this example: End of explanation np.array(mps.ranks) # Convert to an array for nicer display Explanation: Do not call .to_array() on this state! The bond dimension of the state is 1, because it is a product state: End of explanation mps1 = mp.MPArray.from_array(e1, ndims=1) len(mps1) Explanation: We can also create a single-site MPS: End of explanation mps = mp.chain([mps1, mps1, mps1]) len(mps) Explanation: After that, we can use mp.chain to create Kronecker products of the MPS directly: End of explanation mps.to_array().ravel() Explanation: It returns the same result as before: End of explanation mps = mp.chain([mps] * 100) len(mps) Explanation: We can also use mp.chain on the three-site MPS: End of explanation mps = mp.chain([mps1] * 4) len(mps), mps.shape, rho = mp.localouter(mps.conj(), mps) len(rho), rho.shape Explanation: Note that mp.chain interprets the factors in the tensor product as distinct sites. Hence, the factors do not need to be of the same length or even have the same number of indices. In contrast, there is also mp.localouter, which computes the tensor product of MPArrays with the same number of sites: End of explanation op = mp.eye(sites=6, ldim=3) op.shape Explanation: Compression A typical matrix product based numerical algorithm performs many additions or multiplications of MPAs. As mentioned above, both operations increase the rank. If we let the bond dimension grow, the amount of memory we need grows with the number of operations we perform. To avoid this problem, we have to find an MPA with a smaller rank which is a good approximation to the original MPA. 
We start by creating an MPO representation of the identity matrix on 6 sites with local dimension 3: End of explanation op.ranks Explanation: As it is a tensor product operator, it has rank 1: End of explanation op2 = op + op + op op2.ranks Explanation: However, addition increases the rank: End of explanation op3 = mp.dot(op2, op2) op3.ranks Explanation: Matrix multiplication multiplies the individual ranks: End of explanation op3 /= mp.norm(op3.copy()) # normalize to make overlap meaningful copy = op3.copy() overlap = copy.compress(method='svd', rank=1) copy.ranks Explanation: (NB: compress or compression below can call canonicalize on the MPA, which in turn could already reduce the rank to 1 in case the rank can be compressed without error. Keep that in mind.) Keep in mind that the operator represented by op3 is still the identity operator, i.e. a tensor product operator. This means that we expect to find a good approximation with low rank easily. Finding such an approximation is called compression and is achieved as follows: End of explanation overlap Explanation: Calling compress on an MPA replaces the MPA in place with a version with smaller bond dimension. Overlap gives the absolute value of the (Hilbert-Schmidt) inner product between the original state and the output: End of explanation compr, overlap = op3.compression(method='svd', rank=2) overlap, compr.ranks, op3.ranks Explanation: Instead of in-place compression, we can also obtain a compressed copy: End of explanation compr, overlap = op3.compression(method='svd', relerr=1e-6) overlap, compr.ranks, op3.ranks Explanation: SVD compression can also be told to meet a certain truncation error (see the documentation of mp.MPArray.compress for details). End of explanation compr, overlap = op3.compression(method='var', rank=2, num_sweeps=10, var_sites=2) # Convert overlap from numpy array with shape () to float for nicer display: overlap = overlap.flat[0] complex(overlap), compr.ranks, op3.ranks Explanation: We can also use variational compression instead of SVD compression: End of explanation zeros = np.zeros((2, 2)) zeros idm = np.eye(2) idm # Create a float array instead of an int array to avoid problems later Z = np.diag([1., -1]) Z h = np.kron(Z, Z) h Explanation: As a reminder, it is always advisable to check whether the overlap between the input state and the compression is large enough. In an involved algorithm, it can be useful to store the compression error at each invocation of compression. MPO sum of local terms A frequent task is to compute the MPO representation of a local Hamiltonian, i.e. of an operator of the form $$ H = \sum_{i=1}^{n-1} h_{i, i+1} $$ where $h_{i, i+1}$ acts only on sites $i$ and $i + 1$. This means that $h_{i, i+1} = \mathbb 1_{i - 1} \otimes h'{i, i+1} \otimes \mathbb 1{n - w + 1}$ where $\mathbb 1_k$ is the identity matrix on $k$ sites and $w = 2$ is the width of $h'_{i, i+1}$. We show how to obtain an MPO representation of such a Hamiltonian. First of all, we need to define the local terms. For simplicity, we choose $h'_{i, i+1} = \sigma_Z \otimes \sigma_Z$ independently of $i$. End of explanation h_arr = h.reshape((2, 2, 2, 2)) h_mpo = mp.MPArray.from_array_global(h_arr, ndims=2) h_mpo.ranks Explanation: First, we have to convert the local term h to an MPO: End of explanation h_mpo = mp.MPArray.from_kron([Z, Z]) h_mpo.ranks Explanation: h_mpo has rank 4 even though h is a tensor product. This is far from optimal. We improve things as follows: (We could also compress h_mpo.) 
End of explanation width = 2 sites = 6 local_terms = [] for startpos in range(sites - width + 1): left = [mp.MPArray.from_kron([idm] * startpos)] if startpos > 0 else [] right = [mp.MPArray.from_kron([idm] * (sites - width - startpos))] \ if sites - width - startpos > 0 else [] h_at_startpos = mp.chain(left + [h_mpo] + right) local_terms.append(h_at_startpos) local_terms Explanation: The most simple way is to implement the formula from above with MPOs: First we compute the $h_{i, i+1}$ from the $h'_{i, i+1}$: End of explanation H = local_terms[0] for local_term in local_terms[1:]: H += local_term H.ranks Explanation: Next, we compute the sum of all the local terms and check the bond dimension of the result: End of explanation [local_term.ranks for local_term in local_terms] Explanation: The ranks are explained by the ranks of the local terms: End of explanation H2 = mp.local_sum([h_mpo] * (sites - width + 1)) H2.ranks Explanation: We just have to add the ranks at each position. mpnum provides a function which constructs H from h_mpo, with an output MPO with smaller rank by taking into account the trivial action on some sites: End of explanation mp.normdist(H, H2) Explanation: Without additional arguments, mp.local_sum() just adds the local terms with the first term starting on site 0, the second on site 1 and so on. In addition, the length of the chain is chosen such that the last site of the chain coincides with the last site of the last local term. Other constructions can be obtained by prodividing additional arguments. We can check that the two Hamiltonians are equal: End of explanation H_comp, overlap = H.compression(method='svd', rank=3) overlap / mp.norm(H)**2 H_comp.ranks Explanation: Of course, this means that we could just compress H: End of explanation H_comp, overlap = H.compression(method='svd', relerr=1e-6) overlap / mp.norm(H)**2 H_comp.ranks Explanation: We can also check the minimal bond dimension which can be achieved with SVD compression with small error: End of explanation mps = mp.random_mpa(sites=5, ldim=2, rank=3, normalized=True) mps_mpo = mp.mps_to_mpo(mps) mps_mpo.ranks Explanation: MPS, MPOs and PMPS We can represent vectors (e.g. pure quantum states) as MPS, we can represent arbitrary matrices as MPO and we can represent positive semidefinite matrices as purifying matrix product states (PMPS). For mixed quantum states, we can thus choose between the MPO and PMPS representations. As mentioned in the introduction, MPS and MPOs are handled as MPAs with one and two physical legs per site. In addition, PMPS are handled as MPAs with two physical legs per site, where the first leg is the "system" site and the second leg is the corresponding "ancilla" site. From MPS and PMPS representations, we can easily obtain MPO representations. mpnum provides routines for this: End of explanation pmps = mp.random_mpa(sites=5, ldim=(2, 3), rank=3, normalized=True) pmps.shape pmps_mpo = mp.pmps_to_mpo(pmps) pmps_mpo.ranks Explanation: As expected, the rank of mps_mpo is the square of the rank of mps. Now we create a PMPS with system site dimension 2 and ancilla site dimension 3: End of explanation pmps_mpo.shape Explanation: Again, the rank is squared, as expected. 
We can verify that the first physical leg of each site of pmps is indeed the system site by checking the shape of pmps_mpo: End of explanation width = 3 startsites = range(len(pmps) - width + 1) for startsite, red in zip(startsites, mp.reductions_pmps(pmps, width, startsites)): print('Reduction starting on site', startsite) print('bdims:', red.ranks) red_mpo = mp.pmps_to_mpo(red) print('trace:', mp.trace(red_mpo)) print() Explanation: Local reduced states For state tomography applications, we frequently need the local reduced states of an MPS, MPO or PMPS. We provide the following functions for this task: mp.reductions_mps_as_pmps(): Input: MPS, output: local reductions as PMPS mp.reductions_mps_as_mpo(): Input: MPS, output: local reductions as MPO mp.reductions_pmps(): Input: PMPS, output: Local reductions as PMPS mp.reductions_mpo(): Input: MPO, output: Local reductions as MPO The arguments of all functions are similar, e.g.: End of explanation
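A short sketch of the analogous call for an MPO may be useful here. It is an illustration under stated assumptions rather than part of the tutorial: it presumes that mp.reductions_mpo accepts the same (mpa, width, startsites) arguments as mp.reductions_pmps shown above (the text says the arguments of all four functions are similar), and it reuses the pmps_mpo operator constructed earlier.
# Hedged sketch: local reductions of an MPO. Assumes mp.reductions_mpo takes
# the same (mpa, width, startsites) arguments as mp.reductions_pmps above.
width = 3
startsites = range(len(pmps_mpo) - width + 1)
for startsite, red in zip(startsites, mp.reductions_mpo(pmps_mpo, width, startsites)):
    # Each reduction is itself an MPO on `width` sites.
    print('Reduction starting on site', startsite)
    print('ranks:', red.ranks)
    # For a normalized state, every reduced density operator has trace ~1.
    print('trace:', mp.trace(red))
    print()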
1,642
Given the following text description, write Python code to implement the functionality described below step by step Description: ipyrad-analysis toolkit: sratools Step1: Fetch info for a published data set by its accession ID You can find the study ID or individual sample IDs from published papers or by searching the NCBI or related databases. ipyrad can take as input one or more accession IDs for individual Runs or Studies (SRR or SRP, and similarly ERR or ERP, etc.). Step2: File names You can select columns by their index number to use for file names. See below. Step3: Download the data From an sratools object you can fetch just the info, or you can download the files as well. Here we call .run() to download the data into a designated workdir. There are arguments for how to name the files according to name fields in the fetch_runinfo table. The accessions argument here is a list of the first five SRR sample IDs in the table above. Step4: Check the data files You can see that the files were named according to the SRR and species name in the table. The intermediate .sra files were removed and only the fastq files were saved.
Python Code: # conda install ipyrad -c bioconda # conda install sratools -c bioconda import ipyrad.analysis as ipa Explanation: <span style="color:gray">ipyrad-analysis toolkit:</span> sratools For reproducibility purposes, it is nice to be able to download the raw data for your analysis from an online repository like NCBI with a simple script at the top of your notebook. We've written a simple wrapper for the sratools command line program (which is notoriously difficult to use and poorly documented) to try to make this easier to do. Required software End of explanation # init sratools object with an accessions argument sra = ipa.sratools(accessions="SRP065788") # fetch info for all samples from this study, save as a dataframe stable = sra.fetch_runinfo() # the dataframe has all information about this study stable.head() Explanation: Fetch info for a published data set by its accession ID You can find the study ID or individual sample IDs from published papers or by searching the NCBI or related databases. ipyrad can take as input one or more accessions IDs for individual Runs or Studies (SRR or SRP, and similarly ERR or ERP, etc.). End of explanation stable.iloc[:5, [0, 28, 29]] Explanation: File names You can select columns by their index number to use for file names. See below. End of explanation # select first 5 samples list_of_srrs = stable.Run[:5] list_of_srrs # new sra object sra2 = ipa.sratools(accessions=list_of_srrs, workdir="downloaded") # call download (run) function sra2.run(auto=True, name_fields=(1,30)) Explanation: Download the data From an sratools object you can fetch just the info, or you can download the files as well. Here we call .run() to download the data into a designated workdir. There are arguments for how to name the files according to name fields in the fetch_runinfo table. The accessions argument here is a list of the first five SRR sample IDs in the table above. End of explanation ! ls -l downloaded Explanation: Check the data files You can see that the files were named according to the SRR and species name in the table. The intermediate .sra files were removed and only the fastq files were saved. End of explanation
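A quick sanity check of the download can be added at this point. This is only a sketch (not part of the original toolkit docs); it assumes the "downloaded" workdir and the list_of_srrs variable used above, and otherwise relies only on the standard library.
import glob
import os

# Hedged sketch: confirm that every requested SRR accession produced at least
# one fastq file in the "downloaded" workdir used above.
fastqs = glob.glob(os.path.join("downloaded", "*.fastq*"))
print("found {} fastq files".format(len(fastqs)))

for srr in list_of_srrs:
    matches = [os.path.basename(f) for f in fastqs if srr in f]
    print(srr, "OK" if matches else "MISSING", matches)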
1,643
Given the following text description, write Python code to implement the functionality described below step by step Description: Integer representations Integers are typically represented in memory as a base-2 bit pattern, and in python the built-in function bin can be used to inspect that Step1: If the number of bits used is fixed, the range of integers that can be represented would be fixed and can potentially overflow. That is the case for many languages such as C/C++. In python, integers have arbitrary precision and therefore we can represent an arbitrarily large range of integers (only limited by memory available). Step2: However as I'll explain in this post, one still needs to be careful with precision issues especially when using the pydata stack (numpy/pandas). Can integers overflow in python? Short answers Step3: Here we consider a list of integers going from $2^{0}$ to $2^{159}$, and we use sys.getsizeof to inspect how many bytes are actually used to store the integer Step4: Plotting the results Step5: We can see that it takes 28 bytes before we get to $2^{30}$ where python allocates 4 more bytes to store larger integers. Certainly not the most compact representation, as a raw 64-bit array (i.e. 8 bytes) could do the job with fixed-precision. However we get the benefits of arbitrary precision and many others in python. Also as we can expect, the storage increases roughly logarithmically as the integer gets larger. And interestly, it looks like python bumps the storage size at $2^{30}$, $2^{60}$, $2^{90}$, and so on. Be Careful with Overflows in numpy In a lot of situations we would prefer to use the pydata stack (numpy/scipy/pandas) for computation over pure python. It is important to note that overflows can occur, because the data structures under the hood are fixed-precision. Here we have a numpy array of integers Step6: This is a 64-bit integer and therefore $2^{63}-1$ is actually the largest integer it can hold. Adding 1 to the array will silently cause an overflow Step7: similary, we'd need to be careful with np.sum Step8: On the other hand, np.mean actually computes by first converting all inputs into float, so the overflow won't happen at this value yet
Python Code: bin(19) Explanation: Integer representations Integers are typically represented in memory as a base-2 bit pattern, and in python the built-in function bin can be used to inspect that: End of explanation 2 ** 200 Explanation: If the number of bits used is fixed, the range of integers that can be represented would be fixed and can potentially overflow. That is the case for many languages such as C/C++. In python, integers have arbitrary precision and therefore we can represent an arbitrarily large range of integers (only limited by memory available). End of explanation %matplotlib inline import matplotlib.pyplot as plt from IPython.core.pylabtools import figsize figsize(15, 5) import numpy as np import pandas as pd Explanation: However as I'll explain in this post, one still needs to be careful with precision issues especially when using the pydata stack (numpy/pandas). Can integers overflow in python? Short answers: No if the operations are done in pure python, because python integers have arbitrary precision Yes if the operations are done in the pydata stack (numpy/pandas), because they use C-style fixed-precision integers Arbitrary precision So how do python integers achieve arbitrary precision? In python 2, there are actually two integers types: int and long, where int is the C-style fixed-precision integer and long is the arbitrary-precision integer. Operations are automatically promoted to long if int is not sufficient, so there's no risk of overflowing. In python 3, int is the only integer type and it is arbitrary-precision. To see a bit under the hood, let's examine how much the storage size changes for different integers in python. End of explanation import sys int_sizes = {} for i in range(160): int_sizes[i] = sys.getsizeof(2 ** i) int_sizes = pd.Series(int_sizes) Explanation: Here we consider a list of integers going from $2^{0}$ to $2^{159}$, and we use sys.getsizeof to inspect how many bytes are actually used to store the integer: End of explanation ax = int_sizes.plot(ylim=[0, 60]) ax.set_ylabel('number of bytes') ax.set_xlabel('integer (log scale)') ax.set_title('number of bytes used to store an integer (python 3.4)') ax.set_xticks(range(0, 160, 10)) labels = ['$2^{%d}$' % x for x in range(0, 160, 10)] ax.set_xticklabels(labels) ax.tick_params(axis='x', labelsize=18) ax.tick_params(axis='y', labelsize=12) int_sizes[29:31].head() Explanation: Plotting the results: End of explanation a = np.array([2**63 - 1, 2**63 - 1], dtype=int) a a.dtype Explanation: We can see that it takes 28 bytes before we get to $2^{30}$ where python allocates 4 more bytes to store larger integers. Certainly not the most compact representation, as a raw 64-bit array (i.e. 8 bytes) could do the job with fixed-precision. However we get the benefits of arbitrary precision and many others in python. Also as we can expect, the storage increases roughly logarithmically as the integer gets larger. And interestly, it looks like python bumps the storage size at $2^{30}$, $2^{60}$, $2^{90}$, and so on. Be Careful with Overflows in numpy In a lot of situations we would prefer to use the pydata stack (numpy/scipy/pandas) for computation over pure python. It is important to note that overflows can occur, because the data structures under the hood are fixed-precision. Here we have a numpy array of integers End of explanation a + 1 Explanation: This is a 64-bit integer and therefore $2^{63}-1$ is actually the largest integer it can hold. 
Adding 1 to the array will silently cause an overflow End of explanation a.sum() Explanation: Similarly, we'd need to be careful with np.sum: End of explanation a.mean() Explanation: On the other hand, np.mean first converts all inputs to float, so the overflow does not occur at this value (the result is a 64-bit float, so exactness is traded for range) End of explanation
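If exact integer arithmetic is needed despite working with numpy arrays, one workaround (a sketch, not from the original post) is to fall back on Python's arbitrary-precision integers, either by converting the elements or by asking numpy to accumulate with an object dtype.
# Hedged sketch: exact alternatives that avoid the silent int64 overflow above.
exact_sum = sum(int(v) for v in a)   # Python ints have arbitrary precision
print(exact_sum)                     # 2**64 - 2, computed exactly

# Accumulating with dtype=object makes numpy add Python ints instead of int64.
print(a.sum(dtype=object))

# Or check the representable range up front before doing the arithmetic.
print(np.iinfo(np.int64).max)        # 9223372036854775807 == 2**63 - 1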
1,644
Given the following text description, write Python code to implement the functionality described below step by step Description: plt.figure() plt.pcolormesh(delz, cmap='RdBu', vmin=0, vmax=5) plt.show() It seems like this should work too... ...but no. How do we look at maps of gradients? Step1: Make up a uniform vector field of wind direction U=(u,v) at nodes
Python Code: xdelz = mg.link_vector_to_raster(delz, flip_vertically=True)[:,5] print np.shape(xdelz),xdelz # Make up a uniform vector field of wind direction U = (u,v) (at nodes or links?) # Calculate wind stress (function of U and delz) # Calculate flux vector Q = (qx,qy) # Calculate flux divergence for i in range(25): ... g = mg.calculate_gradients_at_active_links(z) ... qs = -kd*g ... dqsdx = mg.calculate_flux_divergence_at_nodes(qs) ... dzdt = -dqsdx ... z[interior_nodes] += dzdt[interior_nodes]*dt Explanation: plt.figure() plt.pcolormesh(delz, cmap='RdBu', vmin=0, vmax=5) plt.show() It seems like this should work too...<br> ...but no. How do we look at maps of gradients? End of explanation # Can I make shorter names? uxn = 'land_surface_air_flow__x_component_of_velocity' vxn = 'land_surface_air_flow__y_component_of_velocity' u = mg.add_zeros('node', uxn) v = mg.add_zeros('node', vxn) print('Shape of u {shape}'.format(shape=u.shape)) # Yes u_at_link = mg.map_mean_of_link_nodes_to_link('land_surface_air_flow__x_component_of_velocity') v_at_link = mg.map_mean_of_link_nodes_to_link('land_surface_air_flow__y_component_of_velocity') u_at_link = u_at_link[mg.active_links] v_at_link = v_at_link[mg.active_links] print('Shape of u_at_link {shape}'.format(shape=u_at_link.shape)) print('Shape of v_at_link {shape}'.format(shape=v_at_link.shape)) print('Shape of delz {shape}'.format(shape=delz.shape)) # Make up a uniform vector field of wind direction U = (u,v) (at nodes or links?) # Calculate wind stress (function of U and delz) # Calculate flux vector Q = (qx,qy) # Calculate flux divergence Explanation: Make up a uniform vector field of wind direction U=(u,v) at nodes End of explanation
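The remaining steps in the cell above are left as comments, so one possible way to fill them in is sketched below. It is only an illustration under stated assumptions: the stress/flux rule and the constant k_stress are made up for demonstration and are not from the original notebook, while the grid methods and the fields (z, u_at_link, v_at_link) are exactly the ones already referenced above.
# Hedged sketch of the commented-out steps; the flux rule and k_stress are
# illustrative assumptions only.
k_stress = 0.01  # hypothetical transport coefficient

# Wind speed magnitude on active links, from the link-mapped components above.
speed_at_link = np.sqrt(u_at_link**2 + v_at_link**2)

# Topographic gradient along active links (same call as in the loop above).
dzdx_at_link = mg.calculate_gradients_at_active_links(z)

# A made-up flux rule: downwind transport scaled by wind speed, reduced by
# the uphill component of the topographic gradient.
q_at_link = k_stress * (speed_at_link * u_at_link - dzdx_at_link)

# Flux divergence back at nodes gives the rate of elevation change.
dqsdx = mg.calculate_flux_divergence_at_nodes(q_at_link)
dzdt = -dqsdx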
1,645
Given the following text description, write Python code to implement the functionality described below step by step Description: Exact solution used in MES runs We would like to MES the operation $$ \partial_\rho\partial_\rho f $$ Using cylindrical geometry. This could of course be done with $$ \partial_\rho^2 f $$ but terms like this appears in $\nabla(f\nabla_\perp g)$ terms Step1: Initialize Step2: Define the variables Step3: Define the function to take the derivative of NOTE Step4: Calculating the solution Step5: Plot Step6: Print the variables in BOUT++ format
Python Code: %matplotlib notebook from sympy import init_printing from sympy import S from sympy import sin, cos, tanh, exp, pi, sqrt from boutdata.mms import x, y, z, t from boutdata.mms import DDX import os, sys # If we add to sys.path, then it must be an absolute path common_dir = os.path.abspath('./../../../../common') # Sys path is a list of system paths sys.path.append(common_dir) from CELMAPy.MES import get_metric, make_plot, BOUT_print init_printing() Explanation: Exact solution used in MES runs We would like to MES the operation $$ \partial_\rho\partial_\rho f $$ Using cylindrical geometry. This could of course be done with $$ \partial_\rho^2 f $$ but terms like this appears in $\nabla(f\nabla_\perp g)$ terms End of explanation folder = '../properZ/' metric = get_metric() Explanation: Initialize End of explanation # Initialization the_vars = {} Explanation: Define the variables End of explanation # We need Lx from boututils.options import BOUTOptions myOpts = BOUTOptions(folder) Lx = eval(myOpts.geom['Lx']) # Gaussian with sinus and parabola # The skew sinus # In cartesian coordinates we would like a sinus with with a wave-vector in the direction # 45 degrees with respect to the first quadrant. This can be achieved with a wave vector # k = [1/sqrt(2), 1/sqrt(2)] # sin((1/sqrt(2))*(x + y)) # We would like 2 nodes, so we may write # sin((1/sqrt(2))*(x + y)*(2*pi/(2*Lx))) # Rewriting this to cylindrical coordinates, gives # sin((1/sqrt(2))*(x*(cos(z)+sin(z)))*(2*pi/(2*Lx))) # The gaussian # In cartesian coordinates we would like # f = exp(-(1/(2*w^2))*((x-x0)^2 + (y-y0)^2)) # In cylindrical coordinates, this translates to # f = exp(-(1/(2*w^2))*(x^2 + y^2 + x0^2 + y0^2 - 2*(x*x0+y*y0) )) # = exp(-(1/(2*w^2))*(rho^2 + rho0^2 - 2*rho*rho0*(cos(theta)*cos(theta0)+sin(theta)*sin(theta0)) )) # = exp(-(1/(2*w^2))*(rho^2 + rho0^2 - 2*rho*rho0*(cos(theta - theta0)) )) # A parabola # In cartesian coordinates, we have # ((x-x0)/Lx)^2 # Chosing this function to have a zero value at the edge yields in cylindrical coordinates # ((x*cos(z)+Lx)/(2*Lx))^2 w = 0.8*Lx rho0 = 0.3*Lx theta0 = 5*pi/4 the_vars['f'] = sin((1/sqrt(2))*(x*(cos(z)+sin(z)))*(2*pi/(2*Lx)))*\ exp(-(1/(2*w**2))*(x**2 + rho0**2 - 2*x*rho0*(cos(z - theta0)) ))*\ ((x*cos(z)+Lx)/(2*Lx))**2 Explanation: Define the function to take the derivative of NOTE: z must be periodic The field $f(\rho, \theta)$ must be of class infinity in $z=0$ and $z=2\pi$ The field $f(\rho, \theta)$ must be single valued when $\rho\to0$ The field $f(\rho, \theta)$ must be continuous in the $\rho$ direction with $f(\rho, \theta + \pi)$ Eventual BC in $\rho$ must be satisfied End of explanation the_vars['S'] = DDX(DDX(the_vars['f'], metric=metric), metric=metric) Explanation: Calculating the solution End of explanation make_plot(folder=folder, the_vars=the_vars, plot2d=True, include_aux=False) Explanation: Plot End of explanation BOUT_print(the_vars, rational=False) Explanation: Print the variables in BOUT++ format End of explanation
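Before moving on, an optional numerical spot-check of the manufactured solution can catch mistakes early. The sketch below is not part of the original MES setup: it assumes the_vars['f'] and the_vars['S'] are well defined at a few interior points, and it first prints the free symbols of S so that any extra symbols introduced by the metric are visible (they would need to be substituted as well before a fully numeric check).
# Hedged sketch: check which symbols the source actually depends on, then
# evaluate f and S at a few sample points to make sure they stay finite.
print('free symbols of S:', the_vars['S'].free_symbols)

sample_points = [(0.1 * Lx, 0.3), (0.5 * Lx, 2.0), (0.9 * Lx, 5.5)]
for rho_val, theta_val in sample_points:
    f_val = the_vars['f'].subs({x: rho_val, z: theta_val}).evalf()
    S_val = the_vars['S'].subs({x: rho_val, z: theta_val}).evalf()
    print('rho=%.3f theta=%.3f  f=%s  S=%s' % (rho_val, theta_val, f_val, S_val))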
1,646
Given the following text description, write Python code to implement the functionality described below step by step Description: python libs for all vis things Step1: matplotlib interactive vis notes Step2: + seaborn Step3: ipywidgets helpful tutorial here with matplotlib Step4: with seaborn!
Python Code: %pylab inline Explanation: python libs for all vis things End of explanation t = arange(0.0, 1.0, 0.01) y1 = sin(2*pi*t) y2 = sin(2*2*pi*t) import pandas as pd df = pd.DataFrame({'t': t, 'y1': y1, 'y2': y2}) df.head(10) fig = figure(1, figsize = (10,10)) ax1 = fig.add_subplot(211) ax1.plot(t, y1) ax1.grid(True) ax1.set_ylim((-2, 2)) ax1.set_ylabel('Gentle Lull') ax1.set_title('I can plot waves') for label in ax1.get_xticklabels(): label.set_color('r') ax2 = fig.add_subplot(212) ax2.plot(t, y2,) ax2.grid(True) ax2.set_ylim((-2, 2)) ax2.set_ylabel('Getting choppier') l = ax2.set_xlabel('Hi PyLadies') l.set_color('g') l.set_fontsize('large') show() Explanation: matplotlib interactive vis notes: http://matplotlib.org/users/navigation_toolbar.html End of explanation import seaborn as sns sns.set(color_codes=True) sns.distplot(y1) sns.distplot(y2) Explanation: + seaborn End of explanation from ipywidgets import widgets from IPython.html.widgets import * t = arange(0.0, 1.0, 0.01) def pltsin(f): plt.plot(t, sin(2*pi*t*f)) interact(pltsin, f=(1,10,0.1)) Explanation: ipywidgets helpful tutorial here with matplotlib End of explanation def pltsin(f): sns.distplot(sin(2*pi*t*f)) interact(pltsin, f=(1,10,0.1)) Explanation: with seaborn! End of explanation
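As a small extension (a sketch, not from the original notes), the same interact pattern scales to more than one slider; everything used below is already available from %pylab inline and the ipywidgets imports above.
# Hedged sketch: interact with two sliders, frequency and amplitude.
def pltsin2(f=2.0, amp=1.0):
    plt.plot(t, amp * sin(2 * pi * t * f))
    plt.ylim(-3, 3)  # fixed axes make the sliders easier to compare

interact(pltsin2, f=(1, 10, 0.1), amp=(0.5, 3.0, 0.1))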
1,647
Given the following text description, write Python code to implement the functionality described below step by step Description: Reusable Embeddings Learning Objectives 1. Learn how to use a pre-trained TF Hub text modules to generate sentence vectors 1. Learn how to incorporate a pre-trained TF-Hub module into a Keras model 1. Learn how to deploy and use a text model on CAIP Introduction In this notebook, we will implement text models to recognize the probable source (GitHub, TechCrunch, or The New York Times) of the titles we have in the title dataset. First, we will load and pre-process the texts and labels so that they are suitable to be fed to sequential Keras models with first layer being TF-hub pre-trained modules. Thanks to this first layer, we won't need to tokenize and integerize the text before passing it to our models. The pre-trained layer will take care of that for us, and consume directly raw text. However, we will still have to one-hot-encode each of the 3 classes into a 3 dimensional basis vector. Then we will build, train and compare simple DNN models starting with different pre-trained TF-Hub layers. Step1: Replace the variable values in the cell below Step2: Create a Dataset from BigQuery Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015. Here is a sample of the dataset Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning. Step7: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here. Step8: AutoML for text classification requires that * the dataset be in csv form with * the first column being the texts to classify or a GCS path to the text * the last colum to be the text labels The dataset we pulled from BiqQuery satisfies these requirements. Step9: Let's make sure we have roughly the same number of labels for each of our three labels Step10: Finally we will save our data, which is currently in-memory, to disk. We will create a csv file containing the full dataset and another containing only 1000 articles for development. Note Step11: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML). Step12: Let's write the sample datatset to disk. Step13: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located Step14: Loading the dataset As in the previous labs, our dataset consists of titles of articles along with the label indicating from which source these articles have been taken from (GitHub, TechCrunch, or The New York Times) Step15: Let's look again at the number of examples per label to make sure we have a well-balanced dataset Step16: Preparing the labels In this lab, we will use pre-trained TF-Hub embeddings modules for english for the first layer of our models. 
One immediate advantage of doing so is that the TF-Hub embedding module will take care for us of processing the raw text. This also means that our model will be able to consume text directly instead of sequences of integers representing the words. However, as before, we still need to preprocess the labels into one-hot-encoded vectors Step17: Preparing the train/test splits Let's split our data into train and test splits Step18: To be on the safe side, we verify that the train and test splits have roughly the same number of examples per class. Since it is the case, accuracy will be a good metric to use to measure the performance of our models. Step19: Now let's create the features and labels we will feed our models with Step20: NNLM Model We will first try a word embedding pre-trained using a Neural Probabilistic Language Model. TF-Hub has a 50-dimensional one called nnlm-en-dim50-with-normalization, which also normalizes the vectors produced. Once loaded from its url, the TF-hub module can be used as a normal Keras layer in a sequential or functional model. Since we have enough data to fine-tune the parameters of the pre-trained embedding itself, we will set trainable=True in the KerasLayer that loads the pre-trained embedding Step21: Note that this TF-Hub embedding produces a single 50-dimensional vector when passed a sentence Step22: Swivel Model Then we will try a word embedding obtained using Swivel, an algorithm that essentially factorizes word co-occurrence matrices to create the words embeddings. TF-Hub hosts the pretrained gnews-swivel-20dim-with-oov 20-dimensional Swivel module. Step23: Similarly as the previous pre-trained embedding, it outputs a single vector when passed a sentence Step24: Building the models Let's write a function that takes as input an instance of a KerasLayer (i.e. the swivel_module or the nnlm_module we constructed above) as well as the name of the model (say swivel or nnlm) returns a compiled Keras sequential model starting with this pre-trained TF-hub layer, adding one or more dense relu layers to it, and ending with a softmax layer giving the probability of each of the classes Step25: Let's also wrap the training code into a train_and_evaluate function that * takes as input the training and validation data, as well as the compiled model itself, and the batch_size * trains the compiled model for 100 epochs at most, and does early-stopping when the validation loss is no longer decreasing * returns an history object, which will help us to plot the learning curves Step26: Training NNLM Step27: Training Swivel Step28: Comparing the models Swivel trains faster but achieves a lower validation accuracy, and requires more epochs to train on. At last, let's compare all the models we have trained at once using TensorBoard in order to choose the one that overfits the less for the same performance level. Run the output of the following command in your Cloud Shell to launch TensorBoard, and use the Web Preview on port 6006 to view it. Step29: Deploying the model The first step is to serialize one of our trained Keras model as a SavedModel Step30: Then we can deploy the model using the gcloud CLI as before Step31: Note the ENDPOINT_RESOURCENAME above as you'll need it below for the prediction. Before we try our deployed model, let's inspect its signature to know what to send to the deployed API Step32: Let's go ahead and hit our model Step33: Insert below the ENDPOINT_RESOURCENAME from the deployment code above.
Python Code: import os import pandas as pd from google.cloud import bigquery Explanation: Reusable Embeddings Learning Objectives 1. Learn how to use a pre-trained TF Hub text modules to generate sentence vectors 1. Learn how to incorporate a pre-trained TF-Hub module into a Keras model 1. Learn how to deploy and use a text model on CAIP Introduction In this notebook, we will implement text models to recognize the probable source (GitHub, TechCrunch, or The New York Times) of the titles we have in the title dataset. First, we will load and pre-process the texts and labels so that they are suitable to be fed to sequential Keras models with first layer being TF-hub pre-trained modules. Thanks to this first layer, we won't need to tokenize and integerize the text before passing it to our models. The pre-trained layer will take care of that for us, and consume directly raw text. However, we will still have to one-hot-encode each of the 3 classes into a 3 dimensional basis vector. Then we will build, train and compare simple DNN models starting with different pre-trained TF-Hub layers. End of explanation PROJECT = !(gcloud config get-value core/project) PROJECT = PROJECT[0] BUCKET = PROJECT # defaults to PROJECT REGION = "us-central1" # Replace with your REGION os.environ["PROJECT"] = PROJECT os.environ["BUCKET"] = BUCKET os.environ["REGION"] = REGION %%bash gcloud config set project $PROJECT gcloud config set ai/region $REGION Explanation: Replace the variable values in the cell below: End of explanation %%bigquery --project $PROJECT SELECT url, title, score FROM `bigquery-public-data.hacker_news.stories` WHERE LENGTH(title) > 10 AND score > 10 AND LENGTH(url) > 0 LIMIT 10 Explanation: Create a Dataset from BigQuery Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015. Here is a sample of the dataset: End of explanation %%bigquery --project $PROJECT SELECT ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source, COUNT(title) AS num_articles FROM `bigquery-public-data.hacker_news.stories` WHERE REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$') AND LENGTH(title) > 10 GROUP BY source ORDER BY num_articles DESC LIMIT 100 Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i> End of explanation regex = ".*://(.[^/]+)/" sub_query = SELECT title, ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source FROM `bigquery-public-data.hacker_news.stories` WHERE REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$') AND LENGTH(title) > 10 .format( regex ) query = SELECT LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title, source FROM ({sub_query}) WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch') .format( sub_query=sub_query ) print(query) Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning. 
End of explanation bq = bigquery.Client(project=PROJECT) title_dataset = bq.query(query).to_dataframe() title_dataset.head() Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here. End of explanation print(f"The full dataset contains {len(title_dataset)} titles") Explanation: AutoML for text classification requires that * the dataset be in csv form with * the first column being the texts to classify or a GCS path to the text * the last colum to be the text labels The dataset we pulled from BiqQuery satisfies these requirements. End of explanation title_dataset.source.value_counts() Explanation: Let's make sure we have roughly the same number of labels for each of our three labels: End of explanation DATADIR = "./data/" if not os.path.exists(DATADIR): os.makedirs(DATADIR) FULL_DATASET_NAME = "titles_full.csv" FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME) # Let's shuffle the data before writing it to disk. title_dataset = title_dataset.sample(n=len(title_dataset)) title_dataset.to_csv( FULL_DATASET_PATH, header=False, index=False, encoding="utf-8" ) Explanation: Finally we will save our data, which is currently in-memory, to disk. We will create a csv file containing the full dataset and another containing only 1000 articles for development. Note: It may take a long time to train AutoML on the full dataset, so we recommend to use the sample dataset for the purpose of learning the tool. End of explanation sample_title_dataset = title_dataset.sample(n=1000) sample_title_dataset.source.value_counts() Explanation: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML). End of explanation SAMPLE_DATASET_NAME = "titles_sample.csv" SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME) sample_title_dataset.to_csv( SAMPLE_DATASET_PATH, header=False, index=False, encoding="utf-8" ) sample_title_dataset.head() import datetime import os import shutil import pandas as pd import tensorflow as tf from tensorflow.keras.callbacks import EarlyStopping, TensorBoard from tensorflow.keras.layers import Dense from tensorflow.keras.models import Sequential from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.utils import to_categorical from tensorflow_hub import KerasLayer print(tf.__version__) %matplotlib inline Explanation: Let's write the sample datatset to disk. 
End of explanation MODEL_DIR = f"gs://{BUCKET}/text_models" Explanation: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located: End of explanation ls $DATADIR DATASET_NAME = "titles_full.csv" TITLE_SAMPLE_PATH = os.path.join(DATADIR, DATASET_NAME) COLUMNS = ["title", "source"] titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS) titles_df.head() Explanation: Loading the dataset As in the previous labs, our dataset consists of titles of articles along with the label indicating from which source these articles have been taken from (GitHub, TechCrunch, or The New York Times): End of explanation titles_df.source.value_counts() Explanation: Let's look again at the number of examples per label to make sure we have a well-balanced dataset: End of explanation CLASSES = {"github": 0, "nytimes": 1, "techcrunch": 2} N_CLASSES = len(CLASSES) def encode_labels(sources): classes = [CLASSES[source] for source in sources] one_hots = to_categorical(classes, num_classes=N_CLASSES) return one_hots encode_labels(titles_df.source[:4]) Explanation: Preparing the labels In this lab, we will use pre-trained TF-Hub embeddings modules for english for the first layer of our models. One immediate advantage of doing so is that the TF-Hub embedding module will take care for us of processing the raw text. This also means that our model will be able to consume text directly instead of sequences of integers representing the words. However, as before, we still need to preprocess the labels into one-hot-encoded vectors: End of explanation N_TRAIN = int(len(titles_df) * 0.95) titles_train, sources_train = ( titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN], ) titles_valid, sources_valid = ( titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:], ) Explanation: Preparing the train/test splits Let's split our data into train and test splits: End of explanation sources_train.value_counts() sources_valid.value_counts() Explanation: To be on the safe side, we verify that the train and test splits have roughly the same number of examples per class. Since it is the case, accuracy will be a good metric to use to measure the performance of our models. End of explanation X_train, Y_train = titles_train.values, encode_labels(sources_train) X_valid, Y_valid = titles_valid.values, encode_labels(sources_valid) X_train[:3] Y_train[:3] Explanation: Now let's create the features and labels we will feed our models with: End of explanation # TODO 1 NNLM = "https://tfhub.dev/google/nnlm-en-dim50/2" nnlm_module = KerasLayer( NNLM, output_shape=[50], input_shape=[], dtype=tf.string, trainable=True ) Explanation: NNLM Model We will first try a word embedding pre-trained using a Neural Probabilistic Language Model. TF-Hub has a 50-dimensional one called nnlm-en-dim50-with-normalization, which also normalizes the vectors produced. Once loaded from its url, the TF-hub module can be used as a normal Keras layer in a sequential or functional model. 
Since we have enough data to fine-tune the parameters of the pre-trained embedding itself, we will set trainable=True in the KerasLayer that loads the pre-trained embedding: End of explanation # TODO 1 nnlm_module(tf.constant(["The dog is happy to see people in the street."])) Explanation: Note that this TF-Hub embedding produces a single 50-dimensional vector when passed a sentence: End of explanation # TODO 1 SWIVEL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1" swivel_module = KerasLayer( SWIVEL, output_shape=[20], input_shape=[], dtype=tf.string, trainable=True ) Explanation: Swivel Model Then we will try a word embedding obtained using Swivel, an algorithm that essentially factorizes word co-occurrence matrices to create the words embeddings. TF-Hub hosts the pretrained gnews-swivel-20dim-with-oov 20-dimensional Swivel module. End of explanation # TODO 1 swivel_module(tf.constant(["The dog is happy to see people in the street."])) Explanation: Similarly as the previous pre-trained embedding, it outputs a single vector when passed a sentence: End of explanation def build_model(hub_module, name): model = Sequential( [ hub_module, # TODO 2 Dense(16, activation="relu"), Dense(N_CLASSES, activation="softmax"), ], name=name, ) model.compile( optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"] ) return model Explanation: Building the models Let's write a function that takes as input an instance of a KerasLayer (i.e. the swivel_module or the nnlm_module we constructed above) as well as the name of the model (say swivel or nnlm) returns a compiled Keras sequential model starting with this pre-trained TF-hub layer, adding one or more dense relu layers to it, and ending with a softmax layer giving the probability of each of the classes: End of explanation def train_and_evaluate(train_data, val_data, model, batch_size=5000): X_train, Y_train = train_data tf.random.set_seed(33) model_dir = os.path.join(MODEL_DIR, model.name) if tf.io.gfile.exists(model_dir): tf.io.gfile.rmtree(model_dir) history = model.fit( X_train, Y_train, epochs=100, batch_size=batch_size, validation_data=val_data, callbacks=[EarlyStopping(patience=1), TensorBoard(model_dir)], ) return history Explanation: Let's also wrap the training code into a train_and_evaluate function that * takes as input the training and validation data, as well as the compiled model itself, and the batch_size * trains the compiled model for 100 epochs at most, and does early-stopping when the validation loss is no longer decreasing * returns an history object, which will help us to plot the learning curves End of explanation data = (X_train, Y_train) val_data = (X_valid, Y_valid) nnlm_model = build_model(nnlm_module, "nnlm") nnlm_history = train_and_evaluate(data, val_data, nnlm_model) history = nnlm_history pd.DataFrame(history.history)[["loss", "val_loss"]].plot() pd.DataFrame(history.history)[["accuracy", "val_accuracy"]].plot() Explanation: Training NNLM End of explanation swivel_model = build_model(swivel_module, name="swivel") swivel_history = train_and_evaluate(data, val_data, swivel_model) history = swivel_history pd.DataFrame(history.history)[["loss", "val_loss"]].plot() pd.DataFrame(history.history)[["accuracy", "val_accuracy"]].plot() Explanation: Training Swivel End of explanation !echo tensorboard --logdir $MODEL_DIR --port 6006 Explanation: Comparing the models Swivel trains faster but achieves a lower validation accuracy, and requires more epochs to train on. 
At last, let's compare all the models we have trained at once using TensorBoard in order to choose the one that overfits the less for the same performance level. Run the output of the following command in your Cloud Shell to launch TensorBoard, and use the Web Preview on port 6006 to view it. End of explanation OUTPUT_DIR = "./savedmodels_vertex" shutil.rmtree(OUTPUT_DIR, ignore_errors=True) EXPORT_PATH = os.path.join(OUTPUT_DIR, "swivel") os.environ["EXPORT_PATH"] = EXPORT_PATH shutil.rmtree(EXPORT_PATH, ignore_errors=True) tf.keras.models.save_model(swivel_model, EXPORT_PATH) Explanation: Deploying the model The first step is to serialize one of our trained Keras model as a SavedModel: End of explanation %%bash # TODO 5 TIMESTAMP=$(date -u +%Y%m%d_%H%M%S) MODEL_DISPLAYNAME=title_model_$TIMESTAMP ENDPOINT_DISPLAYNAME=swivel_$TIMESTAMP IMAGE_URI="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest" ARTIFACT_DIRECTORY=gs://${BUCKET}/${MODEL_DISPLAYNAME}/ echo $ARTIFACT_DIRECTORY gsutil cp -r ${EXPORT_PATH}/* ${ARTIFACT_DIRECTORY} # Model MODEL_RESOURCENAME=$(gcloud ai models upload \ --region=$REGION \ --display-name=$MODEL_DISPLAYNAME \ --container-image-uri=$IMAGE_URI \ --artifact-uri=$ARTIFACT_DIRECTORY \ --format="value(model)") echo "MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}" echo "MODEL_RESOURCENAME=${MODEL_RESOURCENAME}" # Endpoint ENDPOINT_RESOURCENAME=$(gcloud ai endpoints create \ --region=$REGION \ --display-name=$ENDPOINT_DISPLAYNAME \ --format="value(name)") echo "ENDPOINT_DISPLAYNAME=${ENDPOINT_DISPLAYNAME}" echo "ENDPOINT_RESOURCENAME=${ENDPOINT_RESOURCENAME}" # Deployment DEPLOYED_MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}_deployment MACHINE_TYPE=n1-standard-2 MIN_REPLICA_COUNT=1 MAX_REPLICA_COUNT=3 gcloud ai endpoints deploy-model $ENDPOINT_RESOURCENAME \ --region=$REGION \ --model=$MODEL_RESOURCENAME \ --display-name=$DEPLOYED_MODEL_DISPLAYNAME \ --machine-type=$MACHINE_TYPE \ --min-replica-count=$MIN_REPLICA_COUNT \ --max-replica-count=$MAX_REPLICA_COUNT \ --traffic-split=0=100 Explanation: Then we can deploy the model using the gcloud CLI as before: End of explanation !saved_model_cli show \ --tag_set serve \ --signature_def serving_default \ --dir {EXPORT_PATH} !find {EXPORT_PATH} Explanation: Note the ENDPOINT_RESOURCENAME above as you'll need it below for the prediction. Before we try our deployed model, let's inspect its signature to know what to send to the deployed API: End of explanation %%writefile input.json { "instances": [ {"keras_layer_1_input": "hello"} ] } Explanation: Let's go ahead and hit our model: End of explanation %%bash ENDPOINT_RESOURCENAME= #TODO: insert the ENDPOINT_RESOURCENAME here from above gcloud ai endpoints predict $ENDPOINT_RESOURCENAME \ --region $REGION \ --json-request input.json Explanation: Insert below the ENDPOINT_RESOURCENAME from the deployment code above. End of explanation
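As an optional aside (not part of the lab), the same pre-trained layers can be used directly as sentence encoders, for example for a quick similarity check between titles. The sketch below only reuses the nnlm_module defined earlier plus numpy, and the example titles are made up.
import numpy as np

# Hedged sketch: cosine similarity between sentence embeddings produced by the
# pre-trained NNLM layer defined earlier in this notebook.
def embed(sentences):
    return nnlm_module(tf.constant(sentences)).numpy()

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

vecs = embed([
    "show hn a tiny static site generator written in rust",
    "new york city council passes new housing bill",
    "open source code hosting platform adds new search",
])
print("tech vs news :", cosine(vecs[0], vecs[1]))
print("tech vs tech :", cosine(vecs[0], vecs[2]))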
1,648
Given the following text description, write Python code to implement the functionality described below step by step Description: Simulation of between-subjects regressor effects on HDDM parameters This notebook is slightly modified from the .py posted by Michael, see here for the discussion. The goal of this script is to verify that effects of between-subjects regressor on HDDM parameters can be accurately estimated, including when there are also within-subject effects. That is, to see if we can estimate within HDDM the impact of a continuous measure x, where x can be age, IQ, etc on DDM params. Values of a between-subject regressor on a or v are simulated by creating multiple subject datasets at increasing values of the regressor. Michael Frank had reiterated a point from a previous post though, based on this article, that it is actually fine to fit the model without x as a regressor at all, and then to correlate the individual subject intercepts with the between subjects factor after the fact -- but that involves a classical correlation test whereas the regression approach here estimates the effect and its uncertainty within the bayesian parameter estimation itself. Here are some issues to consider when doing this as pointed out by Michael Frank Step1: Stimulation with between-subject regressor This is the code directly from the .py file downloaded from the forum. Note that the commented code, you can find the way to sitmulate between-subject regressor with a within-subject condition (level 1 and level 2) Generate data Here the simulation assumes that there is no interaction between the inter-subject regressor and within-subject condition. So the setting is $ a = int_{a} + \beta_{a} * X$, the level 1 is $a$ and level 2 is $a+0.5$. I will try to simulate a bit more complicated situation in the latter part Step2: Recover parameters Step3: load from file and examine recovery Step4: Stimulation with between-within interaction Generate data Here we simulate the interaction between between-subject regressor and within subject variable $a$ interacts with different levels, e.g., level 1 is $$ a_1 = int_{a} + \beta_{a1} * X $$, and level 2 is $$ a_2 = int_{a} + \beta_{a2} * X $$? Step5: Plot the correlations Step6: Recovery of parameters
Python Code: import hddm from numpy import mean, std import numpy as np from pandas import Series import pandas as pd import os as os import matplotlib.pyplot as plt # os.chdir('/storage/home/mnh5174') Explanation: Simulation of between-subjects regressor effects on HDDM parameters This notebook is slightly modified from the .py posted by Michael, see here for the discussion. The goal of this script is to verify that effects of between-subjects regressor on HDDM parameters can be accurately estimated, including when there are also within-subject effects. That is, to see if we can estimate within HDDM the impact of a continuous measure x, where x can be age, IQ, etc on DDM params. Values of a between-subject regressor on a or v are simulated by creating multiple subject datasets at increasing values of the regressor. Michael Frank had reiterated a point from a previous post though, based on this article, that it is actually fine to fit the model without x as a regressor at all, and then to correlate the individual subject intercepts with the between subjects factor after the fact -- but that involves a classical correlation test whereas the regression approach here estimates the effect and its uncertainty within the bayesian parameter estimation itself. Here are some issues to consider when doing this as pointed out by Michael Frank: A central issue that came up is that when estimating parameters, they should still be informed by, and in the range of, the priors. The intercept parameters for the main DDM parameters are informed by empirical priors (see the supplement of the 2013 paper), and so my earlier recommendation (in that thread) to try $a$ ~ 0 + X was misguided because it would enforce the intercept to be 0 which has very low prior density for $a$ (this is not a problem for $v$ ~ 0 + X, given that 0 is in the range of priors for drift, but it turns out there is no need to enforce intercepts to be 0 after all -- which is good because one would usually like to still estimate indiv subject parameters that reflect variability beyond that explained by the single x factor). Relatedly, a key recommendation is to first z-score (or at least mean-center) the between subjects factor x and to then estimate e.g., a ~ 1 + X. This way every subject still has an intercept drawn from an overall group distribution, with priors of that distribution centered at empirical priors, but then we can also estimate the degree to which this intercept is modified up or down by the (z-scored / mean-centered) between-subjects factor. The simulation and parameter recovery confirmed that the beta coefficient on the x factor is recovered quite well at least for the various cases that Michael had tried. (In the generative data, the intercepts should also be within the range of priors for this to work reliably (e.g. a_int > 0), which makes sense). He (Michael) also confirmed that this works for both threshold and drift, and when allowing both to vary freely during fitting it correctly estimated the magnitude of the specific factor that varied in generative data, and pinned the other one at ~zero). Note one limitation in the implementation is that while we allow x to affect the mean of the group distribution on the intercept, we assume the variance of that distribution is constant across levels of x - ie. there is no way currently to allow a_std to vary with x. 
But that's not such a terrible limitation (it just means that a_std for the overall group parameters will need to be large enough to accommodate the most variable group - but it should still go down relative to not allowing x to influence the mean at all). End of explanation beta_a = 0.4 # between subjects scaling factor - adjust this and should be able to recover its true value a_int = 1 # set intercept within range of empirical priors v_int = 0.3 x_range = range(11) # set values of between subject measures, here from 0 to 10 trials_per_level = 200 subjs_per_bin = 10 data_group = pd.DataFrame() # empty df to append to for x in x_range: xx = (x - mean(x_range)) / std(x_range) # z-score the x factor a = a_int + beta_a * xx # indiv subj param values that are centered on intercept but deviate from it up or down by z-scored x # v = v_int+ beta_a*xx # can also do for drift, here using same beta coeff # parvec = {'v':.3, 'a':a, 't':.3, 'sv':0, 'z':.5, 'sz':0, 'st':0} # parvec2 = {'v':.3, 'a':a+.5, 't':.3, 'sv':0, 'z':.5, 'sz':0, 'st':0} parvec = {'v':.3, 'a':a, 't':.3} # set a to value set by regression, here v is set to constant # note that for subjs_per_bin > 1, these are just the mean values of the parameters; indiv subjs within bin are sampled from distributions with the given means, but can still differ within bin around those means. #not including sv, sz, st in the statement ensures those are actually 0. data_a, params_a = hddm.generate.gen_rand_data({'level1': parvec}, size=trials_per_level, subjs=subjs_per_bin) # can also do with two levels of within-subj conditions # data_a, params_a = hddm.generate.gen_rand_data({'level1': parvec,'level2': parvec2}, size=trials_per_level, subjs=subjs_per_bin) data_a['z_x'] = Series(xx * np.ones((len(data_a))), index=data_a.index) # store the (z-scored) between subjects factor in the data frame, same value for every row for each subject in the bin data_a['x'] = Series(x * np.ones((len(data_a))), index=data_a.index) # store the (z-scored) between subjects factor in the data frame, same value for every row for each subject in the bin data_a['a_population'] = Series(a * np.ones((len(data_a))), index=data_a.index) # store the (z-scored) between subjects factor in the data frame, same value for every row for each subject in the bin data_a['subj_idx'] = Series(x * subjs_per_bin + data_a['subj_idx'] * np.ones((len(data_a))), index=data_a.index) # store the correct subj_idx when iterating over multiple subjs per bin # concatenate data_a with group data data_a_df = pd.DataFrame(data=[data_a]) data_group = data_group.append([data_a], ignore_index=True) #write original simulated data to file data_group.to_csv('data_group.csv') Explanation: Stimulation with between-subject regressor This is the code directly from the .py file downloaded from the forum. Note that the commented code, you can find the way to sitmulate between-subject regressor with a within-subject condition (level 1 and level 2) Generate data Here the simulation assumes that there is no interaction between the inter-subject regressor and within-subject condition. So the setting is $ a = int_{a} + \beta_{a} * X$, the level 1 is $a$ and level 2 is $a+0.5$. I will try to simulate a bit more complicated situation in the latter part: $a$ interacts with different levels, e.g., level 1 is $ a_1 = int_{a} + \beta_{a1} * X $, and level 2 is $ a_2 = int_{a} + \beta_{a2} * X $? 
End of explanation a_reg = {'model': 'a ~ 1 + z_x', 'link_func': lambda x: x} # a_reg_within = {'model': 'a ~ 1+x + C(condition)', 'link_func': lambda x: x} # for including and estimating within subject effects of condition v_reg = {'model': 'v ~ 1 + z_x', 'link_func': lambda x: x} reg_comb = [a_reg, v_reg] # m_reg = hddm.HDDMRegressor(data_group, reg_comb, group_only_regressors=['true']) # Do not estimate individual subject parameters for all regressors. m_reg = hddm.HDDMRegressor(data_group, a_reg, group_only_regressors=['true']) m_reg.find_starting_values() m_reg.sample(3000, burn=500, dbname='a_bwsubs_t200.db', db='pickle') m_reg.save('a_bwsubs_model_t200') m_reg.print_stats() # check values of reg coefficients against the generated ones Explanation: Recover parameters End of explanation #load from file and examine recovery m_reg = hddm.load('a_bwsubs_model_t200') data_group = pd.read_csv('data_group.csv') #look at correlation of recovered parameter with original subjdf = data_group.groupby('subj_idx').first().reset_index() ## check for residual correlation with x a_int_recovered =[] #obtain mean a intercept parameter for subjects from scipy import stats for i in range(0,(1+max(x_range))*subjs_per_bin): a='a_Intercept_subj.' a+=str(i) a+='.0' p1=m_reg.nodes_db.node[a] a_int_recovered.append(p1.trace().mean()) #obtain mean a regression weight for z_x a_x_recovered = m_reg.nodes_db.node['a_z_x'].trace().mean() #compute predicted a parameter as a function of subject intercept and between-subjects regressor subjdf.apred = a_int_recovered + a_x_recovered * subjdf.z_x # correlation of recovered a with population a. r = .97 here: good! fig = plt.figure() fig.set_size_inches(5, 4) plt.scatter(subjdf.apred,subjdf.a_population) # predicted versus observed a plt.xlabel('recovered a') plt.ylabel('simulated a') plt.savefig('correlation of simulated and recovered a params.png', dpi=300, format='png') print('Pearson correlation between a_x and x') print(stats.pearsonr(subjdf.apred,subjdf.a_population)) # correlation between predicted a value and population a value # correlation of subject intercept with between-subjects regressor # should be zero correlation if entirely accounted for by x regressor - # this correlation should instead be positive if x is removed from the model fit fig = plt.figure() fig.set_size_inches(5, 4) plt.scatter(subjdf.x,a_int_recovered) plt.xlabel('between subjs regressor x') plt.ylabel('recovered subject a intercept') plt.show() plt.savefig('residual correlation between subj intercept and x.png', dpi=300, format='png') print('Pearson correlation between a_x and x') print(stats.pearsonr(a_int_recovered,subjdf.x)) Explanation: load from file and examine recovery End of explanation beta_a1 = 0.4 # between subjects scaling factor - adjust this and should be able to recover its true value beta_a2 = 0.1 a_int = 1 # set intercept within range of empirical priors v_int = 0.3 x_range = range(11) # set values of between subject measures, here from 0 to 10 trials_per_level = 100 subjs_per_bin = 10 data_group = pd.DataFrame() # empty df to append to for x in x_range: xx = (x - mean(x_range)) / std(x_range) # z-score the x factor a1 = a_int + beta_a1 * xx # indiv subj param values that are centered on intercept but deviate from it up or down by z-scored x a2 = a_int + beta_a2 * xx # v = v_int+ beta_a*xx # can also do for drift, here using same beta coeff parvec = {'v':.3, 'a':a1, 't':.3, 'sv':0, 'z':.5, 'sz':0, 'st':0} parvec2 = {'v':.3, 'a':a2, 't':.3, 'sv':0, 'z':.5, 'sz':0, 'st':0} # 
parvec = {'v':.3, 'a':a, 't':.3} # set a to value set by regression, here v is set to constant # note that for subjs_per_bin > 1, these are just the mean values of the parameters; indiv subjs within bin are sampled from distributions with the given means, but can still differ within bin around those means. #not including sv, sz, st in the statement ensures those are actually 0. # data_a, params_a = hddm.generate.gen_rand_data({'level1': parvec}, size=trials_per_level, subjs=subjs_per_bin) # can also do with two levels of within-subj conditions data_a, params_a = hddm.generate.gen_rand_data({'level1': parvec,'level2': parvec2}, size=trials_per_level, subjs=subjs_per_bin) data_a['z_x'] = Series(xx * np.ones((len(data_a))), index=data_a.index) # store the (z-scored) between subjects factor in the data frame, same value for every row for each subject in the bin data_a['x'] = Series(x * np.ones((len(data_a))), index=data_a.index) # store the (z-scored) between subjects factor in the data frame, same value for every row for each subject in the bin #data_a['a_population'] = Series(a * np.ones((len(data_a))), index=data_a.index) # store the (z-scored) between subjects factor in the data frame, same value for every row for each subject in the bin data_a['a_population'] = Series(a1 * np.ones((len(data_a))), index=data_a.index) # store the (z-scored) between subjects factor in the data frame, same value for every row for each subject in the bin data_a.loc[data_a['condition'] == 'level2', 'a_population'] = a2 data_a['subj_idx'] = Series(x * subjs_per_bin + data_a['subj_idx'] * np.ones((len(data_a))), index=data_a.index) # store the correct subj_idx when iterating over multiple subjs per bin # concatenate data_a with group data data_a_df = pd.DataFrame(data=[data_a]) data_group = data_group.append([data_a], ignore_index=True) #write original simulated data to file data_group.to_csv('data_group_int.csv') Explanation: Stimulation with between-within interaction Generate data Here we simulate the interaction between between-subject regressor and within subject variable $a$ interacts with different levels, e.g., level 1 is $$ a_1 = int_{a} + \beta_{a1} * X $$, and level 2 is $$ a_2 = int_{a} + \beta_{a2} * X $$? End of explanation colors = np.where(data_group["condition"]== 'level1','r','b') fig = plt.figure() fig.set_size_inches(5, 4) plt.scatter(data_group.z_x,data_group.a_population, c=colors) # predicted versus observed a plt.xlabel('z_x') plt.ylabel('simulated a') Explanation: Plot the correlations End of explanation # a_reg_within = {'model': 'a ~ 1+x + C(condition)', 'link_func': lambda x: x} # for including and estimating within subject effects of condition # try main effect of within and interaction between within and between-subject a_reg = {'model': 'a ~ 1 + z_x:condition + C(condition)', 'link_func': lambda x: x} v_reg = {'model': 'v ~ 1 + z_x:condition + C(condition)', 'link_func': lambda x: x} reg_comb = [a_reg, v_reg] # m_reg = hddm.HDDMRegressor(data_group, reg_comb, group_only_regressors=['true']) # Do not estimate individual subject parameters for all regressors. 
m_reg = hddm.HDDMRegressor(data_group, a_reg, group_only_regressors=['true']) m_reg.find_starting_values() m_reg.sample(2000, burn=500, dbname='a_bwsubs_int.db', db='pickle') m_reg.save('a_bwsubs_int') m_reg.print_stats() # check values of reg coefficients against the generated ones m_reg.plot_posteriors() #load from file and examine recovery m_reg = hddm.load('a_bwsubs_int') data_group = pd.read_csv('data_group_int.csv') #look at correlation of recovered parameter with original subjdf = data_group.groupby(['subj_idx','condition']).first().reset_index() ## check for residual correlation with x a_int_recovered =[] # obtain mean a intercept parameter for subjects from scipy import stats for i in range(0,(1+max(x_range))*subjs_per_bin): a='a_Intercept_subj.' a+=str(i) a+='.0' p1=m_reg.nodes_db.node[a] a_int_recovered.append(p1.trace().mean()) a_int_recovered.append(p1.trace().mean()) # repeat the intercept one more time for the level2 #obtain mean a regression weight for z_x #a_x_recovered = m_reg.nodes_db.node['a_z_x'].trace().mean() #obtain mean a regression weight for z_x (slope) a_x_recovered_1 = m_reg.nodes_db.node['a_z_x:condition[level1]'].trace().mean() a_x_recovered_2 = m_reg.nodes_db.node['a_z_x:condition[level2]'].trace().mean() #compute predicted a parameter as a function of subject intercept and between-subjects regressor # subjdf['a_int_recovered'] = a_int_recovered subjdf.loc[subjdf['condition'] == 'level1', 'apred'] = a_int_recovered + a_x_recovered_1 * subjdf.z_x subjdf.loc[subjdf['condition'] == 'level2', 'apred'] = a_int_recovered + a_x_recovered_2 * subjdf.z_x # correlation of recovered a with population a. r = .97 here: good! subjdf1 = subjdf[subjdf['condition']=='level1'] fig = plt.figure() fig.set_size_inches(5, 4) plt.scatter(subjdf1.apred,subjdf1.a_population) # predicted versus observed a plt.xlabel('recovered a') plt.ylabel('simulated a') # plt.savefig('correlation of simulated and recovered a params.png', dpi=300, format='png') print('Pearson correlation between a_x and x') print(stats.pearsonr(subjdf1.apred,subjdf1.a_population)) # correlation between predicted a value and population a value # correlation of recovered a with population a. r = .97 here: good! 
subjdf2 = subjdf[subjdf['condition']=='level2'] fig = plt.figure() fig.set_size_inches(5, 4) plt.scatter(subjdf2.apred,subjdf2.a_population) # predicted versus observed a plt.xlabel('recovered a') plt.ylabel('simulated a') #plt.savefig('correlation of simulated and recovered a params.png', dpi=300, format='png') print('Pearson correlation between a_x and x') print(stats.pearsonr(subjdf2.apred,subjdf2.z_x)) # correlation between predicted a value and population a value colors = np.where(subjdf["condition"]== 'level1','r','b') fig, ax = plt.subplots(figsize=(5,4)) ax.scatter(x=subjdf['z_x'], y=subjdf['apred'], c=colors) ax.set_ylabel('recovered a', fontsize=12) ax.set_xlabel('z_x', fontsize=12) #ax.set_title(f'Group: {idx}', fontsize=14) colors = np.where(subjdf["condition"]== 'level1','r','b') fig, ax = plt.subplots(figsize=(5,4)) ax.scatter(x=subjdf['apred'], y=subjdf['a_population'], c=colors) ax.set_ylabel('recovered a', fontsize=12) ax.set_xlabel('simulated a', fontsize=12) #ax.set_title(f'Group: {idx}', fontsize=14) plt.savefig('correlation of simulated and recovered a params.png', dpi=300, format='png') for idx, gp in subjdf.groupby('condition'): fig, ax = plt.subplots(figsize=(5,4)) ax.scatter(x=gp['apred'], y=gp['a_population']) ax.set_ylabel('recovered a', fontsize=12) ax.set_xlabel('simulated a', fontsize=12) ax.set_title(f'Group: {idx}', fontsize=14) plt.show() # correlation of subject intercept with between-subjects regressor # should be zero correlation if entirely accounted for by x regressor - # this correlation should instead be positive if x is removed from the model fit subjdf['a_int_recovered'] = a_int_recovered for idx, gp in subjdf.groupby('condition'): fig, ax = plt.subplots(figsize=(5,4)) ax.scatter(x=gp['x'], y=gp['a_int_recovered']) ax.set_xlabel('between subjs regressor x', fontsize=12) ax.set_ylabel('recovered subject a intercept', fontsize=12) ax.set_title(f'Group: {idx}', fontsize=14) plt.show() Explanation: Recovery of parameters End of explanation
1,649
Given the following text description, write Python code to implement the functionality described below step by step Description: Strings and Control Flow Step1: Strings Strings are just arrays of characters Step2: Arithmetic with Strings Step3: You can compare strings Step4: Python supports Unicode characters You can enter unicode characters directly from the keyboard (depends on your operating system), or you can use the ASCII encoding. A list of ASCII encoding can be found here. For example the ASCII ecoding for the greek capital omega is U+03A9, so you can create the character with \U000003A9 Step5: Emoji are unicode characters, so you can use them a well Step6: Emoji can not be used as variable names (at least not yet ...) Step7: Watch out for variable types! Step8: Use explicit formatting to avoid these errors Python string formatting has the form Step9: Nice trick to convert number to a different base Step10: Formatting works with units Step11: Working with strings Step12: Find and Replace Step13: Justification and Cleaning Step14: Splitting and Joining Step15: Line Formatting Step16: Control Flow Like all computer languages, Python supports the standard types of control flows including Step17: For loops are different in python. You do not need to specify the beginning and end values of the loop Step18: Loops are slow in Python. Do not use them if you do not have to! Step19: Bonus Topic - Symbolic Mathematics (SymPy) Step20: You have to explicitly tell SymPy what symbols you want to use Step21: SymPy will now manipulate x, y, and z as symbols and not variables with values Step22: SymPy has all sorts of ways to manipulates symbolic equations Step23: SymPy can do your calculus homework.
Python Code: import numpy as np from astropy.table import QTable from astropy import units as u Explanation: Strings and Control Flow End of explanation s = 'spam' s,len(s),s[0],s[0:2] s[::-1] Explanation: Strings Strings are just arrays of characters End of explanation e = "eggs" s + e s + " " + e 4 * (s + " ") + e print(4 * (s + " ") + s + " and\n" + e) # use \n to get a newline with the print function Explanation: Arithmetic with Strings End of explanation "spam" == "good" "spam" != "good" "spam" == "spam" "sp" < "spam" "spam" < "eggs" Explanation: You can compare strings End of explanation print("This resistor has a value of 100 k\U000003A9") Ω = 1e3 Ω + np.pi Explanation: Python supports Unicode characters You can enter unicode characters directly from the keyboard (depends on your operating system), or you can use the ASCII encoding. A list of ASCII encoding can be found here. For example the ASCII ecoding for the greek capital omega is U+03A9, so you can create the character with \U000003A9 End of explanation pumpkin = "\U0001F383" radio_telescope = "\U0001F4E1" print(pumpkin + radio_telescope) Explanation: Emoji are unicode characters, so you can use them a well End of explanation 🐯 = 2.345 🐯 ** 2 Explanation: Emoji can not be used as variable names (at least not yet ...) End of explanation n = 4 print("I would like " + n + " orders of spam") print("I would like " + str(n) + " orders of spam") Explanation: Watch out for variable types! End of explanation A = 42 B = 1.23456 C = 1.23456e10 D = 'Forty Two' "I like the number {0:d}".format(A) "I like the number {0:s}".format(D) "The number {0:f} is fine, but not a cool as {1:d}".format(B,A) "The number {0:.3f} is fine, but not a cool as {1:d}".format(C,A) # 3 places after decimal "The number {0:.3e} is fine, but not a cool as {1:d}".format(C,A) # sci notation "{0:g} and {1:g} are the same format but different results".format(B,C) Explanation: Use explicit formatting to avoid these errors Python string formatting has the form: {Variable Index: Format Type} .format(Variable) End of explanation "Representation of the number {1:s} - dec: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(A,D) Explanation: Nice trick to convert number to a different base End of explanation NH_D = 34.47 * u.AU "The New Horizons spacecraft is {0:.1f} from the Sun".format(NH_D) "The New Horizons spacecraft is at a distance of {0.value:.1f} in the units of {0.unit:s} from the Sun".format(NH_D) Explanation: Formatting works with units End of explanation line = "My hovercraft is full of eels" Explanation: Working with strings End of explanation line.replace('eels', 'wheels') Explanation: Find and Replace End of explanation line.center(100) line.ljust(100) line.rjust(100, "*") line2 = " My hovercraft is full of eels " line2.strip() line3 = "*$*$*$*$*$*$*$*$My hovercraft is full of eels*$*$*$*$" line3.strip('*$') line3.lstrip('*$'), line3.rstrip('*$') Explanation: Justification and Cleaning End of explanation line.split() '_*_'.join(line.split()) ' '.join(line.split()[::-1]) Explanation: Splitting and Joining End of explanation anotherline = "mY hoVErCRaft iS fUlL oF eEELS" anotherline.upper() anotherline.lower() anotherline.title() anotherline.capitalize() anotherline.swapcase() Explanation: Line Formatting End of explanation x = -1 if x > 0: print("{0} is a positive number".format(x)) else: print("{0} is not a positive number".format(x)) x = 0 if x > 0: print("x is positive") elif x == 0: print("x is zero") else: print("x is negative") y = 0 while y < 12: print(s, 
end=" ") # specify what charater to print at the end of output if y > 6: print(e, end=" * ") y += 1 Explanation: Control Flow Like all computer languages, Python supports the standard types of control flows including: IF statements WHILE loops FOR loops End of explanation T = QTable.read('Planets.csv', format='ascii.csv') T[0:2] for Value in T['Name']: print(Value) for Idx,Val in enumerate(T['Name']): print(Idx,Val) for Idx,Val in enumerate(T['Name']): a = T['a'][Idx] * u.AU if (a < 3.0 * u.AU): Place = "Inner" else: Place = "Outer" S = "The planet {0:s}, at a distance of {1:.1f}, is in the {2:s} solar system".format(Val,a,Place) print(S) Explanation: For loops are different in python. You do not need to specify the beginning and end values of the loop End of explanation np.random.seed(42) BigZ = np.random.random(10000) # 10,000 value array BigZ[:10] # This is slow! for Idx,Val in enumerate(BigZ): if (Val > 0.5): BigZ[Idx] = 0 BigZ[:10] %%timeit for Idx,Val in enumerate(BigZ): if (Val > 0.5): BigZ[Idx] = 0 # Masks are MUCH faster mask = np.where(BigZ>0.5) BigZ[mask] = 0 BigZ[:10] %%timeit -o mask = np.where(BigZ>0.5) BigZ[mask] = 0 Explanation: Loops are slow in Python. Do not use them if you do not have to! End of explanation import sympy as sp sp.init_printing() # Turns on pretty printing np.sqrt(8) sp.sqrt(8) Explanation: Bonus Topic - Symbolic Mathematics (SymPy) End of explanation x, y, z = sp.symbols('x y z') Explanation: You have to explicitly tell SymPy what symbols you want to use End of explanation E = 2*x + y E E + 3 E - x E/x Explanation: SymPy will now manipulate x, y, and z as symbols and not variables with values End of explanation sp.simplify(E/x) F = (x + 2)*(x - 3) F sp.expand(F) G = 2*x**2 + 5*x + 3 sp.factor(G) sp.solve(G,x) H = 2*y*x**3 + 12*x**2 - x + 3 - 8*x**2 + 4*x + x**3 + 5 + 2*y*x**2 + x*y H sp.collect(H,x) sp.collect(H,y) Explanation: SymPy has all sorts of ways to manipulates symbolic equations End of explanation G sp.diff(G,x) sp.diff(G,x,2) sp.integrate(G,x) sp.integrate(G,(x,0,5)) # limits x = 0 to 5 Explanation: SymPy can do your calculus homework. End of explanation
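One follow-on worth knowing (an addition here, not part of the original notebook): after manipulating an expression symbolically, sp.lambdify converts it into a plain numerical function that accepts NumPy arrays, which is useful for plotting or fast evaluation.

# Turn a symbolic expression into a fast numerical function.
import numpy as np
import sympy as sp

x = sp.symbols('x')
G = 2*x**2 + 5*x + 3

g_numeric = sp.lambdify(x, G, 'numpy')   # compile G for NumPy inputs
xs = np.linspace(0, 5, 6)
print(g_numeric(xs))                     # G evaluated at each point of xs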
1,650
Given the following text description, write Python code to implement the functionality described below step by step
Description:
 Solution Exercise 9
Team Hadochi
Jorn van der Ent
Michiel Voermans
23 January 2017
Load Modules and check for presence of ESRI Shapefile driver
Step1: Set working directory to 'data'
Step2: Interactive input system
Step3: Create shape file from input
Step4: Convert shapefile to KML with bash

Python Code: from osgeo import ogr from osgeo import osr import os driverName = "ESRI Shapefile" drv = ogr.GetDriverByName( driverName ) if drv is None: print "%s driver not available.\n" % driverName else: print "%s driver IS available.\n" % driverName Explanation: Solution Excercise 9 Team Hadochi Jorn van der Ent Michiel Voermans 23 January 2017 Load Modules and check for presence of ESRI Shapefile drive End of explanation os.chdir('./data') Explanation: Set working directory to 'data' End of explanation layername = raw_input("Name of Layer: ") pointnumber = raw_input("How many points do you want to insert? ") pointcoordinates = [] for number in range(1, (int(pointnumber)+1)): x = raw_input(("What is the Latitude (WGS 84) of Point %s ? " % str(number))) y = raw_input(("What is the Longitude (WGS 84) of Point %s ? " % str(number))) pointcoordinates += [(float(x), float(y))] # e.g.: # pointcoordinates =[(4.897070, 52.377956), (5.104480, 52.092876)] Explanation: Interactive input system End of explanation # Set filename fn = layername + ".shp" ds = drv.CreateDataSource(fn) # Set spatial reference spatialReference = osr.SpatialReference() spatialReference.ImportFromEPSG(4326) ## Create Layer layer=ds.CreateLayer(layername, spatialReference, ogr.wkbPoint) # Get layer Definition layerDefinition = layer.GetLayerDefn() for pointcoord in pointcoordinates: ## Create a point point = ogr.Geometry(ogr.wkbPoint) ## SetPoint(self, int point, double x, double y, double z = 0) point.SetPoint(0, pointcoord[0], pointcoord[1]) ## Feature is defined from properties of the layer:e.g: feature = ogr.Feature(layerDefinition) ## Lets add the points to the feature feature.SetGeometry(point) ## Lets store the feature in a layer layer.CreateFeature(feature) ds.Destroy() Explanation: Create shape file from input End of explanation bashcommand = 'ogr2ogr -f KML -t_srs crs:84 points.kml points.shp' os.system(bashcommand) Explanation: Convert shapefile to KML with bash End of explanation
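As an optional check (not part of the original exercise), the newly written shapefile can be re-opened with OGR to confirm the features were stored. This assumes the layer was saved as points.shp, matching the ogr2ogr command above; adjust the filename if you chose a different layer name.

from osgeo import ogr

ds_check = ogr.Open('points.shp')
if ds_check is None:
    print('Could not open points.shp')
else:
    layer_check = ds_check.GetLayer()
    print('Feature count in points.shp:', layer_check.GetFeatureCount())
    ds_check = None  # releases the data source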
1,651
Given the following text description, write Python code to implement the functionality described below step by step Description: Bienvenid@s a Jupyter Los cuadernos de Jupyter son una herramienta interactiva que te permite preparar documentos con código ejecutable, ecuaciones, texto, imágenes, videos, entre otros, que te ayuda a enriquecer o explicar la lógica detallada de tu código. Los cuadernos de jupyter son comúnmente usados en Step1: Cada celda la puedes usar para escribir el código que tu quieras y si de repente se te olvida alguna función o tienes duda de si el nombre es correcto IPython es muy amable en ese sentido. Para saber acerca de una función, es decir cuál es su salida o los parámetros que necesita puedes usar el signo de interrogación al final del nombre de la función. Ejercicio 2 En la siguiente celda busca las siguientes funciones Step2: Como te pudiste dar cuenta, cuando no encuentra la función te da un error... En IPython, y por lo tanto en Jupyter, hay una utilidad de completar con Tab. Esto quiere decir que si tu empiezas a escribir el nombre de una variable, función o atributo, no tienes que escribirlo todo, puedes empezar con unas cuantas letras y automáticamente (si es lo único que empieza de esa forma) lo va a completar por ti. Todos los flojos y/o olvidadizos amamos esta herramienta. En caso de que haya varias opciones no se va a completar, pero si lo vuelves a oprimir te va a mostrar en la celda todas las opciones que tienes... Step3: Ejercicio 3 Empieza a escribir las primeras tres letras de cada elemento de la celda anterior y presiona tab para ver si se puede autocompletar Step4: También hay funciones mágicas que nos permitirán hacer diversas tareas como mostrar las gráficas que se produzcan en el código dentro de una celda, medir el tiempo de ejecución del código y cambiar del directorio de trabajo, entre otras. para ver qué funciones mágicas hay en Jupyter sólo tienes que escribir python %magic Todas las funciones "mágicas" empiezan con el signo de porcentaje % Step5: Gráficas Ahora veremos unos ejemplos de gráficas y cómo hacerlas interactivas. Estos ejemplos fueron tomados de la libreta para demostración de nature Step7: La gráfica que estás viendo sigue la siguiente ecuación $$y=x^2$$ Ejercicio 4 Edita el código de arriba y vuélvelo a correr pero ahora intenta reemplazar la expresión
Python Code: # Lo primero que ejecutarás será 'Hola Jupyter' print('Hola a Todos') Explanation: Bienvenid@s a Jupyter Los cuadernos de Jupyter son una herramienta interactiva que te permite preparar documentos con código ejecutable, ecuaciones, texto, imágenes, videos, entre otros, que te ayuda a enriquecer o explicar la lógica detallada de tu código. Los cuadernos de jupyter son comúnmente usados en: ciencia, análisis de datos y educación. Eso no significa que son de uso exclusivo para esos casos, en mi experiencia los cuadernos me han ayudado a organizar mis pensamientos y visualizar mejor los datos que analizo. Tal vez te preguntes ¿qué puede hacer Jupyter por mí? Bueno, aquí hay unas cuántas ventajas de usar Jupyter: Trabajar en tu lenguaje preferido. Jupyter tiene soporte para más de 40 lenguajes de programación, incluyendo los que son populares para el análisis de datos como Python (obviamente), R, Julia y Scala. Así que si por algún extraño motivo decides que no te gustó python (lo dudo :P) puedes seguir disfrutando las libretas de jupyter. Compartir tus libretas con quien quieras. En donde quieras y como quieras. Esto significa que puedes escribir tareas y reportes en tus libretas y mandarlos a alguien específico o subirlos a tu repositorio de Github y que todo el mundo sepa lo que hiciste. Esto es como reproducibilidad al máximo... Convierte tus libretas a archivos de diferentas formatos. Puedes convertir tus libretas a documentos estáticos como HTML, LaTeX, PDF, Markdown, reStructuredText,etc. La documentación está en este vínculo y la forma fácil de hacerlo es: Click en "File" Pon el cursos sobre "Download as" Selecciona la opción que prefieras. Widgets interactivos Puedes crear salidas con videos, barras para cambiar valores... Exploraremos esto más adelante. Ahora que ya sabes lo que Jupyter es, vamos a usarlo!! Formato y ejecución de celdas los cuadernos de Jupyter se manejan con celdas y es importante saber que cada celda puede ser de distintos tipos, unas van a ser de código (ya sea python, r, julia o el kernel deseado) y otras de markdown. Lo primero que vas a hacer es dar click en la celda anterior y en esta para que notes que ambas son celdas de markdown. Puedes notar que hay formas para que en el documento aparezcan enlaces, listas, palabras resaltadas en negritas, encabezados. Aunado a esto puedes insertar fórmulas en LaTex, código en HTML, imágenes, tablas y código no ejecutable pero resaltado con la sintaxis apropiada. Ejemplos con estilos usados normalmente Encabezados Si escribes esto: # Encabezado 1 ## Encabezado 2 ### Encabezado 3 ... ###### Encabezado 6 Obtienes esto: Encabezado 1 Encabezado 2 Encabezado 3 ... Encabezado 6 Modificaciones en las letras Puedes hacer que las letras se vean en negritas, cursivas o tachadas de la siguiente forma: **Negritas** o __Negritas__ *Cursivas* o _Cursivas_ ~~Tachado~~ Negrita, Negrita, Cursiva, Cursiva, ~~Tachada~~ Listas Puedes listar objetos ya sea con puntos o con números de la siguiente forma - Objeto 1 - Sub objeto - Otro sub objeto - Objeto 2 1. Objeto 1 2. Objeto 2 Objeto 1 sub objeto otro sub objeto Objeto 2 Objeto Objeto Enlaces a páginas Para insertar enlaces a páginas relevantes puedes hacerlo directamente copiando y pegando el url o puedes hacer que una palabra esté ligada a un vínculo de a siguiente forma [palabra](dirección) Cheatsheet de Markdown. 
En esta página puedes encontrar la forma de hacer muchas cosas más Ejercicio 1 Crea una tabla que tenga como encabezados: Miembro Edad Género De todos los miembros de tu familia Las celdas por default van a estar en modo de código pero puedes cambiarla de dos formas, una en la barra que está llena de íconos hay una parte que dice Code, esta nada más tienes que hacer un click y cambiarlo a Markdown. La seguna es usando el teclado y lo que debes de hacer es seleccionar tu celda, presionar la tecla Esc, presionar la letra m y presionar Enter para empezar a escribir... Una vez que terminaste de escribir, puedes ejecutar cada celda ya sea presionando Shift + Enter o con el boton de "play" en la barra de herramientas Ya que vimos la parte de Markdown veamos la parte de código. Cada vez que creas una libreta nueva tu puedes decir en qué lenguaje quieres tu kernel, en nuestro caso sólo veremos python 3 porque no hemos instalado ningún otro kernel para jupyter pero si te interesa puedes ver cómo hacerlo aquí Así que el código que veremos a continuación es en python. End of explanation sum? max? round? mean? Explanation: Cada celda la puedes usar para escribir el código que tu quieras y si de repente se te olvida alguna función o tienes duda de si el nombre es correcto IPython es muy amable en ese sentido. Para saber acerca de una función, es decir cuál es su salida o los parámetros que necesita puedes usar el signo de interrogación al final del nombre de la función. Ejercicio 2 En la siguiente celda busca las siguientes funciones: sum, max, round, mean. No olvides ejecutar la celda después de haber escrito las funciones. End of explanation variable = 50 saludo = 'Hola' Explanation: Como te pudiste dar cuenta, cuando no encuentra la función te da un error... En IPython, y por lo tanto en Jupyter, hay una utilidad de completar con Tab. Esto quiere decir que si tu empiezas a escribir el nombre de una variable, función o atributo, no tienes que escribirlo todo, puedes empezar con unas cuantas letras y automáticamente (si es lo único que empieza de esa forma) lo va a completar por ti. Todos los flojos y/o olvidadizos amamos esta herramienta. En caso de que haya varias opciones no se va a completar, pero si lo vuelves a oprimir te va a mostrar en la celda todas las opciones que tienes... End of explanation vars? Explanation: Ejercicio 3 Empieza a escribir las primeras tres letras de cada elemento de la celda anterior y presiona tab para ver si se puede autocompletar End of explanation %magic Explanation: También hay funciones mágicas que nos permitirán hacer diversas tareas como mostrar las gráficas que se produzcan en el código dentro de una celda, medir el tiempo de ejecución del código y cambiar del directorio de trabajo, entre otras. para ver qué funciones mágicas hay en Jupyter sólo tienes que escribir python %magic Todas las funciones "mágicas" empiezan con el signo de porcentaje % End of explanation # Importa matplotlib (paquete para graficar) y numpy (paquete para arreglos). # Fíjate en el la función mágica para que aparezca nuestra gráfica en la celda. %matplotlib inline import matplotlib.pyplot as plt import numpy as np # Crea un arreglo de 30 valores para x que va de 0 a 5. x = np.linspace(0, 5, 30) y = np.sin(x) # grafica y versus x fig, ax = plt.subplots(nrows=1, ncols=1) ax.plot(x, y, color='red') ax.set_xlabel('x') ax.set_ylabel('y') ax.set_title('A simple graph of $y=x^2$') Explanation: Gráficas Ahora veremos unos ejemplos de gráficas y cómo hacerlas interactivas. 
Estos ejemplos fueron tomados de la libreta para demostración de nature End of explanation # Importa matplotlib y numpy # con la misma "magia". %matplotlib inline import matplotlib.pyplot as plt import numpy as np # Importa la función interactiva de IPython usada # para construir los widgets interactivos from IPython.html.widgets import interact def plot_sine(frequency=4.0, grid_points=12, plot_original=True): Grafica muestras discretas de una curva sinoidal en ``[0, 1]``. x = np.linspace(0, 1, grid_points + 2) y = np.sin(2 * frequency * np.pi * x) xf = np.linspace(0, 1, 1000) yf = np.sin(2 * frequency * np.pi * xf) fig, ax = plt.subplots(figsize=(8, 6)) ax.set_xlabel('x') ax.set_ylabel('signal') ax.set_title('Aliasing in discretely sampled periodic signal') if plot_original: ax.plot(xf, yf, color='red', linestyle='solid', linewidth=2) ax.plot(x, y, marker='o', linewidth=2) # la función interactiva construye automáticamente una interfase de usuario para explorar # la gráfica de la función de seno. interact(plot_sine, frequency=(1.0, 22.0, 0.5), grid_points=(10, 16, 1), plot_original=True) Explanation: La gráfica que estás viendo sigue la siguiente ecuación $$y=x^2$$ Ejercicio 4 Edita el código de arriba y vuélvelo a correr pero ahora intenta reemplazar la expresión: y = x**2 con: y=np.sin(x) Gráficas interactivas End of explanation
1,652
Given the following text description, write Python code to implement the functionality described below step by step
Description:
 Decision Trees
First we'll load some fake data on past hires I made up.  Note how we use pandas to convert a csv file into a DataFrame:
Step1: scikit-learn needs everything to be numerical for decision trees to work. So, we'll map Y,N to 1,0 and levels of education to some scale of 0-2. In the real world, you'd need to think about how to deal with unexpected or missing data! By using map(), we know we'll get NaN for unexpected values.
Step2: Next we need to separate the features from the target column that we're trying to build a decision tree for.
Step3: Now actually construct the decision tree:
Step4: ... and display it. Note you need to have pydotplus installed for this to work. (!pip install pydotplus)
To read this decision tree, each condition branches left for "true" and right for "false". When you end up at a value, the value array represents how many samples exist in each target value. So value = [0. 5.] means there are 0 "no hires" and 5 "hires" by the time we get to that point. value = [3. 0.] means 3 no-hires and 0 hires.
Step5: Ensemble learning
Python Code: import numpy as np import pandas as pd from sklearn import tree input_file = "e:/sundog-consult/udemy/datascience/PastHires.csv" df = pd.read_csv(input_file, header = 0) df.head() Explanation: Decison Trees First we'll load some fake data on past hires I made up. Note how we use pandas to convert a csv file into a DataFrame: End of explanation d = {'Y': 1, 'N': 0} df['Hired'] = df['Hired'].map(d) df['Employed?'] = df['Employed?'].map(d) df['Top-tier school'] = df['Top-tier school'].map(d) df['Interned'] = df['Interned'].map(d) d = {'BS': 0, 'MS': 1, 'PhD': 2} df['Level of Education'] = df['Level of Education'].map(d) df.head() Explanation: scikit-learn needs everything to be numerical for decision trees to work. So, we'll map Y,N to 1,0 and levels of education to some scale of 0-2. In the real world, you'd need to think about how to deal with unexpected or missing data! By using map(), we know we'll get NaN for unexpected values. End of explanation features = list(df.columns[:6]) features Explanation: Next we need to separate the features from the target column that we're trying to bulid a decision tree for. End of explanation y = df["Hired"] X = df[features] clf = tree.DecisionTreeClassifier() clf = clf.fit(X,y) Explanation: Now actually construct the decision tree: End of explanation from IPython.display import Image from sklearn.externals.six import StringIO import pydotplus dot_data = StringIO() tree.export_graphviz(clf, out_file=dot_data, feature_names=features) graph = pydotplus.graph_from_dot_data(dot_data.getvalue()) Image(graph.create_png()) Explanation: ... and display it. Note you need to have pydotplus installed for this to work. (!pip install pydotplus) To read this decision tree, each condition branches left for "true" and right for "false". When you end up at a value, the value array represents how many samples exist in each target value. So value = [0. 5.] mean there are 0 "no hires" and 5 "hires" by the tim we get to that point. value = [3. 0.] means 3 no-hires and 0 hires. End of explanation from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=10) clf = clf.fit(X, y) #Predict employment of an employed 10-year veteran print (clf.predict([[10, 1, 4, 0, 0, 0]])) #...and an unemployed 10-year veteran print (clf.predict([[10, 0, 4, 0, 0, 0]])) Explanation: Ensemble learning: using a random forest We'll use a random forest of 10 decision trees to predict employment of specific candidate profiles: End of explanation
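A short follow-up that is not in the original notebook: after fitting, the random forest exposes feature_importances_, which shows which columns drove the hiring predictions most strongly.

import pandas as pd

importances = pd.Series(clf.feature_importances_, index=features)
print(importances.sort_values(ascending=False))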
1,653
Given the following text description, write Python code to implement the functionality described below step by step
Description:
 <a href='http
Step1: Import matplotlib.pyplot as plt and set %matplotlib inline if you are using the jupyter notebook. What command do you use if you aren't using the jupyter notebook?
Step2: Exercise 1
Follow along with these steps
Step3: Exercise 2
Create a figure object and put two axes on it, ax1 and ax2. Located at [0,0,1,1] and [0.2,0.5,.2,.2] respectively.
Step4: Now plot (x,y) on both axes. And call your figure object to show it.
Step5: Exercise 3
Create the plot below by adding two axes to a figure object at [0,0,1,1] and [0.2,0.5,.4,.4]
Step6: Now use x,y, and z arrays to recreate the plot below. Notice the xlimits and y limits on the inserted plot
Step7: Exercise 4
Use plt.subplots(nrows=1, ncols=2) to create the plot below.
Step8: Now plot (x,y) and (x,z) on the axes. Play around with the linewidth and style
Step9: See if you can resize the plot by adding the figsize() argument in plt.subplots() or copying and pasting your previous code.
Python Code: import numpy as np x = np.arange(0,100) y = x*2 z = x**2 Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a> Matplotlib Exercises - Solutions Welcome to the exercises for reviewing matplotlib! Take your time with these, Matplotlib can be tricky to understand at first. These are relatively simple plots, but they can be hard if this is your first time with matplotlib, feel free to reference the solutions as you go along. Also don't worry if you find the matplotlib syntax frustrating, we actually won't be using it that often throughout the course, we will switch to using seaborn and pandas built-in visualization capabilities. But, those are built-off of matplotlib, which is why it is still important to get exposure to it! * NOTE: ALL THE COMMANDS FOR PLOTTING A FIGURE SHOULD ALL GO IN THE SAME CELL. SEPARATING THEM OUT INTO MULTIPLE CELLS MAY CAUSE NOTHING TO SHOW UP. * Exercises Follow the instructions to recreate the plots using this data: Data End of explanation import matplotlib.pyplot as plt %matplotlib inline # plt.show() for non-notebook users Explanation: Import matplotlib.pyplot as plt and set %matplotlib inline if you are using the jupyter notebook. What command do you use if you aren't using the jupyter notebook? End of explanation fig = plt.figure() ax = fig.add_axes([0,0,1,1]) ax.plot(x,y) ax.set_xlabel('x') ax.set_ylabel('y') ax.set_title('title') Explanation: Exercise 1 Follow along with these steps: * Create a figure object called fig using plt.figure() * Use add_axes to add an axis to the figure canvas at [0,0,1,1]. Call this new axis ax. * Plot (x,y) on that axes and set the labels and titles to match the plot below: End of explanation fig = plt.figure() ax1 = fig.add_axes([0,0,1,1]) ax2 = fig.add_axes([0.2,0.5,.2,.2]) Explanation: Exercise 2 Create a figure object and put two axes on it, ax1 and ax2. Located at [0,0,1,1] and [0.2,0.5,.2,.2] respectively. End of explanation ax1.plot(x,y) ax1.set_xlabel('x') ax1.set_ylabel('y') ax2.plot(x,y) ax2.set_xlabel('x') ax2.set_ylabel('y') fig # Show figure object Explanation: Now plot (x,y) on both axes. And call your figure object to show it. End of explanation fig = plt.figure() ax = fig.add_axes([0,0,1,1]) ax2 = fig.add_axes([0.2,0.5,.4,.4]) Explanation: Exercise 3 Create the plot below by adding two axes to a figure object at [0,0,1,1] and [0.2,0.5,.4,.4] End of explanation ax.plot(x,z) ax.set_xlabel('X') ax.set_ylabel('Z') ax2.plot(x,y) ax2.set_xlabel('X') ax2.set_ylabel('Y') ax2.set_title('zoom') ax2.set_xlim(20,22) ax2.set_ylim(30,50) fig Explanation: Now use x,y, and z arrays to recreate the plot below. Notice the xlimits and y limits on the inserted plot: End of explanation # Empty canvas of 1 by 2 subplots fig, axes = plt.subplots(nrows=1, ncols=2) Explanation: Exercise 4 Use plt.subplots(nrows=1, ncols=2) to create the plot below. End of explanation axes[0].plot(x,y,color="blue", lw=3, ls='--') axes[1].plot(x,z,color="red", lw=3, ls='-') fig Explanation: Now plot (x,y) and (x,z) on the axes. Play around with the linewidth and style End of explanation fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(12,2)) axes[0].plot(x,y,color="blue", lw=5) axes[0].set_xlabel('x') axes[0].set_ylabel('y') axes[1].plot(x,z,color="red", lw=3, ls='--') axes[1].set_xlabel('x') axes[1].set_ylabel('z') Explanation: See if you can resize the plot by adding the figsize() argument in plt.subplots() are copying and pasting your previous code. End of explanation
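One extra step beyond the exercises: once a figure looks right, it can be written to disk with savefig. The filename and dpi below are arbitrary illustrative choices.

fig.savefig('exercise_plot.png', dpi=200, bbox_inches='tight')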
1,654
Given the following text description, write Python code to implement the functionality described below step by step Description: 函数 Step1: 可接受任意数量参数的函数 为了能让一个函数接受任意数量的位置参数,可以使用一个*参数 为了接受任意数量的关键字参数,使用一个以 **开头的参数 这个和Packing和unpacking的用法是相同的,关键字参数一般是可以表示成字典的unpacking的 *arg1, **arg2就可以表示所有的参数形式 一个*参数只能出现在函数定义中最后一个位置参数后面,而 **参数只能出现在最后一个参数。 有一点要注意的是,在*参数后面仍然可以定义其他参数 Step2: ```python def a(x, *args, y) Step3: 给函数参数增加元信息 使用函数参数注解是一个很好的办法,它能提示程序员应该怎样正确使用这个函数 PEP 3107 -- Function Annotations | Python.org Python 函数注释 - CSDN博客 python解释器不会对这些注解添加任何的语义。它们不会被类型检查,运行时跟没有加注解之前的效果也没有任何差距。 然而,对于那些阅读源码的人来讲就很有帮助啦。第三方工具和框架可能会对这些注解添加语义。同时它们也会出现在文档中 函数注解只存储在函数的 __annotations__属性中 func.__annotations__ 尽管注解的使用方法可能有很多种,但是它们的主要用途还是文档。 因为python并没有类型声明,通常来讲仅仅通过阅读源码很难知道应该传递什么样的参数给这个函数。 这时候使用注解就能给程序员更多的提示,让他们可以正确的使用函数 Step4: 默认参数 定义一个有可选参数的函数是非常简单的,直接在函数定义中给参数指定一个默认值,并放到参数列表最后就行了 如果默认参数是一个可修改的容器比如一个列表、集合或者字典,可以使用None作为默认值 默认参数的值仅仅在函数定义的时候赋值一次 默认参数的值应该是不可变的对象,比如None、True、False、数字或字符串 Step5: 定义匿名或内联函数 当一些函数很简单,仅仅只是计算一个表达式的值的时候,就可以使用lambda表达式来代替了 lambda表达式典型的使用场景是排序或数据reduce等: Step6: 这其中的奥妙在于lambda表达式中的x是一个自由变量, 在运行时绑定值,而不是定义时就绑定,这跟函数的默认值参数定义是不同的 lambda 表达式的参数是在运行时绑定的,而函数定义的参数是在定义的时候才绑定的,两者是存在这不同的 如果你想让某个匿名函数在定义时就捕获到值,可以将那个参数值定义成默认参数即可 Step7: 通过使用函数默认值参数形式,lambda函数在定义时就能绑定到值 减少可调用对象的参数个数 如果需要减少某个函数的参数个数,你可以使用 functools.partial() 。 partial() 函数允许你给一个或多个参数设置固定的值,减少接下来被调用时的参数个数 partial() 通常被用来微调其他库函数所使用的回调函数的参数
Python Code: %matplotlib inline # 多行结果输出支持 from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" Explanation: 函数 End of explanation # 可变参数 packing and unpacking def avg(first, *rest): return (first + sum(rest)) / (1 + len(rest)) # Sample use avg(1, 2) # 1.5 avg(1, 2, 3, 4) # 2.5 # 如果你还希望某个函数能同时接受任意数量的位置参数和关键字参数,可以同时使用*和** # 使用这个函数时,所有位置参数会被放到args元组中,所有关键字参数会被放到字典kwargs中 def anyargs(*args, **kwargs): print(args) # A tuple print(kwargs) # A dict arg1 = 1, 2, 4 arg2 = {'q': 2, 'a': 3} anyargs(*arg1, **arg2) Explanation: 可接受任意数量参数的函数 为了能让一个函数接受任意数量的位置参数,可以使用一个*参数 为了接受任意数量的关键字参数,使用一个以 **开头的参数 这个和Packing和unpacking的用法是相同的,关键字参数一般是可以表示成字典的unpacking的 *arg1, **arg2就可以表示所有的参数形式 一个*参数只能出现在函数定义中最后一个位置参数后面,而 **参数只能出现在最后一个参数。 有一点要注意的是,在*参数后面仍然可以定义其他参数 End of explanation def recv(maxsize, *, block): 'Receives a message' print(maxsize) recv(1024, True) # TypeError recv(1024, block=True) # Ok # 参数的默认值是None的意思是可以没有 def minimum(*values, clip=None): m = min(values) if clip is not None: m = clip if clip > m else m return m minimum(1, 5, 2, -5, 10) # Returns -5 minimum(1, 5, 2, -5, 10, clip=0) # Returns 0 Explanation: ```python def a(x, *args, y): pass def b(x, args, y, *kwargs): pass ``` 只接受关键字参数的函数 将强制关键字参数放到某个*参数或者单个*后面就能达到这种效果 使用强制关键字参数会比使用位置参数表意更加清晰,程序也更加具有可读性 使用强制关键字参数也会比使用**kwargs参数更好,因为在使用函数help的时候输出也会更容易理解 End of explanation def add(x:int, y:int) -> int: return x + y Explanation: 给函数参数增加元信息 使用函数参数注解是一个很好的办法,它能提示程序员应该怎样正确使用这个函数 PEP 3107 -- Function Annotations | Python.org Python 函数注释 - CSDN博客 python解释器不会对这些注解添加任何的语义。它们不会被类型检查,运行时跟没有加注解之前的效果也没有任何差距。 然而,对于那些阅读源码的人来讲就很有帮助啦。第三方工具和框架可能会对这些注解添加语义。同时它们也会出现在文档中 函数注解只存储在函数的 __annotations__属性中 func.__annotations__ 尽管注解的使用方法可能有很多种,但是它们的主要用途还是文档。 因为python并没有类型声明,通常来讲仅仅通过阅读源码很难知道应该传递什么样的参数给这个函数。 这时候使用注解就能给程序员更多的提示,让他们可以正确的使用函数 End of explanation # Using a list as a default value # 可变的默认参数使用 None 作为默认值 def spam(a, b=None): # 这里使用 is if b is None: b = [] pass Explanation: 默认参数 定义一个有可选参数的函数是非常简单的,直接在函数定义中给参数指定一个默认值,并放到参数列表最后就行了 如果默认参数是一个可修改的容器比如一个列表、集合或者字典,可以使用None作为默认值 默认参数的值仅仅在函数定义的时候赋值一次 默认参数的值应该是不可变的对象,比如None、True、False、数字或字符串 End of explanation y = lambda x, y: x ** y y(2, 3) names = ['David Beazley', 'Brian Jones', 'Raymond Hettinger', 'Ned Batchelder'] sorted(names, key=lambda name: name.split()[-1].lower()) 'David Beazley'.split()[-1].lower() x = 10 a = lambda y: x + y x = 20 b = lambda y: x + y a(10) b(10) Explanation: 定义匿名或内联函数 当一些函数很简单,仅仅只是计算一个表达式的值的时候,就可以使用lambda表达式来代替了 lambda表达式典型的使用场景是排序或数据reduce等: End of explanation x = 10 a = lambda y, x=x: x + y x = 20 b = lambda y, x=x: x + y a(10) b(10) Explanation: 这其中的奥妙在于lambda表达式中的x是一个自由变量, 在运行时绑定值,而不是定义时就绑定,这跟函数的默认值参数定义是不同的 lambda 表达式的参数是在运行时绑定的,而函数定义的参数是在定义的时候才绑定的,两者是存在这不同的 如果你想让某个匿名函数在定义时就捕获到值,可以将那个参数值定义成默认参数即可 End of explanation def spam(a, b, c, d): print(a, b, c, d) from functools import partial s2 = partial(spam, d=3) s2(1, 2, 3) Explanation: 通过使用函数默认值参数形式,lambda函数在定义时就能绑定到值 减少可调用对象的参数个数 如果需要减少某个函数的参数个数,你可以使用 functools.partial() 。 partial() 函数允许你给一个或多个参数设置固定的值,减少接下来被调用时的参数个数 partial() 通常被用来微调其他库函数所使用的回调函数的参数 End of explanation
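One extra illustration (not part of the original text): partial() is commonly used to adapt a multi-argument function into the single-argument callback another API expects, for example a sort key.

from functools import partial

def distance(point, origin):
    return ((point[0] - origin[0]) ** 2 + (point[1] - origin[1]) ** 2) ** 0.5

points = [(1, 2), (3, 4), (5, 6), (7, 8)]
# Fix `origin` so that sorted() only needs to supply `point`.
print(sorted(points, key=partial(distance, origin=(4, 3))))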
1,655
Given the following text description, write Python code to implement the functionality described below step by step Description: <table align="left"> <td> <a href="https Step1: PIP Install Packages and dependencies Step2: 1. Project Configuration Step3: 2. Get training data In this step, we are going to Step4: 2.1. Input data Create a dictionary with a mapping for each label. Step5: Verify number of records Step6: 2.2. Data processing fn Step7: Some small test Step8: 2.3.Data preparation Text preprocessor Step9: 3. Model Create a TensorFlow model Step10: 3.1. Basic model Step11: 3.2. Pretrained Glove embeddings Embeddings can be downloaded from Stanford Glove project Step12: Create an embedding index Step13: 3.3. Create, compile and train TensorFlow model Step14: 4. Deployment 4.1. Prepare custom model prediction Step15: Testing custom prediction locally Step16: 4.2. Package it Step17: Wrap it up and copy to GCP Step18: 5. Create model and version Step19: 6. Testing Step20: Authenticate and call AI Plaform prediction API Step21: 7. Deploy using AI Platform Pipelines With AI Platform Pipelines, you can orchestrate your machine learning (ML) workflows as reusable and reproducible pipelines. AI Platform Pipelines saves you the difficulty of setting up Kubeflow Pipelines with TensorFlow Extended on Google Kubernetes Engine. Install the KubeFlow Pipelines SDK Step22: Import dependencies Step23: Create a Hosted AI Platform Pipeline Create a new Hosted KubeFlow pipeline under AI Platform -> Pipelines. Set up you AI Platform Pipeline as indicated here Note Step24: Train the model Step25: Deploy the model Step26: Run the KFP pipeline Step27: Obtain the KFP_HOST variable from the AI Platform Managed pipelines screen in Google Cloud Console.
Python Code: import sys # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. if 'google.colab' in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. else: %env GOOGLE_APPLICATION_CREDENTIALS '' Explanation: <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/main/notebooks/samples/tensorflow/sentiment_analysis/ai_platform_sentiment_analysis.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/main/notebooks/samples/tensorflow/sentiment_analysis/ai_platform_sentiment_analysis.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> Overview AI Platform Online Prediction now supports custom python code in to apply custom prediction routines, in this blog post we will perform sentiment analysis using Twitter data and Transfer learning using Pretrained Glove embeddings. This tutorial also uses the new AI Platform Pipelines product. Dataset We use the Twitter data which is called sentiment140 dataset. It contains 1,600,000 tweets extracted using the Twitter AI. The tweets have been annotated (0 = negative, 4 = positive) and they can be used to detect sentiment. It contains the following 6 fields: target: the polarity of the tweet (0 = negative, 2 = neutral, 4 = positive) ids: The id of the tweet ( 2087) date: the date of the tweet (Sat May 16 23:58:44 UTC 2009) flag: The query (lyx). If there is no query, then this value is NO_QUERY. user: the user that tweeted (robotickilldozr) text: the text of the tweet (Lyx is cool) The official link regarding the dataset with resources about how it was generated is here Objective In this notebook, we show how to deploy a TensorFlow model using AI Platform Custom Prediction Code using sentiment140 for sentiment analysis. Costs This tutorial uses billable components of Google Cloud Platform (GCP): Cloud AI Platform Cloud Storage Learn about Cloud AI Platform pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or AI Platform Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Google Cloud SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Google Cloud guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the Cloud SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate that environment and run pip install jupyter in a shell to install Jupyter. Run jupyter notebook in a shell to launch Jupyter. 
Open this notebook in the Jupyter Notebook Dashboard. Set up your GCP project The following steps are required, regardless of your notebook environment. Select or create a GCP project.. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the AI Platform APIs and Compute Engine APIs. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. Authenticate your GCP account If you are using AI Platform Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the GCP Console, go to the Create service account key page. From the Service account drop-down list, select New service account. In the Service account name field, enter a name. From the Role drop-down list, select Machine Learning Engine > AI Platform Admin and Storage > Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. If you are running this notebook in Colab, run the following cell to authenticate your Google Cloud Platform user account End of explanation !pip install -U tensorflow==1.15.* --user import tensorflow as tf print(tf.__version__) import pandas as pd import numpy as np import os Explanation: PIP Install Packages and dependencies End of explanation PROJECT_ID = '[your-project-id]' # TODO (Set to your GCP Project name) !gcloud config set project {PROJECT_ID} BUCKET_NAME = '[your-bucket-name]' # TODO (Set to your GCS Bucket name) REGION = 'us-central1' #@param {type:"string"} # Model information. ROOT = 'ml_pipeline' MODEL_DIR = os.path.join(ROOT,'models').replace("\\","/") PACKAGES_DIR = os.path.join(ROOT,'packages').replace("\\","/") !gsutil rm -r gs://{BUCKET_NAME}/{ROOT} Explanation: 1. Project Configuration End of explanation !gsutil cp gs://cloud-samples-data/ai-platform/sentiment_analysis/training.csv . Explanation: 2. Get training data In this step, we are going to: 1. Download Twitter data 2. Load the data to Pandas Dataframe. 3. Convert the class feature (sentiment) from string to a numeric indicator. Data can be downloaded directly from here (https://www.kaggle.com/kazanova/sentiment140) It is also located here: gs://cloud-samples-data/ai-platform/sentiment_analysis/training.csv You can copy it by using the following command: gsutil cp gs://cloud-samples-data/ai-platform/sentiment_analysis/training.csv . End of explanation sentiment_mapping = { 0: 'negative', 2: 'neutral', 4: 'positive' } df_twitter = pd.read_csv('training.csv', encoding='latin1', header=None)\ .rename(columns={ 0: 'sentiment', 1: 'id', 2: 'posted_at', 3: 'query', 4: 'username', 5: 'text' })[['sentiment', 'text']] df_twitter['sentiment_label'] = df_twitter['sentiment'].map(sentiment_mapping) Explanation: 2.1. Input data Create a dictionary with a mapping for each label. 
End of explanation df_twitter['sentiment_label'].count() Explanation: Verify number of records End of explanation %%writefile preprocess.py from tensorflow.python.keras.preprocessing import sequence from tensorflow.keras.preprocessing import text import re class TextPreprocessor(object): def __init__(self, vocab_size, max_sequence_length): self._vocab_size = vocab_size self._max_sequence_length = max_sequence_length self._tokenizer = None def _clean_line(self, text): text = re.sub(r"http\S+", "", text) text = re.sub(r"@[A-Za-z0-9]+", "", text) text = re.sub(r"#[A-Za-z0-9]+", "", text) text = text.replace("RT","") text = text.lower() text = text.strip() return text def fit(self, text_list): # Create vocabulary from input corpus. text_list_cleaned = [self._clean_line(txt) for txt in text_list] tokenizer = text.Tokenizer(num_words=self._vocab_size) tokenizer.fit_on_texts(text_list) self._tokenizer = tokenizer def transform(self, text_list): # Transform text to sequence of integers text_list = [self._clean_line(txt) for txt in text_list] text_sequence = self._tokenizer.texts_to_sequences(text_list) # Fix sequence length to max value. Sequences shorter than the length are # padded in the beginning and sequences longer are truncated # at the beginning. padded_text_sequence = sequence.pad_sequences( text_sequence, maxlen=self._max_sequence_length) return padded_text_sequence Explanation: 2.2. Data processing fn End of explanation from preprocess import TextPreprocessor processor = TextPreprocessor(5, 5) processor.fit(['hello Google Cloud AI Platform','test']) processor.transform(['hello Google Cloud AI Platform',"lol"]) Explanation: Some small test: End of explanation CLASSES = {'negative': 0, 'positive': 1} # label-to-int mapping VOCAB_SIZE = 25000 # Limit on the number vocabulary size used for tokenization MAX_SEQUENCE_LENGTH = 50 # Sentences will be truncated/padded to this length from preprocess import TextPreprocessor from sklearn.model_selection import train_test_split sents = df_twitter.text labels = np.array(df_twitter.sentiment_label.map(CLASSES)) # Train and test split X, _, y, _ = train_test_split(sents, labels, test_size=0.1) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25) # Create vocabulary from training corpus. processor = TextPreprocessor(VOCAB_SIZE, MAX_SEQUENCE_LENGTH) processor.fit(X_train) # Preprocess the data train_texts_vectorized = processor.transform(X_train) eval_texts_vectorized = processor.transform(X_test) import pickle with open('./processor_state.pkl', 'wb') as f: pickle.dump(processor, f) Explanation: 2.3.Data preparation Text preprocessor End of explanation # Hyperparameters LEARNING_RATE = .001 EMBEDDING_DIM = 50 FILTERS = 64 DROPOUT_RATE = 0.5 POOL_SIZE = 3 NUM_EPOCH = 25 BATCH_SIZE = 128 KERNEL_SIZES = [2, 5, 8] Explanation: 3. 
Model Create a TensorFlow model End of explanation def create_model(vocab_size, embedding_dim, filters, kernel_sizes, dropout_rate, pool_size, embedding_matrix): # Input layer model_input = tf.keras.layers.Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32') # Embedding layer z = tf.keras.layers.Embedding( input_dim=vocab_size + 1, output_dim=embedding_dim, input_length=MAX_SEQUENCE_LENGTH, weights=[embedding_matrix] )(model_input) z = tf.keras.layers.Dropout(dropout_rate)(z) # Convolutional block conv_blocks = [] for kernel_size in kernel_sizes: conv = tf.keras.layers.Convolution1D( filters=filters, kernel_size=kernel_size, padding='valid', activation='relu', bias_initializer='random_uniform', strides=1)(z) conv = tf.keras.layers.MaxPooling1D(pool_size=2)(conv) conv = tf.keras.layers.Flatten()(conv) conv_blocks.append(conv) z = tf.keras.layers.Concatenate()(conv_blocks) if len(conv_blocks) > 1 else conv_blocks[0] z = tf.keras.layers.Dropout(dropout_rate)(z) z = tf.keras.layers.Dense(100, activation='relu')(z) model_output = tf.keras.layers.Dense(1, activation='sigmoid')(z) model = tf.keras.models.Model(model_input, model_output) return model Explanation: 3.1. Basic model End of explanation !gsutil cp gs://cloud-samples-data/ai-platform/sentiment_analysis/glove.twitter.27B.50d.txt . Explanation: 3.2. Pretrained Glove embeddings Embeddings can be downloaded from Stanford Glove project: https://nlp.stanford.edu/projects/glove/ - Download file here (http://nlp.stanford.edu/data/glove.twitter.27B.zip) - Twitter (2B tweets, 27B tokens, 1.2M vocab, uncased, 25d, 50d, 100d, & 200d vectors, 1.42 GB download) It is also located here: gs://cloud-samples-data/ai-platform/sentiment_analysis/glove.twitter.27B.50d.txt You can copy it by using the following command: gsutil cp gs://cloud-samples-data/ai-platform/sentiment_analysis/glove.twitter.27B.50d.txt . End of explanation def get_coefs(word, *arr): return word, np.asarray(arr, dtype='float32') embeddings_index = dict(get_coefs(*o.strip().split()) for o in open('glove.twitter.27B.50d.txt','r', encoding='utf8')) word_index = processor._tokenizer.word_index nb_words = min(VOCAB_SIZE, len(word_index)) embedding_matrix = np.zeros((nb_words + 1, EMBEDDING_DIM)) for word, i in word_index.items(): if i >= VOCAB_SIZE: continue embedding_vector = embeddings_index.get(word) if embedding_vector is not None: embedding_matrix[i] = embedding_vector Explanation: Create an embedding index End of explanation model = create_model(VOCAB_SIZE, EMBEDDING_DIM, FILTERS, KERNEL_SIZES, DROPOUT_RATE,POOL_SIZE, embedding_matrix) # Compile model with learning parameters. optimizer = tf.keras.optimizers.Nadam(lr=0.001) model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['acc']) #Keras train history = model.fit( train_texts_vectorized, y_train, epochs=NUM_EPOCH, batch_size=BATCH_SIZE, validation_data=(eval_texts_vectorized, y_test), verbose=2, callbacks=[ tf.keras.callbacks.ReduceLROnPlateau( monitor='val_acc', min_delta=0.005, patience=3, factor=0.5), tf.keras.callbacks.EarlyStopping( monitor='val_loss', min_delta=0.005, patience=5, verbose=0, mode='auto' ), tf.keras.callbacks.History() ] ) with open('history.pkl','wb') as file: pickle.dump(history.history,file) model.save('keras_saved_model.h5') Explanation: 3.3. 
Create, compile and train TensorFlow model End of explanation %%writefile custom_prediction.py import os import pickle import numpy as np from datetime import date from google.cloud import logging import tensorflow.keras as keras class CustomModelPrediction(object): def __init__(self, model, processor): self._model = model self._processor = processor def _postprocess(self, predictions): labels = ['negative', 'positive'] return [ { "label":labels[int(np.round(prediction))], "score":float(np.round(prediction, 4)) } for prediction in predictions] def predict(self, instances, **kwargs): preprocessed_data = self._processor.transform(instances) predictions = self._model.predict(preprocessed_data) labels = self._postprocess(predictions) return labels @classmethod def from_path(cls, model_dir): model = keras.models.load_model( os.path.join(model_dir,'keras_saved_model.h5')) with open(os.path.join(model_dir, 'processor_state.pkl'), 'rb') as f: processor = pickle.load(f) return cls(model, processor) Explanation: 4. Deployment 4.1. Prepare custom model prediction End of explanation from custom_prediction import CustomModelPrediction classifier = CustomModelPrediction.from_path('.') requests = (['God I hate the north', 'god I love this']) response = classifier.predict(requests) response Explanation: Testing custom prediction locally End of explanation %%writefile setup.py from setuptools import setup setup( name='tweet_sentiment_classifier', version='0.1', include_package_data=True, scripts=['preprocess.py', 'custom_prediction.py'] ) Explanation: 4.2. Package it End of explanation !python setup.py sdist !gsutil cp ./dist/tweet_sentiment_classifier-0.1.tar.gz gs://{BUCKET_NAME}/{PACKAGES_DIR}/tweet_sentiment_classifier-0.1.tar.gz !gsutil cp keras_saved_model.h5 gs://{BUCKET_NAME}/{MODEL_DIR}/ !gsutil cp processor_state.pkl gs://{BUCKET_NAME}/{MODEL_DIR}/ Explanation: Wrap it up and copy to GCP End of explanation MODEL_NAME='twitter_model_custom_prediction' MODEL_VERSION='v1' RUNTIME_VERSION='1.15' PYTHON_VERSION='3.7' !gcloud beta ai-platform models create {MODEL_NAME} --regions {REGION} --enable-logging --enable-console-logging !gcloud ai-platform versions delete {MODEL_VERSION} --model {MODEL_NAME} --quiet !gcloud beta ai-platform versions create {MODEL_VERSION} \ --model {MODEL_NAME} \ --origin gs://{BUCKET_NAME}/{MODEL_DIR} \ --python-version {PYTHON_VERSION} \ --runtime-version {RUNTIME_VERSION} \ --package-uris gs://{BUCKET_NAME}/{PACKAGES_DIR}/tweet_sentiment_classifier-0.1.tar.gz \ --prediction-class=custom_prediction.CustomModelPrediction Explanation: 5. Create model and version End of explanation from googleapiclient import discovery from oauth2client.client import GoogleCredentials import json requests = [ 'god this episode is bad', 'meh, I kinda like it', 'what were the writer thinking, omg!', 'omg! what a twist, who would\'ve though :o!', 'woohoow, sansa for the win!' ] # JSON format the requests request_data = {'instances': requests} Explanation: 6. Testing End of explanation %%time api = discovery.build('ml', 'v1') parent = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, MODEL_VERSION) parent = 'projects/{}/models/{}'.format(PROJECT_ID, MODEL_NAME) response = api.projects().predict(body=request_data, name=parent).execute() response['predictions'] # Delete model version resource ! gcloud ai-platform versions delete {MODEL_VERSION} --model {MODEL_NAME} --quiet # Delete model resource ! 
gcloud ai-platform models delete {MODEL_NAME} --quiet Explanation: Authenticate and call AI Plaform prediction API End of explanation !pip install 'kfp>=0.1.31' --user Explanation: 7. Deploy using AI Platform Pipelines With AI Platform Pipelines, you can orchestrate your machine learning (ML) workflows as reusable and reproducible pipelines. AI Platform Pipelines saves you the difficulty of setting up Kubeflow Pipelines with TensorFlow Extended on Google Kubernetes Engine. Install the KubeFlow Pipelines SDK End of explanation import json import kfp import kfp.components as comp import kfp.dsl as dsl import pandas as pd import time Explanation: Import dependencies End of explanation # Project parameters. CLUSTER='' # TODO Change to your GKE cluster ZONE='us-central1-a' # Pipeline Parameters MODEL_NAME = 'sentiment_classifier' + str(int(time.time())) MODEL_VERSION = 'v1' + str(int(time.time())) RUNTIME_VERSION = '1.15' PYTHON_VERSION='3.7' PACKAGE_TRAINER_URI = 'gs://cloud-samples-data/ai-platform/sentiment_analysis/trainer-0.1.tar.gz' PACKAGE_CUSTOM_PREDICTION_URI = 'gs://cloud-samples-data/ai-platform/sentiment_analysis/custom_prediction-0.1.tar.gz' PACKAGE_URIS = json.dumps([PACKAGE_TRAINER_URI]) PACKAGE_PATH='./trainer' PYTHON_MODULE = 'trainer.task' TRAINING_FILE='gs://cloud-samples-data/ai-platform/sentiment_analysis/training.csv'.format(BUCKET_NAME) GLOVE_FILE='gs://cloud-samples-data/ai-platform/sentiment_analysis/glove.twitter.27B.50d.txt'.format(BUCKET_NAME) MODEL_DIR='gs://{}/models'.format(BUCKET_NAME) SAVED_MODEL_NAME='keras_saved_model.h5' PROCESSOR_STATE_FILE='processor_state.pkl' PIPELINE_NAME = 'Text Prediction' PIPELINE_FILENAME_PREFIX = 'twitter' PIPELINE_DESCRIPTION = 'Text Prediction' # Note, numeric parameters should be pass as string. TRAINER_ARGS = json.dumps(['--train-file', TRAINING_FILE, '--glove-file', GLOVE_FILE, '--learning-rate', '0.001', '--embedding-dim', '50', '--num-epochs', '25', '--filter-size', '64', '--batch-size', '128', '--vocab-size', '25000', '--pool-size', '3', '--max-sequence-length', '50', '--saved-model', SAVED_MODEL_NAME, '--preprocessor-state-file', PROCESSOR_STATE_FILE, '--gcs-bucket', BUCKET_NAME, '--deploy-gcp'] ) Explanation: Create a Hosted AI Platform Pipeline Create a new Hosted KubeFlow pipeline under AI Platform -> Pipelines. Set up you AI Platform Pipeline as indicated here Note: Verify you are using version 0.2.5 and above. More information here Seting up credentials If you run pipelines that requires calling any GCP services, such as Cloud Storage, Cloud ML Engine, Dataflow, or Dataproc, you need to set the application default credential to a pipeline step by mounting the proper GCP service account token as a Kubernetes secret. 
Documentation here Train and deploy the model End of explanation aiplatform_train_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/master/components/gcp/ml_engine/train/component.yaml') def train(project_id, trainer_args, package_uris, job_dir, region, python_module, python_version, runtime_version): return aiplatform_train_op( project_id=project_id, python_module=python_module, python_version=python_version, package_uris=package_uris, region=region, args=trainer_args, job_dir=job_dir, runtime_version=runtime_version ) Explanation: Train the model End of explanation aiplatform_deploy_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/master/components/gcp/ml_engine/deploy/component.yaml') def deploy(project_id, model_uri, model_id, model_version, runtime_version, python_version, version): return aiplatform_deploy_op( model_uri=model_uri, project_id=project_id, model_id=model_id, version_id=model_version, runtime_version=runtime_version, python_version=python_version, version=version, replace_existing_version=True, set_default=True) @dsl.pipeline( name=PIPELINE_NAME, description=PIPELINE_DESCRIPTION ) def pipeline(project_id=PROJECT_ID, python_module=PYTHON_MODULE, region=REGION, runtime_version=RUNTIME_VERSION, package_uris=PACKAGE_URIS, python_version=PYTHON_VERSION, job_dir=MODEL_DIR,): train_task = train(project_id, TRAINER_ARGS, package_uris, job_dir, region, python_module, python_version, runtime_version) deploy_task = deploy(project_id, train_task.outputs['job_dir'], MODEL_NAME, MODEL_VERSION, runtime_version, python_version, { "deploymentUri": 'gs://news-ml/models', "packageUris": [PACKAGE_CUSTOM_PREDICTION_URI, PACKAGE_TRAINER_URI], "predictionClass": 'model_prediction.CustomModelPrediction' } ) return True # Reference for invocation later pipeline_func = pipeline Explanation: Deploy the model End of explanation !kubectl get secrets !gcloud container clusters get-credentials "$CLUSTER" --zone "$ZONE" --project "$PROJECT_ID" Explanation: Run the KFP pipeline End of explanation KFP_HOST = '' pipeline = kfp.Client(host=KFP_HOST).create_run_from_pipeline_func(pipeline, arguments={}) pipeline.wait_for_run_completion(timeout=1800) Explanation: Obtain the KFP_HOST variable from the AI Platform Managed pipelines screen in Google Cloud Console. End of explanation
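The credentials note above explains that any pipeline step calling GCP services needs a service-account token mounted as a Kubernetes secret, but the wiring itself is left to external documentation. The sketch below shows one way it could look with this generation of the KFP SDK; the secret name user-gcp-sa, the key-file name key.json, and the reuse of the train() helper defined above are assumptions for illustration, not code from the original notebook.

# Hedged sketch: mount a GCP service-account key as a Kubernetes secret and
# attach it to the pipeline steps. Secret/key-file names are assumptions.
#
# One-time setup on the GKE cluster backing AI Platform Pipelines:
# !kubectl create secret generic user-gcp-sa --from-file=user-gcp-sa.json=key.json

import kfp.dsl as dsl
from kfp import gcp  # use_gcp_secret mounts the secret and sets GOOGLE_APPLICATION_CREDENTIALS

@dsl.pipeline(name=PIPELINE_NAME, description=PIPELINE_DESCRIPTION)
def pipeline_with_credentials(project_id=PROJECT_ID,
                              python_module=PYTHON_MODULE,
                              region=REGION,
                              runtime_version=RUNTIME_VERSION,
                              package_uris=PACKAGE_URIS,
                              python_version=PYTHON_VERSION,
                              job_dir=MODEL_DIR):
    # Reuse the train() helper defined earlier; the deploy() step can be
    # wrapped the same way so its AI Platform calls are authenticated too.
    train_task = train(project_id, TRAINER_ARGS, package_uris, job_dir, region,
                       python_module, python_version, runtime_version)
    train_task.apply(gcp.use_gcp_secret('user-gcp-sa'))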
Given the following text description, write Python code to implement the functionality described below step by step Description: Step5: Training data was collected in the Self-Driving Car simulator on Mac OS using a Playstation 3 console controller. Recording Measurement class To simplify accessing each measurement from the original CSV, I've encapsulated each row in a special class. It serves the following purposes Step7: Image Preprocessor Algorithm used to preprocess am image prior to feeding it into the network for training and predicting. Step11: Track 1 Training Dataset class Step12: Instantiates the Track 1 training dataset, prints details about the object then prints the first 5 elements of the dataframe. Step13: Feature Plotting After plotting the steering, throttle, brake, and speed features, we can see that the vast majority of driving was done between 27.5 and 31 MPH. Furthermore, every feature for brake, steering and throttle was 0 hence the large redish bar at zero for all features. Step14: Steering Histogram Plot After plotting a histogram of the steering feature for the entire dataset, we can see that a significant amount more recorded driving was done with the steering wheel between 0.01 and 0.25. We can also observe the majority of recorded driving was done with the steering wheel between -0.25 and 0.25. Very few recorded data went beyond a +/- .5 steering angle. Step15: Steering Line Plot After plotting a standard line graph (honestly, I don't know the official name of this graph) of the steering feature for the entire dataset, the widest curves appear to be dominantly right while smoother, more consistent left curves are observed throughout the entire recording session. Furthermore, I postulate the abrupt right turn spikes with large gaps are indicative of recovery training from the right line moving towards the center of the lane. Step16: Explore the features Here I sample the 0th feature and print some statistics about the 0th RecordingMeasurement. Step17: Here I randomize the training data and inject to first 10 measurements into the batch generator. Note each item in X_train is an instance of the RecordingMeasurement class. Step18: Visualize batch features Here I've plotted the first 10 randomly selected images from the batch of images. I've plotted the original image in the YUV colorspace as well as each channel Y, U and V. Between RGB, HSV and YUV, the YUV colorspace captured the most intuitive representation of steering angle to image pixels representing a curved road. Step19: Network Architecture BaseNetwork is the base class for all network implementations. It contains the necessary plumbing to save trained network data, load previously trained model and weights, and implements the abstract #fit and #build_model methods which all sub classes must implement. Step21: Track1 extends BaseNetwork. It contains a simple 4-layer convolutional neural network with 4 fully connected layers with 10% dropout after flattening the data as well as after the first FC layer. Step22: Instantiate the classifier Step23: Train the network
Python Code: class RecordingMeasurement: A representation of a vehicle's state at a point in time while driving around a track during recording. Features available are: left_camera_view - An image taken by the LEFT camera. center_camera_view - An image taken by the CENTER camera. right_camera_view - An image taken by the RIGHT camera. steering_angle - A normalized steering angle in the range -1 to 1. speed - The speed in which the vehicle was traveling at measurement time. This class serves the following purposes: 1. Provides convenience getter methods for left, center and camera images. In an effort to reduce memory footprint, they're essentially designed to lazily instantiate (once) the actual image array at the time the method is invoked. 2. Strips whitespace off the left, center, and right camera image paths. 3. Casts the original absolute path of each camera image to a relative path. This adds reassurance the image will load on any computer. 4. Provides a convenient #is_valid_measurment method which encapsulates pertinent logic to ensure data quality is satisfactory. def __init__(self, measurement_data): self.measurement_data = measurement_data self.steering_angle = round(float(measurement_data['steering']), 4) self.speed = round(float(measurement_data['speed']), 4) l = measurement_data['left'].strip() c = measurement_data['center'].strip() r = measurement_data['right'].strip() # cast absolute path to relative path to be environment agnostic l, c, r = [('./IMG/' + os.path.split(file_path)[1]) for file_path in (l, c, r)] self.left_camera_view_path = l self.center_camera_view_path = c self.right_camera_view_path = r def is_valid_measurement(self): Return true if the original center image is available to load. return os.path.isfile(self.center_camera_view_path) def left_camera_view(self): Lazily instantiates the left camera view image at first call. if not hasattr(self, '__left_camera_view'): self.__left_camera_view = self.__load_image(self.left_camera_view_path) return self.__left_camera_view def center_camera_view(self): Lazily instantiates the center camera view image at first call. if not hasattr(self, '__center_camera_view'): self.__center_camera_view = self.__load_image(self.center_camera_view_path) return self.__center_camera_view def right_camera_view(self): Lazily instantiates the right camera view image at first call. if not hasattr(self, '__right_camera_view'): self.__right_camera_view = self.__load_image(self.right_camera_view_path) return self.__right_camera_view def __load_image(self, imagepath): image_array = None if os.path.isfile(imagepath): image_array = misc.imread(imagepath) else: print('File Not Found: {}'.format(imagepath)) return image_array def __str__(self): results = [] results.append('Image paths:') results.append('') results.append(' Left camera path: {}'.format(self.left_camera_view_path)) results.append(' Center camera path: {}'.format(self.center_camera_view_path)) results.append(' Right camera path: {}'.format(self.right_camera_view_path)) results.append('') results.append('Additional features:') results.append('') results.append(' Steering angle: {}'.format(self.steering_angle)) results.append(' Speed: {}'.format(self.speed)) return '\n'.join(results) Explanation: Training data was collected in the Self-Driving Car simulator on Mac OS using a Playstation 3 console controller. Recording Measurement class To simplify accessing each measurement from the original CSV, I've encapsulated each row in a special class. 
It serves the following purposes: Strips whitespace off the left, center, and right camera image paths. Casts the original absolute path of each camera image to a relative path. This adds reassurance the image will load on any computer. Provides a convenient #is_valid_measurment method which encapsulates pertinent logic to ensure data quality is satisfactory. Provides convenience getter methods for left, center and camera images. They're essentially designed to lazily instantiate (once) the actual image array at the time the method is invoked. End of explanation def preprocess_image(image_array, output_shape=(40, 80), colorspace='yuv'): Reminder: Source image shape is (160, 320, 3) Our preprocessing algorithm consists of the following steps: 1. Converts BGR to YUV colorspace. This allows us to leverage luminance (Y channel - brightness - black and white representation), and chrominance (U and V - blue–luminance and red–luminance differences respectively) 2. Crops top 31.25% portion and bottom 12.5% portion. The entire width of the image is preserved. This allows the model to generalize better to unseen roadways since we clop artifacts such as trees, buildings, etc. above the horizon. We also clip the hood from the image. 3. Finally, I allow users of this algorithm the ability to specify the shape of the final image via the output_shape argument. Once I've cropped the image, I resize it to the specified shape using the INTER_AREA interpolation agorithm as it is the best choice to preserve original image features. See `Scaling` section in OpenCV documentation: http://docs.opencv.org/trunk/da/d6e/tutorial_py_geometric_transformations.html # convert image to another colorspace if colorspace == 'yuv': image_array = cv2.cvtColor(image_array, cv2.COLOR_BGR2YUV) elif colorspace == 'hsv': image_array = cv2.cvtColor(image_array, cv2.COLOR_BGR2HSV) elif colorspace == 'rgb': image_array = cv2.cvtColor(image_array, cv2.COLOR_BGR2RGB) # [y1:y2, x1:x2] # # crops top 31.25% portion and bottom 12.5% portion # # The entire width of the image is preserved image_array = image_array[50:140, 0:320] # Let's blur the image to smooth out some of the artifacts kernel_size = 5 # Must be an odd number (3, 5, 7...) image_array = cv2.GaussianBlur(image_array, (kernel_size, kernel_size), 0) # resize image to output_shape image_array = cv2.resize(image_array, (output_shape[1], output_shape[0]), interpolation=cv2.INTER_AREA) return image_array Explanation: Image Preprocessor Algorithm used to preprocess am image prior to feeding it into the network for training and predicting. End of explanation class Track1Dataset: Parses driving_log.csv and constructs training, validation and test datasets corresponding to measurements taken at various points in time while recording on track 1. * X_train - A set of examples used for learning, that is to fit the parameters [i.e., weights] of the classifier. * X_val - A set of examples used to tune the hyperparameters [i.e., architecture, not weights] of a classifier, for example to choose the number of hidden units in a neural network. * X_test - A set of examples used only to assess the performance [generalization] of a fully-specified classifier. * y_train, y_val, y_test - The steering angle corresponding to their respective X features. 
DRIVING_LOG_PATH = './driving_log.csv' def __init__(self, validation_split_percentage=0.2, test_split_percentage=0.05): self.X_train = [] self.X_val = [] self.X_test = [] self.y_train = [] self.y_val = [] self.y_test = [] self.dataframe = None self.headers = [] self.__loaded = False self.__load(validation_split_percentage=validation_split_percentage, test_split_percentage=test_split_percentage) assert self.__loaded == True, 'The dataset was not loaded. Perhaps driving_log.csv is missing.' def __load(self, validation_split_percentage, test_split_percentage): Splits the training data into a validation and test dataset. * X_train - A set of examples used for learning, that is to fit the parameters [i.e., weights] of the classifier. * X_val - A set of examples used to tune the hyperparameters [i.e., architecture, not weights] of a classifier, for example to choose the number of hidden units in a neural network. * X_test - A set of examples used only to assess the performance [generalization] of a fully-specified classifier. * y_train, y_val, y_test - The steering angle corresponding to their respective X features. if not self.__loaded: X_train, y_train, headers, df = [], [], [], None # read in driving_log.csv and construct the # initial X_train and y_train before splitting # it into validation and testing sets. if os.path.isfile(self.DRIVING_LOG_PATH): df = pd.read_csv(self.DRIVING_LOG_PATH) headers = list(df.columns.values) for index, measurement_data in df.iterrows(): measurement = RecordingMeasurement(measurement_data=measurement_data) X_train.append(measurement) y_train.append(measurement.steering_angle) self.__loaded = True # generate the validation set X_train, X_val, y_train, y_val = train_test_split( X_train, y_train, test_size=validation_split_percentage, random_state=0) X_train, y_train, X_val, y_val = np.array(X_train), np.array(y_train, dtype=np.float32), \ np.array(X_val), np.array(y_val, dtype=np.float32) # generate the test set X_train, X_test, y_train, y_test = train_test_split( X_train, y_train, test_size=test_split_percentage, random_state=0) X_train, y_train, X_test, y_test = np.array(X_train), np.array(y_train, dtype=np.float32), \ np.array(X_test), np.array(y_test, dtype=np.float32) self.X_train = X_train self.X_val = X_val self.X_test = X_test self.y_train = y_train self.y_val = y_val self.y_test = y_test self.dataframe = df self.headers = headers def batch_generator(self, X, Y, label, num_epochs, batch_size=32, output_shape=(160, 320), flip_images=True, classifier=None, colorspace='yuv'): A custom batch generator with the main goal of reducing memory footprint on computers and GPUs with limited memory space. Infinitely yields `batch_size` elements from the X and Y datasets. During batch iteration, this algorithm randomly flips the image and steering angle to reduce bias towards a specific steering angle/direction. population = len(X) counter = 0 _index_in_epoch = 0 _tot_epochs = 0 batch_size = min(batch_size, population) batch_count = int(math.ceil(population / batch_size)) assert X.shape[0] == Y.shape[0], 'X and Y size must be identical.' print('Batch generating against the {} dataset with population {} and shape {}'.format(label, population, X.shape)) while True: counter += 1 print('batch gen iter {}'.format(counter)) for i in range(batch_count): start_i = _index_in_epoch _index_in_epoch += batch_size if _index_in_epoch >= population: # Save the classifier to support manual early stoppage if classifier is not None: classifier.save() print(' sampled entire population. 
reshuffling deck and resetting all counters.') perm = np.arange(population) np.random.shuffle(perm) X = X[perm] Y = Y[perm] start_i = 0 _index_in_epoch = batch_size _tot_epochs += 1 end_i = _index_in_epoch X_batch = [] y_batch = [] for j in range(start_i, end_i): steering_angle = Y[j] measurement = X[j] center_image = measurement.center_camera_view() if center_image is not None: image = preprocess_image(center_image, output_shape=output_shape, colorspace=colorspace) # Here I throw in a random image flip to reduce bias towards # a specific direction/steering angle. if flip_images and random.random() > 0.5: X_batch.append(np.fliplr(image)) y_batch.append(-steering_angle) else: X_batch.append(image) y_batch.append(steering_angle) yield np.array(X_batch), np.array(y_batch) def __str__(self): results = [] results.append('{} Stats:'.format(self.__class__.__name__)) results.append('') results.append(' [Headers]') results.append('') results.append(' {}'.format(self.headers)) results.append('') results.append('') results.append(' [Shapes]') results.append('') results.append(' training features: {}'.format(self.X_train.shape)) results.append(' training labels: {}'.format(self.y_train.shape)) results.append('') results.append(' validation features: {}'.format(self.X_val.shape)) results.append(' validation labels: {}'.format(self.y_val.shape)) results.append('') results.append(' test features: {}'.format(self.X_test.shape)) results.append(' test labels: {}'.format(self.y_test.shape)) results.append('') results.append(' [Dataframe sample]') results.append('') results.append(str(self.dataframe.head(n=5))) return '\n'.join(results) Explanation: Track 1 Training Dataset class End of explanation dataset = Track1Dataset(validation_split_percentage=0.2, test_split_percentage=0.05) print(dataset) Explanation: Instantiates the Track 1 training dataset, prints details about the object then prints the first 5 elements of the dataframe. End of explanation %matplotlib inline import matplotlib.pyplot as plt dataset.dataframe.plot.hist(alpha=0.5) Explanation: Feature Plotting After plotting the steering, throttle, brake, and speed features, we can see that the vast majority of driving was done between 27.5 and 31 MPH. Furthermore, every feature for brake, steering and throttle was 0 hence the large redish bar at zero for all features. End of explanation dataset.dataframe['steering'].plot.hist(alpha=0.5) Explanation: Steering Histogram Plot After plotting a histogram of the steering feature for the entire dataset, we can see that a significant amount more recorded driving was done with the steering wheel between 0.01 and 0.25. We can also observe the majority of recorded driving was done with the steering wheel between -0.25 and 0.25. Very few recorded data went beyond a +/- .5 steering angle. End of explanation dataset.dataframe['steering'].plot(alpha=0.5) Explanation: Steering Line Plot After plotting a standard line graph (honestly, I don't know the official name of this graph) of the steering feature for the entire dataset, the widest curves appear to be dominantly right while smoother, more consistent left curves are observed throughout the entire recording session. Furthermore, I postulate the abrupt right turn spikes with large gaps are indicative of recovery training from the right line moving towards the center of the lane. 
End of explanation print('Center camera view shape:\n\n{}\n'.format(dataset.X_train[0].center_camera_view().shape)) print(dataset.X_train[0]) Explanation: Explore the features Here I sample the 0th feature and print some statistics about the 0th RecordingMeasurement. End of explanation perm = np.arange(len(dataset.X_train)) np.random.shuffle(perm) output_shape = (40, 80, 3) generator = dataset.batch_generator( colorspace='yuv', X=dataset.X_train[0:10], Y=dataset.y_train[0:10], output_shape=output_shape, label='batch feature exploration', num_epochs=1, batch_size=10 ) Explanation: Here I randomize the training data and inject to first 10 measurements into the batch generator. Note each item in X_train is an instance of the RecordingMeasurement class. End of explanation from zimpy.plot.image_plotter import ImagePlotter # Grab the first 10 items from the training set and X_batch, y_batch = next(generator) # print(X_batch.shape) # print(y_batch.shape) # Cast to string so they render nicely in graph y_batch = [str(x) for x in y_batch] ImagePlotter.plot_images(X_batch, y_batch, rows=2, columns=5) ImagePlotter.plot_images(X_batch[:,:,:,0], y_batch, rows=2, columns=5) ImagePlotter.plot_images(X_batch[:,:,:,1], y_batch, rows=2, columns=5) ImagePlotter.plot_images(X_batch[:,:,:,2], y_batch, rows=2, columns=5) Explanation: Visualize batch features Here I've plotted the first 10 randomly selected images from the batch of images. I've plotted the original image in the YUV colorspace as well as each channel Y, U and V. Between RGB, HSV and YUV, the YUV colorspace captured the most intuitive representation of steering angle to image pixels representing a curved road. End of explanation class BaseNetwork: WEIGHTS_FILE_NAME = 'model_final.h5' MODEL_FILE_NAME = 'model_final.json' def __init__(self): self.uuid = uuid.uuid4() self.model = None self.weights = None def fit(self, X_train, y_train, X_val, y_val, nb_epoch=2, batch_size=32, samples_per_epoch=None, output_shape=(40, 80, 3)): raise NotImplementedError def build_model(self, input_shape, output_shape, learning_rate=0.001, dropout_prob=0.1, activation='relu'): raise NotImplementedError def save(self): print('Saved {} model.'.format(self.__class__.__name__)) self.__persist() def __persist(self): save_dir = os.path.join(os.path.dirname(__file__)) weights_save_path = os.path.join(save_dir, self.WEIGHTS_FILE_NAME) model_save_path = os.path.join(save_dir, self.MODEL_FILE_NAME) if not os.path.exists(save_dir): os.makedirs(save_dir) self.model.save_weights(weights_save_path) with open(model_save_path, 'w') as outfile: json.dump(self.model.to_json(), outfile) def __str__(self): results = [] if self.model is not None: results.append(self.model.summary()) return '\n'.join(results) Explanation: Network Architecture BaseNetwork is the base class for all network implementations. It contains the necessary plumbing to save trained network data, load previously trained model and weights, and implements the abstract #fit and #build_model methods which all sub classes must implement. End of explanation class Track1(BaseNetwork): def fit(self, model, dataset, nb_epoch=2, batch_size=32, samples_per_epoch=None, output_shape=(40, 80, 3)): # Fit the model leveraging the custom # batch generator baked into the # dataset itself. 
history = model.fit_generator( dataset.batch_generator( X=dataset.X_train, Y=dataset.y_train, label='train set', num_epochs=nb_epoch, batch_size=batch_size, output_shape=output_shape, classifier=self ), nb_epoch=nb_epoch, samples_per_epoch=len(X_train), verbose=2, validation_data=dataset.batch_generator( X=dataset.X_val, Y=dataset.y_val, label='validation set', num_epochs=nb_epoch, batch_size=batch_size, output_shape=output_shape ) ) print(history.history) self.save() def build_model(self, input_shape, output_shape, learning_rate=0.001, dropout_prob=0.1, activation='relu'): Inital zero-mean normalization input layer. A 4-layer deep neural network with 4 fully connected layers at the top. ReLU activation used on each convolution layer. Dropout of 10% (default) used after initially flattening after convolution layers. Dropout of 10% (default) used after first fully connected layer. Adam optimizer with 0.001 learning rate (default) used in this network. Mean squared error loss function was used since this is a regression problem and MSE is quite common and robust for regression analysis. model = Sequential() model.add(Lambda(lambda x: x / 255 - 0.5, input_shape=input_shape, output_shape=output_shape)) model.add(Convolution2D(24, 5, 5, border_mode='valid', activation=activation)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Convolution2D(36, 5, 5, border_mode='valid', activation=activation)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Convolution2D(48, 5, 5, border_mode='same', activation=activation)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Convolution2D(64, 3, 3, border_mode='same', activation=activation)) model.add(Flatten()) model.add(Dropout(dropout_prob)) model.add(Dense(1024, activation=activation)) model.add(Dropout(dropout_prob)) model.add(Dense(100, activation=activation)) model.add(Dense(50, activation=activation)) model.add(Dense(10, activation=activation)) model.add(Dense(1, init='normal')) optimizer = Adam(lr=learning_rate) model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy']) self.model = model model.summary() return model Explanation: Track1 extends BaseNetwork. It contains a simple 4-layer convolutional neural network with 4 fully connected layers with 10% dropout after flattening the data as well as after the first FC layer. End of explanation output_shape=(40, 80, 3) clf = Track1() model = clf.build_model( input_shape=output_shape, output_shape=output_shape, learning_rate=0.001, dropout_prob=0.1, activation='relu' ) Explanation: Instantiate the classifier End of explanation if False: clf.fit( model, dataset, nb_epoch=2, batch_size=32 ) Explanation: Train the network End of explanation
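To close the loop after training, here is a small inference sketch that is not part of the original notebook: it reloads the artifacts written by BaseNetwork.save() and predicts a steering angle for a single camera frame. The file names (model_final.json, model_final.h5) and the preprocess_image call match the code above, while the Keras import path and the example image path are assumptions.

# Hedged inference sketch: reload the saved architecture and weights, then
# predict a steering angle for one preprocessed center-camera frame.
import json
import numpy as np
from scipy import misc
from keras.models import model_from_json  # assumes the standalone Keras package used above

with open('model_final.json') as f:
    # BaseNetwork.save() stores a JSON-encoded architecture string.
    model = model_from_json(json.load(f))
model.load_weights('model_final.h5')

frame = misc.imread('./IMG/center_sample.jpg')  # placeholder image path
features = preprocess_image(frame, output_shape=(40, 80), colorspace='yuv')
steering_angle = float(model.predict(features[None, ...], batch_size=1)[0, 0])
print('Predicted steering angle: {:.4f}'.format(steering_angle))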
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2019 The TensorFlow Authors. Step1: Word embeddings <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Download the IMDb Dataset You will use the Large Movie Review Dataset through the tutorial. You will train a sentiment classifier model on this dataset and in the process learn embeddings from scratch. To read more about loading a dataset from scratch, see the Loading text tutorial. Download the dataset using Keras file utility and take a look at the directories. Step3: Take a look at the train/ directory. It has pos and neg folders with movie reviews labelled as positive and negative respectively. You will use reviews from pos and neg folders to train a binary classification model. Step4: The train directory also has additional folders which should be removed before creating training dataset. Step5: Next, create a tf.data.Dataset using tf.keras.preprocessing.text_dataset_from_directory. You can read more about using this utility in this text classification tutorial. Use the train directory to create both train and validation datasets with a split of 20% for validation. Step6: Take a look at a few movie reviews and their labels (1 Step7: Configure the dataset for performance These are two important methods you should use when loading data to make sure that I/O does not become blocking. .cache() keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files. .prefetch() overlaps data preprocessing and model execution while training. You can learn more about both methods, as well as how to cache data to disk in the data performance guide. Step8: Using the Embedding layer Keras makes it easy to use word embeddings. Take a look at the Embedding layer. The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer. Step9: When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on). If you pass an integer to an embedding layer, the result replaces each integer with the vector from the embedding table Step10: For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape (samples, sequence_length), where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the embedding layer above batches with shapes (32, 10) (batch of 32 sequences of length 10) or (64, 15) (batch of 64 sequences of length 15). The returned tensor has one more axis than the input, the embedding vectors are aligned along the new last axis. 
Pass it a (2, 3) input batch and the output is (2, 3, N) Step11: When given a batch of sequences as input, an embedding layer returns a 3D floating point tensor, of shape (samples, sequence_length, embedding_dimensionality). To convert from this sequence of variable length to a fixed representation there are a variety of standard approaches. You could use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it's the simplest. The Text Classification with an RNN tutorial is a good next step. Text preprocessing Next, define the dataset preprocessing steps required for your sentiment classification model. Initialize a TextVectorization layer with the desired parameters to vectorize movie reviews. You can learn more about using this layer in the Text Classification tutorial. Step12: Create a classification model Use the Keras Sequential API to define the sentiment classification model. In this case it is a "Continuous bag of words" style model. * The TextVectorization layer transforms strings into vocabulary indices. You have already initialized vectorize_layer as a TextVectorization layer and built it's vocabulary by calling adapt on text_ds. Now vectorize_layer can be used as the first layer of your end-to-end classification model, feeding transformed strings into the Embedding layer. * The Embedding layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are Step13: Compile and train the model You will use TensorBoard to visualize metrics including loss and accuracy. Create a tf.keras.callbacks.TensorBoard. Step14: Compile and train the model using the Adam optimizer and BinaryCrossentropy loss. Step15: With this approach the model reaches a validation accuracy of around 78% (note that the model is overfitting since training accuracy is higher). Note Step16: Visualize the model metrics in TensorBoard. Step17: Retrieve the trained word embeddings and save them to disk Next, retrieve the word embeddings learned during training. The embeddings are weights of the Embedding layer in the model. The weights matrix is of shape (vocab_size, embedding_dimension). Obtain the weights from the model using get_layer() and get_weights(). The get_vocabulary() function provides the vocabulary to build a metadata file with one token per line. Step18: Write the weights to disk. To use the Embedding Projector, you will upload two files in tab separated format Step19: If you are running this tutorial in Colaboratory, you can use the following snippet to download these files to your local machine (or use the file browser, View -> Table of contents -> File browser).
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2019 The TensorFlow Authors. End of explanation import io import os import re import shutil import string import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense, Embedding, GlobalAveragePooling1D from tensorflow.keras.layers import TextVectorization Explanation: Word embeddings <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/text/guide/word_embeddings"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/guide/word_embeddings.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/guide/word_embeddings.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/text/docs/guide/word_embeddings.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> This tutorial contains an introduction to word embeddings. You will train your own word embeddings using a simple Keras model for a sentiment classification task, and then visualize them in the Embedding Projector (shown in the image below). <img src="images/embedding.jpg" alt="Screenshot of the embedding projector" width="400"/> Representing text as numbers Machine learning models take vectors (arrays of numbers) as input. When working with text, the first thing you must do is come up with a strategy to convert strings to numbers (or to "vectorize" the text) before feeding it to the model. In this section, you will look at three strategies for doing so. One-hot encodings As a first idea, you might "one-hot" encode each word in your vocabulary. Consider the sentence "The cat sat on the mat". The vocabulary (or unique words) in this sentence is (cat, mat, on, sat, the). To represent each word, you will create a zero vector with length equal to the vocabulary, then place a one in the index that corresponds to the word. This approach is shown in the following diagram. <img src="images/one-hot.png" alt="Diagram of one-hot encodings" width="400" /> To create a vector that contains the encoding of the sentence, you could then concatenate the one-hot vectors for each word. Key point: This approach is inefficient. A one-hot encoded vector is sparse (meaning, most indices are zero). Imagine you have 10,000 words in the vocabulary. To one-hot encode each word, you would create a vector where 99.99% of the elements are zero. Encode each word with a unique number A second approach you might try is to encode each word using a unique number. Continuing the example above, you could assign 1 to "cat", 2 to "mat", and so on. 
You could then encode the sentence "The cat sat on the mat" as a dense vector like [5, 1, 4, 3, 5, 2]. This approach is efficient. Instead of a sparse vector, you now have a dense one (where all elements are full). There are two downsides to this approach, however: The integer-encoding is arbitrary (it does not capture any relationship between words). An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because there is no relationship between the similarity of any two words and the similarity of their encodings, this feature-weight combination is not meaningful. Word embeddings Word embeddings give us a way to use an efficient, dense representation in which similar words have a similar encoding. Importantly, you do not have to specify this encoding by hand. An embedding is a dense vector of floating point values (the length of the vector is a parameter you specify). Instead of specifying the values for the embedding manually, they are trainable parameters (weights learned by the model during training, in the same way a model learns weights for a dense layer). It is common to see word embeddings that are 8-dimensional (for small datasets), up to 1024-dimensions when working with large datasets. A higher dimensional embedding can capture fine-grained relationships between words, but takes more data to learn. <img src="images/embedding2.png" alt="Diagram of an embedding" width="400"/> Above is a diagram for a word embedding. Each word is represented as a 4-dimensional vector of floating point values. Another way to think of an embedding is as "lookup table". After these weights have been learned, you can encode each word by looking up the dense vector it corresponds to in the table. Setup End of explanation url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz" dataset = tf.keras.utils.get_file("aclImdb_v1.tar.gz", url, untar=True, cache_dir='.', cache_subdir='') dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb') os.listdir(dataset_dir) Explanation: Download the IMDb Dataset You will use the Large Movie Review Dataset through the tutorial. You will train a sentiment classifier model on this dataset and in the process learn embeddings from scratch. To read more about loading a dataset from scratch, see the Loading text tutorial. Download the dataset using Keras file utility and take a look at the directories. End of explanation train_dir = os.path.join(dataset_dir, 'train') os.listdir(train_dir) Explanation: Take a look at the train/ directory. It has pos and neg folders with movie reviews labelled as positive and negative respectively. You will use reviews from pos and neg folders to train a binary classification model. End of explanation remove_dir = os.path.join(train_dir, 'unsup') shutil.rmtree(remove_dir) Explanation: The train directory also has additional folders which should be removed before creating training dataset. End of explanation batch_size = 1024 seed = 123 train_ds = tf.keras.preprocessing.text_dataset_from_directory( 'aclImdb/train', batch_size=batch_size, validation_split=0.2, subset='training', seed=seed) val_ds = tf.keras.preprocessing.text_dataset_from_directory( 'aclImdb/train', batch_size=batch_size, validation_split=0.2, subset='validation', seed=seed) Explanation: Next, create a tf.data.Dataset using tf.keras.preprocessing.text_dataset_from_directory. You can read more about using this utility in this text classification tutorial. 
Use the train directory to create both train and validation datasets with a split of 20% for validation. End of explanation for text_batch, label_batch in train_ds.take(1): for i in range(5): print(label_batch[i].numpy(), text_batch.numpy()[i]) Explanation: Take a look at a few movie reviews and their labels (1: positive, 0: negative) from the train dataset. End of explanation AUTOTUNE = tf.data.AUTOTUNE train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE) val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE) Explanation: Configure the dataset for performance These are two important methods you should use when loading data to make sure that I/O does not become blocking. .cache() keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files. .prefetch() overlaps data preprocessing and model execution while training. You can learn more about both methods, as well as how to cache data to disk in the data performance guide. End of explanation # Embed a 1,000 word vocabulary into 5 dimensions. embedding_layer = tf.keras.layers.Embedding(1000, 5) Explanation: Using the Embedding layer Keras makes it easy to use word embeddings. Take a look at the Embedding layer. The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer. End of explanation result = embedding_layer(tf.constant([1, 2, 3])) result.numpy() Explanation: When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on). If you pass an integer to an embedding layer, the result replaces each integer with the vector from the embedding table: End of explanation result = embedding_layer(tf.constant([[0, 1, 2], [3, 4, 5]])) result.shape Explanation: For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape (samples, sequence_length), where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the embedding layer above batches with shapes (32, 10) (batch of 32 sequences of length 10) or (64, 15) (batch of 64 sequences of length 15). The returned tensor has one more axis than the input, the embedding vectors are aligned along the new last axis. Pass it a (2, 3) input batch and the output is (2, 3, N) End of explanation # Create a custom standardization function to strip HTML break tags '<br />'. def custom_standardization(input_data): lowercase = tf.strings.lower(input_data) stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ') return tf.strings.regex_replace(stripped_html, '[%s]' % re.escape(string.punctuation), '') # Vocabulary size and number of words in a sequence. vocab_size = 10000 sequence_length = 100 # Use the text vectorization layer to normalize, split, and map strings to # integers. 
Note that the layer uses the custom standardization defined above. # Set maximum_sequence length as all samples are not of the same length. vectorize_layer = TextVectorization( standardize=custom_standardization, max_tokens=vocab_size, output_mode='int', output_sequence_length=sequence_length) # Make a text-only dataset (no labels) and call adapt to build the vocabulary. text_ds = train_ds.map(lambda x, y: x) vectorize_layer.adapt(text_ds) Explanation: When given a batch of sequences as input, an embedding layer returns a 3D floating point tensor, of shape (samples, sequence_length, embedding_dimensionality). To convert from this sequence of variable length to a fixed representation there are a variety of standard approaches. You could use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it's the simplest. The Text Classification with an RNN tutorial is a good next step. Text preprocessing Next, define the dataset preprocessing steps required for your sentiment classification model. Initialize a TextVectorization layer with the desired parameters to vectorize movie reviews. You can learn more about using this layer in the Text Classification tutorial. End of explanation embedding_dim=16 model = Sequential([ vectorize_layer, Embedding(vocab_size, embedding_dim, name="embedding"), GlobalAveragePooling1D(), Dense(16, activation='relu'), Dense(1) ]) Explanation: Create a classification model Use the Keras Sequential API to define the sentiment classification model. In this case it is a "Continuous bag of words" style model. * The TextVectorization layer transforms strings into vocabulary indices. You have already initialized vectorize_layer as a TextVectorization layer and built it's vocabulary by calling adapt on text_ds. Now vectorize_layer can be used as the first layer of your end-to-end classification model, feeding transformed strings into the Embedding layer. * The Embedding layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: (batch, sequence, embedding). The GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible. The fixed-length output vector is piped through a fully-connected (Dense) layer with 16 hidden units. The last layer is densely connected with a single output node. Caution: This model doesn't use masking, so the zero-padding is used as part of the input and hence the padding length may affect the output. To fix this, see the masking and padding guide. End of explanation tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs") Explanation: Compile and train the model You will use TensorBoard to visualize metrics including loss and accuracy. Create a tf.keras.callbacks.TensorBoard. End of explanation model.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy']) model.fit( train_ds, validation_data=val_ds, epochs=15, callbacks=[tensorboard_callback]) Explanation: Compile and train the model using the Adam optimizer and BinaryCrossentropy loss. End of explanation model.summary() Explanation: With this approach the model reaches a validation accuracy of around 78% (note that the model is overfitting since training accuracy is higher). 
Note: Your results may be a bit different, depending on how weights were randomly initialized before training the embedding layer. You can look into the model summary to learn more about each layer of the model. End of explanation #docs_infra: no_execute %load_ext tensorboard %tensorboard --logdir logs Explanation: Visualize the model metrics in TensorBoard. End of explanation weights = model.get_layer('embedding').get_weights()[0] vocab = vectorize_layer.get_vocabulary() Explanation: Retrieve the trained word embeddings and save them to disk Next, retrieve the word embeddings learned during training. The embeddings are weights of the Embedding layer in the model. The weights matrix is of shape (vocab_size, embedding_dimension). Obtain the weights from the model using get_layer() and get_weights(). The get_vocabulary() function provides the vocabulary to build a metadata file with one token per line. End of explanation out_v = io.open('vectors.tsv', 'w', encoding='utf-8') out_m = io.open('metadata.tsv', 'w', encoding='utf-8') for index, word in enumerate(vocab): if index == 0: continue # skip 0, it's padding. vec = weights[index] out_v.write('\t'.join([str(x) for x in vec]) + "\n") out_m.write(word + "\n") out_v.close() out_m.close() Explanation: Write the weights to disk. To use the Embedding Projector, you will upload two files in tab separated format: a file of vectors (containing the embedding), and a file of meta data (containing the words). End of explanation try: from google.colab import files files.download('vectors.tsv') files.download('metadata.tsv') except Exception: pass Explanation: If you are running this tutorial in Colaboratory, you can use the following snippet to download these files to your local machine (or use the file browser, View -> Table of contents -> File browser). End of explanation
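As a quick sanity check before uploading the files to the Embedding Projector, the learned vectors can also be inspected directly in Python. The short sketch below is not part of the original tutorial; it compares vectors from the weights and vocab objects retrieved above using cosine similarity, and the example words are assumptions that must exist in the 10,000-token vocabulary for the lookups to succeed.

# Hedged sketch: compare learned embedding vectors with cosine similarity.
import numpy as np

word_to_index = {word: index for index, word in enumerate(vocab)}

def embedding_vector(word):
    # Rows of `weights` line up with positions in `vocab`.
    return weights[word_to_index[word]]

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Example words, assumed to be present in the learned vocabulary.
print(cosine_similarity(embedding_vector('great'), embedding_vector('awful')))
print(cosine_similarity(embedding_vector('great'), embedding_vector('wonderful')))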
Given the following text description, write Python code to implement the functionality described below step by step Description: Nuclear Data In this notebook, we will go through the salient features of the openmc.data package in the Python API. This package enables inspection, analysis, and conversion of nuclear data from ACE files. Most importantly, the package provides a mean to generate HDF5 nuclear data libraries that are used by the transport solver. Step1: Physical Data Some very helpful physical data is available as part of openmc.data Step2: The IncidentNeutron class The most useful class within the openmc.data API is IncidentNeutron, which stores to continuous-energy incident neutron data. This class has factory methods from_ace, from_endf, and from_hdf5 which take a data file on disk and parse it into a hierarchy of classes in memory. To demonstrate this feature, we will download an ACE file (which can be produced with NJOY 2016) and then load it in using the IncidentNeutron.from_ace method. Step3: Cross sections From Python, it's easy to explore (and modify) the nuclear data. Let's start off by reading the total cross section. Reactions are indexed using their "MT" number -- a unique identifier for each reaction defined by the ENDF-6 format. The MT number for the total cross section is 1. Step4: Cross sections for each reaction can be stored at multiple temperatures. To see what temperatures are available, we can look at the reaction's xs attribute. Step5: To find the cross section at a particular energy, 1 eV for example, simply get the cross section at the appropriate temperature and then call it as a function. Note that our nuclear data uses eV as the unit of energy. Step6: The xs attribute can also be called on an array of energies. Step7: A quick way to plot cross sections is to use the energy attribute of IncidentNeutron. This gives an array of all the energy values used in cross section interpolation for each temperature present. Step8: Reaction Data Most of the interesting data for an IncidentNeutron instance is contained within the reactions attribute, which is a dictionary mapping MT values to Reaction objects. Step9: Let's suppose we want to look more closely at the (n,2n) reaction. This reaction has an energy threshold Step10: The (n,2n) cross section, like all basic cross sections, is represented by the Tabulated1D class. The energy and cross section values in the table can be directly accessed with the x and y attributes. Using the x and y has the nice benefit of automatically acounting for reaction thresholds. Step11: To get information on the energy and angle distribution of the neutrons emitted in the reaction, we need to look at the products attribute. Step12: We see that the neutrons emitted have a correlated angle-energy distribution. Let's look at the energy_out attribute to see what the outgoing energy distributions are. Step13: Here we see we have a tabulated outgoing energy distribution for each incoming energy. Note that the same probability distribution classes that we could use to create a source definition are also used within the openmc.data package. Let's plot every fifth distribution to get an idea of what they look like. Step14: Unresolved resonance probability tables We can also look at unresolved resonance probability tables which are stored in a ProbabilityTables object. In the following example, we'll create a plot showing what the total cross section probability tables look like as a function of incoming energy. 
Step15: Exporting HDF5 data If you have an instance IncidentNeutron that was created from ACE or HDF5 data, you can easily write it to disk using the export_to_hdf5() method. This can be used to convert ACE to HDF5 or to take an existing data set and actually modify cross sections. Step16: With few exceptions, the HDF5 file encodes the same data as the ACE file. Step17: And one of the best parts of using HDF5 is that it is a widely used format with lots of third-party support. You can use h5py, for example, to inspect the data. Step18: So we see that the hierarchy of data within the HDF5 mirrors the hierarchy of Python objects that we manipulated before. Step19: Working with ENDF files In addition to being able to load ACE and HDF5 data, we can also load ENDF data directly into an IncidentNeutron instance using the from_endf() factory method. Let's download the ENDF/B-VII.1 evaluation for $^{157}$Gd and load it in Step20: Just as before, we can get a reaction by indexing the object directly Step21: However, if we look at the cross section now, we see that it isn't represented as tabulated data anymore. Step22: If you had Cython installed when you built/installed OpenMC, you should be able to evaluate resonant cross sections from ENDF data directly, i.e., OpenMC will reconstruct resonances behind the scenes for you. Step23: When data is loaded from an ENDF file, there is also a special resonances attribute that contains resolved and unresolved resonance region data (from MF=2 in an ENDF file). Step24: We see that $^{157}$Gd has a resolved resonance region represented in the Reich-Moore format as well as an unresolved resonance region. We can look at the min/max energy of each region by doing the following Step25: With knowledge of the energy bounds, let's create an array of energies over the entire resolved resonance range and plot the elastic scattering cross section. Step26: Resonance ranges also have a useful parameters attribute that shows the energies and widths for resonances. Step27: Heavy-nuclide resonance scattering OpenMC has two methods for accounting for resonance upscattering in heavy nuclides, DBRC and RVS. These methods rely on 0 K elastic scattering data being present. If you have an existing ACE/HDF5 dataset and you need to add 0 K elastic scattering data to it, this can be done using the IncidentNeutron.add_elastic_0K_from_endf() method. Let's do this with our original gd157 object that we instantiated from an ACE file. Step28: Let's check to make sure that we have both the room temperature elastic scattering cross section as well as a 0K cross section. Step29: Generating data from NJOY To run OpenMC in continuous-energy mode, you generally need to have ACE files already available that can be converted to OpenMC's native HDF5 format. If you don't already have suitable ACE files or need to generate new data, both the IncidentNeutron and ThermalScattering classes include from_njoy() methods that will run NJOY to generate ACE files and then read those files to create OpenMC class instances. The from_njoy() methods take as input the name of an ENDF file on disk. By default, it is assumed that you have an executable named njoy available on your path. This can be configured with the optional njoy_exec argument. Additionally, if you want to show the progress of NJOY as it is running, you can pass stdout=True. Let's use IncidentNeutron.from_njoy() to run NJOY to create data for $^2$H using an ENDF file. We'll specify that we want data specifically at 300, 400, and 500 K. 
Step30: Now we can use our h2 object just as we did before. Step31: Note that 0 K elastic scattering data is automatically added when using from_njoy() so that resonance elastic scattering treatments can be used. Windowed multipole OpenMC can also be used with an experimental format called windowed multipole. Windowed multipole allows for analytic on-the-fly Doppler broadening of the resolved resonance range. Windowed multipole data can be downloaded with the openmc-get-multipole-data script. This data can be used in the transport solver, but it can also be used directly in the Python API. Step32: The WindowedMultipole object can be called with energy and temperature values. Calling the object gives a tuple of 3 cross sections Step33: An array can be passed for the energy argument. Step34: The real advantage to multipole is that it can be used to generate cross sections at any temperature. For example, this plot shows the Doppler broadening of the 6.67 eV resonance between 0 K and 900 K.
Python Code: %matplotlib inline import os from pprint import pprint import shutil import subprocess import urllib.request import h5py import numpy as np import matplotlib.pyplot as plt import matplotlib.cm from matplotlib.patches import Rectangle import openmc.data Explanation: Nuclear Data In this notebook, we will go through the salient features of the openmc.data package in the Python API. This package enables inspection, analysis, and conversion of nuclear data from ACE files. Most importantly, the package provides a mean to generate HDF5 nuclear data libraries that are used by the transport solver. End of explanation openmc.data.atomic_mass('Fe54') openmc.data.NATURAL_ABUNDANCE['H2'] openmc.data.atomic_weight('C') Explanation: Physical Data Some very helpful physical data is available as part of openmc.data: atomic masses, natural abundances, and atomic weights. End of explanation url = 'https://anl.box.com/shared/static/kxm7s57z3xgfbeq29h54n7q6js8rd11c.ace' filename, headers = urllib.request.urlretrieve(url, 'gd157.ace') # Load ACE data into object gd157 = openmc.data.IncidentNeutron.from_ace('gd157.ace') gd157 Explanation: The IncidentNeutron class The most useful class within the openmc.data API is IncidentNeutron, which stores to continuous-energy incident neutron data. This class has factory methods from_ace, from_endf, and from_hdf5 which take a data file on disk and parse it into a hierarchy of classes in memory. To demonstrate this feature, we will download an ACE file (which can be produced with NJOY 2016) and then load it in using the IncidentNeutron.from_ace method. End of explanation total = gd157[1] total Explanation: Cross sections From Python, it's easy to explore (and modify) the nuclear data. Let's start off by reading the total cross section. Reactions are indexed using their "MT" number -- a unique identifier for each reaction defined by the ENDF-6 format. The MT number for the total cross section is 1. End of explanation total.xs Explanation: Cross sections for each reaction can be stored at multiple temperatures. To see what temperatures are available, we can look at the reaction's xs attribute. End of explanation total.xs['294K'](1.0) Explanation: To find the cross section at a particular energy, 1 eV for example, simply get the cross section at the appropriate temperature and then call it as a function. Note that our nuclear data uses eV as the unit of energy. End of explanation total.xs['294K']([1.0, 2.0, 3.0]) Explanation: The xs attribute can also be called on an array of energies. End of explanation gd157.energy energies = gd157.energy['294K'] total_xs = total.xs['294K'](energies) plt.loglog(energies, total_xs) plt.xlabel('Energy (eV)') plt.ylabel('Cross section (b)') Explanation: A quick way to plot cross sections is to use the energy attribute of IncidentNeutron. This gives an array of all the energy values used in cross section interpolation for each temperature present. End of explanation pprint(list(gd157.reactions.values())[:10]) Explanation: Reaction Data Most of the interesting data for an IncidentNeutron instance is contained within the reactions attribute, which is a dictionary mapping MT values to Reaction objects. End of explanation n2n = gd157[16] print('Threshold = {} eV'.format(n2n.xs['294K'].x[0])) Explanation: Let's suppose we want to look more closely at the (n,2n) reaction. 
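A minimal sketch of that plot is given below, reusing the resonance bounds and the 0 K elastic cross section accessed above. It simply follows the same plotting pattern as the earlier cross-section figures in this notebook and should be read as an illustration under those assumptions, not as the notebook's own cell.

# Hedged sketch: evaluate the reconstructed elastic cross section across the
# resolved resonance range of the ENDF evaluation loaded above and plot it.
resolved = gd157_endf.resonances.resolved
energies = np.logspace(np.log10(resolved.energy_min),
                       np.log10(resolved.energy_max), 1000)
xs = gd157_endf[2].xs['0K'](energies)

plt.loglog(energies, xs)
plt.xlabel('Energy (eV)')
plt.ylabel('Elastic cross section (b)')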
This reaction has an energy threshold End of explanation n2n.xs xs = n2n.xs['294K'] plt.plot(xs.x, xs.y) plt.xlabel('Energy (eV)') plt.ylabel('Cross section (b)') plt.xlim((xs.x[0], xs.x[-1])) Explanation: The (n,2n) cross section, like all basic cross sections, is represented by the Tabulated1D class. The energy and cross section values in the table can be directly accessed with the x and y attributes. Using the x and y has the nice benefit of automatically acounting for reaction thresholds. End of explanation n2n.products neutron = n2n.products[0] neutron.distribution Explanation: To get information on the energy and angle distribution of the neutrons emitted in the reaction, we need to look at the products attribute. End of explanation dist = neutron.distribution[0] dist.energy_out Explanation: We see that the neutrons emitted have a correlated angle-energy distribution. Let's look at the energy_out attribute to see what the outgoing energy distributions are. End of explanation for e_in, e_out_dist in zip(dist.energy[::5], dist.energy_out[::5]): plt.semilogy(e_out_dist.x, e_out_dist.p, label='E={:.2f} MeV'.format(e_in/1e6)) plt.ylim(top=1e-6) plt.legend() plt.xlabel('Outgoing energy (eV)') plt.ylabel('Probability/eV') plt.show() Explanation: Here we see we have a tabulated outgoing energy distribution for each incoming energy. Note that the same probability distribution classes that we could use to create a source definition are also used within the openmc.data package. Let's plot every fifth distribution to get an idea of what they look like. End of explanation fig = plt.figure() ax = fig.add_subplot(111) cm = matplotlib.cm.Spectral_r # Determine size of probability tables urr = gd157.urr['294K'] n_energy = urr.table.shape[0] n_band = urr.table.shape[2] for i in range(n_energy): # Get bounds on energy if i > 0: e_left = urr.energy[i] - 0.5*(urr.energy[i] - urr.energy[i-1]) else: e_left = urr.energy[i] - 0.5*(urr.energy[i+1] - urr.energy[i]) if i < n_energy - 1: e_right = urr.energy[i] + 0.5*(urr.energy[i+1] - urr.energy[i]) else: e_right = urr.energy[i] + 0.5*(urr.energy[i] - urr.energy[i-1]) for j in range(n_band): # Determine maximum probability for a single band max_prob = np.diff(urr.table[i,0,:]).max() # Determine bottom of band if j > 0: xs_bottom = urr.table[i,1,j] - 0.5*(urr.table[i,1,j] - urr.table[i,1,j-1]) value = (urr.table[i,0,j] - urr.table[i,0,j-1])/max_prob else: xs_bottom = urr.table[i,1,j] - 0.5*(urr.table[i,1,j+1] - urr.table[i,1,j]) value = urr.table[i,0,j]/max_prob # Determine top of band if j < n_band - 1: xs_top = urr.table[i,1,j] + 0.5*(urr.table[i,1,j+1] - urr.table[i,1,j]) else: xs_top = urr.table[i,1,j] + 0.5*(urr.table[i,1,j] - urr.table[i,1,j-1]) # Draw rectangle with appropriate color ax.add_patch(Rectangle((e_left, xs_bottom), e_right - e_left, xs_top - xs_bottom, color=cm(value))) # Overlay total cross section ax.plot(gd157.energy['294K'], total.xs['294K'](gd157.energy['294K']), 'k') # Make plot pretty and labeled ax.set_xlim(1.0, 1.0e5) ax.set_ylim(1e-1, 1e4) ax.set_xscale('log') ax.set_yscale('log') ax.set_xlabel('Energy (eV)') ax.set_ylabel('Cross section(b)') Explanation: Unresolved resonance probability tables We can also look at unresolved resonance probability tables which are stored in a ProbabilityTables object. In the following example, we'll create a plot showing what the total cross section probability tables look like as a function of incoming energy. 
End of explanation gd157.export_to_hdf5('gd157.h5', 'w') Explanation: Exporting HDF5 data If you have an instance IncidentNeutron that was created from ACE or HDF5 data, you can easily write it to disk using the export_to_hdf5() method. This can be used to convert ACE to HDF5 or to take an existing data set and actually modify cross sections. End of explanation gd157_reconstructed = openmc.data.IncidentNeutron.from_hdf5('gd157.h5') np.all(gd157[16].xs['294K'].y == gd157_reconstructed[16].xs['294K'].y) Explanation: With few exceptions, the HDF5 file encodes the same data as the ACE file. End of explanation h5file = h5py.File('gd157.h5', 'r') main_group = h5file['Gd157/reactions'] for name, obj in sorted(list(main_group.items()))[:10]: if 'reaction_' in name: print('{}, {}'.format(name, obj.attrs['label'].decode())) n2n_group = main_group['reaction_016'] pprint(list(n2n_group.values())) Explanation: And one of the best parts of using HDF5 is that it is a widely used format with lots of third-party support. You can use h5py, for example, to inspect the data. End of explanation n2n_group['294K/xs'][()] Explanation: So we see that the hierarchy of data within the HDF5 mirrors the hierarchy of Python objects that we manipulated before. End of explanation # Download ENDF file url = 'https://t2.lanl.gov/nis/data/data/ENDFB-VII.1-neutron/Gd/157' filename, headers = urllib.request.urlretrieve(url, 'gd157.endf') # Load into memory gd157_endf = openmc.data.IncidentNeutron.from_endf(filename) gd157_endf Explanation: Working with ENDF files In addition to being able to load ACE and HDF5 data, we can also load ENDF data directly into an IncidentNeutron instance using the from_endf() factory method. Let's download the ENDF/B-VII.1 evaluation for $^{157}$Gd and load it in: End of explanation elastic = gd157_endf[2] Explanation: Just as before, we can get a reaction by indexing the object directly: End of explanation elastic.xs Explanation: However, if we look at the cross section now, we see that it isn't represented as tabulated data anymore. End of explanation elastic.xs['0K'](0.0253) Explanation: If you had Cython installed when you built/installed OpenMC, you should be able to evaluate resonant cross sections from ENDF data directly, i.e., OpenMC will reconstruct resonances behind the scenes for you. End of explanation gd157_endf.resonances.ranges Explanation: When data is loaded from an ENDF file, there is also a special resonances attribute that contains resolved and unresolved resonance region data (from MF=2 in an ENDF file). End of explanation [(r.energy_min, r.energy_max) for r in gd157_endf.resonances.ranges] Explanation: We see that $^{157}$Gd has a resolved resonance region represented in the Reich-Moore format as well as an unresolved resonance region. We can look at the min/max energy of each region by doing the following: End of explanation # Create log-spaced array of energies resolved = gd157_endf.resonances.resolved energies = np.logspace(np.log10(resolved.energy_min), np.log10(resolved.energy_max), 1000) # Evaluate elastic scattering xs at energies xs = elastic.xs['0K'](energies) # Plot cross section vs energies plt.loglog(energies, xs) plt.xlabel('Energy (eV)') plt.ylabel('Cross section (b)') Explanation: With knowledge of the energy bounds, let's create an array of energies over the entire resolved resonance range and plot the elastic scattering cross section. 
End of explanation resolved.parameters.head(10) Explanation: Resonance ranges also have a useful parameters attribute that shows the energies and widths for resonances. End of explanation gd157.add_elastic_0K_from_endf('gd157.endf') Explanation: Heavy-nuclide resonance scattering OpenMC has two methods for accounting for resonance upscattering in heavy nuclides, DBRC and RVS. These methods rely on 0 K elastic scattering data being present. If you have an existing ACE/HDF5 dataset and you need to add 0 K elastic scattering data to it, this can be done using the IncidentNeutron.add_elastic_0K_from_endf() method. Let's do this with our original gd157 object that we instantiated from an ACE file. End of explanation gd157[2].xs Explanation: Let's check to make sure that we have both the room temperature elastic scattering cross section as well as a 0K cross section. End of explanation # Download ENDF file url = 'https://t2.lanl.gov/nis/data/data/ENDFB-VII.1-neutron/H/2' filename, headers = urllib.request.urlretrieve(url, 'h2.endf') # Run NJOY to create deuterium data h2 = openmc.data.IncidentNeutron.from_njoy('h2.endf', temperatures=[300., 400., 500.], stdout=True) Explanation: Generating data from NJOY To run OpenMC in continuous-energy mode, you generally need to have ACE files already available that can be converted to OpenMC's native HDF5 format. If you don't already have suitable ACE files or need to generate new data, both the IncidentNeutron and ThermalScattering classes include from_njoy() methods that will run NJOY to generate ACE files and then read those files to create OpenMC class instances. The from_njoy() methods take as input the name of an ENDF file on disk. By default, it is assumed that you have an executable named njoy available on your path. This can be configured with the optional njoy_exec argument. Additionally, if you want to show the progress of NJOY as it is running, you can pass stdout=True. Let's use IncidentNeutron.from_njoy() to run NJOY to create data for $^2$H using an ENDF file. We'll specify that we want data specifically at 300, 400, and 500 K. End of explanation h2[2].xs Explanation: Now we can use our h2 object just as we did before. End of explanation url = 'https://github.com/mit-crpg/WMP_Library/releases/download/v1.1/092238.h5' filename, headers = urllib.request.urlretrieve(url, '092238.h5') u238_multipole = openmc.data.WindowedMultipole.from_hdf5('092238.h5') Explanation: Note that 0 K elastic scattering data is automatically added when using from_njoy() so that resonance elastic scattering treatments can be used. Windowed multipole OpenMC can also be used with an experimental format called windowed multipole. Windowed multipole allows for analytic on-the-fly Doppler broadening of the resolved resonance range. Windowed multipole data can be downloaded with the openmc-get-multipole-data script. This data can be used in the transport solver, but it can also be used directly in the Python API. End of explanation u238_multipole(1.0, 294) Explanation: The WindowedMultipole object can be called with energy and temperature values. Calling the object gives a tuple of 3 cross sections: elastic scattering, radiative capture, and fission. End of explanation E = np.linspace(5, 25, 1000) plt.semilogy(E, u238_multipole(E, 293.606)[1]) Explanation: An array can be passed for the energy argument. 
End of explanation E = np.linspace(6.1, 7.1, 1000) plt.semilogy(E, u238_multipole(E, 0)[1]) plt.semilogy(E, u238_multipole(E, 900)[1]) Explanation: The real advantage to multipole is that it can be used to generate cross sections at any temperature. For example, this plot shows the Doppler broadening of the 6.67 eV resonance between 0 K and 900 K. End of explanation
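To make the Doppler-broadening point above concrete, here is a minimal sketch (assuming the u238_multipole object loaded earlier in this notebook) that tabulates how the peak of the 6.67 eV capture resonance drops and shifts as temperature rises; the temperatures chosen are arbitrary.
import numpy as np
E = np.linspace(6.1, 7.1, 1000)
for T in [0.0, 300.0, 600.0, 900.0]:
    capture = u238_multipole(E, T)[1]  # index 1 = radiative capture
    print("T = {:5.0f} K  peak capture = {:9.1f} b at E = {:.3f} eV".format(
        T, capture.max(), E[np.argmax(capture)]))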
1,659
Given the following text description, write Python code to implement the functionality described below step by step Description: Open Traffic Reporter Map-Matching Optimization The Open Traffic Reporter map-matching service is based on the Hidden Markov Model (HMM) design of Newson and Krumm (2009). Skipping over 99% of the inner workings of how HMMs work (for more detail see here), there are two principal parameters in the HMM algorithm that must be estimated from data: the GPS noise parameter $\sigma_z$ and the transition parameter $\beta$. $$ p(z_t|r_i) = \frac{1}{\sqrt{2 \pi}\,\sigma_z} e^{-0.5\left(\frac{||z_t - x_{t,i}||_{\text{great circle}}}{\sigma_z}\right)^2}$$ Step1: 1. Generate Random Routes Generate routes from Google Maps POIs or Mapzen Venues Step2: Save or load a specific set of routes Step3: 2. Grid Search for Optimal Parameter Values Step4: 3. Plot the Curves Step5: 5. Visualize Routes
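A hedged sketch of the emission probability above: emission_probability is a hypothetical helper (not part of the reporter code), the default sigma_z is only a placeholder in the few-metres range, and the argument is the great-circle distance in metres between the GPS point and a candidate road segment.
import math
def emission_probability(gc_distance_m, sigma_z=4.0):
    # p(z_t | r_i) = 1 / (sqrt(2*pi) * sigma_z) * exp(-0.5 * (d / sigma_z)**2)
    return math.exp(-0.5 * (gc_distance_m / sigma_z) ** 2) / (math.sqrt(2.0 * math.pi) * sigma_z)
print(emission_probability(0.0), emission_probability(10.0))  # nearby candidates dominate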
Python Code: from __future__ import division from matplotlib import pyplot as plt import numpy as np import os import urllib import json import pandas as pd from random import shuffle, choice import pickle import sys; sys.path.insert(0, os.path.abspath('..')); import validator.validator as val %matplotlib inline mapzenKey = os.environ.get('MAPZEN_API') gmapsKey = os.environ.get('GOOGLE_MAPS') Explanation: Open Traffic Reporter Map-Matching Optimization The Open Traffic Reporter map-matching service is based on the Hidden Markov Model (HMM) design of Newton and Krumm (2009). Skipping over 99% of the innerworkings of how HMMs work (for more detail see here) there are two principal parameters in the HMM algorithm that must be estimated from data. $$ p(z_t|r_i) = \frac{1}{\sqrt{2 \pi \sigma_z}} e^{-0.5(\frac{||z_t - x_{t,i}||_{\text{great circle}}}{\sigma_z})^2}$$ End of explanation # routeList = val.get_POI_routes_by_length('Paris', 1, 5, 20, gmapsKey) routeList = val.get_routes_by_length('San Francisco', 1, 5, 20, mapzenKey) Explanation: 1. Generate Random Routes Generate routes from Google Maps POIs or Mapzen Venues End of explanation # routeList = pickle.load(open('sf_routes.pkl','rb')) # pickle.dump(routeList, open('saf_routes.pkl','wb')) Explanation: Save or load a specific set of routes End of explanation df = pd.DataFrame(columns=['beta', 'sigma_z', 'score']) outDfRow = -1 saveResults = False noiseLevels = np.linspace(0, 100, 21) noiseLevels = [5] sampleRates = [1, 5, 10, 20, 30] sampleRates = [5] betas = np.linspace(1,10,19) sigmaZs = np.linspace(1,10,19) for i, rteCoords in enumerate(routeList): print("Processig route {0} of {1}".format(i, len(routeList))) shape, routeUrl = val.get_route_shape(rteCoords) for beta in betas: for sigmaZ in sigmaZs: print("Computing score for sigma_z: {0}, beta: {1}".format( sigmaZ, beta)) edges, shapeCoords, traceAttrUrl = val.get_trace_attrs( shape, beta=beta, sigmaZ=sigmaZ) edges = val.get_coords_per_second(shapeCoords, edges, '2768') for noise in noiseLevels: noise = round(noise,3) for sampleRate in sampleRates: outDfRow += 1 df.loc[outDfRow, ['beta','sigma_z']] = [beta, sigmaZ] dfEdges = val.format_edge_df(edges) dfEdges, jsonDict, geojson = val.synthesize_gps( dfEdges, shapeCoords, '2768', noise=noise, sampleRate=sampleRate, beta=beta, sigmaZ=sigmaZ) segments, reportUrl = val.get_reporter_segments(jsonDict) matches, score = val.get_matches(segments, dfEdges) df.loc[outDfRow, 'score'] = score if saveResults: matches.to_csv( '../data/matches_{0}_to_{1}_w_{2}_m_noise_at_{3}_Hz.csv'.format( stName, endName, str(noise), str(Hz)), index=False) with open('../data/trace_{0}_to_{1}_w_{2}_m_noise_at_{3}_Hz.geojson'.format( stName, endName, str(noise), str(Hz)), 'w+') as fp: json.dump(geojson, fp) Explanation: 2. Grid Search for Optimal Parameter Values End of explanation df['score'] = df['score'].astype(float) df['beta'] = df['beta'].astype(float) df['sigma_z'] = df['sigma_z'].astype(float) df.groupby('beta').agg('mean').reset_index().plot('beta','score', figsize=(12,8)) df.groupby('sigma_z').agg('mean').reset_index().plot('sigma_z','score', figsize=(12,8)) Explanation: 3. Plot the Curves End of explanation geojsonList = [trace for trace in os.listdir('../data/') if trace.endswith('json')] fname = '../data/' + choice(geojsonList) val.generate_route_map(fname, 14) fname Explanation: 5. Visualize Routes End of explanation
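A possible follow-up, assuming the df assembled by the grid search above (columns beta, sigma_z, score): pick the parameter pair with the highest mean match score.
df['score'] = df['score'].astype(float)  # ensure numeric, as in the plotting cell above
mean_scores = df.groupby(['beta', 'sigma_z'])['score'].mean()
best_beta, best_sigma_z = mean_scores.idxmax()
print("best mean score {:.3f} at beta={} sigma_z={}".format(mean_scores.max(), best_beta, best_sigma_z))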
1,660
Given the following text description, write Python code to implement the functionality described below step by step Description: Plotting brightness temperatures in a Lambert Conformal Conic map projection In this notebook we're going to continue working with http Step1: the glob function finds a file using a wildcard to save typing (google Step2: Read the radiance data from MODIS_SWATH_Type_L1B/Data Fields/EV_1KM_Emissive note that channel 31 occurs at index value 10 Step3: the data is stored as unsigned, 2 byte integers which can hold values from 0 to $2^{16}$ - 1 = 65,535 Step4: we need to apply a scale and offset to convert to radiance (the netcdf module did this for us automatically): $Data = (RawData - offset) \times scale$ This information is included in the attributes of each variable. (see page 36 of the MODIS users guide) here is the scale for all 16 channels Step5: and here is the offset for 16 channels Step7: now convert this to brightness temperature Step8: histogram the calibrated radiances and show that they lie between 0-10 $W\,m^{-2}\,\mu m^{-1}\,sr^{-1}$ Step9: Read MYD03 Geolocation Fields note that the longitude and latitude arrays are (406,271) while the actual data are (2030,1354). These lat/lon arrays show only every fifth row and column. We need to get the full lat/lon arrays from the MYD03 file Step10: now regrid the radiances and brightness temperatures on a 0.1 x 0.1 degree regular lat/lon grid Step11: Plot this gridded data without a map projection Step12: Now replot using an lcc (Lambert conformal conic) projection from basemap at http Step13: repeat for brightness temperature
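A tiny illustration of the calibration equation above; the offset and scale below are made-up placeholders, while the real values are read from the file attributes in the code that follows.
import numpy as np
raw = np.array([9000, 12000, 15000], dtype=np.uint16)  # hypothetical raw counts
offset, scale = 1500.0, 0.0008                         # placeholder calibration values
radiance = (raw.astype(np.float64) - offset) * scale   # Data = (RawData - offset) * scale
print(radiance)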
Python Code: from __future__ import print_function import os,site import glob import h5py from IPython.display import Image import numpy as np from matplotlib import pyplot as plt # # add the lib folder to the path assuming it is on the same # level as the notebooks folder # libdir=os.path.abspath('../lib') site.addsitedir(libdir) import h5dump Explanation: Plotting brightness temperatures in a Lambert Conformal Conic map projection In this notebook we're going to continue working with http://clouds.eos.ubc.ca/~phil/Downloads/a301/MYD021KM.A2005188.0405.005.2009232180906.h5. We will also need to go to Laadsweb and do a wildcard search on the filename: MYD03.A2005188.0405.* which will produce the geometry file which can be downloaded and converted to hdf5 to get http://clouds.eos.ubc.ca/~phil/Downloads/a301/MYD03.A2005188.0405.005.2009231234639.h5 The MYD03 file contains the lat/lon for every pixel and is described at http://modaps.nascom.nasa.gov/services/about/products/MYD03.html Recall that we read and converted the channel 31 radiance: End of explanation h5_filename=glob.glob('../data/MYD02*.h5') print("found {}".format(h5_filename)) h5_file=h5py.File(h5_filename[0]) Explanation: the glob function finds a file using a wildcard to save typing (google: python glob wildcard) End of explanation index31=10 Explanation: Read the radiance data from MODIS_SWATH_Type_L1B/Data Fields/EV_1KM_Emissive note that channel 31 occurs at index value 10 End of explanation chan31=h5_file['MODIS_SWATH_Type_L1B']['Data Fields']['EV_1KM_Emissive'][index31,:,:] print(chan31.shape,chan31.dtype) chan31[:3,:3] Explanation: the data is stored as unsigned, 2 byte integers which can hold values from 0 to $2^{16}$ - 1 = 65,535 End of explanation scale=h5_file['MODIS_SWATH_Type_L1B']['Data Fields']['EV_1KM_Emissive'].attrs['radiance_scales'][...] print(scale) Explanation: we need to apply a scale and offset to convert to radiance (the netcdf module did this for us automatically $Data = (RawData - offset) \times scale$ this information is included in the attributes of each variable. (see page 36 of the Modis users guide ) here is the scale for all 16 channels End of explanation offset=h5_file['MODIS_SWATH_Type_L1B']['Data Fields']['EV_1KM_Emissive'].attrs['radiance_offsets'][...] print(offset) chan31=(chan31 - offset[index31])*scale[index31] Explanation: and here is the offset for 16 channels End of explanation def planckInvert(wavel,Llambda): input wavelength in microns and Llambda in W/m^2/micron/sr, output output brightness temperature in K (note that we've remove the factor of pi because we are working with radiances, not fluxes) c=2.99792458e+08 #m/s -- speed of light in vacumn h=6.62606876e-34 #J s -- Planck's constant kb=1.3806503e-23 # J/K -- Boltzman's constant c1=2.*h*c**2. c2=h*c/kb Llambda=Llambda*1.e6 #convert to W/m^2/m/sr wavel=wavel*1.e-6 #convert wavelength to m Tbright=c2/(wavel*np.log(c1/(wavel**5.*Llambda) + 1.)) return Tbright chan31_Tbright=planckInvert(11.02, chan31) %matplotlib inline Explanation: now convert this to brightness temperature End of explanation import matplotlib.pyplot as plt out=plt.hist(chan31_Tbright.flat) Explanation: histogram the calibrated radiances and show that they lie between 0-10 $W\,m^{-2}\,\mu m^{-1}\,sr^{-1}$ End of explanation geom_filename=glob.glob('../data/MYD03*.h5') print("found {}".format(h5_filename)) geom_h5=h5py.File(geom_filename[0]) h5dump.dumph5(geom_h5) the_long=geom_h5['MODIS_Swath_Type_GEO']['Geolocation Fields']['Longitude'][...] 
the_lat=geom_h5['MODIS_Swath_Type_GEO']['Geolocation Fields']['Latitude'][...] print(the_long.shape,the_lat.shape) print('===================================================') print('Size of Longitude: {}'.format(the_long.shape)) print('Longitude Range: {} ~ {}'.format(np.min(the_long), np.max(the_long))) print('===================================================') print('Size of Latitude: {}'.format(the_lat.shape)) print('Latitude Range: {} ~ {}'.format(np.min(the_lat), np.max(the_lat))) def reproj_L1B(raw_data, raw_x, raw_y, xlim, ylim, res): ''' ========================================================================================= Reproject MODIS L1B file to a regular grid ----------------------------------------------------------------------------------------- d_array, x_array, y_array, bin_count = reproj_L1B(raw_data, raw_x, raw_y, xlim, ylim, res) ----------------------------------------------------------------------------------------- Input: raw_data: L1B data, N*M 2-D array. raw_x: longitude info. N*M 2-D array. raw_y: latitude info. N*M 2-D array. xlim: range of longitude, a list. ylim: range of latitude, a list. res: resolution, single value. Output: d_array: L1B reprojected data. x_array: reprojected longitude. y_array: reprojected latitude. bin_count: how many raw data point included in a reprojected grid. Note: function do not performs well if "res" is larger than the resolution of input data. size of "raw_data", "raw_x", "raw_y" must agree. ========================================================================================= ''' x_bins=np.arange(xlim[0], xlim[1], res) y_bins=np.arange(ylim[0], ylim[1], res) # x_indices=np.digitize(raw_x.flat, x_bins) # y_indices=np.digitize(raw_y.flat, y_bins) x_indices=np.searchsorted(x_bins, raw_x.flat, 'right') y_indices=np.searchsorted(y_bins, raw_y.flat, 'right') y_array=np.zeros([len(y_bins), len(x_bins)], dtype=np.float) x_array=np.zeros([len(y_bins), len(x_bins)], dtype=np.float) d_array=np.zeros([len(y_bins), len(x_bins)], dtype=np.float) bin_count=np.zeros([len(y_bins), len(x_bins)], dtype=np.int) for n in range(len(y_indices)): #indices bin_row=y_indices[n]-1 # '-1' is because we call 'right' in np.searchsorted. bin_col=x_indices[n]-1 bin_count[bin_row, bin_col] += 1 x_array[bin_row, bin_col] += raw_x.flat[n] y_array[bin_row, bin_col] += raw_y.flat[n] d_array[bin_row, bin_col] += raw_data.flat[n] for i in range(x_array.shape[0]): for j in range(x_array.shape[1]): if bin_count[i, j] > 0: x_array[i, j]=x_array[i, j]/bin_count[i, j] y_array[i, j]=y_array[i, j]/bin_count[i, j] d_array[i, j]=d_array[i, j]/bin_count[i, j] else: d_array[i, j]=np.nan x_array[i, j]=np.nan y_array[i,j]=np.nan return d_array, x_array, y_array, bin_count Explanation: Read MYD03 Geolocation Fields note that the longitude and latitude arrays are (406,271) while the actual data are (2030,1354). These lat/lon arrays show only every fifth row and column. 
We need to get the full lat/lon arrays from the MYD03 file End of explanation xlim=[np.min(the_long), np.max(the_long)] ylim=[np.min(the_lat), np.max(the_lat)] chan31_grid, longitude, latitude, bin_count = reproj_L1B(chan31, the_long, the_lat, xlim, ylim, 0.1) tbright_grid,longitude,latitude,bin_count=reproj_L1B(chan31_Tbright, the_long, the_lat, xlim, ylim, 0.1) chan31_grid=np.ma.masked_where(np.isnan(chan31_grid), chan31_grid) bin_count=np.ma.masked_where(np.isnan(bin_count), bin_count) longitude=np.ma.masked_where(np.isnan(longitude), longitude) latitude=np.ma.masked_where(np.isnan(latitude), latitude) longitude.shape Explanation: now regrid the radiances and brightness temperatures on a 0.1 x 0.1 degree regular lat/lon grid End of explanation fig=plt.figure(figsize=(10.5, 9.5)) ax=fig.add_subplot(111) ax.set_xlim(xlim[0], xlim[1]) ax.set_ylim(ylim[0], ylim[1]) image=ax.pcolormesh(longitude, latitude, chan31_grid) Explanation: Plot this gridded data without a map projections End of explanation from mpl_toolkits.basemap import Basemap lcc_values=dict(resolution='l',projection='lcc', lat_1=20,lat_2=40,lat_0=30,lon_0=135, llcrnrlon=120,llcrnrlat=20, urcrnrlon=150,urcrnrlat=42) proj=Basemap(**lcc_values) # create figure, add axes fig=plt.figure(figsize=(12, 12)) ax=fig.add_subplot(111) ## define parallels and meridians to draw. parallels=np.arange(-90, 90, 5) meridians=np.arange(0, 360, 5) proj.drawparallels(parallels, labels=[1, 0, 0, 0],\ fontsize=10, latmax=90) proj.drawmeridians(meridians, labels=[0, 0, 0, 1],\ fontsize=10, latmax=90) # draw coast & fill continents #map.fillcontinents(color=[0.25, 0.25, 0.25], lake_color=None) # coral out=proj.drawcoastlines(linewidth=1.5, linestyle='solid', color='k') x, y=proj(longitude, latitude) # contourf the bathmetry CS=proj.pcolor(x, y, chan31_grid, cmap=plt.cm.hot) # colorbar CBar=proj.colorbar(CS, 'right', size='5%', pad='5%') CBar.set_label('Channel 31 radiance ($W\,m^{-2}\,\mu m\,sr^{-1})$', fontsize=10) CBar.ax.tick_params(axis='y', length=0) Explanation: Now replot using an lcc (Lambert conformal conic) projection from basemap at http://matplotlib.org/basemap/users/examples.html End of explanation # create figure, add axes fig=plt.figure(figsize=(12, 12)) ax=fig.add_subplot(111) ## define parallels and meridians to draw. parallels=np.arange(-90, 90, 5) meridians=np.arange(0, 360, 5) proj.drawparallels(parallels, labels=[1, 0, 0, 0],\ fontsize=10, latmax=90) proj.drawmeridians(meridians, labels=[0, 0, 0, 1],\ fontsize=10, latmax=90) # draw coast & fill continents #map.fillcontinents(color=[0.25, 0.25, 0.25], lake_color=None) # coral out=proj.drawcoastlines(linewidth=1.5, linestyle='solid', color='k') x, y=proj(longitude, latitude) # contourf the bathmetry CS=proj.pcolor(x, y, tbright_grid, cmap=plt.cm.hot) # colorbar CBar=proj.colorbar(CS, 'right', size='5%', pad='5%') CBar.set_label('Channel 31 Brightness temperature (K)', fontsize=10) CBar.ax.tick_params(axis='y', length=0) Explanation: repeat for brightness temperature End of explanation
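An optional sanity check, assuming the planckInvert function defined above: a forward Planck radiance built with the same constants (output in W/m^2/micron/sr) should round-trip through planckInvert back to the input temperature.
import numpy as np
def planck_radiance(wavel_um, Temp_K):
    # forward Planck spectral radiance, same constants as planckInvert above
    c, h, kb = 2.99792458e8, 6.62606876e-34, 1.3806503e-23
    c1, c2 = 2.*h*c**2., h*c/kb
    wavel = wavel_um*1.e-6
    Llambda = c1/(wavel**5.*(np.exp(c2/(wavel*Temp_K)) - 1.))  # W/m^2/m/sr
    return Llambda*1.e-6                                       # W/m^2/micron/sr
print(planckInvert(11.02, planck_radiance(11.02, 280.)))  # should print ~280.0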
1,661
Given the following text description, write Python code to implement the functionality described below step by step Description: Porting Tensorflow tutorial "Deep MNIST for Experts" to polygoggles based on https Step1: What TensorFlow actually did in that single line was to add new operations to the computation graph. These operations included ones to compute gradients, compute parameter update steps, and apply update steps to the parameters. The returned operation train_step, when run, will apply the gradient descent updates to the parameters. Training the model can therefore be accomplished by repeatedly running train_step. Step2: Evaluate the Model How well did our model do? First we'll figure out where we predicted the correct label. tf.argmax is an extremely useful function which gives you the index of the highest entry in a tensor along some axis. For example, tf.argmax(y,1) is the label our model thinks is most likely for each input, while tf.argmax(y_,1) is the true label. We can use tf.equal to check if our prediction matches the truth. Step3: That gives us a list of booleans. To determine what fraction are correct, we cast to floating point numbers and then take the mean. For example, [True, False, True, True] would become [1,0,1,1] which would become 0.75. Step4: Finally, we can evaluate our accuracy on the test data. (On MNIST this should be about 91% correct.) Step5: Build a Multilayer Convolutional Network Getting 91% accuracy on MNIST is bad. It's almost embarrassingly bad. In this section, we'll fix that, jumping from a very simple model to something moderately sophisticated Step6: Convolution and Pooling TensorFlow also gives us a lot of flexibility in convolution and pooling operations. How do we handle the boundaries? What is our stride size? In this example, we're always going to choose the vanilla version. Our convolutions uses a stride of one and are zero padded so that the output is the same size as the input. Our pooling is plain old max pooling over 2x2 blocks. To keep our code cleaner, let's also abstract those operations into functions. Step7: First Convolutional Layer We can now implement our first layer. It will consist of convolution, followed by max pooling. The convolutional will compute 32 features for each 5x5 patch. Its weight tensor will have a shape of [5, 5, 1, 32]. The first two dimensions are the patch size, the next is the number of input channels, and the last is the number of output channels. We will also have a bias vector with a component for each output channel. Step8: To apply the layer, we first reshape x to a 4d tensor, with the second and third dimensions corresponding to image width and height, and the final dimension corresponding to the number of color channels. Step9: Second Convolutional Layer In order to build a deep network, we stack several layers of this type. The second layer will have 64 features for each 5x5 patch. Step10: Densely Connected Layer Now that the image size has been reduced to 7x7, we add a fully-connected layer with 1024 neurons to allow processing on the entire image. We reshape the tensor from the pooling layer into a batch of vectors, multiply by a weight matrix, add a bias, and apply a ReLU. XXX where is the 7x7 coming from? when bumping to width, height of 50 each Step11: Dropout To reduce overfitting, we will apply dropout before the readout layer. We create a placeholder for the probability that a neuron's output is kept during dropout. 
This allows us to turn dropout on during training, and turn it off during testing. TensorFlow's tf.nn.dropout op automatically handles scaling neuron outputs in addition to masking them, so dropout just works without any additional scaling. Step12: Readout Layer Finally, we add a softmax layer, just like for the one layer softmax regression above. Step13: Train and Evaluate the Model How well does this model do? To train and evaluate it we will use code that is nearly identical to that for the simple one layer SoftMax network above. The differences are that
Python Code: import math import os import tensorflow as tf import datasets import make_polygon_pngs use_MNIST_instead_of_our_data = False if use_MNIST_instead_of_our_data: width = 28 height = 28 num_training_steps = 20000 batch_size = 50 else: width = 70 # at 150, takes forever, 0 training accuracy after step 300 height = 70 num_training_steps = 1000 batch_size = 50 training_images = 5000 test_images = 1000 allow_rotation = True if use_MNIST_instead_of_our_data: from tensorflow.examples.tutorials.mnist import input_data data_sets = input_data.read_data_sets('MNIST_data', one_hot=True) else: collection_dir = make_polygon_pngs.make_collection(width, height, training_images, test_images, allow_rotation=allow_rotation) data_sets = datasets.read_data_sets(collection_dir) if not use_MNIST_instead_of_our_data: print("collection_dir:", collection_dir) random_img_basename = os.listdir(os.path.join(collection_dir, "train"))[0] random_img_full_name = os.path.join(collection_dir, "train", random_img_basename) print(random_img_full_name) from IPython.core.display import Image Image(filename=(random_img_full_name)) sess = tf.InteractiveSession() flat_size = width * height num_labels = data_sets.train.labels.shape[1] x = tf.placeholder(tf.float32, shape=[None, flat_size]) y_ = tf.placeholder(tf.float32, shape=[None, num_labels]) W = tf.Variable(tf.zeros([flat_size, num_labels])) b = tf.Variable(tf.zeros([num_labels])) sess.run(tf.initialize_all_variables()) # We can now implement our regression model. It only takes one line! # We multiply the vectorized input images x by the weight matrix W, add the bias b, # and compute the softmax probabilities that are assigned to each class. y = tf.nn.softmax(tf.matmul(x,W) + b) # The cost function to be minimized during training can be specified just as easily. # Our cost function will be the cross-entropy between the target and the model's prediction. cross_entropy = -tf.reduce_sum(y_*tf.log(y)) # Now that we have defined our model and training cost function, it is straightforward to train using TensorFlow. # Because TensorFlow knows the entire computation graph, it can use automatic differentiation to find # the gradients of the cost with respect to each of the variables. # TensorFlow has a variety of builtin optimization algorithms. # For this example, we will use steepest gradient descent, with a step length of 0.01, to descend the cross entropy. train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy) Explanation: Porting Tensorflow tutorial "Deep MNIST for Experts" to polygoggles based on https://www.tensorflow.org/versions/r0.7/tutorials/mnist/pros/index.html End of explanation for i in range(1000): batch = data_sets.train.next_batch(50) train_step.run(feed_dict={x: batch[0], y_: batch[1]}) Explanation: What TensorFlow actually did in that single line was to add new operations to the computation graph. These operations included ones to compute gradients, compute parameter update steps, and apply update steps to the parameters. The returned operation train_step, when run, will apply the gradient descent updates to the parameters. Training the model can therefore be accomplished by repeatedly running train_step. End of explanation correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1)) Explanation: Evaluate the Model How well did our model do? First we'll figure out where we predicted the correct label. tf.argmax is an extremely useful function which gives you the index of the highest entry in a tensor along some axis. 
For example, tf.argmax(y,1) is the label our model thinks is most likely for each input, while tf.argmax(y_,1) is the true label. We can use tf.equal to check if our prediction matches the truth. End of explanation accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) Explanation: That gives us a list of booleans. To determine what fraction are correct, we cast to floating point numbers and then take the mean. For example, [True, False, True, True] would become [1,0,1,1] which would become 0.75. End of explanation print(accuracy.eval(feed_dict={x: data_sets.test.images, y_: data_sets.test.labels})) Explanation: Finally, we can evaluate our accuracy on the test data. (On MNIST this should be about 91% correct.) End of explanation def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial) def bias_variable(shape): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial) Explanation: Build a Multilayer Convolutional Network Getting 91% accuracy on MNIST is bad. It's almost embarrassingly bad. In this section, we'll fix that, jumping from a very simple model to something moderately sophisticated: a small convolutional neural network. This will get us to around 99.2% accuracy -- not state of the art, but respectable. Weight Initialization To create this model, we're going to need to create a lot of weights and biases. One should generally initialize weights with a small amount of noise for symmetry breaking, and to prevent 0 gradients. Since we're using ReLU neurons, it is also good practice to initialize them with a slightly positive initial bias to avoid "dead neurons." Instead of doing this repeatedly while we build the model, let's create two handy functions to do it for us. End of explanation def conv2d(x, W): return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') def max_pool_2x2(x): return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') Explanation: Convolution and Pooling TensorFlow also gives us a lot of flexibility in convolution and pooling operations. How do we handle the boundaries? What is our stride size? In this example, we're always going to choose the vanilla version. Our convolutions uses a stride of one and are zero padded so that the output is the same size as the input. Our pooling is plain old max pooling over 2x2 blocks. To keep our code cleaner, let's also abstract those operations into functions. End of explanation W_conv1 = weight_variable([5, 5, 1, 32]) b_conv1 = bias_variable([32]) Explanation: First Convolutional Layer We can now implement our first layer. It will consist of convolution, followed by max pooling. The convolutional will compute 32 features for each 5x5 patch. Its weight tensor will have a shape of [5, 5, 1, 32]. The first two dimensions are the patch size, the next is the number of input channels, and the last is the number of output channels. We will also have a bias vector with a component for each output channel. End of explanation x_image = tf.reshape(x, [-1, width, height,1]) # XXX not sure which is width and which is height # We then convolve x_image with the weight tensor, add the bias, apply the ReLU function, and finally max pool. h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1) h_pool1 = max_pool_2x2(h_conv1) Explanation: To apply the layer, we first reshape x to a 4d tensor, with the second and third dimensions corresponding to image width and height, and the final dimension corresponding to the number of color channels. 
End of explanation W_conv2 = weight_variable([5, 5, 32, 64]) b_conv2 = bias_variable([64]) h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2) h_pool2 = max_pool_2x2(h_conv2) Explanation: Second Convolutional Layer In order to build a deep network, we stack several layers of this type. The second layer will have 64 features for each 5x5 patch. End of explanation def get_size_reduced_to_from_input_tensor_size(input_tensor_size): size_reduced_to_squared = input_tensor_size / 64. / batch_size # last divide is 50., pretty sure it's batch size return math.sqrt(size_reduced_to_squared) print(get_size_reduced_to_from_input_tensor_size(4620800)) print(get_size_reduced_to_from_input_tensor_size(1036800)) if use_MNIST_instead_of_our_data: size_reduced_to = 7 else: # for width & height = 50, size_reduced_to seems to be 13 # for width & height = 70, size_reduced_to seems to be 18 # for width & height = 150, size_reduced_to seems to be 38 size_reduced_to = 18 #W_fc1 = weight_variable([7 * 7 * 64, 1024]) W_fc1 = weight_variable([size_reduced_to * size_reduced_to * 64, 1024]) b_fc1 = bias_variable([1024]) #h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64]) h_pool2_flat = tf.reshape(h_pool2, [-1, size_reduced_to*size_reduced_to*64]) h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1) Explanation: Densely Connected Layer Now that the image size has been reduced to 7x7, we add a fully-connected layer with 1024 neurons to allow processing on the entire image. We reshape the tensor from the pooling layer into a batch of vectors, multiply by a weight matrix, add a bias, and apply a ReLU. XXX where is the 7x7 coming from? when bumping to width, height of 50 each: InvalidArgumentError: Input to reshape is a tensor with 540800 values, but the requested shape requires a multiple of 3136 7 x 7 x 64 = 3136 540800 / 64. = 8450 13 x 13 x 50 x 64 = 540800 On MNIST, if I change the densely connected layer to fail (change the 7x7x64 to 7x7x65 in both W_fcl and h_pool2_flat for example, then I get the following error as soon as start to train: InvalidArgumentError: Input to reshape is a tensor with 156800 values, but the requested shape requires a multiple of 3185 note 3185 = 7x7x65 156800 = 7 * 7 * 64 * 50 50 is batch size with width & height = 70: Input to reshape is a tensor with 1036800 values, but the requested shape requires a multiple of 10816 with width & height = 150: Input to reshape is a tensor with 4620800 values, but the requested shape requires a multiple of 20736 End of explanation keep_prob = tf.placeholder(tf.float32) h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob) Explanation: Dropout To reduce overfitting, we will apply dropout before the readout layer. We create a placeholder for the probability that a neuron's output is kept during dropout. This allows us to turn dropout on during training, and turn it off during testing. TensorFlow's tf.nn.dropout op automatically handles scaling neuron outputs in addition to masking them, so dropout just works without any additional scaling. End of explanation W_fc2 = weight_variable([1024, num_labels]) b_fc2 = bias_variable([num_labels]) y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2) Explanation: Readout Layer Finally, we add a softmax layer, just like for the one layer softmax regression above. 
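A short sketch addressing the "where is the 7x7 coming from?" note above: each max_pool_2x2 halves the spatial size (SAME padding rounds up), so two pooling stages give ceil(ceil(w/2)/2), which reproduces the 7, 13, 18 and 38 values quoted; the reduced size can also be read off the graph instead of being hard-coded.
import math
def reduced_size(w):
    # spatial size after two stride-2 max pools with SAME padding
    return int(math.ceil(math.ceil(w / 2.0) / 2.0))
print(reduced_size(28), reduced_size(50), reduced_size(70), reduced_size(150))  # 7 13 18 38
# alternatively: size_reduced_to = h_pool2.get_shape().as_list()[1]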
End of explanation cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv)) train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) sess.run(tf.initialize_all_variables()) for i in range(num_training_steps): batch = data_sets.train.next_batch(batch_size) if i%100 == 0: train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0}) print("step %d, training accuracy %g"%(i, train_accuracy)) train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5}) print("test accuracy %g"%accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})) Explanation: Train and Evaluate the Model How well does this model do? To train and evaluate it we will use code that is nearly identical to that for the simple one layer SoftMax network above. The differences are that: we will replace the steepest gradient descent optimizer with the more sophisticated ADAM optimizer; we will include the additional parameter keep_prob in feed_dict to control the dropout rate; and we will add logging to every 100th iteration in the training process. End of explanation
1,662
Given the following text description, write Python code to implement the functionality described below step by step Description: Title Step1: Load Boston Housing Dataset Step2: Standardize Features Step3: Fit Lasso Regression The hyperparameter, $\alpha$, lets us control how much we penalize the coefficients, with higher values of $\alpha$ creating simpler models. The ideal value of $\alpha$ should be tuned like any other hyperparameter. In scikit-learn, $\alpha$ is set using the alpha parameter.
Python Code: # Load library from sklearn.linear_model import Lasso from sklearn.datasets import load_boston from sklearn.preprocessing import StandardScaler Explanation: Title: Lasso Regression Slug: lasso_regression Summary: How to conduct lasso regression in scikit-learn for machine learning in Python. Date: 2017-09-18 12:00 Category: Machine Learning Tags: Linear Regression Authors: Chris Albon Preliminaries End of explanation # Load data boston = load_boston() X = boston.data y = boston.target Explanation: Load Boston Housing Dataset End of explanation # Standardize features scaler = StandardScaler() X_std = scaler.fit_transform(X) Explanation: Standardize Features End of explanation # Create lasso regression with alpha value regr = Lasso(alpha=0.5) # Fit the lasso regression model = regr.fit(X_std, y) Explanation: Fit Lasso Regression The hyperparameter, $\alpha$, lets us control how much we penalize the coefficients, with higher values of $\alpha$ creating simpler models. The ideal value of $\alpha$ should be tuned like any other hyperparameter. In scikit-learn, $\alpha$ is set using the alpha parameter. End of explanation
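One possible way to follow the "tune alpha" advice above is cross-validation; this sketch uses scikit-learn's LassoCV on the already standardized features, with an arbitrary candidate grid of alphas.
from sklearn.linear_model import LassoCV
regr_cv = LassoCV(alphas=[0.01, 0.1, 0.5, 1.0], cv=5)
model_cv = regr_cv.fit(X_std, y)
print(model_cv.alpha_)  # alpha selected by cross-validation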
1,663
Given the following text description, write Python code to implement the functionality described below step by step Description: Showing a Test Sample Step1: Finding a Valid Image Size Not all the images have the same dimensions, so the first thing we'll have to do is rescale them. That's because the model will only accept a specific image size. The strategy for finding this value will be to average the image sizes in the test dataset. After that, the whole training set will be rescaled as well using this value. Step2: Utils for Creating Batches During the training stage it's not a good idea to load the whole set of images into memory, since we might run out of it. Thus, we'll need a generator that gives us batches of images, so each batch stays in memory only as long as we need it. Step3: Testing the batches methods Step4: Preprocessing data Step5: Storing preprocessed batches on disk Step6: NOTE From this point on, the data is already processed and saved as pickle files. Building the Network Step7: Testing model
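The generator idea described above, in miniature (illustrative only; the real get_batches defined below works on the image directories): only one slice of the data is materialized per iteration.
def toy_batches(items, batch_size):
    # yield consecutive slices instead of holding everything in memory
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]
for batch in toy_batches(list(range(10)), 4):
    print(batch)  # [0, 1, 2, 3], [4, 5, 6, 7], [8, 9]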
Python Code: num_test_images = len(test_image_names) idx = random.randint(0, num_test_images) sample_file, sample_name = test_image_names[idx], test_image_names[idx].split('_')[:-1] path_file = os.path.join(test_root_path, sample_file) sample_image = imread(path_file) print("Id: {}, Image Label: {}, Shape: {}".format(idx, '_'.join(sample_name), sample_image.shape)) plt.figure(figsize=(3,3)) plt.imshow(sample_image) plt.axis('off') plt.show() Explanation: Showing a Test Sample End of explanation width = 0 height = 0 for i in range(num_test_images): path_file = os.path.join(test_root_path, test_image_names[i]) image = imread(path_file) width += image.shape[0] height += image.shape[1] width_mean = width//num_test_images height_mean = height//num_test_images dim_size = (width_mean + height_mean) // 2 print("Width (mean): {}".format(width_mean)) print("height (mean): {}".format(height_mean)) print("Input image size: {}".format(dim_size)) Explanation: Finding a Valid Image Size Not all the images have the same size dimensions so the first thing we'll have to do will be to rescale them. That's because the model will only accept a specific image size. The strategy to find out this value will be averaging the size in the test dataset. After that, the whole training set will be rescaled as well using this value. End of explanation def imresize(im): return np.array(Image.fromarray(im).resize((dim_size, dim_size))) def get_num_of_samples(): count = 0 for _,character in enumerate(character_directories): path = os.path.join(train_root_path, character) count += len(listdir(path)) return count def get_batch(batch_init, batch_size): data = {'image':[], 'label':[]} character_batch_size = batch_size//len(character_directories) character_batch_init = batch_init//len(character_directories) character_batch_end = character_batch_init + character_batch_size for _,character in enumerate(character_directories): path = os.path.join(train_root_path, character) images_list = listdir(path) for i in range(character_batch_init, character_batch_end): if len(images_list) == 0: continue #if this character has small number of features #we repeat them if i >= len(images_list): p = i % len(images_list) else: p = i path_file = os.path.join(path, images_list[p]) image = imread(path_file) #all with the same shape image = imresize(image) data['image'].append(image) data['label'].append(character) return data def get_batches(num_batches, batch_size, verbose=False): #num max of samples num_samples = get_num_of_samples() #check number of batches with the maximum max_num_batches = num_samples//batch_size - 1 if verbose: print("Number of samples:{}".format(num_samples)) print("Batches:{} Size:{}".format(num_batches, batch_size)) assert num_batches <= max_num_batches, "Surpassed the maximum number of batches" for i in range(0, num_batches): init = i * batch_size if verbose: print("Batch-{} yielding images from {} to {}...".format(i, init, init+batch_size)) yield get_batch(init, batch_size) Explanation: Utils for Creating Batches During the training stage it's not a good idea to load the whole set of images in the memory since we might run out of memory. Thus, we'll need a generator that give us batches of images to keep it in memory until we don't need it anymore. 
End of explanation #testing generator batch_size = 500 for b in get_batches(3, batch_size, verbose=True): print("\t|- retrieved {} images".format(len(b['image']))) Explanation: Testing the batches methods: End of explanation #num characters num_characters = len(character_directories) #normalize def normalize(x): #we use the feature scaling to have all the batches #in the same space, that is (0,1) return (x - np.amin(x))/(np.amax(x) - np.amin(x)) #one-hot encode lb = preprocessing.LabelBinarizer() lb = lb.fit(character_directories) def one_hot(label): return lb.transform([label]) Explanation: Preprocessing data End of explanation num_batches = 40 batch_size = 500 cnt_images = 0 for cnt, b in enumerate(get_batches(num_batches, batch_size)): data = {'image':[], 'label':[]} for i in range( min(len(b['image']), batch_size) ): image = np.array( b['image'][i] ) label = np.array( b['label'][i] ) #label = label.reshape([-1,:]) if len(image.shape) == 3: data['image'].append(normalize(image)) data['label'].append(one_hot(label)[-1,:]) cnt_images += 1 else: print("Dim image < 3") with open("simpson_train_{}.pkl".format(cnt), 'wb') as file: pickle.dump(data, file, pickle.HIGHEST_PROTOCOL) print("Loaded {} train images and stored on disk".format(cnt_images)) #testing load from file with open('simpson_train_0.pkl', 'rb') as file: data = pickle.load(file) print("Example of onehot encoded:\n{}".format(data['label'][0])) print("Data shape: {}".format(data['image'][0].shape)) Explanation: Storing preprocessed batches on disk End of explanation #helpers def convolution(inputs, kernel_shape, stride_shape, output_depth): #convolution variables input_depth = inputs.get_shape().as_list()[3] filter_shape = kernel_shape + (input_depth, output_depth) dev = 1/np.sqrt(kernel_shape[0]*kernel_shape[1]) filter_ = tf.Variable(tf.truncated_normal(filter_shape, stddev=dev), name="filter_") stride_shape = (1,) + stride_shape + (1,) pool_shape = stride_shape bias_ = tf.Variable(tf.truncated_normal([output_depth], stddev=dev), name="bias_") #convolution x = tf.nn.conv2d(inputs, filter_, stride_shape, padding='SAME') x = tf.nn.bias_add(x, bias_) x = tf.nn.relu(x) #x = tf.nn.conv2d(inputs, filter_, stride_shape, padding='SAME') #x = tf.nn.bias_add(x, bias_) #x = tf.nn.relu(x) #pooling x = tf.nn.max_pool(x, pool_shape, stride_shape, padding='SAME') return x def classifier(inputs, num_outputs): #classifier variables num_inputs = inputs.get_shape().as_list()[1] dev = 1/np.sqrt(num_inputs) weights = tf.Variable(tf.truncated_normal((num_inputs,)+num_outputs, stddev=dev), name="weights") bias = tf.Variable(tf.truncated_normal(num_outputs, stddev=dev), name="bias") #classifier logits = tf.add(tf.matmul(inputs, weights), bias) return logits ##building the network #remove previous weights, bias, etc import tensorflow.compat.v1 as tf tf.disable_v2_behavior() tf.reset_default_graph() #shape image_shape = (dim_size, dim_size, 3) label_shape = (num_characters,) #data X = tf.placeholder(tf.float32, (None,) + image_shape) y = tf.placeholder(tf.float32, (None,) + label_shape) #conv print(X.get_shape().as_list()) conv = convolution(X, (5,5), (2,2), 32) print(conv.get_shape().as_list()) conv = convolution(X, (5,5), (2,2), 64) print(conv.get_shape().as_list()) conv = convolution(conv, (5,5), (2,2), 128) print(conv.get_shape().as_list()) #before classifier flatten_shape = np.prod(conv.get_shape().as_list()[1:]) flatten = tf.reshape(conv, [-1,flatten_shape]) #classifying num_outputs = label_shape logits = classifier(flatten, (40,)) logits = 
tf.nn.dropout(logits, 0.8) logits = classifier(logits, num_outputs) print("Inputs shape: {}".format(X.get_shape().as_list()[1:])) print("Flatten shape: {}".format(flatten_shape)) print("Outputs shape: {}".format(logits.get_shape().as_list()[1])) #loss and optmizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) #accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) ##Train the model x_train = [] y_train = [] x_val = [] y_val = [] ## epochs = 30 sess = tf.Session() sess.run(tf.global_variables_initializer()) stats = {'train_loss':[], 'val_loss':[], 'acc':[]} for e in range(epochs): for i in range(num_batches): fname = "simpson_train_{}.pkl".format(i) if os.path.exists(fname): with open(fname, 'rb') as file: #print("Processing: {}".format(fname)) data = pickle.load(file) x_train, x_val, y_train, y_val = train_test_split(data['image'], data['label'], test_size=0.2, random_state=42) feed_dict = {X: x_train, y: y_train} train_loss, _ = sess.run([cost, optimizer], feed_dict) feed_dict = {X: x_val, y: y_val} val_loss, acc = sess.run([cost, accuracy], feed_dict) #storing stats stats['train_loss'].append(train_loss) stats['val_loss'].append(val_loss) stats['acc'].append(acc) #enough accuracy if acc > 0.8: break print("Epoch:{} Training Loss:{:.4f} Validation Loss:{:.4f} Accuracy:{:.4f}".format(e, train_loss, val_loss, acc)) #stop epochs if acc > 0.8: break with open("stats.pkl", 'wb') as file: pickle.dump(stats, file, pickle.HIGHEST_PROTOCOL) #don't plot the first 7 stats, they're in a big scale plt.plot(stats['train_loss'][7:], label='Train Loss') plt.plot(stats['val_loss'][7:], label='Validation Loss') plt.plot(stats['acc'][7:], label='Accuracy') plt.legend() Explanation: NOTE Since here the data is already processed and saved as pickle files. Building the Network End of explanation def test(): idx = random.randint(0, num_test_images) sample_file, sample_name = test_image_names[idx], test_image_names[idx].split('_')[:-1] path_file = os.path.join(test_root_path, sample_file) sample_image = imread(path_file) test_image = sample_image test_label = ' '.join([s.capitalize() for s in sample_name]) test_image_norm = normalize(imresize(sample_image)) prediction = sess.run(logits, {X:[test_image_norm]}) prediction = lb.inverse_transform(prediction) #showing #print("Label: {}".format(test_label)) prediction = ' '.join([s.capitalize() for s in prediction[0].split('_')]) #print("Prediction: {}".format(prediction)) return test_image, test_label, prediction # Execute and show fig, axs = plt.subplots(2, 2, figsize=(10,10)) for i, ax in enumerate(axs.ravel()): test_image, test_label, prediction = test() ax.set_title("Prediction: {}".format(test_label)) ax.imshow(test_image) ax.axis('off') plt.show() Explanation: Testing model End of explanation
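A possible addition (a sketch, with a hypothetical checkpoint path): since the trained model above only lives in the open session, tf.train.Saver can persist the variables for later reuse.
saver = tf.train.Saver()
save_path = saver.save(sess, "./simpson_cnn.ckpt")
print("model saved to", save_path)
# later: saver.restore(sess, "./simpson_cnn.ckpt")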
1,664
Given the following text description, write Python code to implement the functionality described below step by step Description: Comparing Encoder-Decoders Analysis Model Architecture Step1: Perplexity on Each Dataset Step2: Loss vs. Epoch Step3: Perplexity vs. Epoch Step4: Generations Step5: BLEU Analysis Step6: N-pairs BLEU Analysis This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We can expect very low scores on the ground truth, while high scores can expose hyper-common generations. Step7: Alignment Analysis This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores on the ground truth and hyper-common generations to raise the scores.
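A rough sketch of the n-pairs idea described above, using a hypothetical helper built on NLTK's sentence_bleu; the report files loaded below already contain these scores precomputed, so this is only for illustration.
import random
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
def n_pairs_bleu(sentences, n=1000, seed=0):
    # average BLEU over n randomly sampled sentence pairs
    rng = random.Random(seed)
    smooth = SmoothingFunction().method1
    scores = []
    for _ in range(n):
        a, b = rng.sample(sentences, 2)
        scores.append(sentence_bleu([a.split()], b.split(), smoothing_function=smooth))
    return sum(scores) / len(scores)
# e.g. n_pairs_bleu([s['generated'] for s in reports[0][1]['train_samples']])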
Python Code: report_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04dra/encdec_noing10_200_512_04dra.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drbef/encdec_noing10_200_512_04drbef.json"] log_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04dra/encdec_noing10_200_512_04dra_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drbef/encdec_noing10_200_512_04drbef_logs.json"] reports = [] logs = [] import json import matplotlib.pyplot as plt import numpy as np for report_file in report_files: with open(report_file) as f: reports.append((report_file.split('/')[-1].split('.json')[0], json.loads(f.read()))) for log_file in log_files: with open(log_file) as f: logs.append((log_file.split('/')[-1].split('.json')[0], json.loads(f.read()))) for report_name, report in reports: print '\n', report_name, '\n' print 'Encoder: \n', report['architecture']['encoder'] print 'Decoder: \n', report['architecture']['decoder'] Explanation: Comparing Encoder-Decoders Analysis Model Architecture End of explanation %matplotlib inline from IPython.display import HTML, display def display_table(data): display(HTML( u'<table><tr>{}</tr></table>'.format( u'</tr><tr>'.join( u'<td>{}</td>'.format('</td><td>'.join(unicode(_) for _ in row)) for row in data) ) )) def bar_chart(data): n_groups = len(data) train_perps = [d[1] for d in data] valid_perps = [d[2] for d in data] test_perps = [d[3] for d in data] fig, ax = plt.subplots(figsize=(10,8)) index = np.arange(n_groups) bar_width = 0.3 opacity = 0.4 error_config = {'ecolor': '0.3'} train_bars = plt.bar(index, train_perps, bar_width, alpha=opacity, color='b', error_kw=error_config, label='Training Perplexity') valid_bars = plt.bar(index + bar_width, valid_perps, bar_width, alpha=opacity, color='r', error_kw=error_config, label='Valid Perplexity') test_bars = plt.bar(index + 2*bar_width, test_perps, bar_width, alpha=opacity, color='g', error_kw=error_config, label='Test Perplexity') plt.xlabel('Model') plt.ylabel('Scores') plt.title('Perplexity by Model and Dataset') plt.xticks(index + bar_width / 3, [d[0] for d in data]) plt.legend() plt.tight_layout() plt.show() data = [['<b>Model</b>', '<b>Train Perplexity</b>', '<b>Valid Perplexity</b>', '<b>Test Perplexity</b>']] for rname, report in reports: data.append([rname, report['train_perplexity'], report['valid_perplexity'], report['test_perplexity']]) display_table(data) bar_chart(data[1:]) Explanation: Perplexity on Each Dataset End of explanation %matplotlib inline plt.figure(figsize=(10, 8)) for rname, l in logs: for k in l.keys(): plt.plot(l[k][0], l[k][1], label=str(k) + ' ' + rname + ' (train)') plt.plot(l[k][0], l[k][2], label=str(k) + ' ' + rname + ' (valid)') plt.title('Loss v. Epoch') plt.xlabel('Epoch') plt.ylabel('Loss') plt.legend() plt.show() Explanation: Loss vs. Epoch End of explanation %matplotlib inline plt.figure(figsize=(10, 8)) for rname, l in logs: for k in l.keys(): plt.plot(l[k][0], l[k][3], label=str(k) + ' ' + rname + ' (train)') plt.plot(l[k][0], l[k][4], label=str(k) + ' ' + rname + ' (valid)') plt.title('Perplexity v. 
Epoch') plt.xlabel('Epoch') plt.ylabel('Perplexity') plt.legend() plt.show() Explanation: Perplexity vs. Epoch End of explanation def print_sample(sample, best_bleu=None): enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>']) gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>']) print('Input: '+ enc_input + '\n') print('Gend: ' + sample['generated'] + '\n') print('True: ' + gold + '\n') if best_bleu is not None: cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>']) print('Closest BLEU Match: ' + cbm + '\n') print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n') print('\n') def display_sample(samples, best_bleu=False): for enc_input in samples: data = [] for rname, sample in samples[enc_input]: gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>']) data.append([rname, '<b>Generated: </b>' + sample['generated']]) if best_bleu: cbm = ' '.join([w for w in sample['best_match'].split(' ') if w != '<mask>']) data.append([rname, '<b>Closest BLEU Match: </b>' + cbm + ' (Score: ' + str(sample['best_score']) + ')']) data.insert(0, ['<u><b>' + enc_input + '</b></u>', '<b>True: ' + gold+ '</b>']) display_table(data) def process_samples(samples): # consolidate samples with identical inputs result = {} for rname, t_samples, t_cbms in samples: for i, sample in enumerate(t_samples): enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>']) if t_cbms is not None: sample.update(t_cbms[i]) if enc_input in result: result[enc_input].append((rname, sample)) else: result[enc_input] = [(rname, sample)] return result samples = process_samples([(rname, r['train_samples'], r['best_bleu_matches_train'] if 'best_bleu_matches_train' in r else None) for (rname, r) in reports]) display_sample(samples, best_bleu='best_bleu_matches_train' in reports[1][1]) samples = process_samples([(rname, r['valid_samples'], r['best_bleu_matches_valid'] if 'best_bleu_matches_valid' in r else None) for (rname, r) in reports]) display_sample(samples, best_bleu='best_bleu_matches_valid' in reports[1][1]) samples = process_samples([(rname, r['test_samples'], r['best_bleu_matches_test'] if 'best_bleu_matches_test' in r else None) for (rname, r) in reports]) display_sample(samples, best_bleu='best_bleu_matches_test' in reports[1][1]) Explanation: Generations End of explanation def print_bleu(blue_structs): data= [['<b>Model</b>', '<b>Overall Score</b>','<b>1-gram Score</b>','<b>2-gram Score</b>','<b>3-gram Score</b>','<b>4-gram Score</b>']] for rname, blue_struct in blue_structs: data.append([rname, blue_struct['score'], blue_struct['components']['1'], blue_struct['components']['2'], blue_struct['components']['3'], blue_struct['components']['4']]) display_table(data) # Training Set BLEU Scores print_bleu([(rname, report['train_bleu']) for (rname, report) in reports]) # Validation Set BLEU Scores print_bleu([(rname, report['valid_bleu']) for (rname, report) in reports]) # Test Set BLEU Scores print_bleu([(rname, report['test_bleu']) for (rname, report) in reports]) # All Data BLEU Scores print_bleu([(rname, report['combined_bleu']) for (rname, report) in reports]) Explanation: BLEU Analysis End of explanation # Training Set BLEU n-pairs Scores print_bleu([(rname, report['n_pairs_bleu_train']) for (rname, report) in reports]) # Validation Set n-pairs BLEU Scores print_bleu([(rname, report['n_pairs_bleu_valid']) for (rname, report) in reports]) # Test Set n-pairs BLEU Scores print_bleu([(rname, 
report['n_pairs_bleu_test']) for (rname, report) in reports]) # Combined n-pairs BLEU Scores print_bleu([(rname, report['n_pairs_bleu_all']) for (rname, report) in reports]) # Ground Truth n-pairs BLEU Scores print_bleu([(rname, report['n_pairs_bleu_gold']) for (rname, report) in reports]) Explanation: N-pairs BLEU Analysis This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We can expect very low scores in the ground truth and high scores can expose hyper-common generations End of explanation def print_align(reports): data= [['<b>Model</b>', '<b>Average (Train) Generated Score</b>','<b>Average (Valid) Generated Score</b>','<b>Average (Test) Generated Score</b>','<b>Average (All) Generated Score</b>', '<b>Average (Gold) Score</b>']] for rname, report in reports: data.append([rname, report['average_alignment_train'], report['average_alignment_valid'], report['average_alignment_test'], report['average_alignment_all'], report['average_alignment_gold']]) display_table(data) print_align(reports) Explanation: Alignment Analysis This analysis computs the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores End of explanation
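The n-pairs BLEU scores above are read from precomputed report fields; as a minimal sketch of the idea (not the notebook's own implementation), random pairs of generations can be scored against each other directly. This assumes nltk is installed, and the helper name n_pairs_bleu and the sentence list passed to it are hypothetical.

import random
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def n_pairs_bleu(sentences, n_pairs=1000, seed=0):
    # Randomly pair up sentences and score each pair as if one were a
    # translation of the other; hyper-common outputs inflate the average.
    rng = random.Random(seed)
    smooth = SmoothingFunction().method1
    scores = []
    for _ in range(n_pairs):
        a, b = rng.sample(sentences, 2)
        scores.append(sentence_bleu([a.split()], b.split(), smoothing_function=smooth))
    return sum(scores) / len(scores)

# e.g. n_pairs_bleu([s['generated'] for s in reports[0][1]['test_samples']])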
1,665
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Toplevel MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required Step7: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required Step8: 3.2. CMIP3 Parent Is Required Step9: 3.3. CMIP5 Parent Is Required Step10: 3.4. Previous Name Is Required Step11: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required Step12: 4.2. Code Version Is Required Step13: 4.3. Code Languages Is Required Step14: 4.4. Components Structure Is Required Step15: 4.5. Coupler Is Required Step16: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required Step17: 5.2. Atmosphere Double Flux Is Required Step18: 5.3. Atmosphere Fluxes Calculation Grid Is Required Step19: 5.4. Atmosphere Relative Winds Is Required Step20: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required Step21: 6.2. Global Mean Metrics Used Is Required Step22: 6.3. Regional Metrics Used Is Required Step23: 6.4. Trend Metrics Used Is Required Step24: 6.5. Energy Balance Is Required Step25: 6.6. Fresh Water Balance Is Required Step26: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required Step27: 7.2. Atmos Ocean Interface Is Required Step28: 7.3. Atmos Land Interface Is Required Step29: 7.4. Atmos Sea-ice Interface Is Required Step30: 7.5. Ocean Seaice Interface Is Required Step31: 7.6. 
Land Ocean Interface Is Required Step32: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required Step33: 8.2. Atmos Ocean Interface Is Required Step34: 8.3. Atmos Land Interface Is Required Step35: 8.4. Atmos Sea-ice Interface Is Required Step36: 8.5. Ocean Seaice Interface Is Required Step37: 8.6. Runoff Is Required Step38: 8.7. Iceberg Calving Is Required Step39: 8.8. Endoreic Basins Is Required Step40: 8.9. Snow Accumulation Is Required Step41: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required Step42: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required Step43: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. Overview Is Required Step44: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required Step45: 12.2. Additional Information Is Required Step46: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required Step47: 13.2. Additional Information Is Required Step48: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required Step49: 14.2. Additional Information Is Required Step50: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required Step51: 15.2. Additional Information Is Required Step52: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required Step53: 16.2. Additional Information Is Required Step54: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required Step55: 17.2. Equivalence Concentration Is Required Step56: 17.3. Additional Information Is Required Step57: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required Step58: 18.2. Additional Information Is Required Step59: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required Step60: 19.2. Additional Information Is Required Step61: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required Step62: 20.2. Additional Information Is Required Step63: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required Step64: 21.2. Additional Information Is Required Step65: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required Step66: 22.2. Aerosol Effect On Ice Clouds Is Required Step67: 22.3. Additional Information Is Required Step68: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required Step69: 23.2. Aerosol Effect On Ice Clouds Is Required Step70: 23.3. RFaci From Sulfate Only Is Required Step71: 23.4. Additional Information Is Required Step72: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required Step73: 24.2. Additional Information Is Required Step74: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. 
Provision Is Required Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step76: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required Step77: 25.4. Additional Information Is Required Step78: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step80: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required Step81: 26.4. Additional Information Is Required Step82: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required Step83: 27.2. Additional Information Is Required Step84: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required Step85: 28.2. Crop Change Only Is Required Step86: 28.3. Additional Information Is Required Step87: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required Step88: 29.2. Additional Information Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-3', 'toplevel') Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: TEST-INSTITUTE-3 Source ID: SANDBOX-3 Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:46 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.4. 
Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. 
Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh_water is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how salt is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how momentum is conserved in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative forcings (GHG and aerosols) implementation in model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation
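For orientation, a completed cell in this section simply passes one of the listed Valid Choices to DOC.set_value. The sketch below fills in the solar provision property with "irradiance" purely as an illustration taken from the Valid Choices list above; the value that actually applies depends on the model being documented.
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# "irradiance" is an illustrative pick from the Valid Choices list above;
# replace it with whichever choice applies to the model being documented.
DOC.set_value("irradiance")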
1,666
Given the following text description, write Python code to implement the functionality described below step by step Description: Convenience methods for optimisation This example demonstrates how to use the convenience methods fmin and curve_fit for optimisation. These methods allow you to perform simple minimisation or curve fitting outside the time-series context typically used in Pints. Minimisation with fmin In this part of the example, we define a function f() and estimate the arguments that minimise it. For this we use fmin(), which has a similar interface to SciPy's fmin(). Step1: We can make a contour plot near the true solution to see how we're doing Step2: Curve fitting with curve_fit In this part of the example we fit a curve to some data, using curve_fit(), which has a similar interface to SciPy's curve_fit(). Step3: Again, we can use matplotlib to have a look at the results
Python Code: import pints # Define a quadratic function f(x) def f(x): return 1 + (x[0] - 3) ** 2 + (x[1] + 5) ** 2 # Choose a starting point for the search x0 = [1, 1] # Find the arguments for which it is minimised xopt, fopt = pints.fmin(f, x0, method=pints.XNES) print(xopt) print(fopt) Explanation: Convenience methods for optimisation This example demonstrates how to use the convenience methods fmin and curve_fit for optimisation. These methods allow you to perform simple minimisation or curve fitting outside the time-series context typically used in Pints. Minimisation with fmin In this part of the example, we define a function f() and estimate the arguments that minimise it. For this we use fmin(), which has a similar interface to SciPy's fmin(). End of explanation import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 6, 100) y = np.linspace(-10, 0, 100) X, Y = np.meshgrid(x, y) Z = f(np.stack((X, Y))) plt.figure() plt.contour(X, Y, Z) plt.plot(xopt[0], xopt[1], 'x') plt.show() Explanation: We can make a contour plot near the true solution to see how we're doing End of explanation # Define a quadratic function `y = f(x|a, b, c)` def f(x, a, b, c): return a + b * x + c * x ** 2 # Generate some test noisy test data x = np.linspace(-5, 5, 100) e = np.random.normal(loc=0, scale=2, size=x.shape) y = f(x, 9, 3, 1) + e # Find the parameters that give the best fit x0 = [0, 0, 0] xopt, fopt = pints.curve_fit(f, x, y, x0, method=pints.XNES) print(xopt) Explanation: Curve fitting with curve_fit In this part of the example we fit a curve to some data, using curve_fit(), which has a similar interface to SciPy's curve_fit(). End of explanation plt.figure() plt.xlabel('x') plt.ylabel('y') plt.plot(x, y, 'x', label='Noisy data') plt.plot(x, f(x, 9, 3, 1), label='Original function') plt.plot(x, f(x, *xopt), label='Esimated function') plt.legend() plt.show() Explanation: Again, we can use matplotlib to have a look at the results End of explanation
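As a side note on how the two convenience methods relate: curve_fit can be thought of as fmin applied to a sum-of-squared-residuals objective. A minimal sketch of that equivalence, reusing the noisy x, y data and the quadratic f(x, a, b, c) defined above (the explicit objective function and its name are the only new pieces):
import numpy as np
import pints

# Sum of squared residuals between the model curve and the noisy data
def sum_of_squares(params):
    a, b, c = params
    return np.sum((y - f(x, a, b, c)) ** 2)

# Minimising this objective directly should recover parameters close to the
# ones pints.curve_fit found above
xopt2, fopt2 = pints.fmin(sum_of_squares, [0, 0, 0], method=pints.XNES)
print(xopt2)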
1,667
Given the following text description, write Python code to implement the functionality described below step by step Description: Lesson 14 A First Look at Network Access and the requests Package v1.0.0 2016.11 by David.Yi v1.1 2020.5 2020.6 edit by David Yi Key points for this lesson: an introduction to the requests package, visiting web pages, and calling APIs. Something to think about: what do you need to pay attention to when writing a data-synchronisation tool? The requests package requests is currently the most convenient Python package for accessing website content; its human-friendly design greatly reduces code complexity. Web access, API calls, network checks, and web crawling all rely on it. Python's built-in libraries are generally very good, but network access is one of the exceptions: it is less that the native networking modules are unfriendly and more that requests is so well designed that, together with Python's own elegance, it played a positive role in Python's rapid growth. After requests, many packages have marketed themselves as "human-friendly", some of them rather loosely. You need to install requests before using it. Step1: Downloading a file With requests it is easy to fetch images and files from a website. The example below is a simple one: downloading the Baidu logo file. Step2: Reading an API Let's build a small demo that reads COVID-19 infection data. An API can be understood simply as: once you satisfy an agreed authentication scheme, you pass in parameter values and get back the content you need. The authentication method, input parameters, and output parameters are all agreed in advance. API documentation, parameter lists, and automatic integration testing can all be generated automatically with newer Python tooling; building APIs in Python is very easy and deserves its own lesson. Reading and calling APIs is, by comparison, the more common everyday task.
Python Code: # Get information about a website import requests r = requests.get('http://www.huifu.com') print(r.content) print(r.headers) Explanation: Lesson 14 A First Look at Network Access and the requests Package v1.0.0 2016.11 by David.Yi v1.1 2020.5 2020.6 edit by David Yi Key points for this lesson: an introduction to the requests package, visiting web pages, and calling APIs. Something to think about: what do you need to pay attention to when writing a data-synchronisation tool? The requests package requests is currently the most convenient Python package for accessing website content; its human-friendly design greatly reduces code complexity. Web access, API calls, network checks, and web crawling all rely on it. Python's built-in libraries are generally very good, but network access is one of the exceptions: it is less that the native networking modules are unfriendly and more that requests is so well designed that, together with Python's own elegance, it played a positive role in Python's rapid growth. After requests, many packages have marketed themselves as "human-friendly", some of them rather loosely. You need to install requests before using it. End of explanation # Download a file with requests # the Baidu logo file: http://home.baidu.com/resource/r/home/img/logo-yy.gif import requests url = 'http://home.baidu.com/resource/r/home/img/logo-yy.gif' r = requests.get(url) with open("files/baidu_logo.gif", "wb") as code: code.write(r.content) print('download ok') Explanation: Downloading a file With requests it is easy to fetch images and files from a website. The example below is a simple one: downloading the Baidu logo file. End of explanation # demo for infection/region # input region, start_date, then get data # API: infection / country or region import requests # API url url = 'https://covid-19.adapay.tech/api/v1/' # token, call the register function to get the API token token = '497115d0c2ff9586bf0fe03088cfdbe2' # region or country region='US' # headers, need the API token headers = { 'token': token } # the params payload = { 'region': region, 'start_date':'2020-06-04' } # call requests to load r = requests.get(url+'infection/region', params=payload, headers=headers) data = r.json() print(data) print(type(data)) # get the content for a given key; the response is a dict, so lookups can be nested level by level print(data['data']['region']['US']['2020-04-24']['confirmed']) # simulate a more realistic use case: fetch 10 days of data # demo for infection/region # input region, start_date, end_date, then get data # API: infection / country or region import requests # API url url = 'https://covid-19.adapay.tech/api/v1/' # token, call the register function to get the API token token = '497115d0c2ff9586bf0fe03088cfdbe2' # region or country region='US' # headers, need the API token headers = { 'token': token } # the params payload = { 'region': region, 'start_date':'2020-04-24', 'end_date':'2020-05-03' } # call requests to load r = requests.get(url+'infection/region', params=payload, headers=headers) data = r.json() print(data) # slice out the part of the dict we need dict1 = data['data']['region']['US'] print(dict1) print('---') # iterate over the dict list1 = [] list2 = [] for key, value in dict1.items(): print(key,value) list1.append(value['confirmed']) list2.append(key[5:10]) print('---') print(list1) print('---') print(list2) # draw a line chart import matplotlib.pyplot as plt plt.plot(list2,list1) plt.show() Explanation: Reading an API Let's build a small demo that reads COVID-19 infection data. An API can be understood simply as: once you satisfy an agreed authentication scheme, you pass in parameter values and get back the content you need. The authentication method, input parameters, and output parameters are all agreed in advance. API documentation, parameter lists, and automatic integration testing can all be generated automatically with newer Python tooling; building APIs in Python is very easy and deserves its own lesson. Reading and calling APIs is, by comparison, the more common everyday task. End of explanation
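One practical refinement of the calls above, reusing the same payload and headers dictionaries defined in that row: add a timeout and check the HTTP status before decoding, so a slow or failing endpoint does not crash the script (the 10-second timeout is an arbitrary choice).
import requests

url = 'https://covid-19.adapay.tech/api/v1/infection/region'
try:
    r = requests.get(url, params=payload, headers=headers, timeout=10)
    r.raise_for_status()  # raise an exception for 4xx/5xx responses
    data = r.json()
except requests.exceptions.RequestException as e:
    print('request failed:', e)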
1,668
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1> Working with Data in Pandas </h1> Patrick Phelps - Manger of Data Science @ Yelp Frances Haugen - Product Manager @ Pinterest <h3> Introduction </h3> All numbers used in this exercise are part <a href='https Step1: <h2> A tiny bit about jupyter notebooks </h2> Jupyter notebooks a really wonderful way to merge code and data and notes into a single cohesive analysis. The most important bits to understand for this tutorial is that notebooks run code based one "cells" This text is in a cell for writing notes called a markdown cell. Here we can use HTML tags and web markdown syntax. Step2: When we run a code cell (shift-enter) the notebook runs that bit of code, when we run a markdown cell what happends? Try clicking on this to enter edit mode, doing a bit of editing then hitting shfit-enter. How does HTML tagging work in this sort of cell? Can you make this BOLD? <h2> Loading the Data </h2> Edit the below base_file_path to point to where you've saved the yelp_academic_dataest_business.json file and the yelp_dataset_cumulative_value_of_a_review_in_pageviews_per_year.csv. Step3: Let's start by importing our market pageview data Step4: The above loads the csv into a pandas dataframe, the most important part of pandas. It is easiest to think of a pandas dataframe as a special form of an spreadsheet. Unlike a spreadsheet we have to interact with our dataframe with code. Let's take a look at it first. Step5: This dataframe only has 3 rows so that's all we see. We can always pass a number to df.head() to get more rows if they exist. So what can we say from this dataframe? First, what is in it? Let's look at a subsection of columns first, we do this by passing a list of columns we'd like to see, enclosed in square-braces. Step6: So what do these numbers mean? For each additional review on a business in a mid-market city we expect to get (on average or at the mean) an extra 135.426 cumulative page-views per year if that review is one of the first 10 reviews for that business and 84.14 additional page-views per year if that review is after those first 10 reviews. These are cumulative in this exercise because they take into account that more page-views = more reviews later and represent the total cumulative impact per year of a new review. Does that make sense inuitively? We might imagine that the first 10 reviews allow a business to go from completely undiscovered to discovered so they generate a lot of traffic as people discover that a place exists. On the other hand we don't see nearly as much impact past the first 10 reviews. That makes sense too, suppose a place had 200 reviews, how much do we expect traffic to improve if it got another reivew? For this tutorial I've simplified this to just a binary cut. In reality there's a time-dependent surface underlying what's going on but for the exercise let's keep it simple as we'll still capture the basic idea. So what are the other columns in our market data? Let's transpose (rotate) our dataframe using the .T command. Notice how I can chain commands in pandas with the dot syntax. Step7: <h2> A little bit (I promise) on statistics </h2> So we understand the mean impact rows in this dataframe, but what are these lower and upper bound columns? These represent our best guess on the limits of where the true impact value is given our data, the represent our uncertainty. 
If you don't know any statistics, no problem, we can think of our uncertainty as simply representing out bounds on where we'd be willing to place our bets. We can understand intuitively what gives rise to this, if we examined just one business we wouldn't be very sure what to expect for every business of the millions of businesses on Yelp rather we'd want to capture that we had very limited information. Diving a bit more into the statistics, these bounds are our 95% credible interval, this sounds really fancy but is actually pretty easy to understand. Let's unpack what a credible interval is in human English using a simple coin toss, while this might seem a bit non-sequitor coin tosses are a great way to illustarte basic concepts. Imagine, I tell you I have a coin and ask you to guess how often you think it will land heads if I flip it 100 times. Did you guess around 50? Why? $\$ What you just did is called setting a "prior," you used your knowledge (prior belief) of other coins to estimate that most coins flip heads and tails with about even chances. Yay, so now I flip the coin 100 times and it comes up heads 52 times, do you think this is still a fair coin? I imagine that you still do, but why? It didn't come up exactly 50 times heads so why do we still believe it is a fair coin? Of course a fair coin can sometimes result in not exactly 50 heads in 100 flips there's going to be some "wiggle" each time you do it. That wiggle is what in stats we call our credible interval. A 95% credible interval reperesents the region of values we'd expect to come up 95% of the time, occasionally, 5% of the time our fair coin will come up wtih more extreme numbers of heads but we expect 95% of the probability to be within the bounds. Let's return to our dataframe. Step8: We can now understand what our lower and upper bound on impact means, these are our 95% credible interval on the mean impact, sometimes businesses will see lower impact from their reviews and sometimes they'll see more than the mean impact, but 95% of the time their impact will be between the lower and upper bound. <h2> Working with json </h2> One of the things we can do in Jupyter notebooks is define helper functions that live throughout the life of the notebook. Here I define a basic function to read in a json file, which for formatting reasons is a bit harder than csv's. Json is a formatting syntax for files that's a bit more flexible than csv but the price we pay is that we then need to put in more work to read it. Here I define a function with python. A function represents a reusable piece of code. Once you run this cell you'll be able to call the json_reading_function from any other cell below or above this one. Cells in jupyter notebooks care about run order not place on the page. If you need to modify this function you'll need to re-run this cell (shift-enter) to updated the working memory copy of it in your notebook. Step9: If you run the above cell does the number to the left of the cell change? What do you think this represents? Step10: Now let's make a simple plot from our loaded biz_data. We can first look at the columns available using .head() like we did previously. Step11: Ah, lots of information here. From the Yelp academic dataset guide we know that each row in this dataframe represents just one business at a specific moment in time, it's like snapshot. Here we get the busienss location information, it's categories and city, it's hours, name, if it is still open and operational, and it's review count and average rating. 
Finally we get information about it's attributes as a jason blog in one of the dataframe columns. <h3> Making a histogram </h3> Pandas provides a super useful and easy interface for making plots, let's do a quick histogram of our review count and set the scale to logrithmic on the y-axis. Step12: One of the things we can do with histograms is bin them with different numbers of bins like so Step13: How are these two plots different? Why? Would the story you felt you got from each plot be different? Let's try some other basic plotting to get a hang of things, suppose we wanted to plot number of reviews versus average rating? Step14: What story about businesses does this tell us? Can your group think of any reasons the plot might have this shape? Try playing around with some of the above arguments to the plot function, what do they do and why? <h2> Working with Groupby </h2> Ok, let's return to our problem at hand, a coherent SEO strategy. Our SEO data comes in the form by market saturation, let's see if we can corral our business data to inform market saturation levels. Here we'll use .groupby and .agg (aggregate) syntax to slice up our data. Step15: In a groupby we ask pandas to setup artifical groups within our dataframe, when we operate on the groups we don't have to worry about anything "crossing over" from one group to another. Step16: Hmm, those are some odd cities and how come Montreal only has 4 reviews and 1 biz? Something's odd. Let's do a quick sort to figure it out. Step17: Ah, our data aren't fully clean, that's pretty common, for example Montreal vs Montréal. Since we're just doing a minor example we won't spend too much time cleaning our data but let's quickly see how number of reviews stack-up. Step18: Uh-oh. More long-tail data, as you may have guessed by now these sorts of patterns are super common when we look at real world applications and web data. Time for us to start thinking about how to deal with it. <h1> Tricks for dealing with long-tail data </h1> Let's examine a few simple steps when working with long-tail data before moving back to our problem. The first trick is normalization. When we normalize we want to take out extraneous dependancies. For example we know our above cities probably have different populations that we should think about before trying to determine what we mean by market saturation. The simple raw-number of reviews in say Lead, SD and Las Vegas aren't good indicators of market saturation simply because the population, and thus the number of businesses in them are vastly different. We can take out this population effect by simply dividing our total reviews among the businesses to get the average # reviews per business in a city. Step19: So what's wrong with the above, why might the average be misleading here? Think about it for a second then look at the next cell. Step20: So most of our extremely high reviews per biz cities only have a tiny number of businesses so these likely aren't terribly informative. How can we treat these? We need to take into account our knowledge of how many businesses are present as well. Step21: Now to use this information. We'll use the standard error to set a lower bound on the mean, we're techinically treating these as normals but this is just an approximation. In reality we would use something like scipy's stats models to get confidence bounds or boostrap but for the tutorial let's keep it simple. 
Step22: <h1> Putting it all together </h1> So we're finally ready to ask our basic question, how can we make a coherent statement of where to focus or SEO efforts. To recap Step23: Let's now apply these boundaries to our data cities. Step24: Cool, now that we know per city the market saturation we can join it onto our per business data. Step25: <h1> Where should we focus? </h1> So let's examine the impact of our focus now, to gauge how we might see impact, suppose we could add 1 review to all businesses below 10 reviews or 1 review to all businesses above 10 reviews, what would we expect to see in page-views over the course of the year? Step26: So at first glance it looks like we're better off focusing on less reviewed businesses, but is there any subtlties here?
Python Code: %load_ext autoreload %autoreload 2 %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np import json pd.set_option('display.mpl_style', 'default') plt.rcParams['figure.figsize'] = (12.0, 8.0) Explanation: <h1> Working with Data in Pandas </h1> Patrick Phelps - Manger of Data Science @ Yelp Frances Haugen - Product Manager @ Pinterest <h3> Introduction </h3> All numbers used in this exercise are part <a href='https://www.yelp.com/dataset_challenge'> Yelp's academic dataset challenge</a> to which we've added some <b>fake traffic data</b>. In this notebook we're going to do a bit of data science using pandas, matplotlib, and some fake Yelp traffic data. We're going to create a better SEO strategy for our platform geared around optimizing for expected future traffic in various markets and in the process learn about "long-tail" or power-law data. This notebook is the <b>full version</b> where all the steps are filled out. If you're looking for the "give me a challenge" version this isn't what you want. <h3> The Problem </h3> Let's start with some problem framing. Your SEO analysts tell you they need to decide if they should focus their work on low review businesses or high review businesses this quarter. You want to have a coherent market strategy so you ask your web-ops team to get you some numbers on incoming traffic versus the number of reviews you've seen historically. Here we'll use the yelp academic dataset to get our business information and an included csv the yelp_dataset_cumulative_value_of_a_review_in_pageviews_per_year file to represent the web-ops teams response to you on incoming traffic. Here we've already segmented this data by market "saturation" which is a rough gauge of how saturated we think Yelp usage is in a given city. <h3> Setting Up our Notebook: </h3> Here we're just doing some basic python importing. For this tutorial you'll need the following python packages: 1. pandas 2. matplotlib 3. numpy 4. json This tutorial is made for python 2.7 End of explanation # This is a code cell here text needs to either be python code or commented out like this. a = 5 print 1 Explanation: <h2> A tiny bit about jupyter notebooks </h2> Jupyter notebooks a really wonderful way to merge code and data and notes into a single cohesive analysis. The most important bits to understand for this tutorial is that notebooks run code based one "cells" This text is in a cell for writing notes called a markdown cell. Here we can use HTML tags and web markdown syntax. End of explanation base_file_path = '../data/' yelp_biz_dataset_file = base_file_path + 'yelp_academic_dataset_business.json' yelp_market_saturation_file = base_file_path + 'yelp_dataset_cumulative_value_of_a_review_in_pageviews_per_year.csv' Explanation: When we run a code cell (shift-enter) the notebook runs that bit of code, when we run a markdown cell what happends? Try clicking on this to enter edit mode, doing a bit of editing then hitting shfit-enter. How does HTML tagging work in this sort of cell? Can you make this BOLD? <h2> Loading the Data </h2> Edit the below base_file_path to point to where you've saved the yelp_academic_dataest_business.json file and the yelp_dataset_cumulative_value_of_a_review_in_pageviews_per_year.csv. End of explanation market_data = pd.read_csv(yelp_market_saturation_file, index_col=0) Explanation: Let's start by importing our market pageview data: End of explanation # To look at a dataframe use the .head() command. 
By default this shows up-to the first 5 rows. market_data.head() Explanation: The above loads the csv into a pandas dataframe, the most important part of pandas. It is easiest to think of a pandas dataframe as a special form of an spreadsheet. Unlike a spreadsheet we have to interact with our dataframe with code. Let's take a look at it first. End of explanation market_data[[ 'market_saturation', 'mean_impact_for_first_ten_reviews', 'mean_impact_for_higher_reviews' ]].head() Explanation: This dataframe only has 3 rows so that's all we see. We can always pass a number to df.head() to get more rows if they exist. So what can we say from this dataframe? First, what is in it? Let's look at a subsection of columns first, we do this by passing a list of columns we'd like to see, enclosed in square-braces. End of explanation market_data.head().T Explanation: So what do these numbers mean? For each additional review on a business in a mid-market city we expect to get (on average or at the mean) an extra 135.426 cumulative page-views per year if that review is one of the first 10 reviews for that business and 84.14 additional page-views per year if that review is after those first 10 reviews. These are cumulative in this exercise because they take into account that more page-views = more reviews later and represent the total cumulative impact per year of a new review. Does that make sense inuitively? We might imagine that the first 10 reviews allow a business to go from completely undiscovered to discovered so they generate a lot of traffic as people discover that a place exists. On the other hand we don't see nearly as much impact past the first 10 reviews. That makes sense too, suppose a place had 200 reviews, how much do we expect traffic to improve if it got another reivew? For this tutorial I've simplified this to just a binary cut. In reality there's a time-dependent surface underlying what's going on but for the exercise let's keep it simple as we'll still capture the basic idea. So what are the other columns in our market data? Let's transpose (rotate) our dataframe using the .T command. Notice how I can chain commands in pandas with the dot syntax. End of explanation market_data.head().T Explanation: <h2> A little bit (I promise) on statistics </h2> So we understand the mean impact rows in this dataframe, but what are these lower and upper bound columns? These represent our best guess on the limits of where the true impact value is given our data, the represent our uncertainty. If you don't know any statistics, no problem, we can think of our uncertainty as simply representing out bounds on where we'd be willing to place our bets. We can understand intuitively what gives rise to this, if we examined just one business we wouldn't be very sure what to expect for every business of the millions of businesses on Yelp rather we'd want to capture that we had very limited information. Diving a bit more into the statistics, these bounds are our 95% credible interval, this sounds really fancy but is actually pretty easy to understand. Let's unpack what a credible interval is in human English using a simple coin toss, while this might seem a bit non-sequitor coin tosses are a great way to illustarte basic concepts. Imagine, I tell you I have a coin and ask you to guess how often you think it will land heads if I flip it 100 times. Did you guess around 50? Why? 
$\$ What you just did is called setting a "prior," you used your knowledge (prior belief) of other coins to estimate that most coins flip heads and tails with about even chances. Yay, so now I flip the coin 100 times and it comes up heads 52 times, do you think this is still a fair coin? I imagine that you still do, but why? It didn't come up exactly 50 times heads so why do we still believe it is a fair coin? Of course a fair coin can sometimes result in not exactly 50 heads in 100 flips there's going to be some "wiggle" each time you do it. That wiggle is what in stats we call our credible interval. A 95% credible interval reperesents the region of values we'd expect to come up 95% of the time, occasionally, 5% of the time our fair coin will come up wtih more extreme numbers of heads but we expect 95% of the probability to be within the bounds. Let's return to our dataframe. End of explanation def json_reading_function(file_path): with open(file_path) as f: df = pd.DataFrame(json.loads(line) for line in f) return df Explanation: We can now understand what our lower and upper bound on impact means, these are our 95% credible interval on the mean impact, sometimes businesses will see lower impact from their reviews and sometimes they'll see more than the mean impact, but 95% of the time their impact will be between the lower and upper bound. <h2> Working with json </h2> One of the things we can do in Jupyter notebooks is define helper functions that live throughout the life of the notebook. Here I define a basic function to read in a json file, which for formatting reasons is a bit harder than csv's. Json is a formatting syntax for files that's a bit more flexible than csv but the price we pay is that we then need to put in more work to read it. Here I define a function with python. A function represents a reusable piece of code. Once you run this cell you'll be able to call the json_reading_function from any other cell below or above this one. Cells in jupyter notebooks care about run order not place on the page. If you need to modify this function you'll need to re-run this cell (shift-enter) to updated the working memory copy of it in your notebook. End of explanation biz_data = json_reading_function(yelp_biz_dataset_file) Explanation: If you run the above cell does the number to the left of the cell change? What do you think this represents? End of explanation biz_data.head(3).T Explanation: Now let's make a simple plot from our loaded biz_data. We can first look at the columns available using .head() like we did previously. End of explanation ax = biz_data.review_count.hist() ax.set_yscale('log') ax.set_xlabel('Number of reviews per biz') ax.set_ylabel('Number of businesses') Explanation: Ah, lots of information here. From the Yelp academic dataset guide we know that each row in this dataframe represents just one business at a specific moment in time, it's like snapshot. Here we get the busienss location information, it's categories and city, it's hours, name, if it is still open and operational, and it's review count and average rating. Finally we get information about it's attributes as a jason blog in one of the dataframe columns. <h3> Making a histogram </h3> Pandas provides a super useful and easy interface for making plots, let's do a quick histogram of our review count and set the scale to logrithmic on the y-axis. 
End of explanation ax = biz_data.review_count.hist(bins=1000) ax.set_yscale('log') ax.set_xlabel('Number of reviews per biz') ax.set_ylabel('Number of businesses') Explanation: One of the things we can do with histograms is bin them with different numbers of bins like so: End of explanation ax = biz_data.plot(x='review_count', y='stars', linestyle='', marker='.', alpha=0.2) ax.set_xlim([0, biz_data.review_count.max()]) ax.set_xlabel('Number of reviews') ax.set_ylabel('Average Star Rating') Explanation: How are these two plots different? Why? Would the story you felt you got from each plot be different? Let's try some other basic plotting to get a hang of things, suppose we wanted to plot number of reviews versus average rating? End of explanation city_state_groups = biz_data.groupby(['state', 'city']) Explanation: What story about businesses does this tell us? Can your group think of any reasons the plot might have this shape? Try playing around with some of the above arguments to the plot function, what do they do and why? <h2> Working with Groupby </h2> Ok, let's return to our problem at hand, a coherent SEO strategy. Our SEO data comes in the form by market saturation, let's see if we can corral our business data to inform market saturation levels. Here we'll use .groupby and .agg (aggregate) syntax to slice up our data. End of explanation city_state_data = city_state_groups.agg({ 'business_id': pd.Series.nunique, 'review_count': 'sum' }) city_state_data.head(10).T Explanation: In a groupby we ask pandas to setup artifical groups within our dataframe, when we operate on the groups we don't have to worry about anything "crossing over" from one group to another. End of explanation city_state_data.sort_values(by='review_count', ascending=False).head(10).T Explanation: Hmm, those are some odd cities and how come Montreal only has 4 reviews and 1 biz? Something's odd. Let's do a quick sort to figure it out. End of explanation def clean_city_str(city_str): temp = city_str.replace(' ', '') temp = temp.lower() return temp biz_data['city_key'] = biz_data.city.apply(lambda x: clean_city_str(x)) city_state_data = biz_data.groupby(['state', 'city_key'], as_index=[False, False]).agg({ 'business_id': pd.Series.nunique, 'review_count': 'sum', 'state': 'first', 'city_key': 'first' }) city_state_data.rename(columns={'business_id': 'num_biz'}, inplace=True) fig, ax = plt.subplots(figsize=(20,10)) city_state_data.sort_values(by='review_count', ascending=False).head(100).review_count.plot(kind='bar', ax=ax) ax.set_yscale('log') ax.set_ylabel('review_count') Explanation: Ah, our data aren't fully clean, that's pretty common, for example Montreal vs Montréal. Since we're just doing a minor example we won't spend too much time cleaning our data but let's quickly see how number of reviews stack-up. End of explanation city_state_data['mean_num_reviews_per_biz'] = ( city_state_data.review_count / city_state_data.num_biz.astype('float') ) fig, ax = plt.subplots(figsize=(20,10)) city_state_data.sort_values( by='mean_num_reviews_per_biz', ascending=False ).mean_num_reviews_per_biz.plot(kind='bar', ax=ax) ax.set_yscale('log') ax.set_ylabel('mean_num_reviews_per_biz') Explanation: Uh-oh. More long-tail data, as you may have guessed by now these sorts of patterns are super common when we look at real world applications and web data. Time for us to start thinking about how to deal with it. 
<h1> Tricks for dealing with long-tail data </h1> Let's examine a few simple steps when working with long-tail data before moving back to our problem. The first trick is normalization. When we normalize we want to take out extraneous dependancies. For example we know our above cities probably have different populations that we should think about before trying to determine what we mean by market saturation. The simple raw-number of reviews in say Lead, SD and Las Vegas aren't good indicators of market saturation simply because the population, and thus the number of businesses in them are vastly different. We can take out this population effect by simply dividing our total reviews among the businesses to get the average # reviews per business in a city. End of explanation city_state_data.sort_values( by='mean_num_reviews_per_biz', ascending=False ).head(20) Explanation: So what's wrong with the above, why might the average be misleading here? Think about it for a second then look at the next cell. End of explanation def standard_err(x): return np.std(x)/np.sqrt(len(x)) city_state_data['std_err'] = biz_data.groupby(['state', 'city_key']).agg({ 'review_count': standard_err }) Explanation: So most of our extremely high reviews per biz cities only have a tiny number of businesses so these likely aren't terribly informative. How can we treat these? We need to take into account our knowledge of how many businesses are present as well. End of explanation # cut to eliminate places with no deviation. city_state_data = city_state_data[city_state_data.std_err > 0] city_state_data['lower_conf_mean_review'] = city_state_data.apply( lambda x: max(x.mean_num_reviews_per_biz - 1.96 * x.std_err, 0), 1 ) ax = city_state_data.lower_conf_mean_review.hist(bins=100) ax.set_ylabel('Number of cities') ax.set_xlabel('lower bound on mean reviews per biz') Explanation: Now to use this information. We'll use the standard error to set a lower bound on the mean, we're techinically treating these as normals but this is just an approximation. In reality we would use something like scipy's stats models to get confidence bounds or boostrap but for the tutorial let's keep it simple. End of explanation fig, ax = plt.subplots() city_state_data.lower_conf_mean_review.hist(bins=100, ax=ax, color='#0073bb') ax.set_ylabel('Number of cities') ax.set_xlabel('lower bound on mean reviews per biz') # Add transparent rectangles head_patch = plt.matplotlib.patches.Rectangle((0,0), 1.5, 40, alpha=0.25, color='#41a700') middle_patch = plt.matplotlib.patches.Rectangle((1.5,0), 7, 40, alpha=0.25, color='#0073bb') tail_patch = plt.matplotlib.patches.Rectangle((8.5,0), 70, 40, alpha=0.25, color='#d32323') ax.add_patch(head_patch) ax.add_patch(middle_patch) ax.add_patch(tail_patch) # Add text annotations ax.text(0.5,25,"New Market", color='#41a700', fontsize=16, rotation=90) ax.text(7,25,"Mid Market", color='#0073bb', fontsize=16, rotation=90) ax.text(30,25,"Saturated Market", color='#d32323', fontsize=16) Explanation: <h1> Putting it all together </h1> So we're finally ready to ask our basic question, how can we make a coherent statement of where to focus or SEO efforts. To recap: We have data on the value of how much traffic we hope to drive per review added in our market_data dataframe segemented by market saturation. We have data on per city in the academic dataset how saturated the various markets might be in terms of # reviews per biz in our city_state_data dataframe. 
We have data on the number of reviews per business currently in each city in our biz_data dataframe. <h3> Defining a clear objective </h3> Critical to how we answer a question is identifying what we wish to use to measure success. Here we'll use a very simple proxy. We want to generate the maximal new page-views per year via our SEO startegy, we don't know how much work various types of SEO are but what we want to generate is a good sense of the trade offs. <h3> Assign market-saturation levels to cities </h3> From our data per city we need to define saturation levels. Looking at the distrbution let's make the following distinctions. End of explanation def assign_market_saturation(lower_bound_mean_reviews): saturation_level = None if lower_bound_mean_reviews <= 3: saturation_level = 'new_market' elif lower_bound_mean_reviews <= 9: saturation_level = 'mid_market' else: saturation_level = 'saturated_market' return saturation_level city_state_data['market_saturation'] = city_state_data.lower_conf_mean_review.apply( assign_market_saturation ) city_state_data.sort_values(by='review_count', ascending=False).head().T Explanation: Let's now apply these boundaries to our data cities. End of explanation state_city_saturation_df = city_state_data[['state', 'city_key', 'market_saturation']] biz_data['city_key'] = biz_data.city.apply(clean_city_str) # Merge on market saturation data biz_data = biz_data.merge(state_city_saturation_df, how='inner', on=['state', 'city_key']) Explanation: Cool, now that we know per city the market saturation we can join it onto our per business data. End of explanation lookup_key = { ('<10', 'lower'): 'lower_bound_impact_for_first_ten_reviews', ('<10', 'mean'): 'mean_impact_for_first_ten_reviews', ('<10', 'upper'): 'upper_bound_impact_for_first_ten_reviews', ('10+', 'lower'): 'lower_bound_impact_for_higher_reviews', ('10+', 'mean'): 'mean_impact_for_higher_reviews', ('10+', 'upper'): 'upper_bound_impact_for_higher_reviews' } def yearly_impact_of_one_more_review(biz_row, impact='mean'): impact_measures = market_data[market_data.market_saturation == biz_row.market_saturation] if biz_row.review_count >= 10: lookup_review_count = '10+' else: lookup_review_count = '<10' return impact_measures[lookup_key[(lookup_review_count, impact)]].values[0] biz_data['mean_added_yearly_pageviews'] = biz_data.apply( lambda x: yearly_impact_of_one_more_review(x), 1 ) biz_data['lower_added_yearly_pageviews'] = biz_data.apply( lambda x: yearly_impact_of_one_more_review(x, 'lower'), 1 ) biz_data['upper_added_yearly_pageviews'] = biz_data.apply( lambda x: yearly_impact_of_one_more_review(x, 'upper'), 1 ) biz_data['review_bucket'] = biz_data.review_count.apply(lambda x: '10+' if x >= 10 else '<10') results = biz_data.groupby(['review_bucket']).agg( { 'mean_added_yearly_pageviews': 'sum', 'lower_added_yearly_pageviews': 'sum', 'upper_added_yearly_pageviews': 'sum' } ) results ax = results.T.plot(kind='box') ax.set_ylabel('Expected Pageviews') ax.set_ylim([0, 3000000]) ax.set_xlabel('Review segement to target') Explanation: <h1> Where should we focus? </h1> So let's examine the impact of our focus now, to gauge how we might see impact, suppose we could add 1 review to all businesses below 10 reviews or 1 review to all businesses above 10 reviews, what would we expect to see in page-views over the course of the year? 
End of explanation by_market_saturation_results = biz_data.groupby(['market_saturation', 'review_bucket']).agg( { 'mean_added_yearly_pageviews': 'sum', 'lower_added_yearly_pageviews': 'sum', 'upper_added_yearly_pageviews': 'sum' } ) by_market_saturation_results[ [ 'mean_added_yearly_pageviews', 'lower_added_yearly_pageviews', 'upper_added_yearly_pageviews' ] ] ax = by_market_saturation_results[ [ 'mean_added_yearly_pageviews', 'lower_added_yearly_pageviews', 'upper_added_yearly_pageviews' ] ].T.plot(kind='box') ax.set_yscale('log') plt.xticks(rotation=70) Explanation: So at first glance it looks like we're better off focusing on less reviewed businesses, but is there any subtlties here? End of explanation
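On the bootstrap alternative mentioned in the explanation above (the notebook itself sticks with the normal approximation): a minimal sketch of bootstrapping the lower bound of the mean reviews-per-business for one city could look like the following — the function name, resample count, and alpha default are illustrative choices.
import numpy as np

def bootstrap_lower_bound(review_counts, n_boot=1000, alpha=0.05):
    # Resample the per-business review counts with replacement, take the mean
    # of each resample, and report the lower edge of the central interval.
    review_counts = np.asarray(review_counts)
    means = [np.mean(np.random.choice(review_counts, size=len(review_counts), replace=True))
             for _ in range(n_boot)]
    return np.percentile(means, 100 * alpha / 2)

# illustrative usage on one city's group of businesses:
# lower = bootstrap_lower_bound(biz_data.loc[biz_data.city_key == 'lasvegas', 'review_count'])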
1,669
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href='http Step1: Get the Data We'll work with the Ecommerce Customers csv file from the company. It has Customer info, such as Email, Address, and their color Avatar. Then it also has numerical value columns Step2: Check the head of customers, and check out its info() and describe() methods. Step3: Exploratory Data Analysis Let's explore the data! For the rest of the exercise we'll only be using the numerical data of the csv file. Use seaborn to create a jointplot to compare the Time on Website and Yearly Amount Spent columns. Does the correlation make sense? Step4: Do the same but with the Time on App column instead. Step5: Use jointplot to create a 2D hex bin plot comparing Time on App and Length of Membership. Step6: Let's explore these types of relationships across the entire data set. Use pairplot to recreate the plot below. (Don't worry about the colors) Step7: Atma Step8: Training and Testing Data Now that we've explored the data a bit, let's go ahead and split the data into training and testing sets. Set a variable X equal to the numerical features of the customers and a variable y equal to the "Yearly Amount Spent" column. Step9: Use model_selection.train_test_split from sklearn to split the data into training and testing sets. Set test_size=0.3 and random_state=101 Step10: Training the Model Now it's time to train our model on our training data! Import LinearRegression from sklearn.linear_model Step11: Create an instance of a LinearRegression() model named lm. Step12: Train/fit lm on the training data. Step13: Print out the coefficients of the model Step14: Predicting Test Data Now that we have fit our model, let's evaluate its performance by predicting off the test values! Use lm.predict() to predict off the X_test set of the data. Step15: Create a scatterplot of the real test values versus the predicted values. Step16: Evaluating the Model Let's evaluate our model performance by calculating the residual sum of squares and the explained variance score (R^2). Calculate the Mean Absolute Error, Mean Squared Error, and the Root Mean Squared Error. Refer to the lecture or to Wikipedia for the formulas Step17: Residuals You should have gotten a very good model with a good fit. Let's quickly explore the residuals to make sure everything was okay with our data. Plot a histogram of the residuals and make sure it looks normally distributed. Use either seaborn distplot, or just plt.hist().
Python Code: import pandas as pd import numpy, matplotlib.pyplot as plt import seaborn as sns %matplotlib inline Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a> Linear Regression - Project Exercise Congratulations! You just got some contract work with an Ecommerce company based in New York City that sells clothing online but they also have in-store style and clothing advice sessions. Customers come in to the store, have sessions/meetings with a personal stylist, then they can go home and order either on a mobile app or website for the clothes they want. The company is trying to decide whether to focus their efforts on their mobile app experience or their website. They've hired you on contract to help them figure it out! Let's get started! Just follow the steps below to analyze the customer data (it's fake, don't worry I didn't give you real credit card numbers or emails). Imports Import pandas, numpy, matplotlib,and seaborn. Then set %matplotlib inline (You'll import sklearn as you need it.) End of explanation customers = pd.read_csv('Ecommerce Customers') Explanation: Get the Data We'll work with the Ecommerce Customers csv file from the company. It has Customer info, suchas Email, Address, and their color Avatar. Then it also has numerical value columns: Avg. Session Length: Average session of in-store style advice sessions. Time on App: Average time spent on App in minutes Time on Website: Average time spent on Website in minutes Length of Membership: How many years the customer has been a member. Read in the Ecommerce Customers csv file as a DataFrame called customers. End of explanation customers.head() customers.describe() customers.info() Explanation: Check the head of customers, and check out its info() and describe() methods. End of explanation sns.jointplot(customers['Time on Website'], customers['Yearly Amount Spent']) Explanation: Exploratory Data Analysis Let's explore the data! For the rest of the exercise we'll only be using the numerical data of the csv file. Use seaborn to create a jointplot to compare the Time on Website and Yearly Amount Spent columns. Does the correlation make sense? End of explanation sns.jointplot(customers['Time on App'], customers['Yearly Amount Spent']) Explanation: Do the same but with the Time on App column instead. End of explanation sns.jointplot(customers['Time on App'], customers['Length of Membership'], kind='hex') Explanation: Use jointplot to create a 2D hex bin plot comparing Time on App and Length of Membership. End of explanation sns.pairplot(data=customers) Explanation: Let's explore these types of relationships across the entire data set. Use pairplot to recreate the plot below.(Don't worry about the the colors) End of explanation sns.lmplot('Length of Membership', 'Yearly Amount Spent', data=customers) Explanation: Atma: Inference from pairplot - longer memberships - spend more. important to keep your regular memebers happy - correlation bw time on app and purchases. Focus on app more. Keep website functional - session length - not a strong correlation Based off this plot what looks to be the most correlated feature with Yearly Amount Spent? Length of membership followed by time on app Create a linear model plot (using seaborn's lmplot) of Yearly Amount Spent vs. Length of Membership. End of explanation customers.columns x = customers[['Avg. 
Session Length', 'Time on App', 'Time on Website', 'Length of Membership']] y = customers['Yearly Amount Spent'] Explanation: Training and Testing Data Now that we've explored the data a bit, let's go ahead and split the data into training and testing sets. Set a variable X equal to the numerical features of the customers and a variable y equal to the "Yearly Amount Spent" column. End of explanation from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=101) x_train.shape y_test.shape Explanation: Use model_selection.train_test_split from sklearn to split the data into training and testing sets. Set test_size=0.3 and random_state=101 End of explanation from sklearn.linear_model import LinearRegression Explanation: Training the Model Now it's time to train our model on our training data! Import LinearRegression from sklearn.linear_model End of explanation lm = LinearRegression() Explanation: Create an instance of a LinearRegression() model named lm. End of explanation lm.fit(x_train, y_train) Explanation: Train/fit lm on the training data. End of explanation lm.coef_ pd.DataFrame(lm.coef_, index=x_train.columns, columns=['Coefficients']) lm.intercept_ Explanation: Print out the coefficients of the model End of explanation y_predicted = lm.predict(x_test) Explanation: Predicting Test Data Now that we have fit our model, let's evaluate its performance by predicting off the test values! Use lm.predict() to predict off the X_test set of the data. End of explanation plt.scatter(y_test, y_predicted) # plt.title('Real vs predicted') plt.xlabel('Real - yearly purchases') plt.ylabel('Predicted - yearly purchases') Explanation: Create a scatterplot of the real test values versus the predicted values. End of explanation from sklearn.metrics import mean_absolute_error, mean_squared_error import numpy as np print("MAE: " + str(mean_absolute_error(y_test, y_predicted))) print("MSE: " + str(mean_squared_error(y_test, y_predicted))) print("RMSE: " + str(np.sqrt(mean_squared_error(y_test, y_predicted)))) Explanation: Evaluating the Model Let's evaluate our model performance by calculating the residual sum of squares and the explained variance score (R^2). Calculate the Mean Absolute Error, Mean Squared Error, and the Root Mean Squared Error. Refer to the lecture or to Wikipedia for the formulas End of explanation sns.distplot((y_test - y_predicted), bins=50) Explanation: Residuals You should have gotten a very good model with a good fit. Let's quickly explore the residuals to make sure everything was okay with our data. Plot a histogram of the residuals and make sure it looks normally distributed. Use either seaborn distplot, or just plt.hist(). End of explanation
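The evaluation text above also asks for the explained variance score (R^2), which the printed metrics do not cover; a small addition using the same y_test and y_predicted arrays (sklearn's metrics module provides both scores directly):
from sklearn.metrics import r2_score, explained_variance_score

print("R^2: " + str(r2_score(y_test, y_predicted)))
print("Explained variance: " + str(explained_variance_score(y_test, y_predicted)))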
1,670
Given the following text description, write Python code to implement the functionality described below step by step Description: 1. Combine the two lists Step1: Solution Step2: 2. Create all products of the two lists Step3: Solution Step4: Using product() from itertools with list() Step5: 3. Sort the list Do not do it manually ;) Step6: Solution Step7: 4. Filter the list Filter out all the negative numbers. Step8: Solution Step9: 5. Create a parabola function Define your own parabola function $f(x) = a (x - b)^2 - c$. Solution Step10: 6. Plot two parabolas in subplots Feel free to make it look nice. Step11: Solution Step12: 7. Differentiate a sin function numerically Use the given dataset Step13: Solution Step14: Plot everything Step15: 8. Smooth some noisy data Use the given dataset Step16: Solution Step17: 9. Create a Histogram Create a histogram from the poisson data array. Do not use any built-in numpy or histogram plot function. Step18: Solution Step19: Using python built-in dict and exceptions Step20: Using the defaultdict() from collections with int() Step21: Using Counter from collections Step22: 10. Write your own histogram plotting function Wrap it around the plt.plot function so you can use all of its arguments Step23: Solution Step24: 11. Draw an ASCII Christmas tree and write it to a file Merry Christmas. Feel free to add some UNICODE decoration. Solution
Python Code: l0 = [0, 1, 2, 3, 4, 5] l1 = ['a', 'b', 'c', 'd', 'e', 'f'] Explanation: 1. Combine the two lists End of explanation list(zip(l0, l1)) Explanation: Solution: Use zip() with list() End of explanation l0 = [0, 1, 2] l1 = ['a', 'b', 'c'] Explanation: 2. Create all products of the two lists End of explanation [(i0, i1) for i0 in l0 for i1 in l1] Explanation: Solution: Using list comprehension End of explanation from itertools import product list(product(l0, l1)) Explanation: Using product() from itertools with list() End of explanation l = [12.34, 'z', 3, 'a', 8, 'd', 16.6] Explanation: 3. Sort the list Do not do it manually ;) End of explanation l_str = [i for i in l if isinstance(i, str)] l_nr = [i for i in l if isinstance(i, (float, int))] sorted(l_nr) + sorted(l_str) Explanation: Solution: Split the list into strings and numbers first. Than use sorted on both list and combien them. End of explanation import random random_data = [random.random() - 0.5 for _ in range(10000)] Explanation: 4. Filter the list Filter out all the negativ numbers. End of explanation neg_data = [i for i in random_data if i < 0] neg_data[:10] Explanation: Solution: End of explanation def parabola(x, a = 1, b=0, c=0): return a * (x - b)**2 - c parabola(0, a = 0.5, b=-2, c=3) Explanation: 5. Create a parabola function Define your own parabola function $f(x) = a (x - b)^2 - c$. Solution: End of explanation import matplotlib.pyplot as plt %matplotlib inline Explanation: 6. Plot two parabola in subplots Feel free to make it look nice. End of explanation fig, axs = plt.subplots(1, 2, figsize=(14,4)) x_data0 = [8 * x /1000 for x in range(-1000, 500)] p0 = [parabola(x, a=2, b=-2, c=-3) for x in x_data0] axs[0].plot(x_data0, p0, color='green', ls=':', lw=2, marker='D', markevery=50, ms=5, label=r'$f(x) = 2 (x + 2)^2 + 3$') x_data1 = [8 * x /1000 for x in range(-500, 1000)] p1 = [parabola(x, a=2, b=2, c=-3) for x in x_data1] axs[1].plot(x_data1, p1, color='red', ls='--', lw=2, marker='o', markevery=50, ms=5, label=r'$f(x) = 2 (x - 2)^2 - 3$') for ax in axs: ax.legend(loc=9) Explanation: Solution: End of explanation import math import matplotlib.pyplot as plt %matplotlib inline xdata = [2 * math.pi * x / 1000 for x in range(1000)] ydata = [math.sin(x) for x in xdata] Explanation: 7. Differntiate a sin function numerical Use the given dataset End of explanation dy_data = [(y1 - y0) / (x1 - x0) for y1, y0, x1, x0 in zip(ydata[1:], ydata[:-1], xdata[1:], xdata[:-1])] Explanation: Solution: Use slicing and zip() to differentiate End of explanation plt.figure(figsize=(12, 3)) plt.plot(xdata, ydata, label=r'$sin(x)$') plt.plot(xdata[1:], dy_data, label=r'$\frac{\Delta sin(x)}{\Delta x} $') plt.legend() plt.xlim(0, 6.28) Explanation: Plot everything End of explanation import math import random import matplotlib.pyplot as plt %matplotlib inline xdata = [4 * math.pi * x / 1000 for x in range(1000)] noisy_data = [math.sin(x) + 0.25 * random.random() for x in xdata] Explanation: 8. 
Smooth some noisy data Use the given dataset End of explanation s = 50 smoothed_data = [sum(noisy_data[i:i+s]) / s for i in range(len(noisy_data) - s)] plt.figure(figsize=(12,3)) plt.plot(xdata, noisy_data, ls='', marker='.', ms=3) plt.plot(xdata[s//2:-s//2], smoothed_data, color='red', lw=3) plt.xlim(0, 6.28) Explanation: Solution: End of explanation from numpy.random import poisson data = poisson(lam = 50, size=int(10e6)) # Don't worry too much about the next line it just prints nicely print(str(data[:5])[1:-1], '...', str(data[-5:])[1:-1]) Explanation: 9. Create a Histogram Create a histogram from the poisson data array. Do not use any build in numpy or histogram plot function. End of explanation histo = {} for value in data: if value in histo: histo[value] += 1 else: histo[value] = 1 Explanation: Solution: Using python build in dict and check for key with if in End of explanation histo = {} for value in data: try: histo[value] += 1 except KeyError: histo[value] = 1 Explanation: Using python build in dict and exceptions End of explanation from collections import defaultdict histo = defaultdict(int) for value in data: histo[value] += 1 Explanation: Using the defaultdict() from collections with int() End of explanation from collections import Counter histo = Counter(data) histo Explanation: Using counter from collections End of explanation import matplotlib.pyplot as plt %matplotlib inline Explanation: 10. Write your own histogram plotting function Wrap it around the plt.plot function so you can use all of its arguments End of explanation from collections import Counter def plot_histogram(data, normed=False, *args, **kwargs): histo = Counter(data) bins = list(histo.keys()) freqs = list(histo.values()) if normed: n = sum(freqs) freqs = [freq / n for freq in freqs] plt.plot(bins, freqs, *args, **kwargs) plot_histogram(data, normed=True, ls='--', marker='o', ms=5, color='red') Explanation: Solution: End of explanation ... print(tree(12), star=True) Explanation: 11. Draw a ASCII chrismas tree and write it to a file Merry christmas. Feel free to add some UNICODE decoration. Solution: End of explanation
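The solution cell for exercise 11 is elided to "..." above; purely as one possible sketch (the row widths, the trunk character, and the tree.txt file name are arbitrary choices, not the original author's solution), the tree could be built and saved like this:
def tree(height, char='*'):
    width = 2 * height - 1
    rows = [(char * (2 * i + 1)).center(width) for i in range(height)]
    rows.append('|'.center(width))  # a small trunk
    return '\n'.join(rows)

ascii_tree = tree(12)
print(ascii_tree)
with open('tree.txt', 'w') as f:
    f.write(ascii_tree)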
1,671
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Calculating-Seasonal-Averages-from-Timeseries-of-Monthly-Means-" data-toc-modified-id="Calculating-Seasonal-Averages-from-Timeseries-of-Monthly-Means--1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Calculating Seasonal Averages from Timeseries of Monthly Means </a></span><ul class="toc-item"><li><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Some-calendar-information-so-we-can-support-any-netCDF-calendar." data-toc-modified-id="Some-calendar-information-so-we-can-support-any-netCDF-calendar.-1.0.0.1"><span class="toc-item-num">1.0.0.1&nbsp;&nbsp;</span>Some calendar information so we can support any netCDF calendar.</a></span></li><li><span><a href="#A-few-calendar-functions-to-determine-the-number-of-days-in-each-month" data-toc-modified-id="A-few-calendar-functions-to-determine-the-number-of-days-in-each-month-1.0.0.2"><span class="toc-item-num">1.0.0.2&nbsp;&nbsp;</span>A few calendar functions to determine the number of days in each month</a></span></li><li><span><a href="#Open-the-Dataset" data-toc-modified-id="Open-the-Dataset-1.0.0.3"><span class="toc-item-num">1.0.0.3&nbsp;&nbsp;</span>Open the <code>Dataset</code></a></span></li><li><span><a href="#Now-for-the-heavy-lifting Step1: Some calendar information so we can support any netCDF calendar. Step4: A few calendar functions to determine the number of days in each month If you were just using the standard calendar, it would be easy to use the calendar.month_range function. Step5: Open the Dataset Step6: Now for the heavy lifting
Python Code: %matplotlib inline import numpy as np import pandas as pd import xarray as xr from netCDF4 import num2date import matplotlib.pyplot as plt print("numpy version : ", np.__version__) print("pandas version : ", pd.__version__) print("xarray version : ", xr.__version__) Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Calculating-Seasonal-Averages-from-Timeseries-of-Monthly-Means-" data-toc-modified-id="Calculating-Seasonal-Averages-from-Timeseries-of-Monthly-Means--1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Calculating Seasonal Averages from Timeseries of Monthly Means </a></span><ul class="toc-item"><li><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Some-calendar-information-so-we-can-support-any-netCDF-calendar." data-toc-modified-id="Some-calendar-information-so-we-can-support-any-netCDF-calendar.-1.0.0.1"><span class="toc-item-num">1.0.0.1&nbsp;&nbsp;</span>Some calendar information so we can support any netCDF calendar.</a></span></li><li><span><a href="#A-few-calendar-functions-to-determine-the-number-of-days-in-each-month" data-toc-modified-id="A-few-calendar-functions-to-determine-the-number-of-days-in-each-month-1.0.0.2"><span class="toc-item-num">1.0.0.2&nbsp;&nbsp;</span>A few calendar functions to determine the number of days in each month</a></span></li><li><span><a href="#Open-the-Dataset" data-toc-modified-id="Open-the-Dataset-1.0.0.3"><span class="toc-item-num">1.0.0.3&nbsp;&nbsp;</span>Open the <code>Dataset</code></a></span></li><li><span><a href="#Now-for-the-heavy-lifting:" data-toc-modified-id="Now-for-the-heavy-lifting:-1.0.0.4"><span class="toc-item-num">1.0.0.4&nbsp;&nbsp;</span>Now for the heavy lifting:</a></span></li></ul></li></ul></li></ul></li></ul></div> Calculating Seasonal Averages from Timeseries of Monthly Means Author: Joe Hamman The data used for this example can be found in the xray-data repository. You may need to change the path to rasm.nc below. Suppose we have a netCDF or xray Dataset of monthly mean data and we want to calculate the seasonal average. To do this properly, we need to calculate the weighted average considering that each month has a different number of days. Suppose we have a netCDF or xarray.Dataset of monthly mean data and we want to calculate the seasonal average. To do this properly, we need to calculate the weighted average considering that each month has a different number of days. End of explanation dpm = {'noleap': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31], '365_day': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31], 'standard': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31], 'gregorian': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31], 'proleptic_gregorian': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31], 'all_leap': [0, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31], '366_day': [0, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31], '360_day': [0, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30]} Explanation: Some calendar information so we can support any netCDF calendar. 
End of explanation def leap_year(year, calendar='standard'): Determine if year is a leap year leap = False if ((calendar in ['standard', 'gregorian', 'proleptic_gregorian', 'julian']) and (year % 4 == 0)): leap = True if ((calendar == 'proleptic_gregorian') and (year % 100 == 0) and (year % 400 != 0)): leap = False elif ((calendar in ['standard', 'gregorian']) and (year % 100 == 0) and (year % 400 != 0) and (year < 1583)): leap = False return leap def get_dpm(time, calendar='standard'): return a array of days per month corresponding to the months provided in `months` month_length = np.zeros(len(time), dtype=np.int) cal_days = dpm[calendar] for i, (month, year) in enumerate(zip(time.month, time.year)): month_length[i] = cal_days[month] if leap_year(year, calendar=calendar): month_length[i] += 1 return month_length Explanation: A few calendar functions to determine the number of days in each month If you were just using the standard calendar, it would be easy to use the calendar.month_range function. End of explanation ds = xr.tutorial.open_dataset('rasm').load() print(ds) Explanation: Open the Dataset End of explanation # Make a DataArray with the number of days in each month, size = len(time) month_length = xr.DataArray(get_dpm(ds.time.to_index(), calendar='noleap'), coords=[ds.time], name='month_length') # Calculate the weights by grouping by 'time.season'. # Conversion to float type ('astype(float)') only necessary for Python 2.x weights = month_length.groupby('time.season') / month_length.astype(float).groupby('time.season').sum() # Test that the sum of the weights for each season is 1.0 np.testing.assert_allclose(weights.groupby('time.season').sum().values, np.ones(4)) # Calculate the weighted average ds_weighted = (ds * weights).groupby('time.season').sum(dim='time') print(ds_weighted) # only used for comparisons ds_unweighted = ds.groupby('time.season').mean('time') ds_diff = ds_weighted - ds_unweighted # Quick plot to show the results notnull = pd.notnull(ds_unweighted['Tair'][0]) fig, axes = plt.subplots(nrows=4, ncols=3, figsize=(14,12)) for i, season in enumerate(('DJF', 'MAM', 'JJA', 'SON')): ds_weighted['Tair'].sel(season=season).where(notnull).plot.pcolormesh( ax=axes[i, 0], vmin=-30, vmax=30, cmap='Spectral_r', add_colorbar=True, extend='both') ds_unweighted['Tair'].sel(season=season).where(notnull).plot.pcolormesh( ax=axes[i, 1], vmin=-30, vmax=30, cmap='Spectral_r', add_colorbar=True, extend='both') ds_diff['Tair'].sel(season=season).where(notnull).plot.pcolormesh( ax=axes[i, 2], vmin=-0.1, vmax=.1, cmap='RdBu_r', add_colorbar=True, extend='both') axes[i, 0].set_ylabel(season) axes[i, 1].set_ylabel('') axes[i, 2].set_ylabel('') for ax in axes.flat: ax.axes.get_xaxis().set_ticklabels([]) ax.axes.get_yaxis().set_ticklabels([]) ax.axes.axis('tight') ax.set_xlabel('') axes[0, 0].set_title('Weighted by DPM') axes[0, 1].set_title('Equal Weighting') axes[0, 2].set_title('Difference') plt.tight_layout() fig.suptitle('Seasonal Surface Air Temperature', fontsize=16, y=1.02) # Wrap it into a simple function def season_mean(ds, calendar='standard'): # Make a DataArray of season/year groups year_season = xr.DataArray(ds.time.to_index().to_period(freq='Q-NOV').to_timestamp(how='E'), coords=[ds.time], name='year_season') # Make a DataArray with the number of days in each month, size = len(time) month_length = xr.DataArray(get_dpm(ds.time.to_index(), calendar=calendar), coords=[ds.time], name='month_length') # Calculate the weights by grouping by 'time.season' weights = 
month_length.groupby('time.season') / month_length.groupby('time.season').sum() # Test that the sum of the weights for each season is 1.0 np.testing.assert_allclose(weights.groupby('time.season').sum().values, np.ones(4)) # Calculate the weighted average return (ds * weights).groupby('time.season').sum(dim='time') Explanation: Now for the heavy lifting: We first have to come up with the weights: - calculate the month lengths for each monthly data record - calculate weights using groupby('time.season') Finally, we just need to multiply our weights by the Dataset and sum along the time dimension. End of explanation
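To see the weighting idea in isolation, without loading the rasm dataset, here is a minimal self-contained sketch in plain NumPy. The month lengths and temperatures are made-up numbers chosen only to show that a day-weighted seasonal mean differs from a plain mean; this is not part of the original example.
import numpy as np

# Hypothetical DJF season of a non-leap year: December, January, February.
month_days = np.array([31.0, 31.0, 28.0])
monthly_tair = np.array([-20.0, -25.0, -15.0])   # made-up monthly mean temperatures

weights = month_days / month_days.sum()
print(weights.sum())                    # 1.0 by construction, like the assert_allclose test above
print(monthly_tair.mean())              # unweighted seasonal mean
print((monthly_tair * weights).sum())   # day-weighted seasonal mean, what season_mean computes per season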
1,672
Given the following text description, write Python code to implement the functionality described below step by step Description: Analysis of volatile moves in Python and Pandas 2 This is an example of how to look for the relationship between a specific condition and an outcome. In the following article I describe what effect a volatile bar, described in the previous article, has on the next price bar. At the same time I will try to answer the following questions Step1: Downloading sample data First I prepare some sample data. As an example I use the SPY market, an ETF that tracks the main US stock index, the S&P 500. Step2: In the previous article I defined how to find a volatile bar. As a reminder, it is a bar that is larger than the 4 preceding bars. So that I do not have to write this code over and over, I created a personal package containing the analyses I use. This is a clear advantage of Python and other programming languages. The volatile_bars function takes as its first parameter a data table of type pandas.DataFrame, which must contain the columns Open, High, Low, Close. Its rows are market bars, so data of a different timeframe can be used just as well. The volatile_bars function creates a new column named VolBar, in which volatile bars are marked with the value True and all other bars with the value False. Step3: Pandas.DataFrame has a simple function called pct_change. It gives me the percentage change of the current row's value relative to the previous one. In my case this simply means finding out how much today's Close changed, in percent, from yesterday's. I store the result in a new column named pct_change. Note Step4: Now comes the principle of the analysis that I want to highlight with this article. Analysis of the effect of a given condition on a future outcome As the condition I take the volatile bar, which is marked in the VolBar column, and the price change in percent is in the pct_change column. I can now shift the data in the pct_change column one row up and so obtain the percentage change of the Close price (the outcome) of the next bar in the row marked as a VolBar (the condition). Step5: I also use a filter to split the data into rising volatile bars and falling volatile bars. Step6: The variable up_volbars now contains only the rows of rising volatile bars => their closing price Close is higher than their opening price Open. The opposite condition holds for the falling volatile bars in the variable down_volbars. Probability of a reversal after a volatile bar Preparing the filters that define a reversal move after a volatile bar. A positive pct_change-1 means the next bar is rising and a negative one means it is falling. Step7: Volatile up bar Computing the probability that a volatile bar is followed by a falling bar Step8: Volatile down bar Step9: Analysis of the distribution of returns after a volatile bar Step10: Distribution of the size of returns of a down reversal after an up volatile move The sizes of the returns (percentage changes of the close) for a reversal after a rising volatile bar can be displayed with a histogram. Pandas.DataFrame.hist() is a function that uses the matplotlib library to display the data. Step11: I prefer, however, the capabilities of the seaborn library, which takes pandas.DataFrame and pandas.Series as input data and can display them nicely for statistical purposes. The function Pandas.DataFrame.describe() computes basic statistics of the given data, such as the mean, the standard deviation, etc.
Step12: Distribution of the size of returns of an up reversal after a down volatile move A similar breakdown for the reversal move after a falling volatile bar Step13: Result From the analysis described above I found the following characteristics for volatile bars in the SPY market over the period from 1.1.2005 to 31.12.2018
Python Code: import sys import pandas as pd import pandas_datareader as pdr import pandas_datareader.data as web import matplotlib import seaborn as sns import datetime print('Python', sys.version) print('Pandas', pd.__version__) print('Pandas-datareader', pdr.__version__) print('Matplotlib', matplotlib.__version__) print('Seaborn', sns.__version__) Explanation: Analýza volatilních pohybů v Pythonu a Pandas 2 Jde o příklad, jak hledat vztah konkrétní podmínky na výsledek. V následujícím článku popisuji, jaký má vliv volatilní úsečka, popsaná v předchozím článku, na následující cenovou úsečku. Zároveň se budu snažit odpovědět na následující otázky: * Má cenu po volatilní úsečce nakupovat nebo prodávat? * Jaká je pravděpodobnost, že následující úsečka bude pokračovat v pohybu, nebo se vrátí? * Kam umístit target? Pro analýzu jsem použil následující moduly: End of explanation start = datetime.datetime(2005, 1, 1) end = datetime.datetime(2018, 12, 31) spy_data = web.DataReader('SPY', 'yahoo', start, end) spy_data = spy_data.drop(['Volume', 'Adj Close'], axis=1) # sloupce 'Volume' a 'Adj Close' nebudu potřebovat spy_data.tail() Explanation: Stažení vzorových dat Nejprve si připravím vzorová data. Například využiji trh SPY, což je ETF kopírující hlavní americký akciový index S&P 500. End of explanation # vhat je můj osobní balíček analytických funkcí, které nechci psát pokaždé znovu a znovu from vhat.analyse.volbars import volatile_bars volatile_bars(spy_data, N=4, drop_calculations=False) spy_data.tail() Explanation: V předchozím článku jsem definoval, jak najdu volatilní svíčku. Připomínám, že se jedná o svíčku, která je větší než 4 předchozí svíčky. Abych tento kód nemusel psát pořád dokola, vytvořil jsem si osobní balíček, obsahující mé používané analýzy. Zde je jednoznačná výhoda pythonu a jiných programovacích jazyků. Funkce volatile_bars přebírá jako první parametr tabulku dat typu pandas.DataFrame, která musí obsahovat sloupečky Open, High, Low, Close. Jako jednotlivé řádky bere data svíček trhů, takže tam klidně můžu mít data jiného timeframe. Funkce volatile_bars vytvoří nový sloupeček s názvem VolBar, ve kterém označí volatilní svíčky hodnotou True a všechny ostatní svíčky hodnotou False. End of explanation spy_data['pct_change'] = spy_data.Close.pct_change() spy_data.head() Explanation: Pandas.DataFrame má jednoduchou funkci, která se nazývá pct_change. Ta mi zjistí procentuální změnu hodnoty aktuálního řádku od přechozího. V mém případě to v jednoduchosti znamená, že zjistím, jak se dnešní Close procentuálně změnilo od včerejška. Výsledek vložím do nového sloupečku s názvem pct_change. Pozn.: Uvažuji obchodování na Close cenách jednotlivých svíček. End of explanation spy_data['pct_change-1'] = spy_data['pct_change'].shift(-1) spy_data[spy_data['VolBar']].head() Explanation: Nyní přichází na řadu princip analýzy, na který chci tímhle článkem upozornit. Analýza vlivu určité podmínky na budoucí výsledek Jako podmínku beru volatilní úsečku, tu mám označenou ve sloupci VolBar a změnu ceny v procentech mám ve sloupečku pct_change. Nyní můžu posunout data ve sloupečku pct_change o jeden řádek výše a získám tak procentuální změnu Close ceny (výsledek) v následující svíčce v řádku s označeným VolBarem (podmínka). 
End of explanation upcandle_filter = spy_data['C-O'] >= 0 # long up_volbars = spy_data[spy_data['VolBar'] & upcandle_filter] down_volbars = spy_data[spy_data['VolBar'] & ~upcandle_filter] down_volbars.head() Explanation: Ještě pomocí filtru rozdělím na stoupající volatilní úsečku a klesající volatilní úsečku. End of explanation down_revers_filter = up_volbars['pct_change-1']<0 up_revers_filter = down_volbars['pct_change-1']>0 Explanation: Proměnná up_volbars nyní obsahuje pouze řádky stoupajících volatilních úseček => jejich uzavírací cena Close je výšší než otevírací cena Open. Pro klesající volatilní úsečky v proměnné down_volbars platí opačná podmínka. Pravděpodobnost návratu po reverzní svíčce Příprava filtrů definujících reverzní pohyb po volatilní úsečce. Kladný pct_change-1 znamená, že je následující svíčka stoupající a záporný klesající. End of explanation perc = up_volbars[down_revers_filter].shape[0] / up_volbars.shape[0] print('Probability of the following short bar after the volatile up bar:', f'{perc*100:.2f}%') Explanation: Volatilní up svíčka Výpočet pravděpodobnosti, že po volatilní úsečce bude následovat klesající úsečka: End of explanation perc = down_volbars[up_revers_filter].shape[0] / down_volbars.shape[0] print('Probability of the following up bar after the volatile short bar:', f'{perc*100:.2f}%') Explanation: Volatilní down svíčka End of explanation BINS = 50 # pouze pomocná proměnná, která určuje, kolik maximálně má být zobrazeno sloupečků v histogramu Explanation: Analýza rozložení výnosů po volatilní svíčce End of explanation up_volbars.loc[down_revers_filter,'pct_change-1'].hist(bins=BINS); Explanation: Rozložení velikosti výnosů down reverzu po up volatilním pohybu Velikosti výnosů (procentuální změny close) pro reverz po volatilní stoupající úsečce si můžu zobrazit pomocí histogramu. Pandas.DataFrame.hist() je funkce, která využívá knihovnu matplotlib pro zobrazení dat. End of explanation import seaborn as sns sns.distplot(up_volbars.loc[down_revers_filter,'pct_change-1'], bins=BINS) up_volbars.loc[down_revers_filter,'pct_change-1'].describe() Explanation: Více se mi ale líbí možnosti knihovny seaborn, která využívá pandas.DataFrame a pandas.Series jako vstupní data a pro statistické účely je dokáže pěkně zobrazit. Funkce Pandas.DataFrame.describe() vypočítá pro dané data základní statistické údaje jako jsou průměr, směrodatná odchylka, atd. End of explanation down_volbars.loc[up_revers_filter,'pct_change-1'].hist(bins=BINS); sns.distplot(down_volbars.loc[up_revers_filter,'pct_change-1'], bins=BINS) down_volbars.loc[up_revers_filter,'pct_change-1'].describe() Explanation: Rozložení velikosti výnosů up reverzu po down volatilním pohybu Podobný rozbor pro reverzní pohyb po volatilní klesající úsečce: End of explanation spy_data.Close.plot(); Explanation: Výsledek Z výše popsané analýzy jsem zjistil následující charakteristiku pro volatilní svíčky v trhu SPY v časovém období od 1.1.2005 do 31.12.2018: Po volatilní stoupající svíčce následovala reverzní svíce (klesající) v 44.83% případů. Po volatilní klesající svíčce následovala reverzní svíce (stoupající) v 57.89% případů. Už tenhle první výsledek nám ukazuje, že: Má větší smysl hledat v trhu SPY reverz po klesající volatilní úsečce směrem nahoru. Samozřejmě je to dáno charakterem trhu SPY. Na grafu SPY je možné vidět na první pohled, že má spíše tendenci stoupat: End of explanation
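The condition-versus-outcome pattern used in this analysis (mark the condition in one column, shift the outcome column by -1, then count) can be reproduced on synthetic data. A minimal sketch, independent of the author's vhat package; the volatile-bar rule below (range larger than each of the previous four ranges) is a simplified stand-in, not the original implementation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0, 1, 500))
open_ = np.roll(close, 1)
open_[0] = close[0]
df = pd.DataFrame({'Open': open_, 'Close': close})
df['range'] = (df['Close'] - df['Open']).abs()

# Simplified stand-in for a volatile bar: range bigger than each of the 4 previous ranges.
df['VolBar'] = df['range'] > df['range'].shift(1).rolling(4).max()

df['pct_change'] = df['Close'].pct_change()
df['pct_change-1'] = df['pct_change'].shift(-1)            # outcome: the next bar's move

down_vol = df[df['VolBar'] & (df['Close'] < df['Open'])]   # condition: falling volatile bar
p_up_after_down = (down_vol['pct_change-1'] > 0).mean()    # P(next bar up | condition)
print(f'{p_up_after_down * 100:.2f}%')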
1,673
Given the following text description, write Python code to implement the functionality described below step by step Description: Preprocessing and Pipelines Step1: For cross-validated pipelines that include scaling, we need to estimate the mean and standard deviation separately for each fold. To do that, we build a pipeline. Step2: Cross-validation with a pipeline Step3: Grid Search with a pipeline
Python Code: from sklearn.datasets import load_digits from sklearn.cross_validation import train_test_split digits = load_digits() X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target) Explanation: Preprocessing and Pipelines End of explanation from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.preprocessing import StandardScaler pipeline = Pipeline([("scaler", StandardScaler()), ("svm", SVC())]) # in new versions: make_pipeline(StandardScaler(), SVC()) pipeline.fit(X_train, y_train) pipeline.predict(X_test) Explanation: For cross-validated pipelines that include scaling, we need to estimate the mean and standard deviation separately for each fold. To do that, we build a pipeline. End of explanation from sklearn.cross_validation import cross_val_score cross_val_score(pipeline, X_train, y_train) Explanation: Cross-validation with a pipeline End of explanation import numpy as np from sklearn.grid_search import GridSearchCV param_grid = {'svm__C': 10. ** np.arange(-3, 3), 'svm__gamma' : 10. ** np.arange(-3, 3)} grid_pipeline = GridSearchCV(pipeline, param_grid=param_grid, n_jobs=-1) grid_pipeline.fit(X_train, y_train) grid_pipeline.score(X_test, y_test) Explanation: Grid Search with a pipeline End of explanation
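A note on the imports above: sklearn.cross_validation and sklearn.grid_search were deprecated in scikit-learn 0.18 and removed in later releases; the same objects now live in sklearn.model_selection. A sketch of the identical workflow with the newer module paths (intended to be behaviourally equivalent, minor version differences aside):
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)

pipeline = Pipeline([("scaler", StandardScaler()), ("svm", SVC())])
print(cross_val_score(pipeline, X_train, y_train))   # the scaler is re-fit inside every fold

param_grid = {'svm__C': 10. ** np.arange(-3, 3), 'svm__gamma': 10. ** np.arange(-3, 3)}
grid_pipeline = GridSearchCV(pipeline, param_grid=param_grid, n_jobs=-1)
grid_pipeline.fit(X_train, y_train)
print(grid_pipeline.score(X_test, y_test))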
1,674
Given the following text description, write Python code to implement the functionality described below step by step Description: Visual Overview of Plotting Functions We've talked a lot about laying things out, etc, but we haven't talked about actually plotting data yet. Matplotlib has a number of different plotting functions -- many more than we'll cover here, in fact. There's a more complete list in the pyplot documentation, and matplotlib gallery is a great place to get examples of all of them. However, a full list and/or the gallery can be a bit overwhelming at first. Instead we'll condense it down and give you a look at some of the ones you're most likely to use, and then go over a subset of those in more detail. Here's a simplified visual overview of matplotlib's most commonly used plot types. Let's browse through these, and then we'll go over a few in more detail. Clicking on any of these images will take you to the code that generated them. We'll skip that for now, but feel browse through it later. The Basics Step1: Note that we held on to what ax.bar(...) returned. Matplotlib plotting methods return an Artist or a sequence of artists. Anything you can see in a matplotlib figure/axes/etc is an Artist of some sort. Most of the time, you will not need to retain these returned objects. You will want to capture them for special customizing that may not be possible through the normal plotting mechanism. Let's re-visit that last example and modify what's plotted. In the case of bar, a container artist is returned, so we'll modify its contents instead of the container itself (thus for bar in vert_bars). Step2: Keep in mind that any plotting method in matplotlib returns the artists that are plotted. We'll use it again, particularly when we get to adding colorbars to images. Filled Regions Step3: However, it can also be used to fill between two curves. This is particularly useful when you want to show an envelope of some sort (e.g. error, confidence, amplitude, etc). Step4: Exercise 2.1 Step5: Input Data Step6: You may notice that colorbar is a Figure method and not an Axes method. That's because colorbar doesn't operate on the axes. Instead, it shrinks the current axes by a bit, adds a new axes to the figure, and places the colorbar on that axes. The new axes that fig.colorbar creates is fairly limited in where it can be positioned. For example, it's always outside the axes it "steals" room from. Sometimes you may want to avoid "stealing" room from an axes or maybe even have the colorbar inside another axes. In that case, you can manually create the axes for the colorbar and position it where you'd like Step7: One note Step8: In this case, we'd really like the white in the colormap to correspond to 0. A quick way to do this is to make the vmin equal to the negative of the vmax. Step9: vmin and vmax are also very useful when we want multiple plots to share one colorbar, as our next exercise will do. Exercise 2.2
Python Code: np.random.seed(1) x = np.arange(5) y = np.random.randn(5) fig, axes = plt.subplots(ncols=2, figsize=plt.figaspect(1./2)) vert_bars = axes[0].bar(x, y, color='lightblue', align='center') horiz_bars = axes[1].barh(x, y, color='lightblue', align='center') # I'll also introduce axhline & axvline to draw a line all the way across the axes # This can be a quick-n-easy way to draw an axis "spine". axes[0].axhline(0, color='gray', linewidth=2) axes[1].axvline(0, color='gray', linewidth=2) plt.show() Explanation: Visual Overview of Plotting Functions We've talked a lot about laying things out, etc, but we haven't talked about actually plotting data yet. Matplotlib has a number of different plotting functions -- many more than we'll cover here, in fact. There's a more complete list in the pyplot documentation, and matplotlib gallery is a great place to get examples of all of them. However, a full list and/or the gallery can be a bit overwhelming at first. Instead we'll condense it down and give you a look at some of the ones you're most likely to use, and then go over a subset of those in more detail. Here's a simplified visual overview of matplotlib's most commonly used plot types. Let's browse through these, and then we'll go over a few in more detail. Clicking on any of these images will take you to the code that generated them. We'll skip that for now, but feel browse through it later. The Basics: 1D series/points What we've mentioned so far <a href="examples/plot_example.py"><img src="images/plot_example.png"></a> <a href="examples/scatter_example.py"><img src="images/scatter_example.png"></a> Other common plot types <a href="examples/bar_example.py"><img src="images/bar_example.png"></a> <a href="examples/fill_example.py"><img src="images/fill_example.png"></a> 2D Arrays and Images <a href="examples/imshow_example.py"><img src="images/imshow_example.png"></a> <a href="examples/pcolor_example.py"><img src="images/pcolor_example.png"></a> <a href="examples/contour_example.py"><img src="images/contour_example.png"></a> Vector Fields <a href="examples/vector_example.py"><img src="images/vector_example.png"></a> Data Distributions <a href="examples/statistical_example.py"><img src="images/statistical_example.png"></a> Detailed Examples (of a few of these) Input Data: 1D Series We've briefly mentioned ax.plot(x, y) and ax.scatter(x, y) to draw lines and points, respectively. We'll cover some of their options (markers, colors, linestyles, etc) in the next section. Let's move on to a couple of other common plot types. Bar Plots: ax.bar(...) and ax.barh(...) <img src="images/bar_example.png"> Bar plots are one of the most common plot types. Matplotlib's ax.bar(...) method can also plot general rectangles, but the default is optimized for a simple sequence of x, y values, where the rectangles have a constant width. There's also ax.barh(...) (for horizontal), which makes a constant-height assumption instead of a constant-width assumption. End of explanation fig, ax = plt.subplots() vert_bars = ax.bar(x, y, color='lightblue', align='center') # We could have also done this with two separate calls to `ax.bar` and numpy boolean indexing. for bar in vert_bars: if bar.xy[1] < 0: bar.set(edgecolor='darkred', color='salmon', linewidth=3) plt.show() Explanation: Note that we held on to what ax.bar(...) returned. Matplotlib plotting methods return an Artist or a sequence of artists. Anything you can see in a matplotlib figure/axes/etc is an Artist of some sort. 
Most of the time, you will not need to retain these returned objects. You will want to capture them for special customizing that may not be possible through the normal plotting mechanism. Let's re-visit that last example and modify what's plotted. In the case of bar, a container artist is returned, so we'll modify its contents instead of the container itself (thus for bar in vert_bars). End of explanation np.random.seed(1) y = np.random.randn(100).cumsum() x = np.linspace(0, 10, 100) fig, ax = plt.subplots() ax.fill_between(x, y, color='lightblue') plt.show() Explanation: Keep in mind that any plotting method in matplotlib returns the artists that are plotted. We'll use it again, particularly when we get to adding colorbars to images. Filled Regions: ax.fill(x, y), fill_between(...), etc <img src="images/fill_example.png"> Of these functions, ax.fill_between(...) is probably the one you'll use the most often. In its most basic form, it fills between the given y-values and 0: End of explanation x = np.linspace(0, 10, 200) y1 = 2 * x + 1 y2 = 3 * x + 1.2 y_mean = 0.5 * x * np.cos(2*x) + 2.5 * x + 1.1 fig, ax = plt.subplots() # Plot the envelope with `fill_between` ax.fill_between(x, y1, y2, color='yellow') # Plot the "centerline" with `plot` ax.plot(x, y_mean, color='black') plt.show() Explanation: However, it can also be used to fill between two curves. This is particularly useful when you want to show an envelope of some sort (e.g. error, confidence, amplitude, etc). End of explanation import numpy as np import matplotlib.pyplot as plt np.random.seed(1) # Generate data... y_raw = np.random.randn(1000).cumsum() + 15 x_raw = np.linspace(0, 24, y_raw.size) # Get averages of every 100 samples... x_pos = x_raw.reshape(-1, 100).min(axis=1) y_avg = y_raw.reshape(-1, 100).mean(axis=1) y_err = y_raw.reshape(-1, 100).ptp(axis=1) bar_width = x_pos[1] - x_pos[0] # Make a made up future prediction with a fake confidence x_pred = np.linspace(0, 30) y_max_pred = y_avg[0] + y_err[0] + 2.3 * x_pred y_min_pred = y_avg[0] - y_err[0] + 1.2 * x_pred # Just so you don't have to guess at the colors... barcolor, linecolor, fillcolor = 'wheat', 'salmon', 'lightblue' # Now you're on your own! Explanation: Exercise 2.1: Now let's try combining bar and fill_between: Can you reproduce the figure below? <img src="images/exercise_2.1-bar_and_fill_between.png"> End of explanation from matplotlib.cbook import get_sample_data data = np.load(get_sample_data('axes_grid/bivariate_normal.npy')) fig, ax = plt.subplots() im = ax.imshow(data, cmap='gist_earth') fig.colorbar(im) plt.show() Explanation: Input Data: 2D Arrays or Images There are several options for plotting 2D datasets. imshow, pcolor, and pcolormesh have a lot of overlap, at first glance. (The example image below is meant to clarify that somewhat.) <img src="images/imshow_example.png"> <img src="images/pcolor_example.png"> In short, imshow can interpolate and display large arrays very quickly, while pcolormesh and pcolor are much slower, but can handle flexible (i.e. more than just rectangular) arrangements of cells. We won't dwell too much on the differences and overlaps here. They have overlapping capabilities, but different default behavior because their primary use-cases are a bit different (there's also matshow, which is imshow with different defaults). Instead we'll focus on what they have in common. 
imshow, pcolor, pcolormesh, scatter, and any other matplotlib plotting methods that map a range of data values onto a colormap will return artists that are instances of ScalarMappable. In practice, what that means is a) you can display a colorbar for them, and b) they share several keyword arguments. Colorbars Let's add a colorbar to the figure to display what colors correspond to values of data we've plotted. End of explanation fig, ax = plt.subplots() cax = fig.add_axes([0.27, 0.8, 0.5, 0.05]) im = ax.imshow(data, cmap='gist_earth') fig.colorbar(im, cax=cax, orientation='horizontal') plt.show() Explanation: You may notice that colorbar is a Figure method and not an Axes method. That's because colorbar doesn't operate on the axes. Instead, it shrinks the current axes by a bit, adds a new axes to the figure, and places the colorbar on that axes. The new axes that fig.colorbar creates is fairly limited in where it can be positioned. For example, it's always outside the axes it "steals" room from. Sometimes you may want to avoid "stealing" room from an axes or maybe even have the colorbar inside another axes. In that case, you can manually create the axes for the colorbar and position it where you'd like: End of explanation from matplotlib.cbook import get_sample_data data = np.load(get_sample_data('axes_grid/bivariate_normal.npy')) fig, ax = plt.subplots() im = ax.imshow(data, cmap='seismic', interpolation='nearest') fig.colorbar(im) plt.show() Explanation: One note: In the last module in this tutorial, we'll briefly cover axes_grid, which is very useful for aligning colorbars and/or other axes with images displayed with imshow. ### Shared parameters for imshow, pcolormesh, contour, scatter, etc As we mentioned earlier, any plotting method that creates a ScalarMappable will have some common kwargs. The ones you'll use the most frequently are: cmap : The colormap (or name of the colormap) used to display the input. (We'll go over the different colormaps in the next section.) vmin : The minimum data value that will correspond to the "bottom" of the colormap (defaults to the minimum of your input data). vmax : The maximum data value that will correspond to the "top" of the colormap (defaults to the maximum of your input data). norm : A Normalize instance to control how the data values are mapped to the colormap. By default, this will be a linear scaling between vmin and vmax, but other norms are available (e.g. LogNorm, PowerNorm, etc). vmin and vmax are particularly useful. Quite often, you'll want the colors to be mapped to a set range of data values, which aren't the min/max of your input data. For example, you might want a symmetric ranges of values around 0. As an example of that, let's use a divergent colormap with the data we showed earlier. We'll also use interpolation="nearest" to "turn off" interpolation of the cells in the input dataset: End of explanation fig, ax = plt.subplots() im = ax.imshow(data, cmap='seismic', interpolation='nearest', vmin=-2, vmax=2) fig.colorbar(im) plt.show() Explanation: In this case, we'd really like the white in the colormap to correspond to 0. A quick way to do this is to make the vmin equal to the negative of the vmax. End of explanation %load exercises/2.2-vmin_vmax_imshow_and_colorbars.py import numpy as np import matplotlib.pyplot as plt np.random.seed(1) # Generate random data with different ranges... data1 = np.random.random((10, 10)) data2 = 2 * np.random.random((10, 10)) data3 = 3 * np.random.random((10, 10)) # Set up our figure and axes... 
fig, axes = plt.subplots(ncols=3, figsize=plt.figaspect(0.5)) fig.tight_layout() # Make the subplots fill up the figure a bit more... cax = fig.add_axes([0.25, 0.1, 0.55, 0.03]) # Add an axes for the colorbar # Now you're on your own! Explanation: vmin and vmax are also very useful when we want multiple plots to share one colorbar, as our next exercise will do. Exercise 2.2: Can you reproduce the figure below? <img src="images/exercise_2.2-vmin_vmax_imshow_and_colorbars.png"> End of explanation
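One possible way to finish Exercise 2.2 (a hedged sketch, not the tutorial's official solution; the colormap choice is a guess): give all three imshow calls the same vmin/vmax so that a single colorbar describes every panel, then attach the colorbar to a manually created axes.
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(1)
data1 = np.random.random((10, 10))
data2 = 2 * np.random.random((10, 10))
data3 = 3 * np.random.random((10, 10))

fig, axes = plt.subplots(ncols=3, figsize=plt.figaspect(0.5))
fig.tight_layout()
cax = fig.add_axes([0.25, 0.1, 0.55, 0.03])   # axes reserved for the shared colorbar

# A shared vmin/vmax is what makes one colorbar meaningful for all three panels.
for ax, data, title in zip(axes, [data1, data2, data3], ['data1', 'data2', 'data3']):
    im = ax.imshow(data, vmin=0, vmax=3, interpolation='nearest', cmap='gist_earth')
    ax.set_title(title)
fig.colorbar(im, cax=cax, orientation='horizontal')
plt.show()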
1,675
Given the following text description, write Python code to implement the functionality described below step by step Description: Model My Watershed (MMW) API Demo Step1: MMW production API endpoint base url. Step2: The job is not completed instantly and the results are not returned directly by the API request that initiated the job. The user must first issue an API request to confirm that the job is complete, then fetch the results. The demo presented here performs automated retries (checks) until the server confirms the job is completed, then requests the JSON results and converts (deserializes) them into a Python dictionary. Step3: 2. Construct AOI GeoJSON for job request Parameters passed to the "analyze" API requests. Step4: 3. Issue job requests, fetch job results when done, then examine results. Repeat for each request type Step5: Issue job request Step6: Everything below is just exploration of the results. Examine the content of the results (as JSON, and Python dictionaries) Step7: result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items Step8: Issue job request Step9: result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items Step10: Issue job request Step11: result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items
Python Code: import json import requests from requests.adapters import HTTPAdapter from requests.packages.urllib3.util.retry import Retry def requests_retry_session( retries=3, backoff_factor=0.3, status_forcelist=(500, 502, 504), session=None, ): session = session or requests.Session() retry = Retry( total=retries, read=retries, connect=retries, backoff_factor=backoff_factor, status_forcelist=status_forcelist, ) adapter = HTTPAdapter(max_retries=retry) session.mount('http://', adapter) session.mount('https://', adapter) return session Explanation: Model My Watershed (MMW) API Demo: Analyze land properties Emilio Mayorga, University of Washington, Seattle. 2018-5-17 (minor updates to documentation on 2018-8-19). Demo put together using as a starting point instructions from Azavea from October 2017. See also the related, previous notebook, https://github.com/WikiWatershed/model-my-watershed/blob/develop/doc/MMW_API_watershed_demo.ipynb Introduction The Model My Watershed API allows you to delineate watersheds and analyze geo-data for watersheds and arbitrary areas. You can read more about the work at WikiWatershed or use the web app. MMW users can discover their API keys through the user interface, and test the MMW geoprocessing API on either the live or staging apps. An Account page with the API key is available from either app (live or staging). To see it, go to the app, log in, and click on "Account" in the dropdown that appears when you click on your username in the top right. Your key is different between staging and production. For testing with the live (production) API and key, go to https://modelmywatershed.org/api/docs/ The API can be tested from the command line using curl. This example uses the production API to test the watershed endpoint: bash curl -H "Content-Type: application/json" -H "Authorization: Token YOUR_API_KEY" -X POST -d '{ "location": [39.67185,-75.76743] }' https://modelmywatershed.org/api/watershed/ MMW API: Obtain land properties based on "analyze" geoprocessing on AOI (small box around a point) 1. Set up End of explanation api_url = "https://modelmywatershed.org/api/" Explanation: MMW production API endpoint base url. End of explanation def get_job_result(api_url, s, jobrequest): url_tmplt = api_url + "jobs/{job}/" get_url = url_tmplt.format result = '' while not result: get_req = requests_retry_session(session=s).get(get_url(job=jobrequest['job'])) result = json.loads(get_req.content)['result'] return result s = requests.Session() APIToken = '<YOUR API TOKEN STRING>' # ENTER YOUR OWN API TOKEN s.headers.update({ 'Authorization': APIToken, 'Content-Type': 'application/json' }) Explanation: The job is not completed instantly and the results are not returned directly by the API request that initiated the job. The user must first issue an API request to confirm that the job is complete, then fetch the results. The demo presented here performs automated retries (checks) until the server confirms the job is completed, then requests the JSON results and converts (deserializes) them into a Python dictionary. End of explanation from shapely.geometry import box, MultiPolygon width = 0.0004 # Looks like using a width smaller than 0.0002 causes a problem with the API? # GOOS: (-88.5552, 40.4374) elev 240.93. Agriculture Site—Goose Creek (Corn field) Site (GOOS) at IML CZO # SJER: (-119.7314, 37.1088) elev 403.86. 
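The curl example quoted above can be reproduced with the requests_retry_session helper just defined; a sketch that reuses the endpoint, headers and example coordinates from the text (the token is still a placeholder):
import json

session = requests_retry_session()
session.headers.update({
    'Content-Type': 'application/json',
    'Authorization': 'Token YOUR_API_KEY',   # placeholder, exactly as in the curl example
})
resp = session.post('https://modelmywatershed.org/api/watershed/',
                    data=json.dumps({'location': [39.67185, -75.76743]}),
                    timeout=60)
print(resp.status_code)
print(resp.json())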
San Joaquin Experimental Reserve Site (SJER) at South Sierra CZO lon, lat = -119.7314, 37.1088 bbox = box(lon-0.5*width, lat-0.5*width, lon+0.5*width, lat+0.5*width) payload = MultiPolygon([bbox]).__geo_interface__ json_payload = json.dumps(payload) payload Explanation: 2. Construct AOI GeoJSON for job request Parameters passed to the "analyze" API requests. End of explanation # convenience function, to simplify the request calls, below def analyze_api_request(api_name, s, api_url, json_payload): post_url = "{}analyze/{}/".format(api_url, api_name) post_req = requests_retry_session(session=s).post(post_url, data=json_payload) jobrequest_json = json.loads(post_req.content) # Fetch and examine job result result = get_job_result(api_url, s, jobrequest_json) return result Explanation: 3. Issue job requests, fetch job results when done, then examine results. Repeat for each request type End of explanation result = analyze_api_request('land/2011_2011', s, api_url, json_payload) Explanation: Issue job request: analyze/land/2011_2011/ End of explanation type(result), result.keys() Explanation: Everything below is just exploration of the results. Examine the content of the results (as JSON, and Python dictionaries) End of explanation result['survey'].keys() categories = result['survey']['categories'] len(categories), categories[1] land_categories_nonzero = [d for d in categories if d['coverage'] > 0] land_categories_nonzero Explanation: result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items: displayName, name, categories. The first two are just labels. The data are in the categories item. End of explanation result = analyze_api_request('terrain', s, api_url, json_payload) Explanation: Issue job request: analyze/terrain/ End of explanation categories = result['survey']['categories'] len(categories), categories [d for d in categories if d['type'] == 'average'] Explanation: result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items: displayName, name, categories. The first two are just labels. The data are in the categories item. End of explanation result = analyze_api_request('climate', s, api_url, json_payload) Explanation: Issue job request: analyze/climate/ End of explanation categories = result['survey']['categories'] len(categories), categories[:2] ppt = [d['ppt'] for d in categories] tmean = [d['tmean'] for d in categories] # ppt is in cm, right? sum(ppt) import calendar import numpy as np calendar.mdays # Annual tmean needs to be weighted by the number of days per month sum(np.asarray(tmean) * np.asarray(calendar.mdays[1:]))/365 Explanation: result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items: displayName, name, categories. The first two are just labels. The data are in the categories item. End of explanation
1,676
Given the following text description, write Python code to implement the functionality described below step by step Description: An Introduction to Matplotlib Tenzing HY Joshi Step1: Where's the plot to this story? By default, with pyplot the interactive Mode is turned off. That means that the state of our Figure is updated on every plt command, but only drawn when we ask for it to be drawn plt.draw() and shown when we ask for it to be shown plt.show(). So lets have a look at what happened. Step2: Interactive mode on or off is a preference. See how it works for your workflow. plt.ion() can be used to turn interactive mode on plt.ioff() then turns it off For now lets switch over to the %pylab notebook configuration to make it easier on ourselves. Step3: Some Simple Plots Step4: Lots of kwargs to modify your plot A few that I find most useful are Step5: Nice scatter Example from the MPL website. Note that the kwargs are different here. Quick inspection of the docs is handy (shift + tab in jupy notebooks). Step6: Loads of examples and plot types in the Matplotlib.org Gallery Its worth looking through some examples just to get a feel for what types of plots are available and how they are used. Figures and Axes Working with MPL Figure and Axes objects gives you more control. You can quickly make multiple plots, shared axes, etc... on the same Figure. Figure command Subplot command Different sized subplots Axes controls ticks, labels, else? Step7: fig.add_axes is another option for adding axes as you wish. * Relative lower left corner x and y coordinates * Relative x and y spans of the axes Step8: plt.subplots gives an alternative route, creating all of the axes at once. Less flexability since you'll end up with a grid of subplots, but thats exactly what you want a lot of the time. sharex and sharey kwargs do exactly that for all of the axes. Step9: Colors and colormaps MPL has a variety of Colormaps to choose from. I also use the python library Palettable to gain access to a few other colors and colormaps in convienent ways. I won't use this library today, but if you're interested in some other options from what MPL has it is worth a look. Step10: Colormap normalization can also be pretty handy! They are found in matplotlib.colors. Lets look at Lograthmic (LogNorm), but also symmetric log, power law, discrete bounds, and custom ranges available. * Colormap Normalization Step11: Lines and text Adding horizontal and vertical lines hlines and vlines Adding text to your figures is also often needed. Step12: Lets use vertical lines to represent the means of our distributions instead of plotting all of them. We'll also add some text to describe these vertical lines. Step13: We can do the same with horizontal lines Step14: Displaying images Loading image data is supported by the Pillow library. Natively, matplotlib only supports PNG images. The commands shown below fall back on Pillow if the native read fails. Matplotlib plotting can handle float32 and uint8, but image reading/writing for any format other than PNG is limited to uint8 data. Step15: Lets plot the R, G, and B components of this image.
Python Code: import matplotlib as mpl mpl # I normally prototype my code in an editor + ipy terminal. # In those cases I import pyplot and numpy via import matplotlib.pyplot as plt import numpy as np # In Jupy notebooks we've got magic functions and pylab gives you pyplot as plt and numpy as np # %pylab # Additionally, inline will let you plot inline of the notebook # %pylab inline # And notebook, as I've just found out gives you some resizing etc... tools inline. # %pylab notebook y = np.ones(10) for x in range(2,10): y[x] = y[x-2] + y[x-1] plt.plot(y) plt.title('This story') Explanation: An Introduction to Matplotlib Tenzing HY Joshi: thjoshi@lbl.gov Nick Swanson-Hysell: swanson-hysell@berkeley.edu What is Matplotlib? matplotlib is a library for making 2D plots of arrays in Python. ... matplotlib is designed with the philosophy that you should be able to create simple plots with just a few commands, or just one! ... The matplotlib code is conceptually divided into three parts: the pylab interface is the set of functions provided by matplotlib.pylab which allow the user to create plots with code quite similar to MATLAB figure generating code (Pyplot tutorial). The matplotlib frontend or matplotlib API is the set of classes that do the heavy lifting, creating and managing figures, text, lines, plots and so on (Artist tutorial). This is an abstract interface that knows nothing about output. The backends are device-dependent drawing devices, aka renderers, that transform the frontend representation to hardcopy or a display device (What is a backend?). Resources Matplotlib website Python Programming tutorial Sci-Py Lectures What I'll touch on Importing Simple plots Figures and Axes Useful plot types Formatting End of explanation plt.show() print('I can not run this command until I close the window because interactive mode is turned off') Explanation: Where's the plot to this story? By default, with pyplot the interactive Mode is turned off. That means that the state of our Figure is updated on every plt command, but only drawn when we ask for it to be drawn plt.draw() and shown when we ask for it to be shown plt.show(). So lets have a look at what happened. End of explanation %pylab inline # Set default figure size for your viewing pleasure... pylab.rcParams['figure.figsize'] = (10.0, 7.0) Explanation: Interactive mode on or off is a preference. See how it works for your workflow. plt.ion() can be used to turn interactive mode on plt.ioff() then turns it off For now lets switch over to the %pylab notebook configuration to make it easier on ourselves. End of explanation x = np.linspace(0,5,100) y = np.random.exponential(1./3., 100) # Make a simply plot of x vs y, Set the points to have an 'x' marker. plt.plot(x,y, c='r',marker='x') # Label our x and y axes and give the plot a title. plt.xlabel('Sample time (au)') plt.ylabel('Exponential Sample (au)') plt.title('See the trend?') Explanation: Some Simple Plots End of explanation x = np.linspace(0,6.,1000.) # Alpha = 0.5, color = red, linstyle = dotted, linewidth = 3, label = x plt.plot(x, x, alpha = 0.5, c = 'r', ls = ':', lw=3., label='x') # Alpha = 0.5, color = blue, linstyle = solid, linewidth = 3, label = x**(3/2) # Check out the LaTeX! plt.plot(x, x**(3./2), alpha = 0.5, c = 'b', ls = '-', lw=3., label=r'x$^{3/2}$') # And so on... 
plt.plot(x, x**2, alpha = 0.5, c = 'g', ls = '--', lw=3., label=r'x$^2$') plt.plot(x, np.log(1+x)*20., alpha = 0.5, c = 'c', ls = '-.', lw=3., label='log(1+x)') # Add a legend (loc gives some options about where the legend is placed) plt.legend(loc=2) Explanation: Lots of kwargs to modify your plot A few that I find most useful are: * alpha * color or c * linestyle or ls * linewidth or lw * marker * markersize or ms * label End of explanation N = 50 x = np.random.rand(N) y = np.random.rand(N) colors = np.random.rand(N) area = np.pi * (15 * np.random.rand(N))**2 # 0 to 15 point radiuses # size = area variable, c = colors variable x = plt.scatter(x, y, s=area, c=colors, alpha=0.4) plt.show() N=10000 values1 = np.random.normal(25., 3., N) values2 = np.random.normal(33., 8., N/7) valuestot = np.concatenate([values1,values2]) binedges = np.arange(0,101,1) bincenters = (binedges[1:] + binedges[:-1])/2. # plt.hist gives you the ability to histogram and plot all in one command. x1 = plt.hist(valuestot, bins=binedges, color='g', alpha=0.5, label='total') x2 = plt.hist(values2, bins=binedges, color='r', alpha=0.5, histtype='step', linewidth=3, label='values 1') x3 = plt.hist(values1, bins=binedges, color='b', alpha=0.5, histtype='step', linewidth=3, label='values 2') plt.legend(loc=7) Explanation: Nice scatter Example from the MPL website. Note that the kwargs are different here. Quick inspection of the docs is handy (shift + tab in jupy notebooks). End of explanation fig = plt.figure(figsize=(10,6)) # Make an axes as if the figure had 1 row, 2 columns and it would be the first of the two sub-divisions. ax1 = fig.add_subplot(121) plot1 = ax1.plot([1,2,3,4,1,0]) ax1.set_xlabel('time since start of talk') ax1.set_ylabel('interest level') ax1.set_xbound([-1.,6.]) # Make an axes as if the figure had 1 row, 2 columns and it would be the second of the two sub-divisions. ax2 = fig.add_subplot(122) plot2 = ax2.scatter([1,1,1,2,2,2,3,3,3,4,4,4], [1,2,3]*4) ax2.set_title('A commentary on chairs with wheels') print(plot1) print(plot2) Explanation: Loads of examples and plot types in the Matplotlib.org Gallery Its worth looking through some examples just to get a feel for what types of plots are available and how they are used. Figures and Axes Working with MPL Figure and Axes objects gives you more control. You can quickly make multiple plots, shared axes, etc... on the same Figure. Figure command Subplot command Different sized subplots Axes controls ticks, labels, else? End of explanation fig2 = plt.figure(figsize=(10,10)) ax1 = fig2.add_axes([0.1,0.1,0.8,0.4]) histvals = ax1.hist(np.random.exponential(0.5,5000), bins=np.arange(0,5, 0.1)) ax1.set_xlabel('Sampled Value') ax1.set_ylabel('Counts per bin') ax2 = fig2.add_axes([0.3,0.55, 0.7, 0.45]) ax2.plot([13,8,5,3,2,1,1],'r:',lw=3) Explanation: fig.add_axes is another option for adding axes as you wish. * Relative lower left corner x and y coordinates * Relative x and y spans of the axes End of explanation import scipy.stats as stats # With subplots we can make all of the axes at ones. # The axes are return in a list of lists. f, [[ax0, ax1], [ax2, ax3]] = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=False) # Remove the space between the top and bottom rows of plots # wspace would do the same for left and right columns... f.subplots_adjust(hspace=0) ax0.plot(range(50,250), np.exp(np.arange(50,250) / 23.) 
) ax2.scatter(np.random.normal(125,27,100), np.random.binomial(200,0.4,100)) ax1.plot(range(0,300), np.random.exponential(0.5,300), 'g') ax3.plot(range(0,300), stats.norm.pdf(np.arange(0,300),150, 30) , 'g') Explanation: plt.subplots gives an alternative route, creating all of the axes at once. Less flexability since you'll end up with a grid of subplots, but thats exactly what you want a lot of the time. sharex and sharey kwargs do exactly that for all of the axes. End of explanation plt.colormaps() cmap0 = plt.cm.cubehelix cmap1 = plt.cm.Accent cmap2 = plt.cm.Set1 cmap3 = plt.cm.Spectral colmaps = [cmap0,cmap1,cmap2,cmap3] Ncolors = 12 col0 = cmap0(np.linspace(0,1,Ncolors)) f, [[ax0, ax1], [ax2, ax3]] = plt.subplots(nrows=2, ncols=2, figsize=(13,13)) x = np.linspace(0.01,100,1000) for idx, axis in enumerate([ax0,ax1,ax2,ax3]): colormap = colmaps[idx] colors = colormap(np.linspace(0,1,Ncolors)) axis.set_title(colormap.name) for val in range(Ncolors): axis.plot(x,x**(1.0 + 0.1 * val), c=colors[val], lw=3, label=val) axis.loglog() Explanation: Colors and colormaps MPL has a variety of Colormaps to choose from. I also use the python library Palettable to gain access to a few other colors and colormaps in convienent ways. I won't use this library today, but if you're interested in some other options from what MPL has it is worth a look. End of explanation # Lets look at a two distributions on an exponential noise background... Nnoise = 475000 Nnorm1 = 10000 Nnorm2 = 15000 # Uniform noise in x, exponential in y xnoise = np.random.rand(Nnoise) * 100 ynoise = np.random.exponential(250,475000) # Uniform in X, normal in Y xnorm1 = np.random.rand(Nnorm1) * 100 ynorm1 = np.random.normal(800, 50, Nnorm1) # Normal in X and Y xnorm2 = np.random.normal(50, 30, 15000) ynorm2 = np.random.normal(200, 25, 15000) xtot = np.concatenate([xnoise, xnorm1, xnorm2]) ytot = np.concatenate([ynoise, ynorm1, ynorm2]) xbins = np.arange(0,100,10) ybins = np.arange(0,1000,10) H, xe, ye = np.histogram2d(xtot, ytot, bins=[xbins, ybins]) X,Y = np.meshgrid(ybins,xbins) fig4 = plt.figure(figsize=(13,8)) ax1 = fig4.add_axes([0.1,0.1,0.35,0.4]) ax2 = fig4.add_axes([0.5,0.1,0.35,0.4]) pcolplot = ax1.pcolor(X, Y, H, cmap=cm.GnBu) ax1.set_title('Linear Color Scale') plt.colorbar(pcolplot, ax=ax1) from matplotlib.colors import LogNorm pcolplot2 = ax2.pcolor(X, Y, H, norm=LogNorm(vmin=H.min(), vmax=H.max()), cmap=cm.GnBu) ax2.set_title('Log Color Scale') plt.colorbar(pcolplot2, ax=ax2) Explanation: Colormap normalization can also be pretty handy! They are found in matplotlib.colors. Lets look at Lograthmic (LogNorm), but also symmetric log, power law, discrete bounds, and custom ranges available. * Colormap Normalization End of explanation xvals = np.arange(0,120,0.1) # Define a few functions to use f1 = lambda x: 50. * np.exp(-x/20.) f2 = lambda x: 30. * stats.norm.pdf(x, loc=25,scale=5) f3 = lambda x: 200. * stats.norm.pdf(x,loc=40,scale=10) f4 = lambda x: 25. * stats.gamma.pdf(x, 8., loc=45, scale=4.) # Normalize to define PDFs pdf1 = f1(xvals) / (f1(xvals)).sum() pdf2 = f2(xvals) / (f2(xvals)).sum() pdf3 = f3(xvals) / (f3(xvals)).sum() pdf4 = f4(xvals) / (f4(xvals)).sum() # Combine them and normalize again pdftot = pdf1 + pdf2 + pdf3 + pdf4 pdftot = pdftot / pdftot.sum() fig5 = plt.figure(figsize=(11,8)) ax3 = fig5.add_axes([0.1,0.1,0.9,0.9]) # Plot the pdfs, and the total pdf lines = ax3.plot(xvals, pdf1,'r', xvals,pdf2,'b', xvals,pdf3,'g', xvals,pdf4,'m') lines = ax3.plot(xvals, pdftot, 'k', lw=5.) 
Explanation: Lines and text Adding horizontal and vertical lines hlines and vlines Adding text to your figures is also often needed. End of explanation # Calculate the mean mean1 = (xvals * pdf1).sum() mean2 = (xvals * pdf2).sum() mean3 = (xvals * pdf3).sum() mean4 = (xvals * pdf4).sum() fig6 = plt.figure(figsize=(11,8)) ax4 = fig6.add_axes([0.1,0.1,0.9,0.9]) # Plot the total PDF ax4.plot(xvals, pdftot, 'k', lw=5.) # Grabe the limits of the y-axis for defining the extent of our vertical lines axmin, axmax = ax4.get_ylim() # Draw vertical lines. (x location, ymin, ymax, color, linestyle) ax4.vlines(mean1, axmin, axmax, 'r',':') ax4.vlines(mean2, axmin, axmax, 'b',':') ax4.vlines(mean3, axmin, axmax, 'g',':') ax4.vlines(mean4, axmin, axmax, 'm',':') # Add some text to figure to describe the curves # (xloc, yloc, text, color, fontsize, rotation, ...) ax4.text(mean1-18, 0.0028, r'mean of $f_1(X)$', color='r', fontsize=18) ax4.text(mean2+1, 0.0005, r'mean of $f_2(X)$', color='b', fontsize=18) ax4.text(mean3+1, 0.0002, r'mean of $f_3(X)$', color='g', fontsize=18) ax4.text(mean4+1, 0.0028, r'mean of $f_4(X)$', color='m', fontsize=18, rotation=-25) temp = ax4.text(50, 0.0009, r'$f_{tot}(X)$', color='k', fontsize=22) Explanation: Lets use vertical lines to represent the means of our distributions instead of plotting all of them. We'll also add some text to describe these vertical lines. End of explanation # Compute CDFs cdf1 = pdf1.cumsum() cdf2 = pdf2.cumsum() cdf3 = pdf3.cumsum() cdf4 = pdf4.cumsum() cdftot = pdftot.cumsum() fig7 = plt.figure(figsize=(11,8)) ax7 = fig7.add_axes([0.1,0.1,0.9,0.9]) # Plot them ax7.plot(xvals, cdftot, 'k', lw=3) ax7.plot(xvals, cdf1, 'r', ls=':', lw=2) ax7.plot(xvals, cdf2, 'b', ls=':', lw=2) ax7.plot(xvals, cdf3, 'g', ls=':', lw=2) ax7.plot(xvals, cdf4, 'm', ls=':', lw=2) # Force the y limits to be (0,1) ax7.set_ylim(0,1.) # Add 50% and 90% lines. ax7.hlines(0.5, 0, 120., 'k', '--', lw=2) ax7.hlines(0.95, 0, 120., 'k', '--', lw=3) # Add some text ax7.set_title('CDFs of dists 1-4 and total with 50% and 95% bounds') ax7.text(110, 0.46, r'$50\%$ ', color='k', fontsize=20) temp = ax7.text(110, 0.91, r'$95\%$ ', color='k', fontsize=20) Explanation: We can do the same with horizontal lines End of explanation import matplotlib.image as mpimg img=mpimg.imread('Tahoe.png') imgplot = plt.imshow(img) Explanation: Displaying images Loading image data is supported by the Pillow library. Natively, matplotlib only supports PNG images. The commands shown below fall back on Pillow if the native read fails. Matplotlib plotting can handle float32 and uint8, but image reading/writing for any format other than PNG is limited to uint8 data. End of explanation f, [ax0,ax1,ax2] = plt.subplots(nrows=3, ncols=1, figsize=(10,15)) f.subplots_adjust(hspace=0.05) for ax in [ax0,ax1,ax2]: # ax.set_xticklabels([]) ax.set_xticks([]) ax.set_yticklabels([]) ax0.imshow(img[:,:,0], cmap=cm.Spectral) ax1.imshow(img[:,:,1], cmap=cm.Spectral) ax2.imshow(img[:,:,2], cmap=cm.Spectral) Explanation: Lets plot the R, G, and B components of this image. End of explanation
1,677
Given the following text description, write Python code to implement the functionality described below step by step Description: Active Subspaces Example Function Step1: First we draw M samples randomly from the input space. Step2: Now we normalize the sampled values of the input parameters. The uniform inputs are linearly scaled to the interval $[-1, 1]$, normal inputs are scaled like $\frac{x-\mu}{\sigma}$, and logs of lognormal inputs are scaled like normal inputs. Step3: Compute gradients at the normalized input values to approximate the matrix on which the active subspace is based. Step4: Now we use our gradient samples to compute the active subspace. Step5: We use plotting utilities to plot eigenvalues, subspace error, components for the first two eigenvectors, and 1D and 2D sufficient summary plots (plots of function values vs. active variable values).
Python Code: import active_subspaces as ac import numpy as np %matplotlib inline # The borehole_functions.py file contains two functions: the borehole function (borehole(xx)) # and its gradient (borehole_grad(xx)). Each takes an Mx8 matrix (M is the number of data # points) with rows being normalized inputs; borehole returns a column vector of function # values at each row of the input and borehole_grad returns a matrix whose ith row is the # gradient of borehole at the ith row of x with respect to the normalized inputs from borehole_functions import * Explanation: Active Subspaces Example Function: Borehole Water Flow Ryan Howard, CO School of Mines, &#114;&#121;&#104;&#111;&#119;&#97;&#114;&#100;&#64;&#109;&#105;&#110;&#101;&#115;&#46;&#101;&#100;&#117; Paul Constantine, CO School of Mines, &#112;&#99;&#111;&#110;&#115;&#116;&#97;&#110;&#64;&#109;&#105;&#110;&#101;&#115;&#46;&#101;&#100;&#117; <br> In this tutorial, we'll be demonstrating active subspaces on the function $$ f = \frac{2\pi T_u(H_u-H_l)}{\ln (r/r_w)\left(1+\frac{2LT_u}{\ln(r/r_w)r_w^2K_w}+\frac{T_u}{T_l}\right)}, $$ as seen on http://www.sfu.ca/~ssurjano/borehole.html. This function describes water flow through a borehole, and its inputs and their distributions are described in the table below. Variable|Symbol|Distribution (U(min, max), N(mu, sigma), or LN(mu, sigma)) :-----|:-----:|:----- radius of borehole|$r_w$|N(.1, 0.0161812) radius of influence|$r$|LN(7.71, 1.0056) transmissivity of upper aquifer|$T_u$|U(63070, 115600) potentiometric head of upper aquifer|$H_u$|U(990, 1110) transmissivity of lower aquifer|$T_l$|U(63.1, 116) potentiometric head of lower aquifer|$H_l$|U(700, 820) length of borehole|$L$|U(1120, 1680) hydraulic conductivity of borehole|$K_w$|U(9855, 12045) End of explanation M = 1000 #This is the number of data points to use #Sample the input space according to the distributions in the table above rw = np.random.normal(.1, .0161812, (M, 1)) r = np.exp(np.random.normal(7.71, 1.0056, (M, 1))) Tu = np.random.uniform(63070, 115600, (M, 1)) Hu = np.random.uniform(990, 1110, (M, 1)) Tl = np.random.uniform(63.1, 116, (M, 1)) Hl = np.random.uniform(700, 820, (M, 1)) L = np.random.uniform(1120, 1680, (M, 1)) Kw = np.random.uniform(9855, 12045, (M, 1)) #the input matrix x = np.hstack((rw, r, Tu, Hu, Tl, Hl, L, Kw)) Explanation: First we draw M samples randomly from the input space. End of explanation #Upper and lower limits for uniform-bounded inputs xl = np.array([63070, 990, 63.1, 700, 1120, 9855]) xu = np.array([115600, 1110, 116, 820, 1680, 12045]) #XX = normalized input matrix XX = ac.utils.misc.BoundedNormalizer(xl, xu).normalize(x[:, 2:]) #normalize non-uniform inputs rw_norm = ((rw - .1)/.0161812).reshape(M, 1) r_norm = np.log(r); r_norm = ((r_norm - 7.71)/1.0056).reshape(M, 1) XX = np.hstack((rw_norm, r_norm, XX)) Explanation: Now we normalize the sampled values of the input parameters. The uniform inputs are linearly scaled to the interval $[-1, 1]$, normal inputs are scaled like $\frac{x-\mu}{\sigma}$, and logs of lognormal inputs are scaled like normal inputs. End of explanation #output values (f) and gradients (df) f = borehole(XX) df = borehole_grad(XX) Explanation: Compute gradients at the normalized input values to approximate the matrix on which the active subspace is based. End of explanation #Set up our subspace using the gradient samples ss = ac.subspaces.Subspaces() ss.compute(df=df, nboot=500) Explanation: Now we use our gradient samples to compute the active subspace. 
End of explanation #Component labels in_labels = ['rw', 'r', 'Tu', 'Hu', 'Tl', 'Hl', 'L', 'Kw'] #plot eigenvalues, subspace errors ac.utils.plotters.eigenvalues(ss.eigenvals, ss.e_br) ac.utils.plotters.subspace_errors(ss.sub_br) #manually make the subspace 2D for the eigenvector and 2D summary plots ss.partition(2) #Compute the active variable values y = XX.dot(ss.W1) #Plot eigenvectors, sufficient summaries ac.utils.plotters.eigenvectors(ss.W1, in_labels=in_labels) ac.utils.plotters.sufficient_summary(y, f) Explanation: We use plotting utilities to plot eigenvalues, subspace error, components for the first two eigenvectors, and 1D and 2D sufficient summary plots (plots of function values vs. active variable values). End of explanation
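For readers without the active_subspaces/PyAI packages installed, the core computation behind ss.compute(df=df) in the row above can be sketched with plain numpy: build C = (1/M) * sum of outer products of the sampled gradients, eigendecompose it, and keep the leading eigenvectors as the active directions. This is only a conceptual sketch under stated assumptions (no bootstrap intervals, no partitioning heuristics), and the random gradients below merely stand in for borehole_grad(XX).

import numpy as np

# Stand-in gradient samples; in the notebook these come from borehole_grad(XX)
M, m = 1000, 8
rng = np.random.default_rng(0)
df = rng.normal(size=(M, m)) * np.array([10.0, 0.1, 1.0, 5.0, 0.5, 5.0, 3.0, 2.0])

# C is estimated as the average outer product of the gradient samples
C = df.T @ df / M

# Symmetric eigendecomposition, sorted so the largest eigenvalues come first
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# Keep the leading k eigenvectors as the active-subspace basis W1
k = 2
W1 = evecs[:, :k]

# Active-variable values for normalized inputs XX would then be y = XX @ W1
print(evals)
print(W1.shape)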
1,678
Given the following text description, write Python code to implement the functionality described below step by step Description: EEG processing and Event Related Potentials (ERPs) For a generic introduction to the computation of ERP and ERF see tut_epoching_and_averaging. Here we cover the specifics of EEG, namely Step1: Setup for reading the raw data Step2: Let's restrict the data to the EEG channels Step3: By looking at the measurement info you will see that we have now 59 EEG channels and 1 EOG channel Step4: In practice it's quite common to have some EEG channels that are actually EOG channels. To change a channel type you can use the Step5: And to change the nameo of the EOG channel Step6: Let's reset the EOG channel back to EOG type. Step7: The EEG channels in the sample dataset already have locations. These locations are available in the 'loc' of each channel description. For the first channel we get Step8: And it's actually possible to plot the channel locations using Step9: Setting EEG montage In the case where your data don't have locations you can set them using a Step10: To apply a montage on your data use the set_montage method. function. Here don't actually call this function as our demo dataset already contains good EEG channel locations. Next we'll explore the definition of the reference. Setting EEG reference Let's first remove the reference from our Raw object. This explicitly prevents MNE from adding a default EEG average reference required for source localization. Step11: We next define Epochs and compute an ERP for the left auditory condition. Step12: Average reference Step13: Custom reference Step14: Evoked arithmetics Trial subsets from Epochs can be selected using 'tags' separated by '/'. Evoked objects support basic arithmetic. First, we create an Epochs object containing 4 conditions. Step15: Next, we create averages of stimulation-left vs stimulation-right trials. We can use basic arithmetic to, for example, construct and plot difference ERPs. Step16: This is an equal-weighting difference. If you have imbalanced trial numbers, you could also consider either equalizing the number of events per condition (using Step17: This can be simplified with a Python list comprehension Step18: Often, it makes sense to store Evoked objects in a dictionary or a list - either different conditions, or different subjects.
Python Code: import mne from mne.datasets import sample Explanation: EEG processing and Event Related Potentials (ERPs) For a generic introduction to the computation of ERP and ERF see tut_epoching_and_averaging. Here we cover the specifics of EEG, namely: - setting the reference - using standard montages :func:`mne.channels.Montage` - Evoked arithmetic (e.g. differences) End of explanation data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' # these data already have an EEG average reference raw = mne.io.read_raw_fif(raw_fname, preload=True) Explanation: Setup for reading the raw data End of explanation raw.pick_types(meg=False, eeg=True, eog=True) Explanation: Let's restrict the data to the EEG channels End of explanation print(raw.info) Explanation: By looking at the measurement info you will see that we have now 59 EEG channels and 1 EOG channel End of explanation raw.set_channel_types(mapping={'EOG 061': 'eeg'}) print(raw.info) Explanation: In practice it's quite common to have some EEG channels that are actually EOG channels. To change a channel type you can use the :func:mne.io.Raw.set_channel_types method. For example to treat an EOG channel as EEG you can change its type using End of explanation raw.rename_channels(mapping={'EOG 061': 'EOG'}) Explanation: And to change the nameo of the EOG channel End of explanation raw.set_channel_types(mapping={'EOG': 'eog'}) Explanation: Let's reset the EOG channel back to EOG type. End of explanation print(raw.info['chs'][0]['loc']) Explanation: The EEG channels in the sample dataset already have locations. These locations are available in the 'loc' of each channel description. For the first channel we get End of explanation raw.plot_sensors() raw.plot_sensors('3d') # in 3D Explanation: And it's actually possible to plot the channel locations using :func:mne.io.Raw.plot_sensors. End of explanation montage = mne.channels.read_montage('standard_1020') print(montage) Explanation: Setting EEG montage In the case where your data don't have locations you can set them using a :class:mne.channels.Montage. MNE comes with a set of default montages. To read one of them do: End of explanation raw_no_ref, _ = mne.set_eeg_reference(raw, []) Explanation: To apply a montage on your data use the set_montage method. function. Here don't actually call this function as our demo dataset already contains good EEG channel locations. Next we'll explore the definition of the reference. Setting EEG reference Let's first remove the reference from our Raw object. This explicitly prevents MNE from adding a default EEG average reference required for source localization. End of explanation reject = dict(eeg=180e-6, eog=150e-6) event_id, tmin, tmax = {'left/auditory': 1}, -0.2, 0.5 events = mne.read_events(event_fname) epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax, reject=reject) evoked_no_ref = mne.Epochs(raw_no_ref, **epochs_params).average() del raw_no_ref # save memory title = 'EEG Original reference' evoked_no_ref.plot(titles=dict(eeg=title), time_unit='s') evoked_no_ref.plot_topomap(times=[0.1], size=3., title=title, time_unit='s') Explanation: We next define Epochs and compute an ERP for the left auditory condition. 
End of explanation raw.del_proj() raw_car, _ = mne.set_eeg_reference(raw, 'average', projection=True) evoked_car = mne.Epochs(raw_car, **epochs_params).average() del raw_car # save memory title = 'EEG Average reference' evoked_car.plot(titles=dict(eeg=title), time_unit='s') evoked_car.plot_topomap(times=[0.1], size=3., title=title, time_unit='s') Explanation: Average reference: This is normally added by default, but can also be added explicitly. End of explanation raw_custom, _ = mne.set_eeg_reference(raw, ['EEG 001', 'EEG 002']) evoked_custom = mne.Epochs(raw_custom, **epochs_params).average() del raw_custom # save memory title = 'EEG Custom reference' evoked_custom.plot(titles=dict(eeg=title), time_unit='s') evoked_custom.plot_topomap(times=[0.1], size=3., title=title, time_unit='s') Explanation: Custom reference: Use the mean of channels EEG 001 and EEG 002 as a reference End of explanation event_id = {'left/auditory': 1, 'right/auditory': 2, 'left/visual': 3, 'right/visual': 4} epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax, reject=reject) epochs = mne.Epochs(raw, **epochs_params) print(epochs) Explanation: Evoked arithmetics Trial subsets from Epochs can be selected using 'tags' separated by '/'. Evoked objects support basic arithmetic. First, we create an Epochs object containing 4 conditions. End of explanation left, right = epochs["left"].average(), epochs["right"].average() # create and plot difference ERP joint_kwargs = dict(ts_args=dict(time_unit='s'), topomap_args=dict(time_unit='s')) mne.combine_evoked([left, -right], weights='equal').plot_joint(**joint_kwargs) Explanation: Next, we create averages of stimulation-left vs stimulation-right trials. We can use basic arithmetic to, for example, construct and plot difference ERPs. End of explanation aud_l = epochs["auditory", "left"].average() aud_r = epochs["auditory", "right"].average() vis_l = epochs["visual", "left"].average() vis_r = epochs["visual", "right"].average() all_evokeds = [aud_l, aud_r, vis_l, vis_r] print(all_evokeds) Explanation: This is an equal-weighting difference. If you have imbalanced trial numbers, you could also consider either equalizing the number of events per condition (using :meth:epochs.equalize_event_counts &lt;mne.Epochs.equalize_event_counts&gt;). As an example, first, we create individual ERPs for each condition. End of explanation all_evokeds = [epochs[cond].average() for cond in sorted(event_id.keys())] print(all_evokeds) # Then, we construct and plot an unweighted average of left vs. right trials # this way, too: mne.combine_evoked( all_evokeds, weights=(0.25, -0.25, 0.25, -0.25)).plot_joint(**joint_kwargs) Explanation: This can be simplified with a Python list comprehension: End of explanation # If they are stored in a list, they can be easily averaged, for example, # for a grand average across subjects (or conditions). grand_average = mne.grand_average(all_evokeds) mne.write_evokeds('/tmp/tmp-ave.fif', all_evokeds) # If Evokeds objects are stored in a dictionary, they can be retrieved by name. all_evokeds = dict((cond, epochs[cond].average()) for cond in event_id) print(all_evokeds['left/auditory']) # Besides for explicit access, this can be used for example to set titles. for cond in all_evokeds: all_evokeds[cond].plot_joint(title=cond, **joint_kwargs) Explanation: Often, it makes sense to store Evoked objects in a dictionary or a list - either different conditions, or different subjects. End of explanation
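To make the "average reference" step above more concrete without the MNE-specific machinery: re-referencing to the common average simply subtracts, at every time point, the mean over all EEG channels. A rough numpy sketch with fake data (not the sample dataset) is:

import numpy as np

# Fake EEG: 59 channels x 1000 time samples, with an arbitrary per-channel offset
rng = np.random.default_rng(42)
data = rng.normal(size=(59, 1000)) + rng.normal(size=(59, 1))

# Common average reference: subtract the across-channel mean at each time point
car = data - data.mean(axis=0, keepdims=True)

# After re-referencing, the channel average is (numerically) zero everywhere
print(np.allclose(car.mean(axis=0), 0.0))

Roughly speaking, mne.set_eeg_reference(raw, 'average', projection=True) applies the same subtraction, implemented as a projection and recorded in the measurement info.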
1,679
Given the following text description, write Python code to implement the functionality described below step by step Description: Handwritten Number Recognition with TFLearn and MNIST In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. This kind of neural network is used in a variety of real-world applications including Step1: Retrieving training and test data The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data. Each MNIST data point has Step2: Visualize the training data Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with it's corresponding label in the title. Step3: Building the network TFLearn lets you build the network by defining the layers in that network. For this example, you'll define Step4: Training the network Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely! Step5: Testing After you're satisified with the training output and accuracy, you can then run the network on the test data set to measure it's performance! Remember, only do this after you've done the training and are satisfied with the results. A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
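The one-hot encoding and flattening described above are easy to reproduce in plain numpy; the sketch below uses made-up labels and blank images rather than the actual MNIST arrays, and the notebook's reference code follows right after it.

import numpy as np

# One-hot encoding: digit d becomes a length-10 vector with a single 1 at index d
labels = np.array([0, 4, 9])
one_hot = np.eye(10)[labels]
print(one_hot)
print(one_hot.argmax(axis=1))   # argmax recovers the original digits: [0 4 9]

# Flattening: each 28x28 image becomes a single 784-element row vector
fake_images = np.zeros((3, 28, 28))
flat = fake_images.reshape(3, 28 * 28)
print(flat.shape)               # (3, 784)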
Python Code: # Import Numpy, TensorFlow, TFLearn, and MNIST data import numpy as np import tensorflow as tf import tflearn import tflearn.datasets.mnist as mnist Explanation: Handwritten Number Recognition with TFLearn and MNIST In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9. We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network. End of explanation # Retrieve the training and test data trainX, trainY, testX, testY = mnist.load_data(one_hot=True) print(trainX.shape) print(trainY.shape) Explanation: Retrieving training and test data The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data. Each MNIST data point has: 1. an image of a handwritten digit and 2. a corresponding label (a number 0-9 that identifies the image) We'll call the images, which will be the input to our neural network, X and their corresponding labels Y. We're going to want our labels as one-hot vectors, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]. Flattened data For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values. Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network. End of explanation # Visualizing the data import matplotlib.pyplot as plt %matplotlib inline # Function for displaying a training image by it's index in the MNIST set def show_digit(index): label = trainY[index].argmax(axis=0) # Reshape 784 array into 28x28 image image = trainX[index].reshape([28,28]) plt.title('Training data, index: %d, Label: %d' % (index, label)) plt.imshow(image, cmap='gray_r') plt.show() # Display the first (index 0) training image show_digit(5) Explanation: Visualize the training data Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with it's corresponding label in the title. 
End of explanation # Define the neural network def build_model(): # This resets all parameters and variables, leave this here tf.reset_default_graph() #### Your code #### # Include the input layer, hidden layer(s), and set how you want to train the model #Input layer net = tflearn.input_data([None, 784])#len(trainX[0])]) #Hidden layers net = tflearn.fully_connected(net, 392, activation="ReLU") #ReLU -> f(x)=max(x,0) net = tflearn.fully_connected(net, 196, activation="ReLU") #ReLU -> f(x)=max(x,0) #Output layer net = tflearn.fully_connected(net, 10, activation="softmax") #ReLU -> f(x)=max(x,0) net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') # This model assumes that your network is named "net" model = tflearn.DNN(net) return model # Build the model model = build_model() Explanation: Building the network TFLearn lets you build the network by defining the layers in that network. For this example, you'll define: The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. Hidden layers, which recognize patterns in data and connect the input to the output layer, and The output layer, which defines how the network learns and outputs a label for a given image. Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example, net = tflearn.input_data([None, 100]) would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units. Adding layers To add new hidden layers, you use net = tflearn.fully_connected(net, n_units, activation='ReLU') This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call, it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling tflearn.fully_connected(net, n_units). Then, to set how you train the network, use: net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') Again, this is passing in the network you've been building. The keywords: optimizer sets the training method, here stochastic gradient descent learning_rate is the learning rate loss determines how the network error is calculated. In this example, with categorical cross-entropy. Finally, you put all this together to create the model with tflearn.DNN(net). Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc. Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer. End of explanation # Training model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20) Explanation: Training the network Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. 
You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely! End of explanation # Compare the labels that our model predicts with the actual labels # Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample. predictions = np.array(model.predict(testX)).argmax(axis=1) # Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels actual = testY.argmax(axis=1) test_accuracy = np.mean(predictions == actual, axis=0) # Print out the result print("Test accuracy: ", test_accuracy) Explanation: Testing After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results. A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy! End of explanation
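Building on the accuracy check above, a single test image can be inspected the same way. This sketch assumes the model, testX and testY variables from the cells above are still in memory and reuses only calls already shown there (model.predict and argmax).

import numpy as np

# Predict the digit for one test image (index 0) and compare it with its label
single = testX[0:1]                                    # keep the batch dimension
predicted = np.array(model.predict(single)).argmax(axis=1)[0]
actual = testY[0].argmax()
print("Predicted:", predicted, "| Actual:", actual)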
1,680
Given the following text description, write Python code to implement the functionality described below step by step Description: "Brute force" optimization with Scipy Official documentation Step1: Define the objective function Step2: Minimize using the "Brute force" algorithm Uses the "brute force" method, i.e. computes the function's value at each point of a multidimensional grid of points, to find the global minimum of the function. See https Step3: Second example
Python Code: %matplotlib inline import matplotlib matplotlib.rcParams['figure.figsize'] = (8, 8) # Setup PyAI import sys sys.path.insert(0, '/Users/jdecock/git/pub/jdhp/pyai') import numpy as np from scipy import optimize # Plot functions from pyai.optimize.utils import plot_contour_2d_solution_space from pyai.optimize.utils import plot_2d_solution_space Explanation: "Brute force" optimization with Scipy Official documentation: - https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html - https://docs.scipy.org/doc/scipy/reference/optimize.html Import required modules End of explanation # Set the objective function from pyai.optimize.functions import sphere1d from pyai.optimize.functions import sphere2d Explanation: Define the objective function End of explanation %%time search_ranges = (slice(-3., 3.5, 0.5),) res = optimize.brute(sphere1d, search_ranges, #args=params, full_output=True, finish=None) # optimize.fmin) print("x* =", res[0]) print("f(x*) =", res[1]) print(res[2].shape) print("tested x:", res[2]) print(res[3].shape) print("tested f(x):", res[3]) x_star = res[0] y_star = res[1] x = res[2] y = res[3] fig, ax = plt.subplots() ax.set_title('Objective function') ax.plot(x, y, 'k-', alpha=0.25, label="f") ax.plot(x, y, 'g.', label="tested points") ax.plot(x_star, y_star, 'ro', label="$x^*$") ax.legend(fontsize=12); Explanation: Minimize using the "Brute force" algorithm Uses the "brute force" method, i.e. computes the function's value at each point of a multidimensional grid of points, to find the global minimum of the function. See https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brute.html#scipy.optimize.brute First example: the 1D sphere function End of explanation %%time search_ranges = (slice(-2., 2.5, 0.5), slice(-2., 2.5, 0.5)) res = optimize.brute(sphere2d, search_ranges, #args=params, full_output=True, finish=None) # optimize.fmin) print("x* =", res[0]) print("f(x*) =", res[1]) print(res[2].shape) print("tested x:", res[2]) print() print(res[3].shape) print("tested f(x):", res[3]) # Setup data ######################### # Using the following 3 lines, pcolormesh won't display the last row and the last collumn... #xx = res[2][0] #yy = res[2][1] #zz = res[3] # Workaround to avoid pcolormesh ignoring the last row and last collumn... x = res[2][0][:,0] y = res[2][1][0,:] x = np.append(x, x[-1] + x[-1] - x[-2]) y = np.append(y, y[-1] + y[-1] - y[-2]) # Make the meshgrid xx, yy = np.meshgrid(x, y) # "Ideally the dimensions of X and Y should be one greater than those of C; # if the dimensions are the same, then the last row and column of C will be ignored." # https://stackoverflow.com/questions/44526052/can-someone-explain-this-matplotlib-pcolormesh-quirk zz = res[3] # Plot the image ##################### fig, ax = plt.subplots() ax.set_title('Objective function') # Shift to center pixels to data (workaround...) # (https://stackoverflow.com/questions/43128106/pcolormesh-ticks-center-for-each-data-point-tile) xx -= (x[-1] - x[-2])/2. yy -= (y[-1] - y[-2])/2. #im = ax.imshow(z, interpolation='bilinear', origin='lower') im = ax.pcolormesh(xx, yy, zz, cmap='gnuplot2_r') plt.colorbar(im) # draw the colorbar # Plot contours ###################### max_value = np.nanmax(zz) levels = np.array([0.1*max_value, 0.3*max_value, 0.6*max_value]) # Shift back pixels for contours (workaround...) # (https://stackoverflow.com/questions/43128106/pcolormesh-ticks-center-for-each-data-point-tile) xx += (x[-1] - x[-2])/2. yy += (y[-1] - y[-2])/2. 
cs = plt.contour(xx[:-1,:-1], yy[:-1,:-1], zz, levels, linewidths=(2, 2, 3), linestyles=('dotted', 'dashed', 'solid'), alpha=0.5, colors='blue') ax.clabel(cs, inline=False, fontsize=12) # Plot x* ############################ ax.scatter(*res[0], c='red', label="$x^*$") ax.legend(fontsize=12); Explanation: Second example: the 2D sphere function End of explanation
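To demystify what optimize.brute is doing on the 2D example above: it evaluates the objective at every point of the grid and returns the grid point with the smallest value (optionally polishing it with a local optimizer via the finish argument). A hand-rolled numpy version of that grid search looks roughly like this; sphere2d_like is an inline stand-in for the imported sphere2d so the snippet stands alone and makes no assumption about that function's exact signature.

import numpy as np

def sphere2d_like(x, y):
    # Stand-in objective: f(x, y) = x**2 + y**2
    return x**2 + y**2

# The same grid as slice(-2., 2.5, 0.5) in each dimension
xs = np.arange(-2.0, 2.5, 0.5)
ys = np.arange(-2.0, 2.5, 0.5)
xx, yy = np.meshgrid(xs, ys, indexing='ij')

zz = sphere2d_like(xx, yy)                 # evaluate on every grid point
i, j = np.unravel_index(zz.argmin(), zz.shape)
print("x* =", (xs[i], ys[j]))              # grid point with the smallest value
print("f(x*) =", zz[i, j])

In the notebook's own calls, passing finish=optimize.fmin instead of finish=None would additionally refine the best grid point with a local simplex search.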
1,681
Given the following text description, write Python code to implement the functionality described below step by step Description: Chapter 1 Step1: Can you describe what this code did? Can you adapt the code in this box yourself and make it print your own name? Apart from printing words to your screen, you can also use Python to do calculations. Use the code block below to calculate how many minutes there are in seven weeks? (Hint Step2: Excellent! You have just written and executed your very first program! Please make sure to run every single one of the following code blocks in the same manner - otherwise a lot of the examples won't properly work. So far, we used only Python as a pretty minimalistic calculator, but there is more to discover. Variables and values Imagine that we want to store the number we just calculated so we can use it later. To do this we need to 'assign' a name to a value using the = symbol. Step3: If you vaguely remember your math-classes in school, this should look familiar. It is basically the same notation with the name of the variable on the left, the value on the right, and the = sign in the middle. In the code block above, two things happen. First, we fill x with a value, in our case 2. This variable x behaves pretty much like a box on which we write an x with a thick, black marker to find it back later. We then print the contents of this box, using the print() command. Now copy the outcome of your code calculating the number of minutes in seven weeks and assign this number to x. Run the code again. The box metaphor for a variable goes a long way Step4: So far, we have only used a variable called x. Nevertheless, we are entirely free to change the names of our variables, as long as these names do not contain strange characters, such as spaces, numbers or punctuation marks. (Underscores, however, are allowed inside names!) In the following block, we assign the outcome of our calculation to a variable that has a more meaningful name than the abstract name x. Step5: In Python we can also copy the contents of a variable into another variable, which is what happens in the block below. You should of course watch out in such cases Step6: Remember Step7: Variables are also case sensitive, accessing months is not the same as Months Step8: So far we have only assigned numbers such as 2 or 70560 to our variables. Such whole numbers are called 'integers' in programming, because they don't have anymore digits 'after the dot'. Numbers that do have digits after the dot (e.g. 67.278 or 191.200), are called 'floating-point numbers' in programming or simply 'floats'. Note that Python uses dots in floats, whereas some European languages use a comma here. Both integers and floats can be positive numbers (e.g. 70 or 4.36) as well as negative numbers (e.g. -70 or 4.36). You can just as easily assign floats to variables Step9: On the whole, the difference between integers and floats is of course important for divisions where you often end up with floats Step10: You will undoubtedly remember from your math classes in high school that there is something called 'operator precedence', meaning that multiplication, for instance, will always be executed before subtraction. In Python you can explicitly set the order in which arithmetic operations are executed, using round brackets. Compare the following lines of code Step11: Using the operators we have learned about above, we can change the variables in our code as many times as we want. 
We can assign new values to old variables, just like we can put new or more things in the boxes which we already had. Say, for instance, that yesterday we counted how many books we have in our office and that we stored this count in our code as follows Step12: Suppose that we buy a new book for the office today Step13: Updates like these happen a lot. Python therefore provides a shortcut and you can write the same thing using +=. Step14: This special shortcut (+=) is called an operator too. Apart from multiplication (+=), the operator has variants for subtraction (-=), multiplication (*=) and division (/=) too Step15: What we have learnt To finish this section, here is an overview of the concepts you have learnt. Go through the list and make sure you understand all the concepts. variable value assignment operator (=) difference between variables and values integers vs. floats operators for multiplication (*), subtraction (-), addition (+), division (/) the shortcut operators Step16: Such a piece of text ("The Lord of the Flies") is called a 'string' in Python (cf. a string of characters). Strings in Python must always be enclosed with 'quotes' (either single or double quotes). Without those quotes, Python will think it's dealing with the name of some variable that has been defined earlier, because variable names never take quotes. The following distinction is confusing, but extremely important Step17: Some of the arithmetic operators we saw earlier can also be used to do useful things with strings. Both the multiplication operator (*) and the addition operator (+) provide interesting functionality for dealing with strings, as the block below illustrates. Step18: Adding strings together is called 'string concatenation' or simply 'concatenation' in programming. Use the block below to find out whether you could can also use the shortcut += operator for adding an 'h' to the variable original_string. Don't forget to check the result by printing it! Step19: We now would like you to write some code that defines a variable, name, and assign to it a string that is your name. If your first name is shorter than 5 characters, use your last name. If your last name is also shorter than 5 characters, use the combination of your first and last name. Now print the variable containing your name to the screen. Step20: Strings are called strings because they consist of a series (or 'string') of individual characters. We can access these individual characters in Python with the help of 'indexing', because each character in a string has a unique 'index'. To print the first letter of your name, you can type Step21: Take a look at the string "Mr White". We use the index 0 to access the first character in the string. This might seem odd, but remember that all indexes in Python start at zero. Whenever you count in Python, you start at 0 instead of 1. Note that the space character gets an index too, namely 2. This is something you will have to get used to! Because you know the length of your name you can ask for the last letter of your name Step22: It is rather inconvenient having to know how long our strings are if we want to find out what its last letter is. Python provides a simple way of accessing a string 'from the rear' Step23: To access the last character in a string you have to use the index [-1]. Alternatively, there is the len() command which returns the length of a string Step24: Do you understand the following code block? Can you explain what is happening? 
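Since the cashier exercise above spells out its algorithm step by step (convert to cents, then peel off dollars, quarters, dimes, nickels and pennies with the modulus operator), here is one possible worked solution as a sketch. It uses Python's integer-division operator //, which the chapter itself does not introduce but which keeps the arithmetic exact; the notebook's own scaffold for this and the other exercises follows below.

amount = 11.56
cents = int(round(amount * 100))   # 1156 cents

dollars = cents // 100             # whole dollars
cents = cents % 100                # remainder after removing dollars
quarters = cents // 25
cents = cents % 25
dimes = cents // 10
cents = cents % 10
nickels = cents // 5
pennies = cents % 5

print("Dollars:", dollars, "Quarters:", quarters,
      "Dimes:", dimes, "Nickels:", nickels, "Pennies:", pennies)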
Step25: Now can you write some code that defines a variable but_last_letter and assigns to this variable the one but last letter of your name? Step26: You're starting to become a real expert in indexing strings. Now what if we would like to find out what the last two or three letters of our name are? In Python we can use so-called 'slice-indexes' or 'slices' for short. To find the first two letters of our name we type in Step27: The 0 index is optional, so we could just as well type in name[ Step28: Because we did not specify the end index, Python continues until it reaches the end of our string. If we would like to find out what the last two letters of our name are, we can type in Step29: DIY Can you define a variable middle_letters and assign to it all letters of your name except for the first two and the last two? Step30: Given the following two words, can you write code that prints out the word humanities using only slicing and concatenation? (So, no quotes are allowed in your code.) Can you print out how many characters the word humanities counts? Step31: "Casting" variables Above, we have already learned that each variable as a data type Step32: This should raise an error on your machine Step34: Other types of conversions are possible as well, and we will see a couple of them in the next chapters. Because variables can change data type anytime they want, we say that Python uses 'dynamic typing', as opposed to other more 'strict' languages that use 'strong typing'. You can check a variable's type using the type()command. DIY When you exchange code with fellow programmers (as you will often have to do in the real world), it is really helpful if you include some useful information about your scripts. Have a look at the code block below and read about commenting on Python code in the comments Step35: So, how many ways are there to comment on your code in Python? What we have learnt To finish this section, here is an overview of what we have learnt. Go through the list and make sure you understand all the concepts. concatenation index slicing zero-indexed numbering len() type casting Step36: Ex. 2 Step37: Ex. 3 Step38: Ex. 4 Step39: Ex. 5 Step40: Ex. 6 Step41: Ex. 7 Step42: Ex. 8 Step43: Ex. 9 Step44: You've reached the end of Chapter 1! You can safely ignore the code block below -- it's only there to make the page prettier.
Python Code: print("Mike") Explanation: Chapter 1: Variables -- A Python Course for the Humanities by Folgert Karsdorp and Maarten van Gompel, with modifications by Mike Kestemont and Lars Wieneke First steps Everyone can learn how to program and the best way to learn it is by doing it. This tutorial on the Python programming language for people from the Humanities is extremely hands-on: you will have to write a lot of programming code yourself from the very beginning onwards. For writing the Python code in this tutorial, you can use the many 'code blocks' you will encounter, such as the grey block immediately below. Place your cursor inside this block and press ctrl+enter to "run" or execute the code. Let's begin right away: run your first little program! End of explanation # insert your own code here! Explanation: Can you describe what this code did? Can you adapt the code in this box yourself and make it print your own name? Apart from printing words to your screen, you can also use Python to do calculations. Use the code block below to calculate how many minutes there are in seven weeks? (Hint: multiplication is done using the * symbol in Python.) End of explanation x = 5 print(x) Explanation: Excellent! You have just written and executed your very first program! Please make sure to run every single one of the following code blocks in the same manner - otherwise a lot of the examples won't properly work. So far, we used only Python as a pretty minimalistic calculator, but there is more to discover. Variables and values Imagine that we want to store the number we just calculated so we can use it later. To do this we need to 'assign' a name to a value using the = symbol. End of explanation x = 2 print(x) print(x * x) print(x + x) print(x - 6) Explanation: If you vaguely remember your math-classes in school, this should look familiar. It is basically the same notation with the name of the variable on the left, the value on the right, and the = sign in the middle. In the code block above, two things happen. First, we fill x with a value, in our case 2. This variable x behaves pretty much like a box on which we write an x with a thick, black marker to find it back later. We then print the contents of this box, using the print() command. Now copy the outcome of your code calculating the number of minutes in seven weeks and assign this number to x. Run the code again. The box metaphor for a variable goes a long way: in such a box you can put whatever value you want, e.g. the number of minutes in seven weeks. When you re-assign a variable, you remove the content of the box and put something new in it. In Python, the term 'variable' refers to such a box, whereas the term 'value' refers to what is inside this box. When we have stored values inside variables, we can do interesting things with these variables. You can, for instance, run the calculations in the block below to see the effect of the following five lines of code. Symbols like =, +, - and * are called 'operators' in programming: they all provide a very basic functionality such as assigning values to variables or doing multiplication and subtraction. End of explanation seconds_in_seven_weeks = 70560 print(seconds_in_seven_weeks) Explanation: So far, we have only used a variable called x. Nevertheless, we are entirely free to change the names of our variables, as long as these names do not contain strange characters, such as spaces, numbers or punctuation marks. (Underscores, however, are allowed inside names!) 
In the following block, we assign the outcome of our calculation to a variable that has a more meaningful name than the abstract name x. End of explanation first_number = 5 second_number = first_number first_number = 3 print(first_number) print(second_number) Explanation: In Python we can also copy the contents of a variable into another variable, which is what happens in the block below. You should of course watch out in such cases: make sure that you keep track of the value of each individual variable in your code. Each variable will always contain the value that you last assigned to it: End of explanation # not recommended... months = 70560 print(months) Explanation: Remember: as with boxes in real life, it is always a good idea to give the box a clear, yet short name, with your black marker - the name should accurately reflect what is inside the box. Just like you don't write cookies on a box that in reality contains bananas, it is important to always give your Python variables a sensible name. In the code block below, for instance, we make the stupid mistake of calling a variable months, while it actually contains seconds... End of explanation print(months) print(Months) Explanation: Variables are also case sensitive, accessing months is not the same as Months End of explanation some_float = 23.987 print(some_float) some_float = -4.56 print(some_float) Explanation: So far we have only assigned numbers such as 2 or 70560 to our variables. Such whole numbers are called 'integers' in programming, because they don't have anymore digits 'after the dot'. Numbers that do have digits after the dot (e.g. 67.278 or 191.200), are called 'floating-point numbers' in programming or simply 'floats'. Note that Python uses dots in floats, whereas some European languages use a comma here. Both integers and floats can be positive numbers (e.g. 70 or 4.36) as well as negative numbers (e.g. -70 or 4.36). You can just as easily assign floats to variables: End of explanation x = 5/2 print(x) Explanation: On the whole, the difference between integers and floats is of course important for divisions where you often end up with floats: End of explanation nr1 = 10-2/4 nr2 = (10-2)/4 nr3 = 10-(2/4) print(nr1) print(nr2) print(nr3) Explanation: You will undoubtedly remember from your math classes in high school that there is something called 'operator precedence', meaning that multiplication, for instance, will always be executed before subtraction. In Python you can explicitly set the order in which arithmetic operations are executed, using round brackets. Compare the following lines of code: End of explanation number_of_books = 100 Explanation: Using the operators we have learned about above, we can change the variables in our code as many times as we want. We can assign new values to old variables, just like we can put new or more things in the boxes which we already had. Say, for instance, that yesterday we counted how many books we have in our office and that we stored this count in our code as follows: End of explanation number_of_books = number_of_books + 1 print(number_of_books) Explanation: Suppose that we buy a new book for the office today: we can now update our book count accordingly, by adding one to our previous count: End of explanation number_of_books += 5 print(number_of_books) Explanation: Updates like these happen a lot. Python therefore provides a shortcut and you can write the same thing using +=. 
End of explanation number_of_books -= 5 print(number_of_books) number_of_books *= 2 print(number_of_books) number_of_books /= 2 print(number_of_books) Explanation: This special shortcut (+=) is called an operator too. Apart from multiplication (+=), the operator has variants for subtraction (-=), multiplication (*=) and division (/=) too: End of explanation book = "The Lord of the Flies" print(book) Explanation: What we have learnt To finish this section, here is an overview of the concepts you have learnt. Go through the list and make sure you understand all the concepts. variable value assignment operator (=) difference between variables and values integers vs. floats operators for multiplication (*), subtraction (-), addition (+), division (/) the shortcut operators: +=, -=, *=, /= print() Text strings So far, we have only worked with variables that contain numbers (integers like -5 or 72 or floats like 45.89 or -5.609). Note, however, that variables can also contain other things than numbers. Many disciplines within the humanities work on texts. Quite naturally, programming skills for the humanities will have to focus a lot on manipulating texts. Have a look at the code block below, for instance. Here we put text, namely the title of a book, as a value inside the variable book. Then, we print what is inside the book variable. End of explanation name = "Bonny" Bonny = "name" Clyde = "Clyde" print(name) print (Bonny) print(Clyde) Explanation: Such a piece of text ("The Lord of the Flies") is called a 'string' in Python (cf. a string of characters). Strings in Python must always be enclosed with 'quotes' (either single or double quotes). Without those quotes, Python will think it's dealing with the name of some variable that has been defined earlier, because variable names never take quotes. The following distinction is confusing, but extremely important: variable names (without quotes) and string values (with quotes) look similar, but they serve a completely different purpose. Compare: End of explanation original_string = "bla" new_string = 2*original_string print(new_string) new_string = new_string+"h" print(new_string) Explanation: Some of the arithmetic operators we saw earlier can also be used to do useful things with strings. Both the multiplication operator (*) and the addition operator (+) provide interesting functionality for dealing with strings, as the block below illustrates. End of explanation original_string = "blabla" # add an 'h'... print(original_string) Explanation: Adding strings together is called 'string concatenation' or simply 'concatenation' in programming. Use the block below to find out whether you could can also use the shortcut += operator for adding an 'h' to the variable original_string. Don't forget to check the result by printing it! End of explanation # your name code goes here... Explanation: We now would like you to write some code that defines a variable, name, and assign to it a string that is your name. If your first name is shorter than 5 characters, use your last name. If your last name is also shorter than 5 characters, use the combination of your first and last name. Now print the variable containing your name to the screen. End of explanation first_letter = name[0] print(first_letter) Explanation: Strings are called strings because they consist of a series (or 'string') of individual characters. We can access these individual characters in Python with the help of 'indexing', because each character in a string has a unique 'index'. 
To print the first letter of your name, you can type: End of explanation last_letter = name[# fill in the last index of your name (tip indexes start at 0)] print(last_letter) Explanation: Take a look at the string "Mr White". We use the index 0 to access the first character in the string. This might seem odd, but remember that all indexes in Python start at zero. Whenever you count in Python, you start at 0 instead of 1. Note that the space character gets an index too, namely 2. This is something you will have to get used to! Because you know the length of your name you can ask for the last letter of your name: End of explanation last_letter = name[-1] print(last_letter) Explanation: It is rather inconvenient having to know how long our strings are if we want to find out what its last letter is. Python provides a simple way of accessing a string 'from the rear': End of explanation print(len(name)) Explanation: To access the last character in a string you have to use the index [-1]. Alternatively, there is the len() command which returns the length of a string: End of explanation print(name[len(name)-1]) Explanation: Do you understand the following code block? Can you explain what is happening? End of explanation but_last_letter = name[# insert your code here] print(but_last_letter) Explanation: Now can you write some code that defines a variable but_last_letter and assigns to this variable the one but last letter of your name? End of explanation first_two_letters = name[0:2] print(first_two_letters) Explanation: You're starting to become a real expert in indexing strings. Now what if we would like to find out what the last two or three letters of our name are? In Python we can use so-called 'slice-indexes' or 'slices' for short. To find the first two letters of our name we type in: End of explanation without_first_two_letters = name[2:] print(without_first_two_letters) Explanation: The 0 index is optional, so we could just as well type in name[:2]. This says: take all characters of name until you reach index 2 (i.e. up to the third letter, but not including the third letter). We can also start at index 2 and leave the end index unspecified: End of explanation last_two_letters = name[-2:] print(last_two_letters) Explanation: Because we did not specify the end index, Python continues until it reaches the end of our string. If we would like to find out what the last two letters of our name are, we can type in: End of explanation # insert your middle_letters code here Explanation: DIY Can you define a variable middle_letters and assign to it all letters of your name except for the first two and the last two? End of explanation word1 = "human" word2 = "opportunities" Explanation: Given the following two words, can you write code that prints out the word humanities using only slicing and concatenation? (So, no quotes are allowed in your code.) Can you print out how many characters the word humanities counts? End of explanation x = "5" y = 2 print(x + y) Explanation: "Casting" variables Above, we have already learned that each variable as a data type: variables can be strings, floats, integers, etc. Sometimes it is necessary to convert one type into the other. Consider this: End of explanation x = "5" y = 2 print(x + str(y)) print(int(x) + y) Explanation: This should raise an error on your machine: does the error message gives you a hint as to why this doesn't work? x is a string, and y is an integer. Because of this, you cannot sum them. 
Luckily there exist ways to 'cast' variables from one type of variable into another type of variable. Do you understand the outcome of the following code? Can you comment in your own words on the effect of applying int() and str() to variables? End of explanation # comment: insert your code here. # BTW: Have you noticed that everything behind the hashtag print("Something...") # on a line is ignored by your python interpreter? print("and something else..") # this is really helpful to comment on your code! Another way of commenting on your code is via triple quotes -- these can be distributed over multiple # lines print("Done.") Explanation: Other types of conversions are possible as well, and we will see a couple of them in the next chapters. Because variables can change data type anytime they want, we say that Python uses 'dynamic typing', as opposed to other more 'strict' languages that use 'strong typing'. You can check a variable's type using the type()command. DIY When you exchange code with fellow programmers (as you will often have to do in the real world), it is really helpful if you include some useful information about your scripts. Have a look at the code block below and read about commenting on Python code in the comments: End of explanation # your code goes here Explanation: So, how many ways are there to comment on your code in Python? What we have learnt To finish this section, here is an overview of what we have learnt. Go through the list and make sure you understand all the concepts. concatenation index slicing zero-indexed numbering len() type casting: int() and str() type() code commenting via hashtags and triple double quotes Final Exercises Chapter 1 Inspired by Think Python by Allen B. Downey (http://thinkpython.com), Introduction to Programming Using Python by Y. Liang (Pearson, 2013). Some exercises below have been taken from: http://www.ling.gu.se/~lager/python_exercises.html. Ex. 1: Suppose the cover price of a book is 24.95 EUR, but bookstores get a 40 percent discount. Shipping costs 3 EUR for the first copy and 75 cents for each additional copy. What is the total wholesale cost for 60 copies? Print the result in a pretty fashion, using casting where necessary! End of explanation print("A message"). print("A message') print('A messagef"') Explanation: Ex. 2: Can you identify and explain the errors in the following lines of code? Correct them please! End of explanation # ZeroDivisionError Explanation: Ex. 3: When something is wrong with your code, Python will raise errors. Often these will be 'syntax errors' that signal that something is wrong with the form of your code (i.e. a SyntaxError like the one thrown in the previous exercice). There are also 'runtime errors' that signal that your code was in itself formally correct, but that something went wrong during the code's execution. A good example is the ZeroDivisionError. Try to make Python throw such a ZeroDivisionError! End of explanation # insert your code here Explanation: Ex. 4: Write a program that assigns the result of 9.5 * 4.5 - 2.5 * 345.5 - 3.5 to a variable. Print this variable. Use round brackets to indicate 'operator precedence' and make sure that subtractions are performed before multiplications. When you convert the outcome to a string, how many characters does it count? End of explanation # numbers Explanation: Ex. 5: Define the variables a=2, b=20007 and c=5. Using only the operations you learned about above, can you now print the following numbers: 2005, 252525252, 2510, -60025 and 2002507? 
(Hint: use type casting and string slicing to access parts of the original numbers!) End of explanation # average Explanation: Ex. 6: Define three variables var1, var2 and var3. Calculate the average of these variables and assign it to average. Print the result in a fancy manner. Add three comments to this piece of code using three different ways. End of explanation # circle code Explanation: Ex. 7: Write a little program that can compute the surface of circle, using the variables radius and pi=3.14159. The formula is of course radius, multiplied by radius, multiplied by pi. Print the outcome of your program as follows: 'The surface area of a circle with radius ... is: ...'. End of explanation # try out the modulus operator! Explanation: Ex. 8: There is one operator (like the ones for multiplication and subtraction) that we did not mention yet, namely the modulus operator %. Could you figure by yourself what it does when you place it between two numbers (e.g. 113 % 9)? (PS: It's OK to get help online...) You don't need this operator all that often, but when you do, it comes in really handy! End of explanation # cashier code Explanation: Ex. 9: Can you use the modulus operator you just learned about to solve the following task? Write a code block that classifies a given amount of money into smaller monetary units. Set the amount variable to 11.56. You code should outputs a report listing the monetary equivalent in dollars, quarters, dimes, nickels, and pennies. Your program should report the maximum number of dollars, then the number of quarters, dimes, nickels, and pennies, in this order, to result in the minimum number of coins. Here are the steps in developing the program: Convert the amount (11.56) into cents (1156). Divide the cents by 100 to find the number of dollars, but first subtract the rest using the modulus operator! Divide the remaining cents by 25 to find the number of quarters, but, again, first subtract the rest using the modulus operator! Divide the remaining cents by 10 to find the number of dimes, etc. Divide the remaining cents by 5 to find the number of nickels, etc. The remaining cents are the pennies. Now display the result for your cashier! End of explanation from IPython.core.display import HTML def css_styling(): styles = open("styles/custom.css", "r").read() return HTML(styles) css_styling() Explanation: You've reached the end of Chapter 1! You can safely ignore the code block below -- it's only there to make the page prettier. End of explanation
1,682
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Land MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Description Is Required Step7: 1.4. Land Atmosphere Flux Exchanges Is Required Step8: 1.5. Atmospheric Coupling Treatment Is Required Step9: 1.6. Land Cover Is Required Step10: 1.7. Land Cover Change Is Required Step11: 1.8. Tiling Is Required Step12: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required Step13: 2.2. Water Is Required Step14: 2.3. Carbon Is Required Step15: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required Step16: 3.2. Time Step Is Required Step17: 3.3. Timestepping Method Is Required Step18: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required Step19: 4.2. Code Version Is Required Step20: 4.3. Code Languages Is Required Step21: 5. Grid Land surface grid 5.1. Overview Is Required Step22: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. Description Is Required Step23: 6.2. Matches Atmosphere Grid Is Required Step24: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required Step25: 7.2. Total Depth Is Required Step26: 8. Soil Land surface soil 8.1. Overview Is Required Step27: 8.2. Heat Water Coupling Is Required Step28: 8.3. Number Of Soil layers Is Required Step29: 8.4. Prognostic Variables Is Required Step30: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required Step31: 9.2. Structure Is Required Step32: 9.3. Texture Is Required Step33: 9.4. Organic Matter Is Required Step34: 9.5. Albedo Is Required Step35: 9.6. Water Table Is Required Step36: 9.7. Continuously Varying Soil Depth Is Required Step37: 9.8. Soil Depth Is Required Step38: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required Step39: 10.2. Functions Is Required Step40: 10.3. Direct Diffuse Is Required Step41: 10.4. Number Of Wavelength Bands Is Required Step42: 11. 
Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required Step43: 11.2. Time Step Is Required Step44: 11.3. Tiling Is Required Step45: 11.4. Vertical Discretisation Is Required Step46: 11.5. Number Of Ground Water Layers Is Required Step47: 11.6. Lateral Connectivity Is Required Step48: 11.7. Method Is Required Step49: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required Step50: 12.2. Ice Storage Method Is Required Step51: 12.3. Permafrost Is Required Step52: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required Step53: 13.2. Types Is Required Step54: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required Step55: 14.2. Time Step Is Required Step56: 14.3. Tiling Is Required Step57: 14.4. Vertical Discretisation Is Required Step58: 14.5. Heat Storage Is Required Step59: 14.6. Processes Is Required Step60: 15. Snow Land surface snow 15.1. Overview Is Required Step61: 15.2. Tiling Is Required Step62: 15.3. Number Of Snow Layers Is Required Step63: 15.4. Density Is Required Step64: 15.5. Water Equivalent Is Required Step65: 15.6. Heat Content Is Required Step66: 15.7. Temperature Is Required Step67: 15.8. Liquid Water Content Is Required Step68: 15.9. Snow Cover Fractions Is Required Step69: 15.10. Processes Is Required Step70: 15.11. Prognostic Variables Is Required Step71: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required Step72: 16.2. Functions Is Required Step73: 17. Vegetation Land surface vegetation 17.1. Overview Is Required Step74: 17.2. Time Step Is Required Step75: 17.3. Dynamic Vegetation Is Required Step76: 17.4. Tiling Is Required Step77: 17.5. Vegetation Representation Is Required Step78: 17.6. Vegetation Types Is Required Step79: 17.7. Biome Types Is Required Step80: 17.8. Vegetation Time Variation Is Required Step81: 17.9. Vegetation Map Is Required Step82: 17.10. Interception Is Required Step83: 17.11. Phenology Is Required Step84: 17.12. Phenology Description Is Required Step85: 17.13. Leaf Area Index Is Required Step86: 17.14. Leaf Area Index Description Is Required Step87: 17.15. Biomass Is Required Step88: 17.16. Biomass Description Is Required Step89: 17.17. Biogeography Is Required Step90: 17.18. Biogeography Description Is Required Step91: 17.19. Stomatal Resistance Is Required Step92: 17.20. Stomatal Resistance Description Is Required Step93: 17.21. Prognostic Variables Is Required Step94: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required Step95: 18.2. Tiling Is Required Step96: 18.3. Number Of Surface Temperatures Is Required Step97: 18.4. Evaporation Is Required Step98: 18.5. Processes Is Required Step99: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required Step100: 19.2. Tiling Is Required Step101: 19.3. Time Step Is Required Step102: 19.4. Anthropogenic Carbon Is Required Step103: 19.5. Prognostic Variables Is Required Step104: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required Step105: 20.2. Carbon Pools Is Required Step106: 20.3. Forest Stand Dynamics Is Required Step107: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required Step108: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required Step109: 22.2. Growth Respiration Is Required Step110: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required Step111: 23.2. Allocation Bins Is Required Step112: 23.3. 
Allocation Fractions Is Required Step113: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required Step114: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required Step115: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required Step116: 26.2. Carbon Pools Is Required Step117: 26.3. Decomposition Is Required Step118: 26.4. Method Is Required Step119: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required Step120: 27.2. Carbon Pools Is Required Step121: 27.3. Decomposition Is Required Step122: 27.4. Method Is Required Step123: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required Step124: 28.2. Emitted Greenhouse Gases Is Required Step125: 28.3. Decomposition Is Required Step126: 28.4. Impact On Soil Properties Is Required Step127: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required Step128: 29.2. Tiling Is Required Step129: 29.3. Time Step Is Required Step130: 29.4. Prognostic Variables Is Required Step131: 30. River Routing Land surface river routing 30.1. Overview Is Required Step132: 30.2. Tiling Is Required Step133: 30.3. Time Step Is Required Step134: 30.4. Grid Inherited From Land Surface Is Required Step135: 30.5. Grid Description Is Required Step136: 30.6. Number Of Reservoirs Is Required Step137: 30.7. Water Re Evaporation Is Required Step138: 30.8. Coupled To Atmosphere Is Required Step139: 30.9. Coupled To Land Is Required Step140: 30.10. Quantities Exchanged With Atmosphere Is Required Step141: 30.11. Basin Flow Direction Map Is Required Step142: 30.12. Flooding Is Required Step143: 30.13. Prognostic Variables Is Required Step144: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required Step145: 31.2. Quantities Transported Is Required Step146: 32. Lakes Land surface lakes 32.1. Overview Is Required Step147: 32.2. Coupling With Rivers Is Required Step148: 32.3. Time Step Is Required Step149: 32.4. Quantities Exchanged With Rivers Is Required Step150: 32.5. Vertical Grid Is Required Step151: 32.6. Prognostic Variables Is Required Step152: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required Step153: 33.2. Albedo Is Required Step154: 33.3. Dynamics Is Required Step155: 33.4. Dynamic Lake Extent Is Required Step156: 33.5. Endorheic Basins Is Required Step157: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mri', 'sandbox-3', 'land') Explanation: ES-DOC CMIP6 Model Properties - Land MIP Era: CMIP6 Institute: MRI Source ID: SANDBOX-3 Topic: Land Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. Properties: 154 (96 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:19 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code (e.g. MOSES2.2) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.3. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "water" # "energy" # "carbon" # "nitrogen" # "phospherous" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Land Atmosphere Flux Exchanges Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Fluxes exchanged with the atmopshere. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.5. Atmospheric Coupling Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bare soil" # "urban" # "lake" # "land ice" # "lake ice" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.6. Land Cover Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Types of land cover defined in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover_change') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.7. Land Cover Change Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how land cover change is managed (e.g. the use of net or gross transitions) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.8. Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.energy') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.water') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. 
Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.2. Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. 
Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.5. Albedo Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil albedo map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.water_table') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.6. Water Table Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil water table map, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 9.7. Continuously Varying Soil Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the soil properties vary continuously with depth? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.8. Soil Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil depth map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow free albedo prognostic? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "soil humidity" # "vegetation state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, describe the dependancies on snow free albedo calculations End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "distinction between direct and diffuse albedo" # "no distinction between direct and diffuse albedo" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.3. Direct Diffuse Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe the distinction between direct and diffuse albedo End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 10.4. Number Of Wavelength Bands Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, enter the number of wavelength bands used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the soil hydrological model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river soil hydrology in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.6. Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General describe how drainage is included in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Gravity drainage" # "Horton mechanism" # "topmodel-based" # "Dunne mechanism" # "Lateral subsurface flow" # "Baseflow from groundwater" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Different types of runoff represented by the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how heat treatment properties are defined End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil heat scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.heat_treatment.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.5. 
Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.6. Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.7. Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.10. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Snow related processes in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.11. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "prescribed" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of snow-covered land albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.snow.snow_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "snow age" # "snow density" # "snow grain type" # "aerosol deposition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N *If prognostic, * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Vegetation Land surface vegetation 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vegetation in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.6. Vegetation Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of vegetation types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biome_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "evergreen needleleaf forest" # "evergreen broadleaf forest" # "deciduous needleleaf forest" # "deciduous broadleaf forest" # "mixed forest" # "woodland" # "wooded grassland" # "closed shrubland" # "opne shrubland" # "grassland" # "cropland" # "wetlands" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.7. 
Biome Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of biome types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_time_variation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed (not varying)" # "prescribed (varying from files)" # "dynamical (varying from simulation)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.8. Vegetation Time Variation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How the vegetation fractions in each tile are varying with time End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.9. Vegetation Map Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.10. Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.14. Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.15. 
Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Treatment of vegetation biomass * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.19. Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 18.3. Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of carbon cycle in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the carbon cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of carbon cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "grand slam protocol" # "residence time" # "decay time" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.4. Anthropogenic Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Describe the treament of the anthropogenic carbon pool End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the carbon scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.3. Forest Stand Dynamics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of forest stand dyanmics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for maintainence respiration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Growth Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for growth respiration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the allocation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "leaves + stems + roots" # "leaves + stems + roots (leafy + woody)" # "leaves + fine roots + coarse roots + stems" # "whole plant (no distinction)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. 
Allocation Bins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify distinct carbon bins used in allocation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the notrogen cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.6. Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.10. Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.2. 
Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.2. Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation
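For reference, a completed property follows the same set_id / set_value pattern used in the template cells above. The property ids below are real ids from this section, but the chosen values are illustrative assumptions only, not guidance for any particular model.

# Hypothetical filled-in example (values are illustrative assumptions, not CMIP6 guidance)
DOC.set_id('cmip6.land.lakes.method.albedo')
DOC.set_value("prognostic")      # one of the valid choices listed above
DOC.set_id('cmip6.land.lakes.time_step')
DOC.set_value(3600)              # assumed lake time step in seconds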
1,683
Given the following text description, write Python code to implement the functionality described below step by step Description: CNN transfer learning - Keras+TensorFlow This is for CNN models transferred from a pretrained model, using Keras based on TensorFlow. First, some preparation work. Step1: Read the MNIST data. Notice that we assume that it's 'kaggle-DigitRecognizer/data/train.csv', and we use a helper function to read it into a dictionary. Step2: Freeze-weights transfer We would use ResNet50 provided in Keras. In this section, the pretrained model would be entirely frozen, a new output layer would be attached to the model, and only this output layer would be trained. Step3: Fine-tune transfer In this section, the model is the same as before, but all weights are trained along with the final layer using a smaller learning rate. Step4: Fine-tune transfer with early stopping Based on the previous section, the test set is used as the validation set, so as to monitor for early stopping. Step5: Create submissions Load the saved trained models and produce predictions for submission on Kaggle.
Python Code: from keras.layers import Conv2D, MaxPooling2D, Input, Dense, Flatten, Activation, add, Lambda from keras.layers.normalization import BatchNormalization from keras.layers.pooling import GlobalAveragePooling2D from keras.optimizers import RMSprop from keras.backend import tf as ktf from keras.models import Model, Sequential, load_model from keras.callbacks import ModelCheckpoint, EarlyStopping from keras.applications.resnet50 import ResNet50 from lib.data_utils import get_MNIST_data Explanation: CNN transfer learning - Keras+TensorFlow This is for CNN models transferred from pretrained model, using Keras based on TensorFlow. First, some preparation work. End of explanation data = get_MNIST_data(num_validation=0, fit=True) # see if we get the data correctly print('image size: ', data['X_train'].shape) Explanation: Read the MNIST data. Notice that we assume that it's 'kaggle-DigitRecognizer/data/train.csv', and we use helper function to read into a dictionary. End of explanation # build the model # preprocess to (28,28,3), then build a resize layer using tf.resize_images() to (224,224,3) as input inputs = Input(shape=(28,28,3)) inputs_resize = Lambda(lambda img: ktf.image.resize_images(img, (224,224)))(inputs) # resize layer resnet50 = ResNet50(include_top=False, input_tensor=inputs_resize, input_shape=(224,224,3), pooling='avg') x = resnet50.output #x = Dense(units=1024, activation='relu')(x) predictions = Dense(units=10, activation='softmax')(x) # connect the model freezemodel = Model(inputs=inputs, outputs=predictions) #freezemodel.summary() # freeze all ResNet50 layers for layer in resnet50.layers: layer.trainable = False # set the loss and optimizer freezemodel.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # fit the model checkpoint = ModelCheckpoint('../models/freezeResNet_{epoch:02d}-{loss:.2f}.h5', monitor='loss', save_best_only=True) freezemodel.fit(data['X_train'], data['y_train'].reshape(-1,1), batch_size=16, epochs=10, callbacks=[checkpoint], initial_epoch=1) # test the model and see accuracy score = freezemodel.evaluate(data['X_test'], data['y_test'].reshape(-1, 1), batch_size=32) print(score) # save the model: 0.96 freezemodel.save('ResNet50_freeze.h5') # continue the model training freezemodel = load_model('../models/ResNet50_freeze.h5', custom_objects={'ktf': ktf}) # set the loss and optimizer rmsprop = RMSprop(lr=0.0001) freezemodel.compile(optimizer=rmsprop, loss='sparse_categorical_crossentropy', metrics=['accuracy']) # fit the model checkpoint = ModelCheckpoint('../models/freezeResNet_{epoch:02d}-{loss:.2f}.h5', monitor='loss', save_best_only=True) freezemodel.fit(data['X_train'], data['y_train'].reshape(-1, 1), batch_size=16, epochs=10, callbacks=[checkpoint], initial_epoch=4) Explanation: Freeze-weights transfer We would use ResNet50 provided in Keras. In this section, the pretrained model would all be freezed, and new output layer would be attatched to the model, and only this output layer would be trained. 
End of explanation # build the model # preprocess to (28,28,3), then build a resize layer using tf.resize_images() to (224,224,3) as input inputs = Input(shape=(28,28,3)) inputs_resize = Lambda(lambda img: ktf.image.resize_images(img, (224,224)))(inputs) # resize layer resnet50 = ResNet50(include_top=False, input_tensor=inputs_resize, input_shape=(224,224,3), pooling='avg') x = resnet50.output #x = Dense(units=1024, activation='relu')(x) predictions = Dense(units=10, activation='softmax')(x) # connect the model tunemodel = Model(inputs=inputs, outputs=predictions) #freezemodel.summary() # set the loss and optimizer rmsprop = RMSprop(lr=0.0001) tunemodel.compile(optimizer=rmsprop, loss='sparse_categorical_crossentropy', metrics=['accuracy']) # fit the model checkpoint = ModelCheckpoint('../models/tuneResNet_{epoch:02d}-{loss:.2f}.h5', monitor='loss', save_best_only=True) tunemodel.fit(data['X_train'], data['y_train'].reshape(-1, 1), batch_size=16, epochs=10, callbacks=[checkpoint], initial_epoch=0) # test the model and see accuracy score = tunemodel.evaluate(data['X_test'], data['y_test'].reshape(-1, 1), batch_size=32) print(score) Explanation: Fine-tune transfer In this section, the model is the same as before, but all weights are trained along with the final layer using smaller learning rate. End of explanation # build the model # preprocess to (28,28,3), then build a resize layer using tf.resize_images() to (224,224,3) as input inputs = Input(shape=(28,28,3)) inputs_resize = Lambda(lambda img: ktf.image.resize_images(img, (224,224)))(inputs) # resize layer resnet50 = ResNet50(include_top=False, input_tensor=inputs_resize, input_shape=(224,224,3), pooling='avg') x = resnet50.output predictions = Dense(units=10, activation='softmax')(x) # connect the model tunemodel = Model(inputs=inputs, outputs=predictions) # set the loss and optimizer rmsprop = RMSprop(lr=0.0001) tunemodel.compile(optimizer=rmsprop, loss='sparse_categorical_crossentropy', metrics=['accuracy']) # fit the model checkpoint = ModelCheckpoint('../models/tuneResNet_early_{epoch:02d}-{loss:.2f}.h5', monitor='loss', save_best_only=True) earlystop = EarlyStopping(min_delta=0.0001, patience=1) tunemodel.fit(data['X_train'], data['y_train'].reshape(-1, 1), batch_size=16, epochs=10, validation_data=(data['X_test'], data['y_test'].reshape(-1, 1)), callbacks=[checkpoint, earlystop], initial_epoch=0) # test the model and see accuracy score = tunemodel.evaluate(data['X_test'], data['y_test'].reshape(-1, 1), batch_size=16) print(score) Explanation: Fine-tune transfer with early stopping Based on the previous section, the test set is used as the validation set, so as to monitor for early stopping. End of explanation from lib.data_utils import create_submission from keras.models import load_model # for freeze ResNet50 model (3 epochs) simple_CNN = load_model('../models/freezeResNet_03-0.09.h5', custom_objects={'ktf': ktf}) print('Load model successfully.') create_submission(simple_CNN, '../data/test.csv', '../submission/submission_freezeResNet_03.csv', 16, fit=True) Explanation: Create submissions Load the saved trained models and produce predictions for submission on Kaggle. End of explanation
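A possible middle ground between the frozen model and full fine-tuning, sketched here as an addition rather than part of the original notebook: unfreeze only the last portion of the ResNet50 backbone and retrain with a small learning rate. It assumes resnet50, inputs, predictions and data from the cells above; the number of unfrozen layers, the learning rate and the epoch count are illustrative assumptions.

# Hedged sketch: partial fine-tuning (not in the original notebook).
# Keep most of the backbone frozen, unfreeze roughly the last residual block.
for layer in resnet50.layers[:-20]:
    layer.trainable = False
for layer in resnet50.layers[-20:]:
    layer.trainable = True
partialmodel = Model(inputs=inputs, outputs=predictions)
partialmodel.compile(optimizer=RMSprop(lr=1e-5),
                     loss='sparse_categorical_crossentropy',
                     metrics=['accuracy'])
partialmodel.fit(data['X_train'], data['y_train'].reshape(-1, 1),
                 batch_size=16, epochs=2,
                 validation_data=(data['X_test'], data['y_test'].reshape(-1, 1)))

Note that in the Keras version used above, changes to layer.trainable only take effect when the model is compiled, so compile must be called after toggling the flags.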
1,684
Given the following text description, write Python code to implement the functionality described below step by step Description: Advanced Functions Test For this test, you should use the built-in functions to be able to write the requested functions in one line. Problem 1 Use map to create a function which finds the length of each word in the phrase (broken by spaces) and return the values in a list. The function will have an input of a string, and output a list of integers. Step1: Problem 2 Use reduce to take a list of digits and return the number that they correspond to. Do not convert the integers to strings! Step2: Problem 3 Use filter to return the words from a list of words which start with a target letter. Step3: Problem 4 Use zip and list comprehension to return a list of the same length where each value is the two strings from L1 and L2 concatenated together with connector between them. Look at the example output below Step4: Problem 5 Use enumerate and other skills to return a dictionary which has the values of the list as keys and the index as the value. You may assume that a value will only appear once in the given list. Step5: Problem 6 Use enumerate and other skills from above to return the count of the number of items in the list whose value equals its index.
Python Code: def word_lengths(phrase): pass word_lengths('How long are the words in this phrase') Explanation: Advanced Functions Test For this test, you should use the built-in functions to be able to write the requested functions in one line. Problem 1 Use map to create a function which finds the length of each word in the phrase (broken by spaces) and return the values in a list. The function will have an input of a string, and output a list of integers. End of explanation def digits_to_num(digits): pass digits_to_num([3,4,3,2,1]) Explanation: Problem 2 Use reduce to take a list of digits and return the number that they correspond to. Do not convert the integers to strings! End of explanation def filter_words(word_list, letter): pass l = ['hello','are','cat','dog','ham','hi','go','to','heart'] filter_words(l,'h') Explanation: Problem 3 Use filter to return the words from a list of words which start with a target letter. End of explanation def concatenate(L1, L2, connector): pass concatenate(['A','B'],['a','b'],'-') Explanation: Problem 4 Use zip and list comprehension to return a list of the same length where each value is the two strings from L1 and L2 concatenated together with connector between them. Look at the example output below: End of explanation def d_list(L): pass d_list(['a','b','c']) Explanation: Problem 5 Use enumerate and other skills to return a dictionary which has the values of the list as keys and the index as the value. You may assume that a value will only appear once in the given list. End of explanation def count_match_index(L): pass count_match_index([0,2,2,1,5,5,6,10]) Explanation: Problem 6 Use enumerate and other skills from above to return the count of the number of items in the list whose value equals its index. End of explanation
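For reference, one possible set of one-line solutions is sketched below. These are an addition (the original test intentionally leaves the bodies as pass), and the reduce import is only needed on Python 3.

# Possible one-line solutions (reference sketch, not part of the original test)
from functools import reduce  # reduce is a builtin on Python 2, moved to functools on Python 3

def word_lengths(phrase):
    return list(map(len, phrase.split()))

def digits_to_num(digits):
    return reduce(lambda acc, d: acc * 10 + d, digits)

def filter_words(word_list, letter):
    return list(filter(lambda word: word[0] == letter, word_list))

def concatenate(L1, L2, connector):
    return [a + connector + b for (a, b) in zip(L1, L2)]

def d_list(L):
    return {val: idx for idx, val in enumerate(L)}

def count_match_index(L):
    return len([val for idx, val in enumerate(L) if idx == val])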
1,685
Given the following text description, write Python code to implement the functionality described below step by step Description: #There are 4149 elements, and PE has a significant amount of missing values Step1: The two wells are missing all of their PE values Step2: The PE values of the wells show no strong variance; for now, fill in the missing values with the median Fancy visualization from forum Step3: ### Build up Initial Test Loop for model and feature engineering Step4: A bad indicator of model performance: it means no accurate prediction was found for at least one class /home/computer/anaconda3/lib/python3.5/site-packages/sklearn/metrics/classification.py
Python Code: well_PE_Miss = train.loc[train["PE"].isnull(),"Well Name"].unique() well_PE_Miss train.loc[train["Well Name"] == well_PE_Miss[0]].count() train.loc[train["Well Name"] == well_PE_Miss[1]].count() Explanation: #There are 4149 elements, and PE has a significant amount of missing values End of explanation (train.groupby("Well Name"))["PE"].mean() (train.groupby("Well Name"))["PE"].median() train["PE"] = train["PE"].fillna(train["PE"].median()) print(train.loc[train["Well Name"] == "CHURCHMAN BIBLE","PE"].mean()) print(train.loc[train["Well Name"] == "CHURCHMAN BIBLE","PE"].median()) print((train.groupby("Well Name"))["PE"].median()) ## QC for the fill in print(train.loc[train["Well Name"] == "CHURCHMAN BIBLE","PE"].mean()) print(train.loc[train["Well Name"] == "CHURCHMAN BIBLE","PE"].median()) plt.show() Explanation: The two wells have all PE missed End of explanation features = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND','PE','NM_M', 'RELPOS'] feature_vectors = train[features] facies_labels = train['Facies'] ## 1=sandstone 2=c_siltstone 3=f_siltstone ## 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite ## 8=packstone 9=bafflestone facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D'] facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D','PS', 'BS'] #facies_color_map is a dictionary that maps facies labels #to their respective colors facies_color_map = {} for ind, label in enumerate(facies_labels): facies_color_map[label] = facies_colors[ind] def label_facies(row, labels): return labels[ row['Facies'] -1] train.loc[:,'FaciesLabels'] = train.apply(lambda row: label_facies(row, facies_labels), axis=1) # def make_facies_log_plot(logs, facies_colors): #make sure logs are sorted by depth logs = logs.sort_values(by='Depth') cmap_facies = colors.ListedColormap( facies_colors[0:len(facies_colors)], 'indexed') ztop=logs.Depth.min(); zbot=logs.Depth.max() cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1) f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12)) ax[0].plot(logs.GR, logs.Depth, '-g') ax[1].plot(logs.ILD_log10, logs.Depth, '-') ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5') ax[3].plot(logs.PHIND, logs.Depth, '-', color='r') ax[4].plot(logs.PE, logs.Depth, '-', color='black') im=ax[5].imshow(cluster, interpolation='none', aspect='auto', cmap=cmap_facies,vmin=1,vmax=9) divider = make_axes_locatable(ax[5]) cax = divider.append_axes("right", size="20%", pad=0.05) cbar=plt.colorbar(im, cax=cax) cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', 'SiSh', ' MS ', ' WS ', ' D ', ' PS ', ' BS '])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') for i in range(len(ax)-1): ax[i].set_ylim(ztop,zbot) ax[i].invert_yaxis() ax[i].grid() ax[i].locator_params(axis='x', nbins=3) ax[0].set_xlabel("GR") ax[0].set_xlim(logs.GR.min(),logs.GR.max()) ax[1].set_xlabel("ILD_log10") ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max()) ax[2].set_xlabel("DeltaPHI") ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max()) ax[3].set_xlabel("PHIND") ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max()) ax[4].set_xlabel("PE") ax[4].set_xlim(logs.PE.min(),logs.PE.max()) ax[5].set_xlabel('Facies') ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([]) ax[4].set_yticklabels([]); ax[5].set_yticklabels([]) ax[5].set_xticklabels([]) f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94) make_facies_log_plot( train[train['Well Name'] == 'SHRIMPLIN'], facies_colors) plt.show() ## 
Investigate the dependencies of the depth feature and Facies wells = train["Well Name"].unique() #train.plot(x = "Depth", y = "Facies") #plt.show() pi = 0 for well in wells: pi = pi + 1 # Plot index ax = plt.subplot(3, 4, pi) depthi = train.loc[train["Well Name"] == well, "Depth"].values faci = train.loc[train["Well Name"] == well, "Facies"].values plt.plot(faci,depthi) ax.set_title(well) ## Create dummy variables for Well Name, Formation, which may have geologic or geospatial information train_dummy = pd.get_dummies(train[["Formation"]]) train_dummy.describe() cols_dummy = train_dummy.columns.values train[cols_dummy] = train_dummy[cols_dummy] print(len(cols_dummy)) ## For training, drop Formation and FaciesLabels; leave Well Name for later group splitting wellgroups = train["Well Name"].values train_inp = train.drop(["Formation","Well Name",'FaciesLabels'],axis =1) train_inp.info() Explanation: The PE values of the wells show no strong variance; for now, fill in the missing values with the median Fancy visualization from forum End of explanation from sklearn.model_selection import LeavePGroupsOut X = train_inp.drop(["Facies","Depth"],axis = 1).values y = train_inp["Facies"].values lpgo = LeavePGroupsOut(n_groups=2) split_no = lpgo.get_n_splits(X,y,wellgroups) Explanation: ### Build up Initial Test Loop for model and feature engineering : Test 1 SVC End of explanation
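A minimal sketch of the leave-two-wells-out test loop that the heading above describes, added here as an illustration; it assumes X, y, wellgroups and lpgo from the preceding cells, and the SVC settings and micro-averaged F1 are assumptions rather than tuned choices.

# Sketch of the evaluation loop (illustrative, not part of the original notebook)
from sklearn.svm import SVC
from sklearn.metrics import f1_score

scores = []
for train_idx, test_idx in lpgo.split(X, y, groups=wellgroups):
    clf = SVC(C=1, gamma=0.001, kernel='rbf')
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores.append(f1_score(y[test_idx], pred, average='micro'))
print("mean F1 over %d leave-2-wells-out splits: %.3f" % (len(scores), sum(scores) / len(scores)))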
1,686
Given the following text description, write Python code to implement the functionality described below step by step Description: <a> <img align=left src="files/images/pyspark-page1.svg" width=500 height=250 /> </a> DataFrame API GitHub related blog post <a> <img align=left src="files/images/pyspark-page2.svg" width=500 height=500> </a> Click on a picture to view pyspark docs Step1: <a href="http Step2: <a href="http Step3: <a href="http Step4: <a href="http Step5: <a href="http Step6: <a href="http Step7: <a href="http Step8: <a href="http Step9: <a href="http Step10: <a href="http Step11: <a href="http Step12: <a href="http Step13: <a href="http Step14: <a href="http Step15: <a href="http Step16: <a href="http Step17: <a href="http Step18: <a href="http Step19: <a href="http Step20: <a href="http Step21: <a href="http Step22: <a href="http Step23: <a href="http Step24: <a href="http Step25: <a href="http Step26: <a href="http Step27: <a href="http Step28: <a href="http Step29: <a href="http Step30: <a href="http Step31: <a href="http Step32: <a href="http Step33: <a href="http Step34: <a href="http Step35: <a href="http Step36: <a href="http Step37: <a href="http Step38: <a href="http Step39: <a href="http Step40: <a href="http Step41: <a href="http Step42: <a href="http Step43: <a href="http Step44: <a href="http Step45: <a href="http Step46: <a href="http Step47: <a href="http Step48: <a href="http Step49: <a href="http Step50: <a href="http Step51: <a href="http Step52: <a href="http Step53: <a href="http Step54: <a href="http Step55: <a href="http Step56: <a href="http Step57: <a href="http Step58: <a href="http Step59: <a href="http Step60: <a href="http Step61: <a href="http Step62: <a href="http Step63: <a href="http Step64: <a href="http Step65: <a href="http
Python Code: import IPython print("pyspark version:" + str(sc.version)) print("Ipython version:" + str(IPython.__version__)) Explanation: <a> <img align=left src="files/images/pyspark-page1.svg" width=500 height=250 /> </a> DataFrame API GitHub related blog post <a> <img align=left src="files/images/pyspark-page2.svg" width=500 height=500> </a> Click on a picture to view pyspark docs End of explanation # map x = sc.parallelize([1,2,3]) # sc = spark context, parallelize creates an RDD from the passed object y = x.map(lambda x: (x,x**2)) print(x.collect()) # collect copies RDD elements to a list on the driver print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.map"> <img align=left src="files/images/pyspark-page3.svg" width=500 height=500 /> </a> End of explanation # flatMap x = sc.parallelize([1,2,3]) y = x.flatMap(lambda x: (x, 100*x, x**2)) print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.flatMap"> <img align=left src="files/images/pyspark-page4.svg" width=500 height=500 /> </a> End of explanation # mapPartitions x = sc.parallelize([1,2,3], 2) def f(iterator): yield sum(iterator) y = x.mapPartitions(f) print(x.glom().collect()) # glom() flattens elements on the same partition print(y.glom().collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.mapPartitions"> <img align=left src="files/images/pyspark-page5.svg" width=500 height=500 /> </a> End of explanation # mapPartitionsWithIndex x = sc.parallelize([1,2,3], 2) def f(partitionIndex, iterator): yield (partitionIndex,sum(iterator)) y = x.mapPartitionsWithIndex(f) print(x.glom().collect()) # glom() flattens elements on the same partition print(y.glom().collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.mapPartitionsWithIndex"> <img align=left src="files/images/pyspark-page6.svg" width=500 height=500 /> </a> End of explanation # getNumPartitions x = sc.parallelize([1,2,3], 2) y = x.getNumPartitions() print(x.glom().collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.getNumPartitions"> <img align=left src="files/images/pyspark-page7.svg" width=500 height=500 /> </a> End of explanation # filter x = sc.parallelize([1,2,3]) y = x.filter(lambda x: x%2 == 1) # filters out even elements print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.filter"> <img align=left src="files/images/pyspark-page8.svg" width=500 height=500 /> </a> End of explanation # distinct x = sc.parallelize(['A','A','B']) y = x.distinct() print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.distinct"> <img align=left src="files/images/pyspark-page9.svg" width=500 height=500 /> </a> End of explanation # sample x = sc.parallelize(range(7)) ylist = [x.sample(withReplacement=False, fraction=0.5) for i in range(5)] # call 'sample' 5 times print('x = ' + str(x.collect())) for cnt,y in zip(range(len(ylist)), ylist): print('sample:' + str(cnt) + ' y = ' + str(y.collect())) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.sample"> <img align=left src="files/images/pyspark-page10.svg" width=500 height=500 /> </a> End of explanation # takeSample x = sc.parallelize(range(7)) ylist = 
[x.takeSample(withReplacement=False, num=3) for i in range(5)] # call 'sample' 5 times print('x = ' + str(x.collect())) for cnt,y in zip(range(len(ylist)), ylist): print('sample:' + str(cnt) + ' y = ' + str(y)) # no collect on y Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.takeSample"> <img align=left src="files/images/pyspark-page11.svg" width=500 height=500 /> </a> End of explanation # union x = sc.parallelize(['A','A','B']) y = sc.parallelize(['D','C','A']) z = x.union(y) print(x.collect()) print(y.collect()) print(z.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.union"> <img align=left src="files/images/pyspark-page12.svg" width=500 height=500 /> </a> End of explanation # intersection x = sc.parallelize(['A','A','B']) y = sc.parallelize(['A','C','D']) z = x.intersection(y) print(x.collect()) print(y.collect()) print(z.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.intersection"> <img align=left src="files/images/pyspark-page13.svg" width=500 height=500 /> </a> End of explanation # sortByKey x = sc.parallelize([('B',1),('A',2),('C',3)]) y = x.sortByKey() print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.sortByKey"> <img align=left src="files/images/pyspark-page14.svg" width=500 height=500 /> </a> End of explanation # sortBy x = sc.parallelize(['Cat','Apple','Bat']) def keyGen(val): return val[0] y = x.sortBy(keyGen) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.sortBy"> <img align=left src="files/images/pyspark-page15.svg" width=500 height=500 /> </a> End of explanation # glom x = sc.parallelize(['C','B','A'], 2) y = x.glom() print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.glom"> <img align=left src="files/images/pyspark-page16.svg" width=500 height=500 /> </a> End of explanation # cartesian x = sc.parallelize(['A','B']) y = sc.parallelize(['C','D']) z = x.cartesian(y) print(x.collect()) print(y.collect()) print(z.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.cartesian"> <img align=left src="files/images/pyspark-page17.svg" width=500 height=500 /> </a> End of explanation # groupBy x = sc.parallelize([1,2,3]) y = x.groupBy(lambda x: 'A' if (x%2 == 1) else 'B' ) print(x.collect()) print([(j[0],[i for i in j[1]]) for j in y.collect()]) # y is nested, this iterates through it Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.groupBy"> <img align=left src="files/images/pyspark-page18.svg" width=500 height=500 /> < End of explanation # pipe x = sc.parallelize(['A', 'Ba', 'C', 'AD']) y = x.pipe('grep -i "A"') # calls out to grep, may fail under Windows print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.pipe"> <img align=left src="files/images/pyspark-page19.svg" width=500 height=500 /> </a> End of explanation # foreach from __future__ import print_function x = sc.parallelize([1,2,3]) def f(el): '''side effect: append the current RDD elements to a file''' f1=open("./foreachExample.txt", 'a+') print(el,file=f1) open('./foreachExample.txt', 'w').close() # first clear the file contents y = x.foreach(f) # writes into foreachExample.txt 
print(x.collect()) print(y) # foreach returns 'None' # print the contents of foreachExample.txt with open("./foreachExample.txt", "r") as foreachExample: print (foreachExample.read()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.foreach"> <img align=left src="files/images/pyspark-page20.svg" width=500 height=500 /> </a> End of explanation # foreachPartition from __future__ import print_function x = sc.parallelize([1,2,3],5) def f(parition): '''side effect: append the current RDD partition contents to a file''' f1=open("./foreachPartitionExample.txt", 'a+') print([el for el in parition],file=f1) open('./foreachPartitionExample.txt', 'w').close() # first clear the file contents y = x.foreachPartition(f) # writes into foreachExample.txt print(x.glom().collect()) print(y) # foreach returns 'None' # print the contents of foreachExample.txt with open("./foreachPartitionExample.txt", "r") as foreachExample: print (foreachExample.read()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.foreachPartition"> <img align=left src="files/images/pyspark-page21.svg" width=500 height=500 /> </a> End of explanation # collect x = sc.parallelize([1,2,3]) y = x.collect() print(x) # distributed print(y) # not distributed Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.collect"> <img align=left src="files/images/pyspark-page22.svg" width=500 height=500 /> </a> End of explanation # reduce x = sc.parallelize([1,2,3]) y = x.reduce(lambda obj, accumulated: obj + accumulated) # computes a cumulative sum print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.reduce"> <img align=left src="files/images/pyspark-page23.svg" width=500 height=500 /> </a> End of explanation # fold x = sc.parallelize([1,2,3]) neutral_zero_value = 0 # 0 for sum, 1 for multiplication y = x.fold(neutral_zero_value,lambda obj, accumulated: accumulated + obj) # computes cumulative sum print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.fold"> <img align=left src="files/images/pyspark-page24.svg" width=500 height=500 /> </a> End of explanation # aggregate x = sc.parallelize([2,3,4]) neutral_zero_value = (0,1) # sum: x+0 = x, product: 1*x = x seqOp = (lambda aggregated, el: (aggregated[0] + el, aggregated[1] * el)) combOp = (lambda aggregated, el: (aggregated[0] + el[0], aggregated[1] * el[1])) y = x.aggregate(neutral_zero_value,seqOp,combOp) # computes (cumulative sum, cumulative product) print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.aggregate"> <img align=left src="files/images/pyspark-page25.svg" width=500 height=500 /> </a> End of explanation # max x = sc.parallelize([1,3,2]) y = x.max() print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.max"> <img align=left src="files/images/pyspark-page26.svg" width=500 height=500 /> </a> End of explanation # min x = sc.parallelize([1,3,2]) y = x.min() print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.min"> <img align=left src="files/images/pyspark-page27.svg" width=500 height=500 /> </a> End of explanation # sum x = sc.parallelize([1,3,2]) y = x.sum() print(x.collect()) print(y) Explanation: <a 
href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.sum"> <img align=left src="files/images/pyspark-page28.svg" width=500 height=500 /> </a> End of explanation # count x = sc.parallelize([1,3,2]) y = x.count() print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.count"> <img align=left src="files/images/pyspark-page29.svg" width=500 height=500 /> </a> End of explanation # histogram (example #1) x = sc.parallelize([1,3,1,2,3]) y = x.histogram(buckets = 2) print(x.collect()) print(y) # histogram (example #2) x = sc.parallelize([1,3,1,2,3]) y = x.histogram([0,0.5,1,1.5,2,2.5,3,3.5]) print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.histogram"> <img align=left src="files/images/pyspark-page30.svg" width=500 height=500 /> </a> End of explanation # mean x = sc.parallelize([1,3,2]) y = x.mean() print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.mean"> <img align=left src="files/images/pyspark-page31.svg" width=500 height=500 /> </a> End of explanation # variance x = sc.parallelize([1,3,2]) y = x.variance() # divides by N print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.variance"> <img align=left src="files/images/pyspark-page32.svg" width=500 height=500 /> </a> End of explanation # stdev x = sc.parallelize([1,3,2]) y = x.stdev() # divides by N print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.stdev"> <img align=left src="files/images/pyspark-page33.svg" width=500 height=500 /> </a> End of explanation # sampleStdev x = sc.parallelize([1,3,2]) y = x.sampleStdev() # divides by N-1 print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.sampleStdev"> <img align=left src="files/images/pyspark-page34.svg" width=500 height=500 /> </a> End of explanation # sampleVariance x = sc.parallelize([1,3,2]) y = x.sampleVariance() # divides by N-1 print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.sampleVariance"> <img align=left src="files/images/pyspark-page35.svg" width=500 height=500 /> </a> End of explanation # countByValue x = sc.parallelize([1,3,1,2,3]) y = x.countByValue() print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.countByValue"> <img align=left src="files/images/pyspark-page36.svg" width=500 height=500 /> </a> End of explanation # top x = sc.parallelize([1,3,1,2,3]) y = x.top(num = 3) print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.top"> <img align=left src="files/images/pyspark-page37.svg" width=500 height=500 /> </a> End of explanation # takeOrdered x = sc.parallelize([1,3,1,2,3]) y = x.takeOrdered(num = 3) print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.takeOrdered"> <img align=left src="files/images/pyspark-page38.svg" width=500 height=500 /> </a> End of explanation # take x = sc.parallelize([1,3,1,2,3]) y = x.take(num = 3) print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.take"> <img align=left 
src="files/images/pyspark-page39.svg" width=500 height=500 /> </a> End of explanation # first x = sc.parallelize([1,3,1,2,3]) y = x.first() print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.first"> <img align=left src="files/images/pyspark-page40.svg" width=500 height=500 /> </a> End of explanation # collectAsMap x = sc.parallelize([('C',3),('A',1),('B',2)]) y = x.collectAsMap() print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.collectAsMap"> <img align=left src="files/images/pyspark-page41.svg" width=500 height=500 /> </a> End of explanation # keys x = sc.parallelize([('C',3),('A',1),('B',2)]) y = x.keys() print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.keys"> <img align=left src="files/images/pyspark-page42.svg" width=500 height=500 /> </a> End of explanation # values x = sc.parallelize([('C',3),('A',1),('B',2)]) y = x.values() print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.values"> <img align=left src="files/images/pyspark-page43.svg" width=500 height=500 /> </a> End of explanation # reduceByKey x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)]) y = x.reduceByKey(lambda agg, obj: agg + obj) print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.reduceByKey"> <img align=left src="files/images/pyspark-page44.svg" width=500 height=500 /> </a> End of explanation # reduceByKeyLocally x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)]) y = x.reduceByKeyLocally(lambda agg, obj: agg + obj) print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.reduceByKeyLocally"> <img align=left src="files/images/pyspark-page45.svg" width=500 height=500 /> </a> End of explanation # countByKey x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)]) y = x.countByKey() print(x.collect()) print(y) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.countByKey"> <img align=left src="files/images/pyspark-page46.svg" width=500 height=500 /> </a> End of explanation # join x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)]) y = sc.parallelize([('A',8),('B',7),('A',6),('D',5)]) z = x.join(y) print(x.collect()) print(y.collect()) print(z.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.join"> <img align=left src="files/images/pyspark-page47.svg" width=500 height=500 /> </a> End of explanation # leftOuterJoin x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)]) y = sc.parallelize([('A',8),('B',7),('A',6),('D',5)]) z = x.leftOuterJoin(y) print(x.collect()) print(y.collect()) print(z.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.leftOuterJoin"> <img align=left src="files/images/pyspark-page48.svg" width=500 height=500 /> </a> End of explanation # rightOuterJoin x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)]) y = sc.parallelize([('A',8),('B',7),('A',6),('D',5)]) z = x.rightOuterJoin(y) print(x.collect()) print(y.collect()) print(z.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.rightOuterJoin"> <img align=left 
src="files/images/pyspark-page49.svg" width=500 height=500 /> </a> End of explanation # partitionBy x = sc.parallelize([(0,1),(1,2),(2,3)],2) y = x.partitionBy(numPartitions = 3, partitionFunc = lambda x: x) # only key is passed to paritionFunc print(x.glom().collect()) print(y.glom().collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.partitionBy"> <img align=left src="files/images/pyspark-page50.svg" width=500 height=500 /> </a> End of explanation # combineByKey x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)]) createCombiner = (lambda el: [(el,el**2)]) mergeVal = (lambda aggregated, el: aggregated + [(el,el**2)]) # append to aggregated mergeComb = (lambda agg1,agg2: agg1 + agg2 ) # append agg1 with agg2 y = x.combineByKey(createCombiner,mergeVal,mergeComb) print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.combineByKey"> <img align=left src="files/images/pyspark-page51.svg" width=500 height=500 /> </a> End of explanation # aggregateByKey x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)]) zeroValue = [] # empty list is 'zero value' for append operation mergeVal = (lambda aggregated, el: aggregated + [(el,el**2)]) mergeComb = (lambda agg1,agg2: agg1 + agg2 ) y = x.aggregateByKey(zeroValue,mergeVal,mergeComb) print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.aggregateByKey"> <img align=left src="files/images/pyspark-page52.svg" width=500 height=500 /> </a> End of explanation # foldByKey x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)]) zeroValue = 1 # one is 'zero value' for multiplication y = x.foldByKey(zeroValue,lambda agg,x: agg*x ) # computes cumulative product within each key print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.foldByKey"> <img align=left src="files/images/pyspark-page53.svg" width=500 height=500 /> </a> End of explanation # groupByKey x = sc.parallelize([('B',5),('B',4),('A',3),('A',2),('A',1)]) y = x.groupByKey() print(x.collect()) print([(j[0],[i for i in j[1]]) for j in y.collect()]) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.groupByKey"> <img align=left src="files/images/pyspark-page54.svg" width=500 height=500 /> </a> End of explanation # flatMapValues x = sc.parallelize([('A',(1,2,3)),('B',(4,5))]) y = x.flatMapValues(lambda x: [i**2 for i in x]) # function is applied to entire value, then result is flattened print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.flatMapValues"> <img align=left src="files/images/pyspark-page55.svg" width=500 height=500 /> </a> End of explanation # mapValues x = sc.parallelize([('A',(1,2,3)),('B',(4,5))]) y = x.mapValues(lambda x: [i**2 for i in x]) # function is applied to entire value print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.mapValues"> <img align=left src="files/images/pyspark-page56.svg" width=500 height=500 /> </a> End of explanation # groupWith x = sc.parallelize([('C',4),('B',(3,3)),('A',2),('A',(1,1))]) y = sc.parallelize([('B',(7,7)),('A',6),('D',(5,5))]) z = sc.parallelize([('D',9),('B',(8,8))]) a = x.groupWith(y,z) print(x.collect()) print(y.collect()) print(z.collect()) 
print("Result:") for key,val in list(a.collect()): print(key, [list(i) for i in val]) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.groupWith"> <img align=left src="files/images/pyspark-page57.svg" width=500 height=500 /> </a> End of explanation # cogroup x = sc.parallelize([('C',4),('B',(3,3)),('A',2),('A',(1,1))]) y = sc.parallelize([('A',8),('B',7),('A',6),('D',(5,5))]) z = x.cogroup(y) print(x.collect()) print(y.collect()) for key,val in list(z.collect()): print(key, [list(i) for i in val]) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.cogroup"> <img align=left src="files/images/pyspark-page58.svg" width=500 height=500 /> </a> End of explanation # sampleByKey x = sc.parallelize([('A',1),('B',2),('C',3),('B',4),('A',5)]) y = x.sampleByKey(withReplacement=False, fractions={'A':0.5, 'B':1, 'C':0.2}) print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.sampleByKey"> <img align=left src="files/images/pyspark-page59.svg" width=500 height=500 /> </a> End of explanation # subtractByKey x = sc.parallelize([('C',1),('B',2),('A',3),('A',4)]) y = sc.parallelize([('A',5),('D',6),('A',7),('D',8)]) z = x.subtractByKey(y) print(x.collect()) print(y.collect()) print(z.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.subtractByKey"> <img align=left src="files/images/pyspark-page60.svg" width=500 height=500 /> </a> End of explanation # subtract x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)]) y = sc.parallelize([('C',8),('A',2),('D',1)]) z = x.subtract(y) print(x.collect()) print(y.collect()) print(z.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.subtract"> <img align=left src="files/images/pyspark-page61.svg" width=500 height=500 /> </a> End of explanation # keyBy x = sc.parallelize([1,2,3]) y = x.keyBy(lambda x: x**2) print(x.collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.keyBy"> <img align=left src="files/images/pyspark-page62.svg" width=500 height=500 /> </a> End of explanation # repartition x = sc.parallelize([1,2,3,4,5],2) y = x.repartition(numPartitions=3) print(x.glom().collect()) print(y.glom().collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.repartition"> <img align=left src="files/images/pyspark-page63.svg" width=500 height=500 /> </a> End of explanation # coalesce x = sc.parallelize([1,2,3,4,5],2) y = x.coalesce(numPartitions=1) print(x.glom().collect()) print(y.glom().collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.coalesce"> <img align=left src="files/images/pyspark-page64.svg" width=500 height=500 /> </a> End of explanation # zip x = sc.parallelize(['B','A','A']) y = x.map(lambda x: ord(x)) # zip expects x and y to have same #partitions and #elements/partition z = x.zip(y) print(x.collect()) print(y.collect()) print(z.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.zip"> <img align=left src="files/images/pyspark-page65.svg" width=500 height=500 /> </a> End of explanation # zipWithIndex x = sc.parallelize(['B','A','A'],2) y = x.zipWithIndex() print(x.glom().collect()) print(y.collect()) Explanation: <a 
href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.zipWithIndex"> <img align=left src="files/images/pyspark-page66.svg" width=500 height=500 /> </a> End of explanation # zipWithUniqueId x = sc.parallelize(['B','A','A'],2) y = x.zipWithUniqueId() print(x.glom().collect()) print(y.collect()) Explanation: <a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.zipWithUniqueId"> <img align=left src="files/images/pyspark-page67.svg" width=500 height=500 /> </a> End of explanation
1,687
Given the following text description, write Python code to implement the functionality described below step by step Description: Illustration of w-imaging Step1: Generate baseline coordinates for an observation with the VLA over 6 hours, with a visibility recorded every 10 minutes. The phase center is fixed at a declination of 45 degrees. We assume that the imaged sky says at that position over the course of the observation. Note how this gives rise to fairly large $w$-values. Step2: We can now generate visibilities for these baselines by simulation. We place three sources. Step3: Using imaging, we can now reconstruct the image. The easiest option is to use simple imaging without a convolution function Step4: Zooming in shows the source structure in detail Step5: If we use convolution kernels for $w$-reprojection, we can improve the sharpness of imaging. First we make a cache to hold the convolution kernels.
Python Code: %matplotlib inline import sys sys.path.append('../..') from matplotlib import pylab pylab.rcParams['figure.figsize'] = 12, 10 import functools import numpy import scipy import scipy.special from crocodile.clean import * from crocodile.synthesis import * from crocodile.simulate import * from util.visualize import * from arl.test_support import create_named_configuration Explanation: Illustration of w-imaging End of explanation vlas = create_named_configuration('VLAA') ha_range = numpy.arange(numpy.radians(0), numpy.radians(90), numpy.radians(90 / 36)) dec = numpy.radians(45) vobs = xyz_to_baselines(vlas.data['xyz'], ha_range, dec) # Wavelength: 5 metres wvl=5 uvw = vobs / wvl from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt ax = plt.figure().add_subplot(121, projection='3d') ax.scatter(uvw[:,0], uvw[:,1] , uvw[:,2]) max_uvw = numpy.amax(uvw) ax.set_xlabel('U [$\lambda$]'); ax.set_xlim((-max_uvw, max_uvw)) ax.set_ylabel('V [$\lambda$]'); ax.set_ylim((-max_uvw, max_uvw)) ax.set_zlabel('W [$\lambda$]'); ax.set_zlim((-max_uvw, max_uvw)) ax.view_init(20, 20) pylab.show() Explanation: Generate baseline coordinates for an observation with the VLA over 6 hours, with a visibility recorded every 10 minutes. The phase center is fixed at a declination of 45 degrees. We assume that the imaged sky says at that position over the course of the observation. Note how this gives rise to fairly large $w$-values. End of explanation import itertools vis = numpy.zeros(len(uvw), dtype=complex) for u,v in itertools.product(range(-3, 4), range(-3, 4)): vis += 1.0*simulate_point(uvw, 0.010*u, 0.010*v) plt.clf() uvdist=numpy.sqrt(uvw[:,0]**2+uvw[:,1]**2) plt.plot(uvdist, numpy.abs(vis), '.', color='r') Explanation: We can now generate visibilities for these baselines by simulation. We place three sources. End of explanation theta = 2*0.05 lam = 18000 d,p,_=do_imaging(theta, lam, uvw, None, vis, simple_imaging) show_image(d, "image", theta) Explanation: Using imaging, we can now reconstruct the image. The easiest option is to use simple imaging without a convolution function: End of explanation step=int(theta*lam/10) def zoom(x, y=step): pylab.matshow(d[y:y+2*step,x:x+2*step]) ; pylab.colorbar(shrink=.4,pad=0.025); pylab.show() from ipywidgets import interact interact(zoom, x=(0,d.shape[0]-2*step,step), y=(0,d.shape[1]-2*step,step)); Explanation: Zooming in shows the source structure in detail End of explanation wstep=100 wcachesize=int(numpy.ceil(numpy.abs(uvw[:,2]).max()/wstep)) print("Making w-kernel cache of %d kernels" % wcachesize) wcache=pylru.FunctionCacheManager(w_kernel, wcachesize) imgfn = functools.partial(w_cache_imaging, kernel_cache=w_conj_kernel_fn(wcache), wstep=wstep, Qpx=2, NpixFF=256, NpixKern=31) d_w,p_w,_=do_imaging(theta, lam, uvw, None, vis, imgfn) show_image(d_w, "image", theta) step=int(theta*lam/10) def zoom_w(x=720,y=step): pylab.matshow(d_w[y:y+2*step,x:x+2*step]); pylab.colorbar(shrink=.4,pad=0.025); pylab.show() interact(zoom_w, x=(0,d.shape[0]-2*step,step), y=(0,d.shape[1]-2*step,step)) Explanation: If we use convolution kernels for $w$-reprojection, we can improve the sharpness of imaging. First we make a cache to hold the convolution kernels. End of explanation
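As a rough standalone illustration of what simulate_point is doing above (a simplified plain-numpy sketch, not the crocodile implementation): for a unit point source at direction cosines (l, m), each visibility is a complex exponential whose phase depends on (u, v, w), and the w term in that phase is what the w-kernels later correct for. The sign convention below is one common choice.
import numpy

def point_source_vis(uvw, l, m):
    # V(u,v,w) = exp(-2*pi*j*(u*l + v*m + w*(sqrt(1 - l^2 - m^2) - 1)))
    u, v, w = uvw[:, 0], uvw[:, 1], uvw[:, 2]
    n_minus_1 = numpy.sqrt(1.0 - l * l - m * m) - 1.0
    return numpy.exp(-2j * numpy.pi * (u * l + v * m + w * n_minus_1))

example_uvw = numpy.array([[100.0, 50.0, 10.0], [200.0, -30.0, 5.0]])  # illustrative values only
print(point_source_vis(example_uvw, 0.010, 0.010))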
1,688
Given the following text description, write Python code to implement the functionality described below step by step Description: Figure 5 Step1: Out-of-country performance In this experiment, we compare the performance of models trained in-country with models trained out-of-country. The parameters needed to produce the plots for Panels A and B are as follows Step2: Panel B
Python Code: from fig_utils import * import matplotlib.pyplot as plt import time %matplotlib inline Explanation: Figure 5: Cross-border model generalization This notebook generates individual panels of Figure 5 in "Combining satellite imagery and machine learning to predict poverty". End of explanation # Parameters country_names = ['nigeria', 'tanzania', 'uganda', 'malawi', 'pooled'] country_paths = ['../data/output/LSMS/nigeria/', '../data/output/LSMS/tanzania/', '../data/output/LSMS/uganda/', '../data/output/LSMS/malawi/', '../data/output/LSMS/pooled/'] survey = 'lsms' dimension = 100 k = 10 trials = 10 points = 30 alpha_low = -2 alpha_high = 5 cmap = 'Greens' t0 = time.time() performance_matrix = evaluate_models(country_names, country_paths, survey, dimension, k, trials, points, alpha_low, alpha_high, cmap) t1 = time.time() print 'Time elapsed: {} seconds'.format(t1-t0) print 'Corresponding values:' print performance_matrix Explanation: Out-of-country performance In this experiment, we compare the performance of models trained in-country with models trained out-of-country. The parameters needed to produce the plots for Panels A and B are as follows: country_names: Names of survey data countries country_paths: Paths of directories containing pooled survey data survey: Either 'lsms' or 'dhs' dimension: Number of dimensions to reduce image features to using PCA k: Number of cross validation folds trials: Number of trials to average over points: Number of regularization parameters to try alpha_low: Log of smallest regularization parameter to try alpha_high: Log of largest regularization parameter to try cmap: Color scheme to use for plot, e.g., 'Blues' or 'Greens' For 10 trials, the LSMS plot should take around 5 minutes and the DHS plot should take around 15 minutes. Each data directory should contain the following 4 files: conv_features.npy: (n, 4096) array containing image features corresponding to n clusters nightlights.npy: (n,) vector containing the average nightlights value for each cluster households.npy: (n,) vector containing the number of households for each cluster image_counts.npy: (n,) vector containing the number of images available for each cluster Each data directory should also contain one of the following: consumptions.npy: (n,) vector containing average cluster consumption expenditures for LSMS surveys assets.npy: (n,) vector containing average cluster asset index for DHS surveys Exact results may differ slightly with each run due to randomly splitting data into training and test sets. Panel A: LSMS consumption expenditures End of explanation # Parameters country_names = ['nigeria', 'tanzania', 'uganda', 'malawi', 'rwanda', 'pooled'] country_paths = ['../data/output/DHS/nigeria/', '../data/output/DHS/tanzania/', '../data/output/DHS/uganda/', '../data/output/DHS/malawi/', '../data/output/DHS/rwanda/', '../data/output/DHS/pooled/'] survey = 'dhs' dimension = 100 k = 10 trials = 10 points = 30 alpha_low = -2 alpha_high = 5 cmap = 'Blues' t0 = time.time() performance_matrix = evaluate_models(country_names, country_paths, survey, dimension, k, trials, points, alpha_low, alpha_high, cmap) t1 = time.time() print 'Time elapsed: {} seconds'.format(t1-t0) print 'Corresponding values:' print performance_matrix Explanation: Panel B: DHS assets End of explanation
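The evaluate_models call above lives in fig_utils and is specific to this repository; as a hedged sketch of the kind of model it sweeps over (PCA down to dimension components, then ridge regression cross-validated over a log-spaced grid of regularization strengths), the following uses scikit-learn on synthetic stand-in data rather than the survey features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline

rng = np.random.RandomState(0)
features = rng.rand(200, 4096)   # stand-in for conv_features.npy
outcome = rng.rand(200)          # stand-in for consumptions/assets

alphas = np.logspace(-2, 5, 30)  # mirrors alpha_low, alpha_high, points
model = make_pipeline(PCA(n_components=100), RidgeCV(alphas=alphas, cv=10))
model.fit(features, outcome)
print(model.score(features, outcome))  # in-sample r^2, for illustration only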
1,689
Given the following text description, write Python code to implement the functionality described below step by step Description: 3.1 Step1: Read the data Data are in the child.iq directory of the ARM_Data download-- you might have to change the path I use below to reflect the path on your computer. Step2: First regression-- binary predictor, Pg 31 Fit the regression using the non-jittered data Step3: Plot Figure 3.1, Pg 32 A note for the python version Step4: Second regression -- continuous predictor, Pg 32 Step5: Figure 3.2, Pg 33
Python Code: from __future__ import print_function, division %matplotlib inline import matplotlib import numpy as np import pandas as pd import matplotlib.pyplot as plt # use matplotlib style sheet plt.style.use('ggplot') # import statsmodels for R-style regression import statsmodels.formula.api as smf Explanation: 3.1: One predictor End of explanation kidiq = pd.read_stata("../../ARM_Data/child.iq/kidiq.dta") kidiq.head() Explanation: Read the data Data are in the child.iq directory of the ARM_Data download-- you might have to change the path I use below to reflect the path on your computer. End of explanation fit0 = smf.ols('kid_score ~ mom_hs', data=kidiq).fit() print(fit0.summary()) Explanation: First regression-- binary predictor, Pg 31 Fit the regression using the non-jittered data End of explanation fig0, ax0 = plt.subplots(figsize=(8, 6)) hs_linspace = np.linspace(kidiq['mom_hs'].min(), kidiq['mom_hs'].max(), 50) # default color cycle colors = plt.rcParams['axes.color_cycle'] # plot points plt.scatter(kidiq['mom_hs'], kidiq['kid_score'], s=60, alpha=0.5, c=colors[1]) # add fit plt.plot(hs_linspace, fit0.params[0] + fit0.params[1] * hs_linspace, lw=3, c=colors[1]) plt.xlabel("Mother completed high school") plt.ylabel("Child test score") Explanation: Plot Figure 3.1, Pg 32 A note for the python version: I have not included jitter, in the vertical or horizontal directions. Instead, the data is plotted with opacity so the regions with high data-density can be distinguished. End of explanation fit1 = smf.ols('kid_score ~ mom_iq', data=kidiq).fit() print(fit1.summary()) Explanation: Second regression -- continuous predictor, Pg 32 End of explanation fig1, ax1 = plt.subplots(figsize=(8, 6)) iq_linspace = np.linspace(kidiq['mom_iq'].min(), kidiq['mom_iq'].max(), 50) # default color cycle colors = plt.rcParams['axes.color_cycle'] # plot points plt.scatter(kidiq['mom_iq'], kidiq['kid_score'], s=60, alpha=0.5, c=colors[1]) # add fit plt.plot(iq_linspace, fit1.params[0] + fit1.params[1] * iq_linspace, lw=3, c=colors[1]) plt.xlabel("Mother IQ score") plt.ylabel("Child test score") Explanation: Figure 3.2, Pg 33 End of explanation
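A small follow-on sketch, not part of the chapter's code: the same statsmodels formula interface handles both predictors at once, assuming the kidiq dataframe and the smf import from the cells above are still in scope.
fit2 = smf.ols('kid_score ~ mom_hs + mom_iq', data=kidiq).fit()
print(fit2.params)       # intercept plus one coefficient per predictor
print(fit2.conf_int())   # default 95% confidence intervals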
1,690
Given the following text description, write Python code to implement the functionality described below step by step Description: probability density function - derivative of a CDF. Evaluating for x gives a probability density or "the probability per unit of x. In order to get a probability mass, you have to integrate over x. Pdf class probides... * Density take a value, x and returns the density at x * Render evaluates the density at a discrete set of values and returns a pair of sequences Step1: Kernel density estimation - an algorithm that takes a sampel and finds an approximately smooth PDF that fits the data. Step2: Advantages of KDE Step3: ...note that when k = 2, the second central moment is variance. If we attach a weight along a ruler at each location, $x_i$, and then spin the ruler around the mean, the moment of inertia of the spinning weights is the variance of the values Skewness describes the shape of a distribution. Negative means distribution skews left. Positive means skews right. To compute sample skewness $g1$... Step4: Pearson's median skewness coefficient is a measure of the skewness based on the difference between the sample mean and median Step5: To summarize the Moments Step6: Compute the mean, median, skewness, and Pearson's skewness. What fraction of households report a taxable income below the mean?
Python Code: %matplotlib inline import thinkstats2 import thinkplot import pandas as pd import numpy as np import math, random mean, var = 163, 52.8 std = math.sqrt(var) pdf = thinkstats2.NormalPdf(mean, std) print "Density:",pdf.Density(mean + std) thinkplot.Pdf(pdf, label='normal') thinkplot.Show() #by default, makes pmf stetching 3*sigma in either direction pmf = pdf.MakePmf() thinkplot.Pmf(pmf,label='normal') thinkplot.Show() Explanation: probability density function - derivative of a CDF. Evaluating for x gives a probability density or "the probability per unit of x. In order to get a probability mass, you have to integrate over x. Pdf class probides... * Density take a value, x and returns the density at x * Render evaluates the density at a discrete set of values and returns a pair of sequences: sorted values, xs, and their probabilty densities. * MakePmf, evaluates Density at a discrete set of values and returns a normalized Pmf that approximates the Pdf. * GetLinspace, returns the default set of points used by Render and MakePmf ...but they are implemented in children classes End of explanation sample = [random.gauss(mean, std) for i in range(500)] sample_pdf = thinkstats2.EstimatedPdf(sample) thinkplot.Pdf(sample_pdf, label='sample PDF made by KDE') ##Evaluates PDF at 101 points pmf = sample_pdf.MakePmf() thinkplot.Pmf(pmf, label='sample PMF') thinkplot.Show() Explanation: Kernel density estimation - an algorithm that takes a sampel and finds an approximately smooth PDF that fits the data. End of explanation def RawMoment(xs, k): return sum(x**k for x in xs) / len(xs) def CentralMoment(xs, k): mean = RawMoment(xs, 1) return sum((x - mean)**k for x in xs) / len(xs) Explanation: Advantages of KDE: Visualiztion - estimated pdf are easy to get when you look at them. Interpolation - If you think smooth, you can use KDE to estimate the in-between values in a PDF. Simulation - smooths out a small sample allowing for wider degree of outcomes during simulations discretizing a PMF if you evaluate a PDF at discrete points, you can generate a PMF that is an approximation of the PDF. statistic Any time you take a sample and reduce it to a single number, that number is a statistic. raw moment if you have a sample of values, $x_i$, the $k$th raw moment is: $$ m'_k = \frac{1}{n} \sum_i x_i^k $$ when k = 1 the result is the sample mean. central moments are more useful... End of explanation ##normalized so there are no units def StandardizedMoment(xs, k): var = CentralMoment(xs, 2) std = math.sqrt(var) return CentralMoment(xs, k) / std**k def Skewness(xs): return StandardizedMoment(xs, 3) Explanation: ...note that when k = 2, the second central moment is variance. If we attach a weight along a ruler at each location, $x_i$, and then spin the ruler around the mean, the moment of inertia of the spinning weights is the variance of the values Skewness describes the shape of a distribution. Negative means distribution skews left. Positive means skews right. To compute sample skewness $g1$... End of explanation def Median(xs): cdf = thinkstats2.Cdf(xs) return cdf.Value(0.5) def PearsonMedianSkewness(xs): median = Median(xs) mean = RawMoment(xs, 1) var = CentralMoment(xs, 2) std = math.sqrt(var) gp = 3 * (mean - median) / std return gp Explanation: Pearson's median skewness coefficient is a measure of the skewness based on the difference between the sample mean and median: $$ g_p = 3(\bar{x}-m)/S $$ It is a more robust statistic than sample skewness because it is less sensitive to outliers. 
End of explanation import hinc, hinc2 print "starting..." df = hinc.ReadData() log_sample = hinc2.InterpolateSample(df) log_cdf = thinkstats2.Cdf(log_sample) print "done" # thinkplot.Cdf(log_cdf) # thinkplot.Show(xlabel='household income', # ylabel='CDF') Explanation: To summarize the Moments: the mean is a raw moment with k = 1 the variance is a central moment with k = 2 the sample skewness is a standardized moment with k = 3 note that Pearson Median Skewness is a more robust measure of skewness. Exercise End of explanation import density sample = np.power(10,log_sample) mean, median = density.Summarize(sample) log_pdf = thinkstats2.EstimatedPdf(log_sample) thinkplot.Pdf(log_pdf, label='KDE of income') thinkplot.Show(xlabel='log10 $', ylabel='PDF') thinkplot.PrePlot(2, rows=2) thinkplot.SubPlot(1) sample_cdf = thinkstats2.Cdf(sample, label='SampleCdf') thinkplot.Cdf(sample_cdf) thinkplot.SubPlot(2) sample_pdf = thinkstats2.EstimatedPdf(sample) thinkplot.Pdf(sample_pdf) pctBelowMean = sample_cdf.Prob(mean) * 100 print "%d%% of households report taxable incomes below the mean" % pctBelowMean Explanation: Compute the mean, median, skewness, and Pearson's skewness. What fraction of households report a taxable income below the mean? End of explanation
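A quick self-contained numpy check of the moment formulas above (a sketch independent of thinkstats2 and the income data): for an exponential sample the standardized third moment should land near its theoretical skewness of 2, while Pearson's median skewness comes out much smaller, illustrating the more robust statistic.
import numpy as np

np.random.seed(0)
sample = np.random.exponential(size=100000)

mean = sample.mean()
var = np.mean((sample - mean)**2)                 # second central moment
skew = np.mean((sample - mean)**3) / var**1.5     # standardized third moment
pearson = 3 * (mean - np.median(sample)) / np.sqrt(var)
print(skew)      # close to 2 for an exponential distribution
print(pearson)   # roughly 0.9 here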
1,691
Given the following text description, write Python code to implement the functionality described below step by step Description: Qualitative Examples of Machine Learning Applications Classification Step1: We need a two-dimensional, [n_samples, n_features] representation. We can accomplish this by treating each pixel in the image as a feature. Additionally, we need the target array. Step2: Unsupervised learning Step3: Let's plot this data to see if we can learn anything from its structure Step4: Classification on digits Let's apply a classification algorithm to the digits. First, we will split the data into a training and testing set, and fit a Gaussian naive Bayes model Step5: Now that we have predicted our model, we can gauge its accuracy by comparing the true values of the test set to the predictions Step6: With even this extremely simple model, we find about 80% accuracy for classification of the digits! However, this single number doesn't tell us where we've gone wrong—one nice way to do this is to use the confusion matrix Step7: Another way to gain intuition into the characteristics of the model is to plot the inputs again, with their predicted labels. We'll use green for correct labels, and red for incorrect labels
Python Code: from sklearn.datasets import load_digits digits = load_digits() digits.images.shape idx = 14 digits.target[idx], digits.images[idx] import matplotlib.pyplot as plt fig, axes = plt.subplots(10, 10, figsize=(8, 8), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i, ax in enumerate(axes.flat): ax.imshow(digits.images[i], cmap='binary', interpolation='nearest') ax.text(0.05, 0.05, str(digits.target[i]), transform=ax.transAxes, color='green') plt.show() Explanation: Qualitative Examples of Machine Learning Applications Classification: Predicting discrete labels We will first take a look at a simple classification task, in which you are given a set of labeled points and want to use these to classify some unlabeled points. Imagine that we have the data shown in this figure: Task description: * Two features (x, y) * One class label (by color) * Goal: creating a model that decides whether a new point should be "red" or "blue" Model constraint: * We want to use a simple model: a straight seperating the data * The location of the line is to be learned from the data There are numerous models that can succeed. Are they all equally good? The optimal solution Now, image we obtain new data on which we want to predict the labels. We can use our trained model to do that: This looks trivial. The separation can easily be done by a look at the data. Why do we have to employ an algorithm? This is similar to the task of automated spam detection for email; in this case, we might use the following features and labels: feature 1, feature 2, etc. $\to$ normalized counts of important words or phrases ("Viagra", "Nigerian prince", etc.) label $\to$ "spam" or "not spam" Regression: Predicting continuous labels We will next look at a regression task in which the labels are continuous quantities. Consider the data shown in the following figure, which consists of a set of points each with a continuous label: As with the classification example, we have two-dimensional data: that is, there are two features describing each data point. The color of each point represents the continuous label for that point. we will use a simple linear regression model assumes that if we treat the label as a third spatial dimension, we can fit a plane to the data. This is a generalization of fitting a line to data with two coordinates. We can visualize this setup as shown in the following figure: From this view, it seems reasonable that fitting a plane through this three-dimensional data would allow us to predict the expected label for any set of input parameters. Returning to the two-dimensional projection, when we fit such a plane we get the result shown in the following figure: Now, as we aquire new data, we predit their labels: Again, this seems trivial, but why does it actually make sense? Clustering: Inferring labels on unlabeled data Unsupervised learning involves models that describe data without reference to any known labels. Clustering is a task in which data is automatically assigned to some number of discrete groups. For example, we might have some two-dimensional data like that shown in the following figure: How many cluster do you see? The k-means algorithm find the following cluster: Again, this might seem like a trivial exercise in two dimensions, but as our data becomes larger and more complex, such clustering algorithms can be employed to extract useful information from the dataset. 
Dimensionality reduction: Inferring structure of unlabeled data Dimensionality reduction is a bit more abstract than the examples we looked at before, but generally it seeks to pull out some low-dimensional representation of data that in some way preserves relevant qualities of the full dataset. As an example of this, consider the data shown in the following figure: There is some structure in this data: it is drawn from a one-dimensional line that is arranged in a spiral. In a sense, you could say that this data is "intrinsically" only one dimensional A suitable dimensionality reduction model in this case would be sensitive to this nonlinear embedded structure, and be able to pull out this lower-dimensionality representation. The following figure shows a visualization of the results of the Isomap algorithm, a manifold learning algorithm that does exactly this: Notice that the colors change uniformly along the spiral. As with the previous examples, the power of dimensionality reduction algorithms becomes clearer in higher-dimensional cases. For example, we might wish to visualize important relationships within a dataset that has 100 or 1,000 features. Application: Exploring Hand-written Digits Loading and visualizing the digits data We'll use Scikit-Learn's data access interface and take a look at this data: End of explanation X = digits.data X.shape y = digits.target y.shape Explanation: We need a two-dimensional, [n_samples, n_features] representation. We can accomplish this by treating each pixel in the image as a feature. Additionally, we need the target array. End of explanation from sklearn.manifold import Isomap iso = Isomap(n_components=2) iso.fit(digits.data) data_projected = iso.transform(digits.data) data_projected.shape Explanation: Unsupervised learning: Dimensionality reduction We'd like to visualize our points within the 64-dimensional parameter space, but it's difficult to effectively visualize points in such a high-dimensional space. Instead we'll reduce the dimensions to 2, using an unsupervised method: End of explanation import seaborn as sns plt.scatter(data_projected[:, 0], data_projected[:, 1], c=digits.target, edgecolor='none', alpha=0.5, s=20, cmap=plt.cm.get_cmap('nipy_spectral', 10)) plt.colorbar(label='digit label', ticks=range(10)) plt.clim(-0.5, 9.5) plt.show() Explanation: Let's plot this data to see if we can learn anything from its structure: End of explanation from sklearn.model_selection import train_test_split Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0) Xtrain.shape, Xtest.shape, ytrain.shape, ytest.shape from sklearn.naive_bayes import GaussianNB model = GaussianNB() model.fit(Xtrain, ytrain) y_model = model.predict(Xtest) Explanation: Classification on digits Let's apply a classification algorithm to the digits. First, we will split the data into a training and testing set, and fit a Gaussian naive Bayes model: End of explanation from sklearn.metrics import accuracy_score accuracy_score(ytest, y_model) Explanation: Now that we have predicted our model, we can gauge its accuracy by comparing the true values of the test set to the predictions: End of explanation from sklearn.metrics import confusion_matrix mat = confusion_matrix(ytest, y_model) sns.heatmap(mat, square=True, annot=True, cbar=False) plt.xlabel('predicted value') plt.ylabel('true value') plt.show() Explanation: With even this extremely simple model, we find about 80% accuracy for classification of the digits! 
However, this single number doesn't tell us where we've gone wrong—one nice way to do this is to use the confusion matrix: End of explanation fig, axes = plt.subplots(10, 10, figsize=(8, 8), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) test_images = Xtest.reshape(-1, 8, 8) for i, ax in enumerate(axes.flat): ax.imshow(test_images[i], cmap='binary', interpolation='nearest') ax.text(0.05, 0.05, str(y_model[i]), transform=ax.transAxes, color='green' if (ytest[i] == y_model[i]) else 'red') plt.show() Explanation: Another way to gain intuition into the characteristics of the model is to plot the inputs again, with their predicted labels. We'll use green for correct labels, and red for incorrect labels: End of explanation
1,692
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: CSV exercises Step3: More about PandasSQL package Step4: API excercise We can access data on files or database but what about the data that sits in websites. We need to crawl the websites and parsing through all html is complicated. Another way is using this website's api (application programming interface) which can provide us machine readable format data. There are several different kinds of APIs but one of the most common types is representational state transfer (REST) Step5: Imputation excercise Dealing with missing data. One explanation for missing data is
Python Code: import pandas as pd def add_full_name(path_to_csv, path_to_new_csv): #Assume you will be reading in a csv file with the same columns that the #Lahman baseball data set has -- most importantly, there are columns #called 'nameFirst' and 'nameLast'. #1) Write a function that reads a csv #located at "path_to_csv" into a pandas dataframe and adds a new column #called 'nameFull' with a player's full name. # #For example: # for Hank Aaron, nameFull would be 'Hank Aaron', # #2) Write the data in the pandas dataFrame to a new csv file located at #path_to_new_csv #WRITE YOUR CODE HERE # Create a dataframe which hosted the data from csv file. baseball_data = pd.read_csv(path_to_csv) # Create new header called nameFull which contain full name of player. baseball_data['nameFull'] = baseball_data['nameFirst'] + ' ' + baseball_data['nameLast'] # Write the new thing to new csv file baseball_data.to_csv(path_to_new_csv) if __name__ == "__main__": # For local use only # If you are running this on your own machine add the path to the # Lahman baseball csv and a path for the new csv. # The dataset can be downloaded from this website: http://www.seanlahman.com/baseball-archive/statistics # We are using the file Master.csv path_to_csv = "" path_to_new_csv = "" add_full_name(path_to_csv, path_to_new_csv) # Relational Database import pandas import pandasql def select_first_50(filename): # Read in our aadhaar_data csv to a pandas dataframe. Afterwards, we rename the columns # by replacing spaces with underscores and setting all characters to lowercase, so the # column names more closely resemble columns names one might find in a table. aadhaar_data = pandas.read_csv(filename) aadhaar_data.rename(columns = lambda x: x.replace(' ', '_').lower(), inplace=True) # Select out the first 50 values for "registrar" and "enrolment_agency" # in the aadhaar_data table using SQL syntax. # # Note that "enrolment_agency" is spelled with one l. Also, the order # of the select does matter. Make sure you select registrar then enrolment agency # in your query. # # You can download a copy of the aadhaar data that we are passing # into this exercise below: # https://s3.amazonaws.com/content.udacity-data.com/courses/ud359/aadhaar_data.csv # SELECT Headers (Attributes) FROM DataFrame with Limitation number of items which will be selected. q = -- YOUR QUERY HERE SELECT registrar, enrolment_agency FROM aadhaar_data LIMIT 50; #Execute your SQL command against the pandas frame aadhaar_solution = pandasql.sqldf(q.lower(), locals()) return aadhaar_solution Explanation: CSV exercises End of explanation import pandas import pandasql def aggregate_query(filename): # Read in our aadhaar_data csv to a pandas dataframe. Afterwards, we rename the columns # by replacing spaces with underscores and setting all characters to lowercase, so the # column names more closely resemble columns names one might find in a table. aadhaar_data = pandas.read_csv(filename) aadhaar_data.rename(columns = lambda x: x.replace(' ', '_').lower(), inplace=True) # Write a query that will select from the aadhaar_data table how many men and how # many women over the age of 50 have had aadhaar generated for them in each district. # aadhaar_generated is a column in the Aadhaar Data that denotes the number who have had # aadhaar generated in each row of the table. # # Note that in this quiz, the SQL query keywords are case sensitive. # For example, if you want to do a sum make sure you type 'sum' rather than 'SUM'. 
# # The possible columns to select from aadhaar data are: # 1) registrar # 2) enrolment_agency # 3) state # 4) district # 5) sub_district # 6) pin_code # 7) gender # 8) age # 9) aadhaar_generated # 10) enrolment_rejected # 11) residents_providing_email, # 12) residents_providing_mobile_number # # You can download a copy of the aadhaar data that we are passing # into this exercise below: # https://s3.amazonaws.com/content.udacity-data.com/courses/ud359/aadhaar_data.csv q = SELECT gender, district, SUM(aadhaar_generated) FROM aadhaar_data WHERE age>50 GROUP BY gender, district; # Execute your SQL command against the pandas frame aadhaar_solution = pandasql.sqldf(q.lower(), locals()) return aadhaar_solution Explanation: More about PandasSQL package: The pandasql package allows us to perform queries on dataframes using the SQLite syntax. More complex SQL Queries End of explanation import json import requests import pprint def api_get_request(url): # In this exercise, you want to call the last.fm API to get a list of the # top artists in Spain. The grader will supply the URL as an argument to # the function; you do not need to construct the address or call this # function in your grader submission. # # Once you've done this, return the name of the number 1 top artist in # Spain. # More about pprint here: https://docs.python.org/2/library/pprint.html # Make API call using the request library and load the results into a dictionary. data = requests.get(url).text data = json.loads(data) pp = pprint.PrettyPrinter(indent = 4) # Print out the name of the #1 artist, we look at the topartists # key then artist key then the first entry and there is the name. pp.pprint(data['topartists']['artist'][0]['name']) return data['topartists']['artist'][0]['name'] # return the top artist in Spain Explanation: API excercise We can access data on files or database but what about the data that sits in websites. We need to crawl the websites and parsing through all html is complicated. Another way is using this website's api (application programming interface) which can provide us machine readable format data. There are several different kinds of APIs but one of the most common types is representational state transfer (REST) End of explanation import pandas import numpy def imputation(filename): # Pandas dataframes have a method called 'fillna(value)', such that you can # pass in a `single value` to replace any NAs in a dataframe or series. You # can call it like this: # dataframe['column'] = dataframe['column'].fillna(value) # # Using the numpy.mean function, which calculates the mean of a numpy # array, impute any missing values in our Lahman baseball # data sets 'weight' column by setting them equal to the average weight. # # You can access the 'weight' colum in the baseball data frame by # calling baseball['weight'] baseball = pandas.read_csv(filename) #YOUR CODE GOES HERE mean_weight = numpy.mean(baseball['weight']) # print mean_weight baseball['weight'] = baseball['weight'].fillna(mean_weight) return baseball Explanation: Imputation excercise Dealing with missing data. One explanation for missing data is End of explanation
1,693
Given the following text description, write Python code to implement the functionality described below step by step Description: There are two ways to load a DiVinE model Step1: The spot.ltsmin.load function compiles the model using the ltlmin interface and load it. This should work with DiVinE models if divine --LTSmin works, and with Promela models if spins is installed. Step2: Compiling the model creates all several kinds of files. The test1.dve file is converted into a C++ source code test1.dve.cpp which is then compiled into a shared library test1.dve2c. Becauce spot.ltsmin.load() has already loaded this shared library, all those files can be erased. If you do not erase the files, spot.ltsmin.load() will use the timestamps to decide whether the library should be recompiled or not everytime you load the library. For editing and loading DVE file from a notebook, it is a better to use the %%dve as shown next. Step3: Loading from a notebook cell The %%dve cell magic implements all of the above steps (saving the model into a temporary file, compiling it, loading it, erasing the temporary files). The variable name that should receive the model (here m) should be indicated on the first line, after %dve. Step4: Working with an ltsmin model Printing an ltsmin model shows some information about the variables it contains and their types, however the info() methods provide the data in a map that is easier to work with. Step5: To obtain a Kripke structure, call kripke and supply a list of atomic propositions to observe in the model. Step6: If we want to create a model_check function that takes a model and formula, we need to get the list of atomic propositions used in the formula using atomic_prop_collect(). This returns an atomic_prop_set Step7: Instead of otf_product(x, y).is_empty() we prefer to call !x.intersects(y). There is also x.intersecting_run(y) that can be used to return a counterexample. Step8: This accepting run can be represented as an automaton (the True argument requires the state names to be preserved). This can be more readable.
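Before the full notebook code that follows, here is a minimal pure-LTL sketch (no DiVinE model required) of the emptiness-check pattern it works toward; it assumes only that the spot Python bindings are importable and uses the same calls (translate, formula_Not, intersects) that appear below.
import spot

# does "G a" entail "F a"?  Intersect A(G a) with A(!(F a)) and test for a common word.
left = spot.translate('G a')
right_negated = spot.formula_Not(spot.formula('F a')).translate()
print(left.intersects(right_negated))   # False here, so the entailment holds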
Python Code: !rm -f test1.dve %%file test1.dve int a = 0, b = 0; process P { state x; init x; trans x -> x { guard a < 3 && b < 3; effect a = a + 1; }, x -> x { guard a < 3 && b < 3; effect b = b + 1; }; } process Q { state wait, work; init wait; trans wait -> work { guard b > 1; }, work -> wait { guard a > 1; }; } system async; Explanation: There are two ways to load a DiVinE model: from a file or from a cell. Loading from a file We will first start with the file version, however because this notebook should also be a self-contained test case, we start by writing a model into a file. End of explanation m = spot.ltsmin.load('test1.dve') Explanation: The spot.ltsmin.load function compiles the model using the ltlmin interface and load it. This should work with DiVinE models if divine --LTSmin works, and with Promela models if spins is installed. End of explanation !rm -f test1.dve test1.dve.cpp test1.dve2C Explanation: Compiling the model creates all several kinds of files. The test1.dve file is converted into a C++ source code test1.dve.cpp which is then compiled into a shared library test1.dve2c. Becauce spot.ltsmin.load() has already loaded this shared library, all those files can be erased. If you do not erase the files, spot.ltsmin.load() will use the timestamps to decide whether the library should be recompiled or not everytime you load the library. For editing and loading DVE file from a notebook, it is a better to use the %%dve as shown next. End of explanation %%dve m int a = 0, b = 0; process P { state x; init x; trans x -> x { guard a < 3 && b < 3; effect a = a + 1; }, x -> x { guard a < 3 && b < 3; effect b = b + 1; }; } process Q { state wait, work; init wait; trans wait -> work { guard b > 1; }, work -> wait { guard a > 1; }; } system async; Explanation: Loading from a notebook cell The %%dve cell magic implements all of the above steps (saving the model into a temporary file, compiling it, loading it, erasing the temporary files). The variable name that should receive the model (here m) should be indicated on the first line, after %dve. End of explanation m sorted(m.info().items()) Explanation: Working with an ltsmin model Printing an ltsmin model shows some information about the variables it contains and their types, however the info() methods provide the data in a map that is easier to work with. End of explanation k = m.kripke(["a<1", "b>2"]) k k.show('.<15') k.show('.<0') # unlimited output a = spot.translate('"a<1" U "b>2"'); a spot.otf_product(k, a) Explanation: To obtain a Kripke structure, call kripke and supply a list of atomic propositions to observe in the model. End of explanation a = spot.atomic_prop_collect(spot.formula('"a < 2" W "b == 1"')); a def model_check(f, m): f = spot.formula(f) ss = m.kripke(spot.atomic_prop_collect(f)) nf = spot.formula_Not(f).translate() return spot.otf_product(ss, nf).is_empty() model_check('"a<1" R "b > 1"', m) Explanation: If we want to create a model_check function that takes a model and formula, we need to get the list of atomic propositions used in the formula using atomic_prop_collect(). This returns an atomic_prop_set: End of explanation def model_debug(f, m): f = spot.formula(f) ss = m.kripke(spot.atomic_prop_collect(f)) nf = spot.formula_Not(f).translate() return ss.intersecting_run(nf) run = model_debug('"a<1" R "b > 1"', m); run Explanation: Instead of otf_product(x, y).is_empty() we prefer to call !x.intersects(y). There is also x.intersecting_run(y) that can be used to return a counterexample. 
End of explanation run.as_twa(True) Explanation: This accepting run can be represented as an automaton (the True argument requires the state names to be preserved). This can be more readable. End of explanation
1,694
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Atividade de Regressão Linear Código-fonte disponível em Step2: Questões 1. Rode o mesmo programa nos dados contendo anos de escolaridade (primeira coluna) versus salário (segunda coluna). Baixe os dados aqui. Esse exemplo foi trabalhado em sala de aula em várias ocasiões. Os itens a seguir devem ser respondidos usando esses dados. RESOLUÇÃO Step3: 3. O que acontece com o RSS ao longo das iterações (aumenta ou diminui) se você usar 1000 iterações e um learning_rate (tamanho do passo do gradiente) de 0.001? Por que você acha que isso acontece? Step4: Com esse gráfico é possível observar que Step5: Ao observar os valores de RSS calculados quando o número de iterações aumenta, é possível observar que o RSS obtido diminui cada vez mais. 4. Teste valores diferentes do número de iterações e learning_rate até que w0 e w1 sejam aproximadamente iguais a -39 e 5 respectivamente. Reporte os valores do número de iterações e learning_rate usados para atingir esses valores. Foram testados diferentes valores para o número de iterações, e diferentes frações do Learning Rate até que com a seguinte configuração, obteve o valor desejado Step6: 5. O algoritmo do vídeo usa o número de iterações como critério de parada. Mude o algoritmo para considerar um critério de tolerância que é comparado ao tamanho do gradiente (como no algoritmo dos slides apresentados em sala). A metodologia aplicada foi a seguinte Step7: 6. Ache um valor de tolerância que se aproxime dos valores dos parâmetros do item 4 acima. Que valor foi esse? O valor utilizado, conforme descrito na questão anterior, foi 0,001. Ou seja, quando o tamanho do gradiente for menor que 0,001, então, o algoritmo entenderá que a aproximação convergiu e terminará o processamento. 7. Implemente a forma fechada (equações normais) de calcular os coeficientes de regressão (vide algoritmo nos slides). Compare o tempo de processamento com o gradiente descendente considerando sua solução do item 6. Foi implementada a função considerando a forma fechada. Dessa maneira, foi observado que o tempo de processamento descrito na questão 6 foi, aproximadamente, cinco vezes maior. Mesmo considerando o código implementado já na versão vetorizada.
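The notebook below is written in Portuguese (it adapts a gradient-descent demo to the income data); before it, a brief standalone English sketch of the closed-form answer its question 7 asks about, run on synthetic data rather than income.csv.
import numpy as np

rng = np.random.RandomState(0)
x = rng.uniform(8, 18, 200)                      # stand-in for years of schooling
y = -39 + 5 * x + rng.normal(0, 5, size=200)     # stand-in for income, plus noise

# normal-equation (closed-form) simple linear regression
w1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
w0 = y.mean() - w1 * x.mean()
print(w0, w1)   # should land near the -39 and 5 used to generate the data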
Python Code: %matplotlib notebook #!/usr/bin/env python # -*- coding: utf-8 -*- #Federal University of Campina Grande (UFCG) #Author: Ítalo de Pontes Oliveira #Adapted from: Siraj Raval #Available at: https://github.com/llSourcell/linear_regression_live #The optimal values of m and b can be actually calculated with way less effort than doing a linear regression. #this is just to demonstrate gradient descent This project will calculate linear regression import matplotlib.pyplot as plt %matplotlib inline import numpy from numpy import * import sys # y = mx + b # m is slope, b is y-intercept ## Compute the errors for a given line # @param b Is the linear coefficient # @param m Is the angular coefficient # @param x Domain points # @param y Domain points def compute_error_for_line_given_points(w0, w1, x, y): totalError = sum((y - (w1 * x + w0)) ** 2) totalError /= float(len(x)) return totalError ## Calculate a new linear and angular coefficient step by a learning rate. # @param w0_current Current linear coefficient # @param w1_current Current linear coefficient # @param x Domain points # @param y Image points # @param learningRate The rate in which the gradient will be changed in one step def step_gradient(w0_current, w1_current, x, y, learningRate): w0_gradient = 0 w1_gradient = 0 norma = 0 N = float(len(x)) w0_gradient = -2 * sum( y - ( w0_current + ( w1_current * x ) ) ) / N w1_gradient = -2 * sum( ( y - ( w0_current + ( w1_current * x ) ) ) * x ) / N norma = numpy.linalg.norm(w0_gradient - w1_gradient) new_w0 = w0_current - (learningRate * w0_gradient) new_w1 = w1_current - (learningRate * w1_gradient) return [new_w0, new_w1, norma] ## Run the descending gradient # @param x Domain points # @param y Image points # @param starting_w0 Linear coefficient initial # @param starting_w1 Angular coefficient initial # @param learning_rate The rate in which the gradient will be changed in one step # @param num_iterations Interactions number that the slope line will approximate before a stop. def gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, num_iterations): w0 = starting_w0 w1 = starting_w1 rss_by_step = 0 rss_total = [] norma = learning_rate iteration_number = 0 condiction = True if num_iterations < 1: condiction = False while (norma > 0.001 and not condiction) or ( iteration_number < num_iterations and condiction): rss_by_step = compute_error_for_line_given_points(w0, w1, x, y) rss_total.append(rss_by_step) w0, w1, norma = step_gradient(w0, w1, x, y, learning_rate) iteration_number += 1 return [w0, w1, iteration_number, rss_total] Explanation: Atividade de Regressão Linear Código-fonte disponível em: link End of explanation ## Show figure # @param data Data to show in the graphic. # @param xlabel Text to be shown in abscissa axis. # @param ylabel Text to be shown in ordinate axis. def show_figure(data, xlabel, ylabel): plt.plot(data) plt.xlabel(xlabel) plt.ylabel(ylabel) Explanation: Questões 1. Rode o mesmo programa nos dados contendo anos de escolaridade (primeira coluna) versus salário (segunda coluna). Baixe os dados aqui. Esse exemplo foi trabalhado em sala de aula em várias ocasiões. Os itens a seguir devem ser respondidos usando esses dados. RESOLUÇÃO: Arquivo baixado, encontra-se no diretório atual com o nome "income.csv". 2. Modifique o código original para imprimir o RSS a cada iteração do gradiente descendente. RESOLUÇÃO: Foi preferível adicionar uma nova funcionalidade ao código. Ao final da execução é salvo um gráfico com o RSS para todas as iterações. 
End of explanation points = genfromtxt("income.csv", delimiter=",") x = points[:,0] y = points[:,1] starting_w0 = 0 starting_w1 = 0 learning_rate = 0.001 iterations_number = 50 [w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number) show_figure(rss_total, "Iteraction", "RSS") print("RSS na última iteração: %.2f" % rss_total[-1]) learning_rate = 0.0001 [w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number) show_figure(rss_total, "Iteraction", "RSS") print("RSS na última iteração: %.2f" % rss_total[-1]) Explanation: 3. O que acontece com o RSS ao longo das iterações (aumenta ou diminui) se você usar 1000 iterações e um learning_rate (tamanho do passo do gradiente) de 0.001? Por que você acha que isso acontece? End of explanation learning_rate = 0.001 iterations_number = 1000 [w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number) print("RSS na última iteração: %.2f" % rss_total[-1]) iterations_number = 10000 [w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number) print("RSS na última iteração: %.2f" % rss_total[-1]) Explanation: Com esse gráfico é possível observar que: Quanto maior o Learning Rate, maior o número de iterações necessárias para se atingir um mesmo erro. End of explanation learning_rate = 0.0025 iterations_number = 20000 [w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number) print("W0: %.2f" % w0) print("W1: %.2f" % w1) print("RSS na última iteração: %.2f" % rss_total[-1]) Explanation: Ao observar os valores de RSS calculados quando o número de iterações aumenta, é possível observar que o RSS obtido diminui cada vez mais. 4. Teste valores diferentes do número de iterações e learning_rate até que w0 e w1 sejam aproximadamente iguais a -39 e 5 respectivamente. Reporte os valores do número de iterações e learning_rate usados para atingir esses valores. Foram testados diferentes valores para o número de iterações, e diferentes frações do Learning Rate até que com a seguinte configuração, obteve o valor desejado: End of explanation learning_rate = 0.0025 iterations_number = 0 [w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number) print("W0: %.2f" % w0) print("W1: %.2f" % w1) print("RSS na última iteração: %.2f" % rss_total[-1]) Explanation: 5. O algoritmo do vídeo usa o número de iterações como critério de parada. Mude o algoritmo para considerar um critério de tolerância que é comparado ao tamanho do gradiente (como no algoritmo dos slides apresentados em sala). A metodologia aplicada foi a seguinte: quando não se fornece o número de iterações por parâmetro, o algoritmo irá iterar até que a norma do gradiente descendente seja igual a 0,001. Ou: $\vert\vert (W_{0}^{grad}, W_{1}^{grad} ) \vert\vert < 0,001 $ End of explanation import time start_time = time.time() [w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number) gradient_time = float(time.time()-start_time) print("Tempo para calcular os coeficientes pelo gradiente descendente: %.2f s." 
% gradient_time) start_time = time.time() ## Compute the W0 and W1 by derivative # @param x Domain points # @param y Image points def compute_normal_equation(x, y): x_mean = numpy.mean(x) y_mean = numpy.mean(y) w1 = sum((x - x_mean)*(y - y_mean))/sum((x - x_mean)**2) w0 = y_mean-(w1*x_mean) return [w0, w1] derivative_time = float(time.time()-start_time) print("Tempo para calcular os coeficientes de maneira fechada: %.4f s." % derivative_time) ratio = float(gradient_time/derivative_time) print("Ou seja, calcular os coeficientes por meio da forma fechada é %.0f vezes mais rápido que via gradiente." % (ratio)) Explanation: 6. Ache um valor de tolerância que se aproxime dos valores dos parâmetros do item 4 acima. Que valor foi esse? O valor utilizado, conforme descrito na questão anterior, foi 0,001. Ou seja, quando o tamanho do gradiente for menor que 0,001, então, o algoritmo entenderá que a aproximação convergiu e terminará o processamento. 7. Implemente a forma fechada (equações normais) de calcular os coeficientes de regressão (vide algoritmo nos slides). Compare o tempo de processamento com o gradiente descendente considerando sua solução do item 6. Foi implementada a função considerando a forma fechada. Dessa maneira, foi observado que o tempo de processamento descrito na questão 6 foi, aproximadamente, cinco vezes maior. Mesmo considerando o código implementado já na versão vetorizada. End of explanation
1,695
Given the following text description, write Python code to implement the functionality described below step by step Description: Background I wrote this notebook as a simple training exercise to better understand feedforward neural networks. The naming conventions in this code match with Andrew Ng's free online course in Machine Learning on Coursera (highly recommended). This neural network has a single hidden layer. Here's how the neural network is connected and equations for calculating the hypothesis, h_theta(x). This neural network also implements backpropagation during training to determine the difference between the hypothesis and the training data in order to update the thetas, or weights, in the network. The example has a trivial training set with X equal to <table width="50%"> <tr><td>0</td><td>0</td></tr> <tr><td>0</td><td>1</td></tr> <tr><td>1</td><td>0</td></tr> <tr><td>1</td><td>1</td></tr> </table> and the y vector used for this supervised learning matches the exclusive or (XOR) pattern. <table width="50%"> <tr><td>0</td></tr> <tr><td>1</td></tr> <tr><td>1</td></tr> <tr><td>0</td></tr> </table> Note Step1: The theta_init function is used to initialize the thetas (weights) in the network. It returns a random matrix with values in the range of [-epsilon, epsilon]. Step2: This network uses a sigmoid activating function. The sigmoid derivative is used during backpropagation. Step3: The mean squared error (MSE) provides measure of the distance between the actual value and what is estimated by the neural network. Step4: The nn_train function trains an artificial neural network with a single hidden layer. Each column in X is a feature and each row in X is a single training observation. The y value contains the classifications for each observation. For multi-classification problems, y will have more than one column. After training, this function returns the calculated theta values (weights) that can be used for predictions. The training will end when the desired error or maximum iterations is reached whichever comes first. Step5: The nn_predict function takes the theta values calculated by nn_train to make predictions about the data in X. Step6: Example We start by plugging our data and classifications into our neural network which returns the weights we can use to make predictions with new data. Step7: Now that we've trained the neural network. We can make predictions for new data.
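Before the implementation below, a small standalone plain-numpy check (a sketch, not part of the original notebook) of the identity the backpropagation step relies on, namely that the derivative of sigmoid(x) equals sigmoid(x) * (1 - sigmoid(x)).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x, eps = 0.3, 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)   # central difference
analytic = sigmoid(x) * (1 - sigmoid(x))
print(numeric, analytic)   # the two agree to several decimal places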
Python Code: # NumPy is the fundamental package for scientific computing with Python. import numpy as np Explanation: Background I wrote this notebook as a simple training exercise to better understand feedforward neural networks. The naming conventions in this code match with Andrew Ng's free online course in Machine Learning on Coursera (highly recommended). This neural network has a single hidden layer. Here's how the neural network is connected and equations for calculating the hypothesis, h_theta(x). This neural network also implements backpropagation during training to determine the difference between the hypothesis and the training data in order to update the thetas, or weights, in the network. The example has a trivial training set with X equal to <table width="50%"> <tr><td>0</td><td>0</td></tr> <tr><td>0</td><td>1</td></tr> <tr><td>1</td><td>0</td></tr> <tr><td>1</td><td>1</td></tr> </table> and the y vector used for this supervised learning matches the exclusive or (XOR) pattern. <table width="50%"> <tr><td>0</td></tr> <tr><td>1</td></tr> <tr><td>1</td></tr> <tr><td>0</td></tr> </table> Note: the images above are from Andrew Ng's Machine Learning Course. End of explanation def theta_init(in_size, out_size, epsilon = 0.12): return np.random.rand(in_size + 1, out_size) * 2 * epsilon - epsilon Explanation: The theta_init function is used to initialize the thetas (weights) in the network. It returns a random matrix with values in the range of [-epsilon, epsilon]. End of explanation def sigmoid(x): return np.divide(1.0, (1.0 + np.exp(-x))) def sigmoid_derivative(x): return np.multiply(x, (1.0 - x)) Explanation: This network uses a sigmoid activating function. The sigmoid derivative is used during backpropagation. End of explanation def mean_squared_error(X): return np.power(X, 2).mean(axis=None) Explanation: The mean squared error (MSE) provides measure of the distance between the actual value and what is estimated by the neural network. End of explanation def nn_train(X, y, desired_error = 0.001, max_iterations = 100000, hidden_nodes = 5): m = X.shape[0] input_nodes = X.shape[1] output_nodes = y.shape[1] a1 = np.insert(X, 0, 1, axis=1) theta1 = theta_init(input_nodes, hidden_nodes) theta2 = theta_init(hidden_nodes, output_nodes) for x in range(0, max_iterations): # Feedforward a2 = np.insert(sigmoid(a1.dot(theta1)), 0, 1, axis=1) a3 = sigmoid(a2.dot(theta2)) # Calculate error using backpropagation a3_delta = np.subtract(y, a3) mse = mean_squared_error(a3_delta) if mse <= desired_error: print "Achieved requested MSE %f at iteration %d" % (mse, x) break a2_error = a3_delta.dot(theta2.T) a2_delta = np.multiply(a2_error, sigmoid_derivative(a2)) # Update thetas to reduce the error on the next iteration theta2 += np.divide(a2.T.dot(a3_delta), m) theta1 += np.delete(np.divide(a1.T.dot(a2_delta), m), 0, 1) return (theta1, theta2) Explanation: The nn_train function trains an artificial neural network with a single hidden layer. Each column in X is a feature and each row in X is a single training observation. The y value contains the classifications for each observation. For multi-classification problems, y will have more than one column. After training, this function returns the calculated theta values (weights) that can be used for predictions. The training will end when the desired error or maximum iterations is reached whichever comes first. 
End of explanation def nn_predict(X, theta1, theta2): a2 = sigmoid(np.insert(X, 0, 1, axis=1).dot(theta1)) return sigmoid(np.insert(a2, 0, 1, axis=1).dot(theta2)) Explanation: The nn_predict function takes the theta values calculated by nn_train to make predictions about the data in X. End of explanation X = np.matrix('0 0; 0 1; 1 0; 1 1') y = np.matrix('0; 1; 1; 0') (theta1, theta2) = nn_train(X, y) print "\nTrained weights for calculating the hidden layer from the input layer" print theta1 print "\nTrained weights for calculating from the hidden layer to the output layer" print theta2 Explanation: Example We start by plugging our data and classifications into our neural network which returns the weights we can use to make predictions with new data. End of explanation # Our test input doesn't match our training input 'X' X_test = np.matrix('1 1; 0 1; 0 0; 1 0') y_test = np.matrix('0; 1; 0; 1') y_calc = nn_predict(X_test, theta1, theta2) y_diff = np.subtract(y_test, y_calc) print "The MSE for our test set is %f" % (mean_squared_error(y_diff)) print np.concatenate((y_test, y_calc, y_diff), axis=1) Explanation: Now that we've trained the neural network. We can make predictions for new data. End of explanation
1,696
Given the following text description, write Python code to implement the functionality described below step by step Description: Primera parte Step1: Se mostrará una aplicación de la SVD a la compresión de imágenes y reducción de ruido. Step2: Pregunta Step3: Hacer una función que me resuelva un sistema de ecuaciones Ax=b Step4: A es una matriz es que representa al sistema de ecuaciones $ X_{1}+X_{2}=b_{1}$ y $b_{2}=0$, su imagen es $b = \begin{bmatrix}b_{1}\0 \end{bmatrix}$. La solución no es única ya que hay una variable libre. Dada la forma de la pseudoinversa de A $\begin{bmatrix}.5 & 0\.5 & 0 \end{bmatrix}$ la función siempre regresará una respuesta $\begin{bmatrix}x_{1}=b_{1}/2\x_{2}=b_{2}/2 \end{bmatrix}$ aunque $b_{2} != 0$ Step5: En este caso la matriz tiene una solución única ya que $b_{2}=exp-32$ y las x´s son diferentes a las del caso anterior. OLS Step6: Al hacer una aproximación de la forma $sat_score= \alpha + \betastudy_hours + \epsilon$ podemos plantear la minimización de errores al cuadrado de tal manera que $$min \epsilon^{2} = min_{\alpha,\hat{\beta}} satscore - \alpha-\hat{\beta}study hours$$ $$min_{\alpha,\hat{\beta}} y - X\hat{\beta}$$ En este caso $\alpha$ y $\beta$ resultantes de las condiciones de primer orden del problema de minimización son los gradientes que estamos buscando
Python Code: # Segunda parte: Aplicaciones en Python Explanation: Primera parte: Concimiento básico de Algebra Lineal Pregunta 1:¿Por qué una matriz equivale a una transformación lineal entre espacios vectoriales? Porque na matriz realiza las operaciones básicas de suma, resta y multiplicación sobre los vectores canonicos Al aplicar una matriz a un vector estoy obteniendo los cambios resultantes de estas operaciones sobre el vectr. Pregunta 2:¿Cuál es el efecto de transformación lineal de una matriz diagonal y el de una matriz ortogonal? Una matriz diagonal es un reescalamiento mientras que una ortogonal es una rotación. Pregunta 3:¿Qué es la descomposición en valores singulares de una matriz? La descomposición en valores singulares de una matriz es una aproximación a $A = UDV^{t}$ Donde: $U$ y $V$ son los valores singulares de $A$ y $D$ es una matriz diagonal. Pregunta 4: ¿Qué es diagonalizar una matriz y que representan los eigenvectores? La diagonalización de una matriz cuadrada $A_{m x m}$ es llevar el sistema de ecuaciones a la forma canonica, es decir encontrar sus eigenvectores. Los eigenvectores representan los valores propios de la ecuación. Pregunta 5: Intuitivamente qué son los eigenvectores? Los eigenvectores son la base ortogonal del espacio de la matriz. Ellos hacen las rotaciones y son reescalados por lambda. Pregunta 6: Cómo interpretas la descomposición en valores singulares como una composición de tres tipos de transformaciones lineales simples? La descomposición SVD lo que hace es rotar, reescalar y volver a rotar, siendo que en el caso de que la matriz A sea cuadrada, la segunda rotación llevará a la matriz a su posición original. Pregunta 7: ¿Qué relación hay entre la descomposición en valores singulares y la diagonalización? Los elementos $U,D,V$ ,resultantes de una descomposición en valores singulares, son los eigenvectores de la matriz A. Donde $D$ es una diagonalizacion. Pregunta 8:¿Cómo se usa la descomposición en valores singulares para dar un aproximación de rango menor a una matriz? En la descomposición $A = UDV^{t}$ , $D$ puede verse como una multiplicacion de matrices menores $\begin{bmatrix}D_{1} & 0\0 & D_{2}\end{bmatrix}$ con el rango de D_{1} y D_{2} menor al rango de D. Entonces $A^{}=U_{1}D_{1}V_{1}^{t}$ aquel que resuelve el problema de $ min ||A-A^{}||$ s.a. $rank(A^{*})<rank(A)$ Pregunta 9: Describe el método de descenso gradiente El metodo de optimización de la función $f(x)$ por descenso gradiente busca, de manera local: 1)encontrar los valores $x^{}$ tal que $\nabla f(x^)$ sea la dirección de mayor decenso(o asenso en caso de maximizar) 2)moverse en la dirección encontrada en un factor $\beta$ conocido como coeficiente de aprendizaje (o learning rate) de manera que $\hat{x}=x-\beta*\nabla f(x)$ Pregunta 10: Menciona dos problemas de optimizacion con restricciones y dos sin restricciones que te parezcan interesantes como científico de datos Sin restricciones: 1) Maximizar el tiempo de uso de cierta aplicación móvil (un juego) y el life spam de esta. 2) Maximizar la probabilidad de encontrar vida en otros planetas. Con restricciones 1) Mejorar el acceso a servicios públicos mejorando la planeación urbana. 
2) Optimización de despliegue de paneles solares o molinos para energía eolica dada una superficie limitada y condiciones climatológicas End of explanation %matplotlib inline import matplotlib.pyplot as plt import numpy as np import time from PIL import Image ##Aquí abro la imagen y la convierto a gris im = Image.open("/Users/usuario/Documents/MaestriaCD/Propedeutico/PropedeuticoDataScience2017/Tarea/Simpsons.png") gris =im.convert("1") plt.figure(figsize=(9, 6)) ##convierto los datos de la imagen en una matriz con valores mat_val = np.array(gris) plt.title("Imagen Original") plt.imshow(mat_val, cmap='gray'); ###hago el SVD U, D, V = np.linalg.svd(mat_val) ##Reconstruyo y grafíco la imagen de manera completa img_rec = np.matrix(U) * np.diag(D) * np.matrix(V) plt.title("Imagen reconstruida de manera completa") plt.imshow(img_rec, cmap='gray'); ##Escogiendo K=100 para reconstruir la imagen con solo K elementos del SVD k=100 rec =np.matrix(U[:,:k]) *np.diag(D[:k])* np.matrix (V[:k,:]) plt.title("Imagen reconstruida con 100 valores") plt.imshow(rec, cmap='gray') ##Escogiendo K=150 para reconstruir la imagen con solo K elementos del SVD k=150 rec =np.matrix(U[:,:k]) *np.diag(D[:k])* np.matrix (V[:k,:]) plt.title("Imagen reconstruida con 150 valores") plt.imshow(rec, cmap='gray') ##Escogiendo K=200 para reconstruir la imagen con solo K elementos del SVD k=200 rec =np.matrix(U[:,:k]) *np.diag(D[:k])* np.matrix (V[:k,:]) plt.title("Imagen reconstruida con 200 valores") plt.imshow(rec, cmap='gray') Explanation: Se mostrará una aplicación de la SVD a la compresión de imágenes y reducción de ruido. End of explanation import numpy as np def pseudo (A): X =np.array(A) U, D, V = np.linalg.svd(X, full_matrices=False) V_t = np.transpose(V) U_t = np.transpose(U) D_diag=np.diag(D) rows, col =D_diag.shape D_inv = np.zeros((rows,col)) ##Aquí calculo la inversa de D, invirtiendo los valores y poniendo 0 en vez de 1/0 for i in range(0,max(rows,col)): if D_diag[i,i]!= 0 : D_inv[i,i]=1/D_diag[i,i] else : D_inv[i,i]= 0 ##aquí reconstruyo la pseudoinversa de A pseudo = np.dot(np.dot(V_t, D_inv), U_t) return pseudo Explanation: Pregunta: Qué tiene que ver este proyecto con compresión de imágenes? Se puede utilizar la descomposición SVD con una aproximacion de grado K optima para guardar la imagen en un tamaño menor al original, pudiendo despues reconstruirla a partir de los elementos de SVD. Pseudoinversa y sistemas de ecuaciones. Hacer una función que me de la pseudoinversa de una matriz A End of explanation def solve (A,y): A= np.array(A) Y=np.array(y) ## reviso el tamaño rows, col =A.shape vec_rows, vec_col = Y.shape if rows != vec_rows: raise Exception ("El tamaño de la matriz y el vector no coinciden") else: inv=pseudo(A) solve= np.dot(inv, Y) return (solve) A=[[1,1],[0,0]] b=[[1],[1]] solve(A,b) pseudo(A) Explanation: Hacer una función que me resuelva un sistema de ecuaciones Ax=b End of explanation import math A=[[1,1],[0,1*math.exp(-32)]] print(pseudo(A)) solve(A,b) Explanation: A es una matriz es que representa al sistema de ecuaciones $ X_{1}+X_{2}=b_{1}$ y $b_{2}=0$, su imagen es $b = \begin{bmatrix}b_{1}\0 \end{bmatrix}$. La solución no es única ya que hay una variable libre. 
Dada la forma de la pseudoinversa de A $\begin{bmatrix}.5 & 0\.5 & 0 \end{bmatrix}$ la función siempre regresará una respuesta $\begin{bmatrix}x_{1}=b_{1}/2\x_{2}=b_{2}/2 \end{bmatrix}$ aunque $b_{2} != 0$ End of explanation ##Programando un script para descargar el archivo .csv de Github y convertirlo en un data frame import numpy as np import pandas as pd import statsmodels.formula.api as sm import matplotlib.pyplot as plt #Script para descargar archivo y convertirlo en Data Frame con Pandas #url="/Users/usuario/Documents/MaestriaCD/Propedeutico/PropedeuticoDataScience2017/study_vs_sat.csv" url="https://raw.githubusercontent.com/mauriciogtec/PropedeuticoDataScience2017/master/Tarea/study_vs_sat.csv" data = pd.read_csv(url) data=pd.read_csv(url) data= pd.DataFrame(data) Explanation: En este caso la matriz tiene una solución única ya que $b_{2}=exp-32$ y las x´s son diferentes a las del caso anterior. OLS End of explanation ##Deefine una función que me de una predicción para cada valor de sat_score ## def prediction (S,alpha,beta): prediction=np.zeros(len(data)) for i in range(len(data)): score=S[i] prediction[i]=alpha+beta*score return prediction ##puedo usar datos que se me ocurran alpha=-353.164879499 beta= 25.3264677779 S=data["study_hours"] ##Entonces usando esta información puedo hacer la predicción usando mi función score_pred=prediction(S,alpha,beta) print(score_pred) ##Definiendo el numpy array con 1 en el primer vector ##y sat_score en el segundo Origen= data['Origen'] = np.ones(( len(data), )) X=data[["Origen","study_hours"]] print(X) ###Calculando X^+ * study_hours para obtener la alpha y beta X_inv=pseudo(X) alpha_aprox,beta_aprox=np.dot(X_inv,data["sat_score"]) print(alpha_aprox, beta_aprox) ##ahora, haciendo la formula de estimadores de OLS de veras para calcular alpha t beta X_t=np.transpose(X) Sxx=np.dot(X_t,X) Sxy=np.dot(X_t,data["sat_score"]) Sxx_inv= pseudo(Sxx) alpha,beta= np.dot(Sxx_inv,Sxy) print(alpha,beta) ##visualizando los datos correctos vs las aproximaciones alpha=353.164879499 beta= 25.3264677779 S=data["study_hours"] prediccion=prediction(S, alpha,beta) colors = ['red', 'blue'] pred=plt.plot(prediccion, 'bo', markersize=10) # blue circle with size 10 val=plt.plot(data["sat_score"], 'ro', ms=10,) plt.legend((pred, val), ('Valores de la predicción',"Valor Real"), scatterpoints=1, loc='lower left', ncol=3, fontsize=8) plt.show() Explanation: Al hacer una aproximación de la forma $sat_score= \alpha + \betastudy_hours + \epsilon$ podemos plantear la minimización de errores al cuadrado de tal manera que $$min \epsilon^{2} = min_{\alpha,\hat{\beta}} satscore - \alpha-\hat{\beta}study hours$$ $$min_{\alpha,\hat{\beta}} y - X\hat{\beta}$$ En este caso $\alpha$ y $\beta$ resultantes de las condiciones de primer orden del problema de minimización son los gradientes que estamos buscando End of explanation
1,697
Given the following text description, write Python code to implement the functionality described below step by step Description: Sound as 1D-Signal Step1: Sound as 2D-Signal Step2: Prepare a data Step3: Nearest Neighbors genre classification Step4: Convolution Nural Nets http Step5: Find Simular Tracks <img src="./img/cnn_gr.png" width="500"> Step6: Maps of tracks by svd and tsne Help
Python Code: plt.figure(figsize=(20,4)) pylab.plot(np.arange(len(y)) * 1.0 /sr, y, 'k') pylab.xlim([0, 10]) pylab.show() Explanation: Sound as 1D-Signal End of explanation S = librosa.feature.melspectrogram(y, sr=sr, n_mels=128) log_S = librosa.logamplitude(S, ref_power=np.max) plt.figure(figsize=(20,4)) librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel', cmap='hot') plt.title('mel power spectrogram') plt.colorbar(format='%+02.0f dB') plt.tight_layout() def get_spectgorgamm(fname): y, sr = librosa.load(fname) S = librosa.feature.melspectrogram(y, sr=sr, n_mels=128) log_S = librosa.logamplitude(S, ref_power=np.max) return log_S[:, :1200] def plot_spectrogramm(log_S): plt.figure(figsize=(20,4)) librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel', cmap='hot') plt.title('mel power spectrogram') plt.colorbar(format='%+02.0f dB') plt.tight_layout() Explanation: Sound as 2D-Signal End of explanation geners = ['blues', 'country', 'hiphop', 'metal', 'reggae', 'classical', 'disco', 'jazz', 'pop', 'rock'] id2gener = dict() X_names, y = [], [] for gener_id, gener in enumerate(geners): id2gener[gener_id] = gener for track in os.listdir('./genres/' + gener): if '.mp3' in track or '.au' in track and '_' not in track: trackfile = os.path.join('./genres/', gener, track) X_names.append(trackfile) y.append(gener_id) from multiprocessing import Pool ncpu = 4 X = Pool(ncpu).map(get_spectgorgamm, X_names) Explanation: Prepare a data End of explanation perm = np.random.permutation(len(y)) X, X_names, y = np.array(X)[perm].astype('float32'), np.array(X_names)[perm], np.array(y)[perm] Xreshape = X.reshape(X.shape[0], X.shape[1], X.shape[2]) X_train, X_valid = Xreshape[:800], Xreshape[800:] y_train, y_valid = y[:800], y[800:] from sklearn.metrics import accuracy_score from sklearn.neighbors import KNeighborsClassifier clf = KNeighborsClassifier(n_jobs=ncpu) clf = <train clf> y_val_pred = <make prediction on valid set> print accuracy_score(y_valid, y_val_pred) Explanation: Nearest Neighbors genre classification End of explanation import theano import lasagne import theano.tensor as T perm = np.random.permutation(len(y)) X, y = np.array(X)[perm].astype('float32'), np.array(y)[perm] Xreshape = X.reshape(X.shape[0], X.shape[1], X.shape[2]) X_train, X_valid = Xreshape[:800], Xreshape[800:] y_train, y_valid = y[:800], y[800:] input_X, target_y = T.tensor3("X", dtype='float64'), T.vector("y", dtype='int32') nn = lasagne.layers.InputLayer(shape=(None, X.shape[1], X.shape[2]), input_var=input_X) nn = <Build convnet using Conv1DLayer MaxPool1DLayer> nn = <Add several DenseLayers and DropoutLayer> nn = lasagne.layers.DenseLayer(nn, 10, nonlinearity=lasagne.nonlinearities.softmax) y_predicted = lasagne.layers.get_output(nn) all_weights = lasagne.layers.get_all_params(nn) loss = lasagne.objectives.categorical_crossentropy(y_predicted, target_y).mean() accuracy = lasagne.objectives.categorical_accuracy(y_predicted, target_y).mean() updates_sgd = <Your favorite optimizer> train_fun = theano.function([input_X, target_y], [loss, accuracy], allow_input_downcast=True, updates=updates_sgd) test_fun = theano.function([input_X, target_y], [loss, accuracy], allow_input_downcast=True) %%time conv_nn = train_net(nn, train_fun, test_fun, X_train, y_train, X_valid, y_valid, num_epochs=10, batch_size=50) plt.figure(figsize=(5, 5), dpi=500) W = lasagne.layers.get_all_params(nn)[0].get_value() W[::2, :, :] = 0.2 W = np.hstack(W) pylab.imshow(W, cmap='hot', interpolation="nearest") pylab.axis('off') pylab.show() 
Explanation: Convolution Nural Nets http://benanne.github.io/2014/08/05/spotify-cnns.html End of explanation from sklearn.neighbors import NearestNeighbors represent = <Get features from last but one layer> represent_fun = theano.function([input_X], [represent], allow_input_downcast=True) f = lambda x: np.array(represent_fun([x])[0]) track_vectors = map(f, X_train) + map(f, X_valid) track_vectors = np.concatenate(track_vectors, axis=0) nn_pred = NearestNeighbors(metric='cosine', algorithm='brute') nn_pred = nn_pred.fit(track_vectors) X_names[0] ans = list(X_names[nn_pred.kneighbors(track_vectors[0])[1][0]]) ans sound_file = ans[0] y, sr = librosa.load(sound_file) librosa.output.write_wav('./genres/tmp.wav', y, sr, norm=True) Audio(url='./genres/tmp.wav') sound_file = ans[1] y, sr = librosa.load(sound_file) librosa.output.write_wav('./genres/tmp.wav', y, sr, norm=True) Audio(url='./genres/tmp.wav') Explanation: Find Simular Tracks <img src="./img/cnn_gr.png" width="500"> End of explanation from sklearn.manifold import TSNE represent = lasagne.layers.get_output(nn.input_layer) represent_fun = theano.function([input_X], [represent], allow_input_downcast=True) f = lambda x: np.array(represent_fun([x])[0]) track_vectors = map(f, X_train) + map(f, X_valid) track_vectors = np.concatenate(track_vectors, axis=0) track_labels = np.array(list(y_train) + list(y_valid)) X_tsne = <Make TSNE Features> plt.figure(figsize=(10,10), dpi=500) colors = cm.hot(np.linspace(0, 1, len(id2gener))) for idx, gener in id2gener.items(): idx_ = np.where(track_labels == idx) pylab.scatter(X_tsne[:, 0][idx_], X_tsne[:, 1][idx_], c=colors[idx], cmap=cm.hot, label=gener,s=50) pylab.legend(loc=0, ncol=5) Explanation: Maps of tracks by svd and tsne Help: https://lts2.epfl.ch/blog/perekres/category/visualizing-hidden-structures-in-datasets-using-deep-learning/ End of explanation
1,698
Given the following text description, write Python code to implement the functionality described below step by step Description: I recommend installing Anaconda for ease Step1: All credit for the data, and our many thanks, go to the principal investigators who collected this data and made it available Step2: What sort of study was this? A. Observational or experimental Observational and experimental data to know if your data is obersevational or experimental, you ask if the explanatory variable was manipulated or not In a true experiment, 3 conditions Step3: number of observations to variables largely exceeds heuristic rule of 5 to 1 see Victoria Stodden (2006) for a discussion of why this is important https Step4: let's check for missing/outlier values Why bother? Nearly all statistical methods assume complete information. In the presence of missing data Step5: this is not approach we would use in a real world study, but for exploratory purposes in this tutorial we can retain 85% of observations that were properly recorded regarding all variables I will add here an example with conditional mean imputation for one of the variables Statsmodels very limited in options = very manual work. From version 0.8.0 onwards MICE function. now that we cleaned the data we can move on to univariate analysis - one variable at a time data reporting tells you what is happening, but data analysis tells you why it is happening Descriptive statistics a parameter is calculated from the population, while a statistic from a sample In studying samples, we assume the Central Limit Theorem holds Step6: the results definitely suggest a discussion about class imbalance is in order here. Step7: rules for visualizing two variables (recap). Their order does not matter here, hence Q-C and C-Q use the same visualization rule C-Q Step8: Now we can move to inferential statistics We call them inferential statistics because we try to infer the population's paramters from our sample The process is always the same and involves hypothesis testing Step9: What interests us in an ANOVA table are Step10: We will now test another hypothesis Step11: the relationship between chest pain and the diagnosis holds for both levels of the sex variables, hence it is not a moderator. we would test for moderator variables in the case of a quantitative response variable the same way Step12: Linear regression Step13: We now add explanatory variables to our model one at a time. In doing this, we keep an eye on what impact adding an explanatory variable will have on our model. Sometimes for example we add a variable, and another variable that was statistically significant (had a p value below 0.05), suddenly becomes insignificant. This means the new variable is a confounder, a moderator, variable in the relationship between that other variable and the response variable. Step14: We identified some explanatory variables that are associated with age, but our model overall barely explains 27% of the variation in the response variable let's run some diagnostics Step15: Machine learning for data exploration For me machine learning is simply the branch of statistics that has traditionally focused on working with large, heterogenous datasets, characterized by numerous variables that interact in nonlinear ways. It is a collection of tools, just like statistics in general. 
An amazing article on traditional parametric statistics vs machine learning is Step16: Random forests Random forests, an ensemble learning method, are a more sophisticated method of using trees Pioneered by Tin Kam Ho, Leo Breiman (who also created 'bagging') and Adele Cutler For a full discussion, please see https Step17: random forests typically outperform decision trees, in particular when our decision tree model exhibits instability the lower accuracy here is perhaps a result of our implementation of the random forest - we would need to tune its parameters in order to optimize it for the problem at hand Support vector machines SVMs are a set of supervised learning methods, used for classification and regression They handle nonlinearity, through kernalization, much better than a standard logistic classifier A couple of very good explanations
Python Code: #import the packages we will use import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt #use simplest tool available import statsmodels.formula.api as smf import statsmodels.stats.multicomp as multi import statsmodels.api as sm import scipy.stats from sklearn.cross_validation import train_test_split from sklearn.tree import DecisionTreeClassifier from sklearn.metrics import classification_report import sklearn.metrics from sklearn import datasets from sklearn.ensemble import ExtraTreesClassifier from sklearn.ensemble import RandomForestClassifier #from sklearn import preprocessing #from sklearn.linear_model import LassoLarsCV #from sklearn.cluster import KMeans %matplotlib inline sns.set(style="whitegrid", color_codes=True) #Whenever you are not sure of a function or parameter used, please verify documentation for that function at: #http://pandas.pydata.org/pandas-docs/stable/ Explanation: I recommend installing Anaconda for ease: https://www.continuum.io/downloads certain packages i.e. seaborn, do not come with the standard installation. A simple search for how to 'install seaborn with anaconda' will easily find the installation instructions. i.e. in this case: http://seaborn.pydata.org/installing.html in terminal (mac): conda install seaborn Alexandru Agachi m.a.agachi@gmail.com Background: pursued degrees and graduate and postgraduate diplomas in international relations, energy science, surgical robotics, neuro anatomy and imagery, and biomedical innovation Working with a data driven firm in London that operates on financial markets, and teaching a big data module in the biomedical innovation degree at Pierre et Marie Curie University in Paris. Why Why statistics Why focus on data Better to be in an expanding world and not quite in exactly the right field, than to be in a contracting world where people's worst behavior comes out. Eric Weinstein, Fellow University of Oxford, MD Thiel Capital. Why healthcare Ripe for data analysis 30% of world's data The biomedical sciences have been the pillar of the health care system for a long time now. The new system will have two equal pillars — the biomedical sciences and the data sciences. Dr Scott Zeger, Johns Hopkins University. Roadmap - Blueprint for statistical data exploration End of explanation #let's load the first dataset in pandas with the read_csv function. This will create pandas dataframe objects per below. cleveland = pd.read_csv('processed.cleveland.data.txt', header=None) cleveland.head(10) hungary = pd.read_csv('processed.hungarian.data.txt', header=None) hungary.head(10) switzerland = pd.read_csv('processed.switzerland.data.txt', header=None) switzerland.head(10) va = pd.read_csv('processed.va.data.txt', header=None) va.head(10) df = pd.concat([cleveland, hungary, va, switzerland]) df #we rename all columns to make them more legible df.columns = ['age', 'sex', 'chest_pain', 'rest_bp', 'cholesterol', 'fasting_bs', 'rest_ecg', 'max_heart_rate', 'exercise_angina', 'st_depression', 'slope', 'fluoroscopy', 'defect', 'diagnosis'] #let's look at our dataframe. the head(10) option tells pandas to return only the first ten rows of our dataframe. df.head(10) Explanation: All credit for the data, and our many thanks, go to the principal investigators who collected this data and made it available: Hungarian Institute of Cardiology. Budapest: Andras Janosi, M.D. University Hospital, Zurich, Switzerland: William Steinbrunn, M.D. University Hospital, Basel, Switzerland: Matthias Pfisterer, M.D. 
V.A. Medical Center, Long Beach and Cleveland Clinic Foundation: Robert Detrano, M.D., Ph.D. Online repositories for data used and for this notebook: http://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/ https://github.com/AlexandruAgachi/introductory_statistics_tutorial/blob/master/Introductory%20statistical%20data%20analysis%20with%20pandas%20PyData%20Berlin%202017.ipynb here specify your own directory for where you stored the datasets on your computer: For me: cd Downloads/Heart disease dataset note: when downloading the datasets, some people had to rename them manually with a 'txt' ending the four data files that interest us should be named: processed.cleveland.data.txt processed.hungarian.data.txt processed.switzerland.data.txt processed.va.data.txt Data management is an integral part of your research process Choices you make here will influence your entire study End of explanation print(len(df)) print(len(df.columns)) Explanation: What sort of study was this? A. Observational or experimental Observational and experimental data to know if your data is obersevational or experimental, you ask if the explanatory variable was manipulated or not In a true experiment, 3 conditions: 1. only one variable is manipulated 2. we have a control group 3. random* assignment (analysis stage in randomized trials: check clss imbalances, and if any found consider incl. in model as explanatory variables, for statistical control). In theory, in this case one could determine causality. Quasi experiment: 1. only one variable is manipulated 2. control group 3. no random assignment; groups pre selected. i.e. drug users study. To improve a quasi experimental design: add confounding variables; have a control group; use a pre-test/post-test design confounder=control variable=covariate=third variable=lurking variable in an observational study the regression line only describes data you see. it cannot be used to predict result of intervention *randomization works best as your sample size approaches infinity. for small sizes, imbalances in the groups can occur. B. Longitudinal or cross sectional Then data analysis? Wrong! Background research. Always starts with background research. Domain knowledge. We quickly find three studies using this dataset: Detrano et al.: http://www.ajconline.org/article/0002-9149(89)90524-9/pdf Sundaram and Kakade: http://www2.rmcil.edu/dataanalytics/v2015/papers/Clinical_Decision_Support_For_Heart_Disease.pdf Soni et al.: http://www.enggjournals.com/ijcse/doc/IJCSE11-03-06-120.pdf Value the time of domain experts. Dataset we'll focus on today. look at txt file AND at explanation First stage in a study is always exploratory data analysis We typically receive the data without context, in a raw file that we cannot easily interpret The five steps of exploratory data analysis 1. ? 2. organizing and summarizing the data 3. Looking for important features and patterns 4. Looking for exceptions 5. Interpreting these findings in the context of the research question at hand This can be summarized in a notebook or even an initial data report for everyone involved in the data project End of explanation #mark all variables as numeric data, and signify, for the relevant ones, that they are categorical rather than #quantitative variables #errors='coerce' tells pandas to return invalid values as NaN rather than as the input values themselves #crucial to do this step otherwise subsequent analyses will not work properly. i.e. 
pandas would interpret missing #values as strings df['age'] = pd.to_numeric(df['age'], errors='coerce') df['sex'] = pd.to_numeric(df['sex'], errors='coerce').astype('category') df['chest_pain'] = pd.to_numeric(df['chest_pain'], errors='coerce').astype('category') df['rest_bp'] = pd.to_numeric(df['rest_bp'], errors='coerce') df['cholesterol'] = pd.to_numeric(df['cholesterol'], errors='coerce') df['fasting_bs'] = pd.to_numeric(df['fasting_bs'], errors='coerce').astype('category') df['rest_ecg'] = pd.to_numeric(df['rest_ecg'], errors='coerce').astype('category') df['max_heart_rate'] = pd.to_numeric(df['max_heart_rate'], errors='coerce') df['exercise_angina'] = pd.to_numeric(df['exercise_angina'], errors='coerce').astype('category') df['st_depression'] = pd.to_numeric(df['st_depression'], errors='coerce') df['slope'] = pd.to_numeric(df['slope'], errors='coerce').astype('category') df['fluoroscopy'] = pd.to_numeric(df['fluoroscopy'], errors='coerce').astype('category') df['defect'] = pd.to_numeric(df['defect'], errors='coerce').astype('category') df['diagnosis'] = pd.to_numeric(df['diagnosis'], errors='coerce').astype('category') Explanation: number of observations to variables largely exceeds heuristic rule of 5 to 1 see Victoria Stodden (2006) for a discussion of why this is important https://web.stanford.edu/~vcs/thesis.pdf convert all variables to numeric ones End of explanation df['age'].isnull().value_counts() df['sex'].isnull().value_counts() df['sex'].value_counts() #the normalize = True parameter returns for us the % of total rather than the absolute number. df['sex'].value_counts(normalize=True) df['chest_pain'].isnull().value_counts() df['chest_pain'].value_counts() df['rest_bp'].isnull().value_counts() #we divide by len(df), to manually calculate the % of total observations that this represents each time. df['rest_bp'].isnull().value_counts()/len(df) df['cholesterol'].isnull().value_counts() df['cholesterol'].isnull().value_counts()/len(df) df['fasting_bs'].isnull().value_counts() df['fasting_bs'].isnull().value_counts()/len(df) #standard value_counts() function drops missing values. To avoid this you can add dropna=False argument to function. df['fasting_bs'].value_counts() df['rest_ecg'].isnull().value_counts()/len(df) df['rest_ecg'].value_counts() df['max_heart_rate'].isnull().value_counts() df['max_heart_rate'].isnull().value_counts()/len(df) df['exercise_angina'].isnull().value_counts() df['exercise_angina'].isnull().value_counts()/len(df) df['exercise_angina'].value_counts() df['st_depression'].isnull().value_counts() df['st_depression'].isnull().value_counts()/len(df) df['slope'].isnull().value_counts() df['slope'].isnull().value_counts()/len(df) df['slope'].value_counts() df['fluoroscopy'].isnull().value_counts() df['fluoroscopy'].isnull().value_counts()/len(df) df['fluoroscopy'].value_counts() df['defect'].isnull().value_counts() df['defect'].isnull().value_counts()/len(df) df['defect'].value_counts() df['diagnosis'].isnull().value_counts() df['diagnosis'].value_counts() len(df.columns) #the variables slope, defect and fluoroscopy have 33-47% of missing values #context. Why? #can we correct for this without introducing other biases into our dataset? #Here we will decide to eliminate these variables. #Data analysts are mere mortals too. Cannot fix everything. 
df_red = df[['age', 'sex', 'chest_pain', 'rest_bp', 'cholesterol', 'fasting_bs', 'rest_ecg', 'max_heart_rate', 'exercise_angina', 'st_depression', 'diagnosis']] len(df_red) #rest_bp, cholesterol, fasting_bs, rest_ecg, max_heart_rate, exercise_angina, st_depression #rest_ecg is a categorical variable with only 2% of missing values. #we make the choice to impute missing values with a straightforward method: the mode #we are conscious this may introduce biases but due to low number of missing values, in what is #a small range categorical variable (no extreme outliers possible) we feel comfortable doing this df_red['rest_ecg'].fillna(df_red['rest_ecg'].mode().iloc[0], inplace=True) df_red['rest_ecg'].isnull().value_counts() #let's see if there is any overlap between missing variables relative to observations, and what would happen if we #were to limit our dataset to observations with non missing values df_clean = df_red[df_red['rest_bp'].notnull() & df_red['cholesterol'].notnull() & df_red['fasting_bs'].notnull() & df['max_heart_rate'].notnull() & df['exercise_angina'].notnull() & df['st_depression'].notnull()] len(df_clean) df_clean.isnull().any() Explanation: let's check for missing/outlier values Why bother? Nearly all statistical methods assume complete information. In the presence of missing data: 1. parameter estimates may be biased* 2. statistical power weakens 3. precision of confidence intervals is diminished Three types of missing variables: 1. Missing completely at random (MCAR): p(missing data on Y) independent of p(Y value) or p(other variables values) but: p(missing data on Y) may be linked to p(missing data on other variables in dataset)) 2. Missing at random (MAR): p(missing data on Y) independent of value of Y after controlling for other variables 3. Not missing at random (NMAR) If MAR, missing data is ignorable and there is no need to model missing data mechanism if NMAR, missing data mechanism is not ignorable and one must develop v good understanding of missing data process to model it Standard options: 1. listwise deletion. works well if MCAR, which is rarely true, and can delete significant part of our sample 2. imputation. 2.a. marginal mean imputation: leads to biased estimates of variance and covariance and should be avoided 2.b. conditional mean imputation: we regress missing values on values of all other variables present in dataset. MCAR assumption. Generalized least squares usually shows good results. Overall issue with imputation methods: underestimate standard errors and overstimate test statistics. Advanced options: 1. Multiple imputation 2. Maximum likelihood 3. Bayesian simulation 4. Hot deck (selects at random, with replacement, a value from observations with similar values for other variables) For a great discussion of missing data: http://www.bu.edu/sph/files/2014/05/Marina-tech-report.pdf Statsmodels and Scikit learn imputation functions: http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Imputer.html http://www.statsmodels.org/dev/imputation.html At this point in time, pandas, scikit learn and statsmodels offer relatively poor methods of data imputation. You therefore will need to manually create your processes. But this will be worth your time! "The only really good solution to the missing data problem is not to have any. So in the design and execution of research projects, it is essential to put great effort into minimizing the occurrence of missing data. Statistical adjustments can never make up for sloppy research." 
Paul Allison, 2001. *Bias affects all measurements the same way, while chance errors vary from measurement to measurement therefore bias cannot be noticed by looking just at measurements, we need an external/theoretical benchmark as well. Bias is often discussed in conjunction with variance. Now we go through each variable in our dataset one by one. We check for missing values if any, how many, and what % of total values they represent for each variable. For this we use: isnull() function checks for missing values value_counts() counts them we also look at what values our variables take, by applying the value_counts() function directly, without isnull() End of explanation #Gender #given that gender is a categorical variable, we use a countplot to visualize it #we always plot the variable of interest on the x axis, and the count or frequency on the y axis sns.countplot(x='sex', data = df_clean) plt.suptitle('Frequency of observations by gender') Explanation: this is not approach we would use in a real world study, but for exploratory purposes in this tutorial we can retain 85% of observations that were properly recorded regarding all variables I will add here an example with conditional mean imputation for one of the variables Statsmodels very limited in options = very manual work. From version 0.8.0 onwards MICE function. now that we cleaned the data we can move on to univariate analysis - one variable at a time data reporting tells you what is happening, but data analysis tells you why it is happening Descriptive statistics a parameter is calculated from the population, while a statistic from a sample In studying samples, we assume the Central Limit Theorem holds: if you draw enough samples, from a population, and each sample is large enough, the distribution of the statistics of the samples will be normally distributed. normal curve discovered around 1720 by Abraham de Moivre. Around 1870, Adolph Quetelet thought of using it as the curve of an ideal histogram center-spread-shape 3 measures of center: mean, median and mode 1 measure of spread: standard deviation 2 attributes of shape: 1. symmetry or skewness. skewed right (i.e. salaries), skewed left (i.e. age of natural deaths) 2. modality or peakness: unimodal, bimodal, uniform... now we will aim to get an idea of the shape, center, and spread of these variables we will analyze the shape visually by checking for modality and skewness we will check for measures of center such as mean, median and mode we will check the spread through the standard deviation for quantitative variables standard deviation: how far away observations are from their average in a normal distribution roughly 68% are within 1 SD and 95% within 2 SDs. Quantitative variables shape, center and spread histogram Categorical variables mode bar chart or frequency distribution rules for visualizing data: for visualizing one variable: if it is categorical we use a bar chart i.e. sns's countplot function if it is quantitative, we can combine a kernel density estimate and a histogram with sns's distplot function for visualizing two variables: C-Q: bivariate bar graph with sns factorplot (bin/collapse explanatory variable), categories on x axis, and mean of response variable on y axis Q-Q: scatterplot with sns regplot C-C: you can plot them one a time. 
problem with a bivariate graph is that mean has no meaning in context of a categorical variable for further reading: http://sphweb.bumc.bu.edu/otlt/mph-modules/bs/datapresentation/DataPresentation7.html End of explanation #Diagnosis #we check the counts per each value of the variable. sort=False tells pandas not to sort the results by values. #normalize = True tells it to return the relative frequencies rather than the absolute counts #if we had not cleaned the data, we could add parameter dropna=False so that value_counts does not drop null values df_clean['diagnosis'].value_counts(sort=False, normalize=True) sns.countplot(x='diagnosis', data=df_clean) plt.suptitle('Frequency distribution of diagnosis state') #Let's look at age now #the describe request gives us the count, mean, std, min, max, as well as the quartiles for the #respective value distribution df_clean['age'].describe() #sns distplot function combines matplotlib hist() function with sns kdeplot() function. sns.distplot(df_clean['age']) plt.suptitle('Distribution of age') df_clean['max_heart_rate'].describe() sns.distplot(df_clean['max_heart_rate']) plt.suptitle('Distribution of maximal heart rate') sns.swarmplot('diagnosis', data=df_clean) Explanation: the results definitely suggest a discussion about class imbalance is in order here. End of explanation #kind = bar asks for a bar graph and ci=None suppresses error bars sns.factorplot(x='sex', y='age', kind='bar', data=df_clean, ci=None) plt.suptitle('Gender vs age') #categorical explanatory variable 'sex' and quantitative response variable 'rest_bp' sns.factorplot(x='sex', y='rest_bp', data=df_clean, kind='bar') df_clean['chest_pain'].dtype #We use a regplot to plot two quantitative variables, age and cholesterol, while also having a regression line suggesting #any association present #we always plot the explanatory variable on the x axis, and the response variable on the y axis sns.regplot(x='age', y='cholesterol', data=df_clean) #We see that several observations have a cholesterol of "0". This will need to be investigated subsequently and treated #as missing values #how can we gain a better idea of how two categorical variables interact? df_clean.groupby('sex')['diagnosis'].value_counts()/len(df) #common statistical boxplot visualization sns.boxplot(x='cholesterol', y = 'sex', data=df_clean) plt.suptitle('Cholesterol levels by gender') #describing the dataset/the variables of the dataset, after we group the values in the dataset by diagnostic category #the describe function gives us the key statistics for each quantitative variable in the dataset df_clean.groupby('diagnosis').describe() #before starting to manipulate the dataset itself, we make a copy, and will work on the copy rather than the original df_clean_copy = df_clean.copy() Explanation: rules for visualizing two variables (recap). Their order does not matter here, hence Q-C and C-Q use the same visualization rule C-Q: bivariate bar graph with sns factorplot, categories on x axis, and mean of response variable on y axis Q-Q: scatterplot with sns regplot C-C: you can plot them one a time. problem with a bivariate graph is that mean has no meaning in context of a categorical variable End of explanation #Going the wrong way? #In regression analysis you can change your predictor and response variables. This is because they may be correlated, #in which case X is correlated to Y and by definition Y is correlated with X. 
There is no directionality implied, which #is also why you cannot talk of causation, but only of correlation. test1 = smf.ols(formula = 'age ~ C(diagnosis)', data = df_clean_copy).fit() print(test1.summary()) #*Whenever we have an explanatory categorical variable with more than two levels, we need to explicitly state this for #statsmodels. Here we state it by adding 'C'and the variable name in parentheses. By definition, statsmodels then #converts the first level of the variable into the reference group, therefore showing only n-1 levels in the results. #each level is in brief a comparison to the reference group. Here the reference group becomes a diagnosis of 0. Explanation: Now we can move to inferential statistics We call them inferential statistics because we try to infer the population's paramters from our sample The process is always the same and involves hypothesis testing: 1. define null hypothesis and alternate hypothesis 2. analyze evidence 3. interpret results Typical H0: there is no relationship between the explanatory and response variable Typical H1: there is a statistically significant relationship Bivariate statistical tools: Three main tools: ANOVA; chi-square; correlation coefficient Type 1 vs Type 2 errors Type 1 error: the incorrect rejection of a true null hypothesis Type 2 error: retaining a false null hypothesis we will test the null hypothesis that age and diagnosis are not related. the type of variables we have (explanatory/response and categorical/quantitative for each) determines the type of statistical tools we will use: Explanatory categorical and response quantitative: ANOVA Explanatory categorical and response categorical: Chi Square test Explanatory quantitative and response categorical: classify/bin explanatory variable and use chi square test Explanatory quantitative and response quantitative: pearson correlation a result is statistically significant if it is unlikely to be the result of chance before performing these analyses, one needs to use the .dropna() function to include only valid data End of explanation #now we examine the means and standard deviations grouped1_mean = df_clean_copy.groupby('diagnosis').mean()['age'] print(grouped1_mean) grouped1_std = df_clean_copy.groupby('diagnosis').std()['age'] print(grouped1_std) #given that we have an explanatory categorical variable with multiple levels, we use the #tuckey hsd test #other tests: #Holm T #Least Significant Difference tuckey1 = multi.MultiComparison(df_clean_copy['age'], df_clean_copy['diagnosis']) res1 = tuckey1.tukeyhsd() print(res1.summary()) Explanation: What interests us in an ANOVA table are: R squared The F statistic The p value (here called 'Prob (F-statistic) The coefficients of each explanatory variable and the associated p value for it (here 'coef' and 'P>|t| columns) The 95% confidence interval for our coefficients. Here called '[95.0% Conf. Int.] The 'R Squared' statistic. This is a measure of how much of the variability in the response variable, age here, is explained by our model. We see this here it is 0.134 only. So diagnostic group only helps us explain 13.4% of the variability in age. This can indicate that either we omit important explanatory variables that we can add to the model, or that we miss the structure of the association. 
The F statistic: An F test is any statistical test in which the test statistic has an F distribution under the null hypothesis The F statistic = variation among sample means/variation within sample groups ANOVA F Test = are the differences among the sample means due to true differences among the population means, or merely due to sampling variability? p value in an ANOVA table = probability of getting an F value as large or larger if H0 is true probability of finding this value if there is no difference between sample means. I would like to thank M.N. for sending me the below blog, which contains a couple of great posts on common statistical concepts: http://blog.minitab.com/blog/adventures-in-statistics-2/how-to-correctly-interpret-p-values In an ANOVA table, we can decide if the relationship is statistically significant by checking the value of the F statistic against the relevant statistical table, F=28.42 above, or by looking at the p value. The latter is easiest. A long tradition in statistics is to consider a p value (also called alpha) below 0.05 as indicative of a significant relationship. When conducting such tests, you need to decide what p value/alpha value you feel comfortable with, and from then onwards you create a binary framework: either a statistic has an associated p value below the value you decided on at the beginning, or not. In this sense, a p value of 0.0001 is not indicative of a 5 times stronger relationship than a p value of 0.0005. In our case here a p value of 5.56e-22 means the relationship is statistically significant (5.56e-22 < 0.05). The coefficients for each variable. Here for example our model would be: age = 50.3025 + 2.6438diagnosis1 + 8.1279 diagnosis2 + 8.8770 * diagnosis3 + 8.9248 * diagnosis4 We can also say that being having a diagnosis of 1 increases someone's age by 2.6438 assuming we hold all other explanatory variables in our model constant at 0. a negative but significant coefficient indicates that our explanatory variable and the respons variable are negatively associated: as one increases, the other decreases. the higher the coefficient, the more impact it will have on the value of our response variable based on our model. However, these coefficients result from our sample, meaning that the population parameters may differ from these. The confidence intervals give us a range in which these parameters can be for the population, with 95% confidence. For example we can be 95% confident that the population parameter for diagnosis1 is between 1.134 and 4.153. Whenever the explanatory variable has more than 2 levels, we need to also perform post hoc statistical tests to better understand the relationship between the explanatory variable and the response variable we know the groups tested are different overall, but not exactly where/how they are different for explanatory variables with multiples levels, F test and p value do not tell us why the group means are not equal, or how. there are many ways in which this can be the case. How are the response and explanatory variables associated per level of the explanatory variable? 
post hoc tests aim to protect against type 1 error when explanatory variable is multilevel End of explanation diagnosis_dic = {0:0, 1:1, 2:1, 3:1, 4:1} df_clean_copy['diagnosis_binary'] = df_clean_copy['diagnosis'].map(diagnosis_dic) df_clean_copy['diagnosis_binary'].value_counts() #contingency table of observed counts #the crosstab function allows us to cross one variable with another #when creating contingency tables, we put the response variable first (therefore vertical in table), #and the explanatory variable second, therefore horizontal at the top of the table. ct1 = pd.crosstab(df_clean_copy['diagnosis_binary'], df_clean_copy['chest_pain']) print(ct1) #column percentages colsum = ct1.sum(axis=0) colpct = ct1/colsum print(colpct) #chi square test #Expected counts: p assuming events are independent. p(1) * p(2) | column total*row total/table total #Chi square statistic summarizes this. difference between our obersavtion and what we would expect if H0 is true #We rely on the p value, as different distributions define whether the chi square itself is large or not print('chi-square value, p value, expected counts') cs1 = scipy.stats.chi2_contingency(ct1) print(cs1) #Explanatory variable with multiple levels! #we would have to do a pairwise comparison between every two groups of the explanatory #variable, vs the response variable #This would be a Bonferroni adjustment - we adjust p value we use by number of pairwise comparisons, and test these. df_clean_copy.columns ct2 = pd.crosstab(df_clean_copy['diagnosis_binary'], df_clean_copy['sex']) print(ct2) #column percentages colsum2 = ct2.sum(axis=0) colpct2 = ct2/colsum2 print(colpct2) #chi square test print('chi-square value, p value, expected counts') cs2 = scipy.stats.chi2_contingency(ct2) print(cs2) #Moderators #a moderator is a third variable that affects the direction and/or strength between your explanatory and response variables #the question is, is our response variable associated with our explanatory variable, for each level of our third variable? #let's see if sex is a moderator in the statistically significant relationship between chest pain and diagnosis df_clean_copy_men = df_clean_copy[df_clean_copy['sex'] == 0] len(df_clean_copy_men) df_clean_copy_women = df_clean_copy[df_clean_copy['sex'] == 1] len(df_clean_copy_women) #contingency table of observed counts #when creating contingency tables, we put the response variable first (therefore vertical in table), #and the explanatory variable second, therefore horizontal at the top of the table. ct3 = pd.crosstab(df_clean_copy_men['diagnosis_binary'], df_clean_copy_men['chest_pain']) print(ct3) #column percentages colsum = ct3.sum(axis=0) colpct = ct3/colsum print(colpct) #chi square test print('chi-square value, p value, expected counts') cs3 = scipy.stats.chi2_contingency(ct3) print(cs3) #contingency table of observed counts #when creating contingency tables, we put the response variable first (therefore vertical in table), #and the explanatory variable second, therefore horizontal at the top of the table. 
ct4 = pd.crosstab(df_clean_copy_women['diagnosis_binary'], df_clean_copy_women['chest_pain']) print(ct4) #column percentages colsum = ct4.sum(axis=0) colpct = ct4/colsum print(colpct) #chi square test print('chi-square value, p value, expected counts') cs4 = scipy.stats.chi2_contingency(ct4) print(cs4) df_clean_copy_women.groupby('chest_pain')['diagnosis'].value_counts() Explanation: We will now test another hypothesis: Hypothesis(0)(a): the presence of chest pain and the diagnosis (0 or 1) are independent Alternative Hypothesis 1: presence of chest pain and diagnosis are not independent Feature engineering Paradox: you can get better results with great feature engineering and a poor model than with poor feature engineering but a great model A feature is an attribute that is useful to your problem Dr. Jason Brownlee "The algorithms we used are very standard for Kagglers...We spent most of our efforts in feature engineering. Xavier Conort, #1 Kaggler in 2013 Aims to convert data attributes into data features Aims to optimize data modelling Requires understanding of the dataset and research problem, and understanding of model you plan on using can be domain driven, or data driven i.e. for SVM with linear kernel you need to manually construct nonlinear interactions between features and feed them as input to your SVM model. An SVM with polynomial kernel will naturally capture them. other example: SVMs are very sensitive to dimensions of features, while DT/RFs are not With tabular data you combine, aggregate, split and/or decompose features in order to create new ones Given an output y and a feature x, you can try the following transforms first: e^x, log(x), x^2, x^3 an indicator of the usefulness of the transformation is if the correlation between y and x' is higher than the correlation between y and x best way to validate this is to check your model error with or without the transformed feature http://machinelearningmastery.com/discover-feature-engineering-how-to-engineer-features-and-how-to-get-good-at-it/ http://trevorstephens.com/kaggle-titanic-tutorial/r-part-4-feature-engineering/ End of explanation df_clean_copy.columns scat1 = sns.regplot(x='age', y = 'cholesterol', fit_reg=True, data = df_clean_copy) plt.xlabel('Age') plt.ylabel('Cholesterol') plt.suptitle('Relationship between age and cholesterol') scat1 #the r coefficient is a measure of association, of how closely points are clustered around a line #correlations are always between -1 and 1 #r measures solely linear association #association, not causation #it is a number without units #it can mislead in presence of outliers or non linear association -> always draw a scatter plot as well and check visually #when you look at a scatter plot you look at direction form and strength of relationship #if you identify a nonlinear association, with one or more curves, then you know you ought to add nonlinear explanatory #variables to your regression model, to aim to capture this nonlinear association as well. #ecological correlations based on averages can be misleading and overstate strength of associations for individual #units print('association between age and cholesterol') print(scipy.stats.pearsonr(df_clean_copy['age'], df_clean_copy['cholesterol'])) Explanation: the relationship between chest pain and the diagnosis holds for both levels of the sex variables, hence it is not a moderator. we would test for moderator variables in the case of a quantitative response variable the same way: 1. 
divide the population into the sublevels of the third variables 2. conduct an smf.ols test for each to see if the relationship is statistically significant for each level identifying a confounding variable does not allow to establish causation, just to get closer to a causal connection. due to infinite number of possible lurking variables, observational studies cannot rly establish causation a lurking of confounding variable is a third variable that is associated with both the explanatory and response variables. i.e. x=firefighters; y=damage caused by a fire. plot would suggest more firefighters causes more fire damage. in reality there is a third confounding variable that influences both, seriousness of the fire. In a study we want to demonstrate that our statistical relationship is valid even after controlling for confounders. now we will test whether there is a relationship between two quantitative variables, age and cholesterol for this we use the pearson correlation test r, going from -1 to 1 only tells us whether the two variables are linearly related. they may be related in nonlinear ways therefore it's always important to look at r in parallel with a scatterplot of the two variables r squared is a measure of how much variability in one variable can be explained by the other variable to calculate the pearson coefficient we need to remove all missing values Please remember that when two variables are correlated it is possible that: X causes Y or Y causes X Z causes both X and Y X and Y are correlated by chance - a spurious correlation End of explanation df_clean_copy.columns #categorical variables: sex, chest_pain, fasting_bs, rest_ecg, exercise_angina #quantitative variables: age, rest_bp, cholesterol, max_heart_rate, st_depression df_clean_copy['chest_pain'].value_counts() recode_chest_pain = {1:0, 2:1, 3:2, 4:3} df_clean_copy['chest_pain_p'] = df_clean_copy['chest_pain'].map(recode_chest_pain) df['fasting_bs'].value_counts() df['rest_ecg'].value_counts() df['exercise_angina'].value_counts() df_clean_copy['age_c'] = df_clean_copy['age'] - df_clean_copy['age'].mean() df_clean_copy['rest_bp_c'] = df_clean_copy['rest_bp'] - df_clean_copy['rest_bp'].mean() df_clean_copy['cholesterol_c'] = df_clean_copy['cholesterol'] - df_clean_copy['cholesterol'].mean() df_clean_copy['max_heart_rate_c'] = df_clean_copy['max_heart_rate'] - df_clean_copy['max_heart_rate'].mean() df_clean_copy['st_depression_c'] = df_clean_copy['st_depression'] - df_clean_copy['st_depression'].mean() df_clean_copy.columns df_clean_copy_c = df_clean_copy[['age_c', 'sex', 'chest_pain_p', 'rest_bp_c', 'cholesterol_c', 'fasting_bs', 'rest_ecg', 'max_heart_rate_c', 'exercise_angina', 'st_depression_c', 'diagnosis_binary']] df_clean_copy_c.columns model1 = smf.ols(formula = 'age_c ~ sex', data = df_clean_copy_c).fit() print(model1.summary()) Explanation: Linear regression: multivariate linear regression for quantitative response variable logistic regression for binary categorical response variable Assumptions of these types of models: Normality: residuals from our linear regression model are normally distributed. if they are not, our model may be misspecified Linearity: association between explanatory and response variable is linear Homoscedasticity (or assumption of constant variance): variability in the response variable is the same at all levels of the explanatory variable. i.e. 
if residuals are spread around the regression line in a similar manner as you move along the x axis (values of the explanatory variable) Independence: observations are not correlated with each other. Longitudinal data can violate this assumption, as well as hierarchical nesting/clustering data i.e. looking at students by classes. this assumption is the most serious to be violated, and also cannot be fixed by modifying the variables. the data structure itself is the problem. We have to contend with: Multicollinearity: explanatory variables are highly correlated with each other. this can mess up your parameter estimates or make them highly unstable. Signs: 1. highly associated variable not significant. 2. negative regression coefficient that should be positive 3. taking out an explanatory variable drastically changes the results Outliers: can affect your regression line multiple regression model allows us to find the relationship between one explanatory variable and the reponse variable, while controlling (holding constant at 0) all the other variables. for interpretability of our model, each variable needs to include a meaningful value of 0, so as to make it easier to interpret the coefficients (what does it mean to hold cholesterol constant at 0 if its range has no value of 0?) for a categorical variable, we can just recode one of the values to be 0 for a quantitative variable, we have to center it. Centering = subtracting the mean of a variable from the value of the variable. We are therefore recoding it so that its mean=0. if a quantitative explanatory variable includes a meaningful value of 0 already, we may not need to center it. in linear regression we only center explanatory variables not response one in logistic regression we always need to code response binary variable so that 0 means no outcome and 1 outcome occurred this is true whether outcome is positive or negative. We will create a multiple regression model, investigating the relationship between our explanatory variables and our response variable diagnosis we will first center the explanatory variables. for categorical variables, one of the categories needs to be 0, for quantitative variables, we need to subtract the mean from each value. Notes: do not center the response variable. is using logistic regression, do recode the binary response variable to make sure one class is coded as 0 End of explanation model2 = smf.ols(formula = 'age_c ~ sex + cholesterol_c', data=df_clean_copy_c).fit() print(model2.summary()) #here we will skip a couple of steps and create the overall model with all variables. model3 = smf.ols(formula = 'age_c ~ sex + C(chest_pain_p) + (rest_bp_c) + cholesterol_c + fasting_bs + C(rest_ecg) + \ max_heart_rate_c + exercise_angina + st_depression_c + diagnosis_binary', data = df_clean_copy_c).fit() print(model3.summary()) Explanation: We now add explanatory variables to our model one at a time. In doing this, we keep an eye on what impact adding an explanatory variable will have on our model. Sometimes for example we add a variable, and another variable that was statistically significant (had a p value below 0.05), suddenly becomes insignificant. This means the new variable is a confounder, a moderator, variable in the relationship between that other variable and the response variable. 
End of explanation
#Q-Q plot for normality
fig1 = sm.qqplot(model3.resid, line='r')
#the red line represents the residuals we would expect if the model residuals were normally distributed
#our residuals deviate somewhat from the red line, especially at the lower and higher quantiles, meaning they do not
#follow a normal distribution. This suggests that some curvilinear association is not fully captured by the model.
#We could add more explanatory variables (or transformed terms) to try to better explain any curvilinear association.

#simple plot of residuals
stdres = pd.DataFrame(model3.resid_pearson)
plt.plot(stdres, 'o', ls='None')
l = plt.axhline(y=0, color='r')
plt.ylabel('Standardized Residual')
plt.xlabel('Observation Number')
#resid_pearson standardizes our model's residuals
#ls='None' means the points will not be connected by a line
#we expect most residuals to fall within 2 standard deviations of the mean; values beyond 2 are outliers, and beyond 3
#are extreme outliers.
#if more than 1% of our observations have standardized residuals with an absolute value greater than 2.5, or more than 5%
#have one greater than or equal to 2, there is evidence that the fit of the model is poor. The most common cause is the
#omission of important explanatory variables from the model.
#standardizing residuals amounts to rescaling them to a mean of 0 and a standard deviation of 1, so they can be compared
#against a standard normal distribution; in a well-specified linear regression they scatter evenly around the horizontal
#line at 0.
#if the residuals show a strong pattern (upward, downward, polynomial), that is a good indication of nonlinearity
#in the underlying relationship

#leverage plot
fig4 = sm.graphics.influence_plot(model3, size=8)
print(fig4)
#leverage is always between 0 and 1; at 0, an observation has no influence on our model.
#leverage is a measure of how much influence a specific observation (and therefore its residual) has on our model.
#we see that we have extreme outliers, but they are low leverage, meaning they do not have an undue influence on our
#estimation of the regression model.
#we have one observation that is both high leverage and an outlier, observation 33. We would need to investigate
#the corresponding record further.

#now let's focus on the actual response variable of the study
#since it is a binary variable, we need a logistic regression model
lreg1 = smf.logit(formula = 'diagnosis_binary ~ C(chest_pain_p)', data = df_clean_copy_c).fit()
print(lreg1.summary())

#again, normally we would add variables one at a time, but here we will go faster.
lreg2 = smf.logit(formula = 'diagnosis_binary ~ age_c + sex + C(chest_pain_p) + cholesterol_c', data = df_clean_copy_c).fit()
print(lreg2.summary())

lreg3 = smf.logit(formula = 'diagnosis_binary ~ age_c + sex + C(chest_pain_p) + rest_bp_c + \
                             cholesterol_c + fasting_bs + C(rest_ecg) + max_heart_rate_c + \
                             exercise_angina + st_depression_c', data = df_clean_copy_c).fit()
print(lreg3.summary())

#however, for logistic regression it makes much more sense to interpret odds ratios
#this is because in binary logistic regression we model the probability of the outcome (response variable) being 0 or 1
#odds ratios are obtained by exponentiating the coefficients reported in the regression output.
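#to make the exponentiation step concrete, the odds ratio for a single 0/1 variable can also be computed by hand,
#without the model. The sketch below is illustrative only: it assumes sex and diagnosis_binary are both coded 0/1 in
#df_clean_copy_c (as set up above) and gives the unadjusted odds ratio, whereas lreg3 gives odds ratios adjusted for
#the other explanatory variables.
p1 = df_clean_copy_c.loc[df_clean_copy_c['sex'] == 1, 'diagnosis_binary'].mean()   #proportion diagnosed when sex == 1
p0 = df_clean_copy_c.loc[df_clean_copy_c['sex'] == 0, 'diagnosis_binary'].mean()   #proportion diagnosed when sex == 0
print((p1 / (1 - p1)) / (p0 / (1 - p0)))   #unadjusted odds ratio for sex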
#Odds ratio (OR) for an explanatory variable:
#if OR=1, there is no meaningful association between the explanatory and response variables
#if OR<1, the response variable becomes less likely as the explanatory one increases
#if OR>1, the response variable becomes more likely as the explanatory one increases
print('Odds Ratios')
print(np.exp(lreg3.params))
#Interpretation of the OR:
#here we would say that, based on our sample, women were about 3.7 times more likely than men to have a diagnosis of 1.

# odds ratios with 95% confidence intervals
params = lreg3.params
conf = lreg3.conf_int()
conf['OR'] = params
conf.columns = ['Lower CI', 'Upper CI', 'OR']
print (np.exp(conf))
#we have 95% confidence that the sex odds ratio lies between 2.24 and 6.12 for the population.

#statsmodels offers significantly fewer residual-analysis options for logistic regression than for linear regression,
#yet there is no reason we cannot learn something by studying the residuals
#simple plot of residuals
stdres = pd.DataFrame(lreg3.resid_pearson)
plt.plot(stdres, 'o', ls='None')
l = plt.axhline(y=0, color='r')
plt.ylabel('Standardized Residual')
plt.xlabel('Observation Number')
#resid_pearson standardizes our model's residuals
#ls='None' means the points will not be connected by a line
#we expect most residuals to fall within 2 standard deviations of the mean; values beyond 2 are outliers, and beyond 3
#are extreme outliers.
#if more than 1% of our observations have standardized residuals with an absolute value greater than 2.5, or more than 5%
#have one greater than or equal to 2, there is evidence that the fit of the model is poor. The most common cause is the
#omission of important explanatory variables from the model.
#as in linear regression, the reference line is horizontal at 0, and well-behaved residuals scatter evenly around it
#if the residuals show a strong pattern (upward, downward, polynomial), that is a good indication of nonlinearity
#in the underlying relationship

#Statsmodels and scikit-learn still offer few and rather limited 'off the shelf' options for imputing missing values in
#more statistically sound ways.
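#before anything more sophisticated, a simple baseline is single imputation with pandas itself. This is only a sketch:
#it reuses df_red, the dataframe with missing values used earlier, and fills each quantitative column with its median.
#Single imputation ignores the uncertainty in the imputed values, which is exactly what multiple imputation (below)
#tries to address.
df_baseline = df_red.copy()
for col in ['rest_bp', 'cholesterol', 'max_heart_rate']:
    df_baseline[col] = df_baseline[col].fillna(df_baseline[col].median())
print(df_baseline[['rest_bp', 'cholesterol', 'max_heart_rate']].isnull().sum())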
#As of statsmodels 0.8.0 at least, statsmodels offers the MICE imputation function
#a description is available here: http://www.statsmodels.org/dev/imputation.html
#and the details behind the implementation are here:
#http://www.statsmodels.org/dev/_modules/statsmodels/imputation/mice.html
#to be able to use MICE you will need to update the statsmodels version shipped with Anaconda to 0.8.0
#I have yet to find good tutorials/examples of the mice function
#Let's create here a dataset with some missing values in one of the variables
df_clean_mice = df_red[df_red['rest_bp'].notnull() & df_red['cholesterol'].notnull() & df_red['fasting_bs'].notnull() & \
                       df_red['max_heart_rate'].notnull() & df_red['exercise_angina'].notnull()]
df_clean_mice.isnull().any()
df_clean_mice.describe()
#we keep the quantitative variables plus the binary response, so that the analysis formula below only refers to columns
#that are actually present in this dataframe (this assumes df_red carries the diagnosis_binary indicator created earlier)
df_clean_mice = df_clean_mice[['age', 'rest_bp', 'cholesterol', 'max_heart_rate', 'st_depression', 'diagnosis_binary']]
import statsmodels.imputation.mice as mice
import statsmodels
from statsmodels.base.model import LikelihoodModelResults
from statsmodels.regression.linear_model import OLS
from collections import defaultdict
#we wrap our dataframe in a MICEData object
imp = mice.MICEData(df_clean_mice)
#we specify our analysis model
formula_mice = 'diagnosis_binary ~ age + rest_bp + cholesterol + max_heart_rate + st_depression'
#we now take the MICEData object and the formula specification, and run both the imputation and the analysis
#(MICE expects a model class such as sm.Logit; the fit object is stored under a new name so it does not shadow the
#imported mice module)
mice_model = mice.MICE(formula_mice, sm.Logit, imp)
mice_results = mice_model.fit(10, 10)
print(mice_results.summary())
#various plots and summary statistics are available and/or being developed with the MICE package.
Explanation: We identified some explanatory variables that are associated with age, but our model overall barely explains 27% of the variation in the response variable. Let's run some diagnostics.
End of explanation
df_clean_copy.columns
#although decision trees and random forests are particularly suitable for dealing with categorical variables, and in
#many statistical packages we can input categorical variables directly, because of how they are implemented in
#scikit-learn we cannot do that here. We first have to encode them in a process called one-hot encoding. This creates
#a separate column (variable) for each value of our explanatory categorical variable. For this we use pandas'
#get_dummies function
df_sex = pd.get_dummies(df_clean_copy['sex'], prefix = 'sex')
df_chest_pain = pd.get_dummies(df_clean_copy['chest_pain'], prefix = 'chest_pain')
df_fasting_bs = pd.get_dummies(df_clean_copy['fasting_bs'], prefix = 'fasting_bs')
df_rest_ecg = pd.get_dummies(df_clean_copy['rest_ecg'], prefix = 'rest_ecg')
df_exercise_angina = pd.get_dummies(df_clean_copy['exercise_angina'], prefix='exercise_angina')
df_merged = pd.concat([df_clean_copy, df_sex, df_chest_pain, df_fasting_bs, df_rest_ecg, df_exercise_angina], axis=1)
df_merged.columns
df_dt = df_merged[['age', 'sex_0.0', 'sex_1.0', 'chest_pain_1.0', 'chest_pain_2.0', 'chest_pain_3.0', \
                   'chest_pain_4.0', 'rest_bp', 'cholesterol', 'fasting_bs_0.0', 'fasting_bs_1.0', \
                   'rest_ecg_1.0', 'rest_ecg_2.0', 'max_heart_rate', 'exercise_angina_0.0', 'exercise_angina_1.0',\
                   'st_depression', 'diagnosis_binary']]
#here we select as predictors all variables in our dataset except for the response one
#(.loc is used because the .ix indexer is deprecated in recent pandas)
predictors = df_dt.loc[:, df_dt.columns != 'diagnosis_binary']
predictors.head(5)
#we select as the target our response variable
target = df_dt['diagnosis_binary']
#we create our training and testing datasets. Each will have its predictors and its target (response variable).
pred_train, pred_test, tar_train, tar_test = train_test_split(predictors, target, test_size = 0.4)
print(pred_train.shape, pred_test.shape, tar_train.shape, tar_test.shape)
classifier = DecisionTreeClassifier()
classifier = classifier.fit(pred_train, tar_train)
predictions = classifier.predict(pred_test)
sklearn.metrics.confusion_matrix(tar_test, predictions)
#let's look at which of our variables it considers most important:
print(classifier.feature_importances_)
sklearn.metrics.accuracy_score(tar_test, predictions)
Explanation: Machine learning for data exploration
For me, machine learning is simply the branch of statistics that has traditionally focused on working with large, heterogeneous datasets characterized by numerous variables that interact in nonlinear ways. It is a collection of tools, just like statistics in general.
An amazing article on traditional parametric statistics vs machine learning is "Statistical Modeling: The Two Cultures," Leo Breiman, 2001.
Machine learning can be used for:
1. regression
2. classification
3. feature engineering and/or selection
Accuracy is the rate at which an algorithm correctly classifies or estimates; its complement is the test error rate, which we aim to minimize.
(In linear regression, which we saw before, the error was measured by the mean squared error; in logistic regression, accuracy measures how well the model classifies observations.)
Supervised vs unsupervised learning:
In supervised learning we work with labeled data.
In unsupervised learning we aim to find patterns in unlabeled data.
In machine learning we regularly face the bias-variance trade-off:
Variance = the change in parameter estimates across different data sets
Bias = how far off the model's estimated values are from the true values
Ideally we want low variance and low bias, but they are negatively associated: as one decreases, the other increases.
Generally, as model complexity increases, variance increases and bias decreases; simple models have lower variance but are more biased.
We will briefly apply three machine learning algorithms here, which we could use in our exploratory data analysis:
Decision trees
Random forests
Support vector machines
I intend to expand this section in the future.
It is difficult today not to take advantage of some of the machine learning tools available, including for data exploration.
We often forget that key applications of machine learning tools are to help us gain insights faster than with more manual methods, and to support feature engineering and/or selection.
In the data exploration phase, you would use fairly raw, off-the-shelf machine learning algorithms, simply for exploratory/descriptive purposes.
Decision trees
Note: decision trees cannot handle missing data!
End of explanation
classifier2 = RandomForestClassifier(n_estimators = 25)
classifier2 = classifier2.fit(pred_train, tar_train)
predictions2 = classifier2.predict(pred_test)
sklearn.metrics.confusion_matrix(tar_test, predictions2)
#let's look at which of our variables the random forest considers most important:
#(these follow the order in which we input them to the random forest, i.e. our dataset)
print(classifier2.feature_importances_)
sklearn.metrics.accuracy_score(tar_test, predictions2)
Explanation: Random forests
Random forests, an ensemble learning method, are a more sophisticated way of using trees.
Pioneered by Tin Kam Ho, Leo Breiman (who also created 'bagging') and Adele Cutler.
For a full discussion, please see
https://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm
http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
End of explanation
from sklearn import svm
classifier3 = svm.SVC()
classifier3 = classifier3.fit(pred_train, tar_train)
predictions3 = classifier3.predict(pred_test)
sklearn.metrics.confusion_matrix(tar_test, predictions3)
sklearn.metrics.accuracy_score(tar_test, predictions3)
Explanation: Random forests typically outperform decision trees, in particular when the decision tree model exhibits instability.
The lower accuracy here is perhaps a result of our off-the-shelf use of the random forest - we would need to tune its parameters to optimize it for the problem at hand.
Support vector machines
SVMs are a set of supervised learning methods used for classification and regression.
They handle nonlinearity, through kernelization, much better than a standard logistic classifier.
A couple of very good explanations:
http://www.kdnuggets.com/2016/07/support-vector-machines-simple-explanation.html
http://www.cs.columbia.edu/~kathy/cs4701/documents/jason_svm_tutorial.pdf
http://scikit-learn.org/stable/modules/svm.html
End of explanation
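#a natural follow-up, shown only as a sketch rather than as part of the original analysis: the accuracies above come
#from a single train/test split, and the SVM in particular is sensitive to the scale of the features. Assuming a
#scikit-learn version (0.18+) where these utilities live in sklearn.model_selection, sklearn.preprocessing and
#sklearn.pipeline, 5-fold cross-validation with a scaled SVM reuses the predictors and target objects defined above.
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
#each model is scored on the same 5-fold splits; only the SVM gets a scaling step, since trees are scale-invariant
models = {'decision tree': DecisionTreeClassifier(),
          'random forest': RandomForestClassifier(n_estimators=25),
          'svm with scaling': make_pipeline(StandardScaler(), svm.SVC())}
for name, model in models.items():
    scores = cross_val_score(model, predictors, target, cv=5, scoring='accuracy')
    print(name, scores.mean(), scores.std())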
1,699
Given the following text description, write Python code to implement the functionality described below step by step Description: Taller evaluable sobre la extracción, transformación y visualización de datos usando IPython Juan David Velásquez Henao jdvelasq@unal.edu.co Universidad Nacional de Colombia, Sede Medellín Facultad de Minas Medellín, Colombia Instrucciones Para realizar el taller Step1: 2.-- Compute e imprima el número de registros con datos faltantes. Step2: 3.-- Compute e imprima el número de registros duplicados. Step3: 4.-- Elimine los registros con datos duplicados o datos faltantes, e imprima la cantidad de registros que quedan (registros completos). Step4: 5.-- Compute y grafique el precio primedio diario. Step5: Ya que la grafica comienza en 1996, se corrigen los valores de la fecha para verla completa Step6: 6.-- Compute y grafique el precio máximo por mes. Step7: 7.-- Compute y grafique el precio mínimo mensual. Step8: 8.-- Haga un gráfico para comparar el precio máximo del mes (para cada mes) y el precio promedio mensual. Step9: 9.-- Haga un histograma que muestre a que horas se produce el máximo precio diario para los días laborales. Step10: Este metodo tiene en cuenta el valor maximo diario, y todas las horas a las cuales se presenta. Step11: Este metodo tiene en cuenta unicamente la primera hora a la cual se presenta el valor maximo en el dia 10.-- Haga un histograma que muestre a que horas se produce el máximo precio diario para los días sabado. Step12: Este metodo tiene en cuenta el valor maximo diario, y todas las horas a las cuales se presenta. Step13: Este metodo tiene en cuenta unicamente la primera hora a la cual se presenta el valor maximo en el dia 11.-- Haga un histograma que muestre a que horas se produce el máximo precio diario para los días domingo. Step14: Este metodo tiene en cuenta el valor maximo diario, y todas las horas a las cuales se presenta. Step15: Este metodo tiene en cuenta unicamente la primera hora a la cual se presenta el valor maximo en el dia 12.-- Imprima una tabla con la fecha y el valor más bajo por año del precio de bolsa. Step16: 13.-- Haga una gráfica en que se muestre el precio promedio diario y el precio promedio mensual.
Python Code: import pandas as pd x=pd.DataFrame() #Mejor hasta ahora for m in range(1995,2018): if m < 2016: o='.xlsx' else: o='.xls' if m < 2000: sK=3 else: sK=2 n='Precio_Bolsa_Nacional_($kwh)_' + str(m) + o y=pd.read_excel(n, skiprows=sK, parse_cols=24) x= x.append(y) print(x.head()) print(x.tail()) print(str(len(x.index)) + ' Filas y ' + str(len(x.columns)) + ' Columnas') Explanation: Taller evaluable sobre la extracción, transformación y visualización de datos usando IPython Juan David Velásquez Henao jdvelasq@unal.edu.co Universidad Nacional de Colombia, Sede Medellín Facultad de Minas Medellín, Colombia Instrucciones Para realizar el taller: En la carpeta 'Taller' del repositorio 'ETVL-IPython' se encuentran los archivos 'Precio_Bolsa_Nacional_($kwh)_'*'.xls' en formato de Microsoft Excel, los cuales contienen los precios históricos horarios de la electricidad para el mercado eléctrico Colombiano entre los años 1995 y 2017 en COL-PESOS/kWh. A partir de la información suministrada resuelva los siguientes puntos usando el lenguaje de programación Python. Para el envío: Al terminar el taller, y dentro de las fechas especificadas en la plataforma de OLADE, debe subir este archivo a su perfil de GitHub. En la plataforma debe copiar el enlace a este archivo, a modo de entregable. Preguntas 1.-- Lea los archivos y cree una tabla única concatenando la información para cada uno de los años. Imprima el encabezamiento de la tabla usando head(). End of explanation print('Hay un total de ' + str(len(x)-len(x.dropna())) + ' registros con datos faltantes.') Explanation: 2.-- Compute e imprima el número de registros con datos faltantes. End of explanation print('Hay un total de ' + str(len(x)-len(x.drop_duplicates())) + ' registros con datos duplicados.') Explanation: 3.-- Compute e imprima el número de registros duplicados. End of explanation print('Hay un total de ' + str(len(x))+' Registros.') z=x.dropna().drop_duplicates() print('Al eliminar los registros duplicados o con datos faltantes quedan ' + str(len(z))+' registros completos.') print('Finalmente quedan ' + str((len(z)-len(z.dropna()))+(len(z)-len(z.drop_duplicates()))) + ' registros duplicados o con datos faltantes, es decir, ninguno.') Explanation: 4.-- Elimine los registros con datos duplicados o datos faltantes, e imprima la cantidad de registros que quedan (registros completos). End of explanation import matplotlib %matplotlib inline import numpy as np import matplotlib.pyplot as plt z['Prom']=z.mean(axis=1) z.groupby('Fecha').mean()['Prom'].plot(kind='line', title='Precio Promedio Diario Erroneamente comenzando en 1996') Explanation: 5.-- Compute y grafique el precio primedio diario. End of explanation w=[] for n in range(len(z['Fecha'])): w.append(str(z.iloc[n,0])[0:10]) z['Fecha']=w z['Prom']=z.mean(axis=1) z.groupby('Fecha').mean()['Prom'].plot(kind='line', title='Precio Promedio Diario Correcto comenzando en 1995', y='FFF') Explanation: Ya que la grafica comienza en 1996, se corrigen los valores de la fecha para verla completa End of explanation z['Max']=z.max(axis=1) w=[] for n in range(len(z['Fecha'])): w.append(str(z.iloc[n,0])[0:7]) z['Ano-Mes']=w z.groupby('Ano-Mes').max()['Max'].plot(kind='line', title='Precio Máximo Mensual') Explanation: 6.-- Compute y grafique el precio máximo por mes. End of explanation z['Min']=z.min(axis=1) z.groupby('Ano-Mes').min()['Min'].plot(kind='line', title='Precio Mínimo Mensual') Explanation: 7.-- Compute y grafique el precio mínimo mensual. 
End of explanation z.groupby('Ano-Mes').max()['Max'].plot(kind='line', legend='true') z.groupby('Ano-Mes').mean()['Prom'].plot(kind='line', legend='true', title='Comparación Precio Promedio y Precio Máximo Mensual') Explanation: 8.-- Haga un gráfico para comparar el precio máximo del mes (para cada mes) y el precio promedio mensual. End of explanation import datetime w=[] v=[] for n in range(len(z['Fecha'])): temp=str(z.iloc[n,0]) ano, mes, dia = temp.split('-') dia=str(dia)[0:3] year=int(ano) month=int(mes) day=int(dia) semnum=datetime.date(year, month, day).weekday() if semnum>4: Labor=0 else: Labor=1 w.append(semnum) v.append(Labor) z['Semana']=w z[['Semana']]=z[['Semana']].apply(pd.to_numeric) z['Labor']=v z[['Labor']]=z[['Labor']].apply(pd.to_numeric) w=[] w=z[z['Labor']==1] v=[] for n in range(len(w['Fecha'])): for m in range(1,25): if w.iloc[n,m]==w.iloc[n,26]: v.append(m-1) continue ParaHist=pd.DataFrame() ParaHist['Maximo para Laborales']=v ParaHist.plot.hist(alpha=0.5, title='Histograma con el Precio Maximo Diario para dias Laborales') Explanation: 9.-- Haga un histograma que muestre a que horas se produce el máximo precio diario para los días laborales. End of explanation s=[] zZ=[] tT=[] for n in range(len(w['Fecha'])): s=w.iloc[n].values[1:25] tT=[i for i, e in enumerate(s) if e == max(s)] zZ.append(tT[0]) tT=[] continue ParaHist=pd.DataFrame() ParaHist['Maximo para Laborales']=zZ ParaHist.plot.hist(alpha=0.5, title='Histograma con el Precio Maximo Diario para dias Laborales') Explanation: Este metodo tiene en cuenta el valor maximo diario, y todas las horas a las cuales se presenta. End of explanation w=z[z['Semana']==5] v=[] for n in range(len(w['Fecha'])): for m in range(1,25): if w.iloc[n,m]==w.iloc[n,26]: v.append(m-1) continue ParaHist=pd.DataFrame() ParaHist['Maximo para Sabados']=v ParaHist.plot.hist(alpha=0.5, title='Histograma con el Precio Maximo Diario para los Sabados') Explanation: Este metodo tiene en cuenta unicamente la primera hora a la cual se presenta el valor maximo en el dia 10.-- Haga un histograma que muestre a que horas se produce el máximo precio diario para los días sabado. End of explanation s=[] zZ=[] tT=[] for n in range(len(w['Fecha'])): s=w.iloc[n].values[1:25] tT=[i for i, e in enumerate(s) if e == max(s)] zZ.append(tT[0]) tT=[] continue ParaHist=pd.DataFrame() ParaHist['Maximo para Sabados']=zZ ParaHist.plot.hist(alpha=0.5, title='Histograma con el Precio Maximo Diario para los Sabados') Explanation: Este metodo tiene en cuenta el valor maximo diario, y todas las horas a las cuales se presenta. End of explanation w=z[z['Semana']==6] v=[] for n in range(len(w['Fecha'])): for m in range(1,25): if w.iloc[n,m]==w.iloc[n,26]: v.append(m-1) continue ParaHist=pd.DataFrame() ParaHist['Maximo para Domingos']=v ParaHist.plot.hist(alpha=0.5, title='Histograma con el Precio Maximo Diario para los Domingos') Explanation: Este metodo tiene en cuenta unicamente la primera hora a la cual se presenta el valor maximo en el dia 11.-- Haga un histograma que muestre a que horas se produce el máximo precio diario para los días domingo. End of explanation s=[] zZ=[] tT=[] for n in range(len(w['Fecha'])): s=w.iloc[n].values[1:25] tT=[i for i, e in enumerate(s) if e == max(s)] zZ.append(tT[0]) tT=[] continue ParaHist=pd.DataFrame() ParaHist['Maximo para Domingos']=zZ ParaHist.plot.hist(alpha=0.5, title='Histograma con el Precio Maximo Diario para los Domingos') Explanation: Este metodo tiene en cuenta el valor maximo diario, y todas las horas a las cuales se presenta. 
End of explanation
w=[]
for n in range(len(z['Fecha'])):
    w.append(str(z.iloc[n,0])[0:4])
z['Ano']=w
w=z.groupby(['Ano']).min()['Min']
w
Explanation: Este metodo tiene en cuenta unicamente la primera hora a la cual se presenta el valor maximo en el dia 12.-- Imprima una tabla con la fecha y el valor más bajo por año del precio de bolsa.
End of explanation
Lt=pd.DataFrame()
Lt['Fecha']=z['Fecha']
Lt['Ano-Mes']=z['Ano-Mes']
Lt['Prom']=z['Prom']
#precio promedio mensual
Lt.groupby(Lt['Ano-Mes']).mean().plot(kind='line', legend=False)
#precio promedio diario
Lt.groupby(Lt['Fecha']).mean().plot(kind='line', legend=False)
z.groupby('Fecha').mean()['Prom'].plot(kind='line', legend=False)
z.groupby('Ano-Mes').mean()['Prom'].plot(kind='line', legend=False)
Explanation: 13.-- Haga una gráfica en que se muestre el precio promedio diario y el precio promedio mensual.
End of explanation
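#a closing note, as a sketch only: if the 'Fecha' column of the cleaned dataframe z built above is converted to a real
#datetime index, pandas can do the daily/monthly aggregations of questions 5-13 directly with resample, instead of the
#helper string columns 'Ano-Mes' and 'Ano'. The column names 'Prom' and 'Max' are the ones created earlier.
ts = z.copy()
ts['Fecha'] = pd.to_datetime(ts['Fecha'])
ts = ts.set_index('Fecha')
#'Prom' is already the daily mean of the 24 hourly columns, so resampling it gives the monthly mean of daily prices
daily_mean = ts['Prom']
monthly_mean = ts['Prom'].resample('M').mean()
monthly_max = ts['Max'].resample('M').max()
daily_mean.plot(legend=False)
monthly_mean.plot(legend=False, title='Precio promedio diario y mensual')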