Unnamed: 0 (int64, 0–16k) | text_prompt (stringlengths 110–62.1k) | code_prompt (stringlengths 37–152k) |
---|---|---|
2,600 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computation and comparison of the bispectrum and the rotational bispectrum
We show how to compute the bispectrum and the rotational bispectrum, as presented in the paper
Image processing in the semidiscrete group of rototranslations by D. Prandi, U. Boscain and J.-P. Gauthier.
Step7: Auxiliary functions
Step8: Spiral architecture implementation
Spiral architecture has been introduced by Sheridan in
Spiral Architecture for Machine Vision, PhD thesis
Pseudo-invariant image transformations on a hexagonal lattice, P. Sheridan, T. Hintz, and D. Alexander, Image Vis. Comput. 18, 907 (2000).
The implementation with hyperpels that we use in the following is presented in
A New Simulation of Spiral Architecture, X. He, T. Hintz, Q. Wu, H. Wang, and W. Jia, Proceedings of International Conference on Image Processing, Computer Vision, and Pattern Recognition (2006).
Hexagonal structure for intelligent vision, X. He and W. Jia, in Proc. 1st Int. Conf. Inf. Commun. Technol. ICICT 2005 (2005), pp. 52–64.
For a more detailed implementation, see the notebook Hexagonal grid.
We start by defining the centered hyperpel, which is defined on a 9x9 grid and is composed of 56 pixels. It has the shape
# o o x x x x x o o
# o x x x x x x x o
# o x x x x x x x o
# x x x x x x x x x
# x x x C x x x x x
# o x x x x x x x o
# o x x x x x x x o
# o o x x x x x o o
Step9: We now compute, in sa2hex, the address of the center of the hyperpel corresponding to a certain spiral address.
Step11: Then, we compute the value of the hyperpel corresponding to the spiral address, by averaging the values on the subpixels.
Step12: Spiral addition and multiplication
Step14: Computation of the bispectrum
We start by computing the vector $\omega_f(\lambda)$, where $\lambda$ is a certain spiral address.
Step16: Then, we can compute the "generalized invariant" corresponding to $\lambda_1$, $\lambda_2$ and $\lambda_3$, starting from the FFT of the image.
That is
$$
I^3_f(\lambda_1,\lambda_2,\lambda_3) = \langle\omega_f(\lambda_1)\odot\omega_f(\lambda_2),\omega_f(\lambda_3)\rangle.
$$
Step18: Finally, this function computes the bispectrum (or the rotational bispectrum) corresponding to the spiral addresses in the following picture.
<img src="./pixels.png" alt="Hexagonal pixels" style="width
Step19: Some timing tests.
Step21: Tests
Here we define various functions to batch test the images in the test folder.
Step25: Some timing tests.
Step27: Construction of the table for the paper | Python Code:
import numpy as np
from numpy import fft
from numpy import linalg as LA
from scipy import ndimage
from scipy import signal
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import os
%matplotlib inline
Explanation: Computation and comparison of the bispectrum and the rotational bispectrum
We show how to compute the bispectrum and the rotational bispectrum, as presented in the paper
Image processing in the semidiscrete group of rototranslations by D. Prandi, U. Boscain and J.-P. Gauthier.
End of explanation
def int2intvec(a):
Auxiliary function to recover a vector with the digits of a
given integer (in inverse order)
`a` : integer
digit = a%10
vec = np.array([digit],dtype=int)
a = (a-digit)//10
while a!=0:
digit = a%10
vec = np.append(vec,int(digit))
a = (a-digit)//10
return vec
ALPHABET7 = "0123456"
ALPHABET10 = "0123456789"
def base_encode(num, alphabet):
Encode a number in Base X
`num`: The number to encode
if (str(num) == alphabet[0]):
return int(0)
arr = []
base = len(alphabet)
while num:
rem = num % base
num = num // base
arr.append(alphabet[rem])
arr.reverse()
return int(''.join(arr))
def base7to10(num):
Convert a number from base 7 to base 10
`num`: The number to convert
arr = int2intvec(num)
num = 0
for i in range(len(arr)):
num += arr[i]*(7**(i))
return num
def base10to7(num):
Convert a number from base 10 to base 7
`num`: The number to convert
return base_encode(num, ALPHABET7)
def rgb2gray(rgb):
Convert an image from RGB to grayscale
`rgb`: The image to convert
r, g, b = rgb[:,:,0], rgb[:,:,1], rgb[:,:,2]
gray = 0.2989 * r + 0.5870 * g + 0.1140 * b
return gray
def oversampling(image, factor = 7):
Oversample a grayscale image by a certain factor, dividing each
pixel into factor*factor subpixels with the same intensity.
`image`: The image to oversample
`factor`: The oversampling factor
old_shape = image.shape
new_shape = (factor*old_shape[0], factor*old_shape[1])
new_image = np.zeros(new_shape, dtype = image.dtype)
for i in range(old_shape[0]):
for j in range(old_shape[1]):
new_image[factor*i:factor*i+factor,factor*j:factor*j+factor] = image[i,j]*np.ones((factor,factor))
return new_image
Explanation: Auxiliary functions
End of explanation
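As a quick sanity check of the helpers above (illustrative values only, not part of the original notebook): base10to7 should write 10 as the base-7 string 13, base7to10 should map it back, and oversampling should blow a 2x2 array up to 14x14 with the default factor of 7.
# illustrative round-trip through the base-conversion helpers
print(base10to7(10), base7to10(13))
# the default factor of 7 turns an (m, n) image into a (7*m, 7*n) image
print(oversampling(np.arange(4).reshape(2, 2)).shape)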
# The centered hyperpel
hyperpel = np.array([\
[-1,4],[0,4],[1,4],[2,4],[3,4],\
[-2,3],[-1,3], [0,3], [1,3], [2,3], [3,3], [4,3],\
[-2,2],[-1,2], [0,2], [1,2], [2,2], [3,2], [4,2],\
[-3,1],[-2,1],[-1,1], [0,1], [1,1], [2,1], [3,1], [4,1],[5,1],\
[-3,0],[-2,0],[-1,0], [0,0], [1,0], [2,0], [3,0], [4,0],[5,0],\
[-2,-1],[-1,-1], [0,-1], [1,-1], [2,-1], [3,-1], [4,-1],\
[-2,-2],[-1,-2], [0,-2], [1,-2], [2,-2], [3,-2], [4,-2],\
[-1,-3], [0,-3], [1,-3], [2,-3], [3,-3]])
hyperpel_sa = hyperpel - np.array([1,1])
Explanation: Spiral architecture implementation
Spiral architecture has been introduced by Sheridan in
Spiral Architecture for Machine Vision, PhD thesis
Pseudo-invariant image transformations on a hexagonal lattice, P. Sheridan, T. Hintz, and D. Alexander, Image Vis. Comput. 18, 907 (2000).
The implementation with hyperpels that we use in the following is presented in
A New Simulation of Spiral Architecture, X. He, T. Hintz, Q. Wu, H. Wang, and W. Jia, Proceedings of International Conference on Image Processing, Computer Vision, and Pattern Recognition (2006).
Hexagonal structure for intelligent vision, X. He and W. Jia, in Proc. 1st Int. Conf. Inf. Commun. Technol. ICICT 2005 (2005), pp. 52–64.
For a more detailed implementation, see the notebook Hexagonal grid.
We start by defining the centered hyperpel, which is defined on a 9x9 grid and is composed of 56 pixels. It has the shape
# o o x x x x x o o
# o x x x x x x x o
# o x x x x x x x o
# x x x x x x x x x
# x x x C x x x x x
# o x x x x x x x o
# o x x x x x x x o
# o o x x x x x o o
End of explanation
def sa2hex(spiral_address):
# Split the number in basic unit and call the auxiliary function
# Here we reverse the order, so that the index corresponds to the
# decimal position
digits = str(spiral_address)[::-1]
hex_address = np.array([0,0])
for i in range(len(digits)):
if int(digits[i])<0 or int(digits[i])>6:
print("Invalid spiral address!")
return
elif digits[i]!= '0':
hex_address += sa2hex_aux(int(digits[i]),i)
return hex_address
# This computes the row/column positions of the base cases,
# that is, in the form a*10^(zeros).
def sa2hex_aux(a, zeros):
# Base cases
if zeros == 0:
if a == 0:
return np.array([0,0])
elif a == 1:
return np.array([0,8])
elif a == 2:
return np.array([-7,4])
elif a == 3:
return np.array([-7,-4])
elif a == 4:
return np.array([0,-8])
elif a == 5:
return np.array([7,-4])
elif a == 6:
return np.array([7,4])
return sa2hex_aux(a,zeros-1)+ 2*sa2hex_aux(a%6 +1,zeros-1)
Explanation: We now compute, in sa2hex, the address of the center of the hyperpel corresponding to a certain spiral address.
End of explanation
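A couple of illustrative evaluations of sa2hex (addresses picked arbitrarily; the expected centre for address 1 can be read directly from the base cases above):
# spiral address 1 is a base case: its hyperpel centre is [0, 8]
print(sa2hex(1))
# a two-digit address goes through the recursive rule in sa2hex_aux
print(sa2hex(10))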
def sa_value(oversampled_image,spiral_address):
Computes the value of the hyperpel corresponding to the given
spiral coordinate.
hp = hyperpel_sa + sa2hex(spiral_address)
val = 0.
for i in range(56):
val += oversampled_image[hp[i,0],hp[i,1]]
return val/56
Explanation: Then, we compute the value of the hyperpel corresponding to the spiral address, by averaging the values on the subpixels.
End of explanation
def spiral_add(a,b,mod=0):
addition_table = [
[0,1,2,3,4,5,6],
[1,63,15,2,0,6,64],
[2,15,14,26,3,0,1],
[3,2,26,25,31,4,0],
[4,0,3,31,36,42,5],
[5,6,0,4,42,41,53],
[6,64,1,0,5,53,52]
]
dig_a = int2intvec(a)
dig_b = int2intvec(b)
if (dig_a<0).any() or (dig_a>7).any() \
or (dig_b<0).any() or (dig_b>7).any():
print("Invalid spiral address!")
return
if len(dig_a) == 1 and len(dig_b)==1:
return addition_table[a][b]
if len(dig_a) < len(dig_b):
dig_a.resize(len(dig_b))
elif len(dig_b) < len(dig_a):
dig_b.resize(len(dig_a))
res = 0
for i in range(len(dig_a)):
if i == len(dig_a)-1:
res += spiral_add(dig_a[i],dig_b[i])*(10**i)
else:
temp = spiral_add(dig_a[i],dig_b[i])
res += (temp%10)*(10**i)
# integer division keeps the carry an int under Python 3
carry_on = spiral_add(dig_a[i+1],(temp - temp%10)//10)
dig_a[i+1] = carry_on
if mod!=0:
return res%mod
return res
def spiral_mult(a,b, mod=0):
multiplication_table = [
[0,0,0,0,0,0,0],
[0,1,2,3,4,5,6],
[0,2,3,4,5,6,1],
[0,3,4,5,6,1,2],
[0,4,5,6,1,2,3],
[0,5,6,1,2,3,4],
[0,6,1,2,3,4,5],
]
dig_a = int2intvec(a)
dig_b = int2intvec(b)
if (dig_a<0).any() or (dig_a>7).any() \
or (dig_b<0).any() or (dig_b>7).any():
print("Invalid spiral address!")
return
sa_mult = int(0)
for i in range(len(dig_b)):
for j in range(len(dig_a)):
temp = multiplication_table[dig_a[j]][dig_b[i]]*(10**(i+j))
sa_mult=spiral_add(sa_mult,temp)
if mod!=0:
return sa_mult%mod
return sa_mult
Explanation: Spiral addition and multiplication
End of explanation
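Two illustrative single-digit checks against the lookup tables above (the expected values can be read straight from the tables; these lines are not in the original notebook):
# addition_table row 1, column 2 gives 15
print(spiral_add(1, 2))
# multiplication_table row 2, column 3 gives 4
print(spiral_mult(2, 3))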
def omegaf(fft_oversampled, sa):
Evaluates the vector omegaf corresponding to the given
spiral address sa.
`fft_oversampled`: the oversampled FFT of the image
`sa`: the spiral address where to compute the vector
omegaf = np.zeros(6, dtype=fft_oversampled.dtype)
for i in range(1,7):
omegaf[i-1] = sa_value(fft_oversampled,spiral_mult(sa,i))
return omegaf
Explanation: Computation of the bispectrum
We start by computing the vector $\omega_f(\lambda)$, where $\lambda$ is a certain spiral address.
End of explanation
def invariant(fft_oversampled, sa1,sa2,sa3):
Evaluates the generalized invariant of f on sa1, sa2 and sa3
`fft_oversampled`: the oversampled FFT of the image
`sa1`, `sa2`, `sa3`: the spiral addresses where to compute the invariant
omega1 = omegaf(fft_oversampled,sa1)
omega2 = omegaf(fft_oversampled,sa2)
omega3 = omegaf(fft_oversampled,sa3)
# Attention: np.vdot uses the scalar product with the complex
# conjugation at the first place!
return np.vdot(omega1*omega2,omega3)
Explanation: Then, we can compute the "generalized invariant" corresponding to $\lambda_1$, $\lambda_2$ and $\lambda_3$, starting from the FFT of the image.
That is
$$
I^3_f(\lambda_1,\lambda_2,\lambda_3) = \langle\omega_f(\lambda_1)\odot\omega_f(\lambda_2),\omega_f(\lambda_3)\rangle.
$$
End of explanation
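A small aside on conventions, using made-up two-component vectors (not from the paper): np.vdot conjugates its first argument, so invariant() above returns sum(conj(w1*w2) * w3); if the bracket in the formula is read with the conjugation on its second argument, this is the complex conjugate of the written expression, and both carry the same information.
# np.vdot(a, b) == sum(conj(a) * b), illustrated on arbitrary complex vectors
a = np.array([1+2j, 3-1j])
b = np.array([2-1j, 1+1j])
print(np.vdot(a, b), (a.conj() * b).sum())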
def bispectral_inv(fft_oversampled_example, rotational = False):
Computes the (rotational) bispectral invariants for any sa1
and any sa2 in the above picture.
`fft_oversampled_example`: oversampled FFT of the image
`rotational`: if True, we compute the rotational bispectrum
if rotational == True:
bispectrum = np.zeros(9**2*6,dtype = fft_oversampled_example.dtype)
else:
bispectrum = np.zeros(9**2,dtype = fft_oversampled_example.dtype)
indexes = [0,1,10,11,12,13,14,15,16]
count = 0
for i in range(9):
sa1 = indexes[i]
sa1_base10 = base7to10(sa1)
for k in range(9):
sa2 = indexes[k]
if rotational == True:
for r in range(6):
sa2_rot = spiral_mult(sa2,r)
sa2_rot_base10 = base7to10(sa2_rot)
sa3 = base10to7(sa1_base10+sa2_rot_base10)
bispectrum[count]=invariant(fft_oversampled_example,sa1,sa2,sa3)
count += 1
else:
sa2_base10 = base7to10(sa2)
sa3 = base10to7(sa1_base10+sa2_base10)
bispectrum[count]=invariant(fft_oversampled_example,sa1,sa2,sa3)
count += 1
return bispectrum
Explanation: Finally, this function computes the bispectrum (or the rotational bispectrum) corresponding to the spiral addresses in the following picture.
<img src="./pixels.png" alt="Hexagonal pixels" style="width: 200px;"/>
End of explanation
example = 1 - rgb2gray(plt.imread('./test-images/butterfly.png'))
fft_example = np.fft.fftshift(np.fft.fft2(example))
fft_oversampled_example = oversampling(fft_example)
%%timeit
bispectral_inv(fft_oversampled_example)
%%timeit
bispectral_inv(fft_oversampled_example, rotational=True)
Explanation: Some timing tests.
End of explanation
folder = './test-images'
def evaluate_invariants(image, rot = False):
Evaluates the invariants of the given image.
`image`: the matrix representing the image (not oversampled)
`rot`: if True we compute the rotational bispectrum
# compute the normalized FFT
fft = np.fft.fftshift(np.fft.fft2(image))
fft /= LA.norm(fft)
# oversample it
fft_oversampled = oversampling(fft)
return bispectral_inv(fft_oversampled, rotational = rot)
Explanation: Tests
Here we define various functions to batch test the images in the test folder.
End of explanation
%%timeit
evaluate_invariants(example)
%%timeit
evaluate_invariants(example, rot = True)
def bispectral_folder(folder_name = folder, rot = False):
Evaluates all the invariants of the images in the selected folder,
storing them in a dictionary with their names as keys.
`folder_name`: path to the folder
`rot`: if True we compute the rotational bispectrum
# we store the results in a dictionary
results = {}
for filename in os.listdir(folder_name):
infilename = os.path.join(folder_name, filename)
if not os.path.isfile(infilename):
continue
base, extension = os.path.splitext(infilename)
if extension == '.png':
test_img = 1 - rgb2gray(plt.imread(infilename))
bispectrum = evaluate_invariants(test_img, rot = rot)
results[os.path.splitext(filename)[0]] = bispectrum
return results
def bispectral_comparison(bispectrums, comparison = 'triangle', plot = True, log_scale = True):
Returns the difference of the norms of the given invariants w.r.t. the
comparison element.
`bispectrums`: a dictionary with as keys the names of the images and
as values their invariants
`comparison`: the element to use as comparison
if comparison not in bispectrums:
print("The requested comparison is not in the folder")
return
bispectrum_diff = {}
for elem in bispectrums:
diff = LA.norm(bispectrums[elem]-bispectrums[comparison])
# we remove nan results
if not np.isnan(diff):
bispectrum_diff[elem] = diff
return bispectrum_diff
def bispectral_plot(bispectrums, comparison = 'triangle', log_scale = True):
Plots the difference of the norms of the given invariants w.r.t. the
comparison element (by default in logarithmic scale).
`bispectrums`: a dictionary with as keys the names of the images and
as values their invariants
`comparison`: the element to use as comparison
`log_scale`: whether the plot should be in log scale
bispectrum_diff = bispectral_comparison(bispectrums, comparison = comparison)
# dict views are not indexable in Python 3, so take snapshots as lists
diff_keys = list(bispectrum_diff.keys())
diff_values = list(bispectrum_diff.values())
plt.plot(diff_values,'ro')
if log_scale == True:
plt.yscale('log')
for i in range(len(diff_values)):
# if we plot in log scale, we do not put labels on items that are
# too small, otherwise they exit the plot area.
if log_scale and diff_values[i] < 10**(-3):
continue
plt.text(i,diff_values[i],diff_keys[i][:3])
plt.title("Comparison with as reference '"+ comparison +"'")
Explanation: Some timing tests.
End of explanation
comparisons_paper = ['triangle', 'rectangle', 'ellipse', 'etoile', 'diamond']
def extract_table_values(bispectrums, comparisons = comparisons_paper):
Extract the values for the table of the paper.
`bispectrums`: a dictionary with as keys the names of the images and
as values their invariants
`comparisons`: list of elements to use as comparison
Returns a list of tuples. Each tuple contains the name of the comparison
element, the maximal value of the difference of the norms of the invariants
with respect to its rotated versions, and the minimal value of the same
difference with respect to the other images.
table_values = []
for elem in comparisons:
diff = bispectral_comparison(bispectrums, comparison= elem, plot=False)
l = len(elem)
match = [x for x in diff.keys() if x[:l]==elem]
not_match = [x for x in diff.keys() if x[:l]!=elem]
max_match = max([ diff[k] for k in match ])
min_not_match = min([ diff[k] for k in not_match ])
table_values.append((elem,'%.2E' % (max_match),'%.2E' % min_not_match))
return table_values
bispectrums = bispectral_folder()
bispectrums_rotational = bispectral_folder(rot=True)
extract_table_values(bispectrums)
extract_table_values(bispectrums_rotational)
Explanation: Construction of the table for the paper
End of explanation |
2,601 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
os.path
Writing code to work with files on multiple platforms is easy using the functions included in the os.path module. Even programs not intended to be ported between platforms should use os.path for reliable filename parsing.
Parsing Path
Step1: Building Path
If any argument to join begins with os.sep, all of the previous arguments are discarded and the new one becomes the beginning of the return value.
Step2: Normal Path
Step3: File Time | Python Code:
import os.path
PATHS = [
'/one/two/three',
'/one/two/three/',
'/',
'.',
'',
]
for path in PATHS:
print('{!r:>17} : {}'.format(path, os.path.split(path)))
for path in PATHS:
print('{!r:>17}:{}'.format(path, os.path.basename(path)))
for path in PATHS:
print('{!r:>17}:{}'.format(path, os.path.dirname(path)))
import os.path
PATHS = [
'filename.txt',
'filename',
'/path/to/filename.txt',
'/',
'',
'my-archive.tar.gz',
'no-extension.',
]
for path in PATHS:
print('{!r:>21} : {!r}'.format(path, os.path.splitext(path)))
import os.path
paths = ['/one/two/three/four',
'/one/two/threefold',
'/one/two/three/',
]
for path in paths:
print('PATH:', path)
print()
print('PREFIX:', os.path.commonprefix(paths))
import os.path
paths = ['/one/two/three/four',
'/one/two/threefold',
'/one/two/three/',
]
for path in paths:
print('PATH:', path)
print()
print('PREFIX:', os.path.commonpath(paths))
Explanation: os.path
Writing code to work with files on multiple platforms is easy using the functions included in the os.path module. Even programs not intended to be ported between platforms should use os.path for reliable filename parsing.
Parsing Path
End of explanation
import os.path
PATHS = [
('one', 'two', 'three'),
('/', 'one', 'two', 'three'),
('/one', '/two', '/three'),
]
for parts in PATHS:
print('{} : {!r}'.format(parts, os.path.join(*parts)))
import os.path
for user in ['', 'gaufung', 'nosuchuser']:
lookup = '~' + user
print('{!r:>15} : {!r}'.format(
lookup, os.path.expanduser(lookup)))
import os.path
import os
os.environ['MYVAR'] = 'VALUE'
print(os.path.expandvars('/path/to/$MYVAR'))
Explanation: Building Path
If any argument to join begins with os.sep, all of the previous arguments are discarded and the new one becomes the beginning of the return value.
End of explanation
import os.path
PATHS = [
'one//two//three',
'one/./two/./three',
'one/../alt/two/three',
]
for path in PATHS:
print('{!r:>22} : {!r}'.format(path, os.path.normpath(path)))
import os
import os.path
os.chdir('/usr')
PATHS = [
'.',
'..',
'./one/two/three',
'../one/two/three',
]
for path in PATHS:
print('{!r:>21} : {!r}'.format(path, os.path.abspath(path)))
Explanation: Normal Path
End of explanation
import os.path
import time
print('File :', '~/WorkSpace/PythonStandardLibrary/FileSystem/Path.ipynb')
print('Access time :', time.ctime(os.path.getatime('/Users/gaufung/WorkSpace/PythonStandardLibrary/FileSystem/Path.ipynb')))
print('Modified time:', time.ctime(os.path.getmtime('/Users/gaufung/WorkSpace/PythonStandardLibrary/FileSystem/Path.ipynb')))
print('Change time :', time.ctime(os.path.getctime('/Users/gaufung/WorkSpace/PythonStandardLibrary/FileSystem/Path.ipynb')))
print('Size :', os.path.getsize('/Users/gaufung/WorkSpace/PythonStandardLibrary/FileSystem/Path.ipynb'))
Explanation: File Time
End of explanation |
2,602 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Analysis Workshop at the Kajaani Science Days
What is data analysis? Data analysis means drawing new conclusions from data. For example, based on measurement data one can conclude that a new drug appears to lower blood pressure.
So what is this data, then? Nowadays data can be anything that is available in digital form. Traditionally, data has consisted of scientific observations diligently written down, for example into some kind of table. That is the case in the blood pressure example above. Today, however, a lot of analysis is already done on, for instance, real-time video. A good example is a robotic drone that flies along power lines and uses its camera feed to analyse when the snow load becomes dangerously heavy.
What is data analysis needed for? If the visionaries are to be believed, soon for just about everything. In science, analysing data has been central since the early 1900s at the latest. This traditional analytics of science and expert work is now being joined by a new user base, as more everyday data analysis needs have, frankly, exploded. Internet-era companies such as Facebook and Google are driving the rapid development of the new data analytics. In the business world, so-called Big Data is currently a very hot topic.
In any case, it is clear that in the future data analysis will be done far more and far more widely: not only in research institutes, but also in ordinary companies, government agencies and associations. Learning at least the basics gives you a considerable advantage for the future.
Down to business
Before you can analyse data, you first need to load some data. The code snippet below does exactly that.
Run the code by clicking the grey box so that it becomes selected. Choose Cell -> Run from the top menu and the code starts; an asterisk appears on the In line as a sign that it is running. When it is done, the results appear below. In this case the loaded data should appear as a table.
From now on, you can also run code more conveniently by pressing Ctrl and Enter.
Step1: What does the data look like?
Data analysis usually starts by plotting the data, that is, visualising it. Or, to be honest, it usually starts with a struggle to get the data cleaned, into the right shape and loaded onto the computer. But after that, we visualise.
Our data table contains a bit of everything from the past ten years. Its basic idea is that every piece of information is tied to a month. That is, for each month there are various measurements and other data, such as the monthly mean temperature from the Finnish Meteorological Institute, sales statistics, and word frequencies from discussions on the Suomi24 forum.
Let's plot the monthly ice cream sales found in the table. The data covers a long period, so the visualisation is hard to read as such, but you can pan around it and in that way make it easier to understand.
Step2: Browse the visualisation above for a moment. What is the basic pattern of ice cream sales?
Randomness
Let's look at the first year more closely. What else do you notice besides the effect of summer?
Step3: December sales are also elevated. Is a lot of ice cream eaten at Christmas? Perhaps.
However, we only looked at a single year, and a single observation should not be trusted. Unfortunately, it is impossible to see from the large chart what December sales are on average compared with the other winter months. We can, however, conveniently pick the desired values from the data and draw new visualisations.
Step4: Now we can see that winter-month sales vary quite a lot, and the Decembers drawn in red hardly differ from the other months at all. In proper data analysis, statistical tests are used to deal with random variation. A simple visualisation like this, however, already helps you gauge by eye how much random variation there is in the data and form some estimate of whether an observed value really is exceptional.
Combining data
A data analyst is often interested in what kinds of relationships there are between two different things. In our table that means, in practice, whether interesting relationships can be found between the values of different columns.
Let's plot two different quantities from the data
Step5: Plotted directly, you can see that both have a clear pattern that repeats on a yearly level. But do they coincide, and if they do, by how much?
For that we switch to a different kind of chart, namely a scatter plot. There the two quantities go on the x and y axes. Each dot corresponds to one month, and its x and y coordinates are taken from the columns for ice cream and allergy medicine sales.
Step6: We see that these two things more or less go hand in hand. When ice cream sales are high, allergy medicine sales are high as well. So the scatter plot looks like a swarm of dots flying from the bottom left corner towards the top right corner.
Can we therefore conclude that one causes the other? That is, does eating a lot of ice cream cause hay fever? Or do people treat hay fever by slurping ice cream? Because that is what the data would seem to be saying, right?
The relationship between these two variables is not that simple, though; there is, in a way, a third wheel involved. Let's draw the same scatter plot again so that the summer months get a green colour.
Step7: Mystery solved! It seems that summer weather causes both the rise in ice cream sales and the hay fever.
Finding new relationships
So far we have not touched the code. Next, however, you get to explore the relationships between different columns yourself. Let's first look at what the columns in the data are called.
Step8: Below, as an example, the relationship between ice cream sales and the monthly mean temperature has been plotted. You can change the column names and run the code again, which lets you see the relationship between the columns you chose.
Step9: You can, for example, try the frequencies of words on the Suomi24 forum and compare them with each other, or with the other columns.
A bit more automation
The search for relationships between columns can also be automated. For example, we can measure the relationship between columns with the so-called correlation and in this way compare all the columns with each other. The result is drawn as a heat map, where a dark colour corresponds to a strong correlation. | Python Code:
# Read in the spells that set up the environment
from pandas import DataFrame, Series, read_csv
from numpy import vstack, round, random
from bokeh.plotting import figure, show, output_notebook, hplot
from bokeh.charts import Bar, Scatter
from bokeh._legacy_charts import HeatMap
from bokeh.palettes import YlOrRd9
output_notebook()
import warnings
warnings.filterwarnings("ignore")
# Load the data file
data = read_csv('https://raw.githubusercontent.com/CSC-IT-Center-for-Science/kajaani-science-days-workshop/master/data.csv', sep=';', decimal=',')
# See what the data looks like
data
Explanation: Data Analysis Workshop at the Kajaani Science Days
What is data analysis? Data analysis means drawing new conclusions from data. For example, based on measurement data one can conclude that a new drug appears to lower blood pressure.
So what is this data, then? Nowadays data can be anything that is available in digital form. Traditionally, data has consisted of scientific observations diligently written down, for example into some kind of table. That is the case in the blood pressure example above. Today, however, a lot of analysis is already done on, for instance, real-time video. A good example is a robotic drone that flies along power lines and uses its camera feed to analyse when the snow load becomes dangerously heavy.
What is data analysis needed for? If the visionaries are to be believed, soon for just about everything. In science, analysing data has been central since the early 1900s at the latest. This traditional analytics of science and expert work is now being joined by a new user base, as more everyday data analysis needs have, frankly, exploded. Internet-era companies such as Facebook and Google are driving the rapid development of the new data analytics. In the business world, so-called Big Data is currently a very hot topic.
In any case, it is clear that in the future data analysis will be done far more and far more widely: not only in research institutes, but also in ordinary companies, government agencies and associations. Learning at least the basics gives you a considerable advantage for the future.
Down to business
Before you can analyse data, you first need to load some data. The code snippet below does exactly that.
Run the code by clicking the grey box so that it becomes selected. Choose Cell -> Run from the top menu and the code starts; an asterisk appears on the In line as a sign that it is running. When it is done, the results appear below. In this case the loaded data should appear as a table.
From now on, you can also run code more conveniently by pressing Ctrl and Enter.
End of explanation
show(Bar(data, label='Kuukausi', values='Jaatelomyynti'))
Explanation: What does the data look like?
Data analysis usually starts by plotting the data, that is, visualising it. Or, to be honest, it usually starts with a struggle to get the data cleaned, into the right shape and loaded onto the computer. But after that, we visualise.
Our data table contains a bit of everything from the past ten years. Its basic idea is that every piece of information is tied to a month. That is, for each month there are various measurements and other data, such as the monthly mean temperature from the Finnish Meteorological Institute, sales statistics, and word frequencies from discussions on the Suomi24 forum.
Let's plot the monthly ice cream sales found in the table. The data covers a long period, so the visualisation is hard to read as such, but you can pan around it and in that way make it easier to understand.
End of explanation
data2005 = data[0:12]
show(Bar(data2005, label='Kuukausi', values='Jaatelomyynti'))
Explanation: Browse the visualisation above for a moment. What is the basic pattern of ice cream sales?
Randomness
Let's look at the first year more closely. What else do you notice besides the effect of summer?
End of explanation
talvikuukaudet = [i % 12 in (11, 0, 1) for i in range(120)]
datajoulu = data.copy()
datajoulu['Joulu'] = Series(i[-2:] != '12' for i in data['Kuukausi'])
show(Bar(datajoulu[talvikuukaudet], label='Kuukausi', values='Jaatelomyynti', title='Talven myynti', group='Joulu'))
Explanation: December sales are also elevated. Is a lot of ice cream eaten at Christmas? Perhaps.
However, we only looked at a single year, and a single observation should not be trusted. Unfortunately, it is impossible to see from the large chart what December sales are on average compared with the other winter months. We can, however, conveniently pick the desired values from the data and draw new visualisations.
End of explanation
show(Bar(data, label='Kuukausi', values='Jaatelomyynti'))
show(Bar(data, label='Kuukausi', values='Allergialaakemyynti'))
Explanation: Now we can see that winter-month sales vary quite a lot, and the Decembers drawn in red hardly differ from the other months at all. In proper data analysis, statistical tests are used to deal with random variation. A simple visualisation like this, however, already helps you gauge by eye how much random variation there is in the data and form some estimate of whether an observed value really is exceptional.
Combining data
A data analyst is often interested in what kinds of relationships there are between two different things. In our table that means, in practice, whether interesting relationships can be found between the values of different columns.
Let's plot two different quantities from the data: ice cream sales and allergy medicine sales. Can anything be seen from these charts?
End of explanation
show(Scatter(data, x='Jaatelomyynti', y='Allergialaakemyynti'))
Explanation: Plotted directly, you can see that both have a clear pattern that repeats on a yearly level. But do they coincide, and if they do, by how much?
For that we switch to a different kind of chart, namely a scatter plot. There the two quantities go on the x and y axes. Each dot corresponds to one month, and its x and y coordinates are taken from the columns for ice cream and allergy medicine sales.
End of explanation
datakesa = data.copy()
datakesa['Kesa'] = Series(i[-2:] in ('06', '07', '08') for i in data['Kuukausi'])
show(Scatter(datakesa, x='Jaatelomyynti', y='Allergialaakemyynti', color='Kesa'))
Explanation: We see that these two things more or less go hand in hand. When ice cream sales are high, allergy medicine sales are high as well. So the scatter plot looks like a swarm of dots flying from the bottom left corner towards the top right corner.
Can we therefore conclude that one causes the other? That is, does eating a lot of ice cream cause hay fever? Or do people treat hay fever by slurping ice cream? Because that is what the data would seem to be saying, right?
The relationship between these two variables is not that simple, though; there is, in a way, a third wheel involved. Let's draw the same scatter plot again so that the summer months get a green colour.
End of explanation
data.columns.values.tolist()[1:10]
Explanation: Mystery solved! It seems that summer weather causes both the rise in ice cream sales and the hay fever.
Finding new relationships
So far we have not touched the code. Next, however, you get to explore the relationships between different columns yourself. Let's first look at what the columns in the data are called.
End of explanation
# Change the values inside the quotes below
# Do not remove the quotes or add spaces inside them
sarake1 = 'Jaatelomyynti'
sarake2 = 'Lampotila'
# And plot
show(Scatter(data, x=sarake1, y=sarake2))
Explanation: Below, as an example, the relationship between ice cream sales and the monthly mean temperature has been plotted. You can change the column names and run the code again, which lets you see the relationship between the columns you chose.
End of explanation
show(HeatMap(data.corr(), title="Sarakkeiden yhteys (korrelaatio)", palette=YlOrRd9[::-1]))
Explanation: You can, for example, try the frequencies of words on the Suomi24 forum and compare them with each other, or with the other columns.
A bit more automation
The search for relationships between columns can also be automated. For example, we can measure the relationship between columns with the so-called correlation and in this way compare all the columns with each other. The result is drawn as a heat map, where a dark colour corresponds to a strong correlation.
End of explanation |
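If you want a single number for one pair of columns instead of the whole heat map, here is a minimal sketch (using the same two columns as in the scatter-plot example above; pandas' Series.corr returns the Pearson correlation):
# correlation between ice cream sales and the monthly mean temperature
print(data['Jaatelomyynti'].corr(data['Lampotila']))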
2,603 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-2', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: SANDBOX-2
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
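Because the cardinality is 0.N, more than one choice may apply. Assuming the notebook tooling accepts repeated calls (as the PROPERTY VALUE(S) header suggests), an illustrative completed cell could read:
DOC.set_value("Climatology")
DOC.set_value("Interactive")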
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Gas phase atmospheric chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
2,604 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ModelFree Parser Demo
Arthur G. Palmer, III and Michelle L. Gill
2015/12/06
This IPython notebook demonstrates how to parse various types of ModelFree STAR output files with the library mfoutparser. The output of each of the files is returned in two variables, one called loops, which is a dictionary containing all of the table information. (The name is because tables are called loops in the ModelFree STAR file.) The second variable, tags, is a dictionary containing all other information from various output file sections that does not reside in a table.
The ModelFree output files are parsed with the parse_mfout command, which takes a path to the output file as input and returns the two dictionaries mentioned above.
The tag and loop data can all be written to a file using the function write_all_to_file, which takes the dictionaries containing the tag and loop data as inputs in addition to an optional file prefix. It will generate a tab-delimited file for each of the tables and the miscellaneous items in the tags variable.
General data selection can be accomplished with get_data_selection as well as with any Pandas-dataframe compatible method.
A correlation matrix of the correlation values can be created with make_correlation_matrices and written to a file using the function write_correlation_matrix_to_file.
mfoutparser has been tested on python 2.7 and 3.4. It requires the Numpy (tested on version 1.10.1) and Pandas (tested on version 0.17.1) libraries. Matplotlib is also required if plotting of the data is desired.
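As a quick orientation, the basic workflow is just two calls, shown here with the file paths used later in this notebook (the full, runnable cells follow below):
import mfoutparser as mf
tags, loops = mf.parse_mfout('input_data/mfout.singlefield')   # parse a ModelFree STAR output file
mf.write_all_to_file(tags, loops, 'output/mfout_singlefield')  # write every tag and table to tab-delimited files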
Step1: Single Field Data
Input file
Step2: Overview of tags and loops data structure
Preview the returned tags.
Step3: Preview the returned loops (tables).
Step4: Selecting and plotting data
Extract data using a given parameter as an inequality using the get_data_selection function. The selection is based on selector_dict which is a dictionary whose key corresponds to the column name and value corresponds to the selected value. Multiple criteria can be used, as shown below.
The function will accept a specific (in)equality parameter as a string ('==', '<', '>', '<=', '>=') and '==' is the default. To combine selections with different inequalities, simply call the command additional times on the previous command's output.
Step5: This command will extract all relaxation data for residues whose numbers are <= 5
Step6: This command will extract data for a given relaxation parameter at a specific field strength (R1 data at 500 MHz).
Step7: Plot the selected R1 data.
Step8: Select a Model-free parameter and plot.
Step9: Creation of a correlation matrix
The correlation table can also be manipulated so the values are converted to a true matrix representation. Values that are missing are undefined.
Step10: The matrix corresponding to a single residue can be selected as before.
Step11: And just the values of this matrix can be extracted as a numpy two-dimensional array for use in statistical analysis.
Step12: Parts of the matrix can be extracted as normal with numpy arrays. Here are just the $S^2_s$, $\theta$, and $\tau_e$ elements.
Step13: The correlation matrix can be written to a file as well. The filename will have the text correlation_matrix_pivot added to it.
Step14: Multiple Model Data
Input file
Step15: Multiple Field Data
Input file | Python Code:
import os
# Use sans-serif fonts for plotting
import matplotlib
matplotlib.rcParams[u'font.family'] = [u'sans-serif']
matplotlib.rcParams[u'mathtext.default'] = u'regular'
import matplotlib.pyplot as plt
%matplotlib inline
# %matplotlib notebook # Alternative to `%matplotlib inline` that will make plots interactive
# Import mfoutparser
try:
# If mfoutparser is in PYTHONPATH (or is installed), then use a direct import
import mfoutparser as mf
except ImportError:
# Attempt to demo mfoutparser without installing by importing from directory
mfoutparser_path = '../../mfoutparser'
print("Module mfoutparser was not found in PYTHONPATH. Looking for module in directory '{:s}'\n".format(mfoutparser_path))
if os.path.exists(mfoutparser_path):
import imp
mf = imp.load_package('mf', mfoutparser_path)
print("Module mfoutparser was found in the directory '{:s}' and imported.\n".format(mfoutparser_path))
else:
raise ImportError("Module mfoutparser could not be found in the directory '{:s}'.\n".format(mfoutparser_path) + \
"This demonstration will not run until the module is located.")
help(mf)
Explanation: ModelFree Parser Demo
Arthur G. Palmer, III and Michelle L. Gill
2015/12/06
This IPython notebook demonstrates how to parse various types of ModelFree STAR output files with the library mfoutparser. The output of each of the files is returned in two variables, one called loops, which is a dictionary containing all of the table information. (The name is because tables are called loops in the ModelFree STAR file.) The second variable, tags, is a dictionary containing all other information from various output file sections that does not reside in a table.
The ModelFree output files are parsed with the parse_mfout command, which takes a path to the output file as input and returns the two dictionaries mentioned above.
The tag and loop data can all be written to a file using the function write_all_to_file, which takes the dictionaries containing the tag and loop data as inputs in addition to an optional file prefix. It will generate a tab-delimited file for each of the tables and the miscellaneous items in the tags variable.
General data selection can be accomplished with get_data_selection as well as with any Pandas-dataframe compatible method.
A correlation matrix of the correlation values can be created with make_correlation_matrices and written to a file using the function write_correlation_matrix_to_file.
mfoutparser has been tested on python 2.7 and 3.4. It requires the Numpy (tested on version 1.10.1) and Pandas (tested on version 0.17.1) libraries. Matplotlib is also required if plotting of the data is desired.
End of explanation
help(mf.parse_mfout)
help(mf.write_all_to_file)
# Create the output directory if necessary
output_directory = 'output'
if not os.path.exists(output_directory):
os.mkdir(output_directory)
# Parse the ModelFree output file
input_directory = 'input_data'
mfoutfilename = os.sep.join([input_directory, 'mfout.singlefield'])
tags, loops = mf.parse_mfout(mfoutfilename)
# Write everything to a file
output_filename = os.sep.join([output_directory, 'mfout_singlefield'])
mf.write_all_to_file(tags, loops, output_filename)
Explanation: Single Field Data
Input file: mfout.singlefield
End of explanation
tags.keys()
tags['header']
tags['chi_square']
Explanation: Overview of tags and loops data structure
Preview the returned tags.
End of explanation
loops.keys()
loops['header_1']
loops['header_2']
loops['header_3']
loops['chi_square']
loops['relaxation']
Explanation: Preview the returned loops (tables).
End of explanation
help(mf.get_data_selection)
Explanation: Selecting and plotting data
Extract data using a given parameter as an inequality using the get_data_selection function. The selection is based on selector_dict which is a dictionary whose key corresponds to the column name and value corresponds to the selected value. Multiple criteria can be used, as shown below.
The function will accept a specific (in)equality parameter as a string ('==', '<', '>', '<=', '>=') and '==' is the default. To combine selections with different inequalities, simply call the command additional times on the previous command's output.
End of explanation
selector_dict = {'residue':5}
residue_5_rates = mf.get_data_selection(loops['relaxation'], selector_dict, '<=')
residue_5_rates
Explanation: This command will extract all relaxation data for residues whose numbers are <= 5:
End of explanation
selector_dict = {'relaxation_rate_name':'R1', 'field':500.13}
r1_data = mf.get_data_selection(loops['relaxation'], selector_dict)
r1_data
Explanation: This command will extract data for a given relaxation parameter at a specific field strength (R1 data at 500 MHz).
End of explanation
fig = plt.gcf()
fig.set_size_inches(5,3)
ax = plt.axes()
# The extra ".values" property is sometimes required with certain Matplotlib and Pandas version combinations
ax.errorbar(r1_data.residue.values, r1_data.value.values, yerr=r1_data.uncertainty.values,
color='blue', marker='o', ls='', capthick=1.0)
# Using periods followed by the column name is a shortcut for this:
# ax.errorbar(r1_data['residue'].values, r1_data['value'].values, yerr=r1_data['uncertainty'].values,
# color='blue', marker='o', ls='', capthick=1.0)
ax.set_ylim(0,3)
ax.set_xlim(0, r1_data.residue.max()+1)
ax.set_xlabel('Residue')
ax.set_ylabel('$R_1$ $s^{-1}$')
# Set the fontsize for the label and tick labels
fontsize = 12.0
ax.xaxis.label.set_fontsize(fontsize)
ax.yaxis.label.set_fontsize(fontsize)
for tick in ax.xaxis.get_ticklabels():
tick.set_fontsize(fontsize)
for tick in ax.yaxis.get_ticklabels():
tick.set_fontsize(fontsize)
fig.tight_layout()
output_filename = os.sep.join([output_directory, 'R1_500MHz_plot.pdf'])
fig.savefig(output_filename)
output_filename = os.sep.join([output_directory, 'R1_500MHz_plot.png'])
fig.savefig(output_filename, dpi=300)
Explanation: Plot the selected R1 data.
End of explanation
loops['model_1']
selector_dict = {'model_free_name':'S2'}
s2_data = mf.get_data_selection(loops['model_1'], selector_dict)
fig = plt.gcf()
fig.set_size_inches(5,3)
ax = plt.axes()
ax.errorbar(s2_data.residue.values, s2_data.fit_value.values, yerr=s2_data.sim_error.values,
color='blue', marker='o', ls='', capthick=1.0)
ax.set_ylim(0,1)
ax.set_xlim(0, s2_data.residue.max()+1)
ax.set_xlabel('Residue')
ax.set_ylabel('$S^2$')
# Set the fontsize for the label and tick labels
fontsize = 12.0
ax.xaxis.label.set_fontsize(fontsize)
ax.yaxis.label.set_fontsize(fontsize)
for tick in ax.xaxis.get_ticklabels():
tick.set_fontsize(fontsize)
for tick in ax.yaxis.get_ticklabels():
tick.set_fontsize(fontsize)
fig.tight_layout()
output_filename = os.sep.join([output_directory, 'S2_plot.pdf'])
fig.savefig(output_filename)
output_filename = os.sep.join([output_directory, 'S2_plot.png'])
fig.savefig(output_filename, dpi=300)
Explanation: Select a Model-free parameter and plot.
End of explanation
help(mf.make_correlation_matrices)
correlation_matrix = mf.make_correlation_matrices(loops['correlation_matrix'])
correlation_matrix
Explanation: Creation of a correlation matrix
The correlation table can also be manipulated so the values are converted to a true matrix representation. Values that are missing are undefined.
End of explanation
selector_dict = {'residue':2}
residue2_correlation_matrix = mf.get_data_selection(correlation_matrix, selector_dict)
residue2_correlation_matrix
Explanation: The matrix corresponding to a single residue can be selected as before.
End of explanation
residue2_correlation_matrix.values
Explanation: And just the values of this matrix can be extracted as a numpy two-dimensional array for use in statistical analysis.
End of explanation
residue2_correlation_matrix.values[:, 2:]
Explanation: Parts of the matrix can be extracted as normal with numpy arrays. Here are just the $S^2_s$, $\theta$, and $\tau_e$ elements.
End of explanation
help(mf.write_correlation_matrix_to_file)
output_filename = os.sep.join([output_directory, 'mfout_singlefield_loop'])
mf.write_correlation_matrix_to_file(correlation_matrix, output_filename)
Explanation: The correlation matrix can be written to a file as well. The filename will have the text correlation_matrix_pivot added to it.
End of explanation
# Parse the ModelFree output file
mfoutfilename = os.sep.join([input_directory, 'mfout.compare'])
tags, loops = mf.parse_mfout(mfoutfilename)
# Write everything to a file
output_filename = os.sep.join([output_directory, 'mfout_compare'])
mf.write_all_to_file(tags, loops, output_filename)
loops['F_dist']
loops['model_2']
Explanation: Multiple Model Data
Input file: mfout.compare
End of explanation
# Parse the ModelFree output file
mfoutfilename = os.sep.join([input_directory, 'mfout.multifield'])
tags, loops = mf.parse_mfout(mfoutfilename)
# Write everything to a file
output_filename = os.sep.join([output_directory, 'mfout_multifield'])
mf.write_all_to_file(tags, loops, output_filename)
loops['relaxation']
Explanation: Multiple Field Data
Input file: mfout.multifield
End of explanation |
2,605 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex SDK
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step11: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
Step12: Tutorial
Now you are ready to start creating your own AutoML image classification model.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
Step13: Quick peek at your data
This tutorial uses a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
Step14: Create the Dataset
Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters
Step15: Create and run training pipeline
To train an AutoML model, you perform two steps
Step16: Run the training pipeline
Next, you run the DAG to start the training job by invoking the method run, with the following parameters
Step17: Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
Step18: Export as Edge model
You can export an AutoML image classification model as a Edge model which you can then custom deploy to an edge device or download locally. Use the method export_model() to export the model to Cloud Storage, which takes the following parameters
Step19: Download the TFLite model artifacts
Now that you have an exported TFLite version of your model, you can test the exported model locally, but first downloading it from Cloud Storage.
Step20: Instantiate a TFLite interpreter
The TFLite version of the model is not a TensorFlow SavedModel format. You cannot directly use methods like predict(). Instead, one uses the TFLite interpreter. You must first setup the interpreter for the TFLite model as follows
Step21: Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
Step22: Make a prediction with TFLite model
Finally, you do a prediction using your TFLite model, as follows
Step23: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex SDK: AutoML training image classification model for export to edge
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_classification_online_export_edge.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_classification_online_export_edge.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_classification_online_export_edge.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK to create image classification models to export as an Edge model using a Google Cloud AutoML model.
Dataset
The dataset used for this tutorial is the Flowers dataset from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts which of five flower types an image contains: daisy, dandelion, rose, sunflower, or tulip.
Objective
In this tutorial, you create an AutoML image classification model from a Python script using the Vertex SDK, and then export the model as an Edge model in TFLite format. You can alternatively create models with AutoML using the gcloud command-line tool or online using the Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
Export the Edge model from the Model resource to Cloud Storage.
Download the model locally.
Make a local prediction.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
IMPORT_FILE = (
"gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv"
)
Explanation: Tutorial
Now you are ready to start creating your own AutoML image classification model.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
Explanation: Quick peek at your data
This tutorial uses a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
End of explanation
dataset = aip.ImageDataset.create(
display_name="Flowers" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.image.single_label_classification,
)
print(dataset.resource_name)
Explanation: Create the Dataset
Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
import_schema_uri: The data labeling schema for the data items.
This operation may take several minutes.
End of explanation
dag = aip.AutoMLImageTrainingJob(
display_name="flowers_" + TIMESTAMP,
prediction_type="classification",
multi_label=False,
model_type="MOBILE_TF_LOW_LATENCY_1",
base_model=None,
)
print(dag)
Explanation: Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type of task to train the model for.
classification: An image classification model.
object_detection: An image object detection model.
multi_label: If a classification task, whether single (False) or multi-labeled (True).
model_type: The type of model for deployment.
CLOUD: Deployment on Google Cloud
CLOUD_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on Google Cloud.
CLOUD_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on Google Cloud.
MOBILE_TF_VERSATILE_1: Deployment on an edge device.
MOBILE_TF_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on an edge device.
MOBILE_TF_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on an edge device.
base_model: (optional) Transfer learning from existing Model resource -- supported for image classification only.
The instantiated object is the DAG (directed acyclic graph) for the training job.
End of explanation
model = dag.run(
dataset=dataset,
model_display_name="flowers_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=8000,
disable_early_stopping=False,
)
Explanation: Run the training pipeline
Next, you run the DAG to start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
validation_fraction_split: The percentage of the dataset to use for validation.
budget_milli_node_hours: (optional) Maximum training time specified in unit of millihours (1000 = hour).
disable_early_stopping: If True, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.
The run method when completed returns the Model resource.
The execution of the training pipeline will take up to 20 minutes.
End of explanation
# Get model resource ID
models = aip.Model.list(filter="display_name=flowers_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
Explanation: Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model, or you can list all of the models in your project.
End of explanation
response = model.export_model(
artifact_destination=BUCKET_NAME, export_format_id="tflite", sync=True
)
model_package = response["artifactOutputUri"]
Explanation: Export as Edge model
You can export an AutoML image classification model as an Edge model, which you can then custom deploy to an edge device or download locally. Use the method export_model() to export the model to Cloud Storage, which takes the following parameters:
artifact_destination: The Cloud Storage location to store the SavedFormat model artifacts to.
export_format_id: The format to save the model as. For an AutoML image classification model, the supported formats are:
tf-saved-model: TensorFlow SavedFormat for deployment to a container.
tflite: TensorFlow Lite for deployment to an edge or mobile device.
edgetpu-tflite: TensorFlow Lite for TPU
tf-js: TensorFlow for web client
coral-ml: for Coral devices
sync: Whether to perform the operation synchronously or asynchronously.
End of explanation
! gsutil ls $model_package
# Download the model artifacts
! gsutil cp -r $model_package tflite
tflite_path = "tflite/model.tflite"
Explanation: Download the TFLite model artifacts
Now that you have an exported TFLite version of your model, you can test the exported model locally, after first downloading it from Cloud Storage.
End of explanation
import tensorflow as tf
interpreter = tf.lite.Interpreter(model_path=tflite_path)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]["shape"]
print("input tensor shape", input_shape)
Explanation: Instantiate a TFLite interpreter
The TFLite version of the model is not in the TensorFlow SavedModel format, so you cannot directly use methods like predict(). Instead, you use the TFLite interpreter. You must first set up the interpreter for the TFLite model as follows:
Instantiate a TFLite interpreter for the TFLite model.
Instruct the interpreter to allocate input and output tensors for the model.
Get detailed information about the model's input and output tensors, which you will need to know for prediction.
End of explanation
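Before feeding data in, it can help to confirm what the interpreter reports for these tensors; a quick optional check (a sketch using the details fetched above):
# Optional sanity check: inspect the expected input dtype and the output shape.
# AutoML Edge TFLite classification models typically expect uint8 image input,
# which is why the test image is cast to uint8 further below.
print("input dtype:", input_details[0]["dtype"])
print("output shape:", output_details[0]["shape"])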
test_items = ! gsutil cat $IMPORT_FILE | head -n1
test_item = test_items[0].split(",")[0]
with tf.io.gfile.GFile(test_item, "rb") as f:
content = f.read()
test_image = tf.io.decode_jpeg(content)
print("test image shape", test_image.shape)
test_image = tf.image.resize(test_image, (224, 224))
print("test image shape", test_image.shape, test_image.dtype)
test_image = tf.cast(test_image, dtype=tf.uint8).numpy()
Explanation: Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
import numpy as np
data = np.expand_dims(test_image, axis=0)
interpreter.set_tensor(input_details[0]["index"], data)
interpreter.invoke()
softmax = interpreter.get_tensor(output_details[0]["index"])
label = np.argmax(softmax)
print(label)
Explanation: Make a prediction with TFLite model
Finally, you do a prediction using your TFLite model, as follows:
Convert the test image into a batch of a single image (np.expand_dims)
Set the input tensor for the interpreter to your batch of a single image (data).
Invoke the interpreter.
Retrieve the softmax probabilities for the prediction (get_tensor).
Determine which label had the highest probability (np.argmax).
End of explanation
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
2,606 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Mixture Model
This tutorial demonstrates how to marginalize out discrete latent variables in Pyro through the motivating example of a mixture model. We'll focus on the mechanics of parallel enumeration, keeping the model simple by training a trivial 1-D Gaussian model on a tiny 5-point dataset. See also the enumeration tutorial for a broader introduction to parallel enumeration.
Table of contents
Overview
Training a MAP estimator
Serving the model
Step1: Overview
Pyro's TraceEnum_ELBO can automatically marginalize out variables in both the guide and the model. When enumerating guide variables, Pyro can either enumerate sequentially (which is useful if the variables determine downstream control flow), or enumerate in parallel by allocating a new tensor dimension and using nonstandard evaluation to create a tensor of possible values at the variable's sample site. These nonstandard values are then replayed in the model. When enumerating variables in the model, the variables must be enumerated in parallel and must not appear in the guide. Mathematically, guide-side enumeration simply reduces variance in a stochastic ELBO by enumerating all values, whereas model-side enumeration avoids an application of Jensen's inequality by exactly marginalizing out a variable.
Here is our tiny dataset. It has five points.
Step2: Training a MAP estimator
Let's start by learning model parameters weights, locs, and scale given priors and data. We will learn point estimates of these using an AutoDelta guide (named after its delta distributions). Our model will learn global mixture weights, the location of each mixture component, and a shared scale that is common to both components. During inference, TraceEnum_ELBO will marginalize out the assignments of datapoints to clusters.
Step3: To run inference with this (model,guide) pair, we use Pyro's config_enumerate() handler to enumerate over all assignments in each iteration. Since we've wrapped the batched Categorical assignments in a pyro.plate indepencence context, this enumeration can happen in parallel
Step4: Before inference we'll initialize to plausible values. Mixture models are very succeptible to local modes. A common approach is choose the best among many randomly initializations, where the cluster means are initialized from random subsamples of the data. Since we're using an AutoDelta guide, we can initialize by defining a custom init_loc_fn().
Step5: During training, we'll collect both losses and gradient norms to monitor convergence. We can do this using PyTorch's .register_hook() method.
Step6: Here are the learned parameters
Step7: The model's weights are as expected, with about 2/5 of the data in the first component and 3/5 in the second component. Next let's visualize the mixture model.
Step8: Finally note that optimization with mixture models is non-convex and can often get stuck in local optima. For example in this tutorial, we observed that the mixture model gets stuck in an everything-in-one-cluster hypothesis if scale is initialized to be too large.
Serving the model
Step9: Indeed we can run this classifier on new data
Step10: To generate random posterior assignments rather than MAP assignments, we could set temperature=1.
Step11: Since the classes are very well separated, we zoom in to the boundary between classes, around 5.75.
Step12: Predicting membership by enumerating in the guide
A second way to predict class membership is to enumerate in the guide. This doesn't work well for serving classifier models, since we need to run stochastic optimization for each new input data batch, but it is more general in that it can be embedded in larger variational models.
To read cluster assignments from the guide, we'll define a new full_guide that fits both global parameters (as above) and local parameters (which were previously marginalized out). Since we've already learned good values for the global variables, we will block SVI from updating those by using poutine.block.
Step13: We can now examine the guide's local assignment_probs variable.
Step14: MCMC
Next we'll explore the full posterior over component parameters using collapsed NUTS, i.e. we'll use NUTS and marginalize out all discrete latent variables.
Step15: Note that due to nonidentifiability of the mixture components the likelihood landscape has two equally likely modes, near (11,0.5) and (0.5,11). NUTS has difficulty switching between the two modes. | Python Code:
import os
from collections import defaultdict
import torch
import numpy as np
import scipy.stats
from torch.distributions import constraints
from matplotlib import pyplot
%matplotlib inline
import pyro
import pyro.distributions as dist
from pyro import poutine
from pyro.infer.autoguide import AutoDelta
from pyro.optim import Adam
from pyro.infer import SVI, TraceEnum_ELBO, config_enumerate, infer_discrete
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.7.0')
Explanation: Gaussian Mixture Model
This tutorial demonstrates how to marginalize out discrete latent variables in Pyro through the motivating example of a mixture model. We'll focus on the mechanics of parallel enumeration, keeping the model simple by training a trivial 1-D Gaussian model on a tiny 5-point dataset. See also the enumeration tutorial for a broader introduction to parallel enumeration.
Table of contents
Overview
Training a MAP estimator
Serving the model: predicting membership
Predicting membership using discrete inference
Predicting membership by enumerating in the guide
MCMC
End of explanation
data = torch.tensor([0., 1., 10., 11., 12.])
Explanation: Overview
Pyro's TraceEnum_ELBO can automatically marginalize out variables in both the guide and the model. When enumerating guide variables, Pyro can either enumerate sequentially (which is useful if the variables determine downstream control flow), or enumerate in parallel by allocating a new tensor dimension and using nonstandard evaluation to create a tensor of possible values at the variable's sample site. These nonstandard values are then replayed in the model. When enumerating variables in the model, the variables must be enumerated in parallel and must not appear in the guide. Mathematically, guide-side enumeration simply reduces variance in a stochastic ELBO by enumerating all values, whereas model-side enumeration avoids an application of Jensen's inequality by exactly marginalizing out a variable.
Here is our tiny dataset. It has five points.
End of explanation
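As an aside, here is a minimal sketch (not part of the tutorial's model) of how enumeration is requested at the level of a single sample site; config_enumerate simply sets this infer annotation on every discrete site it wraps.
# Toy sketch only -- the real model below uses @config_enumerate instead.
import torch
import pyro
import pyro.distributions as dist

def toy_model():
    probs = torch.ones(3) / 3.
    # "parallel" enumerates all 3 values along a new tensor dimension;
    # "sequential" (guide-side only) would instead replay once per value.
    return pyro.sample("z", dist.Categorical(probs),
                       infer={"enumerate": "parallel"})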
K = 2 # Fixed number of components.
@config_enumerate
def model(data):
# Global variables.
weights = pyro.sample('weights', dist.Dirichlet(0.5 * torch.ones(K)))
scale = pyro.sample('scale', dist.LogNormal(0., 2.))
with pyro.plate('components', K):
locs = pyro.sample('locs', dist.Normal(0., 10.))
with pyro.plate('data', len(data)):
# Local variables.
assignment = pyro.sample('assignment', dist.Categorical(weights))
pyro.sample('obs', dist.Normal(locs[assignment], scale), obs=data)
Explanation: Training a MAP estimator
Let's start by learning model parameters weights, locs, and scale given priors and data. We will learn point estimates of these using an AutoDelta guide (named after its delta distributions). Our model will learn global mixture weights, the location of each mixture component, and a shared scale that is common to both components. During inference, TraceEnum_ELBO will marginalize out the assignments of datapoints to clusters.
End of explanation
optim = pyro.optim.Adam({'lr': 0.1, 'betas': [0.8, 0.99]})
elbo = TraceEnum_ELBO(max_plate_nesting=1)
Explanation: To run inference with this (model,guide) pair, we use Pyro's config_enumerate() handler to enumerate over all assignments in each iteration. Since we've wrapped the batched Categorical assignments in a pyro.plate independence context, this enumeration can happen in parallel: we enumerate only 2 possibilities, rather than 2**len(data) = 32. Finally, to use the parallel version of enumeration, we inform Pyro that we're only using a single plate via max_plate_nesting=1; this lets Pyro know that we're using the rightmost dimension plate and that Pyro can use any other dimension for parallelization.
End of explanation
def init_loc_fn(site):
if site["name"] == "weights":
# Initialize weights to uniform.
return torch.ones(K) / K
if site["name"] == "scale":
return (data.var() / 2).sqrt()
if site["name"] == "locs":
return data[torch.multinomial(torch.ones(len(data)) / len(data), K)]
raise ValueError(site["name"])
def initialize(seed):
global global_guide, svi
pyro.set_rng_seed(seed)
pyro.clear_param_store()
global_guide = AutoDelta(poutine.block(model, expose=['weights', 'locs', 'scale']),
init_loc_fn=init_loc_fn)
svi = SVI(model, global_guide, optim, loss=elbo)
return svi.loss(model, global_guide, data)
# Choose the best among 100 random initializations.
loss, seed = min((initialize(seed), seed) for seed in range(100))
initialize(seed)
print('seed = {}, initial_loss = {}'.format(seed, loss))
Explanation: Before inference we'll initialize to plausible values. Mixture models are very succeptible to local modes. A common approach is choose the best among many randomly initializations, where the cluster means are initialized from random subsamples of the data. Since we're using an AutoDelta guide, we can initialize by defining a custom init_loc_fn().
End of explanation
# Register hooks to monitor gradient norms.
gradient_norms = defaultdict(list)
for name, value in pyro.get_param_store().named_parameters():
value.register_hook(lambda g, name=name: gradient_norms[name].append(g.norm().item()))
losses = []
for i in range(200 if not smoke_test else 2):
loss = svi.step(data)
losses.append(loss)
print('.' if i % 100 else '\n', end='')
pyplot.figure(figsize=(10,3), dpi=100).set_facecolor('white')
pyplot.plot(losses)
pyplot.xlabel('iters')
pyplot.ylabel('loss')
pyplot.yscale('log')
pyplot.title('Convergence of SVI');
pyplot.figure(figsize=(10,4), dpi=100).set_facecolor('white')
for name, grad_norms in gradient_norms.items():
pyplot.plot(grad_norms, label=name)
pyplot.xlabel('iters')
pyplot.ylabel('gradient norm')
pyplot.yscale('log')
pyplot.legend(loc='best')
pyplot.title('Gradient norms during SVI');
Explanation: During training, we'll collect both losses and gradient norms to monitor convergence. We can do this using PyTorch's .register_hook() method.
End of explanation
map_estimates = global_guide(data)
weights = map_estimates['weights']
locs = map_estimates['locs']
scale = map_estimates['scale']
print('weights = {}'.format(weights.data.numpy()))
print('locs = {}'.format(locs.data.numpy()))
print('scale = {}'.format(scale.data.numpy()))
Explanation: Here are the learned parameters:
End of explanation
X = np.arange(-3,15,0.1)
Y1 = weights[0].item() * scipy.stats.norm.pdf((X - locs[0].item()) / scale.item())
Y2 = weights[1].item() * scipy.stats.norm.pdf((X - locs[1].item()) / scale.item())
pyplot.figure(figsize=(10, 4), dpi=100).set_facecolor('white')
pyplot.plot(X, Y1, 'r-')
pyplot.plot(X, Y2, 'b-')
pyplot.plot(X, Y1 + Y2, 'k--')
pyplot.plot(data.data.numpy(), np.zeros(len(data)), 'k*')
pyplot.title('Density of two-component mixture model')
pyplot.ylabel('probability density');
Explanation: The model's weights are as expected, with about 2/5 of the data in the first component and 3/5 in the second component. Next let's visualize the mixture model.
End of explanation
guide_trace = poutine.trace(global_guide).get_trace(data) # record the globals
trained_model = poutine.replay(model, trace=guide_trace) # replay the globals
def classifier(data, temperature=0):
inferred_model = infer_discrete(trained_model, temperature=temperature,
first_available_dim=-2) # avoid conflict with data plate
trace = poutine.trace(inferred_model).get_trace(data)
return trace.nodes["assignment"]["value"]
print(classifier(data))
Explanation: Finally note that optimization with mixture models is non-convex and can often get stuck in local optima. For example in this tutorial, we observed that the mixture model gets stuck in an everything-in-one-cluster hypothesis if scale is initialized to be too large.
Serving the model: predicting membership
Now that we've trained a mixture model, we might want to use the model as a classifier.
During training we marginalized out the assignment variables in the model. While this provides fast convergence, it prevents us from reading the cluster assignments from the guide. We'll discuss two options for treating the model as a classifier: first using infer_discrete (much faster) and second by training a secondary guide using enumeration inside SVI (slower but more general).
Predicting membership using discrete inference
The fastest way to predict membership is to use the infer_discrete handler, together with trace and replay. Let's start out with a MAP classifier, setting infer_discrete's temperature parameter to zero. For a deeper look at effect handlers like trace, replay, and infer_discrete, see the effect handler tutorial.
End of explanation
new_data = torch.arange(-3, 15, 0.1)
assignment = classifier(new_data)
pyplot.figure(figsize=(8, 2), dpi=100).set_facecolor('white')
pyplot.plot(new_data.numpy(), assignment.numpy())
pyplot.title('MAP assignment')
pyplot.xlabel('data value')
pyplot.ylabel('class assignment');
Explanation: Indeed we can run this classifier on new data
End of explanation
print(classifier(data, temperature=1))
Explanation: To generate random posterior assignments rather than MAP assignments, we could set temperature=1.
End of explanation
new_data = torch.arange(5.5, 6.0, 0.005)
assignment = classifier(new_data, temperature=1)
pyplot.figure(figsize=(8, 2), dpi=100).set_facecolor('white')
pyplot.plot(new_data.numpy(), assignment.numpy(), 'bx', color='C0')
pyplot.title('Random posterior assignment')
pyplot.xlabel('data value')
pyplot.ylabel('class assignment');
Explanation: Since the classes are very well separated, we zoom in to the boundary between classes, around 5.75.
End of explanation
@config_enumerate
def full_guide(data):
# Global variables.
with poutine.block(hide_types=["param"]): # Keep our learned values of global parameters.
global_guide(data)
# Local variables.
with pyro.plate('data', len(data)):
assignment_probs = pyro.param('assignment_probs', torch.ones(len(data), K) / K,
constraint=constraints.unit_interval)
pyro.sample('assignment', dist.Categorical(assignment_probs))
optim = pyro.optim.Adam({'lr': 0.2, 'betas': [0.8, 0.99]})
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, full_guide, optim, loss=elbo)
# Register hooks to monitor gradient norms.
gradient_norms = defaultdict(list)
svi.loss(model, full_guide, data) # Initializes param store.
for name, value in pyro.get_param_store().named_parameters():
value.register_hook(lambda g, name=name: gradient_norms[name].append(g.norm().item()))
losses = []
for i in range(200 if not smoke_test else 2):
loss = svi.step(data)
losses.append(loss)
print('.' if i % 100 else '\n', end='')
pyplot.figure(figsize=(10,3), dpi=100).set_facecolor('white')
pyplot.plot(losses)
pyplot.xlabel('iters')
pyplot.ylabel('loss')
pyplot.yscale('log')
pyplot.title('Convergence of SVI');
pyplot.figure(figsize=(10,4), dpi=100).set_facecolor('white')
for name, grad_norms in gradient_norms.items():
pyplot.plot(grad_norms, label=name)
pyplot.xlabel('iters')
pyplot.ylabel('gradient norm')
pyplot.yscale('log')
pyplot.legend(loc='best')
pyplot.title('Gradient norms during SVI');
Explanation: Predicting membership by enumerating in the guide
A second way to predict class membership is to enumerate in the guide. This doesn't work well for serving classifier models, since we need to run stochastic optimization for each new input data batch, but it is more general in that it can be embedded in larger variational models.
To read cluster assignments from the guide, we'll define a new full_guide that fits both global parameters (as above) and local parameters (which were previously marginalized out). Since we've already learned good values for the global variables, we will block SVI from updating those by using poutine.block.
End of explanation
assignment_probs = pyro.param('assignment_probs')
pyplot.figure(figsize=(8, 3), dpi=100).set_facecolor('white')
pyplot.plot(data.data.numpy(), assignment_probs.data.numpy()[:, 0], 'ro',
label='component with mean {:0.2g}'.format(locs[0]))
pyplot.plot(data.data.numpy(), assignment_probs.data.numpy()[:, 1], 'bo',
label='component with mean {:0.2g}'.format(locs[1]))
pyplot.title('Mixture assignment probabilities')
pyplot.xlabel('data value')
pyplot.ylabel('assignment probability')
pyplot.legend(loc='center');
Explanation: We can now examine the guide's local assignment_probs variable.
End of explanation
from pyro.infer.mcmc.api import MCMC
from pyro.infer.mcmc import NUTS
pyro.set_rng_seed(2)
kernel = NUTS(model)
mcmc = MCMC(kernel, num_samples=250, warmup_steps=50)
mcmc.run(data)
posterior_samples = mcmc.get_samples()
X, Y = posterior_samples["locs"].t()
pyplot.figure(figsize=(8, 8), dpi=100).set_facecolor('white')
h, xs, ys, image = pyplot.hist2d(X.numpy(), Y.numpy(), bins=[20, 20])
pyplot.contour(np.log(h + 3).T, extent=[xs.min(), xs.max(), ys.min(), ys.max()],
colors='white', alpha=0.8)
pyplot.title('Posterior density as estimated by collapsed NUTS')
pyplot.xlabel('loc of component 0')
pyplot.ylabel('loc of component 1')
pyplot.tight_layout()
Explanation: MCMC
Next we'll explore the full posterior over component parameters using collapsed NUTS, i.e. we'll use NUTS and marginalize out all discrete latent variables.
End of explanation
pyplot.figure(figsize=(8, 3), dpi=100).set_facecolor('white')
pyplot.plot(X.numpy(), color='red')
pyplot.plot(Y.numpy(), color='blue')
pyplot.xlabel('NUTS step')
pyplot.ylabel('loc')
pyplot.title('Trace plot of loc parameter during NUTS inference')
pyplot.tight_layout()
Explanation: Note that due to nonidentifiability of the mixture components the likelihood landscape has two equally likely modes, near (11,0.5) and (0.5,11). NUTS has difficulty switching between the two modes.
End of explanation |
2,607 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Net Surgery
Caffe networks can be transformed to your particular needs by editing the model parameters. The data, diffs, and parameters of a net are all exposed in pycaffe.
Roll up your sleeves for net surgery with pycaffe!
Step1: Designer Filters
To show how to load, manipulate, and save parameters we'll design our own filters into a simple network that's only a single convolution layer. This net has two blobs, data for the input and conv for the convolution output and one parameter conv for the convolution filter weights and biases.
Step2: The convolution weights are initialized from Gaussian noise while the biases are initialized to zero. These random filters give output somewhat like edge detections.
Step3: Raising the bias of a filter will correspondingly raise its output
Step4: Altering the filter weights is more exciting since we can assign any kernel like Gaussian blur, the Sobel operator for edges, and so on. The following surgery turns the 0th filter into a Gaussian blur and the 1st and 2nd filters into the horizontal and vertical gradient parts of the Sobel operator.
See how the 0th output is blurred, the 1st picks up horizontal edges, and the 2nd picks up vertical edges.
Step5: With net surgery, parameters can be transplanted across nets, regularized by custom per-parameter operations, and transformed according to your schemes.
Casting a Classifier into a Fully Convolutional Network
Let's take the standard Caffe Reference ImageNet model "CaffeNet" and transform it into a fully convolutional net for efficient, dense inference on large inputs. This model generates a classification map that covers a given input size instead of a single classification. In particular an 8 $\times$ 8 classification map on a 451 $\times$ 451 input gives 64x the output in only 3x the time. The computation exploits a natural efficiency of convolutional network (convnet) structure by amortizing the computation of overlapping receptive fields.
To do so we translate the InnerProduct matrix multiplication layers of CaffeNet into Convolutional layers. This is the only change
Step6: The only differences needed in the architecture are to change the fully connected classifier inner product layers into convolutional layers with the right filter size -- 6 x 6, since the reference model classifiers take the 36 elements of pool5 as input -- and stride 1 for dense classification. Note that the layers are renamed so that Caffe does not try to blindly load the old parameters when it maps layer names to the pretrained model.
Step7: Consider the shapes of the inner product parameters. The weight dimensions are the output and input sizes while the bias dimension is the output size.
Step8: The convolution weights are arranged in output $\times$ input $\times$ height $\times$ width dimensions. To map the inner product weights to convolution filters, we could roll the flat inner product vectors into channel $\times$ height $\times$ width filter matrices, but actually these are identical in memory (as row major arrays) so we can assign them directly.
The biases are identical to those of the inner product.
Let's transplant!
Step9: Next, save the new model weights.
Step10: To conclude, let's make a classification map from the example cat image and visualize the confidence of "tiger cat" as a probability heatmap. This gives an 8-by-8 prediction on overlapping regions of the 451 $\times$ 451 input. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Make sure that caffe is on the python path:
caffe_root = '../' # this file is expected to be in {caffe_root}/examples
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe
# configure plotting
plt.rcParams['figure.figsize'] = (10, 10)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
Explanation: Net Surgery
Caffe networks can be transformed to your particular needs by editing the model parameters. The data, diffs, and parameters of a net are all exposed in pycaffe.
Roll up your sleeves for net surgery with pycaffe!
End of explanation
# Load the net, list its data and params, and filter an example image.
caffe.set_mode_cpu()
net = caffe.Net('net_surgery/conv.prototxt', caffe.TEST)
print("blobs {}\nparams {}".format(net.blobs.keys(), net.params.keys()))
# load image and prepare as a single input batch for Caffe
im = np.array(caffe.io.load_image('images/cat_gray.jpg', color=False)).squeeze()
plt.title("original image")
plt.imshow(im)
plt.axis('off')
im_input = im[np.newaxis, np.newaxis, :, :]
net.blobs['data'].reshape(*im_input.shape)
net.blobs['data'].data[...] = im_input
Explanation: Designer Filters
To show how to load, manipulate, and save parameters we'll design our own filters into a simple network that's only a single convolution layer. This net has two blobs, data for the input and conv for the convolution output and one parameter conv for the convolution filter weights and biases.
End of explanation
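Before editing anything, it can be handy to peek at the shapes involved; a small sketch using the same pycaffe accessors that the surgery below relies on:
# Inspection sketch -- uses the `net` loaded in the previous cell.
print("data blob shape: {}".format(net.blobs['data'].data.shape))
print("conv blob shape: {}".format(net.blobs['conv'].data.shape))
print("conv weight shape: {}".format(net.params['conv'][0].data.shape))  # (n_filters, 1, k, k)
print("conv bias shape: {}".format(net.params['conv'][1].data.shape))    # (n_filters,)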
# helper to show filter outputs
def show_filters(net):
net.forward()
plt.figure()
filt_min, filt_max = net.blobs['conv'].data.min(), net.blobs['conv'].data.max()
for i in range(3):
plt.subplot(1,4,i+2)
plt.title("filter #{} output".format(i))
plt.imshow(net.blobs['conv'].data[0, i], vmin=filt_min, vmax=filt_max)
plt.tight_layout()
plt.axis('off')
# filter the image with initial
show_filters(net)
Explanation: The convolution weights are initialized from Gaussian noise while the biases are initialized to zero. These random filters give output somewhat like edge detections.
End of explanation
# pick first filter output
conv0 = net.blobs['conv'].data[0, 0]
print("pre-surgery output mean {:.2f}".format(conv0.mean()))
# set first filter bias to 1
net.params['conv'][1].data[0] = 1.
net.forward()
print("post-surgery output mean {:.2f}".format(conv0.mean()))
Explanation: Raising the bias of a filter will correspondingly raise its output:
End of explanation
ksize = net.params['conv'][0].data.shape[2:]
# make Gaussian blur
sigma = 1.
y, x = np.mgrid[-ksize[0]//2 + 1:ksize[0]//2 + 1, -ksize[1]//2 + 1:ksize[1]//2 + 1]
g = np.exp(-((x**2 + y**2)/(2.0*sigma**2)))
gaussian = (g / g.sum()).astype(np.float32)
net.params['conv'][0].data[0] = gaussian
# make Sobel operator for edge detection
net.params['conv'][0].data[1:] = 0.
sobel = np.array((-1, -2, -1, 0, 0, 0, 1, 2, 1), dtype=np.float32).reshape((3,3))
net.params['conv'][0].data[1, 0, 1:-1, 1:-1] = sobel # horizontal
net.params['conv'][0].data[2, 0, 1:-1, 1:-1] = sobel.T # vertical
show_filters(net)
Explanation: Altering the filter weights is more exciting since we can assign any kernel like Gaussian blur, the Sobel operator for edges, and so on. The following surgery turns the 0th filter into a Gaussian blur and the 1st and 2nd filters into the horizontal and vertical gradient parts of the Sobel operator.
See how the 0th output is blurred, the 1st picks up horizontal edges, and the 2nd picks up vertical edges.
End of explanation
!diff net_surgery/bvlc_caffenet_full_conv.prototxt ../models/bvlc_reference_caffenet/deploy.prototxt
Explanation: With net surgery, parameters can be transplanted across nets, regularized by custom per-parameter operations, and transformed according to your schemes.
Casting a Classifier into a Fully Convolutional Network
Let's take the standard Caffe Reference ImageNet model "CaffeNet" and transform it into a fully convolutional net for efficient, dense inference on large inputs. This model generates a classification map that covers a given input size instead of a single classification. In particular an 8 $\times$ 8 classification map on a 451 $\times$ 451 input gives 64x the output in only 3x the time. The computation exploits a natural efficiency of convolutional network (convnet) structure by amortizing the computation of overlapping receptive fields.
To do so we translate the InnerProduct matrix multiplication layers of CaffeNet into Convolutional layers. This is the only change: the other layer types are agnostic to spatial size. Convolution is translation-invariant, activations are elementwise operations, and so on. The fc6 inner product when carried out as convolution by fc6-conv turns into a 6 $\times$ 6 filter with stride 1 on pool5. Back in image space this gives a classification for each 227 $\times$ 227 box with stride 32 in pixels. Remember the equation for output map / receptive field size, output = (input - kernel_size) / stride + 1, and work out the indexing details for a clear understanding.
End of explanation
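To sanity-check the receptive-field arithmetic quoted above, here is a tiny standalone calculation (plain Python, nothing Caffe-specific):
# output = (input - kernel_size) / stride + 1, applied at whole-image scale:
# each classifier output sees a 227x227 window, and the windows are 32 pixels apart.
def output_size(input_size, kernel_size, stride):
    return (input_size - kernel_size) // stride + 1

print(output_size(451, 227, 32))  # 8  -> the 8 x 8 classification map
print(output_size(227, 227, 32))  # 1  -> the original single classification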
# Load the original network and extract the fully connected layers' parameters.
net = caffe.Net('../models/bvlc_reference_caffenet/deploy.prototxt',
'../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
caffe.TEST)
params = ['fc6', 'fc7', 'fc8']
# fc_params = {name: (weights, biases)}
fc_params = {pr: (net.params[pr][0].data, net.params[pr][1].data) for pr in params}
for fc in params:
print '{} weights are {} dimensional and biases are {} dimensional'.format(fc, fc_params[fc][0].shape, fc_params[fc][1].shape)
Explanation: The only differences needed in the architecture are to change the fully connected classifier inner product layers into convolutional layers with the right filter size -- 6 x 6, since the reference model classifiers take the 36 elements of pool5 as input -- and stride 1 for dense classification. Note that the layers are renamed so that Caffe does not try to blindly load the old parameters when it maps layer names to the pretrained model.
End of explanation
# Load the fully convolutional network to transplant the parameters.
net_full_conv = caffe.Net('net_surgery/bvlc_caffenet_full_conv.prototxt',
'../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
caffe.TEST)
params_full_conv = ['fc6-conv', 'fc7-conv', 'fc8-conv']
# conv_params = {name: (weights, biases)}
conv_params = {pr: (net_full_conv.params[pr][0].data, net_full_conv.params[pr][1].data) for pr in params_full_conv}
for conv in params_full_conv:
print '{} weights are {} dimensional and biases are {} dimensional'.format(conv, conv_params[conv][0].shape, conv_params[conv][1].shape)
Explanation: Consider the shapes of the inner product parameters. The weight dimensions are the output and input sizes while the bias dimension is the output size.
End of explanation
for pr, pr_conv in zip(params, params_full_conv):
conv_params[pr_conv][0].flat = fc_params[pr][0].flat # flat unrolls the arrays
conv_params[pr_conv][1][...] = fc_params[pr][1]
Explanation: The convolution weights are arranged in output $\times$ input $\times$ height $\times$ width dimensions. To map the inner product weights to convolution filters, we could roll the flat inner product vectors into channel $\times$ height $\times$ width filter matrices, but actually these are identical in memory (as row major arrays) so we can assign them directly.
The biases are identical to those of the inner product.
Let's transplant!
End of explanation
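As a quick illustration of the "identical in memory" point above, a standalone numpy check (not part of the transplant itself):
# A fake fc weight row of length 3*2*2 = 12, viewed as a channel x height x width filter:
import numpy as np
flat = np.arange(12, dtype=np.float32)
filt = flat.reshape(3, 2, 2)                 # row-major reshape, the same layout Caffe uses
assert np.array_equal(filt.ravel(), flat)    # same numbers in the same order
print(filt.shape)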
net_full_conv.save('net_surgery/bvlc_caffenet_full_conv.caffemodel')
Explanation: Next, save the new model weights.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# load input and configure preprocessing
im = caffe.io.load_image('images/cat.jpg')
transformer = caffe.io.Transformer({'data': net_full_conv.blobs['data'].data.shape})
transformer.set_mean('data', np.load('../python/caffe/imagenet/ilsvrc_2012_mean.npy').mean(1).mean(1))
transformer.set_transpose('data', (2,0,1))
transformer.set_channel_swap('data', (2,1,0))
transformer.set_raw_scale('data', 255.0)
# make classification map by forward and print prediction indices at each location
out = net_full_conv.forward_all(data=np.asarray([transformer.preprocess('data', im)]))
print out['prob'][0].argmax(axis=0)
# show net input and confidence map (probability of the top prediction at each location)
plt.subplot(1, 2, 1)
plt.imshow(transformer.deprocess('data', net_full_conv.blobs['data'].data[0]))
plt.subplot(1, 2, 2)
plt.imshow(out['prob'][0,281])
Explanation: To conclude, let's make a classification map from the example cat image and visualize the confidence of "tiger cat" as a probability heatmap. This gives an 8-by-8 prediction on overlapping regions of the 451 $\times$ 451 input.
End of explanation |
2,608 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 5
Step1: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Create new features
As in Week 2, we consider features that are some transformations of inputs.
Step3: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
On the other hand, taking square root of sqft_living will decrease the separation between big house and small house. The owner may not be exactly twice as happy for getting a house that is twice as big.
Learn regression weights with L1 penalty
Let us fit a model with all the features available, plus the features we just created above.
Step4: Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
Step5: Find what features had non-zero weight.
Step6: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
QUIZ QUESTION
Step7: Next, we write a loop that does the following
Step8: QUIZ QUESTIONS
1. What was the best value for the l1_penalty?
2. What is the RSS on TEST data of the model with the best l1_penalty?
Step9: QUIZ QUESTION
Also, using this value of L1 penalty, how many nonzero weights do you have?
Step10: Limit the number of nonzero weights
What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb" --- an interpretable model that has only a few features in it.
In this section, you are going to implement a simple, two-phase procedure to achieve this goal
Step11: Exploring the larger range of values to find a narrow range with the desired sparsity
Let's define a wide range of possible l1_penalty_values
Step12: Now, implement a loop that search through this space of possible l1_penalty values
Step13: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
More formally, find
Step14: QUIZ QUESTIONS
What values did you find for l1_penalty_min and l1_penalty_max?
Exploring the narrow range of values to find the solution with the right number of non-zeros that has lowest RSS on the validation set
We will now explore the narrow region of l1_penalty values we found
Step15: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20)
Step16: QUIZ QUESTIONS
1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
2. What features in this model have non-zero coefficients? | Python Code:
import graphlab
Explanation: Regression Week 5: Feature Selection and LASSO (Interpretation)
In this notebook, you will use LASSO to select features, building on a pre-implemented solver for LASSO (using GraphLab Create, though you can use other solvers). You will:
* Run LASSO with different L1 penalties.
* Choose best L1 penalty using a validation set.
* Choose best L1 penalty using a validation set, with additional constraint on the size of subset.
In the second notebook, you will implement your own LASSO solver, using coordinate descent.
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to int, before creating a new feature.
sales['floors'] = sales['floors'].astype(int)
sales['floors_square'] = sales['floors']*sales['floors']
Explanation: Create new features
As in Week 2, we consider features that are some transformations of inputs.
End of explanation
all_features = ['bedrooms',
'bedrooms_square',
'bathrooms',
'sqft_living',
'sqft_living_sqrt',
'sqft_lot',
'sqft_lot_sqrt',
'floors',
'floors_square',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated']
Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
On the other hand, taking square root of sqft_living will decrease the separation between big house and small house. The owner may not be exactly twice as happy for getting a house that is twice as big.
Learn regression weights with L1 penalty
Let us fit a model with all the features available, plus the features we just created above.
End of explanation
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=1e10)
Explanation: Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
End of explanation
model_all['coefficients'].print_rows(num_rows=18)
Explanation: Find what features had non-zero weight.
End of explanation
(training_and_validation, testing) = sales.random_split(.9,seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training into train and validate
Explanation: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
QUIZ QUESTION:
According to this list of weights, which of the features have been chosen?
Selecting an L1 penalty
To find a good L1 penalty, we will explore multiple values using a validation set. Let us do three way split into train, validation, and test sets:
* Split our sales data into 2 sets: training and test
* Further split our training data into two sets: train, validation
Be very careful that you use seed = 1 to ensure you get the same answer!
End of explanation
import numpy as np # note this allows us to refer to numpy as np instead
all_rss, models = dict(), dict()
for l1_penalty in np.logspace(1, 7, num=13):
model = graphlab.linear_regression.create(training,
target='price',
features=all_features,
validation_set=None,
l2_penalty=0.,
l1_penalty=l1_penalty,
verbose = False)
predictions = model.predict(validation)
residuals = validation['price'] - predictions
RSS = (residuals**2).sum()
print l1_penalty, "\t\t", RSS, model['coefficients']['value'].nnz()
all_rss[l1_penalty] = RSS
models[l1_penalty] = model
best_l1_penalty = min(all_rss, key=all_rss.get)
print "Min L1_penalty", best_l1_penalty, RSS
print "NNZ", models[best_l1_penalty]['coefficients']['value'].nnz()
Explanation: Next, we write a loop that does the following:
* For l1_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, type np.logspace(1, 7, num=13).)
* Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list.
* Compute the RSS on VALIDATION data (here you will want to use .predict()) for that l1_penalty
* Report which l1_penalty produced the lowest RSS on validation data.
When you call linear_regression.create() make sure you set validation_set = None.
Note: you can turn off the print out of linear_regression.create() with verbose = False
End of explanation
model_test = graphlab.linear_regression.create(training,
target='price',
features=all_features,
validation_set=None,
l2_penalty=0.,
l1_penalty= best_l1_penalty,
verbose = False)
predictions = model_test.predict(testing)
residuals = testing['price'] - predictions
RSS = (residuals*residuals).sum()
print "RSS on TEST data", RSS
Explanation: QUIZ QUESTIONS
1. What was the best value for the l1_penalty?
2. What is the RSS on TEST data of the model with the best l1_penalty?
End of explanation
model_test['coefficients'].print_rows(num_rows=18)
Explanation: QUIZ QUESTION
Also, using this value of L1 penalty, how many nonzero weights do you have?
End of explanation
max_nonzeros = 7
Explanation: Limit the number of nonzero weights
What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb" --- an interpretable model that has only a few features in it.
In this section, you are going to implement a simple, two-phase procedure to achieve this goal:
1. Explore a large range of l1_penalty values to find a narrow region of l1_penalty values where models are likely to have the desired number of non-zero weights.
2. Further explore the narrow region you found to find a good value for l1_penalty that achieves the desired sparsity. Here, we will again use a validation set to choose the best value for l1_penalty.
End of explanation
l1_penalty_values = np.logspace(8, 10, num=20)
Explanation: Exploring the larger range of values to find a narrow range with the desired sparsity
Let's define a wide range of possible l1_penalty_values:
End of explanation
l1_penalties_max, l1_penalties_min = list(), list()
for l1_penalty in l1_penalty_values:
model = graphlab.linear_regression.create(training,
target='price',
features=all_features,
validation_set=None,
l2_penalty=0.,
l1_penalty=l1_penalty,
verbose = False)
nr_of_nnz_weights = model['coefficients']['value'].nnz()
if nr_of_nnz_weights > max_nonzeros:
#print "max", l1_penalty
l1_penalties_max.append(l1_penalty)
if nr_of_nnz_weights < max_nonzeros:
#print "min", l1_penalty
l1_penalties_min.append(l1_penalty)
print l1_penalty, nr_of_nnz_weights
Explanation: Now, implement a loop that searches through this space of possible l1_penalty values:
For l1_penalty in np.logspace(8, 10, num=20):
Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
Extract the weights of the model and count the number of nonzeros. Save the number of nonzeros to a list.
Hint: model['coefficients']['value'] gives you an SArray with the parameters you learned. If you call the method .nnz() on it, you will find the number of non-zero parameters!
End of explanation
l1_penalty_min = max(l1_penalties_max)
l1_penalty_max = min(l1_penalties_min)
print l1_penalty_min, l1_penalty_max
Explanation: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
More formally, find:
* The largest l1_penalty that has more non-zeros than max_nonzeros (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights)
* Store this value in the variable l1_penalty_min (we will use it later)
* The smallest l1_penalty that has fewer non-zeros than max_nonzeros (if we pick a penalty larger than this value, we will definitely have too few non-zero weights)
* Store this value in the variable l1_penalty_max (we will use it later)
Hint: there are many ways to do this, e.g.:
* Programmatically within the loop above
* Creating a list with the number of non-zeros for each value of l1_penalty and inspecting it to find the appropriate boundaries.
End of explanation
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
Explanation: QUIZ QUESTIONS
What values did you find for l1_penalty_min and l1_penalty_max?
Exploring the narrow range of values to find the solution with the right number of non-zeros that has lowest RSS on the validation set
We will now explore the narrow region of l1_penalty values we found:
End of explanation
l1_penalties = dict()
coefficients = dict()
for l1_penalty in l1_penalty_values:
model = graphlab.linear_regression.create(training,
target='price',
features=all_features,
validation_set=None,
l2_penalty=0.,
l1_penalty=l1_penalty,
verbose = False)
predictions = model.predict(validation)
residuals = validation['price'] - predictions
RSS = (residuals*residuals).sum()
nr_of_nnz_weights = model['coefficients']['value'].nnz()
if nr_of_nnz_weights == 7:
l1_penalties[l1_penalty] = RSS
coefficients[l1_penalty] = model
print nr_of_nnz_weights, RSS, model['coefficients'].print_rows(num_rows=18)
Explanation: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20):
Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
Measure the RSS of the learned model on the VALIDATION set
Find the model that has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros.
End of explanation
best_l1_penalty = min(l1_penalties, key=l1_penalties.get)
print best_l1_penalty, l1_penalties[best_l1_penalty]
print coefficients[best_l1_penalty]['coefficients'].print_rows(num_rows=18)
Explanation: QUIZ QUESTIONS
1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
2. What features in this model have non-zero coefficients?
End of explanation |
2,609 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-2', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnotic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
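For an INTEGER property such as 2.2. Timestep If Not From Ocean, the value is passed without quotes; the 1800-second time step below is a hypothetical example only.
# Illustrative completion only - a hypothetical tracer time step in seconds
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
DOC.set_value(1800)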
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Decribe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
Properties of gas exchange in ocean biogeochemistry
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
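For a BOOLEAN property such as 6.1. CO2 Exchange Present, the value is simply True or False; the value below is illustrative and should reflect your model.
# Illustrative completion only
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
DOC.set_value(True)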
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic levels are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Dissolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of the sinking speed of particles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
2,610 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spam Analysis
This notebook will contain the code and documentation for the spam analysis. A report on the analysis is generated in the Dropbox Paper document https://paper.dropbox.com/doc/Spam-Analysis-of-Maya-questions-NubDXwEKR6NDOBghYGgQ4
Step1: Spam by Source and User type (Registered & non-registered)
Step2: Create Spam Type and Spam By Repeat Tables
Step3: Repeated Questions Count
Check for Greetings
Step4: Check for Testing Questions
Step5: Check for Random Characters
Step6: Check for Irrelevant Questions
Step7: Check for Abusive Questions
Step8: Check for Repeated Questions
Step9: Spam Analysis by Type
Step10: Repeated questions by Time and User | Python Code:
from database import Database
database = Database(
'<host name>',
'<database name>',
'<user name>',
'<password>',
'utf8mb4'
)
Explanation: Spam Analysis
This notebook will contain the code and documentation for the spam analysis. A report on the analysis is generated in the Dropbox Paper document https://paper.dropbox.com/doc/Spam-Analysis-of-Maya-questions-NubDXwEKR6NDOBghYGgQ4.
First we will connect to our database.
End of explanation
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import pandas as pd
connection = database.connect_with_pymysql()
if connection:
try:
with connection.cursor() as cursor:
query = "select count(*) as count, source from questions where status='spam' GROUP BY source,user_id is not null"
cursor.execute(query)
data = cursor.fetchall()
df2 = pd.DataFrame(data)
df2.plot.bar()
plt.show()
print data
finally:
connection.close()
Explanation: Spam by Source and User type (Registered & non-registered)
End of explanation
def create_spam_type_table(cursor, connection):
create_schema_sql = "CREATE TABLE spam_type(id int(11) unsigned NOT NULL AUTO_INCREMENT,question_id int(10) unsigned NOT NULL,type enum('repeat','abusive','random','greeting','irrelevant','test') DEFAULT NULL,PRIMARY KEY (id),KEY question_id (question_id),CONSTRAINT spam_type_ibfk_1 FOREIGN KEY (question_id) REFERENCES questions (id)) ENGINE=InnoDB DEFAULT CHARSET=utf8"
cursor.execute(create_schema_sql)
connection.commit()
def create_spam_repeat_table(cursor, connection):
create_schema_sql = "CREATE TABLE spam_by_repeat(id int(11) unsigned NOT NULL AUTO_INCREMENT,question_id int(10) unsigned NOT NULL,parent_id int(10) unsigned NOT NULL,is_same_user tinyint(1) NOT NULL,time_dif int(11) unsigned NOT NULL,PRIMARY KEY (id),KEY question_id (question_id),KEY parent_id (parent_id),CONSTRAINT spam_by_repeat_ibfk_1 FOREIGN KEY (question_id) REFERENCES questions (id),CONSTRAINT spam_by_repeat_ibfk_2 FOREIGN KEY (parent_id) REFERENCES questions (id)) ENGINE=InnoDB DEFAULT CHARSET=utf8"
cursor.execute(create_schema_sql)
connection.commit()
Explanation: Create Spam Type and Spam By Repeat Tables
End of explanation
from pyxdameraulevenshtein import damerau_levenshtein_distance as dl_distance
def check_for_greeting(sentence):
greeting_word = ['hi', 'hey', 'hello', 'bye', 'thank', 'কেমন'.decode('utf-8')]
if len(sentence.split(' ')) < 10:
# greetings
for words in sentence.split(' '):
for i in greeting_word:
if dl_distance(i, words) <= 1:
sql = "INSERT INTO spam_type(question_id, type) VALUES('" + str(record['id']) + "','greeting')"
cursor.execute(sql)
connection.commit()
return True
return False
Explanation: Repeated Questions Count
Check for Greetings
End of explanation
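As a quick illustration of the edit-distance threshold used in check_for_greeting (assuming pyxdameraulevenshtein is installed, as imported above): a misspelling within distance 1 of a greeting word is still tagged as a greeting.
# 'helo' is one edit away from 'hello', so a question containing it is tagged as a greeting
print dl_distance('hello', 'helo')   # 1
print dl_distance('hello', 'price')  # greater than 1, so not a greeting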
def check_for_test(sentence):
testing_word = ['test', 'check', 'testing', 'checking']
if len(sentence.split(' ')) < 10:
# testing words
for words in sentence.split(' '):
for i in testing_word:
if dl_distance(i, words) <= 1 or i in sentence:
sql = "INSERT INTO spam_type(question_id, type) VALUES('" + str(record['id']) + "','test')"
cursor.execute(sql)
connection.commit()
return True
return False
Explanation: Check for Testing Questions
End of explanation
def mark_as_random(record):
sql = "INSERT INTO spam_type(question_id, type) VALUES('" + str(record['id']) + "','random')"
cursor.execute(sql)
connection.commit()
Explanation: Check for Random Characters
End of explanation
def check_for_irrelevant(sentence):
irrelevant_word = ['voice']
if len(sentence.split(' ')) < 10:
# irrelevant words
for words in sentence.split(' '):
for i in irrelevant_word:
if dl_distance(i, words) <= 1:
sql = "INSERT INTO spam_type(question_id, type) VALUES('" + str(record['id']) + "','irrelevant')"
cursor.execute(sql)
connection.commit()
return True
return False
Explanation: Check for Irrelevant Questions
End of explanation
def check_for_abusive(sentence):
abusive_word = ['sex','সেক্স'.decode('utf-8'),'যৌন'.decode('utf-8'),'দুধ'.decode('utf-8'),'চুদ'.decode('utf-8'), 'লিঙ্গ'.decode('utf-8')]
# abusive words
for words in sentence.split(' '):
for i in abusive_word:
if dl_distance(i, words) <= 1 or i in sentence:
sql = "INSERT INTO spam_type(question_id, type) VALUES('" + str(record['id']) + "','abusive')"
cursor.execute(sql)
connection.commit()
return True
return False
Explanation: Check for Abusive Questions
End of explanation
def check_for_repeat(record):
sql = "SELECT id, email, created_at FROM questions WHERE id < " + str(record['id']) + " and body='" + record['body'] + "'"
cursor.execute(sql)
result = cursor.fetchall()
if result:
match = 0
for i in result:
if i['id'] > match:
match = i['id']
data = i
sql = "INSERT INTO spam_type(question_id, type) VALUES('" + str(record['id']) + "','repeat')"
cursor.execute(sql)
connection.commit()
if (data['email'] == record['email']):
sql = "INSERT INTO spam_by_repeat(question_id, parent_id, is_same_user, time_dif) VALUES('" + str(record['id']) + "','" + str(data['id']) + "','1','" + str(abs((record['created_at'] - data['created_at']).total_seconds())) + "')"
else:
sql = "INSERT INTO spam_by_repeat(question_id, parent_id, is_same_user, time_dif) VALUES('" + str(record['id']) + "','" + str(data['id']) + "','0','" + str(abs((record['created_at'] - data['created_at']).total_seconds())) + "')"
cursor.execute(sql)
connection.commit()
return True
return False
Explanation: Check for Repeated Questions
End of explanation
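The inserts above build SQL strings by concatenation; a safer equivalent relies on the driver's parameter substitution. The lines below are only a sketch of the same spam_by_repeat insert written with pymysql-style placeholders, not a change to the function above.
# Same insert, but letting the database driver escape the values
sql = "INSERT INTO spam_by_repeat(question_id, parent_id, is_same_user, time_dif) VALUES (%s, %s, %s, %s)"
cursor.execute(sql, (record['id'], data['id'], 1, abs((record['created_at'] - data['created_at']).total_seconds())))
connection.commit()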
connection = database.connect_with_pymysql()
if connection:
try:
with connection.cursor() as cursor:
create_spam_type_table(cursor, connection)
create_spam_repeat_table(cursor, connection)
sql = "SELECT id, body, email, source, created_at FROM questions WHERE status='spam'"
cursor.execute(sql)
data = cursor.fetchall()
for record in data:
if record['body']:
# classify the question body, stopping at the first matching spam type
if check_for_greeting(record['body']):
continue
elif check_for_test(record['body']):
continue
elif check_for_irrelevant(record['body']):
continue
elif len(record['body'].split(' ')) <= 3:
mark_as_random(record)
continue
elif check_for_repeat(record):
continue
elif check_for_abusive(record['body']):
continue
sql = "INSERT INTO spam_type(question_id) VALUES('" + str(record['id']) + "')"
cursor.execute(sql)
connection.commit()
else:
# random meaningless characters or blank message
mark_as_random(record)
finally:
connection.close()
Explanation: Spam Analysis by Type
End of explanation
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import sys
timeline = []
connection = database.connect_with_pymysql()
if connection:
try:
with connection.cursor() as cursor:
sql = "SELECT time_dif FROM spam_by_repeat where is_same_user=0"
cursor.execute(sql)
result = cursor.fetchall()
finally:
connection.close()
for i in result:
timeline.append(i['time_dif']/60)
plt.style.use('ggplot')
ranges = [0, 2, 60, 1440, 10080, sys.maxint]
col = ['<2min', '2-60min', '1-24hr', '1-7day', '>1week']
val = np.zeros(5)
for i in range(len(ranges)-1):
for j in timeline:
if ranges[i] <= j < ranges[i+1]:
val[i] += 1
df2 = pd.DataFrame(np.array(val), col, columns=['Count of repeated questions by time by different user'])
ax = df2.plot.bar()
for p in ax.patches:
b=p.get_bbox()
ax.annotate("{}".format(int(b.y1 + b.y0)), ((b.x0 + b.x1)/2 - 0.1, b.y1))
plt.show()
Explanation: Repeated questions by Time and User
End of explanation |
2,611 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing Airbnb Prices in New York
Jihyun Kim
Final Project for Data Bootcamp, Fall 2016
What determines each Airbnb's listing price?
Background
Everything in New York is expensive. For first-time travelers, New York may seem even more expensive. At the same time, travelers have different wants and needs from their accommodation than a student or a working person would. So I wanted to analyze the trend in Airbnb listing prices through the eyes of a traveler.
Travelers of different budgets and purposes would have different priorities, but most would definitely prefer good accessibility to the top tourist attractions they want to visit. Will this have an effect on the Airbnb rental price?
Data Source
For this data analysis, I used the Airbnb open data available here. I used the listing.csv file for New York.
Libraries and API
Since the csv file contained more than 20,000 entries, I decided to do some basic scrubbing first and then export to a different csv using the csv library. I then used the pandas library to manipulate and display selected data and used the matplotlib and seaborn libraries for visualization. To calculate the average distance from each listing to the top rated tourist attractions of New York, I used the Beautiful Soup library to parse the website and retrieve a list of attraction names. I then used the Google Places API to get each attraction spot's detailed latitude and longitude to calculate the great circle distance from each airbnb apartment.
Step1: Google Places API Configuration
Step2: Write a function to calculate the distance from each listing to top trip advisor attractions
Visualize data including where the closest ones are, the most expensive, the relative borderline
Write a new function that calculates the distance from the closest subway stations
somehow visualize the convenience and access from each subway station using Google maps API
decide where is the best value/distance
make a widget that allows you to copy and paste the link
Defining Functions
1. tripadvisor_attractions( url, how_many )
This function takes 2 parameters, the url of the trip advisor link and the number of top attractions one wants to check. It then uses the beautiful soup library to find the div that contains the list of top rated tourist attractions in the city and returns them as a list.
Step3: 2. ta_detail(ta_list, city)
This function takes the list returned by the tripadvisor_attractions() function as well as the city name in a string. I explicitly ask for the city name so that Google Places API will find more accurate place details when it looks up each tourist attraction. It returns a dataframe of the tourist attraction, its google place ID, longitude, and latitude.
Step4: 3. latlong_tuple(ta_df)
This function takes the tourist attraction data frame created above then returns a list of (latitude, longitude) tuples for every one of them.
Step5: 4. clean_csv(data_in, geo_tuples)
This function is the main data scraping function. I tried to first import the csv as a dataframe and then clean each entry, but the pandas iterrow and itertuple took a very long time, so I decided to do the basic scrubbing while I was importing the csv. This function automatically saves a new copy of the cleaned csv with a file name extension _out.csv. The function itself doesn't return anything.
Step6: Reading in the data
Step7: The cell below reads in the original csv file, removes some unwanted listings, and adds a new column that has the average distance from the top 10 Trip Advisor approved(!!) tourist attractions.
Step8: We then make a copy dataframe listing to play around with.
Step9: Visualizing the Data
Neighbourhood
First, I used the groupby function to group the data by neighbourhood groups. I then make 2 different data frames to plot the price and average distance.
Step10: Then I used the groupby function for neighbourhoods to see a price comparison between different New York neighbourhoods
Step11: The most expensive neighbourhood
Step12: The Second Most Expensive
Step13: Room Type
To account for the price difference between room types, I grouped the data by the room_type column and made some visualizations.
Step14: Plotting the Entire home/apt listings without the top 20 most expensive ones shows that there are 2 concentrated, correlated areas between average distance and price. The bimodal distribution in average distance might be the concentration of Airbnb listings in Manhattan and Brooklyn
Step15: Plotting a violin diagram of the prices of all entire homes in different neighbourhood groups shows us that Manhattan has a more distributed price range of apartments, albeit on the higher end, while Queens and the Bronx have a higher concentration of listings at a specific point in a lower price range.
Dealing with Outliers
To deal with some of the outliers at the top, I tried deleting the top 10 or 20 most expensive ones, but this method wasn't very scalable across the dataset, nor was it an accurate depiction of the price variety. So I decided to first get an understanding of the most expensive listings in New York and then to create a separate dataframe that removes data entries with a price higher or lower than 3 standard deviations from the mean.
Step16: It is likely that some of the listings listed above are specifically for events and photography, rather than for a traveler's accommodation. Also, it seems that some of the hosts who didn't want to remove their listing from Airbnb but weren't available to host listed the price as 9,900 USD instead.
Some of the listings that seemed "normal" but had a very high price were
Step17: The 2 plots above try to find whether there is any relationship between price and the number of reviews per month (trust and approval), as well as the average distance from the top attractions. The reviews-per-month plot does not seem to display any positive correlation between price and user approval, which makes sense as there are many other factors that determine an apartment rental price besides user approval.
The average distance plot shows an interesting negative correlation between average distance and price. The lower the average distance is, the higher the price seems to be.
Both graphs show that many hosts like to mark prices discretely, by increments of 5 or 10, as there is a heavy concentration of data along y axis along the grid lines.
Step18: The scatterplot above shows how big of a discrepancy apartment prices in Manhattan is. The top 25% of the apartments in Manhattan range in price from 400 USD to more than 700 USD, while those in Bronx span range of just 200 to 300. | Python Code:
import sys
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
import numpy as np
import seaborn as sns
import statistics
import csv
from scipy import stats
from bs4 import BeautifulSoup as bs
import urllib.request
from googleplaces import GooglePlaces, types, lang
from geopy.distance import great_circle
import geocoder
%matplotlib inline
print('Python version: ', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
Explanation: Analyzing Airbnb Prices in New York
Jihyun Kim
Final Project for Data Bootcamp, Fall 2016
What determines each Airbnb's listing price?
Background
Everything in New York is expensive. For first-time travelers, New York may seem even more expensive. At the same time, travelers have different wants and needs from their accommodation than a student or a working person would. So I wanted to analyze the trend in Airbnb listing prices through the eyes of a traveler.
Travelers of different budgets and purposes would have different priorities, but most would definitely prefer good accessibility to the top tourist attractions they want to visit. Will this have an effect on the Airbnb rental price?
Data Source
For this data analysis, I used the Airbnb open data available here. I used the listing.csv file for New York.
Libraries and API
Since the csv file contained more than 20,000 entries, I decided to do some basic scrubbing first and then export to a different csv using the csv library. I then used the pandas library to manipulate and display selected data and used the matplotlib and seaborn libraries for visualization. To calculate the average distance from each listing to the top rated tourist attractions of New York, I used the Beautiful Soup library to parse the website and retrieve a list of attraction names. I then used the Google Places API to get each attraction spot's detailed latitude and longitude to calculate the great circle distance from each airbnb apartment.
End of explanation
apikey = '<your Google Places API key>'
gplaces = GooglePlaces(apikey)
Explanation: Google Places API Configuration
End of explanation
def tripadvisor_attractions(url, how_many):
page = urllib.request.urlopen(url)
#using beautiful soup to select targeted div
soup = bs(page.read(), "lxml")
filtered = soup.find("div", {"id": "FILTERED_LIST"})
top_list = filtered.find_all("div", class_="property_title")
sites = []
#save the text within hyperlink into an empty list
for site in top_list:
site = (site.a).text
site = str(site)
if not any(char.isdigit() for char in site):
sites.append(site)
#splices the list by how many places user wants to include
sites = sites[:how_many]
return sites
Explanation: Write a function to calculate the distance from each listing to top trip advisor attractions
Visualize data including where the closest ones are, the most expensive, the relative borderline
Write a new function that calculates the distance from the closest subway stations
somehow visualize the convenience and access from each subway station using Google maps API
decide where is the best value/distance
make a widget that allows you to copy and paste the link
Defining Functions
1. tripadvisor_attractions( url, how_many )
This function takes 2 parameters, the url of the trip advisor link and the number of top attractions one wants to check. It then uses the beautiful soup library to find the div that contains the list of top rated tourist attractions in the city and returns them as a list.
End of explanation
#ta short for tourist attraction
def ta_detail(ta_list, city):
ta_df = pd.DataFrame( {'Tourist Attraction' : '',
'place_id' : '',
'longitude' : '',
'latitude' : '' },
index = range(len(ta_list)))
for i in range(len(ta_list)):
query_result = gplaces.nearby_search(
location = city,
keyword = ta_list[i],
radius=20000)
#get only the top first query
query = query_result.places[0]
ta_df.loc[i, 'Tourist Attraction'] = query.name
ta_df.loc[i, 'longitude'] = query.geo_location['lng']
ta_df.loc[i, 'latitude'] = query.geo_location['lat']
ta_df.loc[i, 'place_id'] = query.place_id
return ta_df
Explanation: 2. ta_detail(ta_list, city)
This function takes the list returned by the tripadvisor_attractions() function as well as the city name in a string. I explicitly ask for the city name so that Google Places API will find more accurate place details when it looks up each tourist attraction. It returns a dataframe of the tourist attraction, its google place ID, longitude, and latitude.
End of explanation
def latlong_tuple(ta_df):
tuple_list = []
for j, ta in ta_df.iterrows():
ta_geo = (float(ta['latitude']), float(ta['longitude']))
tuple_list.append(ta_geo)
return tuple_list
Explanation: 3. latlong_tuple(ta_df)
This function takes the tourist attraction data frame created above then returns a list of (latitude, longitude) tuples for every one of them.
End of explanation
def clean_csv(data_in, geo_tuples):
#automatically generates a cleaned csv file with the same name with _out.csv extension
index = data_in.find('.csv')
data_out = data_in[:index] + '_out' + data_in[index:]
#some error checking when opening
try:
s = open(data_in, 'r')
except:
print('File not found or cannot be opened')
else:
t = open(data_out, 'w')
print('\n Output from an iterable object created from the csv file')
reader = csv.reader(s)
writer = csv.writer(t, delimiter=',')
#counter for number or rows removed during filtering
removed = 0
added = 0
header = True
for row in reader:
if header:
header = False
for i in range(len(row)):
#saving indices for specific columns
if row[i] == 'latitude':
lat = i
elif row[i] == 'longitude':
lng = i
row.append('avg_dist')
writer.writerow(row)
#only add the row if the number of reviews is more than 1
elif(int(row[-1]) > 7):
#creating a geo tuple for easy calculation later on
tlat = row[lat]
tlng = row[lng]
ttuple = (tlat, tlng)
dist_calc = []
#calculate the distance from each listing to every top tourist attraction we saved
#if the distance is for some reason greater than 100, don't add it as it would skew the result.
for i in geo_tuples:
dist_from_spot = round(great_circle(i, ttuple).kilometers, 2)
if (dist_from_spot < 100):
dist_calc.append(dist_from_spot)
else:
print(ta['Tourist Attraction'] + " is too far.")
#calculates the average distance between the listing and all of the tourist attractions
avg_dist = round(statistics.mean(dist_calc), 3)
row.append(avg_dist)
writer.writerow(row)
added += 1
else:
removed += 1
s.close()
t.close()
print('Function Finished')
print(added, 'listings saved')
print(removed, 'listings removed')
Explanation: 4. clean_csv(data_in, geo_tuples)
This function is the main data scraping function. I tried to first import the csv as a dataframe and then clean each entry, but the pandas iterrow and itertuple took a very long time, so I decided to do the basic scrubbing while I was importing the csv. This function automatically saves a new copy of the cleaned csv with a file name extension _out.csv. The function itself doesn't return anything.
End of explanation
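As a small illustration of the great_circle helper used inside clean_csv, the snippet below computes the distance in kilometers between two Manhattan landmarks; the coordinates are approximate and purely illustrative.
# Approximate coordinates: Empire State Building and Times Square
empire_state = (40.7484, -73.9857)
times_square = (40.7580, -73.9855)
print(round(great_circle(empire_state, times_square).kilometers, 2))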
url = "https://www.tripadvisor.com/Attractions-g60763-Activities-New_York_City_New_York.html"
top_10 = tripadvisor_attractions(url, 10)
print(top_10)
ta_df = ta_detail(top_10, 'New York, NY')
geo_tuples = latlong_tuple(ta_df)
ta_df
Explanation: Reading in the data: Time for fun!
Reading in the trip advisor url for New York and saving the data
In the cell below, we read in the trip advisor url for New York and save only the top 10 in a list. When we print it, we can validate that these are indeed the places New York is famous for.
End of explanation
clean_csv("data/listings.csv", geo_tuples)
Explanation: The cell below reads in the original csv file, removes some unwanted listings, and adds a new column that has the average distance from the top 10 Trip Advisor approved(!!) tourist attractions.
End of explanation
df = pd.read_csv('data/listings_out.csv')
print('Dimensions:', df.shape)
df.head()
listing = df.copy()
listing.head()
Explanation: We then make a copy dataframe listing to play around with.
End of explanation
area = listing.groupby('neighbourhood_group')
nbhood_price = area['price'].agg([np.sum, np.mean, np.std])
nbhood_dist = area['avg_dist'].agg([np.sum, np.mean, np.std])
fig, ax = plt.subplots(nrows=2, ncols=1, sharex=True)
fig.suptitle('NY Neighbourhoods: Price vs Average Distance to Top Spots', fontsize=10, fontweight='bold')
nbhood_price['mean'].plot(kind='bar', ax=ax[0], color='mediumslateblue')
nbhood_dist['mean'].plot(kind='bar', ax=ax[1], color = 'orchid')
ax[0].set_ylabel('Price', fontsize=10)
ax[1].set_ylabel('Average Distance', fontsize=10)
Explanation: Visualizing the Data
Neighbourhood
First, I used the groupby function to group the data by neighbourhood groups. I then make 2 different data frames to plot the price and average distance.
End of explanation
area2 = listing.groupby('neighbourhood')
nb_price = area2['price'].agg([np.sum, np.mean, np.std]).sort_values(['mean'])
nb_dist = area2['avg_dist'].agg([np.sum, np.mean, np.std])
fig, ax = plt.subplots(figsize=(4, 35))
fig.suptitle('Most Expensive Neighbourhoods on Airbnb', fontsize=10, fontweight='bold')
nb_price['mean'].plot(kind='barh', ax=ax, color='salmon')
Explanation: Then I used the groupby function for neighbourhoods to see a price comparison between different New York neighbourhoods
End of explanation
breezy = listing.loc[listing['neighbourhood'] == 'Breezy Point']
breezy
Explanation: The most expensive neighbourhood: Breezy Point
Why is Breezy Point so expensive? The code below displays the Airbnb listings in Breezy Point, which turned out to be the "Tremendous stylish hotel", the only listing in Breezy Point.
End of explanation
beach = listing.loc[listing['neighbourhood'] == 'Manhattan Beach']
beach
Explanation: The Second Most Expensive: Manhattan Beach
The second most expensive neighbourhood is also not in Manhattan, in contrast to the first visualization we did, which showed Manhattan had the highest average Airbnb price. All apartments in Manhattan Beach turn out to be reasonably priced except "Manhattan Beach for summer rent", which costs 2,800 USD per night.
It seems that outliers are skewing the data quite significantly.
End of explanation
area = listing.groupby('room_type')
room_price = area['price'].agg([np.sum, np.mean, np.std])
room_dist = area['avg_dist'].agg([np.sum, np.mean, np.std])
room_price['mean'].plot(title="Average Price by Room Type")
apt = listing.loc[listing['room_type'] == 'Entire home/apt']
apt = apt.sort_values('price', ascending=False)
apt.drop(apt.head(20).index, inplace=True)
apt.head()
sns.jointplot(x='avg_dist', y="price", data=apt, kind='kde')
Explanation: Room Type
To account for the price difference between room types, I grouped the data by the room_type column and made some visualizations.
End of explanation
f, ax = plt.subplots(figsize=(11, 6))
sns.violinplot(x="neighbourhood_group", y="price", data=apt, palette="Set3")
Explanation: Plotting the Entire home/apt listings without the top 20 most expensive ones shows that there are 2 concentrated, correlated areas between average distance and price. The bimodal distribution in average distance might be the concentration of Airbnb listings in Manhattan and Brooklyn
End of explanation
fancy = listing.sort_values('price', ascending=False).iloc[:50]
fancy.head(10)
fancy.describe()
Explanation: Plotting a violin diagram of the prices of all entire homes in different neighbourhood groups shows us that Manhattan has a more distributed price range of apartments, albeit on the higher end, while Queens and the Bronx have a higher concentration of listings at a specific point in a lower price range.
Dealing with Outliers
To deal with some of the outliers at the top, I tried deleting the top 10 or 20 most expensive ones, but this method wasn't very scalable across the dataset, nor was it an accurate depiction of the price variety. So I decided to first get an understanding of the most expensive listings in New York and then to create a separate dataframe that removes data entries with a price higher or lower than 3 standard deviations from the mean.
End of explanation
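The three-standard-deviation rule described above can also be written with scipy's zscore helper; the line below is an equivalent sketch of the filter applied in the next cell, relying on the scipy.stats import at the top of the notebook.
# Keep only listings whose price lies within 3 standard deviations of the mean price
reviewed = reviewed[np.abs(stats.zscore(reviewed['price'])) < 3]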
reviewed = listing.loc[listing['number_of_reviews'] > 1]
reviewed.describe()
reviewed = reviewed[((reviewed['price'] - reviewed['price'].mean()) / reviewed['price'].std()).abs() < 3]
reviewed.describe()
fig, axs = plt.subplots(1, 2, sharey=True)
fig.suptitle('Do Reviews and Price Matter?', fontsize=20, fontweight='bold')
reviewed.plot(kind='scatter', x='reviews_per_month', y='price', ax=axs[0], figsize=(16, 8))
reviewed.plot(kind='scatter', x='avg_dist', y='price', ax=axs[1])
Explanation: It is likely that some of the listings listed above are specifically for events and photography, rather than for a traveler's accommodation. Also, it seems that some of the hosts who didn't want to remove their listing from Airbnb but weren't available to host listed the price as 9,900 USD instead.
Some of the listings that seemed "normal" but had a very high price were:
Comfortable one bedroom in Harlem
Lovely Room , 1 Block subway to NYC
99.7 percent of the listings
Using simple statistics, I saved a new dataframe named reviewed that has more than 1 review and is within 3 standard deviations from the mean.
End of explanation
f, ax = plt.subplots(figsize=(11, 5))
sns.boxplot(x="neighbourhood_group", y="price", hue="room_type", data=reviewed, palette="PRGn")
Explanation: The 2 plots above try to find whether there is any relationship between price and the number of reviews per month (trust and approval), as well as the average distance from the top attractions. The reviews-per-month plot does not seem to display any positive correlation between price and user approval, which makes sense as there are many other factors that determine an apartment rental price besides user approval.
The average distance plot shows an interesting negative correlation between average distance and price. The lower the average distance is, the higher the price seems to be.
Both graphs show that many hosts like to mark prices discretely, by increments of 5 or 10, as there is a heavy concentration of data along y axis along the grid lines.
End of explanation
reviewed2 = reviewed[((reviewed['price'] - reviewed['price'].mean()) / reviewed['price'].std()).abs() < 2]
sns.jointplot(x='avg_dist', y="price", data=reviewed2, kind='kde')
Explanation: The scatterplot above shows how large the spread of apartment prices in Manhattan is. The top 25% of the apartments in Manhattan range in price from 400 USD to more than 700 USD, while those in the Bronx span a range of just 200 to 300 USD.
End of explanation |
2,612 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.soft - Unit tests, setup and software engineering
We always check that code works when we write it, but that does not mean it will keep working in the future. The robustness of a piece of code comes from everything built around it to make sure it keeps running correctly.
Step1: A short story
Suppose you have implemented three functions that depend on one another: the function f3 uses the functions f1 and f2.
Step2: Six months later, you create a function f5 that calls a function f4 and the function f2.
Step3: Oh, and by the way, in doing so you modify the function f2, and you have somewhat forgotten what the function f3 was doing... In short, you do not know whether the function f3 will be affected by the change introduced in f2. This is the kind of problem encountered every day when software is written by several people over a long period. This notebook presents the classic building blocks used to ensure the robustness of a piece of software.
unit tests
a source control tool
code coverage measurement
continuous integration
writing a setup
writing the documentation
publishing on PyPi
Writing a function
Any function that performs a computation, for example a function that solves a quadratic equation.
Step4: Writing a unit test
A unit test is a function that makes sure another function returns the expected result. The simplest option is to use the standard unittest module and to leave notebooks behind in favour of plain files. Among the other alternatives are pytest and nose.
Step5: There are badges for almost everything.
Writing a setup
The setup.py file determines how the Python module must be installed for a user who did not develop it. How to build a setup | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
from pyensae.graphhelper import draw_diagram
Explanation: 1A.soft - Unit tests, setup and software engineering
We always check that code works when we write it, but that does not mean it will keep working in the future. The robustness of a piece of code comes from everything built around it to make sure it keeps running correctly.
End of explanation
draw_diagram("blockdiag { f0 -> f1 -> f3; f2 -> f3;}")
Explanation: A short story
Suppose you have implemented three functions that depend on one another: the function f3 uses the functions f1 and f2.
End of explanation
draw_diagram('blockdiag { f0 -> f1 -> f3; f2 -> f3; f2 -> f5 [color="red"]; f4 -> f5 [color="red"]; }')
Explanation: Six months later, you create a function f5 that calls a function f4 and the function f2.
End of explanation
def solve_polynom(a, b, c):
# ....
return None
Explanation: Oh, and by the way, in doing so you modify the function f2, and you have somewhat forgotten what the function f3 was doing... In short, you do not know whether the function f3 will be affected by the change introduced in f2. This is the kind of problem encountered every day when software is written by several people over a long period. This notebook presents the classic building blocks used to ensure the robustness of a piece of software.
unit tests
a source control tool
code coverage measurement
continuous integration
writing a setup
writing the documentation
publishing on PyPi
Writing a function
Any function that performs a computation, for example a function that solves a quadratic equation.
End of explanation
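One possible way to fill in the solve_polynom stub above is sketched below; it is only one implementation among many and returns the real roots, or None when there are none.
import math

def solve_polynom(a, b, c):
    # Solves a * x**2 + b * x + c = 0 and returns the real roots as a tuple
    if a == 0:
        return None if b == 0 else (-c / b,)
    delta = b * b - 4 * a * c
    if delta < 0:
        return None
    root = math.sqrt(delta)
    return ((-b - root) / (2 * a), (-b + root) / (2 * a))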
from IPython.display import SVG
SVG("https://travis-ci.com/sdpython/ensae_teaching_cs.svg?branch=master")
SVG("https://codecov.io/github/sdpython/ensae_teaching_cs/coverage.svg?branch=master")
Explanation: Ecrire un test unitaire
Un test unitaire est une fonction qui s'assure qu'une autre fonction retourne bien le résultat souhaité. Le plus simple est d'utiliser le module standard unittest et de quitter les notebooks pour utiliser des fichiers. Parmi les autres alternatives : pytest et nose.
Couverture ou coverage
La couverture de code est l'ensemble des lignes exécutées par les tests unitaires. Cela ne signifie pas toujours qu'elles soient correctes mais seulement qu'elles ont été exécutées une ou plusieurs sans provoquer d'erreur. Le module le plus simple est coverage. Il produit des rapports de ce type : mlstatpy/coverage.
Creating a GitHub account
GitHub is a website that hosts the source code of most open-source projects. You need to create an account if you do not have one (it is free for open-source projects), then create a repository and finally push your project into it. Your computer needs:
git
GitHub Desktop
You can read GitHub Pour les Nuls : Pas de Panique, Lancez-Vous ! (Première Partie) and of course do plenty of web searches.
Note
Everything you put on GitHub for an open-source project is publicly accessible. Be careful not to publish anything personal. A GitHub account is also one of the first things a recruiter will look at.
Continuous integration
Continuous integration aims to reduce the time between a modification and its release to production. Typically, a developer makes a change and a machine runs all the unit tests. From that we conclude that the software works from every angle and can safely be made available to users. In short, continuous integration consists of running a battery of tests as soon as a modification is detected. If everything passes, the software is built and ready to be shared, or deployed if it is a website.
Here again, for open-source projects, it is possible to find services offering this for free:
travis - Linux
appveyor - Windows - one job at a time, no longer than an hour.
circle-ci - Linux and Mac OSX (paid)
GitLab-ci
Apart from GitLab-ci, these three services run the unit tests on machines hosted by each of those companies. You have to register on the site, define a .travis.yml, .appveyor.yml or circle.yml file, then activate the project on the corresponding site. A few examples are available in pyquickhelper or scikit-learn. The file must be added to the project on GitHub and activated on the chosen continuous-integration site. The slightest modification will trigger a new build.
Most of these sites let you insert a badge to signal that the build is passing.
End of explanation
SVG("https://badge.fury.io/py/ensae_teaching_cs.svg")
Explanation: There are badges for just about everything.
Writing a setup
The setup.py file determines how the Python module is installed for a user who did not develop it. How to build a setup: setup.
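A minimal setup.py sketch (the package name and metadata below are placeholders, not taken from this project):

```python
# setup.py -- minimal example with placeholder metadata
from setuptools import setup, find_packages

setup(
    name="my_teaching_module",   # placeholder name
    version="0.1.0",
    description="Example module for the exercises",
    packages=find_packages(),
    install_requires=[],         # list runtime dependencies here
)
```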
Writing the documentation
The most widely used tool is sphinx. Will you manage to use it?
Last step: PyPi
PyPi is a server that makes a module available to everybody. You just have to upload the module... See Packaging and Distributing Projects or How to submit a package to PyPI. PyPi also allows the insertion of a badge.
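For reference, the classic build-and-upload sequence looks like this (a sketch; it assumes the wheel and twine packages are installed and a PyPI account is configured):

```
# Notebook-style commands (the "!" prefix runs them on the command line)
!python setup.py sdist bdist_wheel
!twine upload dist/*
```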
End of explanation |
2,613 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Migrate LoggingTensorHook and StopAtStepHook to Keras callbacks
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: TensorFlow 1
Step3: TensorFlow 2
Step4: When finished, pass the new callbacks—StopAtStepCallback and LoggingTensorCallback—to the callbacks parameter of Model.fit | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import tensorflow.compat.v1 as tf1
features = [[1., 1.5], [2., 2.5], [3., 3.5]]
labels = [[0.3], [0.5], [0.7]]
# Define an input function.
def _input_fn():
return tf1.data.Dataset.from_tensor_slices((features, labels)).batch(1)
Explanation: Migrate LoggingTensorHook and StopAtStepHook to Keras callbacks
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/migrate/logging_stop_hook">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/logging_stop_hook.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/logging_stop_hook.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/logging_stop_hook.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In TensorFlow 1, you use tf.estimator.LoggingTensorHook to monitor and log tensors, while tf.estimator.StopAtStepHook helps stop training at a specified step when training with tf.estimator.Estimator. This notebook demonstrates how to migrate from these APIs to their equivalents in TensorFlow 2 using custom Keras callbacks (tf.keras.callbacks.Callback) with Model.fit.
Keras callbacks are objects that are called at different points during training/evaluation/prediction in the built-in Keras Model.fit/Model.evaluate/Model.predict APIs. You can learn more about callbacks in the tf.keras.callbacks.Callback API docs, as well as the Writing your own callbacks and Training and evaluation with the built-in methods (the Using callbacks section) guides. For migrating from SessionRunHook in TensorFlow 1 to Keras callbacks in TensorFlow 2, check out the Migrate training with assisted logic guide.
Setup
Start with imports and a simple dataset for demonstration purposes:
End of explanation
def _model_fn(features, labels, mode):
dense = tf1.layers.Dense(1)
logits = dense(features)
loss = tf1.losses.mean_squared_error(labels=labels, predictions=logits)
optimizer = tf1.train.AdagradOptimizer(0.05)
train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step())
# Define the stop hook.
stop_hook = tf1.train.StopAtStepHook(num_steps=2)
# Access tensors to be logged by names.
kernel_name = tf.identity(dense.weights[0])
bias_name = tf.identity(dense.weights[1])
logging_weight_hook = tf1.train.LoggingTensorHook(
tensors=[kernel_name, bias_name],
every_n_iter=1)
# Log the training loss by the tensor object.
logging_loss_hook = tf1.train.LoggingTensorHook(
{'loss from LoggingTensorHook': loss},
every_n_secs=3)
# Pass all hooks to `EstimatorSpec`.
return tf1.estimator.EstimatorSpec(mode,
loss=loss,
train_op=train_op,
training_hooks=[stop_hook,
logging_weight_hook,
logging_loss_hook])
estimator = tf1.estimator.Estimator(model_fn=_model_fn)
# Begin training.
# The training will stop after 2 steps, and the weights/loss will also be logged.
estimator.train(_input_fn)
Explanation: TensorFlow 1: Log tensors and stop training with tf.estimator APIs
In TensorFlow 1, you define various hooks to control the training behavior. Then, you pass these hooks to tf.estimator.EstimatorSpec.
In the example below:
To monitor/log tensors—for example, model weights or losses—you use tf.estimator.LoggingTensorHook (tf.train.LoggingTensorHook is its alias).
To stop training at a specific step, you use tf.estimator.StopAtStepHook (tf.train.StopAtStepHook is its alias).
End of explanation
class StopAtStepCallback(tf.keras.callbacks.Callback):
def __init__(self, stop_step=None):
super().__init__()
self._stop_step = stop_step
def on_batch_end(self, batch, logs=None):
if self.model.optimizer.iterations >= self._stop_step:
self.model.stop_training = True
print('\nstop training now')
class LoggingTensorCallback(tf.keras.callbacks.Callback):
def __init__(self, every_n_iter):
super().__init__()
self._every_n_iter = every_n_iter
self._log_count = every_n_iter
def on_batch_end(self, batch, logs=None):
if self._log_count > 0:
self._log_count -= 1
print("Logging Tensor Callback: dense/kernel:",
model.layers[0].weights[0])
print("Logging Tensor Callback: dense/bias:",
model.layers[0].weights[1])
print("Logging Tensor Callback loss:", logs["loss"])
else:
self._log_count -= self._every_n_iter
Explanation: TensorFlow 2: Log tensors and stop training with custom callbacks and Model.fit
In TensorFlow 2, when you use the built-in Keras Model.fit (or Model.evaluate) for training/evaluation, you can configure tensor monitoring and training stopping by defining custom Keras tf.keras.callbacks.Callbacks. Then, you pass them to the callbacks parameter of Model.fit (or Model.evaluate). (Learn more in the Writing your own callbacks guide.)
In the example below:
To recreate the functionalities of StopAtStepHook, define a custom callback (named StopAtStepCallback below) where you override the on_batch_end method to stop training after a certain number of steps.
To recreate the LoggingTensorHook behavior, define a custom callback (LoggingTensorCallback) where you record and output the logged tensors manually, since accessing tensors by name is not supported. You can also implement the logging frequency inside the custom callback. The example below will print the weights every two steps. Other strategies like logging every N seconds are also possible.
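As a sketch of the "every N seconds" strategy mentioned above (not part of the original example; the class name is illustrative), a time-based variant could look like:

```python
import time

class LoggingEveryNSecondsCallback(tf.keras.callbacks.Callback):
    def __init__(self, every_n_secs=3):
        super().__init__()
        self._every_n_secs = every_n_secs
        self._last_log = time.time()

    def on_batch_end(self, batch, logs=None):
        now = time.time()
        if now - self._last_log >= self._every_n_secs:
            self._last_log = now
            print("loss:", logs["loss"])
```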
End of explanation
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1)
model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)
model.compile(optimizer, "mse")
# Begin training.
# The training will stop after 2 steps, and the weights/loss will also be logged.
model.fit(dataset, callbacks=[StopAtStepCallback(stop_step=2),
LoggingTensorCallback(every_n_iter=2)])
Explanation: When finished, pass the new callbacks—StopAtStepCallback and LoggingTensorCallback—to the callbacks parameter of Model.fit:
End of explanation |
2,614 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GDAL command line
Example
Step1: What is this combination of commands?
! this is a jupyter notebook-thing, telling it we're running something on the command line instead of in Python
../scratch/deelbekkens_wgs84 the output location of the created file
-t_srs "EPSG
Step2: but there are great online resources with good examples you can easily copy paste for your own applications...
Example 2
Step3: We can start working with this data...
Step4: Actually, GDAL can directly query the WFS data
Step5: I do know that the Meetplaatsen Oppervlaktewaterkwaliteit are also available as a WFS web service. However, I'm only interested in the locations for fytoplankton
Step6: Actually, the same type of subselection is also possible on shapefiles,...
Extracting a specific DEELBEKKEN from the deelbekken shapefile
Step7: (If you're wondering how I know how to set up these commands and arguments, check the (draft) introduction_webservices.ipynb in the scratch folder. <br>
I use the Python interface of GDAL/OGR and the package owslib to find out how to set up the arguments.)
GDAL command line, but inside Python...
No problem if this is still unclear... an example application!
Clipping example
The example we will use is to clip raster data using a shapefile. We use a data set from Natural Earth, which we will unzip to start working on it (of course using Python itself)
Step8: The GDAL function that supports the clipping of a raster file is called gdalwarp. Again, the documentation looks rather overwhelming... Let's start with an example execution
Step9: ! this is a jupyter notebook-thing, telling it we're running something on the command line instead of in Python
gdalwarp is the GDAL command to use
../scratch/NE1_50M_SR/NE1_50M_SR.tif the source file location
../scratch/cliptest.tif the output location of the created file
-cutline "../scratch/subcat.shp" the shape file to cut the raster with
-crop_to_cutline an additional argument to GDAL to perform the clipping
-overwrite overwrite any existing output file with the same name
Step10: This is of course a dummy example (to keep runtime low), but it illustrates the concept.
the subprocess trick...
Doing the same using pure Python code and from osgeo import gdal is actually not that beneficial, as the command above is rather straightforward... However, depending on the command line forces a switch of environment in any data analysis pipeline. I actually do want to have the best of both worlds
Step11: Doing the same as above, but actually using Python code to run the command with given variables as input
Step12: Remark: when GDAL returns a zero exit status, this is a GOOD sign!
Step14: Hence, the result is the same, but calling the command from Python. By writing a Python function for this routine, I do have a reusable functionality in my toolbox that I can load in any other Python script
Step15: More advanced clipping
Consider the data set of the provinces we called from the WFS server earlier
Step16: We can actually use a selection of the provinces data set to execute the clipping
Step17: By having it as a Python call, we can do the same action for each of the individual provinces in the dataset and create for each of the provinces a clipped raster data set | Python Code:
!ogr2ogr ../scratch/deelbekkens_wgs84 -t_srs "EPSG:4326" ../data/deelbekkens/Deelbekken.shp
Explanation: GDAL command line
Example: reprojection
GDAL is a really powerful library for handling GIS data. It provides a number of functionalities to interact with spatial data. As a typical example, take the reprojection of a shapefile to another CRS:
End of explanation
!ogr2ogr --help
Explanation: What is this combination of commands?
! this is a jupyter notebook-thing, telling it we're running something on the command line instead of in Python
../scratch/deelbekkens_wgs84 the output location of the created file
-t_srs "EPSG:4326" the CRS information for to which the data should be projected
../data/deelbekkens/Deelbekken.shp the source file location
The documentation is a bit overwhelming:
End of explanation
!ogr2ogr -f 'Geojson' ../scratch/provinces.geojson WFS:"https://geoservices.informatievlaanderen.be/overdrachtdiensten/VRBG/wfs" Refprv
Explanation: but there are great online resources with good examples you can easily copy paste for your own applications...
Example 2: Accessing online webservice data (Web Feature Service - WFS)
A lot of expensive terminology...
Let's illustrate this with an example: The information about municipalities is available as open data on geopunt (coming from informatie Vlaanderen). The publication is provided as a WFS service...
Take home message -> GDAL can handle WFS web services ;-)
Downloading the province boundaries from the WFS service provided by informatie Vlaanderen/Geopunt to a geojson file is as follows:
End of explanation
import geopandas as gpd

provinces = gpd.read_file("../scratch/provinces.geojson")
provinces.plot()
Explanation: We can start working with this data...
End of explanation
!ogr2ogr -f 'Geojson' ../scratch/antwerp_prov.geojson WFS:"https://geoservices.informatievlaanderen.be/overdrachtdiensten/VRBG/wfs" Refprv -where "NAAM = 'Antwerpen'"
antwerp = gpd.read_file("../scratch/antwerp_prov.geojson")
antwerp.plot()
Explanation: Actually, GDAL can directly query the WFS data:
Let's say I only need the province of Antwerp:
End of explanation
!ogr2ogr -f 'Geojson' ../scratch/metingen_fytoplankton.geojson WFS:"https://geoservices.informatievlaanderen.be/overdrachtdiensten/MeetplOppervlwaterkwal/wfs" Mtploppw -where "FYTOPLANKT = '1'"
import mplleaflet
fyto = gpd.read_file("../scratch/metingen_fytoplankton.geojson")
fyto.head()
fyto.to_crs('+init=epsg:4326').plot(markersize=5)
mplleaflet.display()
fyto.head()
Explanation: I do know that the Meetplaatsen Oppervlaktewaterkwaliteit are also available as a WFS web service. However, I'm only interested in the locations for fytoplankton:
End of explanation
!ogr2ogr ../scratch/subcat.shp ../data/deelbekkens/Deelbekken.shp -where "DEELBEKKEN = '10-10'"
Explanation: Actually, the same type of subselection is also possible on shapefiles,...
Extracting a specific DEELBEKKEN from the deelbekken shapefile:
End of explanation
import zipfile
zip_ref = zipfile.ZipFile("../data/NE1_50m_SR.zip", 'r')
zip_ref.extractall("../scratch")
zip_ref.close()
Explanation: (If you're wondering how I know how to set up these commands and arguments, check the (draft) introduction_webservices.ipynb in the scratch folder. <br>
I use the Python interface of GDAL/OGR and the package owslib to find out how to set up the arguments.)
GDAL command line, but inside Python...
No problem if this is still unclear... an example application!
Clipping example
The example we will use is to clip raster data using a shapefile. We use a data set from Natural Earth, which we will unzip to start working on it (of course using Python itself):
End of explanation
!gdalwarp ../scratch/NE1_50M_SR/NE1_50M_SR.tif ../scratch/cliptest.tif -cutline "../scratch/subcat.shp" -crop_to_cutline -overwrite
Explanation: The GDAL function that supports the clipping of a raster file is called gdalwarp. Again, the documentation looks rather overwhelming... Let's start with an example execution:
End of explanation
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
img=mpimg.imread('../scratch/cliptest.tif')
plt.imshow(img)
Explanation: ! this is a jupyter notebook-thing, telling it we're running something on the command line instead of in Python
gdalwarp is the GDAL command to use
../scratch/NE1_50M_SR/NE1_50M_SR.tif the source file location
../scratch/cliptest.tif the output location of the created file
-cutline "../scratch/subcat.shp" the shape file to cut the raster with
-crop_to_cutline an additional argument to GDAL to perform the clipping
-overwrite overwrite any existing output file with the same name
End of explanation
import subprocess
Explanation: This is of course a dummy example (to keep runtime low), but it illustrates the concept.
the subprocess trick...
Doing the same using pure Python code and from osgeo import gdal is actually not that beneficial, as the command above is rather straightforward... However, depending on the command line forces a switch of environment in any data analysis pipeline. I actually do want to have the best of both worlds:
Using Python code, but running the command line version of GDAL...
...therefore we need subprocess!
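For reference, the same clip can also be expressed through the Python bindings (a sketch, assuming a GDAL version in which gdal.Warp accepts these keyword arguments):

```python
from osgeo import gdal

ds = gdal.Warp('../scratch/cliptest_py.tif',
               '../scratch/NE1_50M_SR/NE1_50M_SR.tif',
               cutlineDSName='../scratch/subcat.shp',
               cropToCutline=True)
ds = None  # flush and close the output dataset
```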
End of explanation
inraster = '../scratch/NE1_50M_SR/NE1_50M_SR.tif'
outraster = inraster.replace('.tif', '{}.tif'.format("_out")) # same location, but adding _out to the output
inshape = "../scratch/subcat.shp"
subprocess.call(['gdalwarp', inraster, outraster, '-cutline', inshape,
'-crop_to_cutline', '-overwrite'])
Explanation: Doing the same as above, but actually using Python code to run the command with given variables as input:
End of explanation
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
img=mpimg.imread('../scratch/NE1_50M_SR/NE1_50M_SR_out.tif')
plt.imshow(img)
Explanation: Remark: when GDAL returns a zero exit status, this is a GOOD sign!
End of explanation
def clip_raster(inraster, outraster, invector):
    """Clip a raster image with a vector file.

    Parameters
    ----------
    inraster : GDAL compatible raster format
    outraster : GDAL compatible raster format
    invector : GDAL compatible vector format
    """
response = subprocess.call(['gdalwarp', inraster, outraster, '-cutline',
invector, '-crop_to_cutline', '-overwrite'])
return(response)
inraster = '../scratch/NE1_50M_SR/NE1_50M_SR.tif'
outraster = inraster.replace('.tif', '{}.tif'.format("_out")) # same location, but adding _out to the output
inshape = "../scratch/subcat.shp"
clip_raster(inraster, outraster, inshape)
Explanation: Hence, the result is the same, but calling the command from Python. By writing a Python function for this routine, I do have a reusable functionality in my toolbox that I can load in any other Python script:
End of explanation
provinces
Explanation: More advanced clipping
Consider the data set of the provinces we called from the WFS server earlier:
End of explanation
inraster = '../scratch/NE1_50M_SR/NE1_50M_SR.tif'
outraster = inraster.replace('.tif', '{}.tif'.format("_OostVlaanderen"))
invector = "../scratch/provinces.geojson"
subprocess.call(['gdalwarp', inraster, outraster, '-cutline', invector,
'-cwhere', "NAAM='OOST-VLAANDEREN'",
'-crop_to_cutline',
'-overwrite'])
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
img=mpimg.imread('../scratch/NE1_50M_SR/NE1_50M_SR_OostVlaanderen.tif')
plt.imshow(img)
Explanation: We can actually use a selection of the provinces data set to execute the clipping:
End of explanation
import ogr
inraster = '../scratch/NE1_50M_SR/NE1_50M_SR.tif'
invector = "../scratch/provinces.geojson"
# GDAL magic...
ds = ogr.Open(invector)
lyr = ds.GetLayer(0)
lyr.ResetReading()
ft = lyr.GetNextFeature()
# clipping for each of the features (provincesin this case)
while ft:
province_name = ft.GetFieldAsString('NAAM')
print(province_name)
outraster = inraster.replace('.tif', '_%s.tif' % province_name.replace('-', '_'))
subprocess.call(['gdalwarp', inraster, outraster, '-cutline', invector,
'-crop_to_cutline', '-cwhere', "NAAM='%s'" %province_name])
ft = lyr.GetNextFeature()
ds = None
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
img=mpimg.imread('../scratch/NE1_50M_SR/NE1_50M_SR_West_Vlaanderen.tif') # check also Antwerpen,...
plt.imshow(img)
Explanation: By having it as a Python call, we can do the same action for each of the individual provinces in the dataset and create for each of the provinces a clipped raster data set:
End of explanation |
2,615 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gensim Doc2vec Tutorial on the IMDB Sentiment Dataset
Introduction
In this tutorial, we will learn how to apply Doc2vec using gensim by recreating the results of <a href="https
Step1: The text data is small enough to be read into memory.
Step2: Set-up Doc2Vec Training & Evaluation Models
We approximate the experiment of Le & Mikolov "Distributed Representations of Sentences and Documents" with guidance from Mikolov's example go.sh
Step3: Le and Mikolov note that combining a paragraph vector from Distributed Bag of Words (DBOW) and Distributed Memory (DM) improves performance. We will follow suit, pairing the models together for evaluation. Here, we concatenate the paragraph vectors obtained from each model.
Step5: Predictive Evaluation Methods
Let's define some helper methods for evaluating the performance of our Doc2vec using paragraph vectors. We will classify document sentiments using a logistic regression model based on our paragraph embeddings. We will compare the error rates based on word embeddings from our various Doc2vec models.
Step6: Bulk Training
We use an explicit multiple-pass, alpha-reduction approach as sketched in this gensim doc2vec blog post with added shuffling of corpus on each pass.
Note that vector training is occurring on all documents of the dataset, which includes all TRAIN/TEST/DEV docs.
We evaluate each model's sentiment predictive power based on error rate, and the evaluation is repeated after each pass so we can see the rates of relative improvement. The base numbers reuse the TRAIN and TEST vectors stored in the models for the logistic regression, while the inferred results use newly-inferred TEST vectors.
(On a 4-core 2.6Ghz Intel Core i7, these 20 passes training and evaluating 3 main models takes about an hour.)
Step7: Achieved Sentiment-Prediction Accuracy
Step8: In our testing, contrary to the results of the paper, PV-DBOW performs best. Concatenating vectors from different models only offers a small predictive improvement over averaging vectors. The best results reproduced here are just under a 10% error rate, still a long way from the paper's reported 7.42% error rate.
Examining Results
Are inferred vectors close to the precalculated ones?
Step9: (Yes, here the stored vector from 20 epochs of training is usually one of the closest to a freshly-inferred vector for the same words. Note the defaults for inference are very abbreviated – just 3 steps starting at a high alpha – and likely need tuning for other applications.)
Do close documents seem more related than distant ones?
Step10: (Somewhat, in terms of reviewer tone, movie genre, etc... the MOST cosine-similar docs usually seem more like the TARGET than the MEDIAN or LEAST.)
Do the word vectors show useful similarities?
Step11: Do the DBOW words look meaningless? That's because the gensim DBOW model doesn't train word vectors – they remain at their random initialized values – unless you ask with the dbow_words=1 initialization parameter. Concurrent word-training slows DBOW mode significantly, and offers little improvement (and sometimes a little worsening) of the error rate on this IMDB sentiment-prediction task.
Words from DM models tend to show meaningfully similar words when there are many examples in the training data (as with 'plot' or 'actor'). (All DM modes inherently involve word vector training concurrent with doc vector training.)
Are the word vectors from this dataset any good at analogies?
Step12: Even though this is a tiny, domain-specific dataset, it shows some meager capability on the general word analogies – at least for the DM/concat and DM/mean models which actually train word vectors. (The untrained random-initialized words of the DBOW model of course fail miserably.)
Slop
Step13: To mix the Google dataset (if locally available) into the word tests...
Step14: To get copious logging output from above steps...
Step15: To auto-reload python code while developing... | Python Code:
import locale
import glob
import os.path
import requests
import tarfile
import sys
import codecs
import smart_open
dirname = 'aclImdb'
filename = 'aclImdb_v1.tar.gz'
locale.setlocale(locale.LC_ALL, 'C')
if sys.version > '3':
control_chars = [chr(0x85)]
else:
control_chars = [unichr(0x85)]
# Convert text to lower-case and strip punctuation/symbols from words
def normalize_text(text):
norm_text = text.lower()
# Replace breaks with spaces
norm_text = norm_text.replace('<br />', ' ')
# Pad punctuation with spaces on both sides
for char in ['.', '"', ',', '(', ')', '!', '?', ';', ':']:
norm_text = norm_text.replace(char, ' ' + char + ' ')
return norm_text
import time
import smart_open
start = time.time()
if not os.path.isfile('aclImdb/alldata-id.txt'):
if not os.path.isdir(dirname):
if not os.path.isfile(filename):
# Download IMDB archive
print("Downloading IMDB archive...")
url = u'http://ai.stanford.edu/~amaas/data/sentiment/' + filename
r = requests.get(url)
with smart_open.smart_open(filename, 'wb') as f:
f.write(r.content)
tar = tarfile.open(filename, mode='r')
tar.extractall()
tar.close()
# Concatenate and normalize test/train data
print("Cleaning up dataset...")
folders = ['train/pos', 'train/neg', 'test/pos', 'test/neg', 'train/unsup']
alldata = u''
for fol in folders:
temp = u''
output = fol.replace('/', '-') + '.txt'
# Is there a better pattern to use?
txt_files = glob.glob(os.path.join(dirname, fol, '*.txt'))
for txt in txt_files:
with smart_open.smart_open(txt, "rb") as t:
t_clean = t.read().decode("utf-8")
for c in control_chars:
t_clean = t_clean.replace(c, ' ')
temp += t_clean
temp += "\n"
temp_norm = normalize_text(temp)
with smart_open.smart_open(os.path.join(dirname, output), "wb") as n:
n.write(temp_norm.encode("utf-8"))
alldata += temp_norm
with smart_open.smart_open(os.path.join(dirname, 'alldata-id.txt'), 'wb') as f:
for idx, line in enumerate(alldata.splitlines()):
num_line = u"_*{0} {1}\n".format(idx, line)
f.write(num_line.encode("utf-8"))
end = time.time()
print ("Total running time: ", end-start)
import os.path
assert os.path.isfile("aclImdb/alldata-id.txt"), "alldata-id.txt unavailable"
Explanation: Gensim Doc2vec Tutorial on the IMDB Sentiment Dataset
Introduction
In this tutorial, we will learn how to apply Doc2vec using gensim by recreating the results of <a href="https://arxiv.org/pdf/1405.4053.pdf">Le and Mikolov 2014</a>.
Bag-of-words Model
Previous state-of-the-art document representations were based on the <a href="https://en.wikipedia.org/wiki/Bag-of-words_model">bag-of-words model</a>, which represent input documents as a fixed-length vector. For example, borrowing from the Wikipedia article, the two documents
(1) John likes to watch movies. Mary likes movies too.
(2) John also likes to watch football games.
are used to construct a length 10 list of words
["John", "likes", "to", "watch", "movies", "Mary", "too", "also", "football", "games"]
so then we can represent the two documents as fixed length vectors whose elements are the frequencies of the corresponding words in our list
(1) [1, 2, 1, 1, 2, 1, 1, 0, 0, 0]
(2) [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]
Bag-of-words models are surprisingly effective but still lose information about word order. Bag of <a href="https://en.wikipedia.org/wiki/N-gram">n-grams</a> models consider word phrases of length n to represent documents as fixed-length vectors to capture local word order but suffer from data sparsity and high dimensionality.
Word2vec Model
Word2vec is a more recent model that embeds words in a high-dimensional vector space using a shallow neural network. The result is a set of word vectors where vectors close together in vector space have similar meanings based on context, and word vectors distant to each other have differing meanings. For example, strong and powerful would be close together and strong and Paris would be relatively far. There are two versions of this model based on skip-grams and continuous bag of words.
Word2vec - Skip-gram Model
The skip-gram <a href="http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/">word2vec</a> model, for example, takes in pairs (word1, word2) generated by moving a window across text data, and trains a 1-hidden-layer neural network based on the fake task of given an input word, giving us a predicted probability distribution of nearby words to the input. The hidden-to-output weights in the neural network give us the word embeddings. So if the hidden layer has 300 neurons, this network will give us 300-dimensional word embeddings. We use <a href="https://en.wikipedia.org/wiki/One-hot">one-hot</a> encoding for the words.
Word2vec - Continuous-bag-of-words Model
Continuous-bag-of-words Word2vec is very similar to the skip-gram model. It is also a 1-hidden-layer neural network. The fake task is based on the input context words in a window around a center word, predict the center word. Again, the hidden-to-output weights give us the word embeddings and we use one-hot encoding.
Paragraph Vector
Le and Mikolov 2014 introduces the <i>Paragraph Vector</i>, which outperforms more naïve representations of documents such as averaging the Word2vec word vectors of a document. The idea is straightforward: we act as if a paragraph (or document) is just another vector like a word vector, but we will call it a paragraph vector. We determine the embedding of the paragraph in vector space in the same way as words. Our paragraph vector model considers local word order like bag of n-grams, but gives us a denser representation in vector space compared to a sparse, high-dimensional representation.
Paragraph Vector - Distributed Memory (PV-DM)
This is the Paragraph Vector model analogous to Continuous-bag-of-words Word2vec. The paragraph vectors are obtained by training a neural network on the fake task of inferring a center word based on context words and a context paragraph. A paragraph is a context for all words in the paragraph, and a word in a paragraph can have that paragraph as a context.
Paragraph Vector - Distributed Bag of Words (PV-DBOW)
This is the Paragraph Vector model analogous to Skip-gram Word2vec. The paragraph vectors are obtained by training a neural network on the fake task of predicting a probability distribution of words in a paragraph given a randomly-sampled word from the paragraph.
Requirements
The following python modules are dependencies for this tutorial:
* testfixtures ( pip install testfixtures )
* statsmodels ( pip install statsmodels )
Load corpus
Let's download the IMDB archive if it is not already downloaded (84 MB). This will be our text data for this tutorial.
The data can be found here: http://ai.stanford.edu/~amaas/data/sentiment/
End of explanation
import gensim
from gensim.models.doc2vec import TaggedDocument
from collections import namedtuple
from smart_open import smart_open
SentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')
alldocs = [] # Will hold all docs in original order
with smart_open('aclImdb/alldata-id.txt', 'rb') as alldata:
alldata = alldata.read().decode('utf-8')
    for line_no, line in enumerate(alldata.splitlines()):
tokens = gensim.utils.to_unicode(line).split()
words = tokens[1:]
tags = [line_no] # 'tags = [tokens[0]]' would also work at extra memory cost
split = ['train', 'test', 'extra', 'extra'][line_no//25000] # 25k train, 25k test, 25k extra
sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no//12500] # [12.5K pos, 12.5K neg]*2 then unknown
alldocs.append(SentimentDocument(words, tags, split, sentiment))
train_docs = [doc for doc in alldocs if doc.split == 'train']
test_docs = [doc for doc in alldocs if doc.split == 'test']
doc_list = alldocs[:] # For reshuffling per pass
print('%d docs: %d train-sentiment, %d test-sentiment' % (len(doc_list), len(train_docs), len(test_docs)))
Explanation: The text data is small enough to be read into memory.
End of explanation
from gensim.models import Doc2Vec
import gensim.models.doc2vec
from collections import OrderedDict
import multiprocessing
cores = multiprocessing.cpu_count()
assert gensim.models.doc2vec.FAST_VERSION > -1, "This will be painfully slow otherwise"
simple_models = [
# PV-DM w/ concatenation - window=5 (both sides) approximates paper's 10-word total window size
Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores),
# PV-DBOW
Doc2Vec(dm=0, size=100, negative=5, hs=0, min_count=2, workers=cores),
# PV-DM w/ average
Doc2Vec(dm=1, dm_mean=1, size=100, window=10, negative=5, hs=0, min_count=2, workers=cores),
]
# Speed up setup by sharing results of the 1st model's vocabulary scan
simple_models[0].build_vocab(alldocs) # PV-DM w/ concat requires one special NULL word so it serves as template
print(simple_models[0])
for model in simple_models[1:]:
model.reset_from(simple_models[0])
print(model)
models_by_name = OrderedDict((str(model), model) for model in simple_models)
Explanation: Set-up Doc2Vec Training & Evaluation Models
We approximate the experiment of Le & Mikolov "Distributed Representations of Sentences and Documents" with guidance from Mikolov's example go.sh:
./word2vec -train ../alldata-id.txt -output vectors.txt -cbow 0 -size 100 -window 10 -negative 5 -hs 0 -sample 1e-4 -threads 40 -binary 0 -iter 20 -min-count 1 -sentence-vectors 1
We vary the following parameter choices:
* 100-dimensional vectors, as the 400-d vectors of the paper don't seem to offer much benefit on this task
* Similarly, frequent word subsampling seems to decrease sentiment-prediction accuracy, so it's left out
* cbow=0 means skip-gram which is equivalent to the paper's 'PV-DBOW' mode, matched in gensim with dm=0
* Added to that DBOW model are two DM models, one which averages context vectors (dm_mean) and one which concatenates them (dm_concat, resulting in a much larger, slower, more data-hungry model)
* A min_count=2 saves quite a bit of model memory, discarding only words that appear in a single doc (and are thus no more expressive than the unique-to-each doc vectors themselves)
End of explanation
from gensim.test.test_doc2vec import ConcatenatedDoc2Vec
models_by_name['dbow+dmm'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[2]])
models_by_name['dbow+dmc'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[0]])
Explanation: Le and Mikolov note that combining a paragraph vector from Distributed Bag of Words (DBOW) and Distributed Memory (DM) improves performance. We will follow suit, pairing the models together for evaluation. Here, we concatenate the paragraph vectors obtained from each model.
End of explanation
import numpy as np
import statsmodels.api as sm
from random import sample
# For timing
from contextlib import contextmanager
from timeit import default_timer
import time
@contextmanager
def elapsed_timer():
start = default_timer()
elapser = lambda: default_timer() - start
yield lambda: elapser()
end = default_timer()
elapser = lambda: end-start
def logistic_predictor_from_data(train_targets, train_regressors):
logit = sm.Logit(train_targets, train_regressors)
predictor = logit.fit(disp=0)
# print(predictor.summary())
return predictor
def error_rate_for_model(test_model, train_set, test_set, infer=False, infer_steps=3, infer_alpha=0.1, infer_subsample=0.1):
    """Report error rate on test_doc sentiments, using supplied model and train_docs."""
train_targets, train_regressors = zip(*[(doc.sentiment, test_model.docvecs[doc.tags[0]]) for doc in train_set])
train_regressors = sm.add_constant(train_regressors)
predictor = logistic_predictor_from_data(train_targets, train_regressors)
test_data = test_set
if infer:
if infer_subsample < 1.0:
test_data = sample(test_data, int(infer_subsample * len(test_data)))
test_regressors = [test_model.infer_vector(doc.words, steps=infer_steps, alpha=infer_alpha) for doc in test_data]
else:
test_regressors = [test_model.docvecs[doc.tags[0]] for doc in test_docs]
test_regressors = sm.add_constant(test_regressors)
# Predict & evaluate
test_predictions = predictor.predict(test_regressors)
corrects = sum(np.rint(test_predictions) == [doc.sentiment for doc in test_data])
errors = len(test_predictions) - corrects
error_rate = float(errors) / len(test_predictions)
return (error_rate, errors, len(test_predictions), predictor)
Explanation: Predictive Evaluation Methods
Let's define some helper methods for evaluating the performance of our Doc2vec using paragraph vectors. We will classify document sentiments using a logistic regression model based on our paragraph embeddings. We will compare the error rates based on word embeddings from our various Doc2vec models.
End of explanation
from collections import defaultdict
best_error = defaultdict(lambda: 1.0) # To selectively print only best errors achieved
from random import shuffle
import datetime
alpha, min_alpha, passes = (0.025, 0.001, 20)
alpha_delta = (alpha - min_alpha) / passes
print("START %s" % datetime.datetime.now())
for epoch in range(passes):
shuffle(doc_list) # Shuffling gets best results
for name, train_model in models_by_name.items():
# Train
duration = 'na'
train_model.alpha, train_model.min_alpha = alpha, alpha
with elapsed_timer() as elapsed:
train_model.train(doc_list, total_examples=len(doc_list), epochs=1)
duration = '%.1f' % elapsed()
# Evaluate
eval_duration = ''
with elapsed_timer() as eval_elapsed:
err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs)
eval_duration = '%.1f' % eval_elapsed()
best_indicator = ' '
if err <= best_error[name]:
best_error[name] = err
best_indicator = '*'
print("%s%f : %i passes : %s %ss %ss" % (best_indicator, err, epoch + 1, name, duration, eval_duration))
if ((epoch + 1) % 5) == 0 or epoch == 0:
eval_duration = ''
with elapsed_timer() as eval_elapsed:
infer_err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs, infer=True)
eval_duration = '%.1f' % eval_elapsed()
best_indicator = ' '
if infer_err < best_error[name + '_inferred']:
best_error[name + '_inferred'] = infer_err
best_indicator = '*'
print("%s%f : %i passes : %s %ss %ss" % (best_indicator, infer_err, epoch + 1, name + '_inferred', duration, eval_duration))
print('Completed pass %i at alpha %f' % (epoch + 1, alpha))
alpha -= alpha_delta
print("END %s" % str(datetime.datetime.now()))
Explanation: Bulk Training
We use an explicit multiple-pass, alpha-reduction approach as sketched in this gensim doc2vec blog post with added shuffling of corpus on each pass.
Note that vector training is occurring on all documents of the dataset, which includes all TRAIN/TEST/DEV docs.
We evaluate each model's sentiment predictive power based on error rate, and the evaluation is repeated after each pass so we can see the rates of relative improvement. The base numbers reuse the TRAIN and TEST vectors stored in the models for the logistic regression, while the inferred results use newly-inferred TEST vectors.
(On a 4-core 2.6Ghz Intel Core i7, these 20 passes training and evaluating 3 main models takes about an hour.)
End of explanation
# Print best error rates achieved
print("Err rate Model")
for rate, name in sorted((rate, name) for name, rate in best_error.items()):
print("%f %s" % (rate, name))
Explanation: Achieved Sentiment-Prediction Accuracy
End of explanation
doc_id = np.random.randint(simple_models[0].docvecs.count) # Pick random doc; re-run cell for more examples
print('for doc %d...' % doc_id)
for model in simple_models:
inferred_docvec = model.infer_vector(alldocs[doc_id].words)
print('%s:\n %s' % (model, model.docvecs.most_similar([inferred_docvec], topn=3)))
Explanation: In our testing, contrary to the results of the paper, PV-DBOW performs best. Concatenating vectors from different models only offers a small predictive improvement over averaging vectors. The best results reproduced here are just under a 10% error rate, still a long way from the paper's reported 7.42% error rate.
Examining Results
Are inferred vectors close to the precalculated ones?
End of explanation
import random
doc_id = np.random.randint(simple_models[0].docvecs.count) # pick random doc, re-run cell for more examples
model = random.choice(simple_models) # and a random model
sims = model.docvecs.most_similar(doc_id, topn=model.docvecs.count) # get *all* similar documents
print(u'TARGET (%d): «%s»\n' % (doc_id, ' '.join(alldocs[doc_id].words)))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(alldocs[sims[index][0]].words)))
Explanation: (Yes, here the stored vector from 20 epochs of training is usually one of the closest to a freshly-inferred vector for the same words. Note the defaults for inference are very abbreviated – just 3 steps starting at a high alpha – and likely need tuning for other applications.)
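For instance (a sketch reusing the same keyword arguments the evaluation helper above already passes), inference can be given more steps and a gentler starting alpha:

```python
inferred = simple_models[0].infer_vector(alldocs[doc_id].words, steps=20, alpha=0.025)
```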
Do close documents seem more related than distant ones?
End of explanation
word_models = simple_models[:]
import random
from IPython.display import HTML
# pick a random word with a suitable number of occurrences
while True:
word = random.choice(word_models[0].wv.index2word)
if word_models[0].wv.vocab[word].count > 10:
break
# or uncomment below line, to just pick a word from the relevant domain:
#word = 'comedy/drama'
similars_per_model = [str(model.most_similar(word, topn=20)).replace('), ','),<br>\n') for model in word_models]
similar_table = ("<table><tr><th>" +
"</th><th>".join([str(model) for model in word_models]) +
"</th></tr><tr><td>" +
"</td><td>".join(similars_per_model) +
"</td></tr></table>")
print("most similar words for '%s' (%d occurences)" % (word, simple_models[0].wv.vocab[word].count))
HTML(similar_table)
Explanation: (Somewhat, in terms of reviewer tone, movie genre, etc... the MOST cosine-similar docs usually seem more like the TARGET than the MEDIAN or LEAST.)
Do the word vectors show useful similarities?
End of explanation
# Download this file: https://github.com/nicholas-leonard/word2vec/blob/master/questions-words.txt
# and place it in the local directory
# Note: this takes many minutes
if os.path.isfile('questions-words.txt'):
for model in word_models:
sections = model.accuracy('questions-words.txt')
correct, incorrect = len(sections[-1]['correct']), len(sections[-1]['incorrect'])
print('%s: %0.2f%% correct (%d of %d)' % (model, float(correct*100)/(correct+incorrect), correct, correct+incorrect))
Explanation: Do the DBOW words look meaningless? That's because the gensim DBOW model doesn't train word vectors – they remain at their random initialized values – unless you ask with the dbow_words=1 initialization parameter. Concurrent word-training slows DBOW mode significantly, and offers little improvement (and sometimes a little worsening) of the error rate on this IMDB sentiment-prediction task.
Words from DM models tend to show meaningfully similar words when there are many examples in the training data (as with 'plot' or 'actor'). (All DM modes inherently involve word vector training concurrent with doc vector training.)
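If word vectors are wanted from DBOW as well, the flag mentioned above can be enabled at model construction time (a sketch mirroring the hyperparameters used earlier; expect slower training):

```python
Doc2Vec(dm=0, dbow_words=1, size=100, negative=5, hs=0, min_count=2, workers=cores)
```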
Are the word vectors from this dataset any good at analogies?
End of explanation
This cell left intentionally erroneous.
Explanation: Even though this is a tiny, domain-specific dataset, it shows some meager capability on the general word analogies – at least for the DM/concat and DM/mean models which actually train word vectors. (The untrained random-initialized words of the DBOW model of course fail miserably.)
Slop
End of explanation
from gensim.models import KeyedVectors
w2v_g100b = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)
w2v_g100b.compact_name = 'w2v_g100b'
word_models.append(w2v_g100b)
Explanation: To mix the Google dataset (if locally available) into the word tests...
End of explanation
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
rootLogger = logging.getLogger()
rootLogger.setLevel(logging.INFO)
Explanation: To get copious logging output from above steps...
End of explanation
%load_ext autoreload
%autoreload 2
Explanation: To auto-reload python code while developing...
End of explanation |
2,616 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with ECoG data
MNE supports working with more than just MEG and EEG data. Here we show some
of the functions that can be used to facilitate working with
electrocorticography (ECoG) data.
Step1: Let's load some ECoG electrode locations and names, and turn them into
a
Step2: Now that we have our electrode positions in MRI coordinates, we can create
our measurement info structure.
Step3: We can then plot the locations of our electrodes on our subject's brain.
<div class="alert alert-info"><h4>Note</h4><p>These are not real electrodes for this subject, so they
do not align to the cortical surface perfectly.</p></div>
Step4: Sometimes it is useful to make a scatterplot for the current figure view.
This is best accomplished with matplotlib. We can capture an image of the
current mayavi view, along with the xy position of each electrode, with the
snapshot_brain_montage function. | Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
# Chris Holdgraf <choldgraf@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from mayavi import mlab
import mne
from mne.viz import plot_alignment, snapshot_brain_montage
print(__doc__)
Explanation: Working with ECoG data
MNE supports working with more than just MEG and EEG data. Here we show some
of the functions that can be used to facilitate working with
electrocorticography (ECoG) data.
End of explanation
mat = loadmat(mne.datasets.misc.data_path() + '/ecog/sample_ecog.mat')
ch_names = mat['ch_names'].tolist()
elec = mat['elec'] # electrode positions given in meters
dig_ch_pos = dict(zip(ch_names, elec))
mon = mne.channels.DigMontage(dig_ch_pos=dig_ch_pos)
print('Created %s channel positions' % len(ch_names))
Explanation: Let's load some ECoG electrode locations and names, and turn them into
a :class:mne.channels.DigMontage class.
End of explanation
info = mne.create_info(ch_names, 1000., 'ecog', montage=mon)
Explanation: Now that we have our electrode positions in MRI coordinates, we can create
our measurement info structure.
End of explanation
subjects_dir = mne.datasets.sample.data_path() + '/subjects'
fig = plot_alignment(info, subject='sample', subjects_dir=subjects_dir,
surfaces=['pial'])
mlab.view(200, 70)
Explanation: We can then plot the locations of our electrodes on our subject's brain.
<div class="alert alert-info"><h4>Note</h4><p>These are not real electrodes for this subject, so they
do not align to the cortical surface perfectly.</p></div>
End of explanation
# We'll once again plot the surface, then take a snapshot.
fig = plot_alignment(info, subject='sample', subjects_dir=subjects_dir,
surfaces='pial')
mlab.view(200, 70)
xy, im = snapshot_brain_montage(fig, mon)
# Convert from a dictionary to array to plot
xy_pts = np.vstack([xy[ch] for ch in info['ch_names']])
# Define an arbitrary "activity" pattern for viz
activity = np.linspace(100, 200, xy_pts.shape[0])
# This allows us to use matplotlib to create arbitrary 2d scatterplots
_, ax = plt.subplots(figsize=(10, 10))
ax.imshow(im)
ax.scatter(*xy_pts.T, c=activity, s=200, cmap='coolwarm')
ax.set_axis_off()
plt.show()
Explanation: Sometimes it is useful to make a scatterplot for the current figure view.
This is best accomplished with matplotlib. We can capture an image of the
current mayavi view, along with the xy position of each electrode, with the
snapshot_brain_montage function.
End of explanation |
2,617 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: We can use negative and regular indexing with a list
Step2: Lists can contain strings, floats, and integers. We can nest other lists, and we can also nest tuples and other data structures. The same indexing conventions apply for nesting
Step3: We can also perform slicing in lists. For example, if we want the last two elements, we use the following command
Step4: <a ><img src = "https
Step5: We can use the method "extend" to add new elements to the list
Step6: Another similar method is 'append'. If we apply 'append' instead of 'extend', we add one element to the list
Step7: Each time we apply a method, the list changes. If we apply "extend" we add two new elements to the list. The list L is then modified by adding two new elements
Step8: If we append the list ['a','b'] we have one new element consisting of a nested list
Step9: As lists are mutable, we can change them. For example, we can change the first element as follows
Step10: We can also delete an element of a list using the del command
Step11: We can convert a string to a list using 'split'. For example, the method split translates every group of characters separated by a space into an element in a list
Step12: We can use the split function to separate strings on a specific character. We pass the character we would like to split on into the argument, which in this case is a comma. The result is a list, and each element corresponds to a set of characters that have been separated by a comma
Step13: When we set one variable B equal to A, both A and B are referencing the same list in memory
Step14: <a ><img src = 'https
Step15: This is demonstrated in the following figure
Step16: Variable B references a new copy or clone of the original list; this is demonstrated in the following figure
Step17: <a id="ref2"></a>
<center><h2>Quiz</h2></center>
Create a list 'a_list' , with the following elements 1, “hello”, [1,2,3 ] and True.
Step18: <div align="right">
<a href="#q1" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q1" class="collapse">
```
a_list=[1, 'hello', [1,2,3 ] , True]
a_list
```
</div>
Find the value stored at index 1 of 'a_list'.
Step19: <div align="right">
<a href="#q2" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q2" class="collapse">
```
a_list[1]
```
</div>
Retrieve the elements stored at index 1 and 2 of 'a_list'.
Step20: <div align="right">
<a href="#q3" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q3" class="collapse">
```
a_list[1 | Python Code:
L = ["Michael Jackson" , 10.1,1982]
L
Explanation: <a href="http://cocl.us/topNotebooksPython101Coursera"><img src = "https://ibm.box.com/shared/static/yfe6h4az47ktg2mm9h05wby2n7e8kei3.png" width = 750, align = "center"></a>
<a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 300, align = "center"></a>
<h1 align=center><font size = 5>LISTS IN PYTHON</font></h1>
Table of Contents
<div class="alert alert-block alert-info" style="margin-top: 20px">
<li><a href="#ref0">About the Dataset</a></li>
<li><a href="#ref1">Lists</a></li>
<li><a href="#ref2">Quiz</a></li>
<br>
<p></p>
Estimated Time Needed: <strong>15 min</strong>
</div>
<hr>
<a id="ref0"></a>
<center><h2>About the Dataset</h2></center>
Imagine you received many music recommendations from your friends and compiled all of the recommendations into a table, with specific information about each album.
The table has one row for each album and several columns:
artist - Name of the artist
album - Name of the album
released_year - Year the album was released
length_min_sec - Length of the album (hours,minutes,seconds)
genre - Genre of the album
music_recording_sales_millions - Music recording sales (millions in USD) on SONG://DATABASE
claimed_sales_millions - Album's claimed sales (millions in USD) on SONG://DATABASE
date_released - Date on which the album was released
soundtrack - Indicates if the album is the movie soundtrack (Y) or (N)
rating_of_friends - Indicates the rating from your friends from 1 to 10
<br>
<br>
The dataset can be seen below:
<font size="1">
<table font-size:xx-small style="width:25%">
<tr>
<th>Artist</th>
<th>Album</th>
<th>Released</th>
<th>Length</th>
<th>Genre</th>
<th>Music recording sales (millions)</th>
<th>Claimed sales (millions)</th>
<th>Released</th>
<th>Soundtrack</th>
<th>Rating (friends)</th>
</tr>
<tr>
<td>Michael Jackson</td>
<td>Thriller</td>
<td>1982</td>
<td>00:42:19</td>
<td>Pop, rock, R&B</td>
<td>46</td>
<td>65</td>
<td>30-Nov-82</td>
<td></td>
<td>10.0</td>
</tr>
<tr>
<td>AC/DC</td>
<td>Back in Black</td>
<td>1980</td>
<td>00:42:11</td>
<td>Hard rock</td>
<td>26.1</td>
<td>50</td>
<td>25-Jul-80</td>
<td></td>
<td>8.5</td>
</tr>
<tr>
<td>Pink Floyd</td>
<td>The Dark Side of the Moon</td>
<td>1973</td>
<td>00:42:49</td>
<td>Progressive rock</td>
<td>24.2</td>
<td>45</td>
<td>01-Mar-73</td>
<td></td>
<td>9.5</td>
</tr>
<tr>
<td>Whitney Houston</td>
<td>The Bodyguard</td>
<td>1992</td>
<td>00:57:44</td>
<td>Soundtrack/R&B, soul, pop</td>
<td>26.1</td>
<td>50</td>
<td>25-Jul-80</td>
<td>Y</td>
<td>7.0</td>
</tr>
<tr>
<td>Meat Loaf</td>
<td>Bat Out of Hell</td>
<td>1977</td>
<td>00:46:33</td>
<td>Hard rock, progressive rock</td>
<td>20.6</td>
<td>43</td>
<td>21-Oct-77</td>
<td></td>
<td>7.0</td>
</tr>
<tr>
<td>Eagles</td>
<td>Their Greatest Hits (1971-1975)</td>
<td>1976</td>
<td>00:43:08</td>
<td>Rock, soft rock, folk rock</td>
<td>32.2</td>
<td>42</td>
<td>17-Feb-76</td>
<td></td>
<td>9.5</td>
</tr>
<tr>
<td>Bee Gees</td>
<td>Saturday Night Fever</td>
<td>1977</td>
<td>1:15:54</td>
<td>Disco</td>
<td>20.6</td>
<td>40</td>
<td>15-Nov-77</td>
<td>Y</td>
<td>9.0</td>
</tr>
<tr>
<td>Fleetwood Mac</td>
<td>Rumours</td>
<td>1977</td>
<td>00:40:01</td>
<td>Soft rock</td>
<td>27.9</td>
<td>40</td>
<td>04-Feb-77</td>
<td></td>
<td>9.5</td>
</tr>
</table>
</font>
<hr>
<a id="ref1"></a>
<center><h2>Lists</h2></center>
We are going to take a look at lists in Python. A list is a sequenced collection of different objects such as integers, strings, and other lists as well. The address of each element within a list is called an 'index'. An index is used to access and refer to items within a list.
<a ><img src = "https://ibm.box.com/shared/static/eln445fv5nzv3wlm4u8dnfhbrcrv0hff.png" width = 1000, align = "center"></a>
<h4 align=center> Representation of a list
</h4>
To create a list, type the list within square brackets [ ], with your content inside the brackets and separated by commas. Let's try it!
End of explanation
print('the same element using negative and positive indexing:\n Positive:', L[0],
      '\n Negative:', L[-3])
print('the same element using negative and positive indexing:\n Positive:', L[1],
      '\n Negative:', L[-2])
print('the same element using negative and positive indexing:\n Positive:', L[2],
      '\n Negative:', L[-1])
Explanation: We can use negative and regular indexing with a list :
<a ><img src = "https://ibm.box.com/shared/static/a7ac9lnvmcaz29n86ffez4as27fl3n9m.png" width = 1000, align = "center"></a>
<h4 align=center> Representation of a list
</h4>
End of explanation
temp = [ "Michael Jackson", 10.1,1982,[1,2],("A",1) ]
for i in range(0, len(temp)):
print(temp[i], temp[-len(temp)+i])
Explanation: Lists can contain strings, floats, and integers. We can nest other lists, and we can also nest tuples and other data structures. The same indexing conventions apply for nesting:
End of explanation
L = [ "Michael Jackson", 10.1,1982,"MJ",1]
L
Explanation: We can also perform slicing in lists. For example, if we want the last two elements, we use the following command:
End of explanation
L[3:5]
Explanation: <a ><img src = "https://ibm.box.com/shared/static/pt3pfp1sg5okwuwwpy0dnj8e94fl2mwy.png" width = 1000, align = "center"></a>
<h4 align=center> Representation of a list
</h4>
End of explanation
L = [ "Michael Jackson", 10.2]
L.extend(['pop',10])
L
Explanation: We can use the method "extend" to add new elements to the list:
End of explanation
L = [ "Michael Jackson", 10.2]
L.append(['pop',10])
L
Explanation: Another similar method is 'append'. If we apply 'append' instead of 'extend', we add only one element to the list:
End of explanation
L = [ "Michael Jackson", 10.2]
L.extend(['pop',10])
L
Explanation: Each time we apply a method, the list changes. If we apply "extend" we add two new elements to the list. The list L is then modified by adding two new elements:
End of explanation
L.append(['a','b'])
L
Explanation: If we append the list ['a','b'] we have one new element consisting of a nested list:
End of explanation
A = ["disco",10,1.2]
print('Before change:', A)
A[0] = 'hard rock'
print('After change:', A)
Explanation: As lists are mutable, we can change them. For example, we can change the first element as follows:
End of explanation
print('Before change:', A)
del(A[0])
print('After change:', A)
Explanation: We can also delete an element of a list using the del command:
End of explanation
'hard rock'.split()
Explanation: We can convert a string to a list using 'split'. For example, the method split translates every group of characters separated by a space into an element in a list:
End of explanation
'A,B,C,D'.split(',')
Explanation: We can use the split function to separate strings on a specific character. We pass the character we would like to split on into the argument, which in this case is a comma. The result is a list, and each element corresponds to a set of characters that have been separated by a comma:
End of explanation
A = ["hard rock",10,1.2]
B = A
print('A:', A)
print('B:', B)
Explanation: When we set one variable B equal to A, both A and B are referencing the same list in memory:
End of explanation
print('Before changing A[0], B[0] is ',B[0])
A[0] = "banana"
print('After changing A[0], A[0] is ',A[0])
print('After changing A[0], B[0] is ',B[0])
Explanation: <a ><img src = 'https://ibm.box.com/shared/static/7g2u8hqqb4birdwn7m9uir4s9wfj8mko.png' width = 1000, align = "center"></a>
Initially, the value of the first element in B is set as hard rock. If we change the first element in A to 'banana', we get an unexpected side effect. As A and B are referencing the same list, if we change list A, then list B also changes. If we check the first element of B we get banana instead of hard rock:
End of explanation
B = A[:]
B
Explanation: This is demonstrated in the following figure:
<a ><img src = https://ibm.box.com/shared/static/thdu6y5pzh99qpun4tu2fjvj86st0hbu.gif width = 1000, align = "center"></a>
You can clone list A by using the following syntax:
End of explanation
print('Before changing A[0], B[0] is ',B[0])
A[0] = "apple"
print('After changing A[0], A[0] is ',A[0])
print('After changing A[0], B[0] is ',B[0])
Explanation: Variable B references a new copy or clone of the original list; this is demonstrated in the following figure:
<a ><img src = https://ibm.box.com/shared/static/gwx86gaoeizqjvx7xj96cb8i9hn684ei.gif width = 1000, align = "center"></a>
Now if you change A, B will not change:
End of explanation
a_list = [1, "hello", [1, 2, 3], True]
a_list
Explanation: <a id="ref2"></a>
<center><h2>Quiz</h2></center>
Create a list 'a_list', with the following elements: 1, "hello", [1,2,3] and True.
End of explanation
a_list[1]
Explanation: <div align="right">
<a href="#q1" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q1" class="collapse">
```
a_list=[1, 'hello', [1,2,3 ] , True]
a_list
```
</div>
Find the value stored at index 1 of 'a_list'.
End of explanation
a_list[1:3]
Explanation: <div align="right">
<a href="#q2" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q2" class="collapse">
```
a_list[1]
```
</div>
Retrieve the elements stored at index 1 and 2 of 'a_list'.
End of explanation
A = [1, 'a']
print(A)
B = [2, 1, 'd']
print(B)
print(A + B)
Explanation: <div align="right">
<a href="#q3" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q3" class="collapse">
```
a_list[1:3]
```
#### 4) Concatenate the following lists A=[1,'a'] and B=[2,1,'d']:
End of explanation |
2,618 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting Data
Visualizing the metadata is very useful to get a first look at the nature and quality of the run.
First we need a DataFrame with the meta data. You can make one with porekit.gather_metadata once, and then load it later from a hdf file or something similar.
Step1: Read length distribution
Step2: This is a histogram showing the distribution of read length. In this case it's the max of template and complement length. This plot ignores a small part of the longest reads in order to be more readable.
Reads over time
Step3: Yield Curves
Step4: This plot shows the sequence yields in Megabases over time.
Template length vs complement length
Step5: In the standard 2D library preparation, a "hairpin" is attached to one end of double stranded DNA. Then, when the strand goes through the nanopore, first one strand translocates, then the hairpin and finally the complement. Because template and complement both carry the same information, they can be used to improve accuracy of the basecalling.
However, not all molecules have a hairpin attached, not all have a complement strand, and in most cases, the template and complement length does not match completely. This can be seen in the plot above, where most data points are on a diagonal with template and complement length being almost the same. There are more points under the diagonal than above it, and there is a solid line at the bottom, showing reads with no complement.
Occupancy
Step6: This shows the occupancy of pores over time. In general, pores break over time, which is a major factor in limiting the total yield over the lifetime of a flowcell.
Squiggle Dots
The squiggle_dots function takes a Fast5 File and outputs a plot of all event means as dots on a graph. This way of plotting event data does a better job at characterizing a long read than the traditional "squiggle" plot. In this example there is a marked difference between the traces of the template and the complement, as segmented by the detected hairpin section.
Step7: Customizing plots
The plots inside porekit.plots are designed to work best inside the Jupyter notebook when exploring nanopore data interactively, and showing nanopore data as published notebooks or presentations. This is why they use colors and a wide aspect ratio.
But the plots can be customized somewhat using standard matplotlib. Every plot function returns a figure and an axis object
Step8: Sometimes you want to subdivide a figure into multiple plots. You can do it like this | Python Code:
df = pd.read_hdf("../examples/data/ru9_meta.h5", "meta")
Explanation: Plotting Data
Visualizing the metadata is very useful to get a first look at the nature and quality of the run.
First we need a DataFrame with the meta data. You can make one with porekit.gather_metadata once, and then load it later from a hdf file or something similar.
End of explanation
porekit.plots.read_length_distribution(df);
Explanation: Read length distribution
End of explanation
porekit.plots.reads_vs_time(df);
Explanation: This is a histogram showing the distribution of read length. In this case it's the max of template and complement length. This plot ignores a small part of the longest reads in order to be more readable.
Reads over time
End of explanation
porekit.plots.yield_curves(df);
Explanation: Yield Curves
End of explanation
porekit.plots.template_vs_complement(df);
Explanation: This plot shows the sequence yields in Megabases over time.
Template length vs complement length
End of explanation
porekit.plots.occupancy(df);
Explanation: In the standard 2D library preparation, a "hairpin" is attached to one end of double stranded DNA. Then, when the strand goes through the nanopore, first one strand translocates, then the hairpin and finally the complement. Because template and complement both carry the same information, they can be used to improve accuracy of the basecalling.
However, not all molecules have a hairpin attached, not all have a complement strand, and in most cases, the template and complement length does not match completely. This can be seen in the plot above, where most data points are on a diagonal with template and complement length being almost the same. There are more points under the diagonal than above it, and there is a solid line at the bottom, showing reads with no complement.
Occupancy
End of explanation
fast5 = porekit.Fast5File(df.iloc[1002].absolute_filename)
porekit.plots.squiggle_dots(fast5)
fast5.close()
Explanation: This shows the occupancy of pores over time. In general, pores break over time, which is a major factor in limiting the total yield over the lifetime of a flowcell.
Squiggle Dots
The squiggle_dots function takes a Fast5 File and outputs a plot of all event means as dots on a graph. This way of plotting event data does a better job at characterizing a long read than the traditional "squiggle" plot. In this example there is a marked difference between the traces of the template and the complement, as segmented by the detected hairpin section.
End of explanation
f, ax = porekit.plots.read_length_distribution(df)
f.suptitle("Hello World");
f.set_figwidth(6)
Explanation: Customizing plots
The plots inside porekit.plots are designed to work best inside the Jupyter notebook when exploring nanopore data interactively, and showing nanopore data as published notebooks or presentations. This is why they use colors and a wide aspect ratio.
But the plots can be customized somewhat using standard matplotlib. Every plot function returns a figure and an axis object:
End of explanation
f, axes = plt.subplots(1,2)
f.set_figwidth(14)
ax1, ax2 = axes
porekit.plots.read_length_distribution(df, ax=ax1);
porekit.plots.yield_curves(df, ax=ax2);
Explanation: Sometimes you want to subdivide a figure into multiple plots. You can do it like this:
End of explanation |
2,619 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Representation of trajectories
sktracker.trajectories.Trajectories is probably the most important class in sktracker as it represents detected objects and links between them. Trajectories is a subclass of pandas.DataFrame which provides convenient methods.
A Trajectories object consists of several single trajectories. Each row contains an object which has several features (columns) and two integers as index. The first integer is a time stamp t_stamp and the second one is a label. Objects with the same label belong to the same trajectory.
Be aware that t_stamp is a time index and does not represent time in seconds or minutes. Time position (in seconds or minutes) can be stored as an object feature in a column ('t' for example).
Trajectories creation
All you need to create a Trajectories object is a pandas.DataFrame. Note that sktracker makes heavy use of pandas.DataFrame. If you are not familiar with it, take a look at the wonderful Pandas documentation.
Step1: To create Trajectories, the dataframe needs to have
Step2: Trajectories viewer
The first thing you want to do is probably to visualize the trajectories you're working on. First, load a sample dataset.
Step3: You can change axis to display.
Step4: You can also add a legend.
Step5: You can also build more complex figures.
Step6: Trajectories.show() is a nice way to quickly build visualizations. However, the sktracker.ui module provides more complex functions and classes in order to visualize your trajectories/dataset. See here for more details.
Retrieve information
Here you will find how to retrieve information specific to trajectories. Remember that trajectory and segment refer to the same thing, as do object/peak and spot.
Step7: Some other methods such as
Step8: Global modifications
Reverse trajectories according to a column (time column makes sense most of the time
Step9: Merge two trajectories together taking care to not mix labels.
Step10: Relabel trajectories from zero. Note that it will also sort the label order.
Step11: time_interpolate() can "fill" holes in your dataset. For example if you have trajs with a missing timepoint, this method will try to "guess" the value of the missing timepoint.
Step12: The method returns a new Trajectories with interpolated values for the missing timepoints. v_* values are speeds and a_* values are accelerations.
Step13: See also
Step14: Remove a segment/trajectory
Step15: Merge two segments
Step16: Cut a segment
Step17: Duplicate a segment
Step18: Because hard-coded trajectory modifications can take a long time and be tedious, we designed a smart GUI that allows all kinds of local trajectory edits such as remove, duplicate, merge and so forth.
For more info, please go here.
Measurements on trajectories
Step19: Get the differences between consecutive timepoints for the same trajectory (label).
Step20: Get the instantaneous speeds between consecutive timepoints for the same trajectory (label).
%matplotlib inline
import pandas as pd
import numpy as np
trajs = pd.DataFrame(np.random.random((30, 3)), columns=['x', 'y', 'z'])
trajs['t_stamp'] = np.sort(np.random.choice(range(10), (len(trajs),)))
trajs['label'] = list(range(len(trajs)))
trajs['t'] = trajs['t_stamp'] * 60 # t are in seconds for example
trajs.set_index(['t_stamp', 'label'], inplace=True)
trajs.sort_index(inplace=True)
trajs.head()
Explanation: Representation of trajectories
sktracker.trajectories.Trajectories is probably the most important class in sktracker as it represents detected objects and links between them. Trajectories is a subclass of pandas.DataFrame which provides convenient methods.
A Trajectories object consists of several single trajectories. Each row contains an object which has several features (columns) and two integers as index. The first integer is a time stamp t_stamp and the second one is a label. Objects with the same label belong to the same trajectory.
Be aware that t_stamp is a time index and does not represent time in seconds or minutes. Time position (in seconds or minutes) can be stored as an object feature in a column ('t' for example).
Trajectories creation
All you need to create a Trajectories object is a pandas.DataFrame. Note that sktracker makes heavy use of pandas.DataFrame. If you are not familiar with it, take a look at the wonderful Pandas documentation.
End of explanation
from sktracker.trajectories import Trajectories
# Create a Trajectories instance
trajs = Trajectories(trajs)
Explanation: To create Trajectories, the dataframe needs to have:
columns ('x', 'y', 'z', 't' here)
a multi index (see pandas doc) with two levels: t_stamp and label
While t_stamp and label are required, columns can contain anything you want/need.
End of explanation
import numpy as np
from sktracker import data
from sktracker.trajectories import Trajectories
trajs = data.with_gaps_df()
trajs = Trajectories(trajs)
trajs.head()
trajs.show()
Explanation: Trajectories viewer
The first thing you want to do is probably to visualize the trajectories you're working on. First, load a sample dataset.
End of explanation
trajs.show(xaxis='t', yaxis='y')
Explanation: You can change axis to display.
End of explanation
trajs.show(legend=True)
Explanation: You can also add a legend.
End of explanation
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15, 3))
ax1 = plt.subplot2grid((1, 3), (0, 0))
ax2 = plt.subplot2grid((1, 3), (0, 1))
ax3 = plt.subplot2grid((1, 3), (0, 2))
trajs.show(xaxis='t', yaxis='x', ax=ax1)
trajs.show(xaxis='t', yaxis='y', ax=ax2)
trajs.show(xaxis='t', yaxis='z', ax=ax3)
Explanation: You can also build more complex figures.
End of explanation
import numpy as np
from sktracker import data
from sktracker.trajectories import Trajectories
trajs = data.with_gaps_df()
trajs = Trajectories(trajs)
trajs.head()
trajs.t_stamps
# Each label corresponds to one segment/trajectory
trajs.labels
# Get dict if dataframe index of segments (sorted by labels)
trajs.segment_idxs[0]
# Iterator over segments
for label, segment in trajs.iter_segments:
print(label, end=' ')
# Get bounds (first and last spots/objects) of each segment
trajs.get_bounds()
# Get a different colors for each segments
trajs.get_colors()
Explanation: Trajectories.show() is a nice way to quickly build visualizations. However, the sktracker.ui module provides more complex functions and classes in order to visualize your trajectories/dataset. See here for more details.
Retrieve information
Here you will find how to retrieve information specific to trajectories. Remember that trajectory and segment refer to the same thing, as do object/peak and spot.
End of explanation
import numpy as np
from sktracker import data
from sktracker.trajectories import Trajectories
trajs = data.with_gaps_df()
trajs = Trajectories(trajs)
trajs.head()
Explanation: Some other methods such as:
get_segments()
get_longest_segments()
get_shortest_segments()
get_t_stamps_correspondences()
See Trajectories API for more information.
Modify trajectories
Automatic object detection and tracking is very powerful. However, sometimes you'll need to manually edit and modify trajectories. Here we present methods to help you with that. They are separated into two kinds: global and local trajectory modifications.
End of explanation
reversed_traj = trajs.reverse(time_column='t', inplace=False)
reversed_traj['t'].head()
Explanation: Global modifications
Reverse trajectories according to a column (time column makes sense most of the time :-))
End of explanation
print("Original trajs labels:", trajs.labels)
merged_trajs = trajs.merge(trajs.copy())
print("Merged trajs new labels:", merged_trajs.labels)
Explanation: Merge two trajectories together taking care to not mix labels.
End of explanation
print("Original trajs labels:", merged_trajs.labels)
relabeled_trajs = merged_trajs.relabel_fromzero()
print("Relabeled trajs labels:", relabeled_trajs.labels)
Explanation: Relabel trajectories from zero. Note that it will also sort the label order.
End of explanation
# t = 1 is missing here
missing_trajs = Trajectories(trajs[trajs['t'] != 1])
missing_trajs.head(10)
Explanation: time_interpolate() can "fill" holes in your dataset. For example if you have trajs with a missing timepoint, this method will try to "guess" the value of the missing timepoint.
End of explanation
# t = 1 has been "guessed"
interpolated_trajs = missing_trajs.time_interpolate()
interpolated_trajs.head(10)
Explanation: The method returns a new Trajectories with interpolated values for the missing timepoints. v_* values are speeds and a_* values are accelerations.
End of explanation
trajs.head()
trajs.remove_spots((0, 2), inplace=False).head()
Explanation: See also:
relabel()
scale()
project(): project each spot on a line specified by two spots.
See Trajectories API for more information.
Local modifications
Let's see how to edit trajectory details. In almost all methods, spots are identified with a tuple (t_stamp, label) and a trajectory by an integer label.
Remove a spot (can be a list of spots)
End of explanation
trajs.labels
trajs.remove_segments(3).labels
Explanation: Remove a segment/trajectory
End of explanation
print("Size of segment #0 :", len(trajs.get_segments()[0]))
print("Size of segment #3 :", len(trajs.get_segments()[3]))
merged_trajs = trajs.merge_segments((0, 3), inplace=False)
print("Size of segment #0 (merged with #3):", len(merged_trajs.get_segments()[0]))
Explanation: Merge two segments
End of explanation
print("Size of segment #4:", len(trajs.get_segments()[4]))
cut_trajs = trajs.cut_segments((13, 4), inplace=False)
print("Size of segment #4 :", len(cut_trajs.get_segments()[4]))
print("Size of segment #7 (new segment after cut) :", len(cut_trajs.get_segments()[7]))
Explanation: Cut a segment
End of explanation
dupli_trajs = trajs.duplicate_segments(4)
# Check wether #4 and #7 (duplicated) are the same
np.all(dupli_trajs.get_segments()[4].values == dupli_trajs.get_segments()[7].values)
Explanation: Duplicate a segment
End of explanation
from sktracker import data
from sktracker.trajectories import Trajectories
trajs = Trajectories(data.brownian_trajs_df())
Explanation: Because hard-coded trajectory modifications can take a long time and be tedious, we designed a smart GUI that allows all kinds of local trajectory edits such as remove, duplicate, merge and so forth.
For more info, please go here.
Measurements on trajectories
End of explanation
trajs.get_diff().head(15)
Explanation: Get the differences between consecutive timepoints for the same trajectory (label).
End of explanation
trajs.get_speeds().head(15)
# Run this cell first.
%load_ext autoreload
%autoreload 2
Explanation: Get the instantaneous speeds between consecutive timepoints for the same trajectory (label).
End of explanation |
2,620 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Thermochemistry Validation Test
Han, Kehang (hkh12@mit.edu)
This notebook is designed to use a big set of tricyclics for testing the performance of new polycyclics thermo estimator. Currently the dataset contains 2903 tricyclics that passed isomorphic check.
Set up
Step6: Validation Test
Collect data from heuristic algorithm and qm library
Step7: Create pandas dataframe for easy data validation
Step8: categorize error sources
Step9: Parity Plot
Step10: Histogram of abs(heuristic-qm) | Python Code:
from rmgpy.data.rmg import RMGDatabase
from rmgpy import settings
from rmgpy.species import Species
from rmgpy.molecule import Group
from rmgpy.rmg.main import RMG
from IPython.display import display
import numpy as np
import os
import pandas as pd
from pymongo import MongoClient
import logging
logging.disable(logging.CRITICAL)
from bokeh.charts import Histogram
from bokeh.plotting import figure, show
from bokeh.io import output_notebook
output_notebook()
host = 'mongodb://user:user@rmg.mit.edu/admin'
port = 27018
client = MongoClient(host, port)
db = getattr(client, 'sdata134k')
db.collection_names()
def get_data(db, collection_name):
collection = getattr(db, collection_name)
db_cursor = collection.find()
# collect data
print('reading data...')
db_mols = []
for db_mol in db_cursor:
db_mols.append(db_mol)
print('done')
return db_mols
database = RMGDatabase()
database.load(
settings['database.directory'],
thermoLibraries=[],
kineticsFamilies='none',
kineticsDepositories='none',
reactionLibraries = []
)
thermoDatabase = database.thermo
# fetch testing dataset
collection_name = 'large_linear_polycyclic_table'
db_mols = get_data(db, collection_name)
print len(db_mols)
Explanation: Thermochemistry Validation Test
Han, Kehang (hkh12@mit.edu)
This notebook is designed to use a big set of tricyclics for testing the performance of new polycyclics thermo estimator. Currently the dataset contains 2903 tricyclics that passed isomorphic check.
Set up
End of explanation
filterList = [
Group().fromAdjacencyList(1 R u0 p0 c0 {2,S} {3,S} {5,S}
2 R u0 p0 c0 {1,S} {3,S} {6,S}
3 R u0 p0 c0 {1,S} {2,S} {4,S}
4 R u0 p0 c0 {3,S} {5,S} {6,S}
5 R u0 p0 c0 {1,S} {4,S} {6,S}
6 R u0 p0 c0 {2,S} {4,S} {5,S}
),
Group().fromAdjacencyList(1 R u0 p0 c0 {2,S} {6,S} {7,S}
2 R u0 p0 c0 {1,S} {3,S} {4,S}
3 R u0 p0 c0 {2,S} {4,S} {8,S}
4 R u0 p0 c0 {2,S} {3,S} {5,S}
5 R u0 p0 c0 {4,S} {6,S} {8,S}
6 R u0 p0 c0 {1,S} {5,S} {7,S}
7 R u0 p0 c0 {1,S} {6,S} {8,S}
8 R u0 p0 c0 {3,S} {5,S} {7,S}
),
Group().fromAdjacencyList(1 R u0 p0 c0 {2,S} {4,S}
2 R u0 p0 c0 {1,S} {3,S} {9,S}
3 R u0 p0 c0 {2,S} {4,S}
4 R u0 p0 c0 {1,S} {3,S} {5,S}
5 R u0 p0 c0 {4,S} {6,S} {8,S}
6 R u0 p0 c0 {5,S} {7,S}
7 R u0 p0 c0 {6,S} {8,S} {9,S}
8 R u0 p0 c0 {5,S} {7,S}
9 R u0 p0 c0 {2,S} {7,S}
),
Group().fromAdjacencyList(1 R u0 p0 c0 {2,S} {9,S}
2 R u0 p0 c0 {1,S} {3,S} {9,S}
3 R u0 p0 c0 {2,S} {4,[S,D]}
4 R u0 p0 c0 {3,[S,D]} {5,S}
5 R u0 p0 c0 {4,S} {6,S} {8,S}
6 R u0 p0 c0 {5,S} {7,S}
7 R u0 p0 c0 {6,S} {8,S} {9,S}
8 R u0 p0 c0 {5,S} {7,S}
9 R u0 p0 c0 {1,S} {2,S} {7,S}
),
Group().fromAdjacencyList(1 R u0 p0 c0 {2,S} {9,S}
2 R u0 p0 c0 {1,S} {3,S} {9,S}
3 R u0 p0 c0 {2,S} {4,S}
4 R u0 p0 c0 {3,S} {5,S} {7,S}
5 R u0 p0 c0 {4,S} {6,S}
6 R u0 p0 c0 {5,S} {7,S} {8,S}
7 R u0 p0 c0 {4,S} {6,S}
8 R u0 p0 c0 {6,S} {9,S}
9 R u0 p0 c0 {1,S} {2,S} {8,S}
),
]
filterList = []
test_size = 0
R = 1.987 # unit: cal/mol/K
validation_test_dict = {} # key: spec.label, value: (thermo_heuristic, thermo_qm)
spec_labels = []
spec_dict = {}
H298s_qm = []
Cp298s_qm = []
H298s_gav = []
Cp298s_gav = []
for db_mol in db_mols:
smiles_in = str(db_mol["SMILES_input"])
spec_in = Species().fromSMILES(smiles_in)
# remove unwanted species
for grp in filterList:
if spec_in.molecule[0].isSubgraphIsomorphic(grp):
break
else:
spec_in.generate_resonance_structures()
spec_labels.append(smiles_in)
# qm: just free energy but not free energy of formation
G298_qm = float(db_mol["G298"])*627.51 # unit: kcal/mol
H298_qm = float(db_mol["Hf298(kcal/mol)"]) # unit: kcal/mol
Cv298_qm = float(db_mol["Cv298"]) # unit: cal/mol/K
Cp298_qm = Cv298_qm + R # unit: cal/mol/K
H298s_qm.append(H298_qm)
Cp298s_qm.append(Cp298_qm)
# gav
thermo_gav = thermoDatabase.getThermoDataFromGroups(spec_in)
H298_gav = thermo_gav.H298.value_si/4184.0 # unit: kcal/mol
Cp298_gav = thermo_gav.getHeatCapacity(298)/4.184 # unit: cal/mol
H298s_gav.append(H298_gav)
Cp298s_gav.append(Cp298_gav)
spec_dict[smiles_in] = spec_in
Explanation: Validation Test
Collect data from heuristic algorithm and qm library
End of explanation
# create pandas dataframe
validation_test_df = pd.DataFrame(index=spec_labels)
validation_test_df['H298_heuristic(kcal/mol/K)'] = pd.Series(H298s_gav, index=validation_test_df.index)
validation_test_df['H298_qm(kcal/mol/K)'] = pd.Series(H298s_qm, index=validation_test_df.index)
heuristic_qm_diff = abs(validation_test_df['H298_heuristic(kcal/mol/K)']-validation_test_df['H298_qm(kcal/mol/K)'])
validation_test_df['H298_heuristic_qm_diff(kcal/mol/K)'] = pd.Series(heuristic_qm_diff, index=validation_test_df.index)
display(validation_test_df.head())
print "Validation test dataframe has {0} tricyclics.".format(len(spec_labels))
validation_test_df['H298_heuristic_qm_diff(kcal/mol/K)'].describe()
Explanation: Create pandas dataframe for easy data validation
End of explanation
diff20_df = validation_test_df[(validation_test_df['H298_heuristic_qm_diff(kcal/mol/K)'] > 50)
& (validation_test_df['H298_heuristic_qm_diff(kcal/mol/K)'] <= 200)]
len(diff20_df)
print len(diff20_df)
for smiles in diff20_df.index:
print "***********heur = {0}************".format(diff20_df[diff20_df.index==smiles]['H298_heuristic(kcal/mol/K)'])
print "***********qm = {0}************".format(diff20_df[diff20_df.index==smiles]['H298_qm(kcal/mol/K)'])
spe = spec_dict[smiles]
display(spe)
Explanation: categorize error sources
End of explanation
p = figure(plot_width=500, plot_height=400)
# plot_df = validation_test_df[validation_test_df['H298_heuristic_qm_diff(kcal/mol)'] < 10]
plot_df = validation_test_df
# add a square renderer with a size, color, and alpha
p.circle(plot_df['H298_heuristic(kcal/mol/K)'], plot_df['H298_qm(kcal/mol/K)'],
size=5, color="green", alpha=0.5)
x = np.array([-50, 150])
y = x
p.line(x=x, y=y, line_width=2, color='#636363')
p.line(x=x, y=y+10, line_width=2,line_dash="dashed", color='#bdbdbd')
p.line(x=x, y=y-10, line_width=2, line_dash="dashed", color='#bdbdbd')
p.xaxis.axis_label = "H298 GAV (kcal/mol/K)"
p.yaxis.axis_label = "H298 Quantum (kcal/mol/K)"
p.xaxis.axis_label_text_font_style = "normal"
p.yaxis.axis_label_text_font_style = "normal"
p.xaxis.axis_label_text_font_size = "16pt"
p.yaxis.axis_label_text_font_size = "16pt"
p.xaxis.major_label_text_font_size = "12pt"
p.yaxis.major_label_text_font_size = "12pt"
show(p)
len(plot_df.index)
Explanation: Parity Plot: heuristic vs. qm
End of explanation
from bokeh.models import Range1d
hist = Histogram(validation_test_df,
values='Cp298_heuristic_qm_diff(cal/mol/K)', xlabel='Cp Prediction Error (cal/mol/K)',
ylabel='Number of Testing Molecules',
bins=50,\
plot_width=500, plot_height=300)
# hist.y_range = Range1d(0, 1640)
hist.x_range = Range1d(0, 20)
show(hist)
with open('validation_test_sdata134k_2903_pyPoly_dbPoly.csv', 'w') as fout:
validation_test_df.to_csv(fout)
Explanation: Histogram of abs(heuristic-qm)
End of explanation |
2,621 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression
1. Information Generation
Simulation of values to train and test the linear regression model.
Step1: 2. ages_train vs ages_test relationship
Is there a trend we can model ?
Step2: 3. Model Creation and Predictions
Model fitting | Python Code:
# importing packages
import numpy
import random
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
# setting ageNetWorthData
def ageNetWorthData():
random.seed(42)
numpy.random.seed(42)
ages = []
for ii in range(100):
ages.append( random.randint(20,65))
net_worths = [ii * 6.25 + numpy.random.normal(scale=40.) for ii in ages]
ages = numpy.reshape( numpy.array(ages), (len(ages), 1))
net_worths = numpy.reshape( numpy.array(net_worths), (len(net_worths), 1))
from sklearn.cross_validation import train_test_split
ages_train, ages_test, net_worths_train, net_worths_test = train_test_split(ages, net_worths)
return ages_train, ages_test, net_worths_train, net_worths_test
# using ageNetWorthData
ages_train, ages_test, net_worths_train, net_worths_test = ageNetWorthData()
Explanation: Regression
1. Information Generation
Simulation of values to train and test the linear regression model.
End of explanation
# Plot ages_train vs net_worths_train
plt.scatter(ages_train,net_worths_train)
# Setting axis labels
plt.xlabel('Ages')
plt.ylabel('Net_worths')
plt.show()
Explanation: 2. ages_train vs net_worths_train relationship
Is there a trend we can model?
End of explanation
%matplotlib inline
reg = LinearRegression()
reg.fit(ages_train, net_worths_train)
fig = plt.figure()
ax = fig.add_subplot(111)
plt.scatter(ages_train,net_worths_train)
plt.plot(ages_test, reg.predict(ages_test), color='blue',linewidth=3)
intercep = reg.intercept_
slope = reg.coef_
ax.text(20, 400, r' $Net worths =' + str(round(intercep.item(0),2)) +' + '
+ str(round(slope.item(0),2)) + '(Age) $', fontsize=10)
Explanation: 3. Model Creation and Predictions
Model fitting: $y = a + bx$
parameters: $a,b$
Independent variable: $x$
End of explanation |
2,622 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Translation in Python 3 with NLTK
(C) 2017 by Damir Cavar
Version
Step1: We can load a word-level alignment corpus for English and French from the NLTK dataset
Step2: Print out the words in the corpus as a list
Step3: Access a word by index in the list
Step4: We can load the aligned sentences. Here we will load just one sentence, the firs one in the corpus
Step5: The alignments can be accessed via the alignment property
Step6: We can display the alignment using the invert function
Step7: We can also create alignments directly using the NLTK translate module. We import the translation modules from NLTK
Step8: We can create an alignment example
Step9: Translating with IBM Model 1 in NLTK
We already imported comtrans from NLTK in the code above. We have to import IBMModel1 from nltk.translate
Step10: We can create an IBMModel1 using 20 iterations to run the learning algorithm using the first 10 sentences from the aligned corpus; see the EM explanation on the slides and the following publications | Python Code:
from nltk.corpus import comtrans
Explanation: Machine Translation in Python 3 with NLTK
(C) 2017 by Damir Cavar
Version: 1.0, November 2017
License: Creative Commons Attribution-ShareAlike 4.0 International License (CA BY-SA 4.0)
This is a brief introduction to the Machine Translation components in NLTK.
Loading an Aligned Corpus
Import the comtrans module from nltk.corpus.
End of explanation
words = comtrans.words("alignment-en-fr.txt")
Explanation: We can load a word-level alignment corpus for English and French from the NLTK dataset:
End of explanation
for word in words[:20]:
print(word)
print("...")
Explanation: Print out the words in the corpus as a list:
End of explanation
print(words[0])
Explanation: Access a word by index in the list:
End of explanation
als = comtrans.aligned_sents("alignment-en-fr.txt")[0]
als
print(" ".join(als.words))
print(" ".join(als.mots))
Explanation: We can load the aligned sentences. Here we will load just one sentence, the first one in the corpus:
End of explanation
als.alignment
Explanation: The alignments can be accessed via the alignment property:
End of explanation
als.invert()
Explanation: We can display the alignment using the invert function:
End of explanation
from nltk.translate import Alignment, AlignedSent
Explanation: We can also create alignments directly using the NLTK translate module. We import the translation modules from NLTK:
End of explanation
als = AlignedSent( ["Reprise", "de", "la", "session" ], \
["Resumption", "of", "the", "session" ] , \
Alignment( [ (0 , 0), (1 , 1), (2 , 2), (3 , 3) ] ) )
Explanation: We can create an alignment example:
End of explanation
from nltk.translate import IBMModel1
Explanation: Translating with IBM Model 1 in NLTK
We already imported comtrans from NLTK in the code above. We have to import IBMModel1 from nltk.translate:
End of explanation
com_ibm1 = IBMModel1(comtrans.aligned_sents()[:10], 100)
print(round(com_ibm1.translation_table["bitte"]["Please"], 3) )
print(round(com_ibm1.translation_table["Sitzungsperiode"]["session"] , 3) )
Explanation: We can create an IBMModel1 using 100 iterations to run the learning algorithm on the first 10 sentences from the aligned corpus; see the EM explanation on the slides and the following publications:
Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press, New York.
Peter E Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19 (2), 263-311.
End of explanation |
2,623 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Standardized Precipitation Index (SPI)
This notebook is inspired by the NCL SPI example.
The Standardized Precipitation Index (SPI) is a probability index that gives a better representation of abnormal wetness and dryness than the Palmer Drought Severity Index (PDSI). The World Meteorological Organization (WMO) recommends that all national meteorological and hydrological services should use the SPI for monitoring of dry spells. Some advantages of the SPI
Step1: 2. Read monthly precipitation data
2.1 Read data
Step2: 2.2 Parse times
Step3: 3. Calculate SPI
Step4: 4. Visualize | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt # to generate plots
from mpl_toolkits.basemap import Basemap # plot on map projections
import datetime
from netCDF4 import Dataset # http://unidata.github.io/netcdf4-python/
from netCDF4 import netcdftime
from netcdftime import utime
from dim_spi_n import * # private lib
from timeit import default_timer as timer
import warnings
warnings.simplefilter('ignore')
Explanation: Standardized Precipitation Index (SPI)
This notebook is inspired by the NCL SPI example.
The Standardized Precipitation Index (SPI) is a probability index that gives a better representation of abnormal wetness and dryness than the Palmer Drought Severity Index (PDSI). The World Meteorological Organization (WMO) recommends that all national meteorological and hydrological services should use the SPI for monitoring of dry spells. Some advantages of the SPI:
* It requires only monthly precipitation.
* It can be compared across regions with markedly different climates.
* The standardization of the SPI allows the index to determine the rarity of a current drought.
* It can be created for differing periods of 1-to-36 months.
A shortcoming of the SPI, as noted by Trenberth et al. (2014):
"the SPI are based on precipitation alone and provide a measure only for water supply. They are very useful as a measure of precipitation deficits or meteorological drought but are limited because they do not deal with the ET [evapotranspiration] side of the issue."
In this notebook, SPI is obtained by fitting a gamma distribution to monthly GPCP precipitation data from 1979 to 2010. The data can be downloaded from https://www.ncl.ucar.edu/Applications/Data/.
1. Import basic libraries
End of explanation
infile = r'data/V22_GPCP.1979-2010.nc'
fh = Dataset(infile, mode='r') # file handle, open in read only mode
fh.set_auto_mask(False)
lons = fh.variables['lon'][:]
lats = fh.variables['lat'][:]
nctime = fh.variables['time'][:]
t_unit = fh.variables['time'].units
pr = fh.variables['PREC'][:]
try :
t_cal = fh.variables['time'].calendar
except AttributeError : # Attribute doesn't exist
t_cal = u"gregorian" # or standard
fh.close() # close the file
undef = -99999.0
pr[pr==undef] = np.nan
pr = pr.astype(np.float64)
nt,nlat,nlon = pr.shape
ngrd = nlat*nlon
Explanation: 2. Read monthly precipitation data
2.1 Read data
End of explanation
utime = netcdftime.utime(t_unit, calendar = t_cal)
datevar = utime.num2date(nctime)
datevar[0:5]
Explanation: 2.2 Parse times
End of explanation
pr_grd = pr.reshape((nt,ngrd), order='F')
spi_grd = np.zeros(pr_grd.shape)
spi_grd[:,:] = np.nan
nrun = 24
s = timer()
for igrd in np.arange(ngrd):
one_pr = pr_grd[:,igrd]
if (isinstance(one_pr, np.ma.MaskedArray)) and one_pr.mask.all():
print(igrd)
continue
else:
spi_grd[:,igrd] = dim_spi_n(one_pr, nrun)
spi = spi_grd.reshape((nt,nlat,nlon), order='F')
e = timer()
print(e - s)
Explanation: 3. Calculate SPI
End of explanation
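The dim_spi_n helper used above comes from a private module, so here is only a rough, hedged sketch of the usual idea: fit a gamma distribution to the running precipitation sums and map the resulting probabilities to standard-normal quantiles. A full SPI implementation would fit per calendar month and handle zero-precipitation months; spi_sketch is an illustrative name, not the real function.
import numpy as np
from scipy import stats
def spi_sketch(precip, nrun=24):
    # running nrun-month accumulation of precipitation
    x = np.convolve(np.asarray(precip, dtype=float), np.ones(nrun))[:len(precip)]
    x[:nrun - 1] = np.nan                                  # windows that are not yet complete
    valid = ~np.isnan(x)
    # fit a gamma distribution to the accumulated series (location fixed at 0)
    a, loc, scale = stats.gamma.fit(x[valid], floc=0)
    cdf = stats.gamma.cdf(x, a, loc=loc, scale=scale)
    # map the cumulative probabilities to standard-normal quantiles: the SPI values
    return stats.norm.ppf(cdf)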
m = Basemap(projection='cyl', llcrnrlon=min(lons), llcrnrlat=min(lats),
urcrnrlon=max(lons), urcrnrlat=max(lats))
x, y = m(*np.meshgrid(lons, lats))
clevs = np.linspace(-3.0, 3.0, 21)
fig = plt.figure(figsize=(15,12))
# Plot the first one
ax = fig.add_subplot(211)
idx = 258
cs = m.contourf(x, y,spi[idx,:,:], clevs, cmap=plt.cm.RdBu)
m.drawcoastlines()
cb = m.colorbar(cs)
plt.title('SPI-' + str(nrun) + ' In '+ datetime.date.strftime(datevar[idx], "%m/%Y"), fontsize=16)
# plot the second one
ax = fig.add_subplot(212)
idx = -1
cs = m.contourf(x, y,spi[idx,:,:], clevs, cmap=plt.cm.RdBu)
m.drawcoastlines()
cb = m.colorbar(cs)
plt.title('SPI-' + str(nrun) + ' In '+ datetime.date.strftime(datevar[idx], "%m/%Y"), fontsize=16)
Explanation: 4. Visualize
End of explanation |
2,624 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Genetic-Algorithm" data-toc-modified-id="Genetic-Algorithm-1"><span class="toc-item-num">1 </span>Genetic Algorithm</a></span><ul class="toc-item"><li><span><a href="#Chromosome" data-toc-modified-id="Chromosome-1.1"><span class="toc-item-num">1.1 </span>Chromosome</a></span></li><li><span><a href="#Population" data-toc-modified-id="Population-1.2"><span class="toc-item-num">1.2 </span>Population</a></span></li><li><span><a href="#Cost-Function" data-toc-modified-id="Cost-Function-1.3"><span class="toc-item-num">1.3 </span>Cost Function</a></span></li><li><span><a href="#Evolution---Crossover" data-toc-modified-id="Evolution---Crossover-1.4"><span class="toc-item-num">1.4 </span>Evolution - Crossover</a></span></li><li><span><a href="#Evolution---Mutation" data-toc-modified-id="Evolution---Mutation-1.5"><span class="toc-item-num">1.5 </span>Evolution - Mutation</a></span></li><li><span><a href="#Recap" data-toc-modified-id="Recap-1.6"><span class="toc-item-num">1.6 </span>Recap</a></span></li><li><span><a href="#Supplement" data-toc-modified-id="Supplement-1.7"><span class="toc-item-num">1.7 </span>Supplement</a></span></li></ul></li><li><span><a href="#Travel-Salesman-Problem-(TSP)" data-toc-modified-id="Travel-Salesman-Problem-(TSP)-2"><span class="toc-item-num">2 </span>Travel Salesman Problem (TSP)</a></span></li><li><span><a href="#Reference" data-toc-modified-id="Reference-3"><span class="toc-item-num">3 </span>Reference</a></span></li></ul></div>
Step1: Genetic Algorithm
Genetic Algorithm (GA) is a class of algorithms that is used for optimization. They find better answers to a "defined" question.
The intuition for GA is that they generate a bunch of "answer candidates" and use some sort of feedback to figure out how close the candidate is to the "optimal" solution. During the process, far-from-optimal candidates get dropped and are never seen again, while close-to-optimal candidates are combined with each other and maybe mutate slightly to see if they can get closer to optimal. The mutation is an attempt to modify the candidates from time to time to prevent the solution from getting stuck at the local optima.
Chromosome
The "answer candidates" mentioned above are called chromosomes, which is the representation of a solution candidate.
Chromosomes mate and mutate to produce offspring. They either die due to survival of the fittest, or are allowed to produce offspring who may have more desirable traits and adhere to natural selection.
Suppose we're trying to optimize a very simple problem
Step2: Population
The collection of chromosomes is referred to as our population. When you run a GA, you don't just look at one chromosome at a time. You might have a population of 20 or 100 or 5,000 chromosomes going all at once.
Step4: Cost Function
But given all those randomly generated chromosomes, how do we measure the optimality of a chromosome and find the "correct" (or globally-optimum) chromosome?
The cost function (or error function, or fitness function as the inverse) is some sort of measure of the optimality of a chromosome. If we're calling it "fitness function" then we're shooting for higher scores, and if we're using "cost function" then we're looking for low scores.
In this case, we might define a cost function to be the absolute difference between the sum of the numbers in the chromosome and the target number X. The reason we're using the absolute difference is so that we never end up with a negative cost; you can choose to square the difference instead if you want to.
In this case, since this problem is easy and contrived, we know that we're shooting for a cost of 0 (our sum of the numbers in the chromosome equals exactly to our target number) and that we can stop there. Sometimes that's not the case. Sometimes you're just looking for the lowest cost you can find, and need to figure out different ways to end the calculation. Other times you're looking for the highest fitness score you can find, and similarly need to figure out some other criteria to use to stop the calculation.
Using that rule as a cost function, we can calculate the costs of our population (each collection of chromosomes).
Step5: Evolution - Crossover
Just like in evolution, you might be inclined to have the best and strongest chromosomes of the population mate with each other (The technical term for mating is crossover), with the hope that their offspring will be even healthier than either parent.
For each generation we'll retain a portion of the best performing chromosomes as judged by our cost/fitness function (the portion is a parameter that you can tune). These high-performers will be the parents of the next generation, or more intuitively, the next iteration.
Mating these parents is also very simple. You randomly pick two chromosomes, a male and a female (just a metaphor), and pick a point in the middle. This point can be dead-center if you want, or randomized if you prefer. Take that middle point (called a "pivot" point), and make two new chromosomes by combining the first half of one with the second half of the other and vice versa (It's usually recommended to use even numbers as your population size).
By repeating the mating step, we repopulate the population to its desired size for the next generation. e.g. if you take the top 30 chromosomes in a population of 100, then you'd need to create 100 new chromosomes by mating them.
Evolution - Mutation
Crossover is the main step of how you get from one generation of chromosomes to the next, but it alone has a problem
Step7: Even though we defined our generation number to be 10, we can reach our goal in only a small number of iterations. Thus for problems where you don't know the optimal answer, it's best to define a stopping criterion, instead of letting it run wild.
Supplement
Recall that in the beginning, we said that Genetic Algorithm (GA) is a class of algorithms that is used for optimization. The term "is a class of algorithms" means that there are many different variations of GA. For example, in the evolution stage, instead of retaining a portion of the best performing chromosomes as judged by our cost/fitness function like we mentioned above, we can throw darts to decide who gets to stay.
By throwing darts, we are simply saying that we have some probability to select some of the lesser performing chromosomes in the current generation to be considered for the evolution stage. This MIGHT decrease our likelihood of getting stuck in the local optima. Example below
Step9: As you can see, we're not restricted to choose only the "best" chromosomes to be considered for the evolution stage. Each chromosome has a chance of being chosen, but we of course, are still in favor of the chromosomes that are performing well (has a higher probability of getting chosen).
Step10: Travel Salesman Problem (TSP)
We can also use the Genetic Algorithm on a slightly more complex problem, the travel salesman problem. The problem that we're trying to solve is given a set of cities and distance between every pair of cities, we wish to find the shortest possible route that visits every city exactly once and returns to the starting point.
Please refer to the first section of this post - Tutorial
Step11: The Genetic Algorithm for the travel salesman problem is written as a module this time. [link] | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(plot_style = False)
os.chdir(path)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,matplotlib
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Genetic-Algorithm" data-toc-modified-id="Genetic-Algorithm-1"><span class="toc-item-num">1 </span>Genetic Algorithm</a></span><ul class="toc-item"><li><span><a href="#Chromosome" data-toc-modified-id="Chromosome-1.1"><span class="toc-item-num">1.1 </span>Chromosome</a></span></li><li><span><a href="#Population" data-toc-modified-id="Population-1.2"><span class="toc-item-num">1.2 </span>Population</a></span></li><li><span><a href="#Cost-Function" data-toc-modified-id="Cost-Function-1.3"><span class="toc-item-num">1.3 </span>Cost Function</a></span></li><li><span><a href="#Evolution---Crossover" data-toc-modified-id="Evolution---Crossover-1.4"><span class="toc-item-num">1.4 </span>Evolution - Crossover</a></span></li><li><span><a href="#Evolution---Mutation" data-toc-modified-id="Evolution---Mutation-1.5"><span class="toc-item-num">1.5 </span>Evolution - Mutation</a></span></li><li><span><a href="#Recap" data-toc-modified-id="Recap-1.6"><span class="toc-item-num">1.6 </span>Recap</a></span></li><li><span><a href="#Supplement" data-toc-modified-id="Supplement-1.7"><span class="toc-item-num">1.7 </span>Supplement</a></span></li></ul></li><li><span><a href="#Travel-Salesman-Problem-(TSP)" data-toc-modified-id="Travel-Salesman-Problem-(TSP)-2"><span class="toc-item-num">2 </span>Travel Salesman Problem (TSP)</a></span></li><li><span><a href="#Reference" data-toc-modified-id="Reference-3"><span class="toc-item-num">3 </span>Reference</a></span></li></ul></div>
End of explanation
chromo_size = 5
low = 0 # we want our solution to be bounded between 0 to 100 (inclusive)
high = 100
np.random.randint(low, high + 1, chromo_size)
Explanation: Genetic Algorithm
Genetic Algorithm (GA) is a class of algorithms that is used for optimization. They find better answers to a "defined" question.
The intuition for GA is that they generate a bunch of "answer candidates" and use some sort of feedback to figure out how close the candidate is to the "optimal" solution. During the process, far-from-optimal candidates get dropped and are never seen again, while close-to-optimal candidates are combined with each other and maybe mutate slightly to see if they can get closer to optimal. The mutation is an attempt to modify the candidates from time to time to prevent the solution from getting stuck at the local optima.
Chromosome
The "answer candidates" mentioned above are called chromosomes, which is the representation of a solution candidate.
Chromosomes mate and mutate to produce offspring. They either die due to survival of the fittest, or are allowed to produce offspring who may have more desirable traits and adhere to natural selection.
Suppose we're trying to optimize a very simple problem: trying to create a list of N integer numbers that equal X when summed together. If we set N = 5 and X = 200, then our chromosomes will simply be a list (an array) with length of 5. Here is one possible chromosome that could be a solution candidate for our problem:
End of explanation
pop_size = 6
pop = np.random.randint(low, high + 1, (pop_size, chromo_size))
pop
Explanation: Population
The collection of chromosomes is referred to as our population. When you run a GA, you don't just look at one chromosome at a time. You might have a population of 20 or 100 or 5,000 chromosomes going all at once.
End of explanation
# determine the cost of a chromosome. lower is better
target = 200
cost = np.abs(np.sum(pop, axis = 1) - target)
# combine the cost and chromosome into one list
graded = [(c, list(p)) for p, c in zip(pop, cost)]
for cost, chromo in graded:
print("chromo {}'s cost is {}".format(chromo, cost))
# side note, we're converting the array to a list in `list(p)` since
# if the cost for two chromosomes are the same, then the sorting will break
# when the second element is an array
test1 = (30, np.array((1, 2, 4)))
test2 = (30, np.array((3, 4, 5)))
sorted([test1, test2])
print('code above will break')
Explanation: Cost Function
But given all those randomly generated chromosomes, how do we measure the optimality of a chromosome and find the "correct" (or globally-optimum) chromosome?
The cost function (or error function, or fitness function as the inverse) is some sort of measure of the optimality of a chromosome. If we're calling it "fitness function" then we're shooting for higher scores, and if we're using "cost function" then we're looking for low scores.
In this case, we might define a cost function to be the absolute difference between the sum of the numbers in the chromosome and the target number X. The reason we're using the absolute difference is so that we never end up with a negative cost; you can choose to square the difference instead if you want to.
In this case, since this problem is easy and contrived, we know that we're shooting for a cost of 0 (our sum of the numbers in the chromosome equals exactly to our target number) and that we can stop there. Sometimes that's not the case. Sometimes you're just looking for the lowest cost you can find, and need to figure out different ways to end the calculation. Other times you're looking for the highest fitness score you can find, and similarly need to figure out some other criteria to use to stop the calculation.
Using that rule as a cost function, we can calculate the costs of our population (each collection of chromosomes).
End of explanation
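Before handing things over to the ga module below, the ranking-and-retention step can be sketched on its own. This is only an illustration reusing the graded list from above; retain_rate is an assumed value chosen to match the GA call that follows, not part of the original cell.
# rank the candidates by cost (lower is better) and keep the best portion as parents
retain_rate = 0.5
graded_sorted = sorted(graded)                       # tuples sort on the cost first
retain_len = int(len(graded_sorted) * retain_rate)
parents = [chromo for _, chromo in graded_sorted[:retain_len]]
parents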
from ga import GA
# calls the Genetic Algorithm
ga1 = GA(
generation = 10,
pop_size = 50,
chromo_size = 5,
low = 0,
high = 100,
retain_rate = 0.5,
mutate_rate = 0.2
)
ga1.fit(target = 200)
# the best chromo and its cost during each generation (iteration)
ga1.generation_history
# the overall best
ga1.best
Explanation: Evolution - Crossover
Just like in evolution, you might be inclined to have the best and strongest chromosomes of the population mate with each other (The technical term for mating is crossover), with the hope that their offspring will be even healthier than either parent.
For each generation we'll retain a portion of the best performing chromosomes as judged by our cost/fitness function (the portion is a parameter that you can tune). These high-performers will be the parents of the next generation, or more intuitively, the next iteration.
Mating these parents is also very simple. You randomly pick two chromosomes, a male and a female (just a metaphor), and pick a point in the middle. This point can be dead-center if you want, or randomized if you prefer. Take that middle point (called a "pivot" point), and make two new chromosomes by combining the first half of one with the second half of the other and vice versa (It's usually recommended to use even numbers as your population size).
By repeating the mating step, we repopulate the population to its desired size for the next generation. e.g. if you take the top 30 chromosomes in a population of 100, then you'd need to create 100 new chromosomes by mating them.
Evolution - Mutation
Crossover is the main step of how you get from one generation of chromosomes to the next, but it alone has a problem: If all you do is mate your candidates to go from generation to generation, you'll have a chance of getting stuck near a "local optimum", an answer that's pretty good but not necessarily the "global optimum" (the best you can hope for).
A GA would achieve very little if not for the combined effects of both crossover and mutation. Crossover helps discover more optimal solutions from already-good solutions, but it's the mutation that pushes the search for solutions in new directions.
Mutation is a completely random process by which you target an unsuspecting chromosome and blast it with just enough radiation to make one of its elements randomly change. How and when you mutate is up to you. e.g. If you choose a mutation rate of 0.1 (again any rate you want). Then if you randomly generated a number from 0 to 1 and if it happens to be below 0.1, the chromosome will mutate.
As for the mutation, it can be randomly picking an element of the chromosome and adding 5 to it, dividing it by 2, or changing it to a randomly generated number. Do whatever you want with it as long as it's relevant to the context of the problem. Like in the beginning of our problem, we set a maximum and minimum boundary when generating our initial values for each chromosome (min = 0, max = 100). Then for our mutation, we can restrict our mutated number to be within this boundary (not really necessary here, but for problems that have indispensable boundaries this is crucial). And if we cross that border we can simply set it back to the value of that border (e.g. if the mutated number is 102, we can simply squeeze it back to 100, our upper bound).
Recap
The basic building blocks (parameters) of a GA consist of:
Chromosomes. Representations of candidate solutions to your problem. They consist of the representation itself (in our case, an N-element list)
Population. A group of chromosomes. The population will remain the same size (you get to choose your population size.), but will typically evolve to better cost/fitness scores over time.
Cost/Fitness Function. Used to evaluate your answer.
The ability to Crossover and Mutate.
The population experiences multiple generations (iterations, this is a user-specified parameter).
And a typical GA takes the form of:
Initialize a population. Just fill it with completely random chromosomes that do not step over the boundary, if there is one.
Calculate the cost/fitness score for each chromosomes.
Sort the chromosomes by the user-defined cost/fitness score.
Retain a certain number of the parent chromosomes, where you get to pick how many chromosomes will be retained.
Mate the retained parent chromosomes to generate the children chromosomes. You can decide how you want to mate them. This process keeps going until the number of children chromosomes is the same as the number of original parent chromosomes.
Mutate the children chromosomes at random. Again, you can decide how to do this (restricted to the boundary).
Compare the parent chromosomes and the children chromosomes and choose the best ones (e.g. if you have 100 parent chromosomes and generated 100 children chromosomes, you compare all 200 and retain the best 100). In other words, we're killing off the poorly performing children.
If the algorithm has not met some kind of completion criterion, return to step 2 with the new chromosomes. The completion criterion for this example is pretty simple: stop when you get a cost of 0. But this isn't always the case. Sometimes you don't know the minimum achievable cost. Or, if you're using fitness instead of cost, you may not know the maximum possible fitness. In those cases you can stop the algorithm if the best score hasn't changed in 100 generations (iterations), or any other number depending on how much time you are willing to wait or the computation resources that you have, and use that as your answer.
Putting it all together, the code might look something like this (code on github).
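As a rough sketch only (the real code lives in the ga module on github; everything below is illustrative and simplified, e.g. it skips the parent/children comparison of step 7):
import random

def evolve(population, target, retain_rate=0.5, mutate_rate=0.2, low=0, high=100):
    # score and sort by cost (distance of the chromosome sum from the target)
    graded = sorted(population, key=lambda c: abs(sum(c) - target))
    # retain the best performers as parents
    parents = graded[:int(len(graded) * retain_rate)]
    # crossover until the population is back to its original size
    children = []
    while len(children) < len(population) - len(parents):
        mom, dad = random.sample(parents, 2)
        pivot = random.randint(1, len(mom) - 1)
        children.append(mom[:pivot] + dad[pivot:])
    # mutate some children, keeping values inside the [low, high] boundary
    for child in children:
        if random.random() < mutate_rate:
            idx = random.randrange(len(child))
            child[idx] = random.randint(low, high)
    return parents + children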
End of explanation
# we have our chromosomes and costs; the sum of the costs will be the probability's
# denominator, while each chromosome's cost is the numerator
# (NOTE: this is for a MAXIMIZATION problem; `graded` is assumed to be a list of
# (cost, chromosome) tuples from the earlier ranking step)
denominator = 0
for cost, chromo in graded:
denominator += cost
# the chromo, the cost and its probability of getting chosen as the
# parent used for evolution
for cost, chromo in graded:
prob = cost / denominator
print("chromo {}'s cost is {} and it has a {} prob".format(chromo, cost, prob))
Explanation: Even though we defined our generation number to be 10, we can reach our goal in only a few iterations. Thus, for problems where you don't know the optimal answer, it's best to define a stopping criterion instead of letting the algorithm run wild.
Supplement
Recall that in the beginning, we said that the Genetic Algorithm (GA) is a class of algorithms that is used for optimization. The term "is a class of algorithms" means that there're many different variations of GA. For example, in the evolution stage, instead of retaining a portion of the best performing chromosomes as judged by our cost/fitness function like we mentioned above, we can throw darts to decide who gets to stay.
By throwing darts, we are simply saying that we have some probability of selecting some of the lesser-performing chromosomes in the current generation to be considered for the evolution stage. This MIGHT decrease our likelihood of getting stuck in a local optimum. Example below:
End of explanation
# same idea for a MINIMIZATION problem, except we have to use reciprocal costs,
# so that the lower the cost the higher the probability
denominator = 0
for cost, chromo in graded:
denominator += 1 / cost
# the chromo, the cost and its probability of getting chosen as the
# parent used for evolution
for cost, chromo in graded:
prob = (1 / cost) / denominator
print("chromo {}'s cost is {} and it has a {} prob".format(chromo, cost, prob))
# you can confirm that the probability does add up to one if
# you're suspicious ^^
Explanation: As you can see, we're not restricted to choose only the "best" chromosomes to be considered for the evolution stage. Each chromosome has a chance of being chosen, but we of course, are still in favor of the chromosomes that are performing well (has a higher probability of getting chosen).
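Drawing a parent according to those probabilities is then just a cumulative-sum lookup (an illustrative "dart throw", assuming graded is the list of (cost, chromosome) tuples used above):
import random

def select_parent(graded):
    denominator = sum(cost for cost, chromo in graded)
    dart = random.uniform(0, denominator)
    running = 0.0
    for cost, chromo in graded:
        running += cost
        if dart <= running:
            return chromo
    return graded[-1][1]   # numerical safety net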
End of explanation
import random
import pandas as pd
from collections import namedtuple
# example dataset
file = 'TSP_berlin52.txt'
tsp_data = pd.read_table(file, skiprows = 1, header = None,
names = ['city', 'x', 'y'], sep = ' ')
print(tsp_data.shape)
tsp_data.head()
Explanation: Travel Salesman Problem (TSP)
We can also use the Genetic Algorithm on a slightly more complex problem, the traveling salesman problem. The problem we're trying to solve: given a set of cities and the distance between every pair of cities, we wish to find the shortest possible route that visits every city exactly once and returns to the starting point.
Please refer to the first section of this post - Tutorial: Applying a Genetic Algorithm to the traveling salesman problem for a more comprehensive description of the problem definition and how to modify the Genetic Algorithm a bit so that it becomes suitable for the problem.
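The cost function such a GA minimizes is simply the closed-tour length; a minimal sketch (assuming coords maps each city id to its (x, y) position, e.g. built from tsp_data):
import math

def tour_distance(tour, coords):
    # sum the leg lengths, wrapping around from the last city back to the first
    legs = zip(tour, tour[1:] + tour[:1])
    return sum(math.dist(coords[a], coords[b]) for a, b in legs)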
End of explanation
import matplotlib.pyplot as plt
from tsp_solver import TSPGA
tsp_ga = TSPGA(
generation = 3000,
population_size = 250,
retain_rate = 0.4,
mutate_rate = 0.3
)
tsp_ga.fit(tsp_data)
# distance convergence plot, and the best tour's distance
# and the corresponding city tour
# change default figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
tsp_ga.convergence_plot()
tsp_ga.best_tour
Explanation: The Genetic Algorithm for the travel salesman problem is written as a module this time. [link]
End of explanation |
2,625 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 2 - Pandas Introduction
All questions are weighted the same in this assignment.
Part 1
The following code loads the olympics dataset (olympics.csv), which was derived from the Wikipedia entry on All Time Olympic Games Medals, and does some basic data cleaning.
The columns are organized as # of Summer games, Summer medals, # of Winter games, Winter medals, total # number of games, total # of medals. Use this dataset to answer the questions below.
Step1: Question 0 (Example)
What is the first country in df?
This function should return a Series.
Step2: Question 1
Which country has won the most gold medals in summer games?
This function should return a single string value.
Step3: Question 2
Which country had the biggest difference between their summer and winter gold medal counts?
This function should return a single string value.
Step4: Question 3
Which country has the biggest difference between their summer gold medal counts and winter gold medal counts relative to their total gold medal count?
$$\frac{Summer~Gold - Winter~Gold}{Total~Gold}$$
Only include countries that have won at least 1 gold in both summer and winter.
This function should return a single string value.
Step5: Question 4
Write a function to update the dataframe to include a new column called "Points" which is a weighted value where each gold medal (Gold.2) counts for 3 points, silver medals (Silver.2) for 2 points, and bronze medals (Bronze.2) for 1 point. The function should return only the column (a Series object) which you created.
This function should return a Series named Points of length 146
Step6: Part 2
For the next set of questions, we will be using census data from the United States Census Bureau. Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. See this document for a description of the variable names.
The census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate.
Question 5
Which state has the most counties in it? (hint
Step7: Question 6
Only looking at the three most populous counties for each state, what are the three most populous states (in order of highest population to lowest population)? Use CENSUS2010POP.
This function should return a list of string values.
Step8: Question 7
Which county has had the largest absolute change in population within the period 2010-2015? (Hint
Step9: Question 8
In this datafile, the United States is broken up into four regions using the "REGION" column.
Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE 2014.
This function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index). | Python Code:
import pandas as pd
df = pd.read_csv('olympics.csv', index_col=0, skiprows=1)
for col in df.columns:
if col[:2]=='01':
df.rename(columns={col:'Gold'+col[4:]}, inplace=True)
if col[:2]=='02':
df.rename(columns={col:'Silver'+col[4:]}, inplace=True)
if col[:2]=='03':
df.rename(columns={col:'Bronze'+col[4:]}, inplace=True)
if col[:1]=='№':
df.rename(columns={col:'#'+col[1:]}, inplace=True)
names_ids = df.index.str.split('\s\(') # split the index by '('
df.index = names_ids.str[0] # the [0] element is the country name (new index)
df['ID'] = names_ids.str[1].str[:3] # the [1] element is the abbreviation or ID (take first 3 characters from that)
df = df.drop('Totals')
Explanation: You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 2 - Pandas Introduction
All questions are weighted the same in this assignment.
Part 1
The following code loads the olympics dataset (olympics.csv), which was derived from the Wikipedia entry on All Time Olympic Games Medals, and does some basic data cleaning.
The columns are organized as # of Summer games, Summer medals, # of Winter games, Winter medals, total # number of games, total # of medals. Use this dataset to answer the questions below.
End of explanation
# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
# This function returns the row for Afghanistan, which is a Series object. The assignment
# question description will tell you the general format the autograder is expecting
return df.iloc[0]
# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
answer_zero()
Explanation: Question 0 (Example)
What is the first country in df?
This function should return a Series.
End of explanation
def answer_one():
max_gold = df['Gold'].max()
ret = df[df['Gold'] == max_gold]
ans = ret.index.values
return ans[0]
print(answer_one())
Explanation: Question 1
Which country has won the most gold medals in summer games?
This function should return a single string value.
End of explanation
def answer_two():
df2 = df.copy()
df2['Gold_diff'] = df['Gold'] - df['Gold.1']
score = []
for row in df2['Gold_diff']:
if row < 0:
row = row * -1
score.append(row)
else:
score.append(row)
df2['score'] = score
max_score = df2['score'].max()
name = df2[df2['score'] == max_score]
country_name = name.index.values
return country_name[0]
print(answer_two())
Explanation: Question 2
Which country had the biggest difference between their summer and winter gold medal counts?
This function should return a single string value.
End of explanation
def answer_three():
df2 = df.copy()
df2 = df2[(df2['Gold'] > 0) & (df2['Gold.1'] > 0)]
df2['Gold_diff'] = (df2['Gold'] - df2['Gold.1']) / df2['Gold.2']
score = []
for row in df2['Gold_diff']:
if row < 0:
row = row * -100
score.append(row)
else:
row = row * 100
score.append(row)
df2['score'] = score
df3 = df2[['Gold','Gold.1','Gold.2','score',]]
max_score = df3['score'].max()
name = df3[df3['score'] == max_score]
country_name = name.index.values
return country_name[0]
print(answer_three())
Explanation: Question 3
Which country has the biggest difference between their summer gold medal counts and winter gold medal counts relative to their total gold medal count?
$$\frac{Summer~Gold - Winter~Gold}{Total~Gold}$$
Only include countries that have won at least 1 gold in both summer and winter.
This function should return a single string value.
End of explanation
def answer_four():
df2 = df.copy()
df2['Points'] = df2['Gold.2']*3 + df2['Silver.2']*2 + df2['Bronze.2']*1
df3 = df2[['Gold.2','Silver.2','Bronze.2','Points']]
return df3['Points']
print(answer_four())
Explanation: Question 4
Write a function to update the dataframe to include a new column called "Points" which is a weighted value where each gold medal (Gold.2) counts for 3 points, silver medals (Silver.2) for 2 points, and bronze medals (Bronze.2) for 1 point. The function should return only the column (a Series object) which you created.
This function should return a Series named Points of length 146
End of explanation
census_df = pd.read_csv('census.csv')
census_df
def answer_five():
# consider only county-level rows (SUMLEV == 50), per the hint
counties = census_df[census_df['SUMLEV'] == 50]
return counties.groupby('STNAME').size().idxmax()
print(answer_five())
Explanation: Part 2
For the next set of questions, we will be using census data from the United States Census Bureau. Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. See this document for a description of the variable names.
The census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate.
Question 5
Which state has the most counties in it? (hint: consider the sumlevel key carefully! You'll need this for future questions too...)
This function should return a single string value.
End of explanation
def answer_six():
df = census_df.copy()
df = df[df['SUMLEV'] == 50]
# sum the three most populous counties within each state,
# then return the three states with the largest totals
top3_per_state = (df.groupby('STNAME')['CENSUS2010POP']
.apply(lambda pops: pops.nlargest(3).sum()))
return list(top3_per_state.nlargest(3).index)
print(answer_six())
Explanation: Question 6
Only looking at the three most populous counties for each state, what are the three most populous states (in order of highest population to lowest population)? Use CENSUS2010POP.
This function should return a list of string values.
End of explanation
def answer_seven():
df = census_df.copy()
df=df[df['SUMLEV'] == 50]
df = df[['STNAME','CTYNAME','POPESTIMATE2015','POPESTIMATE2014','POPESTIMATE2013','POPESTIMATE2012','POPESTIMATE2011','POPESTIMATE2010']]
df = df.set_index(['STNAME', 'CTYNAME'])
df1 = df.apply(lambda x: x.max() - x.min(),axis=1)
df2 = df1.reset_index()
df2 = df2.sort_values([0],ascending=[0])
df3 = df2.set_index('CTYNAME').index.values
return df3[0]
print(answer_seven())
Explanation: Question 7
Which county has had the largest absolute change in population within the period 2010-2015? (Hint: population values are stored in columns POPESTIMATE2010 through POPESTIMATE2015, you need to consider all six columns.)
e.g. If County Population in the 5 year period is 100, 120, 80, 105, 100, 130, then its largest change in the period would be |130-80| = 50.
This function should return a single string value.
End of explanation
def answer_eight():
df = census_df.copy()
df = df[(df['REGION'] == 1) | (df['REGION'] == 2)]
df = df[df['CTYNAME'].str.startswith('Washington')]  # county name starts with 'Washington', per the question
df = df[df['POPESTIMATE2015'] > df['POPESTIMATE2014']]
return df[['STNAME','CTYNAME']]
print(answer_eight())
Explanation: Question 8
In this datafile, the United States is broken up into four regions using the "REGION" column.
Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE 2014.
This function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index).
End of explanation |
2,626 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content and Objectives
Show the effects of asynchrony on the IQ signal
QPSK symbols are being pulse-shaped by RRC, distorted in the channel by phase, frequency, and noise and depicted in the complex plane
Import
Step1: Function for determining the impulse response of an RRC filter
Step2: Parameters
Step3: Generating Tx Signal
Determine Tx signal by upsampling and rrc filtering.
Step4: Adding Distortions
Step5: Plotting Resulting Signals at Tx, after MF and after Sampling | Python Code:
# importing
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(18, 6) )
Explanation: Content and Objectives
Show the effects of asynchrony on the IQ signal
QPSK symbols are being pulse-shaped by RRC, distorted in the channel by phase, frequency, and noise and depicted in the complex plane
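In baseband notation, the distorted receive signal considered below is $r(t) = s(t)\,\mathrm{e}^{\mathrm{j}(2\pi \Delta f t + \Delta \varphi)} + n(t)$, i.e. a phase offset $\Delta\varphi$, a frequency offset $\Delta f$, and additive noise $n(t)$ applied to the transmit signal $s(t)$.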
Import
End of explanation
########################
# find impulse response of an RRC filter
########################
def get_rrc_ir(K, n_up, t_symb, r):
'''
Determines coefficients of an RRC filter
Formula out of: J. Huber, Trelliscodierung, Springer, 1992, S. 15
At poles, values of wikipedia.de were used (without cross-checking)
NOTE: Length of the IR has to be an odd number
IN: length of IR, upsampling factor, symbol time, roll-off factor
OUT: filter coefficients
'''
assert K % 2 != 0, "Filter length needs to be odd"
if r == 0:
r = 1e-32
# init
rrc = np.zeros(K)
t_sample = t_symb/n_up
i_steps = np.arange( 0, K)
k_steps = np.arange( -(K-1)/2.0, (K-1)/2.0 + 1 )
t_steps = k_steps*t_sample
for i in i_steps:
if t_steps[i] == 0:
rrc[i] = 1.0/np.sqrt(t_symb) * (1.0 - r + 4.0 * r / np.pi )
elif np.abs( t_steps[i] ) == t_symb/4.0/r:
rrc[i] = r/np.sqrt(2.0*t_symb)*((1+2/np.pi)*np.sin(np.pi/4.0/r)+ \
( 1.0 - 2.0/np.pi ) * np.cos(np.pi/4.0/r) )
else:
rrc[i] = 1.0/np.sqrt(t_symb)*( np.sin( np.pi*t_steps[i]/t_symb*(1-r) ) + \
4.0*r*t_steps[i]/t_symb * np.cos( np.pi*t_steps[i]/t_symb*(1+r) ) ) \
/ (np.pi*t_steps[i]/t_symb*(1.0-(4.0*r*t_steps[i]/t_symb)**2.0))
return rrc
Explanation: Function for determining the impulse response of an RRC filter
End of explanation
# characterize Eb/N0 if you like to
EbN0_dB = 100
# modulation scheme and constellation points
constellation = [ 1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j ]
constellation /= np.sqrt(2)
M = len( constellation )
# number of symbols
n_symb = 100
t_symb = 1.0
# get according noise variance
# NOTE: Since non-binary symbols are used normalization log2(M) has to be applied
sigma2 = 1 / ( np.log2(M) * 10**( EbN0_dB / 10 ) )
# parameters for rrc
beta = 0.33
# oversampling factor; samples per symbol
n_up = 8
# symbols per filter (plus minus in both directions)
syms_per_filt = 10
# length of the fir filter
K_filt = 2*syms_per_filt*n_up+1
# get rrc impulse response
rrc = get_rrc_ir(n_up*syms_per_filt*2+1, n_up, t_symb, beta)
rrc /= np.linalg.norm( rrc )
Explanation: Parameters
End of explanation
# generate random binary vector and modulate the specified modulation scheme
data = np.random.randint( M, size=n_symb )
s = [ constellation[ d ] for d in data ]
# prepare sequence to be filtered
s_up = np.zeros( n_symb * n_up, dtype=complex )
s_up[::n_up] = s
s_up = np.append( s_up, np.zeros( K_filt - 1 ) )
s_Tx = np.convolve( rrc, s_up )
Explanation: Generating Tx Signal
Determine Tx signal by upsampling and rrc filtering.
End of explanation
# vector of time samples
t_vec = np.arange(0, np.size( s_Tx ) * t_symb / n_up, t_symb / n_up)
# determine noise (using the same for all scenarios)
n = np.sqrt( sigma2 / 2 ) * (np.random.randn( len( t_vec ) ) + 1j*np.random.randn( len( t_vec ) ) )
# initialize dict for different signals
r = {}
# Rx signal 1:
# noise only
delta_phi = 0
delta_f = 0
s_Rx = s_Tx * np.exp( 1j * delta_phi ) * np.exp( 1j * 2 * np.pi * delta_f * t_vec )
r[ 'Noise only' ] = s_Rx + n
# Rx signal 2:
# phase distortion only
delta_phi = np.pi/8
delta_f = 0
s_Rx = s_Tx * np.exp( 1j * delta_phi ) * np.exp( 1j * 2 * np.pi * delta_f * t_vec )
r[ 'Phase Distortion plus Noise' ] = s_Rx + n
# Rx signal 3:
# phase and frequency distortion
delta_phi = np.pi/8
delta_f = 1 / ( 1e3 * t_symb / n_up )
s_Rx = s_Tx * np.exp( 1j * delta_phi ) * np.exp( 1j * 2 * np.pi * delta_f * t_vec )
r[ 'Phase and Frequency Distortion plus Noise' ] = s_Rx + n
# find signal after MF
y_mf = {}
for scenario in r:
y_mf[ scenario ] = np.convolve( rrc, r[ scenario ] )
# down-sampling
y_down = {}
for scenario in y_mf:
y_down[ scenario ] = y_mf[ scenario ][ K_filt-1 : K_filt-1 + len(s)*n_up : n_up ]
Explanation: Adding Distortions
End of explanation
# Plotting
for scenario in r:
fig = plt.figure()
fig.text(0.1,1, scenario + ' (SNR={} dB)'.format(EbN0_dB), size='30')
plt.subplot(131)
plt.plot( np.real(s_Tx), np.imag(s_Tx))
plt.grid(True); plt.xlabel('$\\mathrm{I}$'); plt.ylabel('$\\mathrm{Q}$')
plt.title('$s(t)$')
plt.xlim( (-1.5, 1.5) )
plt.ylim( (-1.5, 1.5) )
plt.subplot(132)
plt.plot( np.real(y_mf[ scenario ] ), np.imag(y_mf[ scenario ]))
plt.grid(True); plt.xlabel('$\\mathrm{I}$');
plt.title('$y(t)$')
plt.xlim( (-1.5, 1.5) )
plt.ylim( (-1.5, 1.5) )
plt.subplot(133)
plt.plot( np.real(y_down[ scenario ]), np.imag(y_down[ scenario ]),'o')
plt.grid(True); plt.xlabel('$\\mathrm{I}$');
plt.title('$y(kT)$')
plt.xlim( (-1.5, 1.5) )
plt.ylim( (-1.5, 1.5) )
Explanation: Plotting Resulting Signals at Tx, after MF and after Sampling
End of explanation |
2,627 | Given the following text description, write Python code to implement the functionality described.
Description:
You are given a list of integers.
You need to find the largest prime value and return the sum of its digits.
Examples:
For lst = [0,3,2,1,3,5,7,4,5,5,5,2,181,32,4,32,3,2,32,324,4,3] the output should be 10
For lst = [1,0,1,8,2,4597,2,1,3,40,1,2,1,2,4,2,5,1] the output should be 25
For lst = [1,3,1,32,5107,34,83278,109,163,23,2323,32,30,1,9,3] the output should be 13
For lst = [0,724,32,71,99,32,6,0,5,91,83,0,5,6] the output should be 11
For lst = [0,81,12,3,1,21] the output should be 3
For lst = [0,8,1,2,1,7] the output should be 7
| Python Code:
def skjkasdkd(lst):
def isPrime(n):
if n < 2: # 0 and 1 are not prime
return False
for i in range(2, int(n**0.5) + 1):
if n % i == 0:
return False
return True
maxx = 0
i = 0
while i < len(lst):
if(lst[i] > maxx and isPrime(lst[i])):
maxx = lst[i]
i+=1
result = sum(int(digit) for digit in str(maxx))
return result |
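A quick sanity check against the examples listed in the description:
assert skjkasdkd([0,3,2,1,3,5,7,4,5,5,5,2,181,32,4,32,3,2,32,324,4,3]) == 10
assert skjkasdkd([1,0,1,8,2,4597,2,1,3,40,1,2,1,2,4,2,5,1]) == 25
assert skjkasdkd([0,724,32,71,99,32,6,0,5,91,83,0,5,6]) == 11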
2,628 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating discrete Bayesian Networks
In this section, we show an example for creating a Bayesian Network in pgmpy from scratch. We use the cancer model (http
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Loading example models
To quickly try out different features, pgmpy also has the functionality to directly load some example models from the bnlearn repository. | Python Code:
from IPython.display import Image
Image("images/cancer.png")
Explanation: Creating discrete Bayesian Networks
In this section, we show an example for creating a Bayesian Network in pgmpy from scratch. We use the cancer model (http://www.bnlearn.com/bnrepository/#cancer) for the example. The model structure is shown below.
In pgmpy, the model structure and its parameterization (CPDs) don't depend on each other. So, the workflow is to first define the model structure, then define all the parameters (CPDs), and then add these parameters to the model. These CPDs can later be modified, removed, or replaced without changing or redefining the model structure.
End of explanation
from pgmpy.models import BayesianNetwork
cancer_model = BayesianNetwork(
[
("Pollution", "Cancer"),
("Smoker", "Cancer"),
("Cancer", "Xray"),
("Cancer", "Dyspnoea"),
]
)
Explanation: Step 1: Define the model structure
The BayesianNetwork model can be initialized by passing a list of edges in the model structure. In this case, there are 4 edges in the model: Pollution -> Cancer, Smoker -> Cancer, Cancer -> Xray, Cancer -> Dyspnoea.
End of explanation
from pgmpy.factors.discrete import TabularCPD
cpd_poll = TabularCPD(variable="Pollution", variable_card=2, values=[[0.9], [0.1]])
cpd_smoke = TabularCPD(variable="Smoker", variable_card=2, values=[[0.3], [0.7]])
cpd_cancer = TabularCPD(
variable="Cancer",
variable_card=2,
values=[[0.03, 0.05, 0.001, 0.02], [0.97, 0.95, 0.999, 0.98]],
evidence=["Smoker", "Pollution"],
evidence_card=[2, 2],
)
cpd_xray = TabularCPD(
variable="Xray",
variable_card=2,
values=[[0.9, 0.2], [0.1, 0.8]],
evidence=["Cancer"],
evidence_card=[2],
)
cpd_dysp = TabularCPD(
variable="Dyspnoea",
variable_card=2,
values=[[0.65, 0.3], [0.35, 0.7]],
evidence=["Cancer"],
evidence_card=[2],
)
Explanation: Step 2: Define the CPDs
Each node of a Bayesian Network has a CPD associated with it, hence we need to define 5 CPDs in this case. In pgmpy, CPDs can be defined using the TabularCPD class. For details on the parameters, please refer to the documentation: https://pgmpy.org/_modules/pgmpy/factors/discrete/CPD.html
End of explanation
# Associating the parameters with the model structure.
cancer_model.add_cpds(cpd_poll, cpd_smoke, cpd_cancer, cpd_xray, cpd_dysp)
# Checking if the cpds are valid for the model.
cancer_model.check_model()
Explanation: Step 3: Add the CPDs to the model.
After defining the model parameters, we can now add them to the model using add_cpds method. The check_model method can also be used to verify if the CPDs are correctly defined for the model structure.
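Because the parameters are decoupled from the structure, a CPD can also be swapped out later without touching the edges; a small sketch (the replacement numbers are made up for illustration):
new_cpd_xray = TabularCPD(
    variable="Xray", variable_card=2,
    values=[[0.95, 0.1], [0.05, 0.9]],   # hypothetical, sharper test
    evidence=["Cancer"], evidence_card=[2],
)
cancer_model.remove_cpds("Xray")   # drop the old CPD by variable name
cancer_model.add_cpds(new_cpd_xray)
cancer_model.check_model()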
End of explanation
# Check for d-separation between variables
print(cancer_model.is_dconnected("Pollution", "Smoker"))
print(cancer_model.is_dconnected("Pollution", "Smoker", observed=["Cancer"]))
# Get all d-connected nodes
cancer_model.active_trail_nodes("Pollution")
# List local independencies for a node
cancer_model.local_independencies("Xray")
# Get all model implied independence conditions
cancer_model.get_independencies()
Explanation: Step 4: Run basic operations on the model
End of explanation
from pgmpy.utils import get_example_model
model = get_example_model("cancer")
print("Nodes in the model:", model.nodes())
print("Edges in the model:", model.edges())
model.get_cpds()
Explanation: Loading example models
To quickly try out different features, pgmpy also has the functionality to directly load some example models from the bnlearn repository.
End of explanation |
2,629 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
healpy tutorial
See the Jupyter Notebook version of this tutorial at https
Step1: NSIDE and ordering
Maps are simply numpy arrays, where each array element refers to a location in the sky as defined by the Healpix pixelization schemes (see the healpix website).
Note
Step2: The function healpy.pixelfunc.nside2npix gives the number of pixels NPIX of the map
Step3: The same pixels in the map can be ordered in 2 ways, either RING, where they are numbered in the array in horizontal rings starting from the North pole
Step4: The standard coordinates are the colatitude $\theta$, $0$ at the North Pole, $\pi/2$ at the equator and $\pi$ at the South Pole and the longitude $\phi$ between $0$ and $2\pi$ eastward, in a Mollview projection, $\phi=0$ is at the center and increases eastward toward the left of the map.
We can also use vectors to represent coordinates, for example vec is the normalized vector that points to $\theta=\pi/2, \phi=3/4\pi$
Step5: We can find the indices of all the pixels within $10$ degrees of that point and then change the value of the map at those indices
Step6: We can retrieve colatitude and longitude of each pixel using pix2ang, in this case we notice that the first 4 pixels cover the North Pole with pixel centers just ~$1.5$ degrees South of the Pole all at the same latitude. The fifth pixel is already part of another ring of pixels.
Step7: The RING ordering is necessary for the Spherical Harmonics transforms, the other option is NESTED ordering which is very efficient for map domain operations because scaling up and down maps is achieved just multiplying and rounding pixel indices.
See below how pixel are ordered in the NESTED scheme, notice the structure of the 12 HEALPix base pixels (NSIDE 1)
Step8: All healpy routines assume RING ordering, in fact as soon as you read a map with read_map, even if it was stored as NESTED, it is transformed to RING.
However, you can work in NESTED ordering passing the nest=True argument to most healpy routines.
Reading and writing maps to file
For the following section, it is required to download larger maps by executing from the terminal the bash script healpy_get_wmap_maps.sh which should be available in your path.
This will download the higher resolution WMAP data into the current directory.
Step9: By default, input maps are converted to RING ordering, if they are in NESTED ordering. You can otherwise specify nest=True to retrieve a map is NESTED ordering, or nest=None to keep the ordering unchanged.
By default, read_map loads the first column, for reading other columns you can specify the field keyword.
write_map writes a map to disk in FITS format, if the input map is a list of 3 maps, they are written to a single file as I,Q,U polarization components
Step10: Visualization
As shown above, mollweide projection with mollview is the most common visualization tool for HEALPIX maps. It also supports coordinate transformation, coord does Galactic to ecliptic coordinate transformation, norm='hist' sets a histogram equalized color scale and xsize increases the size of the image. graticule adds meridians and parallels.
Step11: gnomview instead provides gnomonic projection around a position specified by rot, for example you can plot a projection of the galactic center, xsize and ysize change the dimension of the sky patch.
Step12: mollzoom is a powerful tool for interactive inspection of a map, it provides a mollweide projection where you can click to set the center of the adjacent gnomview panel.
Masked map, partial maps
By convention, HEALPIX uses $-1.6375 * 10^{30}$ to mark invalid or unseen pixels. This is stored in healpy as the constant UNSEEN.
All healpy functions automatically deal with maps with UNSEEN pixels, for example mollview marks in grey those sections of a map.
There is an alternative way of dealing with UNSEEN pixel based on the numpyMaskedArray class, hp.ma loads a map as a masked array, by convention the mask is 0 where the data are masked, while numpy defines data masked when the mask is True, so it is necessary to flip the mask.
Step13: Filling a masked array fills in the UNSEEN value and return a standard array that can be used by mollview.
compressed() instead removes all the masked pixels and returns a standard array that can be used for examples by the matplotlib hist() function
Step14: Spherical Harmonics transforms
healpy provides bindings to the C++ HEALPIX library for performing spherical harmonic transforms.
hp.anafast computes the angular power spectrum of a map
Step15: therefore we can plot a normalized CMB spectrum and write it to disk
Step16: Gaussian beam map smoothing is provided by hp.smoothing | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import healpy as hp
Explanation: healpy tutorial
See the Jupyter Notebook version of this tutorial at https://github.com/healpy/healpy/blob/master/doc/healpy_tutorial.ipynb
See an executed version of the notebook with embedded plots at https://gist.github.com/zonca/9c114608e0903a3b8ea0bfe41c96f255
Choose the inline backend of matplotlib to display the plots inside the Jupyter Notebook
End of explanation
NSIDE = 32
print(
"Approximate resolution at NSIDE {} is {:.2} deg".format(
NSIDE, hp.nside2resol(NSIDE, arcmin=True) / 60
)
)
Explanation: NSIDE and ordering
Maps are simply numpy arrays, where each array element refers to a location in the sky as defined by the Healpix pixelization schemes (see the healpix website).
Note: Running the code below in a regular Python session will not display the maps; it's recommended to use an IPython shell or a Jupyter notebook.
The resolution of the map is defined by the NSIDE parameter, which is generally a power of 2.
End of explanation
NPIX = hp.nside2npix(NSIDE)
print(NPIX)
Explanation: The function healpy.pixelfunc.nside2npix gives the number of pixels NPIX of the map:
End of explanation
m = np.arange(NPIX)
hp.mollview(m, title="Mollview image RING")
hp.graticule()
Explanation: The same pixels in the map can be ordered in 2 ways, either RING, where they are numbered in the array in horizontal rings starting from the North pole:
End of explanation
vec = hp.ang2vec(np.pi / 2, np.pi * 3 / 4)
print(vec)
Explanation: The standard coordinates are the colatitude $\theta$, $0$ at the North Pole, $\pi/2$ at the equator and $\pi$ at the South Pole, and the longitude $\phi$ between $0$ and $2\pi$ eastward. In a Mollview projection, $\phi=0$ is at the center and increases eastward toward the left of the map.
We can also use vectors to represent coordinates, for example vec is the normalized vector that points to $\theta=\pi/2, \phi=3/4\pi$:
End of explanation
ipix_disc = hp.query_disc(nside=32, vec=vec, radius=np.radians(10))
m = np.arange(NPIX)
m[ipix_disc] = m.max()
hp.mollview(m, title="Mollview image RING")
Explanation: We can find the indices of all the pixels within $10$ degrees of that point and then change the value of the map at those indices:
End of explanation
theta, phi = np.degrees(hp.pix2ang(nside=32, ipix=[0, 1, 2, 3, 4]))
theta
phi
Explanation: We can retrieve colatitude and longitude of each pixel using pix2ang, in this case we notice that the first 4 pixels cover the North Pole with pixel centers just ~$1.5$ degrees South of the Pole all at the same latitude. The fifth pixel is already part of another ring of pixels.
End of explanation
m = np.arange(NPIX)
hp.mollview(m, nest=True, title="Mollview image NESTED")
Explanation: The RING ordering is necessary for the Spherical Harmonics transforms, the other option is NESTED ordering which is very efficient for map domain operations because scaling up and down maps is achieved just multiplying and rounding pixel indices.
See below how pixels are ordered in the NESTED scheme; notice the structure of the 12 HEALPix base pixels (NSIDE 1):
End of explanation
!healpy_get_wmap_maps.sh
wmap_map_I = hp.read_map("wmap_band_iqumap_r9_7yr_W_v4.fits")
Explanation: All healpy routines assume RING ordering, in fact as soon as you read a map with read_map, even if it was stored as NESTED, it is transformed to RING.
However, you can work in NESTED ordering passing the nest=True argument to most healpy routines.
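For example (a small sketch using healpy's conversion helpers):
m_ring = np.arange(hp.nside2npix(32))
m_nest = hp.reorder(m_ring, r2n=True)   # same map, pixels re-indexed in NESTED order
ipix_nest = hp.ring2nest(32, 1000)      # convert a single RING pixel index to NESTED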
Reading and writing maps to file
For the following section, it is required to download larger maps by executing from the terminal the bash script healpy_get_wmap_maps.sh which should be available in your path.
This will download the higher resolution WMAP data into the current directory.
End of explanation
hp.write_map("my_map.fits", wmap_map_I, overwrite=True)
Explanation: By default, input maps are converted to RING ordering, if they are in NESTED ordering. You can otherwise specify nest=True to retrieve a map in NESTED ordering, or nest=None to keep the ordering unchanged.
By default, read_map loads the first column, for reading other columns you can specify the field keyword.
write_map writes a map to disk in FITS format, if the input map is a list of 3 maps, they are written to a single file as I,Q,U polarization components:
End of explanation
hp.mollview(
wmap_map_I,
coord=["G", "E"],
title="Histogram equalized Ecliptic",
unit="mK",
norm="hist",
min=-1,
max=1,
)
hp.graticule()
Explanation: Visualization
As shown above, mollweide projection with mollview is the most common visualization tool for HEALPIX maps. It also supports coordinate transformation, coord does Galactic to ecliptic coordinate transformation, norm='hist' sets a histogram equalized color scale and xsize increases the size of the image. graticule adds meridians and parallels.
End of explanation
hp.gnomview(wmap_map_I, rot=[0, 0.3], title="GnomView", unit="mK", format="%.2g")
Explanation: gnomview instead provides gnomonic projection around a position specified by rot, for example you can plot a projection of the galactic center, xsize and ysize change the dimension of the sky patch.
End of explanation
mask = hp.read_map("wmap_temperature_analysis_mask_r9_7yr_v4.fits").astype(np.bool_)
wmap_map_I_masked = hp.ma(wmap_map_I)
wmap_map_I_masked.mask = np.logical_not(mask)
Explanation: mollzoom is a powerful tool for interactive inspection of a map, it provides a mollweide projection where you can click to set the center of the adjacent gnomview panel.
Masked map, partial maps
By convention, HEALPIX uses $-1.6375 * 10^{30}$ to mark invalid or unseen pixels. This is stored in healpy as the constant UNSEEN.
All healpy functions automatically deal with maps with UNSEEN pixels, for example mollview marks in grey those sections of a map.
There is an alternative way of dealing with UNSEEN pixels based on the numpy MaskedArray class: hp.ma loads a map as a masked array. By the HEALPIX convention the mask is 0 where the data are masked, while numpy defines data as masked when the mask is True, so it is necessary to flip the mask.
End of explanation
hp.mollview(wmap_map_I_masked.filled())
plt.hist(wmap_map_I_masked.compressed(), bins=1000);
Explanation: Filling a masked array fills in the UNSEEN value and returns a standard array that can be used by mollview.
compressed() instead removes all the masked pixels and returns a standard array that can be used, for example, by the matplotlib hist() function:
End of explanation
LMAX = 1024
cl = hp.anafast(wmap_map_I_masked.filled(), lmax=LMAX)
ell = np.arange(len(cl))
Explanation: Spherical Harmonics transforms
healpy provides bindings to the C++ HEALPIX library for performing spherical harmonic transforms.
hp.anafast computes the angular power spectrum of a map:
End of explanation
plt.figure(figsize=(10, 5))
plt.plot(ell, ell * (ell + 1) * cl)
plt.xlabel("$\ell$")
plt.ylabel("$\ell(\ell+1)C_{\ell}$")
plt.grid()
hp.write_cl("cl.fits", cl, overwrite=True)
Explanation: therefore we can plot a normalized CMB spectrum and write it to disk:
End of explanation
wmap_map_I_smoothed = hp.smoothing(wmap_map_I, fwhm=np.radians(1.))
hp.mollview(wmap_map_I_smoothed, min=-1, max=1, title="Map smoothed 1 deg")
Explanation: Gaussian beam map smoothing is provided by hp.smoothing:
End of explanation |
2,630 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Translation Matrix Tutorial
What is it ?
Suppose we are given a set of word pairs and their associated vector representations $\{x_{i}, z_{i}\}_{i=1}^{n}$, where $x_{i} \in R^{d_{1}}$ is the distributed representation of word $i$ in the source language, and $z_{i} \in R^{d_{2}}$ is the vector representation of its translation. Our goal is to find a transformation matrix $W$ such that $Wx_{i}$ approximates $z_{i}$. In practice, $W$ can be learned by the following optimization problem
Step1: For this tutorial, we'll be training our model using the English -> Italian word pairs from the OPUS collection. This corpus contains 5000 word pairs. Each pair is a English word and corresponding Italian word.
dataset download
Step2: This tutorial uses 300-dimensional vectors of English words as source and vectors of Italian words as target.(those vector trained by the word2vec toolkit with cbow. The context window was set 5 words to either side of the target,
the sub-sampling option was set to 1e-05 and estimate the probability of a target word with the negative sampling method, drawing 10 samples from the noise distribution)
dataset download
Step3: training the translation matrix
Step4: Prediction Time
Step5: part two
Step6: part three
Step7: The Creation Time for the Translation Matrix
Testing the creation time, we extracted more word pairs from a dictionary built from Europarl (en-it). We obtain about 20K word pairs and their corresponding word vectors, or you can download them from this: word_dict.pkl
Step8: You will see a two dimensional coordination whose horizontal axis is the size of corpus and vertical axis is the time to train a translation matrix (the unit is second). As the size of corpus increases, the time increases linearly.
Linear Relationship Between Languages
To have a better understanding of the principles behind, we visualized the word vectors using PCA, we noticed that the vector representations of similar words in different languages were related by a linear transformation.
Step9: The figure shows that the word vectors for English number one to five and the corresponding Italian words uno to cinque have similar geometric arrangements. So the relationship between vector spaces that represent these tow languages can be captured by linear mapping.
If we know the translation of one and four from English to Italian, we can learn the transformation matrix that can help us to translate five or other numbers.
Step10: You probably will see that two kind of different color nodes, one for the English and the other for the Italian. For the translation of word five, we return top 3 similar words [u'cinque', u'quattro', u'tre']. We can easily see that the translation is convincing.
Let's see some animals word, the figue show that most of words are also share the similar geometric arrangements.
Step11: You probably will see that two kind of different color nodes, one for the English and the other for the Italian. For the translation of word bird, we return top 3 similar words [u'uccelli', u'garzette', u'iguane']. We can easily see that the animals' words translation is also convincing as the numbers.
Translation Matrix Revisited
As discussed in this PR, the Translation Matrix can not only be used to translate words from one source language to another target language, but also to translate new document vectors back to an old model's space.
For example, suppose we have trained 15k documents using doc2vec (call this model1), and we are going to train 35k new documents using doc2vec (call this model2). We can include those 15k documents as reference documents among the new 35k documents. Then we get 15k document vectors from model1 and 50k document vectors from model2, and both models have vectors for those 15k shared documents. We can use those shared vectors to build a mapping from model1 to model2. Finally, with this relation, we can back-map model2's vectors to model1. Therefore, the 35k document vectors are learned using this method.
In this notebook, we use the IMDB dataset as example. For more information about this dataset, please refer to this. And some of code are borrowed from this notebook
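An illustrative numpy sketch of that back-mapping idea (not gensim's own API; the shapes and arrays below are stand-ins):
import numpy as np

shared_m2 = np.random.rand(15000, 100)   # shared documents' vectors in model2
shared_m1 = np.random.rand(15000, 100)   # the same documents' vectors in model1
W_back, *_ = np.linalg.lstsq(shared_m2, shared_m1, rcond=None)
new_m2 = np.random.rand(35000, 100)      # documents that only exist in model2
new_in_m1 = new_m2 @ W_back              # their inferred vectors in model1's space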
Step12: Here, we train two Doc2vec model, the parameters can be determined by yourself. We trained on 15k documents for the model1 and 50k documents for the model2. But you should mixed some documents which from the 15k document in model to the model2 as dicussed before.
Step13: For the IMDB training dataset, we train an classifier on the train data which has 25k documents with positive and negative label. Then using this classifier to predict the test data. To see what accuracy can the document vectors which learned by different method achieve.
Step14: For the experiment one, we use the vector which learned by the Doc2vec method.To evalute those document vector, we use split those 50k document into two part, one for training and the other for testing.
Step15: For the experiment two, the document vectors are learned by the back-mapping method, which has a linear mapping for the model1 and model2. Using this method like translation matrix for the word translation, If we provide the vector for the addtional 35k document vector in model2, we can infer this vector for the model1.
Step16: As we can see that, the vectors learned by back-mapping method performed not bad but still need improved.
Visualization
we pick some documents and extract the vectors from both model1 and model2; we can see that they also share a similar geometric arrangement. | Python Code:
import os
from gensim import utils
from gensim.models import translation_matrix
from gensim.models import KeyedVectors
Explanation: Translation Matrix Tutorial
What is it ?
Suppose we are given a set of word pairs and their associated vector representations $\{x_{i}, z_{i}\}_{i=1}^{n}$, where $x_{i} \in R^{d_{1}}$ is the distributed representation of word $i$ in the source language, and $z_{i} \in R^{d_{2}}$ is the vector representation of its translation. Our goal is to find a transformation matrix $W$ such that $Wx_{i}$ approximates $z_{i}$. In practice, $W$ can be learned by the following optimization problem:
<center>$\min \limits_{W} \sum \limits_{i=1}^{n} ||Wx_{i}-z_{i}||^{2}$</center>
Resources
Tomas Mikolov, Quoc V Le, Ilya Sutskever. 2013.Exploiting Similarities among Languages for Machine Translation
Georgiana Dinu, Angelikie Lazaridou and Marco Baroni. 2014.Improving zero-shot learning by mitigating the hubness problem
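A minimal numpy sketch of this least-squares problem (independent of gensim's own implementation; the arrays are random stand-ins for the real word vectors):
import numpy as np

X = np.random.rand(5000, 300)   # rows: source-language word vectors x_i
Z = np.random.rand(5000, 300)   # rows: target-language word vectors z_i
W, residuals, rank, sv = np.linalg.lstsq(X, Z, rcond=None)   # so that X @ W ~= Z
mapped = X[0] @ W               # approximate position of the first word in the target space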
End of explanation
train_file = "OPUS_en_it_europarl_train_5K.txt"
with utils.smart_open(train_file, "r") as f:
word_pair = [tuple(utils.to_unicode(line).strip().split()) for line in f]
print word_pair[:10]
Explanation: For this tutorial, we'll be training our model using the English -> Italian word pairs from the OPUS collection. This corpus contains 5000 word pairs. Each pair is an English word and its corresponding Italian word.
dataset download:
OPUS_en_it_europarl_train_5K.txt
End of explanation
# Load the source language word vector
source_word_vec_file = "EN.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt"
source_word_vec = KeyedVectors.load_word2vec_format(source_word_vec_file, binary=False)
#Load the target language word vector
target_word_vec_file = "IT.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt"
target_word_vec = KeyedVectors.load_word2vec_format(target_word_vec_file, binary=False)
Explanation: This tutorial uses 300-dimensional vectors of English words as the source and vectors of Italian words as the target. (The vectors were trained with the word2vec toolkit using CBOW; the context window was set to 5 words on either side of the target,
the sub-sampling option was set to 1e-05, and the probability of a target word was estimated with the negative sampling method, drawing 10 samples from the noise distribution.)
dataset download:
EN.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt
IT.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt
End of explanation
transmat = translation_matrix.TranslationMatrix(word_pair, source_word_vec, target_word_vec)
transmat.train(word_pair)
print "the shape of translation matrix is: ", transmat.translation_matrix.shape
Explanation: training the translation matrix
End of explanation
# the pair is (English, Italian); we can check whether the translated word is right or not
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5)
for k, v in translated_word.iteritems():
print "word ", k, " and translated word", v
Explanation: Prediction Time: for any given new word, we can map it to the other language space by computing $z = Wx$; then we find the word whose representation is closest to $z$ in the target language space, using cosine similarity as the distance metric.
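Illustratively (a sketch only, assuming a mapping matrix W such as the numpy example earlier, not gensim's internal code):
z = source_word_vec["two"].dot(W)                          # map into the Italian space
candidates = target_word_vec.similar_by_vector(z, topn=3)  # cosine nearest neighbours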
part one:
Let's look at some number translation. We use English words (one, two, three, four and five) as test.
End of explanation
words = [("apple", "mela"), ("orange", "arancione"), ("grape", "acino"), ("banana", "banana"), ("mango", "mango")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5)
for k, v in translated_word.iteritems():
print "word ", k, " and translated word", v
Explanation: part two:
Let's look at some fruit translations. We use English words (apple, orange, grape, banana and mango) as test.
End of explanation
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("fish", "cavallo"), ("birds", "uccelli")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5)
for k, v in translated_word.iteritems():
print "word ", k, " and translated word", v
Explanation: part three:
Let's look at some animal translations. We use English words (dog, pig, cat, horse and bird) as test.
End of explanation
import pickle
word_dict = "word_dict.pkl"
with utils.smart_open(word_dict, "r") as f:
word_pair = pickle.load(f)
print "the length of word pair ", len(word_pair)
import time
test_case = 10
word_pair_length = len(word_pair)
step = word_pair_length / test_case
duration = []
sizeofword = []
for idx in xrange(0, test_case):
sub_pair = word_pair[: (idx + 1) * step]
startTime = time.time()
transmat = translation_matrix.TranslationMatrix(sub_pair, source_word_vec, target_word_vec)
transmat.train(sub_pair)
endTime = time.time()
sizeofword.append(len(sub_pair))
duration.append(endTime - startTime)
import plotly
from plotly.graph_objs import Scatter, Layout
plotly.offline.init_notebook_mode(connected=True)
plotly.offline.iplot({
"data": [Scatter(x=sizeofword, y=duration)],
"layout": Layout(title="time for creation"),
}, filename="tm_creation_time.html")
Explanation: The Creation Time for the Translation Matrix
Testing the creation time, we extracted more word pairs from a dictionary built from Europarl (en-it). We obtain about 20K word pairs and their corresponding word vectors, or you can download them from this: word_dict.pkl
End of explanation
from sklearn.decomposition import PCA
import plotly
from plotly.graph_objs import Scatter, Layout, Figure
plotly.offline.init_notebook_mode(connected=True)
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
en_words, it_words = zip(*words)
pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)
# remove the code, use the plotly for ploting instead
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# plt.show()
# you can also using plotly lib to plot in one figure
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:, 0],
y = new_it_words_vec[:, 1],
mode = 'markers+text',
text = it_words,
textposition = 'top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_number.html')
Explanation: You will see a two-dimensional plot whose horizontal axis is the size of the corpus and whose vertical axis is the time to train a translation matrix (in seconds). As the size of the corpus increases, the training time increases linearly.
Linear Relationship Between Languages
To better understand the principles behind this, we visualized the word vectors using PCA and noticed that the vector representations of similar words in different languages are related by a linear transformation.
End of explanation
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
en_words, it_words = zip(*words)
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
# translate the English word five to Italian
translated_word = transmat.translate([en_words[4]], 3)
print "translation of five: ", translated_word
# the translated words of five
for item in translated_word[en_words[4]]:
it_words_vec.append(target_word_vec[item])
pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)
# remove the code, use the plotly for ploting instead
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# # annote for the translation of five, the red text annotation is the translation of five
# for idx, item in enumerate(translated_word[en_words[4]]):
# plt.annotate(item, xy=(new_it_words_vec[idx + 5][0], new_it_words_vec[idx + 5][1]),
# xytext=(new_it_words_vec[idx + 5][0] + 0.1, new_it_words_vec[idx + 5][1] + 0.1),
# color="red",
# arrowprops=dict(facecolor='red', shrink=0.1, width=1, headwidth=2),)
# plt.show()
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:, 0],
y = new_it_words_vec[:, 1],
mode = 'markers+text',
text = it_words,
textposition = 'top'
)
layout = Layout(
showlegend = False,
annotations = [dict(
x = new_it_words_vec[5][0],
y = new_it_words_vec[5][1],
text = translated_word[en_words[4]][0],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[6][0],
y = new_it_words_vec[6][1],
text = translated_word[en_words[4]][1],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[7][0],
y = new_it_words_vec[7][1],
text = translated_word[en_words[4]][2],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
)]
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_numbers.html')
Explanation: The figure shows that the word vectors for the English numbers one to five and the corresponding Italian words uno to cinque have similar geometric arrangements. So the relationship between the vector spaces that represent these two languages can be captured by a linear mapping.
If we know the translation of one and four from English to Italian, we can learn the transformation matrix that can help us to translate five or other numbers.
End of explanation
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("horse", "cavallo"), ("birds", "uccelli")]
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
en_words, it_words = zip(*words)
# project the animal-word vectors with PCA (the plotly traces below rely on these)
pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)
# the matplotlib version below was removed in favour of plotly
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# plt.show()
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:, 0],
y = new_it_words_vec[:, 1],
mode = 'markers+text',
text = it_words,
textposition ='top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_animal.html')
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("horse", "cavallo"), ("birds", "uccelli")]
en_words, it_words = zip(*words)
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
# translate the English word birds to Italian
translated_word = transmat.translate([en_words[4]], 3)
print "translation of birds: ", translated_word
# the translated words of birds
for item in translated_word[en_words[4]]:
it_words_vec.append(target_word_vec[item])
pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)
# # remove the code, use the plotly for ploting instead
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# # annote for the translation of five, the red text annotation is the translation of five
# for idx, item in enumerate(translated_word[en_words[4]]):
# plt.annotate(item, xy=(new_it_words_vec[idx + 5][0], new_it_words_vec[idx + 5][1]),
# xytext=(new_it_words_vec[idx + 5][0] + 0.1, new_it_words_vec[idx + 5][1] + 0.1),
# color="red",
# arrowprops=dict(facecolor='red', shrink=0.1, width=1, headwidth=2),)
# plt.show()
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:5, 0],
y = new_it_words_vec[:5, 1],
mode = 'markers+text',
text = it_words[:5],
textposition = 'top'
)
layout = Layout(
showlegend = False,
annotations = [dict(
x = new_it_words_vec[5][0],
y = new_it_words_vec[5][1],
text = translated_word[en_words[4]][0],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[6][0],
y = new_it_words_vec[6][1],
text = translated_word[en_words[4]][1],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[7][0],
y = new_it_words_vec[7][1],
text = translated_word[en_words[4]][2],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
)]
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_animal.html')
Explanation: You will probably see two kinds of nodes in different colors, one for English and the other for Italian. For the translation of the word five, we return the top 3 most similar words [u'cinque', u'quattro', u'tre']. We can easily see that the translation is convincing.
Let's look at some animal words; the figure shows that most of these words also share similar geometric arrangements.
End of explanation
import gensim
from gensim.models.doc2vec import TaggedDocument
from gensim.models import Doc2Vec
from collections import namedtuple
from gensim import utils
def read_sentimentDocs():
SentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')
alldocs = [] # will hold all docs in original order
with utils.smart_open('aclImdb/alldata-id.txt', encoding='utf-8') as alldata:
for line_no, line in enumerate(alldata):
tokens = gensim.utils.to_unicode(line).split()
words = tokens[1:]
tags = [line_no] # `tags = [tokens[0]]` would also work at extra memory cost
split = ['train','test','extra','extra'][line_no // 25000] # 25k train, 25k test, 25k extra
sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no // 12500] # [12.5K pos, 12.5K neg]*2 then unknown
alldocs.append(SentimentDocument(words, tags, split, sentiment))
train_docs = [doc for doc in alldocs if doc.split == 'train']
test_docs = [doc for doc in alldocs if doc.split == 'test']
doc_list = alldocs[:] # for reshuffling per pass
print('%d docs: %d train-sentiment, %d test-sentiment' % (len(doc_list), len(train_docs), len(test_docs)))
return train_docs, test_docs, doc_list
train_docs, test_docs, doc_list = read_sentimentDocs()
small_corpus = train_docs[:15000]
large_corpus = train_docs + test_docs
print len(train_docs), len(test_docs), len(doc_list), len(small_corpus), len(large_corpus)
Explanation: You will probably see two kinds of nodes in different colors, one for English and the other for Italian. For the translation of the word birds, we return the top 3 most similar words [u'uccelli', u'garzette', u'iguane']. We can easily see that the translation of animal words is as convincing as that of the numbers.
Translation Matrix Revisited
As discussed in this PR, the Translation Matrix can not only be used to translate words from a source language to a target language, but also to translate new document vectors back into an old model's space.
For example, suppose we have trained 15k documents using doc2vec (call this model1), and we are going to train 35k new documents using doc2vec (call this model2). We include the 15k documents as reference documents in the new training set, so we end up with 15k document vectors from model1 and 50k document vectors from model2, and both models have vectors for those 15k shared documents. We can use those shared vectors to build a mapping between model1 and model2, and with this relation we can back-map model2's vectors into model1's space. In this way, vectors for the 35k new documents are obtained in model1's space.
In this notebook, we use the IMDB dataset as an example. For more information about this dataset, please refer to this. Some of the code is borrowed from this notebook.
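A minimal NumPy sketch of this back-mapping idea is given below; shared_m1, shared_m2 and new_m2 are hypothetical arrays of document vectors, and the gensim BackMappingTranslationMatrix used later in this notebook automates the same procedure.
```python
import numpy as np

# Hypothetical document vectors: the 15k shared documents as seen by
# model1 and model2, plus the 35k documents that only model2 knows.
shared_m1 = np.random.rand(15000, 100)
shared_m2 = np.random.rand(15000, 100)
new_m2 = np.random.rand(35000, 100)

# Learn a linear map from model2's space back to model1's space
# using the documents that both models have vectors for.
W, _, _, _ = np.linalg.lstsq(shared_m2, shared_m1, rcond=None)

# Project the 35k new document vectors into model1's space.
new_m1 = new_m2 @ W
```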
End of explanation
# for the computer performance limited, didn't run on the notebook.
# You do can trained on the server and save the model to the disk.
import multiprocessing
from random import shuffle
cores = multiprocessing.cpu_count()
model1 = Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores)
model2 = Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores)
small_train_docs = train_docs[:15000]
# train for small corpus
model1.build_vocab(small_train_docs)
for epoch in xrange(50):
shuffle(small_train_docs)
model1.train(small_train_docs, total_examples=len(small_train_docs), epochs=1)
model1.save("small_doc_15000_iter50.bin")
large_train_docs = train_docs + test_docs
# train for large corpus
model2.build_vocab(large_train_docs)
for epoch in xrange(50):
shuffle(large_train_docs)
    model2.train(large_train_docs, total_examples=len(large_train_docs), epochs=1)
# save the model
model2.save("large_doc_50000_iter50.bin")
Explanation: Here, we train two Doc2vec models; the parameters can be chosen by yourself. We train model1 on 15k documents and model2 on 50k documents. Note that the 15k documents used for model1 are included in model2's corpus, as discussed before.
End of explanation
import os
import numpy as np
from sklearn.linear_model import LogisticRegression
def test_classifier_error(train, train_label, test, test_label):
classifier = LogisticRegression()
classifier.fit(train, train_label)
score = classifier.score(test, test_label)
print "the classifier score :", score
return score
Explanation: For the IMDB dataset, we train a classifier on the training data, which has 25k documents with positive and negative labels. We then use this classifier to predict the test data, to see what accuracy the document vectors learned by the different methods can achieve.
End of explanation
#you can change the data folder
basedir = "/home/robotcator/doc2vec"
model2 = Doc2Vec.load(os.path.join(basedir, "large_doc_50000_iter50.bin"))
m2 = []
for i in range(len(large_corpus)):
m2.append(model2.docvecs[large_corpus[i].tags])
train_array = np.zeros((25000, 100))
train_label = np.zeros((25000, 1))
test_array = np.zeros((25000, 100))
test_label = np.zeros((25000, 1))
for i in range(12500):
train_array[i] = m2[i]
train_label[i] = 1
train_array[i + 12500] = m2[i + 12500]
train_label[i + 12500] = 0
test_array[i] = m2[i + 25000]
test_label[i] = 1
test_array[i + 12500] = m2[i + 37500]
test_label[i + 12500] = 0
print "The vectors are learned by doc2vec method"
test_classifier_error(train_array, train_label, test_array, test_label)
Explanation: In the first experiment, we use the vectors learned directly by the doc2vec method. To evaluate those document vectors, we split the 50k documents into two parts, one for training and the other for testing.
End of explanation
from gensim.models import translation_matrix
# you can change the data folder
basedir = "/home/robotcator/doc2vec"
model1 = Doc2Vec.load(os.path.join(basedir, "small_doc_15000_iter50.bin"))
model2 = Doc2Vec.load(os.path.join(basedir, "large_doc_50000_iter50.bin"))
l = model1.docvecs.count
l2 = model2.docvecs.count
m1 = np.array([model1.docvecs[large_corpus[i].tags].flatten() for i in range(l)])
# learn the mapping bettween two model
model = translation_matrix.BackMappingTranslationMatrix(large_corpus[:15000], model1, model2)
model.train(large_corpus[:15000])
for i in range(l, l2):
infered_vec = model.infer_vector(model2.docvecs[large_corpus[i].tags])
m1 = np.vstack((m1, infered_vec.flatten()))
train_array = np.zeros((25000, 100))
train_label = np.zeros((25000, 1))
test_array = np.zeros((25000, 100))
test_label = np.zeros((25000, 1))
# because those document, 25k documents are postive label, 25k documents are negative label
for i in range(12500):
train_array[i] = m1[i]
train_label[i] = 1
train_array[i + 12500] = m1[i + 12500]
train_label[i + 12500] = 0
test_array[i] = m1[i + 25000]
test_label[i] = 1
test_array[i + 12500] = m1[i + 37500]
test_label[i + 12500] = 0
print "The vectors are learned by back-mapping method"
test_classifier_error(train_array, train_label, test_array, test_label)
Explanation: In the second experiment, the document vectors are learned by the back-mapping method, which fits a linear mapping between model1 and model2. Just as the translation matrix works for word translation, if we provide the vectors of the additional 35k documents from model2, we can infer the corresponding vectors in model1's space.
End of explanation
from sklearn.decomposition import PCA
import plotly
from plotly.graph_objs import Scatter, Layout, Figure
plotly.offline.init_notebook_mode(connected=True)
m1_part = m1[14995: 15000]
m2_part = m2[14995: 15000]
m1_part = np.array(m1_part).reshape(len(m1_part), 100)
m2_part = np.array(m2_part).reshape(len(m2_part), 100)
pca = PCA(n_components=2)
reduced_vec1 = pca.fit_transform(m1_part)
reduced_vec2 = pca.fit_transform(m2_part)
trace1 = Scatter(
x = reduced_vec1[:, 0],
y = reduced_vec1[:, 1],
mode = 'markers+text',
text = ['doc' + str(i) for i in range(len(reduced_vec1))],
textposition = 'top'
)
trace2 = Scatter(
x = reduced_vec2[:, 0],
y = reduced_vec2[:, 1],
mode = 'markers+text',
text = ['doc' + str(i) for i in range(len(reduced_vec1))],
textposition ='top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='doc_vec_vis')
m1_part = m1[14995: 15002]
m2_part = m2[14995: 15002]
m1_part = np.array(m1_part).reshape(len(m1_part), 100)
m2_part = np.array(m2_part).reshape(len(m2_part), 100)
pca = PCA(n_components=2)
reduced_vec1 = pca.fit_transform(m1_part)
reduced_vec2 = pca.fit_transform(m2_part)
trace1 = Scatter(
x = reduced_vec1[:, 0],
y = reduced_vec1[:, 1],
mode = 'markers+text',
text = ['sdoc' + str(i) for i in range(len(reduced_vec1))],
textposition = 'top'
)
trace2 = Scatter(
x = reduced_vec2[:, 0],
y = reduced_vec2[:, 1],
mode = 'markers+text',
text = ['tdoc' + str(i) for i in range(len(reduced_vec1))],
textposition ='top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='doc_vec_vis')
Explanation: As we can see, the vectors learned by the back-mapping method perform reasonably well, but there is still room for improvement.
Visualization
We pick some documents and extract their vectors from both model1 and model2; we can see that they also share a similar geometric arrangement.
End of explanation |
2,631 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: BERT Question Answer with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Import the required packages.
Step3: The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail.
Choose a model_spec that represents a model for question answer
Each model_spec object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.
Supported Model | Name of model_spec | Model Description
--- | --- | ---
MobileBERT | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.
MobileBERT-SQuAD | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on SQuAD1.1.
BERT-Base | 'bert_qa' | Standard BERT model that widely used in NLP tasks.
In this tutorial, MobileBERT-SQuAD is used as an example. Since the model is already retrained on SQuAD1.1, it can converge faster for the question answer task.
Step4: Load Input Data Specific to an On-device ML App and Preprocess the Data
The TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.
To load the data, convert the TriviaQA dataset to the SQuAD1.1 format by running the converter Python script with --sample_size=8000 and a set of web data. Modify the conversion code a little bit by
Step5: You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.
<img src="https
Step6: Customize the TensorFlow Model
Create a custom question answer model based on the loaded data. The create function comprises the following steps
Step7: Have a look at the detailed model structure.
Step8: Evaluate the Customized Model
Evaluate the model on the validation data and get a dict of metrics including f1 score and exact match etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
Step9: Export to TensorFlow Lite Model
Convert the trained model to the TensorFlow Lite model format with metadata so that you can later use it in an on-device ML application. The vocab file is embedded in the metadata. The default TFLite filename is model.tflite.
In many on-device ML applications, the model size is an important factor. Therefore, it is recommended that you quantize the model to make it smaller and potentially run faster.
The default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.
Step10: You can use the TensorFlow Lite model file in the bert_qa reference app using BertQuestionAnswerer API in TensorFlow Lite Task Library by downloading it from the left sidebar on Colab.
The allowed export formats can be one or a list of the following
Step11: You can also evaluate the tflite model with the evaluate_tflite method. This step is expected to take a long time.
Step12: Advanced Usage
The create function is the critical part of this library in which the model_spec parameter defines the model specification. The BertQASpec class is currently supported. There are 2 models | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -q tflite-model-maker
Explanation: BERT Question Answer with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_question_answer"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.
This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for the question answer task.
Introduction to BERT Question Answer Task
The supported task in this library is the extractive question answer task: given a passage and a question, the answer is a span in the passage. The image below shows an example of question answering.
<p align="center"><img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_squad_showcase.png" width="500"></p>
<p align="center">
<em>Answers are spans in the passage (image credit: <a href="https://rajpurkar.github.io/mlx/qa-and-squad/">SQuAD blog</a>) </em>
</p>
As for the model of the question answer task, the inputs should be the preprocessed passage and question pair, and the outputs should be the start logits and end logits for each token in the passage.
The size of the input can be set and adjusted according to the length of the passage and question.
End-to-End Overview
The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format.
```python
# Chooses a model specification that represents the model.
spec = model_spec.get('mobilebert_qa')
# Gets the training data and validation data.
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
# Fine-tunes the model.
model = question_answer.create(train_data, model_spec=spec)
# Gets the evaluation result.
metric = model.evaluate(validation_data)
# Exports the model to the TensorFlow Lite format with metadata in the export directory.
model.export(export_dir)
```
The following sections explain the code in more detail.
Prerequisites
To run this example, install the required packages, including the Model Maker package from the GitHub repo.
End of explanation
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.question_answer import DataLoader
Explanation: Import the required packages.
End of explanation
spec = model_spec.get('mobilebert_qa_squad')
Explanation: The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail.
Choose a model_spec that represents a model for question answer
Each model_spec object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.
Supported Model | Name of model_spec | Model Description
--- | --- | ---
MobileBERT | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.
MobileBERT-SQuAD | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on SQuAD1.1.
BERT-Base | 'bert_qa' | Standard BERT model that widely used in NLP tasks.
In this tutorial, MobileBERT-SQuAD is used as an example. Since the model is already retrained on SQuAD1.1, it can converge faster for the question answer task.
End of explanation
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
Explanation: Load Input Data Specific to an On-device ML App and Preprocess the Data
The TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.
To load the data, convert the TriviaQA dataset to the SQuAD1.1 format by running the converter Python script with --sample_size=8000 and a set of web data. Modify the conversion code a little bit by:
* Skipping the samples that couldn't find any answer in the context document;
* Getting the original answer in the context without uppercase or lowercase.
Download the archived version of the already converted dataset.
End of explanation
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
Explanation: You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_question_answer.png" alt="Upload File" width="800" hspace="100">
If you prefer not to upload your data to the cloud, you can also run the library offline by following the guide.
Use the DataLoader.from_squad method to load and preprocess the SQuAD-format data according to a specific model_spec. You can use either the SQuAD2.0 or the SQuAD1.1 format. Setting the parameter version_2_with_negative to True means the format is SQuAD2.0; otherwise, the format is SQuAD1.1. By default, version_2_with_negative is False.
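For instance, if your own data were in the SQuAD2.0 format, the call would presumably look like the sketch below; my_squad2_train.json is a placeholder path, not a file shipped with this tutorial.
```python
# Hypothetical SQuAD2.0-format file; version_2_with_negative marks it as such.
my_train_data = DataLoader.from_squad(
    'my_squad2_train.json', spec, is_training=True, version_2_with_negative=True)
```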
End of explanation
model = question_answer.create(train_data, model_spec=spec)
Explanation: Customize the TensorFlow Model
Create a custom question answer model based on the loaded data. The create function comprises the following steps:
Creates the model for question answer according to model_spec.
Trains the question answer model. The default epochs and the default batch size are set according to the two variables default_training_epochs and default_batch_size in the model_spec object.
End of explanation
model.summary()
Explanation: Have a look at the detailed model structure.
End of explanation
model.evaluate(validation_data)
Explanation: Evaluate the Customized Model
Evaluate the model on the validation data and get a dict of metrics including f1 score and exact match etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
End of explanation
model.export(export_dir='.')
Explanation: Export to TensorFlow Lite Model
Convert the trained model to the TensorFlow Lite model format with metadata so that you can later use it in an on-device ML application. The vocab file is embedded in the metadata. The default TFLite filename is model.tflite.
In many on-device ML applications, the model size is an important factor. Therefore, it is recommended that you quantize the model to make it smaller and potentially run faster.
The default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.
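If you want a different size/accuracy trade-off, the library also exposes other post-training options through QuantizationConfig. The following is a hedged sketch; check the API of your installed tflite-model-maker version before relying on it.
```python
from tflite_model_maker.config import QuantizationConfig

# Float16 quantization roughly halves the model size while usually
# keeping accuracy close to the original (an assumption to verify).
config = QuantizationConfig.for_float16()
model.export(export_dir='.', tflite_filename='model_fp16.tflite',
             quantization_config=config)
```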
End of explanation
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
Explanation: You can use the TensorFlow Lite model file in the bert_qa reference app using BertQuestionAnswerer API in TensorFlow Lite Task Library by downloading it from the left sidebar on Colab.
The allowed export formats can be one or a list of the following:
ExportFormat.TFLITE
ExportFormat.VOCAB
ExportFormat.SAVED_MODEL
By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows:
End of explanation
model.evaluate_tflite('model.tflite', validation_data)
Explanation: You can also evaluate the tflite model with the evaluate_tflite method. This step is expected to take a long time.
End of explanation
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
Explanation: Advanced Usage
The create function is the critical part of this library, in which the model_spec parameter defines the model specification. The BertQASpec class is currently supported. There are 2 models: the MobileBERT model and the BERT-Base model. The create function comprises the following steps:
Creates the model for question answer according to model_spec.
Trains the question answer model.
This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters etc.
Adjust the model
You can adjust the model infrastructure like parameters seq_len and query_len in the BertQASpec class.
Adjustable parameters for model:
seq_len: Length of the passage to feed into the model.
query_len: Length of the question to feed into the model.
doc_stride: The stride when doing a sliding window approach to take chunks of the documents.
initializer_range: The stdev of the truncated_normal_initializer for initializing all weight matrices.
trainable: Boolean, whether pre-trained layer is trainable.
Adjustable parameters for training pipeline:
model_dir: The location of the model checkpoint files. If not set, temporary directory will be used.
dropout_rate: The rate for dropout.
learning_rate: The initial learning rate for Adam.
predict_batch_size: Batch size for prediction.
tpu: TPU address to connect to. Only used if using tpu.
For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new model_spec.
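With the new spec in hand, the tuned training hyperparameters can then be passed to create. The sketch below assumes your tflite-model-maker version accepts epochs and batch_size on create; the values shown are illustrative, not recommendations.
```python
# Hypothetical retraining run with the adjusted spec and tuned hyperparameters.
model = question_answer.create(train_data, model_spec=new_spec,
                               epochs=5, batch_size=32)
```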
End of explanation |
2,632 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step1: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook
Step3: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
Step4: Use interactive to build a user interface for exploring the draw_circle function
Step5: Use the display function to show the widgets created by interactive | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.html import widgets
from IPython.display import display, SVG
Explanation: Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
s = ' <svg width="100" height="100"> <circle cx="50" cy="50" r="20" fill="aquamarine" /> </svg>'
SVG(s)
Explanation: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:
End of explanation
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
    """Draw an SVG circle.

    Parameters
    ----------
    width : int
        The width of the svg drawing area in px.
    height : int
        The height of the svg drawing area in px.
    cx : int
        The x position of the center of the circle in px.
    cy : int
        The y position of the center of the circle in px.
    r : int
        The radius of the circle in px.
    fill : str
        The fill color of the circle.
    """
a = '<svg width="'+str(width)+'" height="'+str(height)+'"> <circle cx="'+str(cx)+'" cy="'+str(cy)+'" r="'+str(r)+'" fill="'+fill+'"/></svg>'
display(SVG(a))
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
Explanation: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
End of explanation
w=interactive(draw_circle, width=fixed(300), height=fixed(300), cx=(0,300,1), cy=(0,300,1), r=(0,50,1), fill='red');
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
Explanation: Use interactive to build a user interface for exploring the draw_circle function:
width: a fixed value of 300px
height: a fixed value of 300px
cx/cy: a slider in the range [0,300]
r: a slider in the range [0,50]
fill: a text area in which you can type a color's name
Save the return value of interactive to a variable named w.
End of explanation
display(w)
assert True # leave this to grade the display of the widget
Explanation: Use the display function to show the widgets created by interactive:
End of explanation |
2,633 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="imgs/tensorflow_head.png" />
Tensorflow
TensorFlow (https
Step1: Meaning
Step2: Data Flow Graph
(IDEA)
_A Machine Learning application is the result of the repeated computation of complex mathematical expressions, thus
we could describe this computation by using a Data Flow Graph
Data Flow Graph
Step3: Data Types (Tensors)
One Dimensional Tensor (Vector)
Step4: Two Dimensional Tensor (Matrix)
Step5: Basic Operations (Examples)
Step6: Handling Tensors
Step7: Slicing
Step8: Transpose
Step9: Computing the Gradient
Gradients are free!
Step10: Warming up
Step11: <a name="kaggle"></a>
Kaggle Challenge Data
The Otto Group is one of the world’s biggest e-commerce companies, A consistent analysis of the performance of products is crucial. However, due to diverse global infrastructure, many identical products get classified differently.
For this competition, we have provided a dataset with 93 features for more than 200,000 products. The objective is to build a predictive model which is able to distinguish between our main product categories.
Each row corresponds to a single product. There are a total of 93 numerical features, which represent counts of different events. All features have been obfuscated and will not be defined any further.
https
Step12: Hands On - Logistic Regression
Step13: The Model
Step14: Learning
Step15: Prediction
Step16: TF Session | Python Code:
# A simple calculation in Python
x = 1
y = x + 10
print(y)
import tensorflow as tf
# The ~same simple calculation in Tensorflow
x = tf.constant(1, name='x')
y = tf.Variable(x+10, name='y')
print(y)
Explanation: <img src="imgs/tensorflow_head.png" />
Tensorflow
TensorFlow (https://www.tensorflow.org/) is a software library, developed by Google Brain Team within Google's Machine Learning Intelligence research organization, for the purposes of conducting machine learning and deep neural network research.
TensorFlow combines computational algebra with compilation optimization techniques, making it easy to calculate many mathematical expressions that would otherwise be difficult to compute.
Tensorflow Main Features
Defining, optimizing, and efficiently calculating mathematical expressions involving multi-dimensional arrays (tensors).
Programming support of deep neural networks and machine learning techniques.
Transparent use of GPU computing, automating management and optimization of the same memory and the data used. You can write the same code and run it either on CPUs or GPUs. More specifically, TensorFlow will figure out which parts of the computation should be moved to the GPU.
High scalability of computation across machines and huge data sets.
TensorFlow is available with Python and C++ support, but the Python API is better supported and much easier to learn.
Very Preliminary Example
End of explanation
model = tf.global_variables_initializer() # model is used by convention
with tf.Session() as session:
session.run(model)
print(session.run(y))
Explanation: Meaning: "When the variable y is computed, take the value of the constant x and add 10 to it"
Sessions and Models
To actually calculate the value of the y variable and to evaluate expressions, we need to initialise the variables, and then create a session where the actual computation happens
End of explanation
a = tf.constant(5, name="a")
b = tf.constant(45, name="b")
y = tf.Variable(a+b*2, name='y')
model = tf.global_variables_initializer()
with tf.Session() as session:
# Merge all the summaries collected in the default graph.
merged = tf.summary.merge_all()
# Then we create `SummaryWriter`.
# It will write all the summaries (in this case the execution graph)
    # obtained from the code's execution into the specified path.
writer = tf.summary.FileWriter("tmp/tf_logs_simple", session.graph)
session.run(model)
print(session.run(y))
Explanation: Data Flow Graph
(IDEA)
A Machine Learning application is the result of the repeated computation of complex mathematical expressions; thus, we could describe this computation by using a Data Flow Graph.
Data Flow Graph: a graph where:
each Node represents the instance of a mathematical operation
multiply, add, divide
each Edge is a multi-dimensional data set (tensors) on which the operations are performed.
Tensorflow Graph Model
Node: In TensorFlow, each node represents the instantiation of an operation.
Each operation has inputs (>= 2) and outputs >= 0.
Edges: In TensorFlow, there are two types of edge:
Data Edges:
They are carriers of data structures (tensors), where an output of one operation (from one node) becomes the input for another operation.
Dependency Edges: These edges indicate a control dependency between two nodes (i.e. "happens before" relationship).
Let's suppose we have two nodes A and B and a dependency edge connecting A to B. This means that B will start its operation only when the operation in A ends; a minimal sketch of such a dependency is shown below.
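Here is a minimal TF1-style sketch of a dependency edge, using tf.control_dependencies; the variable names are illustrative only.
```python
dep_a = tf.constant(3, name='dep_a')
dep_b = tf.Variable(0, name='dep_b')
assign_b = tf.assign(dep_b, dep_a + 1)      # the operation in node "A"

# Dependency edge: dep_c is evaluated only after assign_b has run.
with tf.control_dependencies([assign_b]):
    dep_c = tf.identity(dep_b)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(dep_c))  # prints 4
```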
Tensorflow Graph Model (cont.)
Operation: This represents an abstract computation, such as adding or multiplying matrices.
An operation manages tensors, and it can be polymorphic: the same operation can manipulate different tensor element types.
For example, the addition of two int32 tensors, the addition of two float tensors, and so on.
Kernel: This represents the concrete implementation of that operation.
A kernel defines the implementation of the operation on a particular device.
For example, an add matrix operation can have a CPU implementation and a GPU one.
Tensorflow Graph Model Session
Session: When the client program has to establish communication with the TensorFlow runtime system, a session must be created.
As soon as the session is created for a client, an initial graph is created and is empty. It has two fundamental methods:
session.extend: To be used during a computation, requesting to add more operations (nodes) and edges (data). The execution graph is then extended accordingly.
session.run: The execution graphs are executed to get the outputs (sometimes, subgraphs are executed thousands/millions of times using run invocations).
Tensorboard
TensorBoard is a visualization tool devoted to analyzing the Data Flow Graph and to better understanding machine learning models.
It can display different types of statistics about the parameters and details of any part of the computation graph graphically, which is useful because a computation graph can easily become very complex.
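Beyond the graph itself, scalar values such as a training loss can be logged and plotted over time. A minimal TF1-style sketch (the names and the log directory are illustrative):
```python
loss_value = tf.placeholder(tf.float32, name='loss_value')
loss_summary = tf.summary.scalar('loss', loss_value)

with tf.Session() as sess:
    writer = tf.summary.FileWriter("/tmp/tf_logs_scalar", sess.graph)
    for step in range(10):
        summary = sess.run(loss_summary,
                           feed_dict={loss_value: 1.0 / (step + 1)})
        writer.add_summary(summary, step)  # one point per step in TensorBoard
    writer.close()
```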
Tensorboard Example
Run the TensorBoard Server:
```shell
tensorboard --logdir=/tmp/tf_logs
```
Open TensorBoard
Example
End of explanation
import numpy as np
tensor_1d = np.array([1, 2.5, 4.6, 5.75, 9.7])
tf_tensor=tf.convert_to_tensor(tensor_1d,dtype=tf.float64)
with tf.Session() as sess:
print(sess.run(tf_tensor))
print(sess.run(tf_tensor[0]))
print(sess.run(tf_tensor[2]))
Explanation: Data Types (Tensors)
One Dimensional Tensor (Vector)
End of explanation
tensor_2d = np.arange(16).reshape(4, 4)
print(tensor_2d)
tf_tensor = tf.placeholder(tf.float32, shape=(4, 4))
with tf.Session() as sess:
print(sess.run(tf_tensor, feed_dict={tf_tensor: tensor_2d}))
Explanation: Two Dimensional Tensor (Matrix)
End of explanation
matrix1 = np.array([(2,2,2),(2,2,2),(2,2,2)],dtype='float32')
matrix2 = np.array([(1,1,1),(1,1,1),(1,1,1)],dtype='float32')
tf_mat1 = tf.constant(matrix1)
tf_mat2 = tf.constant(matrix2)
matrix_product = tf.matmul(tf_mat1, tf_mat2)
matrix_sum = tf.add(tf_mat1, tf_mat2)
matrix_det = tf.matrix_determinant(matrix2)
with tf.Session() as sess:
prod_res = sess.run(matrix_product)
sum_res = sess.run(matrix_sum)
det_res = sess.run(matrix_det)
print("matrix1*matrix2 : \n", prod_res)
print("matrix1+matrix2 : \n", sum_res)
print("det(matrix2) : \n", det_res)
Explanation: Basic Operations (Examples)
End of explanation
%matplotlib inline
import matplotlib.image as mp_image
filename = "imgs/keras-logo-small.jpg"
input_image = mp_image.imread(filename)
#dimension
print('input dim = {}'.format(input_image.ndim))
#shape
print('input shape = {}'.format(input_image.shape))
import matplotlib.pyplot as plt
plt.imshow(input_image)
plt.show()
Explanation: Handling Tensors
End of explanation
my_image = tf.placeholder("uint8",[None,None,3])
slice = tf.slice(my_image,[10,0,0],[16,-1,-1])
with tf.Session() as session:
result = session.run(slice,feed_dict={my_image: input_image})
print(result.shape)
plt.imshow(result)
plt.show()
Explanation: Slicing
End of explanation
x = tf.Variable(input_image,name='x')
model = tf.global_variables_initializer()
with tf.Session() as session:
x = tf.transpose(x, perm=[1,0,2])
session.run(model)
result=session.run(x)
plt.imshow(result)
plt.show()
Explanation: Transpose
End of explanation
x = tf.placeholder(tf.float32)
y = tf.log(x)
var_grad = tf.gradients(y, x)
with tf.Session() as session:
var_grad_val = session.run(var_grad, feed_dict={x:2})
print(var_grad_val)
Explanation: Computing the Gradient
Gradients are free!
End of explanation
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Warming up: Logistic Regression
End of explanation
from kaggle_data import load_data, preprocess_data, preprocess_labels
X_train, labels = load_data('data/kaggle_ottogroup/train.csv', train=True)
X_train, scaler = preprocess_data(X_train)
Y_train, encoder = preprocess_labels(labels)
X_test, ids = load_data('data/kaggle_ottogroup/test.csv', train=False)
X_test, _ = preprocess_data(X_test, scaler)
nb_classes = Y_train.shape[1]
print(nb_classes, 'classes')
dims = X_train.shape[1]
print(dims, 'dims')
np.unique(labels)
Explanation: <a name="kaggle"></a>
Kaggle Challenge Data
The Otto Group is one of the world’s biggest e-commerce companies, A consistent analysis of the performance of products is crucial. However, due to diverse global infrastructure, many identical products get classified differently.
For this competition, we have provided a dataset with 93 features for more than 200,000 products. The objective is to build a predictive model which is able to distinguish between our main product categories.
Each row corresponds to a single product. There are a total of 93 numerical features, which represent counts of different events. All features have been obfuscated and will not be defined any further.
https://www.kaggle.com/c/otto-group-product-classification-challenge/data
For this section we will use the Kaggle Otto Group Challenge Data. You will find these data in
data/kaggle_ottogroup/ folder.
Logistic Regression
This algorithm has nothing to do with the canonical linear regression, but it is an algorithm that allows us to solve problems of classification(supervised learning).
In fact, to estimate the dependent variable, now we make use of the so-called logistic function or sigmoid.
It is precisely because of this feature we call this algorithm logistic regression.
Data Preparation
End of explanation
# Parameters
learning_rate = 0.01
training_epochs = 25
display_step = 1
# tf Graph Input
x = tf.placeholder("float", [None, dims])
y = tf.placeholder("float", [None, nb_classes])
Explanation: Hands On - Logistic Regression
End of explanation
# Set model weights
W = tf.Variable(tf.zeros([dims, nb_classes]))
b = tf.Variable(tf.zeros([nb_classes]))
# Construct model
activation = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
# Minimize error using cross entropy
cross_entropy = y*tf.log(activation)
cost = tf.reduce_mean(-tf.reduce_sum(cross_entropy,reduction_indices=1))
# Set the Optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initializing the variables
init = tf.global_variables_initializer()
Explanation: The Model
End of explanation
def training_phase(X, Y):
cost_epochs = []
# Training cycle
for epoch in range(training_epochs):
_, c = sess.run([optimizer, cost], feed_dict={x: X, y: Y})
cost_epochs.append(c)
return cost_epochs
Explanation: Learning
End of explanation
def testing_phase(X, Y):
# Test model
correct_prediction = tf.equal(tf.argmax(activation, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Model accuracy:", accuracy.eval({x: X, y: Y}))
Explanation: Prediction
End of explanation
# Launch the graph
with tf.Session() as sess:
# Plug TensorBoard Visualisation
merged = tf.summary.merge_all()
    writer = tf.summary.FileWriter("/tmp/logistic_logs", sess.graph)
sess.run(init)
cost_epochs = training_phase(X_train, Y_train)
print("Training phase finished")
#plotting
plt.plot(range(len(cost_epochs)), cost_epochs, 'o', label='Logistic Regression Training phase')
plt.ylabel('cost')
plt.xlabel('epoch')
plt.legend()
plt.show()
prediction = tf.argmax(activation, 1)
print(prediction.eval({x: X_test}))
Explanation: TF Session
End of explanation |
2,634 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Regression
Agenda
Introducing the bikeshare dataset
Reading in the data
Visualizing the data
Linear regression basics
Form of linear regression
Building a linear regression model
Using the model for prediction
Does the scale of the features matter?
Working with multiple features
Visualizing the data (part 2)
Adding more features to the model
Choosing between models
Feature selection
Evaluation metrics for regression problems
Comparing models with train/test split and RMSE
Comparing testing RMSE with null RMSE
Creating features
Handling categorical features
Feature engineering
Comparing linear regression with other models
Reading in the data
We'll be working with a dataset from Capital Bikeshare that was used in a Kaggle competition (data dictionary).
Step1: Questions
Step2: Interpreting the intercept ($\beta_0$)
Step3: Does the scale of the features matter?
Let's say that temperature was measured in Fahrenheit, rather than Celsius. How would that affect the model?
Step4: Conclusion
Step5: Visualizing the data (part 2)
Step6: Are you seeing anything that you did not expect?
Step7: What does this tell us?
There are more rentals in the winter than the spring, but only because the system is experiencing overall growth and the winter months happen to come after the spring months.
Step8: What relationships do you notice?
Adding more features to the model
Step9: Interpreting the coefficients
Step10: Comparing these metrics
Step11: Comparing models with train/test split and RMSE
Step12: Comparing testing RMSE with null RMSE
Null RMSE is the RMSE that could be achieved by always predicting the mean response value. It is a benchmark against which you may want to measure your regression model.
Step13: Handling categorical features
scikit-learn expects all features to be numeric. So how do we include a categorical feature in our model?
Ordered categories
Step14: In general, if you have a categorical feature with k possible values, you create k-1 dummy variables.
If that's confusing, think about why we only need one dummy variable for holiday, not two dummy variables (holiday_yes and holiday_no).
Step15: Feature engineering
See if you can create the following features | Python Code:
# read the data and set the datetime as the index
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (8, 6)
plt.rcParams['font.size'] = 14
import pandas as pd
urls = ['../data/KDCA-201601.csv', '../data/KDCA-201602.csv', '../data/KDCA-201603.csv']
frames = [pd.read_csv(url) for url in urls]
weather = pd.concat(frames)
cols = 'WBAN Date Time StationType SkyCondition Visibility WeatherType DryBulbFarenheit DryBulbCelsius WetBulbFarenheit WetBulbCelsius DewPointFarenheit DewPointCelsius RelativeHumidity WindSpeed WindDirection ValueForWindCharacter StationPressure PressureTendency PressureChange SeaLevelPressure RecordType HourlyPrecip Altimeter'
cols = cols.split()
weather = weather[cols]
weather.rename(columns={'DryBulbFarenheit':'temp',
'RelativeHumidity': 'humidity'}, inplace=True)
# weather['humidity'] = pd.to_numeric(weather.humidity, errors='coerce')
weather['datetime'] = pd.to_datetime(weather.Date.astype(str) + weather.Time.apply('{0:0>4}'.format))
weather['datetime_hour'] = weather.datetime.dt.floor(freq='h')
weather['month'] = weather.datetime.dt.month
bikes = pd.read_csv('../data/2016-Q1-Trips-History-Data.csv')
bikes['start'] = pd.to_datetime(bikes['Start date'], infer_datetime_format=True)
bikes['end'] = pd.to_datetime(bikes['End date'], infer_datetime_format=True)
bikes['datetime_hour'] = bikes.start.dt.floor(freq='h')
weather[['datetime', 'temp']].hist(bins=30)
print(weather.columns)
weather.head()
bikes.merge(weather[['temp', 'datetime_hour', 'datetime']], on='datetime_hour')
hours = bikes.groupby('datetime_hour').agg('count')
hours['datetime_hour'] = hours.index
hours.head()
hours['total'] = hours.start
hours = hours[['total', 'datetime_hour']]
hours.total.plot()
hours_weather = hours.merge(weather, on='datetime_hour')
hours_weather.plot(kind='scatter', x='temp', y='total')
sns.lmplot(x='temp', y='total', data=hours_weather, aspect=1.5, scatter_kws={'alpha':0.8})
weekday = hours_weather[(hours_weather.datetime.dt.hour==11) & (hours_weather.datetime.dt.dayofweek<5) ]
weekday.plot(kind='scatter', x='temp', y='total')
# import seaborn as sns
sns.lmplot(x='temp', y='total', data=weekday, aspect=1.5, scatter_kws={'alpha':0.8})
Explanation: Linear Regression
Agenda
Introducing the bikeshare dataset
Reading in the data
Visualizing the data
Linear regression basics
Form of linear regression
Building a linear regression model
Using the model for prediction
Does the scale of the features matter?
Working with multiple features
Visualizing the data (part 2)
Adding more features to the model
Choosing between models
Feature selection
Evaluation metrics for regression problems
Comparing models with train/test split and RMSE
Comparing testing RMSE with null RMSE
Creating features
Handling categorical features
Feature engineering
Comparing linear regression with other models
Reading in the data
We'll be working with a dataset from Capital Bikeshare that was used in a Kaggle competition (data dictionary).
End of explanation
# create X and y
feature_cols = ['temp']
X = hours_weather[feature_cols]
y = hours_weather.total
# import, instantiate, fit
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
linreg.fit(X, y)
# print the coefficients
print(linreg.intercept_)
print(linreg.coef_)
Explanation: Questions:
What does each observation represent?
What is the response variable (as defined by Kaggle)?
How many features are there?
Form of linear regression
$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_nx_n$
$y$ is the response
$\beta_0$ is the intercept
$\beta_1$ is the coefficient for $x_1$ (the first feature)
$\beta_n$ is the coefficient for $x_n$ (the nth feature)
The $\beta$ values are called the model coefficients:
These values are estimated (or "learned") during the model fitting process using the least squares criterion.
Specifically, we find the line (mathematically) which minimizes the sum of squared residuals (or "sum of squared errors"); a small numeric sketch of this criterion is given below.
And once we've learned these coefficients, we can use the model to predict the response.
In the diagram above:
The black dots are the observed values of x and y.
The blue line is our least squares line.
The red lines are the residuals, which are the vertical distances between the observed values and the least squares line.
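To make the least squares criterion concrete, here is a small NumPy sketch that recovers coefficients with the normal equation; x and y below are illustrative arrays, not the bikeshare data.
```python
import numpy as np

# Illustrative data: y is roughly 2 + 3*x plus noise.
rng = np.random.RandomState(0)
x = rng.rand(100)
y = 2 + 3 * x + rng.randn(100) * 0.1

# Design matrix with a column of ones for the intercept term.
X = np.column_stack([np.ones_like(x), x])

# Normal equation: beta = (X^T X)^-1 X^T y minimizes the sum of squared residuals.
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)  # close to [2, 3]
```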
Building a linear regression model
End of explanation
# manually calculate the prediction
linreg.intercept_ + linreg.coef_ * 77
# use the predict method
linreg.predict(77)
Explanation: Interpreting the intercept ($\beta_0$):
It is the value of $y$ when $x$=0.
Thus, it is the estimated number of rentals when the temperature is 0 degrees Fahrenheit (the temp feature here is in degrees F).
Note: It does not always make sense to interpret the intercept. (Why?)
Interpreting the "temp" coefficient ($\beta_1$):
It is the change in $y$ divided by change in $x$, or the "slope".
Thus, a temperature increase of 1 degree F is associated with a rental increase of 9.17 bikes.
This is not a statement of causation.
$\beta_1$ would be negative if an increase in temperature was associated with a decrease in rentals.
Using the model for prediction
How many bike rentals would we predict if the temperature was 77 degrees F?
End of explanation
# create a new column for the temperature in Celsius
hours_weather['temp_C'] = (hours_weather.temp - 32) * 5/9
hours_weather.head()
# Seaborn scatter plot with regression line
sns.lmplot(x='temp_C', y='total', data=hours_weather, aspect=1.5, scatter_kws={'alpha':0.2})
sns.lmplot(x='temp', y='total', data=hours_weather, aspect=1.5, scatter_kws={'alpha':0.2})
# create X and y
feature_cols = ['temp_C']
X = hours_weather[feature_cols]
y = hours_weather.total
# instantiate and fit
linreg = LinearRegression()
linreg.fit(X, y)
# print the coefficients
print(linreg.intercept_, linreg.coef_)
# convert 77 degrees Fahrenheit to Celsius
(77 - 32)* 5/9
# predict rentals for 25 degrees Celsius
linreg.predict([[25], [30]])
Explanation: Does the scale of the features matter?
Let's say that temperature was measured in Fahrenheit, rather than Celsius. How would that affect the model?
End of explanation
# remove the temp_C column
# bikes.drop('temp_C', axis=1, inplace=True)
Explanation: Conclusion: The scale of the features is irrelevant for linear regression models. When changing the scale, we simply change our interpretation of the coefficients.
End of explanation
# explore more features
feature_cols = ['temp', 'month', 'humidity']
# multiple scatter plots in Seaborn
# print(hours_weather.humidity != 'M')
hours_weather.humidity = hours_weather.humidity.apply(lambda x: -1 if isinstance(x, str) else x)
# hours_weather.loc[hours_weather.humidity.dtype != int].humidity = 100
sns.pairplot(hours_weather, x_vars=feature_cols, y_vars='total', kind='reg')
# multiple scatter plots in Pandas
fig, axs = plt.subplots(1, len(feature_cols), sharey=True)
for index, feature in enumerate(feature_cols):
hours_weather.plot(kind='scatter', x=feature, y='total', ax=axs[index], figsize=(16, 3))
Explanation: Visualizing the data (part 2)
End of explanation
# cross-tabulation of season and month
pd.crosstab(hours_weather.month, hours_weather.datetime.dt.dayofweek)
# box plot of rentals, grouped by season
hours_weather.boxplot(column='total', by='month')
# line plot of rentals
hours_weather.total.plot()
Explanation: Are you seeing anything that you did not expect?
End of explanation
# correlation matrix (ranges from 1 to -1)
hours_weather.corr()
# visualize correlation matrix in Seaborn using a heatmap
sns.heatmap(hours_weather.corr())
Explanation: What does this tell us?
There are more rentals in the winter than the spring, but only because the system is experiencing overall growth and the winter months happen to come after the spring months.
End of explanation
# create a list of features
feature_cols = ['temp', 'month', 'humidity']
# create X and y
X = hours_weather[feature_cols]
y = hours_weather.total
# instantiate and fit
linreg = LinearRegression()
linreg.fit(X, y)
# print the coefficients
print(linreg.intercept_, linreg.coef_)
# pair the feature names with the coefficients
list(zip(feature_cols, linreg.coef_))
Explanation: What relationships do you notice?
Adding more features to the model
End of explanation
# example true and predicted response values
true = [10, 7, 5, 5]
pred = [8, 6, 5, 10]
# calculate these metrics by hand!
from sklearn import metrics
import numpy as np
print('MAE:', metrics.mean_absolute_error(true, pred))
print('MSE:', metrics.mean_squared_error(true, pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(true, pred)))
Explanation: Interpreting the coefficients:
Holding all other features fixed, a 1 unit increase in temperature is associated with a rental increase of 9.3 bikes.
Holding all other features fixed, a 1 unit increase in month is associated with a rental increase of 30.6 bikes.
Holding all other features fixed, a 1 unit increase in humidity is associated with a rental decrease of .60 bikes.
Does anything look incorrect?
Feature selection
How do we choose which features to include in the model? We're going to use train/test split (and eventually cross-validation).
Why not use p-values or R-squared for feature selection?
Linear models rely upon a lot of assumptions (such as the features being independent), and if those assumptions are violated, p-values and R-squared are less reliable. Train/test split relies on fewer assumptions.
Features that are unrelated to the response can still have significant p-values.
Adding features to your model that are unrelated to the response will always increase the R-squared value, and adjusted R-squared does not sufficiently account for this.
p-values and R-squared are proxies for our goal of generalization, whereas train/test split and cross-validation attempt to directly estimate how well the model will generalize to out-of-sample data.
More generally:
There are different methodologies that can be used for solving any given data science problem, and this course follows a machine learning methodology.
This course focuses on general purpose approaches that can be applied to any model, rather than model-specific approaches.
Evaluation metrics for regression problems
Evaluation metrics for classification problems, such as accuracy, are not useful for regression problems. We need evaluation metrics designed for comparing continuous values.
Here are three common evaluation metrics for regression problems:
Mean Absolute Error (MAE) is the mean of the absolute value of the errors:
$$\frac 1n\sum_{i=1}^n|y_i-\hat{y}_i|$$
Mean Squared Error (MSE) is the mean of the squared errors:
$$\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2$$
Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors:
$$\sqrt{\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$
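As a sanity check, this is what "by hand" looks like for the toy true/pred example used above; the pure-NumPy results match the scikit-learn ones.
```python
import numpy as np

true = np.array([10, 7, 5, 5])
pred = np.array([8, 6, 5, 10])

errors = true - pred
print('MAE: ', np.mean(np.abs(errors)))        # 2.0
print('MSE: ', np.mean(errors ** 2))           # 7.5
print('RMSE:', np.sqrt(np.mean(errors ** 2)))  # about 2.74
```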
End of explanation
# same true values as above
true = [10, 7, 5, 5]
# new set of predicted values
pred = [10, 7, 5, 13]
# MAE is the same as before
print('MAE:', metrics.mean_absolute_error(true, pred))
# MSE and RMSE are larger than before
print('MSE:', metrics.mean_squared_error(true, pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(true, pred)))
rmse = np.sqrt(metrics.mean_squared_error(true, pred))
rmse/pred
Explanation: Comparing these metrics:
MAE is the easiest to understand, because it's the average error.
MSE is more popular than MAE, because MSE "punishes" larger errors, which tends to be useful in the real world.
RMSE is even more popular than MSE, because RMSE is interpretable in the "y" units.
All of these are loss functions, because we want to minimize them.
Here's an additional example, to demonstrate how MSE/RMSE punish larger errors:
End of explanation
from sklearn.model_selection import train_test_split  # the old sklearn.cross_validation module has been removed in current scikit-learn releases
import sklearn.metrics as metrics
import numpy as np
# define a function that accepts a list of features and returns testing RMSE
def train_test_rmse(feature_cols, data):
X = data[feature_cols]
y = data.total
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=123)
linreg = LinearRegression()
linreg.fit(X_train, y_train)
y_pred = linreg.predict(X_test)
return np.sqrt(metrics.mean_squared_error(y_test, y_pred))
# compare different sets of features
print(train_test_rmse(['temp', 'month', 'humidity'], hours_weather))
print(train_test_rmse(['temp', 'month'], hours_weather))
print(train_test_rmse(['temp', 'humidity'], hours_weather))
print(train_test_rmse(['temp'], hours_weather))
print(train_test_rmse(['temp'], weekday))
Explanation: Comparing models with train/test split and RMSE
End of explanation
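Aside: a hedged sketch (not part of the original notebook) of the same comparison using 10-fold cross-validation instead of a single split; it assumes the hours_weather DataFrame, NumPy, and the LinearRegression import from earlier cells.
from sklearn.model_selection import cross_val_score
# cross-validated RMSE for one candidate feature set
def cv_rmse(feature_cols, data):
    X = data[feature_cols]
    y = data.total
    mse_scores = -cross_val_score(LinearRegression(), X, y, cv=10, scoring='neg_mean_squared_error')
    return np.sqrt(mse_scores).mean()
print(cv_rmse(['temp', 'month', 'humidity'], hours_weather))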
# split X and y into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(weekday[['temp']], weekday.total, random_state=123)
# create a NumPy array with the same shape as y_test
y_null = np.zeros_like(y_test, dtype=float)
# fill the array with the mean value of y_test
y_null.fill(y_test.mean())
y_null
# compute null RMSE
np.sqrt(metrics.mean_squared_error(y_test, y_null))
Explanation: Comparing testing RMSE with null RMSE
Null RMSE is the RMSE that could be achieved by always predicting the mean response value. It is a benchmark against which you may want to measure your regression model.
End of explanation
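Aside: a small sketch (not part of the original notebook) of the same mean-only benchmark using scikit-learn's DummyRegressor; it assumes X_train, y_train, y_test and the metrics import from the cells above.
from sklearn.dummy import DummyRegressor
# a model that always predicts the training mean
null_model = DummyRegressor(strategy='mean')
null_model.fit(X_train, y_train)
np.sqrt(metrics.mean_squared_error(y_test, null_model.predict(X_test)))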
# create dummy variables
season_dummies = pd.get_dummies(hours_weather.month, prefix='month')
# print 5 random rows
season_dummies.sample(n=5, random_state=1)
Explanation: Handling categorical features
scikit-learn expects all features to be numeric. So how do we include a categorical feature in our model?
Ordered categories: transform them to sensible numeric values (example: small=1, medium=2, large=3)
Unordered categories: use dummy encoding (0/1)
What are the categorical features in our dataset?
Ordered categories: weather (already encoded with sensible numeric values)
Unordered categories: season (needs dummy encoding), holiday (already dummy encoded), workingday (already dummy encoded)
For season, we can't simply leave the encoding as 1 = spring, 2 = summer, 3 = fall, and 4 = winter, because that would imply an ordered relationship. Instead, we create multiple dummy variables:
End of explanation
# concatenate the original DataFrame and the dummy DataFrame (axis=0 means rows, axis=1 means columns)
hw_dum = pd.concat([hours_weather, season_dummies], axis=1)
# print 5 random rows
hw_dum.sample(n=5, random_state=1)
# include dummy variables for season in the model
feature_cols = ['temp','month_1', 'month_2', 'month_3', 'humidity']
X = hw_dum[feature_cols]
y = hw_dum.total
linreg = LinearRegression()
linreg.fit(X, y)
list(zip(feature_cols, linreg.coef_))
# compare original season variable with dummy variables
print(train_test_rmse(['temp', 'month', 'humidity'], hw_dum))
print(train_test_rmse(['temp', 'month_2', 'month', 'humidity'], hw_dum))
print(train_test_rmse(['temp', 'month_2', 'month_1', 'humidity'], hw_dum))
Explanation: In general, if you have a categorical feature with k possible values, you create k-1 dummy variables.
If that's confusing, think about why we only need one dummy variable for holiday, not two dummy variables (holiday_yes and holiday_no).
End of explanation
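Aside: a minimal sketch (not part of the original notebook) showing that pandas can drop the redundant column for you, which leaves the k-1 dummy variables discussed above.
# drop_first=True keeps k-1 dummy columns per categorical feature
pd.get_dummies(hours_weather.month, prefix='month', drop_first=True).sample(n=5, random_state=1)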
# hour as a numeric feature
hw_dum['hour'] = hw_dum.datetime.dt.hour
# hour as a categorical feature
hour_dummies = pd.get_dummies(hw_dum.hour, prefix='hour')
# hour_dummies.drop(hour_dummies.columns[0], axis=1, inplace=True)
hw_dum = pd.concat([hw_dum, hour_dummies], axis=1)
# daytime as a categorical feature
hw_dum['daytime'] = ((hw_dum.hour > 6) & (hw_dum.hour < 21)).astype(int)
print(train_test_rmse(['hour'], hw_dum),
train_test_rmse(hw_dum.columns[hw_dum.columns.str.startswith('hour_')], hw_dum)
,train_test_rmse(['daytime'], hw_dum))
Explanation: Feature engineering
See if you can create the following features:
hour: as a single numeric feature (0 through 23)
hour: as a categorical feature (use 23 dummy variables)
daytime: as a single categorical feature (daytime=1 from 7am to 8pm, and daytime=0 otherwise)
Then, try using each of the three features (on its own) with train_test_rmse to see which one performs the best!
End of explanation |
2,635 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Astronomical Application of Machine Learning
Step1: Problem 1) Examine the Training Data
For this problem the training set, i.e. sources with known labels, includes stars and galaxies that have been confirmed with spectroscopic observations. The machine learning model is needed because there are $\gg 10^8$ sources with photometric observations in SDSS, and only $4 \times 10^6$ sources with spectroscopic observations. The model will allow us to translate our knowledge from the spectroscopic observations to the entire data set. The features include each $r$-band magnitude measurement made by SDSS (don't worry if you don't know what this means...). This yields 8 features to train the models (significantly fewer than the 454 properties measured for each source in SDSS).
If you are curious (and it is fine if you are not) this training set was constructed by running the following query on the SDSS database
Step2: Problem 1b
Based on your plots of the data, which feature do you think will be the most important for separating stars and galaxies? Why?
write your answer here - do not change it after later completing the problem
The final data preparation step it to create an independent test set to evalute the generalization error of the final tuned model. Independent test sets are generated by witholding a fraction of the training set. No hard and fast rules apply for the fraction to be withheld, though typical choices vary between $\sim{0.2}-0.5$.
sklearn.model_selection has a useful helper function train_test_split.
Problem 1c Split the 20k spectroscopic sources 70-30 into training and test sets. Save the results in arrays called
Step3: We will now ignore everything in the test set until we have fully optimized the machine learning model.
Problem 2) Model Building
After curating the data, you must select a specific machine learning algorithm. With experience, it is possible to develop intuition for the best ML algorithm given a specific problem.
Short of that? Try two (or three, or four, or five) different models and choose whichever works the best.
Problem 2a
Train a $k$-nearest neighbors model on the star-galaxy training set. Select $k$ = 25 for this model.
Hint - the KNeighborsClassifier object in the sklearn.neighbors module may be useful for this task.
Step4: Problem 2b
Train a Random Forest (RF) model (Breiman 2001) on the training set. Include 50 trees in the forest using the n_estimators parameter. Again, set random_state = rs.
Hint - use the RandomForestClassifier object from the sklearn.ensemble module. Also - be sure to set n_jobs = -1 in every call of RandomForestClassifier.
Step5: A nice property of RF, relative to $k$NN, is that RF naturally provides an estimate of the most important features in a model.
RF feature importance is measured by randomly shuffling the values of a particular feature, and measuring the decrease in the model's overall accuracy. The relative feature importances can be accessed using the .feature_importances_ attribute associated with the RandomForestClassifer() object. The higher the value, the more important feature.
Problem 2c
Calculate the relative importance of each feature.
Which feature is most important? Does this match your answer from 1c?
Step6: write your answer here
Problem 3) Model Evaluation
To evaluate the performance of the model we establish a baseline (or figure of merit) that we would like to exceed. For our current application we want to maximize the accuracy of the model.
If the model does not improve upon the baseline (or reach the desired figure of merit) then one must iterate on previous steps (feature engineering, algorithm selection, etc) to accomplish the desired goal.
The SDSS photometric pipeline uses a simple parametric model to classify sources as either stars or galaxies. If we are going to the trouble of building a complex ML model, then it stands to reason that its performance should exceed that of the simple model. Thus, we adopt the SDSS photometric classifier as our baseline.
The SDSS photometric classifier uses a single hard cut to separate stars and galaxies in imaging data
Step7: Problem 3b
Use 10-fold cross validation to estimate the FoM for the $k$NN model. Take the mean value across all folds as the FoM estimate.
Hint - the cross_val_score function from the sklearn.model_selection module performs the necessary calculations.
Step8: Problem 3c
Use 10-fold cross validation to estimate the FoM for the random forest model.
Step9: Problem 3d
Do the machine-learning models outperform the SDSS photometric classifier?
write your answer here
Problem 4) Model Optimization
While the "off-the-shelf" model provides an improvement over the SDSS photometric classifier, we can further refine and improve the performance of the machine learning model by adjusting the model tuning parameters. A process known as model optimization.
All machine-learning models have tuning parameters. In brief, these parameters capture the smoothness of the model in the multidimentional-feature space. Whether the model is smooth or coarse is application dependent -- be weary of over-fitting or under-fitting the data. Generally speaking, RF (and most tree-based methods) have 3 flavors of tuning parameter
Step10: write your answer here
Problem 4b
Determine the 10-fold cross validation accuracy for RF models with $N_\mathrm{tree}$ = 1, 10, 30, 100, and 300.
How do you expect changing the number of trees to affect the results?
Step11: write your answer here
Now you are ready for the moment of truth!
Problem 5) Model Predictions
Problem 5a
Calculate the FoM for the SDSS photometric model on the test set.
Step12: Problem 5b
Using the optimal number of trees from 4b calculate the FoM for the random forest model.
Hint - remember that the model should be trained on the training set, but the predictions are for the test set.
Step13: Problem 5c
Calculate the confusion matrix for the test set. Is there symmetry to the misclassifications?
Hint - the confusion_matrix function in sklearn.metrics will help.
Step14: write your answer here
Problem 5d
Calculate (and plot the region of interest) the ROC curve assumming that stars are the positive class.
Hint 1 - you will need to calculate probabilistic classifications for the test set using the predict_proba() method.
Hint 2 - the roc_curve function in the sklearn.metrics module will be useful.
Step15: Problem 5e
Suppose that (like me) you really care about supernovae. In this case you want a model that correctly classifies 99% of all stars, so that stellar flares do not fool you into thinking you have found a new supernova.
What classification threshold should be adopted for this model?
What fraction of galaxies does this model misclassify?
Step16: Problem 6) Classify New Data
Run the cell below to load in some new data (which in this case happens to have known labels, but in practice this will almost never be the case...)
Step17: Problem 6a
Create a feature and label array for the new data.
Hint - copy the code you developed above in Problem 2.
Step18: Problem 6b
Calculate the accuracy of the model predictions on the new data.
Step19: Problem 6c
Can you explain why the accuracy for the new data is significantly lower than what you calculated previously?
If you can build and train a better model (using the trianing data) for classifying the new data - I will be extremely impressed.
write your answer here
Challenge Problem) Full RF Optimization
Now we will optimize the model over all tuning parameters. How does one actually determine the optimal set of tuning parameters? Brute force.
We will optimize the model via a grid search that performs CV at each point in the 3D grid. The final model will adopt the point with the highest accuracy.
It is important to remember two general rules of thumb | Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: An Astronomical Application of Machine Learning:
Separating Stars and Galaxies from SDSS
Version 0.3
By AA Miller 2017 Jan 22
AA Miller 2022 Mar 06 (v0.03)
The problems in the following notebook develop an end-to-end machine learning model using actual astronomical data to separate stars and galaxies. There are 5 steps in this machine learning workflow:
Data Preparation
Model Building
Model Evaluation
Model Optimization
Model Predictions
The data come from the Sloan Digital Sky Survey (SDSS), an imaging survey that has several similarities to LSST (though the telescope was significantly smaller and the survey did not cover as large an area).
Science background: Many (nearly all?) of the science applications for LSST data will rely on the accurate separation of stars and galaxies in the LSST imaging data. As an example, imagine measuring the structure of the Milky Way without knowing which sources are galaxies and which are stars.
During this exercise, we will utilize supervised machine learning methods to separate extended sources (galaxies) and point sources (stars) in imaging data. These methods are highly flexible, and as a result can classify sources at higher fidelity than methods that simply make cuts in a low-dimensional space.
End of explanation
sdss_df = pd.read_hdf("sdss_training_set.h5")
sns.pairplot(sdss_df, hue = 'class', diag_kind = 'hist')
Explanation: Problem 1) Examine the Training Data
For this problem the training set, i.e. sources with known labels, includes stars and galaxies that have been confirmed with spectroscopic observations. The machine learning model is needed because there are $\gg 10^8$ sources with photometric observations in SDSS, and only $4 \times 10^6$ sources with spectroscopic observations. The model will allow us to translate our knowledge from the spectroscopic observations to the entire data set. The features include each $r$-band magnitude measurement made by SDSS (don't worry if you don't know what this means...). This yields 8 features to train the models (significantly fewer than the 454 properties measured for each source in SDSS).
If you are curious (and it is fine if you are not) this training set was constructed by running the following query on the SDSS database:
SELECT TOP 20000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'
ORDER BY p.objid ASC
First download the training set and the blind test set for this problem.
Problem 1a
Visualize the training set data. The data have 8 features ['psfMag_r', 'fiberMag_r', 'fiber2Mag_r', 'petroMag_r', 'deVMag_r', 'expMag_r', 'modelMag_r', 'cModelMag_r'], and a 9th column ['class'] corresponding to the labels ('STAR' or 'GALAXY' in this case).
Hint - just execute the cell below.
End of explanation
from sklearn.model_selection import train_test_split
rs = 1851
feats = list(sdss_df.columns)
feats.remove('class')
X = np.array(sdss_df[feats])
y = np.array(sdss_df['class'])
train_X, test_X, train_y, test_y = train_test_split( X, y, test_size = 0.3, random_state = rs)
Explanation: Problem 1b
Based on your plots of the data, which feature do you think will be the most important for separating stars and galaxies? Why?
write your answer here - do not change it after later completing the problem
The final data preparation step is to create an independent test set to evaluate the generalization error of the final tuned model. Independent test sets are generated by withholding a fraction of the training set. No hard and fast rules apply for the fraction to be withheld, though typical choices vary between $\sim{0.2}-0.5$.
sklearn.model_selection has a useful helper function train_test_split.
Problem 1c Split the 20k spectroscopic sources 70-30 into training and test sets. Save the results in arrays called: train_X, train_y, test_X, test_y, respectively. Use rs for the random_state in train_test_split.
Hint - recall that sklearn utilizes X, a 2D np.array(), and y as the features and labels arrays, respectively.
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_neighbors=25)
knn_clf.fit(train_X, train_y)
Explanation: We will now ignore everything in the test set until we have fully optimized the machine learning model.
Problem 2) Model Building
After curating the data, you must select a specific machine learning algorithm. With experience, it is possible to develop intuition for the best ML algorithm given a specific problem.
Short of that? Try two (or three, or four, or five) different models and choose whichever works the best.
Problem 2a
Train a $k$-nearest neighbors model on the star-galaxy training set. Select $k$ = 25 for this model.
Hint - the KNeighborsClassifier object in the sklearn.neighbors module may be useful for this task.
End of explanation
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(n_estimators=50, random_state=rs, n_jobs=-1)
rf_clf.fit(train_X, train_y)
Explanation: Problem 2b
Train a Random Forest (RF) model (Breiman 2001) on the training set. Include 50 trees in the forest using the n_estimators parameter. Again, set random_state = rs.
Hint - use the RandomForestClassifier object from the sklearn.ensemble module. Also - be sure to set n_jobs = -1 in every call of RandomForestClassifier.
End of explanation
feat_str = ',\n'.join(['{}'.format(feat) for feat in np.array(feats)[np.argsort(rf_clf.feature_importances_)[::-1]]])
print('From most to least important: \n{}'.format(feat_str))
Explanation: A nice property of RF, relative to $k$NN, is that RF naturally provides an estimate of the most important features in a model.
RF feature importance is measured by randomly shuffling the values of a particular feature, and measuring the decrease in the model's overall accuracy. The relative feature importances can be accessed using the .feature_importances_ attribute associated with the RandomForestClassifier() object. The higher the value, the more important the feature.
Problem 2c
Calculate the relative importance of each feature.
Which feature is most important? Does this match your answer from 1c?
End of explanation
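Aside: the shuffling procedure described above is also exposed directly as permutation importance in recent scikit-learn versions (an assumption about the installed version); a short sketch, not part of the original notebook, using the rf_clf, train_X, train_y, feats, and rs defined earlier.
from sklearn.inspection import permutation_importance
# shuffle one feature at a time and measure the drop in accuracy
perm = permutation_importance(rf_clf, train_X, train_y, n_repeats=5, random_state=rs)
for feat, imp in zip(feats, perm.importances_mean):
    print('{} {:.4f}'.format(feat, imp))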
from sklearn.metrics import accuracy_score
phot_y = np.empty_like(train_y)
phot_gal = np.logical_not(train_X[:,0] - train_X[:,-1] < 0.145)
phot_y[phot_gal] = 'GALAXY'
phot_y[~phot_gal] = 'STAR'
print("The baseline FoM = {:.4f}".format(accuracy_score(train_y, phot_y)))
Explanation: write your answer here
Problem 3) Model Evaluation
To evaluate the performance of the model we establish a baseline (or figure of merit) that we would like to exceed. For our current application we want to maximize the accuracy of the model.
If the model does not improve upon the baseline (or reach the desired figure of merit) then one must iterate on previous steps (feature engineering, algorithm selection, etc) to accomplish the desired goal.
The SDSS photometric pipeline uses a simple parametric model to classify sources as either stars or galaxies. If we are going to the trouble of building a complex ML model, then it stands to reason that its performance should exceed that of the simple model. Thus, we adopt the SDSS photometric classifier as our baseline.
The SDSS photometric classifier uses a single hard cut to separate stars and galaxies in imaging data:
$$\mathtt{psfMag_r} - \mathtt{cModelMag_r} > 0.145.$$
Sources that satisfy this criterion are considered galaxies.
Problem 3a
Determine the baseline figure of merit by measuring the accuracy of the SDSS photometric classifier on the training set.
Hint - the accuracy_score function in the sklearn.metrics module may be useful.
End of explanation
from sklearn.model_selection import cross_val_score
knn_cv = cross_val_score(knn_clf, train_X, train_y, cv=10)
print('The kNN model FoM = {:.4f} +/- {:.4f}'.format(np.mean(knn_cv), np.std(knn_cv, ddof=1)))
Explanation: Problem 3b
Use 10-fold cross validation to estimate the FoM for the $k$NN model. Take the mean value across all folds as the FoM estimate.
Hint - the cross_val_score function from the sklearn.model_selection module performs the necessary calculations.
End of explanation
rf_cv = cross_val_score(rf_clf, train_X, train_y, cv=10)
print('The RF model FoM = {:.4f} +/- {:.4f}'.format(np.mean(rf_cv), np.std(rf_cv, ddof=1)))
Explanation: Problem 3c
Use 10-fold cross validation to estimate the FoM for the random forest model.
End of explanation
for k in [1,10,100]:
knn_cv = cross_val_score(KNeighborsClassifier(n_neighbors=k), train_X, train_y, cv=10)
print('With k = {:d}, the kNN FoM = {:.4f} +/- {:.4f}'.format(k, np.mean(knn_cv), np.std(knn_cv, ddof=1)))
Explanation: Problem 3d
Do the machine-learning models outperform the SDSS photometric classifier?
write your answer here
Problem 4) Model Optimization
While the "off-the-shelf" model provides an improvement over the SDSS photometric classifier, we can further refine and improve the performance of the machine learning model by adjusting the model tuning parameters. A process known as model optimization.
All machine-learning models have tuning parameters. In brief, these parameters capture the smoothness of the model in the multidimensional feature space. Whether the model is smooth or coarse is application dependent -- be wary of over-fitting or under-fitting the data. Generally speaking, RF (and most tree-based methods) have 3 flavors of tuning parameter:
$N_\mathrm{tree}$ - the number of trees in the forest n_estimators (default: 10) in sklearn
$m_\mathrm{try}$ - the number of (random) features to explore as splitting criteria at each node max_features (default: sqrt(n_features)) in sklearn
Pruning criteria - defined stopping criteria for ending continued growth of the tree, there are many choices for this in sklearn (My preference is min_samples_leaf (default: 1) which sets the minimum number of sources allowed in a terminal node, or leaf, of the tree)
Just as we previously evaluated the model using CV, we must optimize the tuning parameters via CV. Until we "finalize" the model by fixing all the input parameters, we cannot evaluate the accuracy of the model with the test set as that would be "snooping."
Before globally optimizing the model, let's develop some intuition for how the tuning parameters affect the final model predictions.
Problem 4a
Determine the 10-fold cross validation accuracy for $k$NN models with $k$ = 1, 10, 100.
How do you expect changing the number of neighbors to affect the results?
End of explanation
for ntree in [1,10,30,100,300]:
rf_cv = cross_val_score(RandomForestClassifier(n_estimators=ntree), train_X, train_y, cv=10)
print('With {:d} trees the FoM = {:.4f} +/- {:.4f}'.format(ntree, np.mean(rf_cv), np.std(rf_cv, ddof=1)))
Explanation: write your answer here
Problem 4b
Determine the 10-fold cross validation accuracy for RF models with $N_\mathrm{tree}$ = 1, 10, 30, 100, and 300.
How do you expect changing the number of trees to affect the results?
End of explanation
phot_y = np.empty_like(test_y)
phot_gal = np.logical_not(test_X[:,0] - test_X[:,-1] < 0.145)
phot_y[phot_gal] = 'GALAXY'
phot_y[~phot_gal] = 'STAR'
print("The baseline FoM = {:.4f}".format(accuracy_score(test_y, phot_y)))
Explanation: write your answer here
Now you are ready for the moment of truth!
Problem 5) Model Predictions
Problem 5a
Calculate the FoM for the SDSS photometric model on the test set.
End of explanation
rf_clf = RandomForestClassifier(n_estimators=300, n_jobs=-1)
rf_clf.fit(train_X, train_y)
test_preds = rf_clf.predict(test_X)
print("The RF model has FoM = {:.4f}".format(accuracy_score(test_y, test_preds)))
Explanation: Problem 5b
Using the optimal number of trees from 4b calculate the FoM for the random forest model.
Hint - remember that the model should be trained on the training set, but the predictions are for the test set.
End of explanation
from sklearn.metrics import confusion_matrix
print(confusion_matrix(test_y, test_preds))
Explanation: Problem 5c
Calculate the confusion matrix for the test set. Is there symmetry to the misclassifications?
Hint - the confusion_matrix function in sklearn.metrics will help.
End of explanation
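Aside: a small sketch (not part of the original notebook) that normalizes each row of the confusion matrix by the number of true sources in that class, which makes any asymmetry in the misclassifications easier to see.
# row-normalized confusion matrix (fractions per true class)
cm = confusion_matrix(test_y, test_preds)
print(cm / cm.sum(axis=1, keepdims=True))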
from sklearn.metrics import roc_curve
test_y_int = np.ones_like(test_y, dtype=int)
test_y_int[np.where(test_y == 'GALAXY')] = 0
test_preds_proba = rf_clf.predict_proba(test_X)
fpr, tpr, thresh = roc_curve(test_y_int, test_preds_proba[:,1])
fig, ax = plt.subplots()
ax.plot(fpr, tpr)
ax.set_xlabel('FPR')
ax.set_ylabel('TPR')
ax.set_xlim(2e-3,.2)
ax.set_ylim(0.3,1)
Explanation: write your answer here
Problem 5d
Calculate (and plot the region of interest) the ROC curve assuming that stars are the positive class.
Hint 1 - you will need to calculate probabilistic classifications for the test set using the predict_proba() method.
Hint 2 - the roc_curve function in the sklearn.metrics module will be useful.
End of explanation
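Aside: a brief sketch (not part of the original notebook) computing the area under this ROC curve as a single summary number, using the same probabilistic predictions as above.
from sklearn.metrics import roc_auc_score
print('AUC = {:.4f}'.format(roc_auc_score(test_y_int, test_preds_proba[:,1])))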
tpr_99_thresh = thresh[np.argmin(np.abs(0.99 - tpr))]
print('This model requires a classification threshold of {:.4f}'.format(tpr_99_thresh))
fpr_at_tpr_99 = fpr[np.argmin(np.abs(0.99 - tpr))]
print('This model misclassifies {:.2f}% of galaxies'.format(fpr_at_tpr_99*100))
Explanation: Problem 5e
Suppose that (like me) you really care about supernovae. In this case you want a model that correctly classifies 99% of all stars, so that stellar flares do not fool you into thinking you have found a new supernova.
What classification threshold should be adopted for this model?
What fraction of galaxies does this model misclassify?
End of explanation
new_data_df = pd.read_hdf("blind_test_set.h5")
Explanation: Problem 6) Classify New Data
Run the cell below to load in some new data (which in this case happens to have known labels, but in practice this will almost never be the case...)
End of explanation
new_X = np.array(new_data_df[feats])
new_y = np.array(new_data_df['class'])
Explanation: Problem 6a
Create a feature and label array for the new data.
Hint - copy the code you developed above in Problem 2.
End of explanation
new_preds = rf_clf.predict(new_X)
print("The model has an accuracy of {:.4f}".format(accuracy_score(new_y, new_preds)))
Explanation: Problem 6b
Calculate the accuracy of the model predictions on the new data.
End of explanation
from sklearn.model_selection import GridSearchCV
grid_results = GridSearchCV(RandomForestClassifier(n_jobs=-1),
{'n_estimators': [30, 100, 300],
'max_features': [1, 3, 7],
'min_samples_leaf': [1,10,30]},
cv = 3)
grid_results.fit(train_X, train_y)
print('The best model has {}'.format(grid_results.best_params_))
Explanation: Problem 6c
Can you explain why the accuracy for the new data is significantly lower than what you calculated previously?
If you can build and train a better model (using the training data) for classifying the new data - I will be extremely impressed.
write your answer here
Challenge Problem) Full RF Optimization
Now we will optimize the model over all tuning parameters. How does one actually determine the optimal set of tuning parameters? Brute force.
We will optimize the model via a grid search that performs CV at each point in the 3D grid. The final model will adopt the point with the highest accuracy.
It is important to remember two general rules of thumb: (i) if the model is optimized at the edge of the grid, refit a new grid centered on that point, and (ii) the results should be stable in the vicinity of the grid maximum. If this is not the case the model is likely overfit.
Use GridSearchCV to perform a 3-fold CV grid search to optimize the RF star-galaxy model. Remember the rules of thumb.
What are the optimal tuning parameters for the model?
Hint 1 - think about the computational runtime based on the number of points in the grid. Do not start with a very dense or large grid.
Hint 2 - if the runtime is long, don't repeat the grid search even if the optimal model is on an edge of the grid
End of explanation |
2,636 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Coding in Python
Dr. Chris Gwilliams
gwilliamsc@cardiff.ac.uk
Writing in Python
Step1: Types
Python has a type system (variables have types), even if you do not specify it when you declare them.
String
Float
Boolean
Integer
None
Exercise
Give me an example of each of these types.
What is the None type?
Use the Internet to find me examples of 1 other type in Python
We will come back to type as the course progresses.
Literals
Literally a value.
All of the examples you just gave are literals.
Step2: 3.14 is a float literal
Variables
Literals are all well and good for printing but what about when we need to change and store these literals?
Variables are ways of giving literal values a name in order to refer to them later.
Below, we are declaring and instantiating a variable
Step3: Variables in Python
Technically, Python does not have empty variables that are not instantiated. A variable exists as soon as it is assigned a value. Like this
Step4: Variables in Python
In many languages, when a variable is instantiated, it is reserved into a block of memory and the variable points to that memory address, often unique for that variable.
In Python, it is more like that memory address is tagged with the variable name. So, if we create three variables that have the same literal value, then they all point to the same memory address. Like so
Step5: id and Multi Variable Assignment
The id function is built into Python and returns the memory address of variables provided to it.
This is easier to see when we use multi-variable assignment in Python
Step6: Don't Call It That
Step7: Recap Questions
What is a REPL?
What does strongly typed mean?
What does dynamically typed mean?
What is a literal?
Variable Recap
Give me 3 types in Python
What would you use each type for?
How do you declare multiple variables at the same time?
What is wrong with the code below?
name, age, height = 12, 'Terry'
Read, Eval, Print, Loop
Strongly typed is that type errors are caught as errors and Python keeps track of types of variables
Dynamically typed means that variables do not need a type declared at their declaration and the type can be changed at runtime
A literal is literally a value
Boolean - Store flags, String - hold text, Float - decimal numbers
Separate them with a comma
The sides do not match up, there is a missing literal
Finding Types
Got some code and don't know what the types are?
Python has some functions to help with this.
Step8: Exercise
Try this with different types, how many can you find?
What happens when you do type([])?
type is an example of a built-in function, we will come back to these in a few sessions.
Strings
Strings in Python are unlike many languages you have seen before.
They can be
Step9: What do single quotes usually mean in most languages?
Strings wrapped in 'single quotes' are typically chars (single characters). Python does not have this type.
char yes = 'Y' //char (Python does not have this)
string no = "no" //string
What are chars used for? What does Python have instead?
Chars are typically used as single character flags, like a 'Y' or an 'N' as an answer to a question, or to hold an initial.
Anything text based can be stored in a string but flags can be represented as a 0 or 1 or even using a Boolean value,, which is easiest to check against.
What happens if you use a single quote for strings and you write the word don't in the string? Try it out now!
How do we get around that?
Escaping Strings
When you want to include special characters (like ') then it is always good to escape them!
Ever seen a new line written as \n? That is an example of escaping.
Escape Character
An escape character is a character which invokes an alternative interpretation on subsequent characters in a character sequence.
This is pretty much always \
Step10: Strings II
If you do not want to escape every special character, maybe there is a better way?
python
"I am a string and I don't care what is written inside me"
Step12: Exercise
Create a variable of type string and then reassign it to a float literal.
Now try adding a float literal to your variable
Operators
What is an operator? What do you remember from maths?
+ (add)
- (subtract)
/ (divide)
* (multiply)
There are more, but we will get to them!
Exercise
Add 3 and 4
Add 3.14 + 764 (what is the difference to the above answer?)
Subtract 100 from 10
Multiply 10 by 10
Add 13 to 'adios'
Multiply 'hello' by 5
Divide 10 by 3
Divide 10 by 3 but use 2 / (what happens?)
In Python 3, // is floor division and / is floating point division
Adding/Dividing/Subtracting Across Types
Step13: Multiplying Across Types
While Python is not happy to add/divide/subtract numbers from strings, it is more than happy to multiply
Step14: Operating and Assigning
You may have a variable and want to change the value, this is reassigning, right?
python
year = 1998
year = 1999
There has to be an easier way. THERE IS. What is it?
Step15: Exercise
Try this with all the operators you know.
Built in Functions
A function is a block of code that
Step16: help | Python Code:
# Does this make sense without comments?
import csv
with open('myfile.csv', 'r', newline='') as opened_csv:
    spamreader = csv.reader(opened_csv, delimiter=' ', quotechar='|')
    for row in spamreader:
        print(', '.join(row))
# How about this?
# the csv module comes from the standard library
import csv
# open csv file in readable text mode (csv.reader needs text, not binary mode)
with open('myfile.csv', 'r', newline='') as opened_csv:
    # read opened csv file with spaces as delimiters
    spamreader = csv.reader(opened_csv, delimiter=' ', quotechar='|')
    # loop through and print each line
    for row in spamreader:
        print(', '.join(row))
Explanation: Coding in Python
Dr. Chris Gwilliams
gwilliamsc@cardiff.ac.uk
Writing in Python: PEP
Python Enhancement Proposals
Unsure how your code should be written? PEP is a style guide for Python and provides details on what is expected.
Use 4 spaces instead of tabs
Lines should be at most 79 characters long
Variables should follow snake_case
All lower case words, separated by underscores (_)
Classes should be Capitalised Words (MyClassExample)
PEP
Comments
Sometimes, you need to describe your code and the logic may be a bit complicated, or it took you a while to figure it out and you want to make a note.
You can't just write some text in the file or you will get errors, this is where comments come in! Comments are descriptions that the Python interpreter ignores.
Just type a # and whatever you want to write, and voilà!
It is ALWAYS a good idea to comment your code!
End of explanation
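Aside: a tiny sketch (not part of the original slides) of the naming rules above.
student_count = 30      # variables use snake_case
class CourseSession:    # classes use Capitalised Words
    pass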
"Gavin" #String Literal
4 #Integer Literal
Explanation: Types
Python has a type system (variables have types), even if you do not specify it when you declare them.
String
Float
Boolean
Integer
None
Exercise
Give me an example of each of these types.
What is the None type?
Use the Internet to find me examples of 1 other type in Python
We will come back to type as the course progresses.
Literals
Literally a value.
All of the examples you just gave are literals.
End of explanation
# declared and instantiated
name = "Gavin"
# declared, but not instantiated
new_name = None
Explanation: 3.14 is a float literal
Variables
Literals are all well and good for printing but what about when we need to change and store these literals?
Variables are ways of giving literal values a name in order to refer to them later.
Below, we are declaring and instantiating a variable
End of explanation
x # does not exist so cannot print it
x = 1
print(x)
Explanation: Variables in Python
Technically, Python does not have empty variables that are not instantiated. A variable exists as soon as it is assigned a value. Like this:
End of explanation
a = 1
b = 1
c = 1
print(id(a))
print(id(b))
Explanation: Variables in Python
In many languages, when a variable is instantiated, it is reserved into a block of memory and the variable points to that memory address, often unique for that variable.
In Python, it is more like that memory address is tagged with the variable name. So, if we create three variables that have the same literal value, then they all point to the same memory address. Like so:
End of explanation
a,b,c = 1,1,1
name, age, yob = "chris", 26, 1989
print(name, age, yob)
print(id(name), id(age), id(yob))
a,b,c = "Name",12,"6ft"
print(a,b,c) #NOTE: always balance the left and the right. 5 variables must have 5 values!
Explanation: id and Multi Variable Assignment
The id function is built into Python and returns the memory address of variables provided to it.
This is easier to see when we use multi-variable assignment in Python:
End of explanation
print("Hello " + "World") #ok
print("hello" + 5) #strongly typed means this cannot happen!
name = "Chris"
name = "Pi"
"pi" + 6 #Strongly typed means no adding different types together!
name = 3.14 #dynamically typed means yes to changing the type of a variable!
Explanation: Don't Call It That:
These are keywords reserved in Python, so do not name any of your variables after these! You will learn about what many of these do throughout this course.
| False | class | finally | is | return |
|----------|--------|----------|-------|--------|
| continue | for | lambda | try | True |
| def | from | nonlocal | while | and |
| del | global | not | with | as |
| elif | if | or | yield | assert |
| else | import | pass | break | except |
| in | raise | None | | |
Useful links on Python variables
http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#other-languages-have-variables
http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/variables.html
http://foobarnbaz.com/2012/07/08/understanding-python-variables/
http://www.diveintopython.net/native_data_types/declaring_variables.html
Types II
Python is Strongly Typed - The Python interpreter keeps track of all the variables and their associated types.
AND
Python is Dynamically Typed - Variables can be reassigned to values of different types. A variable is simply a value bound to a name, the variable does not hold a type, only the value does.
End of explanation
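Aside: a short sketch (not part of the original slides) showing that Python can list its own reserved keywords via the standard library.
import keyword
# every name in this list is off-limits for your own variables
print(keyword.kwlist)
print(len(keyword.kwlist))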
type('am I your type?')
Explanation: Recap Questions
What is a REPL?
What does strongly typed mean?
What does dynamically typed mean?
What is a literal?
Variable Recap
Give me 3 types in Python
What would you use each type for?
How do you declare multiple variables at the same time?
What is wrong with the code below?
name, age, height = 12, 'Terry'
Read, Eval, Print, Loop
Strongly typed means that type errors are caught as errors and Python keeps track of the types of variables
Dynamically typed means that variables do not need a type declared at their declaration and the type can be changed at runtime
A literal is literally a value
Boolean - Store flags, String - hold text, Float - decimal numbers
Separate them with a comma
The sides do not match up, there is a missing literal
Finding Types
Got some code and don't know what the types are?
Python has some functions to help with this.
End of explanation
'single_quotes'
Explanation: Exercise
Try this with different types, how many can you find?
What happens when you do type([])?
type is an example of a built-in function, we will come back to these in a few sessions.
Strings
Strings in Python are unlike many languages you have seen before.
They can be:
End of explanation
'isn't # not gonna work
'isn\'t' # works a charm!
Explanation: What do single quotes usually mean in most languages?
Strings wrapped in 'single quotes' are typically chars (single characters). Python does not have this type.
char yes = 'Y' //char (Python does not have this)
string no = "no" //string
What are chars used for? What does Python have instead?
Chars are typically used as single character flags, like a 'Y' or an 'N' as an answer to a question, or to hold an initial.
Anything text based can be stored in a string but flags can be represented as a 0 or 1 or even using a Boolean value, which is easiest to check against.
What happens if you use a single quote for strings and you write the word don't in the string? Try it out now!
How do we get around that?
Escaping Strings
When you want to include special characters (like ') then it is always good to escape them!
Ever seen a new line written as \n? That is an example of escaping.
Escape Character
An escape character is a character which invokes an alternative interpretation on subsequent characters in a character sequence.
This is pretty much always \
End of explanation
print("Got something to say?")
print("Use the print statement")
print("to print a string literal")
print("float literal")
print(3.14)
more_string = "or variable"
print(more_string)
Explanation: Strings II
If you do not want to escape every special character, maybe there is a better way?
python
"I am a string and I don't care what is written inside me"
I am a string with triple double quotes and I can
run across multiple lines
Double quotes are generally better as you do not have to escape these special characters.
Exercise
Declare a variable to store a boolean literal
Declare and instantiate a new variable that stores your age
The first one was a trick! Declaring a new variable means giving it no value!
The second one would be: age = 20
Reassigning Variables
It would not be fun to have to create a new variable for every thing you want to store, right?
As well as being a huge inconvenience, it is actually really inefficient.
```python
age = 40
1 year passes
age = 41
```
Easy, right? By using the same variable name, it is now associated with your new value and the old value will be cleared up.
Printing
We have seen this print keyword thrown around a lot, right? This is the best way to show some information.
Especially useful if your script takes a long time to run!
E.g. print("stuff")
End of explanation
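Aside: a small sketch (not part of the original slides) showing that triple double quotes keep the line breaks and need no escaping.
message = """I am a string and I don't care what is written inside me,
and I can run across multiple lines"""
print(message)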
float_type = 3.0
int_type = 5
print(int_type + float_type)
string_type = "hello"
print(string_type + float_type) # what is the error?
bool_type = True
print(string_type + bool_type)
print(int_type + bool_type) # does this work? Why?
Explanation: Exercise
Create a variable of type string and then reassign it to a float literal.
Now try adding a float literal to your variable
Operators
What is an operator? What do you remember from maths?
+ (add)
- (subtract)
/ (divide)
* (multiply)
There are more, but we will get to them!
Exercise
Add 3 and 4
Add 3.14 + 764 (what is the difference to the above answer?)
Subtract 100 from 10
Multiply 10 by 10
Add 13 to 'adios'
Multiply 'hello' by 5
Divide 10 by 3
Divide 10 by 3 but use 2 / (what happens?)
In Python 3, // is floor division and / is floating point division
Adding/Dividing/Subtracting Across Types
End of explanation
float_type = 3.0
int_type = 5
print(int_type * float_type)
string_type = "hello"
print(string_type * int_type)
bool_type = True
print(string_type * bool_type) #why does this work?
print(int_type * bool_type)
Explanation: Multiplying Across Types
While Python is not happy to add/divide/subtract numbers from strings, it is more than happy to multiply
End of explanation
year = year + 1
year += 1
Explanation: Operating and Assigning
You may have a variable and want to change the value, this is reassigning, right?
python
year = 1998
year = 1999
There has to be an easier way. THERE IS. What is it?
End of explanation
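Aside: a short sketch (not part of the original slides) showing that every arithmetic operator has an operate-and-assign form.
year = 1998
year += 1    # 1999
year -= 10   # 1989
year *= 2    # 3978
year //= 3   # 1326
print(year)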
dir("")
Explanation: Exercise
Try this with all the operators you know.
Built in Functions
A function is a block of code that:
- receives an input(s) (also known as arguments)
- performs an operation (or operations)
- Optionally, returns an output
Python has some built-in functions and we have used one already. What was it?
Functions - Print
print("Hello")
Format:
- function_name
- Open brackets
- inputs (separated, by, commas)
- Close brackets
Sometimes, inputs are optional. Not always. We will get to this.
Other Built in Functions
str()
len()
type()
int()
Exercise
Find out what the above functions do and use them in a script
What happens when you don't give each an argument?
Look up and write up definitions for the id and isinstance functions
str - converts an object to a string type
len - returns the length of an object
type - tells you the type of a literal or a variable
int - converts a type to an integer
id - a unique id that relates to where the item is stored in memory
isinstance - checks if an object is of the supplied type
Functions to Help You
dir
End of explanation
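Aside: a brief sketch (not part of the original slides) that uses the built-in functions listed above, with hypothetical values.
age = 26
print(type(age))             # <class 'int'>
print(str(age) + " years")   # int converted to a string
print(len("Cardiff"))        # 7
print(int("1989") + 1)       # 1990
print(isinstance(age, int))  # True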
help(int)
help("")
help(1)
Explanation: help
End of explanation |
2,637 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Perceptually Uniform Color Interpolation
This was a small for fun experiment done on a lazy Saturday. Check the original at https
Step1: Parsing
Step2: Conversion from RGB (assumed sRGB) to XYZ
The formulas copied from Wikipedia article on sRGB.
Step3: Conversion from XYZ to L*a*b*
The formulas copied from Wikipedia article on L*a*b* and Bruce Lind Bloom's page on color conversion.
Step4: Interpolate colors in the perceptually uniform L*a*b* space
Step5: Conversion from L*a*b* to XYZ
The formulas copied from Wikipedia article on L*a*b*. and Bruce Lind Bloom's page on color conversion.
Step6: Conversion from XYZ to RGB (assumed sRGB)
The formulas copied from Wikipedia article on sRGB.
Step7: Formatting the output | Python Code:
import re
import colorlover as cl
from IPython.display import HTML
import numpy as np
num_original = 4
num_interpolated = 19
colors = cl.scales[str(num_original).strip()]['qual']['Set1']
HTML(cl.to_html( colors ))
Explanation: Perceptually Uniform Color Interpolation
This was a small for fun experiment done on a lazy Saturday. Check the original at https://github.com/dreavjr/colorinterp.
(c̸) 2017 Eduardo Valle. This software is in Public Domain; it is provided "as is" without any warranties. Please check https://github.com/dreavjr/colorinterp/blob/master/LICENSE.md
Imports and initializations
End of explanation
colors_list = [ re.search('rgb\(([0-9]+),([0-9]+),([0-9]+)\)', c).groups() for c in colors ]
colors_array = np.asarray([ [ float(p) for p in c ] for c in colors_list ])
# DEBUG : forces sRGB correspondence on my hardware
# colors = np.asarray([ [172, 205, 229], [59, 117, 184], [181, 225, 128] ], dtype='float')
colors_array
Explanation: Parsing
End of explanation
Scale = 255.0
srgb = colors_array/Scale
srgb_thres=0.04045
srgb_linear = np.empty(srgb.shape)
srgb_linear[srgb<=srgb_thres] = (srgb/12.92)[srgb<=srgb_thres]
srgb_linear[srgb>srgb_thres] = np.power(((srgb+0.055)/1.055),2.4)[srgb>srgb_thres]
srgb2xyz = np.asarray([ [ 0.4124, 0.3576, 0.1805 ],
[ 0.2126, 0.7152, 0.0722 ],
[ 0.0193, 0.1192, 0.9505 ] ])
# xyz = srgb2xyz.dot(srgb_linear[i])
xyz = srgb_linear.dot(srgb2xyz.T)
xyz
Explanation: Conversion from RGB (assumed sRGB) to XYZ
The formulas copied from Wikipedia article on sRGB.
End of explanation
Lab_thresh = 216.0/24389.0
Lab_kappa = 24389.0/27.0
def xyz2lab(t):
t2 = np.empty(t.shape)
cond = t<=Lab_thresh
t2[cond] = (Lab_kappa*t+16.0)[cond]
t2[np.logical_not(cond)] = np.power(t,(1.0/3.0))[np.logical_not(cond)]
return t2
WhiteD65 = np.asarray([ 0.95047, 1.00000, 1.08883 ])
xyz_ratio = xyz/WhiteD65
pre_lab = xyz2lab(xyz_ratio)
ell = 116.0*pre_lab[:,1] - 16.0
a = 500.0*(pre_lab[:,0]-pre_lab[:,1])
b = 200.0*(pre_lab[:,1]-pre_lab[:,2])
lab = np.asarray([ell, a, b]).T
lab
Explanation: Conversion from XYZ to L*a*b*
The formulas copied from Wikipedia article on L*a*b* and Bruce Lind Bloom's page on color conversion.
End of explanation
originals = np.arange(ell.shape[0])
interpolated = np.linspace(originals[0], originals[-1], num=num_interpolated)
ell_interp = np.interp(interpolated, originals, ell)
a_interp = np.interp(interpolated, originals, a)
b_interp = np.interp(interpolated, originals, b)
# DEBUG: disables interpolation
# ell_interp = ell
# a_interp = a
# b_interp = b
# Demonstration: also interpolates in the rgb space ---
rgb = colors_array.T
red_interp = np.interp(interpolated, originals, rgb[0])
green_interp = np.interp(interpolated, originals, rgb[1])
blue_interp = np.interp(interpolated, originals, rgb[2])
rgb_interp = np.asarray([red_interp, green_interp, blue_interp]).T
# --- this is not needed for the perceptual interpolation, only for comparison
np.asarray([ell_interp, a_interp, b_interp]).T
Explanation: Interpolate colors in the perceptually uniform L*a*b* space
End of explanation
def lab2xyz(t):
t2 = np.empty(t.shape)
cond =t<=Lab_thresh
t2[cond] = ((t-16.0)/Lab_kappa)[cond]
t2[np.logical_not(cond)] = np.power(t,3.0)[np.logical_not(cond)]
return t2
ell_interp_scaled = (ell_interp+16.0)/116.0
a_interp_scaled = a_interp/500.0
b_interp_scaled = b_interp/200.0
pre_x_interp = lab2xyz(ell_interp_scaled+a_interp_scaled)
pre_y_interp = lab2xyz(ell_interp_scaled)
pre_z_interp = lab2xyz(ell_interp_scaled-b_interp_scaled)
pre_xyz_interp = np.asarray([pre_x_interp, pre_y_interp, pre_z_interp]).T
xyz_interp = pre_xyz_interp*WhiteD65
xyz_interp #, xyz
Explanation: Conversion from L*a*b* to XYZ
The formulas copied from Wikipedia article on L*a*b*. and Bruce Lind Bloom's page on color conversion.
End of explanation
xyz2srgb = np.asarray([ [ 3.2406, -1.5372, -0.4986 ],
[ -0.9689, 1.8758, 0.0415 ],
[ 0.0557, -0.2040, 1.0570 ] ])
# srgb_linear_interp = xyz2srgb.dot(xyz_interp[i])
srgb_linear_interp = xyz_interp.dot(xyz2srgb.T)
srgb_interp = np.empty(srgb_linear_interp.shape)
srgb_interp[srgb_linear_interp<=0.0031308] = (12.92*srgb_linear_interp)[srgb_linear_interp<=0.0031308]
srgb_interp[srgb_linear_interp>0.0031308] = ((1.055)*np.power(srgb_linear_interp,(1.0/2.4)) - 0.055)[srgb_linear_interp>0.0031308]
srgb_interp[srgb_interp<0.0] = 0.0
srgb_interp[srgb_interp>1.0] = 1.0
srgb_device_interp = np.round(srgb_interp*Scale)
# print(srgb_linear_interp, srgb_interp, srgb_device_interp, colors, sep='\n\n')
srgb_device_interp
Explanation: Conversion from XYZ to RGB (assumed sRGB)
The formulas copied from Wikipedia article on sRGB.
End of explanation
# This --- finally --- is the desired output
colors_interpolated = [ 'rgb(%d,%d,%d)' % tuple(c) for c in srgb_device_interp ]
after_hsl_and_back = cl.to_rgb(cl.to_hsl(colors_interpolated))
rgb_interpolated = [ 'rgb(%d,%d,%d)' % tuple(c) for c in rgb_interp ]
print(colors)
HTML(cl.to_html( colors ))
print(colors_interpolated)
print(after_hsl_and_back)
print(rgb_interpolated)
HTML(cl.to_html(after_hsl_and_back))
HTML('<div>'+
'<div style="height:20px;width:110px;display:inline-block;">Int. on L*a*b*:</div>'+
cl.to_html( colors_interpolated )+'<br/>'+
'<div style="height:20px;width:110px;display:inline-block;">HSL and back:</div>'+
cl.to_html( after_hsl_and_back )+'<br/>'+
'<div style="height:20px;width:110px;display:inline-block;">Int. on RGB:</div>'+
cl.to_html( rgb_interpolated ))
Explanation: Formatting the output
End of explanation |
2,638 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Optimization
Ever thought about an automatic way to tune hyperparameters of your beloved machine learning algorithm? for example learning rate, weight decay, and drop out probability in a neural network? here we will look through a proposed way to achive a set of good hyperparameters by bayesian means.
In Bayesian Optimization (BO) a machine learning algorithem can be looked as a blackbox which gives out some measure of performance, e.g accuracy, accuracy per second, or any other score value that change relative to a set of parameters.
By the end of this jupyter notebook we will utilize BO to get optimized parameters for a minimalistic network to learn digit handwriting recognition from MNIST dataset.
It is recommended to go through my notebook on gaussian process regression before progressing with material presented here.
I will use TensorFlow for neural network implementation and it is good if you go through this tutorial if you have no previous experience with this framework.
For an all-in-one Docker image containig major deep learning frameworks consider this repository.
Environment Setup
Step1: We will begin by a simple 1D optimization problem for a function that we dont know the closed form of it (for presentation purposes we will know thee analytic optimum point of our function). Then we will try with a 2D function and finally we will apply our be then prepared tools in a real neural network case. Let's begin with our fun! but first we need our building parts.
GP Tools
Here we will include code for all the GP tools that we will use further in the notebook. For closer explanation you can see my notebook on gaussian process regression.
Step2: Acquisition Function
With acquisition function we determine where to sample next from our GP prior to best achieve our optimization objective. This functions yields an automatic optimized choice between exploration (where GP posterior variance is high) and exploitation (where the mean of GP is high). We will choose new set of parameters where exploitation is high (high GP mean) and also exploration is high (high GP uncertainty).
Different options exist for the choice of acquisition fucntion
Step3: Shall Optimize!
Step4: 1D Bayesian Optimization
We run BO on our toy 1D function and see that the found max is close to the true maximum of our function
Step5: Let's see how our BO solves this maximization problem.
You might need to run it couple of times to get the best result, this strategy is not deterministic
Step6: 2D Bayesian Optimization
BO on a 2D space with an imaginary function.
Step7: Let's see how close BO can get to the real maximum
Step8: Optimizing a Neural Network's Hyperparameters with Bayesian Optimization
Choosing the best configuration of a neural network (it's hidden layer depth, learning rate, batch size, ...) can be seen as an optimization which can be also targeted with bayesian optimization using gaussian processes regressors. Through out this section we will test BO's power in this problem.
First we define the initial network from TensorFlow MNIST tutorial | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize
from scipy.linalg import det
from scipy.linalg import pinv2 as inv #pinv uses linalg.lstsq algorithm while pinv2 uses SVD
from scipy.stats import norm
%matplotlib inline
%load_ext autoreload
%autoreload 2
%autosave 0
Explanation: Bayesian Optimization
Ever thought about an automatic way to tune hyperparameters of your beloved machine learning algorithm? for example learning rate, weight decay, and drop out probability in a neural network? here we will look through a proposed way to achive a set of good hyperparameters by bayesian means.
In Bayesian Optimization (BO) a machine learning algorithem can be looked as a blackbox which gives out some measure of performance, e.g accuracy, accuracy per second, or any other score value that change relative to a set of parameters.
By the end of this jupyter notebook we will utilize BO to get optimized parameters for a minimalistic network to learn digit handwriting recognition from MNIST dataset.
It is recommended to go through my notebook on gaussian process regression before progressing with material presented here.
I will use TensorFlow for neural network implementation and it is good if you go through this tutorial if you have no previous experience with this framework.
For an all-in-one Docker image containig major deep learning frameworks consider this repository.
Environment Setup
End of explanation
# %load GPR.py
# GP squared exponential with its own hyperparameters
def get_kernel(X1,X2,sigmaf,l,sigman):
k = lambda x1,x2,sigmaf,l,sigman:(sigmaf**2)*np.exp(-(1/float(2*(l**2)))*np.dot((x1-x2),(x1-x2).T)) + (sigman**2);
K = np.zeros((X1.shape[0],X2.shape[0]))
for i in range(0,X1.shape[0]):
for j in range(0,X2.shape[0]):
if i==j:
K[i,j] = k(X1[i,:],X2[j,:],sigmaf,l,sigman);
else:
K[i,j] = k(X1[i,:],X2[j,:],sigmaf,l,0);
return K
# find optimized GP hyperparameters by minimizing the negative log marginal likelihood
def fit_GP(x,y, gp_params):
    # -log p(y|x,theta) = 0.5*( y' K^-1 y + log|K| + n*log(2*pi) )
    # sigman > measurement noise
    # sigmaf > maximum covariance value
    # l > length-scale of our GP with squared exponential kernel
bounds = [(0.001,2),(0.001,1),(0.001,1)]
logpxtheta = lambda p: 0.5*(np.dot(y.T,np.dot(inv(get_kernel(x,x,p[0],p[1],p[2])),y)) + np.log(det(get_kernel(x,x,p[0],p[1],p[2]))) + x.shape[0]*np.log(2*np.pi)).reshape(-1,1)[0];
res = minimize(fun=logpxtheta,
x0=gp_params,
bounds=bounds,
method='L-BFGS-B')
new_gp_params = res['x']
#print 'new GP parameters', new_gp_params
return new_gp_params
#putting everything about GP to use here
def GPR(x_predict,x,y,gp_params):
pdim = x_predict.shape[0]
sigmaf, l, sigman = gp_params
K = get_kernel(x, x, sigmaf, l, sigman)
K_s = get_kernel(x_predict, x, sigmaf, l, 0)
K_ss = get_kernel(x_predict, x_predict, sigmaf, l, sigman)
#print K.shape,K_s.shape, K_ss.shape, y.shape
y_predict_mean = np.dot(np.dot(K_s,inv(K)),y).reshape(pdim,-1)
y_predict_var = np.diag(K_ss - np.dot(K_s,(np.dot(inv(K),K_s.T))))#.reshape(-1,1)
#print y_predict_mean.shape
return (y_predict_mean, y_predict_var)
Explanation: We will begin by a simple 1D optimization problem for a function that we dont know the closed form of it (for presentation purposes we will know thee analytic optimum point of our function). Then we will try with a 2D function and finally we will apply our be then prepared tools in a real neural network case. Let's begin with our fun! but first we need our building parts.
GP Tools
Here we will include code for all the GP tools that we will use further in the notebook. For closer explanation you can see my notebook on gaussian process regression.
End of explanation
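# Quick standalone sanity check of the GP helpers defined above (toy data; the
# numbers below are illustrative assumptions, not tied to any real problem).
# fit_GP tunes (sigmaf, l, sigman) on a handful of observations, and GPR then
# returns the posterior mean and variance at new locations.
x_obs = np.array([[-2.0], [0.0], [1.5]])
y_obs = np.array([0.3, 1.1, 0.2])
toy_gp_params = fit_GP(x_obs, y_obs, (0.5, 1.0, 0.001))
toy_mu, toy_var = GPR(np.array([[0.5]]), x_obs, y_obs, toy_gp_params)
print toy_mu, toy_var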
def expected_improvement(x, xp, yp, gp_params, kappa=0.0, n_params = 1):
xpredict = np.asarray(x).reshape(-1, n_params)
GP_mu, GP_sigma = GPR(xpredict, xp, yp, gp_params)
GP_sigma = GP_sigma.reshape(-1,1)
f_best = np.max(yp)
#print n_params, xp.shape,yp.shape,xpredict.shape
with np.errstate(divide='ignore'):
gamma_x = (GP_mu - f_best) / GP_sigma
EIx = GP_sigma * (gamma_x * norm.cdf(gamma_x) + norm.pdf(gamma_x))
        EIx[GP_sigma == 0.0] = 0.0  # no expected improvement where the posterior std is zero
return -1*EIx # negative because we will use a minimizer to find the maximum
def upper_confidence_bound(x, xp, yp, gp_params, kappa=0.0, n_params = 1):
xpredict = np.asarray(x).reshape(-1, n_params)
GP_mu, GP_sigma = GPR(xpredict, xp, yp, gp_params)
GP_sigma = GP_sigma.reshape(-1,n_params)
return -1*(GP_mu + kappa * GP_sigma)
def sample_next_hyperparameter(acquisition_func, xp, yp, gp_params, bounds, kappa=0.0, n_restarts=10):
'''n_restarts: integer.
Number of times to run the minimizer with different starting points.'''
best_x = None
best_acquisition_value = 1
n_params = bounds.shape[0]
for starting_point in np.random.uniform(bounds[:, 0], bounds[:, 1], size=(n_restarts, n_params)):
res = minimize(fun=acquisition_func,
x0=starting_point.reshape(-1, n_params),
bounds=bounds,
method='L-BFGS-B',
args=(xp,yp, gp_params,kappa, n_params))
if res.fun < best_acquisition_value:
best_acquisition_value = res.fun
best_x = res.x
return best_x
Explanation: Acquisition Function
With the acquisition function we determine where to sample next from our GP prior to best achieve our optimization objective. This function yields an automatic, optimized choice between exploration (where the GP posterior variance is high) and exploitation (where the GP mean is high). We will choose a new set of parameters where exploitation is high (high GP mean) and also exploration is high (high GP uncertainty).
Different options exist for the choice of acquisition function:
- Expected improvement: $a_{EI}(x,\{x_n,y_n\},\theta) = \sigma_{GP}(x,\{x_n,y_n\},\theta)\left[\gamma(x)\Phi\left(\gamma(x)\right) + \mathcal{N}(\gamma(x); 0, 1)\right]$
where
$\gamma(x)=\frac{f(x_{best}) - \mu_{GP}(x,\{x_n,y_n\},\theta)}{\sigma_{GP}(x,\{x_n,y_n\},\theta)}$
$\Phi$ is the standard normal cumulative distribution function and $x_{best}$ is the location of the lowest posterior mean (in the maximization setting used by the code above, $\gamma(x)$ is computed with the sign flipped, i.e. $(\mu_{GP}(x) - f(x_{best}))/\sigma_{GP}(x)$).
In conclusion, each time we get a new observed point, the question is where to look next to find the optimum (here assumed to be a maximum) of the underlying unseen function; by finding the optimum of an acquisition function we resolve the exploitation/exploration dilemma. The good thing about the EI acquisition function is that it has no parameters of its own, so the choice of the next sampling point is automatic. Further on we will use the L-BFGS algorithm to find the minimum of the negative acquisition function (i.e. its maximum), but this could also be made part of the BO itself.
End of explanation
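# A concrete numeric check of the expected-improvement formula above (illustrative
# values only -- not taken from any fitted GP). Note the maximisation convention:
# gamma = (mu - f_best) / sigma, exactly as in expected_improvement() above.
mu_demo, sigma_demo, f_best_demo = 1.2, 0.4, 1.0
gamma_demo = (mu_demo - f_best_demo) / sigma_demo
ei_demo = sigma_demo * (gamma_demo * norm.cdf(gamma_demo) + norm.pdf(gamma_demo))
print ei_demo  # larger EI -> more promising candidate to sample next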
def BayesianOptimization(f, bounds,max_iter=10,n_pre_samples=2,
gp_params=(0.5, 1., 0.001),
fit_GP_every=0,dis_every=1,plot_res = 0, kappa = 0.0):
x_list = []
y_list = []
n_param = bounds.shape[0]
#randomly sample the function to be optimized
for j, params in enumerate(np.random.uniform(bounds[:, 0], bounds[:, 1], (n_pre_samples, bounds.shape[0]))):
x_list.append(params)
y_list.append(f(params))
#if dis_every and i%dis_every==0: print 'Generating sample %d ...'%i
xp = np.array(x_list)
yp = np.array(y_list)
for i in range(max_iter):
if fit_GP_every and i%fit_GP_every==0: gp_params = fit_GP(xp,yp, gp_params) # sigmaf, l, sigman
next_sample = sample_next_hyperparameter(expected_improvement, xp, yp,gp_params, bounds=bounds,n_restarts=20)
#next_sample = sample_next_hyperparameter(upper_confidence_bound, xp, yp,gp_params, bounds=bounds,kappa=kappa)
# avoid very close points
while np.any(np.abs(next_sample - xp) <= np.finfo(float).eps):
next_sample = np.random.uniform(bounds[:, 0], bounds[:, 1], bounds.shape[0])
cv_score = f(next_sample)
# save previous values for plotting
prev_xp = xp
prev_yp = yp
# Updates
x_list.append(next_sample)
y_list.append(cv_score)
xp = np.array(x_list)
yp = np.array(y_list)
if dis_every and i%dis_every==0:
print 'Iter. # %d - Best Results: '%(i+1),
for j,val in enumerate(next_sample):
#print 'parameter_%d = %2.2f,'%(j,val),
print 'parameter_%d = %2.2f,'%(j+1,xp[np.argmax(yp),j]),
print ' val=%2.2f'%max(yp)
if plot_res:
if xp.shape[1] == 1: # 2D plots
fig = plt.figure(figsize=(8,8))
param_choices = np.arange(bounds[0,0],bounds[0,1],0.1).reshape(-1,xp.shape[1])
y_mean, y_std = GPR(param_choices, prev_xp, prev_yp, gp_params)
EIx = -expected_improvement(param_choices, prev_xp, prev_yp, gp_params, n_params = 1)
#EIx = -upper_confidence_bound(param_choices, prev_xp, prev_yp, gp_params,kappa, n_params = 1)
EIxnorm=np.linalg.norm(EIx)
                if EIxnorm!=0: EIx = EIx/EIxnorm # normalizing for better visualisation
plt.plot(param_choices[:,0], 2*EIx[:,0],'m')
plt.plot(next_sample,f(next_sample),'ro')
plt.plot(param_choices[:,0],f(param_choices[:,0]),'k--')
plt.plot(prev_xp,prev_yp,'b*')
plt.plot(param_choices[:,0], y_mean[:,0],'b')
plt.fill_between(param_choices[:,0], y_mean[:,0]-y_std,y_mean[:,0]+y_std,alpha=0.5, edgecolor='#CC4F1B', facecolor='#FF9848')
plt.title('Iter. #%2d best val so far is %2.2f'%(i+1,max(yp)))
plt.ylim([-1.5,2])
fig.savefig('tmp_BO_%d.png'%i)
plt.show()
else: print 'High dimensional plots not yet implemented!'
best_params = []
print 'Iterations Done... Best Result: ',
for j in range(xp.shape[1]):
print 'parameter_%d = %2.2f,'%(j+1,xp[np.argmax(yp),j]),
best_params.append(xp[np.argmax(yp),j])
    print 'which yields %2.5f !!'%max(yp)
return tuple(best_params),max(yp)
Explanation: Shall Optimize!
End of explanation
#sample function definiton
f = lambda x: np.exp(-abs(x))*np.cos(0.5*np.pi*x)+2*np.exp(-0.5*abs(x))*np.sin(0.7*np.pi*x)
x = np.linspace(-5,5,1000)
y = f(x)
plt.plot(x,y)
plt.axis([-5, 5, -1.5, 2])
plt.show()
print 'real func max is: %2.3f'%np.max(y)
Explanation: 1D Bayesian Optimization
We run BO on our toy 1D function and see that the found max is close to the true maximum of our function
End of explanation
x,y = BayesianOptimization(f, bounds=np.array([[-5,5]]),
max_iter=10, n_pre_samples=2,
gp_params=(0.6, .8, 0.001),fit_GP_every=0,
dis_every=1,plot_res = 0,kappa = 0)
Explanation: Let's see how our BO solves this maximization problem.
You might need to run it couple of times to get the best result, this strategy is not deterministic
End of explanation
# we define a function here and visualize it
from mpl_toolkits.mplot3d import axes3d #it has to be imported
size = 500
sigma_x = 2.5
sigma_y = 2.5
f2 = lambda p: 10000*(1/(2*np.pi*sigma_x*sigma_y) * np.exp(-(p[0]**2/(2*sigma_x**2) + p[1]**2/(2*sigma_y**2)))*np.sin(0.01*p[0]*p[1]))
fig = plt.figure(figsize = (7,7))
ax = fig.add_subplot(111, projection='3d')
x = np.linspace(-10, 10, size)
y = np.linspace(-10, 10, size)
x, y = np.meshgrid(x, y)
z = f2((x,y))
ax.plot_surface(x, y, z, cmap=plt.cm.hot)
plt.show()
print 'real func max is: %2.3f'%np.max(z)
Explanation: 2D Bayesian Optimization
BO on a 2D space with an imaginary function.
End of explanation
x,y = BayesianOptimization(f2, bounds=np.array([[-5,5],[-5,5]]),
max_iter=10, n_pre_samples=3,
gp_params=(0.6, .8, 0.001),fit_GP_every=0,
dis_every=1,kappa = 0)
Explanation: Let's see how close BO can get to the real maximum
End of explanation
# Getting the data
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Create a 2 layer fully connected network
def multilayer_FCN(x, weights, biases):
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['w1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
    # output layer (linear activations; softmax is applied inside the loss below)
layer_2 = tf.add(tf.matmul(layer_1, weights['w2']), biases['b2'])
#layer_2 = tf.nn.relu(layer_2)
return layer_2
# define a function which will construct and train a network with our hyperparameter set
def train_network(learning_rate=0.5,batch_size=100,training_epochs=1,hidden_n=500):
x = tf.placeholder(tf.float32, [None, 784])
weights = {
'w1': tf.Variable(tf.random_normal([784, hidden_n])),
'w2': tf.Variable(tf.random_normal([hidden_n, 10])),
}
biases = {
'b1': tf.Variable(tf.random_normal([hidden_n])),
'b2': tf.Variable(tf.random_normal([10])),
}
y = tf.placeholder(tf.float32, [None, 10])
# Construct model
pred = multilayer_FCN(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size/2)
# Loop over all batches
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
y: batch_y})
# Compute average loss
avg_cost += c / total_batch
# Test model
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
accuracy = accuracy.eval({x: mnist.test.images, y: mnist.test.labels})
return accuracy
fneural = lambda p: train_network(learning_rate=p[0],batch_size=int(p[1]))
#########make it a function after this point
x,y = BayesianOptimization(fneural, bounds=np.array([[0.0,1],[30,400]]),
max_iter=100, n_pre_samples=3,
gp_params=(0.6, .8, 0.001),fit_GP_every=0,
dis_every=10)
Explanation: Optimizing a Neural Network's Hyperparameters with Bayesian Optimization
Choosing the best configuration of a neural network (its hidden layer depth, learning rate, batch size, ...) can be seen as an optimization problem which can also be tackled with Bayesian optimization using Gaussian process regressors. Throughout this section we will test BO's power on this problem.
First we define the initial network from the TensorFlow MNIST tutorial:
End of explanation |
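# Sketch of a possible extension (an assumption -- not run in this notebook): treat
# the hidden-layer width as a third hyperparameter. train_network() above already
# accepts hidden_n, so the objective only needs to cast the integer-valued dimensions.
fneural3 = lambda p: train_network(learning_rate=p[0],
                                   batch_size=int(p[1]),
                                   hidden_n=int(p[2]))
# e.g. BayesianOptimization(fneural3, bounds=np.array([[0.001, 1], [30, 400], [100, 1000]]), ...)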
2,639 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Similarity Queries using Annoy Tutorial
This tutorial is about using the (Annoy Approximate Nearest Neighbors Oh Yeah) library for similarity queries with a Word2Vec model built with gensim.
Why use Annoy?
The current implementation for finding k nearest neighbors in a vector space in gensim has linear complexity via brute force in the number of indexed documents, although with extremely low constant factors. The retrieved results are exact, which is an overkill in many applications
Step1: 1. Download Text8 Corpus
Step2: Import & Set up Logging
I'm not going to set up logging due to the verbose input displaying in notebooks, but if you want that, uncomment the lines in the cell below.
Step3: 2. Build Word2Vec Model
Step5: See the Word2Vec tutorial for how to initialize and save this model.
Comparing the traditional implementation and the Annoy approximation
Step6: This speedup factor is by no means constant and will vary greatly from run to run and is particular to this data set, BLAS setup, Annoy parameters(as tree size increases speedup factor decreases), machine specifications, among other factors.
Note
Step7: Analyzing the results
The closer the cosine similarity of a vector is to 1, the more similar that word is to our query, which was the vector for "science". There are some differences in the ranking of similar words and the set of words included within the 10 most similar words.
4. Verify & Evaluate performance
Persisting Indexes
You can save and load your indexes from/to disk to prevent having to construct them each time. This will create two files on disk, fname and fname.d. Both files are needed to correctly restore all attributes. Before loading an index, you will have to create an empty AnnoyIndexer object.
Step8: Be sure to use the same model at load that was used originally, otherwise you will get unexpected behaviors.
Save memory by memory-mapping indices saved to disk
Annoy library has a useful feature that indices can be memory-mapped from disk. It saves memory when the same index is used by several processes.
Below are two snippets of code. First one has a separate index for each process. The second snipped shares the index between two processes via memory-mapping. The second example uses less total RAM as it is shared.
Step9: Bad Example
Step10: Good example. Two processes load both the Word2vec model and index from disk and memory-map the index
Step11: 5. Evaluate relationship of num_trees to initialization time and accuracy
Step12: Build dataset of Initialization times and accuracy measures
Step13: Plot results | Python Code:
# pip install watermark
%reload_ext watermark
%watermark -v -m -p gensim,numpy,scipy,psutil,matplotlib
Explanation: Similarity Queries using Annoy Tutorial
This tutorial is about using the (Annoy Approximate Nearest Neighbors Oh Yeah) library for similarity queries with a Word2Vec model built with gensim.
Why use Annoy?
The current implementation for finding k nearest neighbors in a vector space in gensim has linear complexity via brute force in the number of indexed documents, although with extremely low constant factors. The retrieved results are exact, which is overkill in many applications: approximate results retrieved in sub-linear time may be enough. Annoy can find approximate nearest neighbors much faster.
Prerequisites
Additional libraries needed for this tutorial:
- annoy
- psutil
- matplotlib
Outline
Download Text8 Corpus
Build Word2Vec Model
Construct AnnoyIndex with model & make a similarity query
Verify & Evaluate performance
Evaluate relationship of num_trees to initialization time and accuracy
End of explanation
import os.path
if not os.path.isfile('text8'):
!wget -c http://mattmahoney.net/dc/text8.zip
!unzip text8.zip
Explanation: 1. Download Text8 Corpus
End of explanation
LOGS = False
if LOGS:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: Import & Set up Logging
I'm not going to set up logging due to the verbose output it produces in notebooks, but if you want that, uncomment the lines in the cell below.
End of explanation
from gensim.models import Word2Vec, KeyedVectors
from gensim.models.word2vec import Text8Corpus
# using params from Word2Vec_FastText_Comparison
lr = 0.05
dim = 100
ws = 5
epoch = 5
minCount = 5
neg = 5
loss = 'ns'
t = 1e-4
# Same values as used for fastText training above
params = {
'alpha': lr,
'size': dim,
'window': ws,
'iter': epoch,
'min_count': minCount,
'sample': t,
'sg': 1,
'hs': 0,
'negative': neg
}
model = Word2Vec(Text8Corpus('text8'), **params)
print(model)
Explanation: 2. Build Word2Vec Model
End of explanation
#Set up the model and vector that we are using in the comparison
try:
from gensim.similarities.index import AnnoyIndexer
except ImportError:
raise ValueError("SKIP: Please install the annoy indexer")
model.init_sims()
annoy_index = AnnoyIndexer(model, 100)
# Dry run to make sure both indices are fully in RAM
vector = model.wv.syn0norm[0]
model.most_similar([vector], topn=5, indexer=annoy_index)
model.most_similar([vector], topn=5)
import time
import numpy as np
def avg_query_time(annoy_index=None, queries=1000):
    Average query time of a most_similar call over `queries` random queries;
    uses Annoy if given an indexer
total_time = 0
for _ in range(queries):
rand_vec = model.wv.syn0norm[np.random.randint(0, len(model.wv.vocab))]
start_time = time.clock()
model.most_similar([rand_vec], topn=5, indexer=annoy_index)
total_time += time.clock() - start_time
return total_time / queries
queries = 10000
gensim_time = avg_query_time(queries=queries)
annoy_time = avg_query_time(annoy_index, queries=queries)
print("Gensim (s/query):\t{0:.5f}".format(gensim_time))
print("Annoy (s/query):\t{0:.5f}".format(annoy_time))
speed_improvement = gensim_time / annoy_time
print ("\nAnnoy is {0:.2f} times faster on average on this particular run".format(speed_improvement))
Explanation: See the Word2Vec tutorial for how to initialize and save this model.
Comparing the traditional implementation and the Annoy approximation
End of explanation
# 100 trees are being used in this example
annoy_index = AnnoyIndexer(model, 100)
# Derive the vector for the word "science" in our model
vector = model["science"]
# The instance of AnnoyIndexer we just created is passed
approximate_neighbors = model.most_similar([vector], topn=11, indexer=annoy_index)
# Neatly print the approximate_neighbors and their corresponding cosine similarity values
print("Approximate Neighbors")
for neighbor in approximate_neighbors:
print(neighbor)
normal_neighbors = model.most_similar([vector], topn=11)
print("\nNormal (not Annoy-indexed) Neighbors")
for neighbor in normal_neighbors:
print(neighbor)
Explanation: This speedup factor is by no means constant and will vary greatly from run to run and is particular to this data set, BLAS setup, Annoy parameters(as tree size increases speedup factor decreases), machine specifications, among other factors.
Note: Initialization time for the annoy indexer was not included in the times. The optimal knn algorithm for you to use will depend on how many queries you need to make and the size of the corpus. If you are making very few similarity queries, the time taken to initialize the annoy indexer will be longer than the time it would take the brute force method to retrieve results. If you are making many queries however, the time it takes to initialize the annoy indexer will be made up for by the incredibly fast retrieval times for queries once the indexer has been initialized
Note : Gensim's 'most_similar' method is using numpy operations in the form of dot product whereas Annoy's method isnt. If 'numpy' on your machine is using one of the BLAS libraries like ATLAS or LAPACK, it'll run on multiple cores(only if your machine has multicore support ). Check SciPy Cookbook for more details.
3. Construct AnnoyIndex with model & make a similarity query
Creating an indexer
An instance of AnnoyIndexer needs to be created in order to use Annoy in gensim. The AnnoyIndexer class is located in gensim.similarities.index
AnnoyIndexer() takes two parameters:
model: A Word2Vec or Doc2Vec model
num_trees: A positive integer. num_trees affects the build time and the index size. A larger value will give more accurate results, but larger indexes. More information on what trees in Annoy do can be found here. The relationship between num_trees, build time, and accuracy will be investigated later in the tutorial.
Now that we are ready to make a query, let's find the top 5 most similar words to "science" in the Text8 corpus. To make a similarity query we call Word2Vec.most_similar like we would traditionally, but with an added parameter, indexer. The only supported indexer in gensim as of now is Annoy.
End of explanation
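# The initialization-time note above suggests a simple break-even estimate: how many
# queries are needed before the one-off index build pays for itself? A rough sketch
# using the per-query timings measured earlier (numbers vary per machine; it assumes
# Annoy is faster per query here, as in the run above).
start = time.time()
_tmp_index = AnnoyIndexer(model, 100)   # one-off construction cost of a 100-tree index
build_time = time.time() - start
print("queries needed to amortise the index build:",
      int(build_time / (gensim_time - annoy_time)))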
fname = 'index'
# Persist index to disk
annoy_index.save(fname)
# Load index back
if os.path.exists(fname):
annoy_index2 = AnnoyIndexer()
annoy_index2.load(fname)
annoy_index2.model = model
# Results should be identical to above
vector = model["science"]
approximate_neighbors2 = model.most_similar([vector], topn=11, indexer=annoy_index2)
for neighbor in approximate_neighbors2:
print(neighbor)
assert approximate_neighbors == approximate_neighbors2
Explanation: Analyzing the results
The closer the cosine similarity of a vector is to 1, the more similar that word is to our query, which was the vector for "science". There are some differences in the ranking of similar words and the set of words included within the 10 most similar words.
4. Verify & Evaluate performance
Persisting Indexes
You can save and load your indexes from/to disk to prevent having to construct them each time. This will create two files on disk, fname and fname.d. Both files are needed to correctly restore all attributes. Before loading an index, you will have to create an empty AnnoyIndexer object.
End of explanation
# Remove verbosity from code below (if logging active)
if LOGS:
logging.disable(logging.CRITICAL)
from multiprocessing import Process
import os
import psutil
Explanation: Be sure to use the same model at load that was used originally, otherwise you will get unexpected behaviors.
Save memory by memory-mapping indices saved to disk
The Annoy library has a useful feature that indices can be memory-mapped from disk. It saves memory when the same index is used by several processes.
Below are two snippets of code. The first one has a separate index for each process. The second snippet shares the index between two processes via memory-mapping. The second example uses less total RAM as the index is shared.
End of explanation
%%time
model.save('/tmp/mymodel')
def f(process_id):
print ('Process Id: ', os.getpid())
process = psutil.Process(os.getpid())
new_model = Word2Vec.load('/tmp/mymodel')
vector = new_model["science"]
annoy_index = AnnoyIndexer(new_model,100)
approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=annoy_index)
print('\nMemory used by process {}: '.format(os.getpid()), process.memory_info(), "\n---")
# Creating and running two parallel process to share the same index file.
p1 = Process(target=f, args=('1',))
p1.start()
p1.join()
p2 = Process(target=f, args=('2',))
p2.start()
p2.join()
Explanation: Bad Example: Two processes load the Word2vec model from disk and create their own Annoy indices from that model.
End of explanation
%%time
model.save('/tmp/mymodel')
def f(process_id):
print('Process Id: ', os.getpid())
process = psutil.Process(os.getpid())
new_model = Word2Vec.load('/tmp/mymodel')
vector = new_model["science"]
annoy_index = AnnoyIndexer()
annoy_index.load('index')
annoy_index.model = new_model
approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=annoy_index)
print('\nMemory used by process {}: '.format(os.getpid()), process.memory_info(), "\n---")
# Creating and running two parallel process to share the same index file.
p1 = Process(target=f, args=('1',))
p1.start()
p1.join()
p2 = Process(target=f, args=('2',))
p2.start()
p2.join()
Explanation: Good example. Two processes load both the Word2vec model and index from disk and memory-map the index
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: 5. Evaluate relationship of num_trees to initialization time and accuracy
End of explanation
exact_results = [element[0] for element in model.most_similar([model.wv.syn0norm[0]], topn=100)]
x_values = []
y_values_init = []
y_values_accuracy = []
for x in range(1, 300, 10):
x_values.append(x)
start_time = time.time()
annoy_index = AnnoyIndexer(model, x)
y_values_init.append(time.time() - start_time)
approximate_results = model.most_similar([model.wv.syn0norm[0]], topn=100, indexer=annoy_index)
top_words = [result[0] for result in approximate_results]
y_values_accuracy.append(len(set(top_words).intersection(exact_results)))
Explanation: Build dataset of Initialization times and accuracy measures
End of explanation
plt.figure(1, figsize=(12, 6))
plt.subplot(121)
plt.plot(x_values, y_values_init)
plt.title("num_trees vs initalization time")
plt.ylabel("Initialization time (s)")
plt.xlabel("num_trees")
plt.subplot(122)
plt.plot(x_values, y_values_accuracy)
plt.title("num_trees vs accuracy")
plt.ylabel("% accuracy")
plt.xlabel("num_trees")
plt.tight_layout()
plt.show()
Explanation: Plot results
End of explanation |
2,640 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
risklearning demo
Most, if not all, operational risk capital models assume the existence of stationary frequency and severity distributions (typically Poisson for frequencies, and a subexponential distribution such as lognormal for severities). Yet every quarter (or whenever the model is recalibrated) risk capital goes up almost without fail, either because frequencies increase, severities increase or both.
The assumption of stationary distributions is just one limitation of current approaches to operational risk modeling, but it offers a good inroad for modeling approaches beyond the usual actuarial model typical in operational capital models.
In this notebook, we give a first example of how neural networks can overcome the stationarity assumptions of traditional approaches. The hope is that this is but one of many examples showing a better way to model operational risk.
Note
Step1: Set up frequency distribution to generate samples
Step2: MLE for training data
For the Poisson distribution, the MLE of the intensity (here lambda) is just the average of the counts per model horizon. In practice, OpRisk models sometimes take a weighted average, with the weight linearly decreasing over a period of years (see e.g. "LDA at Work" by Aue and Kalkbrener).
Step3: Prep simulated losses for neural network
For example
Use one-hot-encoding for L1 and L2 categories (this will make more sense once we look at multiple dependent categories)
Bin count data
Normalize tenors (i.e. scale so that first tenor maps to -1 with 0 preserved)
Export as numpy arrays to feed into keras / tensorflow
Step4: Set up the network architecture and train
We use keras with TensorFlow backend. Later we will look at optimizing metaparameters.
Step5: Evaluating the neural network
Let's see now how the neural network tracks the true distribution over time, and compare with the MLE fitted distribution.
We do this both numerically (Kullback-Leibler divergance) and graphically.
Step6: Optimizing network architecture | Python Code:
import risklearning.learning_frequency as rlf
reload(rlf)
import pandas as pd
import numpy as np
import scipy.stats as stats
import math
import matplotlib.style
matplotlib.style.use('ggplot')
import ggplot as gg
%matplotlib inline
Explanation: risklearning demo
Most, if not all, operational risk capital models assume the existence of stationary frequency and severity distributions (typically Poisson for frequencies, and a subexponential distribution such as lognormal for severities). Yet every quarter (or whenever the model is recalibrated) risk capital goes up almost without fail, either because frequencies increase, severities increase or both.
The assumption of stationary distributions is just one limitation of current approaches to operational risk modeling, but it offers a good inroad for modeling approaches beyond the usual actuarial model typical in operational capital models.
In this notebook, we give a first example of how neural networks can overcome the stationarity assumptions of traditional approaches. The hope is that this is but one of many examples showing a better way to model operational risk.
Note: What follows is very much a work in progress . . .
End of explanation
# Read in Poisson parameters used to simulate loss counts
lambdas_df = pd.read_csv('data/lambdas_tcem_1d.csv')
lambda_start = lambdas_df['TCEM'][0]
lambda_end = lambdas_df['TCEM'].tail(1).iloc[0]
print('Lambda start value: {}, lambda end value: {}'.format(lambda_start, lambda_end))
lambda_ts = lambdas_df['TCEM']
# Read in simulated loss counts
counts_sim_df = pd.read_csv('data/tcem_1d.csv')
# EDPM: Execution, Delivery and Process Management
# TCEM: Transaction Capture, Execution and Maintenance--think fat-finger mistake
counts_sim_df.head()
#%% Do MLE (simple average for Poisson process
t_start = np.min(counts_sim_df['t'])
t_end = np.max(counts_sim_df['t'])
n_tenors_train = -t_start
n_tenors_test = t_end
counts_train = (counts_sim_df[counts_sim_df.t < 0]).groupby('L2_cat').sum()
counts_test = (counts_sim_df[counts_sim_df.t >= 0]).groupby('L2_cat').sum()
Explanation: Set up frequency distribution to generate samples
End of explanation
lambdas_train = counts_train['counts']/n_tenors_train
lambdas_test = counts_test['counts']/n_tenors_test
bin_tops = [1,2,3,4,5,6,7,8,9,10,15,101]
# Recall that digitize (used later) defines bins by lower <= x < upper
count_tops =[count - 1 for count in bin_tops]
# Calculate bin probabilities from MLE poisson
poi_mle = stats.poisson(lambdas_train)
poi_bins = rlf.bin_probs(poi_mle, bin_tops)
mle_probs = pd.DataFrame({'Count Top': count_tops, 'Probs': poi_bins})
# For later comparison
mle_probs_vals = list(mle_probs.Probs)
Explanation: MLE for training data
For the Poisson distribution, the MLE of the intensity (here lambda) is just the average of the counts per model horizon. In practice, OpRisk models sometimes take a weighted average, with the weight linearly decreasing over a period of years (see e.g. "LDA at Work" by Aue and Kalkbrener).
End of explanation
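# The text above mentions a weighted-average alternative to the plain Poisson MLE
# (weights decreasing linearly for older observations). A minimal sketch of that
# estimator -- an assumption for illustration, not part of the risklearning package:
def weighted_poisson_mle(counts_per_tenor):
    n = len(counts_per_tenor)
    w = np.linspace(0.5, 1.0, num=n)   # oldest tenor -> smallest weight
    return np.average(counts_per_tenor, weights=w)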
import warnings
warnings.filterwarnings('ignore') # TODO: improve slicing to avoid warnings
x_train, y_train, x_test, y_test = rlf.prep_count_data(counts_sim_df, bin_tops)
Explanation: Prep simulated losses for neural network
For example
Use one-hot-encoding for L1 and L2 categories (this will make more sense once we look at multiple dependent categories)
Bin count data
Normalize tenors (i.e. scale so that first tenor maps to -1 with 0 preserved)
Export as numpy arrays to feed into keras / tensorflow
End of explanation
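# For intuition on the binning step above: a helper like rlf.bin_probs presumably
# reduces to CDF differences over the bin edges (bins are lower <= x < upper, as the
# digitize comment earlier notes). A stand-alone sketch -- the actual risklearning
# implementation may differ, e.g. in how it treats the open-ended top bin:
def bin_probs_sketch(dist, bin_tops):
    edges = [0] + list(bin_tops)
    return [dist.cdf(hi - 1) - dist.cdf(lo - 1) for lo, hi in zip(edges[:-1], edges[1:])]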
#from keras.optimizers import SGD
#sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
# rl_train_net is a wrapper for standard keras functionality that
# makes it easier below to optimize hyperparameters
rl_net = rlf.rl_train_net(x_train, y_train, x_test, y_test, [150], \
n_epoch = 300, optimizer = 'adagrad')
proba = rl_net['probs_nn']
Explanation: Set up the network architecture and train
We use keras with TensorFlow backend. Later we will look at optimizing metaparameters.
End of explanation
#% Convert proba from wide to long and append to other probs
mle_probs_vals = list(mle_probs.Probs)
# TODO: Missing last tenor in nn proba (already in x_test, y_test)
probs_list = []
kl_mle_list = []
kl_nn_list = []
for t in range(proba.shape[0]):
nn_probs_t = proba[t]
true_bins_t = rlf.bin_probs(stats.poisson(lambda_ts[-t_start+t]), bin_tops)
probs_t = pd.DataFrame({'Tenor': t, 'Count Top': count_tops, \
'Probs True': true_bins_t, \
'Probs NN': nn_probs_t, \
'Probs MLE': mle_probs_vals}, \
index = range(t*len(count_tops), \
t*len(count_tops) + len(count_tops)))
probs_list.append(probs_t)
# Calculate KL divergences
kl_mle_list.append(stats.entropy(true_bins_t, mle_probs_vals))
kl_nn_list.append(stats.entropy(true_bins_t, nn_probs_t))
probs = pd.concat(probs_list)
probs_tail = probs[probs.Tenor > 360 ]
gg.ggplot(probs_tail, gg.aes(x='Count Top',weight='Probs True')) \
+ gg.facet_grid('Tenor') \
+ gg.geom_bar() \
+ gg.geom_step(gg.aes(y='Probs MLE', color = 'red')) \
+ gg.geom_step(gg.aes(y='Probs NN', color = 'blue')) \
+ gg.scale_x_continuous(limits = (0,len(count_tops)))
# KL divergences
kl_df = pd.DataFrame({'Tenor': range(0, t_end+1), \
'KL MLE': kl_mle_list, \
'KL NN': kl_nn_list})
print kl_df.head()
print kl_df.tail()
#%
# Plot KL divergences
gg.ggplot(kl_df, gg.aes(x='Tenor')) \
+ gg.geom_step(gg.aes(y='KL MLE', color = 'red')) \
+ gg.geom_step(gg.aes(y='KL NN', color = 'blue'))
Explanation: Evaluating the neural network
Let's see now how the neural network tracks the true distribution over time, and compare with the MLE fitted distribution.
We do this both numerically (Kullback-Leibler divergence) and graphically.
End of explanation
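# For reference: scipy's stats.entropy(p, q), used above, returns the Kullback-Leibler
# divergence D_KL(p || q) = sum_i p_i * log(p_i / q_i) (natural log), with p and q
# normalised to sum to one. Quick sanity check on toy distributions:
p_demo = np.array([0.5, 0.3, 0.2])
q_demo = np.array([0.4, 0.4, 0.2])
print stats.entropy(p_demo, q_demo), np.sum(p_demo * np.log(p_demo / q_demo))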
# More systematically with NN architecture
# Loop over different architectures, create panel plot
neurons_list = [10, 20,50,100, 150, 200]
#neurons_list = [10, 20,50]
depths_list = [1,2,3]
optimizer = 'adagrad'
#%%
kl_df_list = []
for depth in depths_list:
for n_neurons in neurons_list:
nn_arch = [n_neurons]*depth
print("Training " + str(depth) + " layer(s) of " + str(n_neurons) + " neurons")
rl_net = rlf.rl_train_net(x_train, y_train, x_test, y_test, nn_arch, \
n_epoch = 300, optimizer = optimizer)
proba = rl_net['probs_nn']
print("\nPredicting with " + str(depth) + " layer(s) of " + str(n_neurons) + " neurons")
probs_kl_dict = rlf.probs_kl(proba, lambda_ts, t_start, t_end+1, bin_tops, mle_probs_vals)
probs = probs_kl_dict['Probs']
kl_df_n = probs_kl_dict['KL df']
kl_df_n['Hidden layers'] = depth
kl_df_n['Neurons per layer'] = n_neurons
kl_df_n['Architecture'] = str(depth) + '_layers_of_' + str(n_neurons) \
+ '_neurons'
kl_df_list.append(kl_df_n)
#%%
kl_df_hyper = pd.concat(kl_df_list)
# Plot
kl_mle = kl_df_n['KL MLE'] # These values are constant over the above loops (KL between MLE and true distribution)
for depth in depths_list:
kl_df_depth = kl_df_hyper[kl_df_hyper['Hidden layers'] == depth]
kl_depth_vals = kl_df_depth.pivot(index = 'Tenor', columns = 'Neurons per layer', values = 'KL NN')
kl_depth_vals['KL MLE'] = kl_mle
kl_depth_vals.plot(title = 'Kullback-Leibler divergences from true distribution \n for ' \
+ str(depth) + ' hidden layer(s)', \
figsize = (16,10))
# Try again, but now with RMSprop
neurons_list = [10, 20,50]
#neurons_list = [50]
depths_list = [2,3]
optimizer = 'RMSprop'
#%%
kl_df_list = []
for depth in depths_list:
for n_neurons in neurons_list:
nn_arch = [n_neurons]*depth
print("Training " + str(depth) + " layer(s) of " + str(n_neurons) + " neurons")
rl_net = rlf.rl_train_net(x_train, y_train, x_test, y_test, nn_arch, \
n_epoch = 300, optimizer = optimizer)
proba = rl_net['probs_nn']
print("\nPredicting with " + str(depth) + " layer(s) of " + str(n_neurons) + " neurons")
probs_kl_dict = rlf.probs_kl(proba, lambda_ts, t_start, t_end+1, bin_tops, mle_probs_vals)
probs = probs_kl_dict['Probs']
kl_df_n = probs_kl_dict['KL df']
kl_df_n['Hidden layers'] = depth
kl_df_n['Neurons per layer'] = n_neurons
kl_df_n['Architecture'] = str(depth) + '_layers_of_' + str(n_neurons) \
+ '_neurons'
kl_df_list.append(kl_df_n)
#%%
kl_df_hyper = pd.concat(kl_df_list)
# Plot
kl_mle = kl_df_n['KL MLE'] # These values are constant over the above loops (KL between MLE and true distribution)
for depth in depths_list:
kl_df_depth = kl_df_hyper[kl_df_hyper['Hidden layers'] == depth]
kl_depth_vals = kl_df_depth.pivot(index = 'Tenor', columns = 'Neurons per layer', values = 'KL NN')
kl_depth_vals['KL MLE'] = kl_mle
kl_depth_vals.plot(title = 'Kullback-Leibler divergences from true distribution \n for ' \
+ str(depth) + ' hidden layer(s)', \
figsize = (16,10))
Explanation: Optimizing network architecture
End of explanation |
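# A compact numerical summary of the sweep above: average the KL divergence of each
# architecture over the test tenors and rank them (column names as created in the
# loops above; this ranks whichever sweep kl_df_hyper currently holds).
arch_ranking = kl_df_hyper.groupby('Architecture')['KL NN'].mean().sort_values()
print arch_ranking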
2,641 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Serialising the Stars
Noodles lets you run jobs remotely and store/retrieve results in case of duplicate jobs or reruns. These features rely on the serialisation (and not unimportant, reconstruction) of all objects that are passed between scheduled functions. Serialisation refers to the process of turning any object into a stream of bytes from which we can reconstruct a functionally identical object. "Easy enough!" you might think, just use pickle.
Step1: However pickle cannot serialise all objects ... "Use dill!" you say; still the pickle/dill method of serializing is rather indiscriminate. Some of our objects may contain runtime data we can't or don't want to store, coroutines, threads, locks, open files, you name it. We work with a Sqlite3 database to store our data. An application might store gigabytes of numerical data. We don't want those binary blobs in our database, rather to store them externally in a HDF5 file.
There are many cases where a more fine-grained control of serialisation is in order. The bottom line being, that there is no silver bullet solution. Here we show some examples on how to customize the Noodles serialisation mechanism.
The registry
Noodles keeps a registry of Serialiser objects that know exactly how to serialise and reconstruct objects. This registry is specified to the backend when we call the one of the run functions. To make the serialisation registry visible to remote parties it is important that the registry can be imported. This is why it has to be a function of zero arguments (a thunk) returning the actual registry object.
```python
def registry()
Step2: Let's see what is made of our objects!
Step3: Great! JSON compatible data stays the same. Now try an object that JSON doesn't know about.
Step5: Objects are encoded as a dictionary containing a '_noodles' key. So what will happen if we serialise an object the registry cannot possibly know about? Next we define a little astronomical class describing a star in the Morgan-Keenan classification scheme.
Step8: The registry obviously doesn't know about Stars, so it falls back to serialisation using pickle. The pickled data is further encoded using base64. This solution won't work if some of your data cannot be pickled. Also, if you're sensitive to aesthetics, the pickled output doesn't look very nice.
serialize and construct
One way to take control of the serialisation of your objects is to add the __serialize__ and __construct__ methods.
Step9: The class became quite a bit bigger. However, the __str__, __repr__ and from_string methods are part of an interface you'd normally implement to make your class more useful.
Step10: The __serialize__ method takes one argument (besides self). The argument pack is a function that creates the data record with all handles attached. The reason for this construct is that it takes keyword arguments for special cases.
python
def pack(data, ref=None, files=None)
Step13: Data classes
Since Python 3.7, it is possible to define classes that are meant to contain "just data" as a dataclass. We'll forgo any data validation at this point.
Step14: Data classes are recognised by Noodles and will be automatically serialised.
Step16: Writing a Serialiser class (example with large data)
Often, the class that needs serialising is not from your own package. In that case we need to write a specialised Serialiser class. For this purpose it may be nice to see how to serialise a Numpy array. This code is already in Noodles; we will look at a trimmed down version.
Given a NumPy array, we need to do two things
Step18: Is this useable for large data? Let's see how this scales (code to generate this plot is below)
Step20: And put it all together in a class derived from SerArray.
Step21: We have to insert the serialiser into a new registry.
Step22: Now we can serialise our first Numpy array!
Step23: Now, we should be able to read back the data directly from the HDF5.
Step24: We have set the ref property to True, we can now read back the serialised object without dereferencing. This will result in a placeholder object containing only the encoded data
Step25: If we want to retrieve the data we should run from_json with deref=True
Step26: Appendix A
Step27: The following code will parse the stellar types we used before
Step29: Appendix B | Python Code:
from noodles.tutorial import display_text
import pickle
function = pickle.dumps(str.upper)
message = pickle.dumps("Hello, World!")
display_text("function: " + str(function))
display_text("message: " + str(message))
pickle.loads(function)(pickle.loads(message))
Explanation: Serialising the Stars
Noodles lets you run jobs remotely and store/retrieve results in case of duplicate jobs or reruns. These features rely on the serialisation (and not unimportant, reconstruction) of all objects that are passed between scheduled functions. Serialisation refers to the process of turning any object into a stream of bytes from which we can reconstruct a functionally identical object. "Easy enough!" you might think, just use pickle.
End of explanation
import noodles
def registry():
return noodles.serial.pickle() \
+ noodles.serial.base()
reg = registry()
Explanation: However pickle cannot serialise all objects ... "Use dill!" you say; still the pickle/dill method of serializing is rather indiscriminate. Some of our objects may contain runtime data we can't or don't want to store, coroutines, threads, locks, open files, you name it. We work with a Sqlite3 database to store our data. An application might store gigabytes of numerical data. We don't want those binary blobs in our database, rather to store them externally in a HDF5 file.
There are many cases where a more fine-grained control of serialisation is in order. The bottom line being, that there is no silver bullet solution. Here we show some examples on how to customize the Noodles serialisation mechanism.
The registry
Noodles keeps a registry of Serialiser objects that know exactly how to serialise and reconstruct objects. This registry is specified to the backend when we call one of the run functions. To make the serialisation registry visible to remote parties it is important that the registry can be imported. This is why it has to be a function of zero arguments (a thunk) returning the actual registry object.
```python
def registry():
return Registry(...)
run(workflow,
db_file='project-cache.db',
registry=registry)
```
The registry that should always be included is noodles.serial.base. This registry knows how to serialise basic Python dictionaries, lists, tuples, sets, strings, bytes, slices and all objects that are internal to Noodles. Special care is taken with objects that have a __name__ attached and can be imported using the __module__.__name__ combination.
Registries can be composed using the + operator. For instance, suppose we want to use pickle as a default option for objects that are not in noodles.serial.base:
End of explanation
display_text(reg.to_json([
"These data are JSON compatible!", 0, 1.3, None,
{"dictionaries": "too!"}], indent=2))
Explanation: Let's see what is made of our objects!
End of explanation
display_text(reg.to_json({1, 2, 3}, indent=2), [1])
Explanation: Great! JSON compatible data stays the same. Now try an object that JSON doesn't know about.
End of explanation
class Star(object):
Morgan-Keenan stellar classification.
def __init__(self, spectral_type, number, luminocity_class):
assert spectral_type in "OBAFGKM"
assert number in range(10)
self.spectral_type = spectral_type
self.number = number
self.luminocity_class = luminocity_class
rigel = Star('B', 8, 'Ia')
display_text(reg.to_json(rigel, indent=2), [4], max_width=60)
Explanation: Objects are encoded as a dictionary containing a '_noodles' key. So what will happen if we serialise an object the registry cannot possibly know about? Next we define a little astronomical class describing a star in the Morgan-Keenan classification scheme.
End of explanation
class Star(object):
Morgan-Keenan stellar classification.
def __init__(self, spectral_type, number, luminocity_class):
assert spectral_type in "OBAFGKM"
assert number in range(10)
self.spectral_type = spectral_type
self.number = number
self.luminocity_class = luminocity_class
def __str__(self):
return f'{self.spectral_type}{self.number}{self.luminocity_class}'
def __repr__(self):
return f'Star.from_string(\'{str(self)}\')'
@staticmethod
def from_string(string):
Construct a new Star from a string describing the stellar type.
return Star(string[0], int(string[1]), string[2:])
def __serialize__(self, pack):
return pack(str(self))
@classmethod
def __construct__(cls, data):
return Star.from_string(data)
Explanation: The registry obviously doesn't know about Stars, so it falls back to serialisation using pickle. The pickled data is further encoded using base64. This solution won't work if some of your data cannot be pickled. Also, if you're sensitive to aesthetics, the pickled output doesn't look very nice.
serialize and construct
One way to take control of the serialisation of your objects is to add the __serialize__ and __construct__ methods.
End of explanation
sun = Star('G', 2, 'V')
print("The Sun is a", sun, "type star.")
encoded_star = reg.to_json(sun, indent=2)
display_text(encoded_star, [4])
Explanation: The class became quite a bit bigger. However, the __str__, __repr__ and from_string methods are part of an interface you'd normally implement to make your class more useful.
End of explanation
decoded_star = reg.from_json(encoded_star)
display_text(repr(decoded_star))
Explanation: The __serialize__ method takes one argument (besides self). The argument pack is a function that creates the data record with all handles attached. The reason for this construct is that it takes keyword arguments for special cases.
python
def pack(data, ref=None, files=None):
pass
The ref argument, if given as True, will make sure that this object will not get reconstructed unnecessarily. One instance where this is incredibly useful is if the object is a multi-gigabyte Numpy array.
The files argument, when given, should be a list of filenames. This makes sure Noodles knows about the involvement of external files.
The data passed to pack may be of any type, as long as the serialisation registry knows how to serialise it.
The __construct__ method must be a class method. The data argument it is given can be expected to be identical to the data passed to the pack function at serialisation.
End of explanation
from dataclasses import dataclass, is_dataclass
@dataclass
class Star:
Morgan-Keenan stellar classification.
spectral_type: str
number: int
luminocity_class: str
def __str__(self):
return f'{self.spectral_type}{self.number}{self.luminocity_class}'
@staticmethod
def from_string(string):
Construct a new Star from a string describing the stellar type.
return Star(string[0], int(string[1]), string[2:])
Explanation: Data classes
Since Python 3.7, it is possible to define classes that are meant to contain "just data" as a dataclass. We'll forgo any data validation at this point.
End of explanation
altair = Star.from_string("A7V")
encoded_star = reg.to_json(altair, indent=2)
display_text(encoded_star, [2])
Explanation: Data classes are recognised by Noodles and will be automatically serialised.
End of explanation
import numpy
import hashlib
import base64
def array_sha256(a):
Create a SHA256 hash from a Numpy array.
dtype = str(a.dtype).encode()
shape = numpy.array(a.shape)
sha = hashlib.sha256()
sha.update(dtype)
sha.update(shape)
sha.update(a.tobytes())
return base64.urlsafe_b64encode(sha.digest()).decode()
Explanation: Writing a Serialiser class (example with large data)
Often, the class that needs serialising is not from your own package. In that case we need to write a specialised Serialiser class. For this purpose it may be nice to see how to serialise a Numpy array. This code is already in Noodles; we will look at a trimmed down version.
Given a NumPy array, we need to do two things:
Generate a token by which to identify the array; we will use a SHA-256 hash to do this.
Store the array efficiently; the HDF5 file format is perfectly suited.
SHA-256
We need to hash the combination of datatype, array shape and the binary data:
End of explanation
import h5py
def save_array_to_hdf5(filename, lock, array):
Save an array to a HDF5 file, using the SHA-256 of the array
data as path within the HDF5. The `lock` is needed to prevent
simultaneous access from multiple threads.
hdf5_path = array_sha256(array)
with lock, h5py.File(filename) as hdf5_file:
if not hdf5_path in hdf5_file:
dataset = hdf5_file.create_dataset(
hdf5_path, shape=array.shape, dtype=array.dtype)
dataset[...] = array
hdf5_file.close()
return hdf5_path
Explanation: Is this usable for large data? Let's see how this scales (code to generate this plot is below):
So on my laptop, hashing an array of ~1 GB takes a little over three seconds, and it scales almost perfectly linearly. Next we define the storage routine (and a loading routine, but that's a one-liner).
End of explanation
import filelock
from noodles.serial import Serialiser, Registry
class SerArray(Serialiser):
Serialises Numpy array to HDF5 file.
def __init__(self, filename, lockfile):
super().__init__(numpy.ndarray)
self.filename = filename
self.lock = filelock.FileLock(lockfile)
def encode(self, obj, pack):
key = save_array_to_hdf5(self.filename, self.lock, obj)
return pack({
"filename": self.filename,
"hdf5_path": key,
}, files=[self.filename], ref=True)
def decode(self, cls, data):
with self.lock, h5py.File(self.filename) as hdf5_file:
return hdf5_file[data["hdf5_path"]].value
Explanation: And put it all together in a class derived from SerArray.
End of explanation
!rm -f tutorial.h5 # remove from previous run
import noodles
from noodles.tutorial import display_text
def registry():
return Registry(
parent=noodles.serial.base(),
types={
numpy.ndarray: SerArray('tutorial.h5', 'tutorial.lock')
})
reg = registry()
Explanation: We have to insert the serialiser into a new registry.
End of explanation
encoded_array = reg.to_json(numpy.arange(10), host='localhost', indent=2)
display_text(encoded_array, [6])
Explanation: Now we can serialise our first Numpy array!
End of explanation
with h5py.File('tutorial.h5') as f:
result = f['4Z8kdMg-CbjgTKKYlz6b-_-Tsda5VAJL44OheRB10mU='][()]
print(result)
Explanation: Now, we should be able to read back the data directly from the HDF5.
End of explanation
ref = reg.from_json(encoded_array)
display_text(ref)
display_text(vars(ref), max_width=60)
Explanation: We have set the ref property to True, we can now read back the serialised object without dereferencing. This will result in a placeholder object containing only the encoded data:
End of explanation
display_text(reg.from_json(encoded_array, deref=True))
Explanation: If we want to retrieve the data we should run from_json with deref=True:
End of explanation
!pip install pyparsing
Explanation: Appendix A: better parsing
If you're interested in doing a bit better in parsing generic expressions into objects, take a look at pyparsing.
End of explanation
from pyparsing import Literal, replaceWith, OneOrMore, Word, nums, oneOf
def roman_numeral_literal(string, value):
return Literal(string).setParseAction(replaceWith(value))
one = roman_numeral_literal("I", 1)
four = roman_numeral_literal("IV", 4)
five = roman_numeral_literal("V", 5)
roman_numeral = OneOrMore(
(five | four | one).leaveWhitespace()) \
.setName("roman") \
.setParseAction(lambda s, l, t: sum(t))
integer = Word(nums) \
.setName("integer") \
.setParseAction(lambda t:int(t[0]))
mkStar = oneOf(list("OBAFGKM")) + integer + roman_numeral
list(mkStar.parseString('B2IV'))
roman_class = {
'I': 'supergiant',
'II': 'bright giant',
'III': 'regular giant',
'IV': 'sub-giants',
'V': 'main-sequence',
'VI': 'sub-dwarfs',
'VII': 'white dwarfs'
}
Explanation: The following code will parse the stellar types we used before:
End of explanation
import timeit
import matplotlib.pyplot as plt
plt.rcParams['font.family'] = "serif"
from scipy import stats
def benchmark(size, number=10):
Measure performance of SHA-256 hashing large arrays.
data = numpy.random.uniform(size=size)
return timeit.timeit(
stmt=lambda: array_sha256(data),
number=number) / number
sizes = numpy.logspace(10, 25, 16, base=2, dtype=int)
timings = numpy.array([[benchmark(size, 1) for size in sizes]
for i in range(10)])
sizes_MB = sizes * 8 / 1e6
timings_ms = timings.mean(axis=0) * 1000
timings_err = timings.std(axis=0) * 1000
slope, intercept, _, _, _ = stats.linregress(
numpy.log(sizes_MB[5:]),
numpy.log(timings_ms[5:]))
print("scaling:", slope, "(should be ~1)")
print("speed:", numpy.exp(-intercept), "GB/s")
ax = plt.subplot(111)
ax.set_xscale('log', nonposx='clip')
ax.set_yscale('log', nonposy='clip')
ax.plot(sizes_MB, numpy.exp(intercept) * sizes_MB,
label='{:.03} GB/s'.format(numpy.exp(-intercept)))
ax.errorbar(sizes_MB, timings_ms, yerr=timings_err,
marker='.', ls=':', c='k', label='data')
ax.set_xlabel('size ($MB$)')
ax.set_ylabel('time ($ms$)')
ax.set_title('SHA-256 performance', fontsize=10)
ax.legend()
plt.savefig('sha256-performance.svg')
plt.show()
Explanation: Appendix B: measuring SHA-256 performance
End of explanation |
2,642 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Noodles
Easy concurrent programming <s>in</s> using Python
Johan Hidding, Thursday 19-11-2015 @ NLeSC
Step1: But, why?
save time user's time
be flexible
Alternatives
What we discussed
Step2: Our fledgeling Python script kiddie then enters the following code
Step3: resulting in this workflow
Step4: How does it work?
Decorate functions to build a workflow
Use any back-end to run on
The decorator
Step5: Mocking a 'real' Python object
Step6: Merging workflows into a function call
Step7: eeeehm, What can we do (sort of)?
embarrassingly parallel loops
embedded workflows
empirical member assignment
loops
Step8:
Step9: embedded workflows
Step10:
Step11: Using objects
Golden rule
if you change something, return it
Step12:
Step13: | Python Code:
from noodles import schedule, run, run_parallel, gather
Explanation: Noodles
Easy concurrent programming <s>in</s> using Python
Johan Hidding, Thursday 19-11-2015 @ NLeSC
End of explanation
@schedule
def add(a, b):
return a+b
@schedule
def sub(a, b):
return a-b
@schedule
def mul(a, b):
return a*b
Explanation: But, why?
save time: the user's time
be flexible
Alternatives
What we discussed: Taverna, KNIME, Pegasus etc.
Celery
IPyParallel
Fireworks
Hadoop / Spark
Noodles parable (thank you Oscar!)
start with example
We start with a few functions that happen to exist somewhere out there
End of explanation
u = add(5, 4)
v = sub(u, 3)
w = sub(u, 2)
x = mul(v, w)
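# draw_workflow is a small plotting helper used throughout this presentation
# (presumably defined or imported in a cell not shown here); it renders the
# workflow graph behind a PromisedObject to an image file.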
draw_workflow('callgraph1.png', x._workflow)
Explanation: Our fledgeling Python script kiddie then enters the following code
End of explanation
run_parallel(x, n_threads = 2)
Explanation: resulting in this workflow:
We may run this in parallel!
End of explanation
def schedule(f):
@wraps(f)
def wrapped(*args, **kwargs):
bound_args = signature(f).bind(*args, **kwargs)
bound_args.apply_defaults()
return PromisedObject(merge_workflow(f, bound_args))
return wrapped
Explanation: How does it work?
Decorate functions to build a workflow
Use any back-end to run on
The decorator
End of explanation
class PromisedObject:
def __init__(self, workflow):
self._workflow = workflow
def __call__(self, *args, **kwargs):
return _do_call(self._workflow, *args, **kwargs)
def __getattr__(self, attr):
if attr[0] == '_':
return self.__dict__[attr]
return _getattr(self._workflow, attr)
def __setattr__(self, attr, value):
if attr[0] == '_':
self.__dict__[attr] = value
return
self._workflow = get_workflow(_setattr(self._workflow, attr, value))
Explanation: Mocking a 'real' Python object
End of explanation
def merge_workflow(f, bound_args):
variadic = next((x.name for x in bound_args.signature.parameters.values()
if x.kind == Parameter.VAR_POSITIONAL), None)
if variadic:
bound_args.arguments[variadic] = list(bound_args.arguments[variadic])
node = FunctionNode(f, bound_args)
idx = id(node)
nodes = {idx: node}
links = {idx: set()}
for address in serialize_arguments(bound_args):
workflow = get_workflow(
ref_argument(bound_args, address))
if not workflow:
continue
set_argument(bound_args, address, Parameter.empty)
for n in workflow.nodes:
if n not in nodes:
nodes[n] = workflow.nodes[n]
links[n] = set()
links[n].update(workflow.links[n])
links[workflow.top].add((idx, address))
return Workflow(id(node), nodes, links)
Explanation: Merging workflows into a function call
End of explanation
from noodles import schedule, run, run_parallel, gather
@schedule
def sum(a, buildin_sum = sum):
return buildin_sum(a)
r1 = add(1, 1)
r2 = sub(3, r1)
def foo(a, b, c):
return mul(add(a, b), c)
multiples = [foo(i, r2, r1) for i in range(6)]
r5 = sum(gather(*multiples))
draw_workflow('callgraph2.png', r5._workflow)
Explanation: eeeehm, What can we do (sort of)?
embarrassingly parallel loops
embedded workflows
empirical member assignment
loops
End of explanation
run_parallel(r5, n_threads = 4)
Explanation:
End of explanation
@schedule
def sqr(a):
return a*a
@schedule
def map(f, lst):
return gather(*[f(x) for x in lst])
@schedule
def num_range(a, b):
return range(a, b)
wf = sum(map(sqr, num_range(0, 1000)))
draw_workflow('callgraph3.png', wf._workflow)
Explanation: embedded workflows
End of explanation
run_parallel(wf, n_threads=4)
Explanation:
End of explanation
@schedule
class A:
def __init__(self, value):
self.value = value
def multiply(self, factor):
self.value *= factor
a = A(5)
a.multiply(10)
a.second = 7
draw_workflow("callgraph4.png", a._workflow)
Explanation: Using objects
Golden rule
if you change something, return it
End of explanation
@schedule
class A:
def __init__(self, value):
self.value = value
def multiply(self, factor):
self.value *= factor
return self
a = A(5)
a = a.multiply(10)
a.second = 7
draw_workflow("callgraph5.png", a._workflow)
Explanation:
End of explanation
result = run_parallel(a, n_threads=4)
print(result.value, result.second)
Explanation:
End of explanation |
2,643 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Connecting to Database
Step1: LOGISTIC REGRESSION
Step2: Logistic Regression - Success
Logistic Regression - MULTIPLE
Step3: RANDOM FOREST
Random Forest- MULTIPLE
Step4: Random Forest- SUCCESS
Step5: Preventing Overfitting of the tree for multiple model
The results are different now due to the different sample used from here as compared to when we built the model shown during presentation; as such, results may vary slightly
Step6: ASSOCIATION RULES
Step7: Violin Plot Visualisastions | Python Code:
import pandas as pd
import numpy as np
terror = pd.read_csv('file.csv', encoding='ISO-8859-1')
cleanedforuse = terror.filter(['imonth', 'iday', 'region','property','propextent','attacktype1','weaptype1','nperps','success','multiple','specificity'])
final = cleanedforuse[~np.isnan(cleanedforuse).any(axis=1)]
final.head()
import sqlite3
conn = sqlite3.connect('Terrorisks.db')
final.to_sql('final',con=conn, flavor='sqlite', if_exists='replace')
df = pd.read_sql_query('SELECT * FROM final', conn)
df.head(10)
Explanation: Connecting to Database
End of explanation
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from patsy import dmatrices
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import train_test_split
from sklearn import metrics
from sklearn.cross_validation import cross_val_score
from sklearn.metrics import roc_curve, auc
y, X = dmatrices('success ~ C(imonth) + C(iday) + region + C(property) + C(propextent) + C(attacktype1) + C(weaptype1)+ C(nperps) + specificity', df, return_type="dataframe")
print(y)
y = np.ravel(y)
# instantiate a logistic regression model, and fit with X and y
model = LogisticRegression()
model = model.fit(X, y)
# what percentage of attacks were successful?
print("Benchmark:")
b = y.mean()
print(b)
# check the accuracy on the training set
a = model.score(X, y)
print("Score:")
print(a)
model.coef_
# evaluate the model by splitting into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model2 = LogisticRegression()
model2.fit(X_train, y_train)
# predict class labels for the test set
predicted = model2.predict(X_test)
print (predicted)
# generate class probabilities
probs = model2.predict_proba(X_test)
print (probs)
# generate evaluation metrics
print (metrics.accuracy_score(y_test, predicted))
print (metrics.roc_auc_score(y_test, probs[:, 1]))
print (metrics.confusion_matrix(y_test, predicted))
print (metrics.classification_report(y_test, predicted))
scores = cross_val_score(LogisticRegression(), X, y, scoring='accuracy', cv=10)
print (scores)
print (scores.mean())
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, predicted)
roc_auc = auc(false_positive_rate, true_positive_rate)
print('AUC = %0.4f'% roc_auc)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, true_positive_rate, 'b',label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
Explanation: LOGISTIC REGRESSION
End of explanation
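For readers unfamiliar with patsy, the C() wrapper used in the formulas above marks a numeric column as categorical, so dmatrices expands it into one dummy column per level (minus a reference level). A tiny illustration on made-up data, separate from the terrorism dataset:
import pandas as pd
from patsy import dmatrix
toy = pd.DataFrame({'region': [1, 2, 2, 3]})
print(dmatrix('C(region)', toy))   # intercept plus dummy columns for levels 2 and 3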
y, X = dmatrices('multiple ~ C(imonth) + C(iday) + region + C(property) + C(propextent) + C(attacktype1) + C(weaptype1)+ C(nperps) + specificity', df, return_type="dataframe")
y = np.ravel(y)
# instantiate a logistic regression model, and fit with X and y
model = LogisticRegression()
model = model.fit(X, y)
# what percentage had multiple?
print("Benchmark:")
b = y.mean()
print(b)
# check the accuracy on the training set
a = model.score(X, y)
print("Score:")
print(a)
model.coef_
# evaluate the model by splitting into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model2 = LogisticRegression()
model2.fit(X_train, y_train)
# predict class labels for the test set
predicted = model2.predict(X_test)
print (predicted)
# generate class probabilities
probs = model2.predict_proba(X_test)
print (probs)
# generate evaluation metrics
print (metrics.accuracy_score(y_test, predicted))
print (metrics.roc_auc_score(y_test, probs[:, 1]))
print (metrics.confusion_matrix(y_test, predicted))
print (metrics.classification_report(y_test, predicted))
scores = cross_val_score(LogisticRegression(), X, y, scoring='accuracy', cv=10)
print (scores)
print (scores.mean())
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, predicted)
roc_auc = auc(false_positive_rate, true_positive_rate)
print('AUC = %0.4f'% roc_auc)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, true_positive_rate, 'b',label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
Explanation: Logistic Regression - Success
Logistic Regression - MULTIPLE
End of explanation
import numpy as np
from sklearn import preprocessing
from sklearn.metrics import roc_curve, auc
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import train_test_split
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
import pandas as pd
y = df['multiple']
X = df.filter(['imonth', 'iday', 'region','property',
'propextent','attacktype1','weaptype1','nperps','specificity'])
Xone= pd.get_dummies(X, prefix='month', columns=['imonth'])
Xtwo= pd.get_dummies(Xone, prefix='day', columns=['iday'])
Xthree= pd.get_dummies(Xtwo, prefix='region', columns=['region'])
Xfour= pd.get_dummies(Xthree, prefix='attacktype', columns=['attacktype1'])
Xfive= pd.get_dummies(Xfour, prefix='weapontype', columns=['weaptype1'])
Xsix= pd.get_dummies(Xfive, prefix='specificity', columns=['specificity'])
features_train, features_test,target_train, target_test = train_test_split(Xsix,y, test_size = 0.2,random_state=0)
print("Benchmark: " )
print(1-(y.mean()))
#Random Forest
forest=RandomForestClassifier(n_estimators=10)
forest = forest.fit( features_train, target_train)
output = forest.predict(features_test).astype(int)
forest.score(features_train, target_train )
false_positive_rate, true_positive_rate, thresholds = roc_curve(target_test, output)
roc_auc = auc(false_positive_rate, true_positive_rate)
print('AUC = %0.4f'% roc_auc)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, true_positive_rate, 'b',label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
scores = cross_val_score(forest, X, y, scoring='accuracy', cv=10)
print (scores)
print (scores.mean())
Explanation: RANDOM FOREST
Random Forest- MULTIPLE
End of explanation
y = df['success']
X = df.filter(['imonth', 'iday', 'region','property',
'propextent','attacktype1','weaptype1','nperps','specificity'])
features_train, features_test,target_train, target_test = train_test_split(X,y, test_size = 0.2,random_state=0)
#Random Forest
forest=RandomForestClassifier(n_estimators=10)
forest = forest.fit( features_train, target_train)
output = forest.predict(features_test).astype(int)
score = forest.score(features_train, target_train)
print("Benchmark: " )
print((y.mean()))
print('Our Accuracy:')
print(score)
false_positive_rate, true_positive_rate, thresholds = roc_curve(target_test, output)
roc_auc = auc(false_positive_rate, true_positive_rate)
print('AUC = %0.4f'% roc_auc)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, true_positive_rate, 'b',label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
Explanation: Random Forest- SUCCESS
End of explanation
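As a quick follow-up, which of the nine predictors the fitted forest relies on most can be read off feature_importances_; a minimal sketch reusing the forest and X from the cell above:
import pandas as pd
importances = pd.Series(forest.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))   # largest contributions first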
from sklearn.tree import _tree
def leaf_depths(tree, node_id = 0):
'''
tree.children_left and tree.children_right store ids
of left and right chidren of a given node
'''
left_child = tree.children_left[node_id]
right_child = tree.children_right[node_id]
'''
If a given node is terminal,
both left and right children are set to _tree.TREE_LEAF
'''
if left_child == _tree.TREE_LEAF:
'''
Set depth of terminal nodes to 0
'''
depths = np.array([0])
else:
'''
Get depths of left and right children and
increment them by 1
'''
left_depths = leaf_depths(tree, left_child) + 1
right_depths = leaf_depths(tree, right_child) + 1
depths = np.append(left_depths, right_depths)
return depths
def leaf_samples(tree, node_id = 0):
left_child = tree.children_left[node_id]
right_child = tree.children_right[node_id]
if left_child == _tree.TREE_LEAF:
samples = np.array([tree.n_node_samples[node_id]])
else:
left_samples = leaf_samples(tree, left_child)
right_samples = leaf_samples(tree, right_child)
samples = np.append(left_samples, right_samples)
return samples
def draw_tree(ensemble, tree_id=0):
plt.figure(figsize=(8,8))
plt.subplot(211)
tree = ensemble.estimators_[tree_id].tree_
depths = leaf_depths(tree)
plt.hist(depths, histtype='step', color='#9933ff',
bins=range(min(depths), max(depths)+1))
plt.xlabel("Depth of leaf nodes (tree %s)" % tree_id)
plt.subplot(212)
samples = leaf_samples(tree)
plt.hist(samples, histtype='step', color='#3399ff',
bins=range(min(samples), max(samples)+1))
plt.xlabel("Number of samples in leaf nodes (tree %s)" % tree_id)
plt.show()
def draw_ensemble(ensemble):
plt.figure(figsize=(8,8))
plt.subplot(211)
depths_all = np.array([], dtype=int)
for x in ensemble.estimators_:
tree = x.tree_
depths = leaf_depths(tree)
depths_all = np.append(depths_all, depths)
plt.hist(depths, histtype='step', color='#ddaaff',
bins=range(min(depths), max(depths)+1))
plt.hist(depths_all, histtype='step', color='#9933ff',
bins=range(min(depths_all), max(depths_all)+1),
weights=np.ones(len(depths_all))/len(ensemble.estimators_),
linewidth=2)
plt.xlabel("Depth of leaf nodes")
samples_all = np.array([], dtype=int)
plt.subplot(212)
for x in ensemble.estimators_:
tree = x.tree_
samples = leaf_samples(tree)
samples_all = np.append(samples_all, samples)
plt.hist(samples, histtype='step', color='#aaddff',
bins=range(min(samples), max(samples)+1))
plt.hist(samples_all, histtype='step', color='#3399ff',
bins=range(min(samples_all), max(samples_all)+1),
weights=np.ones(len(samples_all))/len(ensemble.estimators_),
linewidth=2)
plt.xlabel("Number of samples in leaf nodes")
plt.show()
draw_tree(forest)
draw_ensemble(forest)
y = df['multiple']
X = df.filter(['imonth', 'iday', 'region','property',
'propextent','attacktype1','weaptype1','nperps','specificity'])
features_train, features_test,target_train, target_test = train_test_split(X,y, test_size = 0.2,random_state=0)
#Random Forest
forest=RandomForestClassifier(n_estimators=10, max_depth = 16)
forest = forest.fit( features_train, target_train)
output = forest.predict(features_test).astype(int)
score = forest.score(features_train, target_train)
print("Benchmark: " )
print(1-(y.mean()))
print('Our Accuracy:')
print(score)
false_positive_rate, true_positive_rate, thresholds = roc_curve(target_test, output)
roc_auc = auc(false_positive_rate, true_positive_rate)
print('AUC = %0.4f'% roc_auc)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, true_positive_rate, 'b',label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
Explanation: Preventing Overfitting of the tree for multiple model
The results shown here differ from those presented, because a different sample was used than when the model was originally built; as such, results may vary slightly
End of explanation
import pandas as pd
df = pd.read_csv('/Users/Laishumin/Datasets/globalterrorism.csv', encoding='ISO-8859-1',low_memory=False)
clean=df[['iyear','imonth','iday','region','specificity'
,'vicinity','crit1','crit2','crit3','doubtterr','multiple','success','suicide'
,'attacktype1','ingroup','guncertain1','weaptype1']]
df_dummies1= pd.get_dummies(clean, prefix='month', columns=['imonth'])
df_dummies2= pd.get_dummies(df_dummies1, prefix='region', columns=['region'])
df_dummies3= pd.get_dummies(df_dummies2, prefix='specificity', columns=['specificity'])
df_dummies4= pd.get_dummies(df_dummies3, prefix='attack_type', columns=['attacktype1'])
df_dummies5= pd.get_dummies(df_dummies4, prefix='main_weapon_type', columns=['weaptype1'])
data = df_dummies5
del data['iyear']
del data['iday']
del data['guncertain1']
del data['ingroup']
del data['doubtterr']
names = list(data.columns.values)
names
lift_multiple = []
for i in names:
num_Feature = 0
Count = 0
for sample in data[i]:
thing = data[i].astype(str).str.contains('1')
if (thing.iloc[Count] == True):
num_Feature += 1
Count +=1
else:
Count +=1
print("{0} ".format(num_Feature) + " from " + i)
rule_valid = 0
rule_invalid = 0
for j in range(len(data)):
if data.iloc[j][i] == 1:
if data.iloc[j].multiple == 1:
rule_valid += 1
else:
rule_invalid += 1
print("{0} cases of the rule being valid were discovered".format(rule_valid))
print("{0} cases of the rule being invalid were discovered".format(rule_invalid))
# Now we have all the information needed to compute Support and Confidence
support = rule_valid # The Support is the number of times the rule is discovered.
if (num_Feature == 0):
lift_multiple.append(0)
else:
confidence = (rule_valid) / (num_Feature)
lift = confidence / 0.13
lift_multiple.append(lift)
print(i + '-->Multiple')
print("The support is {0}, the confidence is {1:.3f}, and the lift is {2:.3f}.".format(support, confidence, lift))
print("As a percentage, the confidence is {0:.1f}%.".format(100 * confidence))
print("-----------------------------------------------------------------")
lift_multiple_pd = pd.DataFrame(
{'Lift':lift_multiple
},index=names)
lift_multiple_pd
graph = lift_multiple_pd.sort_values('Lift', ascending=False)
graph
%matplotlib inline
graph.plot(kind='bar')
Explanation: ASSOCIATION RULES
End of explanation
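For reference, the quantities printed in the loop above follow the usual association-rule definitions: support(F -> multiple) is the number of rows where both the feature F and multiple equal 1, confidence is support divided by the count of F, and lift is confidence divided by the baseline rate P(multiple). The hard-coded 0.13 plays the role of that baseline; assuming the same data frame, it could be computed instead of hard-coded:
baseline = data['multiple'].mean()   # empirical P(multiple) that the hard-coded 0.13 stands in for
print(baseline)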
import numpy as np
import seaborn as sns
import pandas as pd
sns.violinplot(x="weaptype1", y="success", data=df, palette="Set3")
sns.violinplot(x="propextent", y="multiple", data=df, palette="Set3")
sns.violinplot(x="imonth", y="multiple", data=df, palette="Set3")
sns.violinplot(x="property", y="multiple", data=df, palette="Set3")
Explanation: Violin Plot Visualisations
End of explanation |
2,644 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Resposta a Degrau
Jupyter Notebook desenvolvido por Gustavo S.S.
Resposta a um degrau de um circuito RC
Quando a fonte CC de um circuito RC for aplicada repentinamente, a fonte de
tensão ou de corrente pode ser modelada como uma função degrau, e a resposta
é conhecida como resposta a um degrau.
A resposta a um degrau de um circuito é seu comportamento quando a
excitação for a função degrau, que pode ser uma fonte de tensão ou de
corrente.
\begin{align}
{\Large v(0^-) = v(0^+) = V_0}
\end{align}
A resposta completa (ou resposta total) de um circuito
RC à aplicação súbita de uma fonte de tensão CC, partindo do pressuposto de
que o capacitor esteja inicialmente carregado, é dada como
Step1: Problema Prático 7.10
Determine v(t) para t > 0 no circuito da Figura 7.44. Suponha que a chave esteja aberta
há um longo período e que é fechada em t = 0. Calcule v(t) em t = 0,5.
Step2: Exemplo 7.11
Na Figura 7.45, a chave foi fechada há um longo tempo e é aberta em t = 0. Determine
i e v durante todo o período.
Step3: Problema Prático 7.11
A chave na Figura 7.47 é fechada em t = 0. Determine i(t) e v(t) para todo o período.
Observe que u(–t) = 1 para t < 0 e 0 para t > 0. Da mesma forma, u(–t) = 1 – u(t).
Step4: Resposta a um degrau de um circuito RL
A resposta pode ser a
soma da resposta transiente e a resposta em regime estacionário
Step5: Problema Prático 7.12
A chave na Figura 7.52 foi fechada por um longo tempo, sendo aberta em t = 0. Determine
i(t) para t > 0.
Step6: Exemplo 7.13
Em t = 0, a chave 1 na Figura 7.53 é fechada e a chave 2 é fechada 4 s depois. Determine
i(t) para t > 0. Calcule i para t = 2 s e t = 5 s.
Step7: Problema Prático 7.13
A chave S1 da Figura 7.54 é fechada em t = 0 e a chave S2 é fechada em t = 2 s. Calcule i(t) para qualquer t. Determine i (1) e i (3). | Python Code:
print("Exemplo 7.10")
from sympy import *
m = 10**(-3)
k = 10**3
C = 0.5*m
Vc0 = 24*5*k/(3*k + 5*k) #tensao no capacitor em condicao inicial v0
Vcf = 30 #tensao no capacitor em condicao final
tau = 4*k*C
t = symbols('t')
v = Vcf + (Vc0 - Vcf)*exp(-t/tau)
print("Tensão v(t):",v,"V")
Explanation: Step Response
Jupyter Notebook developed by Gustavo S.S.
Step response of an RC circuit
When the DC source of an RC circuit is applied suddenly, the voltage or current
source can be modeled as a step function, and the response is known as a step
response.
The step response of a circuit is its behavior when the excitation is the step
function, which may be a voltage source or a current source.
\begin{align}
{\Large v(0^-) = v(0^+) = V_0}
\end{align}
The complete response (or total response) of an RC circuit to the sudden
application of a DC voltage source, assuming the capacitor is initially charged,
is given as:
\begin{align}
{\Large v(t) =
\begin{cases}
V_0, & t < 0 \\
V_S + (V_0 - V_S)e^{-t/\tau}, & t > 0
\end{cases}}
\end{align}
If we assume that the capacitor is initially uncharged, we set V0 = 0:
\begin{align}
{\Large v(t) =
\begin{cases}
0, & t < 0 \\
V_S(1 - e^{-t/\tau}), & t > 0
\end{cases}}
\end{align}
which can be written in the alternative form:
\begin{align}
{\Large v(t) = V_S(1 - e^{-t / \tau})u(t)}
\end{align}
The current through the capacitor is obtained from i(t) = C dv/dt, giving:
\begin{align}
{\Large i(t) = \frac{V_S}{R} e^{-t / \tau}u(t)}
\end{align}
Thus:
\begin{align}
{\Large v = v_n + v_f}
\\{\Large where}
\\{\Large v_n = V_0 e^{-t / \tau}}
\\{\Large v_f = V_S(1 - e^{-t / \tau})}
\end{align}
In words:
The transient response is the circuit's temporary response, which dies out with
time.
The steady-state response is the behavior of the circuit a long time after the
external excitation has been applied.
However we look at it, the complete response can be written as:
\begin{align}
{\Large v(t) = v(\infty) + [v(0) - v(\infty)]e^{-t / \tau}}
\end{align}
Therefore, finding the step response of an RC circuit requires three things:
The voltage v(0) across the capacitor
The final voltage v(∞) across the capacitor
The time constant τ
Example 7.10
The switch in Fig. 7.43 has been in position A for a long time. At t = 0, the
switch moves to position B. Determine v(t) for t > 0 and calculate its value at
t = 1 s and 4 s.
End of explanation
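As a quick cross-check of the formula above (not part of the original text), a reasonably recent sympy can solve the RC equation RC dv/dt + v = V_S with v(0) = V_0 symbolically; the result is exactly v(infinity) + [v(0) - v(infinity)] exp(-t/RC). Underscored names are used here so the notebook's own variables are not overwritten:
from sympy import symbols, Function, Eq, Derivative, dsolve
_t, _R, _C, _Vs, _V0 = symbols('t R C V_S V_0', positive=True)
v_fn = Function('v')
rc_ode = Eq(_R*_C*Derivative(v_fn(_t), _t) + v_fn(_t), _Vs)   # RC dv/dt + v = V_S
print(dsolve(rc_ode, v_fn(_t), ics={v_fn(0): _V0}))
# expected: Eq(v(t), V_S + (V_0 - V_S)*exp(-t/(C*R)))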
print("Problema Prático 7.10")
C = 1/3
Vc0 = 15
Vcf = (15 + 7.5)*6/(6 + 2) - 7.5
R = 6*2/(6 + 2)
tau = R*C
v = Vcf + (Vc0 - Vcf)*exp(-t/tau)
print("Tensão v(t):",v,"V")
print("Tensão v(0.5):",v.subs(t,0.5),"V")
Explanation: Practice Problem 7.10
Determine v(t) for t > 0 in the circuit of Fig. 7.44. Assume the switch has been
open for a long time and is closed at t = 0. Calculate v(t) at t = 0.5.
End of explanation
print("Exemplo 7.11")
C = 1/4
Vc0 = 10
Vcf = 30*20/(20 + 10)
R = 10*20/(10 + 20)
tau = R*C
print("Tensão v0:",Vc0,"V")
v = Vcf + (Vc0 - Vcf)*exp(-t/tau)
print("Tensão v(t):",v,"V")
i0 = -10/10
print("Corrente i0:",i0,"A")
i2 = v/20 + C*diff(v,t)
print("Corrente i(t):",i2,"A")
Explanation: Example 7.11
In Fig. 7.45, the switch has been closed for a long time and is opened at t = 0.
Find i and v for all time.
End of explanation
print("Problema Prático 7.11")
C = 0.2
vs = 20
tau = 5*C
#Para t < 0
v1 = vs*(1 - exp(-t/tau))
print("Tensão v(t) para t < 0:",v1,"V")
v0 = v.subs(t,oo)
print("v0:",v0,"V")
i1 = (20 - v1)/5
print("Corrente i(t) para t < 0:",i1,"A")
i0 = i1.subs(t,oo)
print("i0:",i0)
#Para t > 0
i2 = 3*10/(5 + 10)
Vcf = i2*5
R = 5*10/(5 + 10)
tau = R*C
v = Vcf + Vcf*exp(-t/tau)
print("Tensão v(t) para t > 0:",v,"V")
i = -v/5
print("Corrente i(t) para t > 0:",i,"A")
Explanation: Practice Problem 7.11
The switch in Fig. 7.47 is closed at t = 0. Find i(t) and v(t) for all time.
Note that u(–t) = 1 for t < 0 and 0 for t > 0. Likewise, u(–t) = 1 – u(t).
End of explanation
print("Exemplo 7.12")
from sympy import *
L = 1/3
Vs = 10
t = symbols('t') #transforma t em uma variavel (sympy)
#Para t < 0
i0 = Vs/2
#Para t > 0
R = 2 + 3
tau = L/R
i_f = Vs/R
i = i_f + (i0 - i_f)*exp(-t/tau)
print("Corrente i(t) para t > 0:",i,"A")
Explanation: Step response of an RL circuit
The response can be the sum of the transient response and the steady-state
response:
\begin{align}
{\Large i = i_t + i_{ss}}
\end{align}
The transient response is always a decaying exponential, that is:
\begin{align}
{\Large i_t = Ae^{-t / \tau}}
\\{\Large \tau = \frac{L}{R} }
\end{align}
The steady-state response is the value of the current a long time after the
switch in Fig. 7.48a is closed. Consequently, the steady-state response is:
\begin{align}
{\Large i_{ss} = \frac{V_S}{R}}
\end{align}
Let I0 be the initial current through the inductor, which may come from a source
other than Vs. Since the current through the inductor cannot change
instantaneously:
\begin{align}
{\Large i(0^+) = i(0^-) = i(0)}
\end{align}
Thus, we obtain:
\begin{align}
{\Large i(t) = \frac{V_S}{R} + (I_0 - \frac{V_S}{R})e^{-t / \tau}}
\end{align}
which can be written as:
\begin{align}
{\Large i(t) = i(\infty) + [i(0) - i(\infty)]e^{-t / \tau}}
\end{align}
Therefore, finding the step response of an RL circuit requires three things:
The initial current i(0) through the inductor at t = 0
The final current i(∞) through the inductor
The time constant τ
Again, if the change happens at t = t0 instead of t = 0, we have:
\begin{align}
{\Large i(t) = i(\infty) + [i(t_0) - i(\infty)]e^{-(t-t_0) / \tau}}
\end{align}
Example 7.12
Find i(t) in the circuit of Fig. 7.51 for t > 0. Assume that the switch has been
closed for a long time.
End of explanation
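The RL formula can be verified the same way: solving L di/dt + R i = V_S with i(0) = I_0 in sympy (again just a sanity check, not part of the original text) returns i(infinity) + [i(0) - i(infinity)] exp(-Rt/L):
from sympy import symbols, Function, Eq, Derivative, dsolve
_t, _R, _L, _Vs, _I0 = symbols('t R L V_S I_0', positive=True)
i_fn = Function('i')
rl_ode = Eq(_L*Derivative(i_fn(_t), _t) + _R*i_fn(_t), _Vs)   # L di/dt + R i = V_S
print(dsolve(rl_ode, i_fn(_t), ics={i_fn(0): _I0}))
# expected: Eq(i(t), V_S/R + (I_0 - V_S/R)*exp(-R*t/L))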
print("Problema Prático 7.12")
L = 1.5
Cs = 6
#Para t <0
i0 = Cs
#Para t > 0
i_f = Cs*10/(5 + 10)
R = 5 + 10
tau = L/R
i = i_f + (i0 - i_f)*exp(-t/tau)
print("Corrente i(t) para t > 0:",i,"A")
Explanation: Practice Problem 7.12
The switch in Fig. 7.52 has been closed for a long time and is opened at t = 0.
Find i(t) for t > 0.
End of explanation
print("Exemplo 7.13")
L = 5
V1 = 40
V2 = 10
#Para t < 0
i0 = 0
print("Corrente i0 para t < 0:",i0,"A")
#Para 0 < t < 4
R = 4 + 6
i_f = V1/R
tau = L/R
i = i_f + (i0 - i_f)*exp(-t/tau)
print("Corrente i(t) para 0 < t < 4:",i,"A")
i2 = i.subs(t,2)
#Para t > 4
i0 = i.subs(t,4)
R2 = (4*6)/(4+6) + 2 # resistencia equivalente vista pela fonte 10V
iv2 = V2/R2 * 4/(4 + 6)#corrente causada pela fonte 10V
R1 = (2*6)/(2 + 6) + 4 # req vista pela fonte 40V
iv1 = V1/R1 * 2/(2 + 6) #corrente causada pela fonte 40V
i_f = iv1 + iv2
R = (4*2)/(4 + 2) + 6 #req vista pelo indutor
tau = L/R
i = i_f + (i0 - i_f)*exp(-t/tau)
print("Corrente i(t) para t > 4:",i,"A")
i5 = i.subs(t,1)
print("Corrente i(2):",i2,"A")
print("Corrente i(5):",i5,"A")
Explanation: Example 7.13
At t = 0, switch 1 in Fig. 7.53 is closed, and switch 2 is closed 4 s later.
Find i(t) for t > 0. Calculate i at t = 2 s and t = 5 s.
End of explanation
print("Problema Prático 7.13")
Cs = 6
L = 5
#Para t < 0
i0 = 0
print("Corrente i para t < 0:",i0,"A")
#Para 0 < t < 2
R = 15 + 10 + 20
tau = L/R
i_f = Cs*15/R
i = i_f + (i0 - i_f)*exp(-t/tau)
print("Corrente i(t) para 0 < t < 2:",i,"A")
i1 = i.subs(t,1)
#Para t > 2
i0 = i.subs(t,2)
R = 15 + 10
tau = L/R
i_f = Cs*15/R
i = i_f + (i0 - i_f)*exp(-t/tau)
print("Corrente i(t) para t > 2:",i,"A")
i3 = i.subs(t,1)
print("Corrente i(1)",i1,"A")
print("Corrente i(3)",i3,"A")
Explanation: Practice Problem 7.13
Switch S1 in Fig. 7.54 is closed at t = 0, and switch S2 is closed at t = 2 s. Calculate i(t) for all t. Find i(1) and i(3).
End of explanation
2,645 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mass-univariate twoway repeated measures ANOVA on single trial power
This script shows how to conduct a mass-univariate repeated measures
ANOVA. As the model to be fitted assumes two fully crossed factors,
we will study the interplay between perceptual modality
(auditory VS visual) and the location of stimulus presentation
(left VS right). Here we use single trials as replications
(subjects) while iterating over time slices plus frequency bands
for to fit our mass-univariate model. For the sake of simplicity we
will confine this analysis to one single channel of which we know
that it exposes a strong induced response. We will then visualize
each effect by creating a corresponding mass-univariate effect
image. We conclude with accounting for multiple comparisons by
performing a permutation clustering test using the ANOVA as
clustering function. The final results will be compared to
multiple comparisons using False Discovery Rate correction.
Step1: Set parameters
Step2: We have to make sure all conditions have the same counts, as the ANOVA
expects a fully balanced data matrix and does not forgive imbalances that
generously (risk of type-I error).
Step3: Create TFR representations for all conditions
Step4: Setup repeated measures ANOVA
We will tell the ANOVA how to interpret the data matrix in terms of factors.
This is done via the factor levels argument which is a list of the number
factor levels for each factor.
Step5: Now we'll assemble the data matrix and swap axes so the trial replications
are the first dimension and the conditions are the second dimension.
Step6: While the iteration scheme used above for assembling the data matrix
makes sure the first two dimensions are organized as expected (with A =
modality and B = location)
Step7: Account for multiple comparisons using FDR versus permutation clustering test
First we need to slightly modify the ANOVA function to be suitable for
the clustering procedure. Also want to set some defaults.
Let's first override effects to confine the analysis to the interaction
Step8: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions
Step9: Create new stats image with only significant clusters
Step10: Now using FDR | Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import f_threshold_mway_rm, f_mway_rm, fdr_correction
from mne.datasets import sample
print(__doc__)
Explanation: Mass-univariate twoway repeated measures ANOVA on single trial power
This script shows how to conduct a mass-univariate repeated measures
ANOVA. As the model to be fitted assumes two fully crossed factors,
we will study the interplay between perceptual modality
(auditory VS visual) and the location of stimulus presentation
(left VS right). Here we use single trials as replications
(subjects) while iterating over time slices plus frequency bands
to fit our mass-univariate model. For the sake of simplicity we
will confine this analysis to one single channel of which we know
that it exposes a strong induced response. We will then visualize
each effect by creating a corresponding mass-univariate effect
image. We conclude with accounting for multiple comparisons by
performing a permutation clustering test using the ANOVA as
clustering function. The final results will be compared to
multiple comparisons using False Discovery Rate correction.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
tmin, tmax = -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
include = []
raw.info['bads'] += ['MEG 2443'] # bads
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
ch_name = 'MEG 1332'
# Load conditions
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0), preload=True,
reject=reject)
epochs.pick_channels([ch_name]) # restrict example to one channel
Explanation: Set parameters
End of explanation
epochs.equalize_event_counts(event_id, copy=False)
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet.
decim = 2
frequencies = np.arange(7, 30, 3) # define frequencies of interest
n_cycles = frequencies / frequencies[0]
zero_mean = False # don't correct morlet wavelet to be of mean zero
# To have a true wavelet zero_mean should be True but here for illustration
# purposes it helps to spot the evoked response.
Explanation: We have to make sure all conditions have the same counts, as the ANOVA
expects a fully balanced data matrix and does not forgive imbalances that
generously (risk of type-I error).
End of explanation
epochs_power = list()
for condition in [epochs[k] for k in event_id]:
this_tfr = tfr_morlet(condition, frequencies, n_cycles=n_cycles,
decim=decim, average=False, zero_mean=zero_mean,
return_itc=False)
this_tfr.apply_baseline(mode='ratio', baseline=(None, 0))
this_power = this_tfr.data[:, 0, :, :] # we only have one channel.
epochs_power.append(this_power)
Explanation: Create TFR representations for all conditions
End of explanation
n_conditions = len(epochs.event_id)
n_replications = epochs.events.shape[0] / n_conditions
factor_levels = [2, 2] # number of levels in each factor
effects = 'A*B' # this is the default signature for computing all effects
# Other possible options are 'A' or 'B' for the corresponding main effects
# or 'A:B' for the interaction effect only (this notation is borrowed from the
# R formula language)
n_frequencies = len(frequencies)
times = 1e3 * epochs.times[::decim]
n_times = len(times)
Explanation: Setup repeated measures ANOVA
We will tell the ANOVA how to interpret the data matrix in terms of factors.
This is done via the factor levels argument which is a list of the number
factor levels for each factor.
End of explanation
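To make the expected layout concrete, here is a tiny synthetic call, independent of the MEG data: with factor_levels = [2, 2] the second axis must hold the four cells A1B1, A1B2, A2B1, A2B2, and one row of F values per effect comes back (A, B and A:B).
rng = np.random.RandomState(42)
toy = rng.randn(20, 4, 10)   # 20 "subjects", 2 x 2 = 4 cells, 10 observations each
f_toy, p_toy = f_mway_rm(toy, factor_levels=[2, 2], effects='A*B')
print(np.shape(f_toy), np.shape(p_toy))   # one row per effect, one column per observation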
data = np.swapaxes(np.asarray(epochs_power), 1, 0)
# reshape last two dimensions in one mass-univariate observation-vector
data = data.reshape(n_replications, n_conditions, n_frequencies * n_times)
# so we have replications * conditions * observations:
print(data.shape)
Explanation: Now we'll assemble the data matrix and swap axes so the trial replications
are the first dimension and the conditions are the second dimension.
End of explanation
fvals, pvals = f_mway_rm(data, factor_levels, effects=effects)
effect_labels = ['modality', 'location', 'modality by location']
# let's visualize our effects by computing f-images
for effect, sig, effect_label in zip(fvals, pvals, effect_labels):
plt.figure()
# show naive F-values in gray
plt.imshow(effect.reshape(8, 211), cmap=plt.cm.gray, extent=[times[0],
times[-1], frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
# create mask for significant Time-frequency locations
effect = np.ma.masked_array(effect, [sig > .05])
plt.imshow(effect.reshape(8, 211), cmap='RdBu_r', extent=[times[0],
times[-1], frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(r"Time-locked response for '%s' (%s)" % (effect_label, ch_name))
plt.show()
Explanation: While the iteration scheme used above for assembling the data matrix
makes sure the first two dimensions are organized as expected (with A =
modality and B = location):
.. table:: Sample data layout
===== ==== ==== ==== ====
trial A1B1 A1B2 A2B1 A2B2
===== ==== ==== ==== ====
1 1.34 2.53 0.97 1.74
... ... ... ... ...
56 2.45 7.90 3.09 4.76
===== ==== ==== ==== ====
Now we're ready to run our repeated measures ANOVA.
Note. As we treat trials as subjects, the test only accounts for
time locked responses despite the 'induced' approach.
For an analysis of induced power at the group level, averaged TFRs
are required.
End of explanation
effects = 'A:B'
Explanation: Account for multiple comparisons using FDR versus permutation clustering test
First we need to slightly modify the ANOVA function to be suitable for
the clustering procedure. Also want to set some defaults.
Let's first override effects to confine the analysis to the interaction
End of explanation
def stat_fun(*args):
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=False)[0]
# The ANOVA returns a tuple f-values and p-values, we will pick the former.
pthresh = 0.00001 # set threshold rather high to save some time
f_thresh = f_threshold_mway_rm(n_replications, factor_levels, effects,
pthresh)
tail = 1 # f-test, so tail > 0
n_permutations = 256 # Save some time (the test won't be too sensitive ...)
T_obs, clusters, cluster_p_values, h0 = mne.stats.permutation_cluster_test(
epochs_power, stat_fun=stat_fun, threshold=f_thresh, tail=tail, n_jobs=1,
n_permutations=n_permutations, buffer_size=None)
Explanation: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions: subjects X conditions X observations (optional).
The following function catches the list input and swaps the first and
the second dimension and finally calls the ANOVA function.
End of explanation
good_clusters = np.where(cluster_p_values < .05)[0]
T_obs_plot = np.ma.masked_array(T_obs,
np.invert(clusters[np.squeeze(good_clusters)]))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title("Time-locked response for 'modality by location' (%s)\n"
" cluster-level corrected (p <= 0.05)" % ch_name)
plt.show()
Explanation: Create new stats image with only significant clusters:
End of explanation
mask, _ = fdr_correction(pvals[2])
T_obs_plot2 = np.ma.masked_array(T_obs, np.invert(mask))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot2], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title("Time-locked response for 'modality by location' (%s)\n"
" FDR corrected (p <= 0.05)" % ch_name)
plt.show()
Explanation: Now using FDR:
End of explanation |
2,646 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: We also limit the number of epochs further to 2000 (because we have seen that after that nothing good is going to happen)
Step2: Scores around 80% look good now, there might even be a bit more potential here, but we are not going after a final percent here | Python Code:
!pip install -q tf-nightly-gpu-2.0-preview
import tensorflow as tf
print(tf.__version__)
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
import numpy as np
from tensorflow import keras
!curl -O https://raw.githubusercontent.com/DJCordhose/deep-learning-crash-course-notebooks/master/data/insurance-customers-1500.csv
df = pd.read_csv('./insurance-customers-1500.csv', sep=';')
y=df['group']
df.drop('group', axis='columns', inplace=True)
X = df.values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42, stratify=y)
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization, Activation
num_categories = 3
dropout = 0.6
model = tf.keras.Sequential()
model.add(Dense(100, name='hidden1', input_dim=3))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(dropout))
model.add(Dense(100, name='hidden2'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(dropout))
model.add(Dense(num_categories, name='softmax', activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.summary()
Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/tf2/nn-final.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
%%time
BATCH_SIZE=1000
EPOCHS = 2000
history = model.fit(X_train, y_train, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_split=0.2, verbose=0)
train_loss, train_accuracy = model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)
train_loss, train_accuracy
test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)
test_loss, test_accuracy
# plt.yscale('log')
plt.ylabel("accuracy")
plt.xlabel("epochs")
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(["Accuracy", "Valdation Accuracy"])
Explanation: We also limit the number of epochs further to 2000 (because we have seen that after that nothing good is going to happen)
End of explanation
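Instead of hand-picking the epoch budget, an EarlyStopping callback can stop training once the validation loss stops improving. A possible variant of the fit call above (not the configuration used for the reported numbers):
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=100,
                                              restore_best_weights=True)
history = model.fit(X_train, y_train, epochs=EPOCHS, batch_size=BATCH_SIZE,
                    validation_split=0.2, verbose=0, callbacks=[early_stop])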
model.save('insurance.h5')
# the model has a decent size as we only have a little more than 10.000 parameters
!ls -l insurance.h5
Explanation: Scores around 80% look good now; there might even be a bit more potential here, but we are not chasing the last percent.
End of explanation |
2,647 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Save and restore models
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Get an example dataset
We'll use the MNIST dataset to train our model to demonstrate saving weights. To speed up these demonstration runs, only use the first 1000 examples
Step3: Define a model
Let's build a simple model we'll use to demonstrate saving and loading weights.
Step4: Save checkpoints during training
The primary use case is to automatically save checkpoints during and at the end of training. This way you can use a trained model without having to retrain it, or pick-up training where you left of—in case the training process was interrupted.
tf.keras.callbacks.ModelCheckpoint is a callback that performs this task. The callback takes a couple of arguments to configure checkpointing.
Checkpoint callback usage
Train the model and pass it the ModelCheckpoint callback
Step5: This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch
Step6: Create a new, untrained model. When restoring a model from only weights, you must have a model with the same architecture as the original model. Since it's the same model architecture, we can share weights despite that it's a different instance of the model.
Now rebuild a fresh, untrained model, and evaluate it on the test set. An untrained model will perform at chance levels (~10% accuracy)
Step7: Then load the weights from the checkpoint, and re-evaluate
Step8: Checkpoint callback options
The callback provides several options to give the resulting checkpoints unique names, and adjust the checkpointing frequency.
Train a new model, and save uniquely named checkpoints once every 5-epochs
Step9: Now, have a look at the resulting checkpoints (sorting by modification date)
Step10: Note
Step11: What are these files?
The above code stores the weights to a collection of checkpoint-formatted files that contain only the trained weights in a binary format. Checkpoints contain
Step12: Save the entire model
The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration. This allows you to checkpoint a model and resume training later—from the exact same state—without access to the original code.
Saving a fully-functional model in Keras is very useful—you can load them in TensorFlow.js and then train and run them in web browsers.
Keras provides a basic save format using the HDF5 standard. For our purposes, the saved model can be treated as a single binary blob.
Step13: Now recreate the model from that file
Step14: Check its accuracy | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
!pip install h5py pyyaml
Explanation: Save and restore models
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/keras/save_and_restore_models"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/tutorials/keras/save_and_restore_models.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/models/blob/master/samples/core/tutorials/keras/save_and_restore_models.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Model progress can be saved during—and after—training. This means a model can resume where it left off and avoid long training times. Saving also means you can share your model and others can recreate your work. When publishing research models and techniques, most machine learning practitioners share:
code to create the model, and
the trained weights, or parameters, for the model
Sharing this data helps others understand how the model works and try it themselves with new data.
Caution: Be careful with untrusted code—TensorFlow models are code. See Using TensorFlow Securely for details.
Options
There are different ways to save TensorFlow models—depending on the API you're using. This guide uses tf.keras, a high-level API to build and train models in TensorFlow. For other approaches, see the TensorFlow Save and Restore guide or Saving in eager.
Setup
Installs and imports
Install and import TensorFlow and dependencies:
End of explanation
from __future__ import absolute_import, division, print_function
import os
import tensorflow as tf
from tensorflow import keras
tf.__version__
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
Explanation: Get an example dataset
We'll use the MNIST dataset to train our model to demonstrate saving weights. To speed up these demonstration runs, only use the first 1000 examples:
End of explanation
# Returns a short sequential model
def create_model():
model = tf.keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
return model
# Create a basic model instance
model = create_model()
model.summary()
Explanation: Define a model
Let's build a simple model we'll use to demonstrate saving and loading weights.
End of explanation
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create checkpoint callback
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
model = create_model()
model.fit(train_images, train_labels, epochs = 10,
validation_data = (test_images,test_labels),
callbacks = [cp_callback]) # pass callback to training
Explanation: Save checkpoints during training
The primary use case is to automatically save checkpoints during and at the end of training. This way you can use a trained model without having to retrain it, or pick up training where you left off—in case the training process was interrupted.
tf.keras.callbacks.ModelCheckpoint is a callback that performs this task. The callback takes a couple of arguments to configure checkpointing.
Checkpoint callback usage
Train the model and pass it the ModelCheckpoint callback:
End of explanation
!ls {checkpoint_dir}
Explanation: This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch:
End of explanation
model = create_model()
loss, acc = model.evaluate(test_images, test_labels)
print("Untrained model, accuracy: {:5.2f}%".format(100*acc))
Explanation: Create a new, untrained model. When restoring a model from only weights, you must have a model with the same architecture as the original model. Since it's the same model architecture, we can share weights even though it's a different instance of the model.
Now rebuild a fresh, untrained model, and evaluate it on the test set. An untrained model will perform at chance levels (~10% accuracy):
End of explanation
model.load_weights(checkpoint_path)
loss,acc = model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
Explanation: Then load the weights from the checkpoint, and re-evaluate:
End of explanation
# include the epoch in the file name. (uses `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(
checkpoint_path, verbose=1, save_weights_only=True,
# Save weights, every 5-epochs.
period=5)
model = create_model()
model.fit(train_images, train_labels,
epochs = 50, callbacks = [cp_callback],
validation_data = (test_images,test_labels),
verbose=0)
Explanation: Checkpoint callback options
The callback provides several options to give the resulting checkpoints unique names, and adjust the checkpointing frequency.
Train a new model, and save uniquely named checkpoints once every 5-epochs:
End of explanation
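In more recent TensorFlow releases the period argument used above is deprecated in favour of save_freq, which counts batches instead of epochs. Assuming the default batch size of 32 and 1000 training samples (32 batches per epoch), an equivalent callback would look roughly like this:
cp_callback_v2 = tf.keras.callbacks.ModelCheckpoint(
    checkpoint_path, verbose=1, save_weights_only=True,
    save_freq=5 * 32)   # save every 5 epochs * 32 batches per epoch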
import pathlib
# Sort the checkpoints by modification time.
checkpoints = pathlib.Path(checkpoint_dir).glob("*.index")
checkpoints = sorted(checkpoints, key=lambda cp:cp.stat().st_mtime)
checkpoints = [cp.with_suffix('') for cp in checkpoints]
latest = str(checkpoints[-1])
checkpoints
Explanation: Now, have a look at the resulting checkpoints (sorting by modification date):
End of explanation
model = create_model()
model.load_weights(latest)
loss, acc = model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
Explanation: Note: the default tensorflow format only saves the 5 most recent checkpoints.
To test, reset the model and load the latest checkpoint:
End of explanation
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')
# Restore the weights
model = create_model()
model.load_weights('./checkpoints/my_checkpoint')
loss,acc = model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
Explanation: What are these files?
The above code stores the weights to a collection of checkpoint-formatted files that contain only the trained weights in a binary format. Checkpoints contain:
One or more shards that contain your model's weights.
An index file that indicates which weights are stored in which shard.
If you are only training a model on a single machine, you'll have one shard with the suffix: .data-00000-of-00001
Manually save weights
Above you saw how to load the weights into a model.
Manually saving the weights is just as simple, use the Model.save_weights method.
End of explanation
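If you are curious what a checkpoint actually stores, the tf.train utilities can list the saved variable names and shapes; a small sketch using the training_2 checkpoints written earlier:
latest_ckpt = tf.train.latest_checkpoint(checkpoint_dir)
print(latest_ckpt)
for name, shape in tf.train.list_variables(latest_ckpt):
    print(name, shape)   # each layer's kernels and biases, with their shapes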
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
Explanation: Save the entire model
The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration. This allows you to checkpoint a model and resume training later—from the exact same state—without access to the original code.
Saving a fully-functional model in Keras is very useful—you can load them in TensorFlow.js and then train and run them in web browsers.
Keras provides a basic save format using the HDF5 standard. For our purposes, the saved model can be treated as a single binary blob.
End of explanation
# Recreate the exact same model, including weights and optimizer.
new_model = keras.models.load_model('my_model.h5')
new_model.summary()
Explanation: Now recreate the model from that file:
End of explanation
loss, acc = new_model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
Explanation: Check its accuracy:
End of explanation |
2,648 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An RNN model for temperature data
This time we will be working with real data
Step1: Hyperparameters
N_FORWARD = 1
Step2: Temperature data
This is what our temperature datasets looks like
Step3: Resampling
Our RNN would need ot be unrolled across 365 steps to capture the yearly temperature cycles. That's a bit too much. We will resample the temparatures and work with 5-day averages for example. This is what resampled (Tmin, Tmax) temperatures look like.
Step4: Visualize training sequences
This is what the neural network will see during training.
Step5: The model definition
<div style="text-align
Step6: Instantiate the model
Step7: Initialize Tensorflow session
This resets all neuron weights and biases to initial random values
Step8: The training loop
You can re-execute this cell to continue training. <br/>
<br/>
Training data must be batched correctly, one weather station per line, continued on the same line across batches. This way, output states computed from one batch are the correct input states for the next batch. The provided utility function rnn_multistation_sampling_temperature_sequencer does the right thing.
Step9: Inference
This is a generative model
Step10: Validation | Python Code:
import math
import sys
import time
import numpy as np
import utils_batching
import utils_args
import tensorflow as tf
from tensorflow.python.lib.io import file_io as gfile
print("Tensorflow version: " + tf.__version__)
from matplotlib import pyplot as plt
import utils_prettystyle
import utils_display
Explanation: An RNN model for temperature data
This time we will be working with real data: daily (Tmin, Tmax) temperature series from 1666 weather stations spanning 50 years. It is to be noted that a pretty good predictor model already exists for temperatures: the average of temperatures on the same day of the year in N previous years. It is not clear if RNNs can do better but we will see how far they can go.
<div class="alert alert-block alert-warning">
This is the solution file. The corresponding tutorial file is [01_RNN_generator_temperatures_playground.ipynb](01_RNN_generator_temperatures_playground.ipynb)
</div>
End of explanation
NB_EPOCHS = 5 # number of times the model sees all the data during training
N_FORWARD = 8 # train the network to predict N in advance (traditionnally 1)
RESAMPLE_BY = 5 # averaging period in days (training on daily data is too much)
RNN_CELLSIZE = 128 # size of the RNN cells
N_LAYERS = 2 # number of stacked RNN cells (needed for tensor shapes but code must be changed manually)
SEQLEN = 128 # unrolled sequence length
BATCHSIZE = 64 # mini-batch size
DROPOUT_PKEEP = 0.7 # probability of neurons not being dropped (should be between 0.5 and 1)
ACTIVATION = tf.nn.tanh # Activation function for GRU cells (tf.nn.relu or tf.nn.tanh)
JOB_DIR = "checkpoints"
DATA_DIR = "temperatures"
# potentially override some settings from command-line arguments
if __name__ == '__main__':
JOB_DIR, DATA_DIR = utils_args.read_args1(JOB_DIR, DATA_DIR)
ALL_FILEPATTERN = DATA_DIR + "/*.csv" # pattern matches all 1666 files
EVAL_FILEPATTERN = DATA_DIR + "/USC000*2.csv" # pattern matches 8 files
# pattern USW*.csv -> 298 files, pattern USW*0.csv -> 28 files
print('Reading data from "{}".\nWriting checkpoints to "{}".'.format(DATA_DIR, JOB_DIR))
Explanation: Hyperparameters
N_FORWARD = 1: works but model struggles to predict from some positions<br/>
N_FORWARD = 4: better but still bad occasionally<br/>
N_FORWARD = 8: works perfectly
End of explanation
all_filenames = gfile.get_matching_files(ALL_FILEPATTERN)
eval_filenames = gfile.get_matching_files(EVAL_FILEPATTERN)
train_filenames = list(set(all_filenames) - set(eval_filenames))
# By default, this utility function loads all the files and places data
# from them as-is in an array, one file per line. Later, we will use it
# to shape the dataset as needed for training.
ite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames)
evtemps, _, evdates, _, _ = next(ite) # gets everything
print('Pattern "{}" matches {} files'.format(ALL_FILEPATTERN, len(all_filenames)))
print('Pattern "{}" matches {} files'.format(EVAL_FILEPATTERN, len(eval_filenames)))
print("Evaluation files: {}".format(len(eval_filenames)))
print("Training files: {}".format(len(train_filenames)))
print("Initial shape of the evaluation dataset: " + str(evtemps.shape))
print("{} files, {} data points per file, {} values per data point"
" (Tmin, Tmax, is_interpolated) ".format(evtemps.shape[0], evtemps.shape[1],evtemps.shape[2]))
# You can adjust the visualisation range and dataset here.
# Interpolated regions of the dataset are marked in red.
WEATHER_STATION = 0 # 0 to 7 in default eval dataset
START_DATE = 0 # 0 = Jan 2nd 1950
END_DATE = 18262 # 18262 = Dec 31st 2009
visu_temperatures = evtemps[WEATHER_STATION,START_DATE:END_DATE]
visu_dates = evdates[START_DATE:END_DATE]
utils_display.picture_this_4(visu_temperatures, visu_dates)
Explanation: Temperature data
This is what our temperature dataset looks like: sequences of daily (Tmin, Tmax) from 1960 to 2010. They have been cleaned up and any missing values have been filled by interpolation. Interpolated regions of the dataset are marked in red on the graph.
End of explanation
# This time we ask the utility function to average temperatures over 5-day periods (RESAMPLE_BY=5)
ite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames, RESAMPLE_BY, tminmax=True)
evaltemps, _, evaldates, _, _ = next(ite)
# display five years worth of data
WEATHER_STATION = 0 # 0 to 7 in default eval dataset
START_DATE = 0 # 0 = Jan 2nd 1950
END_DATE = 365*5//RESAMPLE_BY # 5 years
visu_temperatures = evaltemps[WEATHER_STATION, START_DATE:END_DATE]
visu_dates = evaldates[START_DATE:END_DATE]
plt.fill_between(visu_dates, visu_temperatures[:,0], visu_temperatures[:,1])
plt.show()
Explanation: Resampling
Our RNN would need to be unrolled across 365 steps to capture the yearly temperature cycles. That's a bit too much. We will resample the temperatures and work with 5-day averages, for example. This is what resampled (Tmin, Tmax) temperatures look like.
End of explanation
# The function rnn_multistation_sampling_temperature_sequencer puts one weather station per line in
# a batch and continues with data from the same station in corresponding lines in the next batch.
# Features and labels are returned with shapes [BATCHSIZE, SEQLEN, 2]. The last dimension of size 2
# contains (Tmin, Tmax).
ite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames,
RESAMPLE_BY,
BATCHSIZE,
SEQLEN,
N_FORWARD,
nb_epochs=1,
tminmax=True)
# load 6 training sequences (each one contains data for all weather stations)
visu_data = [next(ite) for _ in range(6)]
# Check that consecutive training sequences from the same weather station are indeed consecutive
WEATHER_STATION = 4
utils_display.picture_this_5(visu_data, WEATHER_STATION)
Explanation: Visualize training sequences
This is what the neural network will see during training.
End of explanation
def model_rnn_fn(features, Hin, labels, step, dropout_pkeep):
X = features # shape [BATCHSIZE, SEQLEN, 2], 2 for (Tmin, Tmax)
batchsize = tf.shape(X)[0]
seqlen = tf.shape(X)[1]
pairlen = tf.shape(X)[2] # should be 2 (tmin, tmax)
cells = [tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE, activation=ACTIVATION) for _ in range(N_LAYERS)]
# dropout useful between cell layers only: no output dropout on last cell
cells = [tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob = dropout_pkeep) for cell in cells]
# a stacked RNN cell still works like an RNN cell
cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=False)
# X[BATCHSIZE, SEQLEN, 2], Hin[BATCHSIZE, RNN_CELLSIZE*N_LAYERS]
# the sequence unrolling happens here
Yn, H = tf.nn.dynamic_rnn(cell, X, initial_state=Hin, dtype=tf.float32)
# Yn[BATCHSIZE, SEQLEN, RNN_CELLSIZE]
Yn = tf.reshape(Yn, [batchsize*seqlen, RNN_CELLSIZE])
Yr = tf.layers.dense(Yn, 2) # Yr [BATCHSIZE*SEQLEN, 2]
Yr = tf.reshape(Yr, [batchsize, seqlen, 2]) # Yr [BATCHSIZE, SEQLEN, 2]
Yout = Yr[:,-N_FORWARD:,:] # Last N_FORWARD outputs Yout [BATCHSIZE, N_FORWARD, 2]
loss = tf.losses.mean_squared_error(Yr, labels) # labels[BATCHSIZE, SEQLEN, 2]
lr = 0.001 + tf.train.exponential_decay(0.01, step, 1000, 0.5)
optimizer = tf.train.AdamOptimizer(learning_rate=lr)
train_op = optimizer.minimize(loss)
return Yout, H, loss, train_op, Yr
Explanation: The model definition
<div style="text-align: right; font-family: monospace">
X shape [BATCHSIZE, SEQLEN, 2]<br/>
Y shape [BATCHSIZE, SEQLEN, 2]<br/>
H shape [BATCHSIZE, RNN_CELLSIZE*NLAYERS]
</div>
When executed, this function instantiates the Tensorflow graph for our model.
End of explanation
tf.reset_default_graph() # restart model graph from scratch
# placeholder for inputs
Hin = tf.placeholder(tf.float32, [None, RNN_CELLSIZE * N_LAYERS])
features = tf.placeholder(tf.float32, [None, None, 2]) # [BATCHSIZE, SEQLEN, 2]
labels = tf.placeholder(tf.float32, [None, None, 2]) # [BATCHSIZE, SEQLEN, 2]
step = tf.placeholder(tf.int32)
dropout_pkeep = tf.placeholder(tf.float32)
# instantiate the model
Yout, H, loss, train_op, Yr = model_rnn_fn(features, Hin, labels, step, dropout_pkeep)
Explanation: Instantiate the model
End of explanation
# variable initialization
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run([init])
saver = tf.train.Saver(max_to_keep=1)
Explanation: Initialize Tensorflow session
This resets all neuron weights and biases to initial random values
End of explanation
losses = []
indices = []
last_epoch = 99999
last_fileid = 99999
for i, (next_features, next_labels, dates, epoch, fileid) in enumerate(
utils_batching.rnn_multistation_sampling_temperature_sequencer(train_filenames,
RESAMPLE_BY,
BATCHSIZE,
SEQLEN,
N_FORWARD,
NB_EPOCHS, tminmax=True)):
# reinitialize state between epochs or when starting on data from a new weather station
if epoch != last_epoch or fileid != last_fileid:
batchsize = next_features.shape[0]
H_ = np.zeros([batchsize, RNN_CELLSIZE * N_LAYERS])
print("State reset")
#train
feed = {Hin: H_, features: next_features, labels: next_labels, step: i, dropout_pkeep: DROPOUT_PKEEP}
Yout_, H_, loss_, _, Yr_ = sess.run([Yout, H, loss, train_op, Yr], feed_dict=feed)
# print progress
if i%20 == 0:
print("{}: epoch {} loss = {} ({} weather stations this epoch)".format(i, epoch, np.mean(loss_), fileid+1))
sys.stdout.flush()
if i%10 == 0:
losses.append(np.mean(loss_))
indices.append(i)
# This visualisation can be helpful to see how the model "locks" on the shape of the curve
# if i%100 == 0:
# plt.figure(figsize=(10,2))
# plt.fill_between(dates, next_features[0,:,0], next_features[0,:,1]).set_alpha(0.2)
# plt.fill_between(dates, next_labels[0,:,0], next_labels[0,:,1])
# plt.fill_between(dates, Yr_[0,:,0], Yr_[0,:,1]).set_alpha(0.8)
# plt.show()
last_epoch = epoch
last_fileid = fileid
# save the trained model
SAVEDMODEL = JOB_DIR + "/ckpt" + str(int(time.time()))
tf.saved_model.simple_save(sess, SAVEDMODEL,
inputs={"features":features, "Hin":Hin, "dropout_pkeep":dropout_pkeep},
outputs={"Yout":Yout, "H":H})
plt.ylim(ymax=np.amax(losses[1:])) # ignore first value for scaling
plt.plot(indices, losses)
plt.show()
Explanation: The training loop
You can re-execute this cell to continue training. <br/>
<br/>
Training data must be batched correctly, one weather station per line, continued on the same line across batches. This way, output states computed from one batch are the correct input states for the next batch. The provided utility function rnn_multistation_sampling_temperature_sequencer does the right thing.
End of explanation
def prediction_run(predict_fn, prime_data, run_length):
H = np.zeros([1, RNN_CELLSIZE * N_LAYERS]) # zero state initially
Yout = np.zeros([1, N_FORWARD, 2])
data_len = prime_data.shape[0]-N_FORWARD
# prime the state from data
if data_len > 0:
Yin = np.array(prime_data[:-N_FORWARD])
Yin = np.reshape(Yin, [1, data_len, 2]) # reshape as one sequence of pairs (Tmin, Tmax)
r = predict_fn({'features': Yin, 'Hin':H, 'dropout_pkeep':1.0}) # no dropout during inference
Yout = r["Yout"]
H = r["H"]
# initially, put real data on the inputs, not predictions
Yout = np.expand_dims(prime_data[-N_FORWARD:], axis=0)
# Yout shape [1, N_FORWARD, 2]: batch of a single sequence of length N_FORWARD of (Tmin, Tmax) data points
# run prediction
# To generate a sequence, run a trained cell in a loop passing as input and input state
# respectively the output and output state from the previous iteration.
results = []
for i in range(run_length//N_FORWARD+1):
r = predict_fn({'features': Yout, 'Hin':H, 'dropout_pkeep':1.0}) # no dropout during inference
Yout = r["Yout"]
H = r["H"]
results.append(Yout[0]) # shape [N_FORWARD, 2]
return np.concatenate(results, axis=0)[:run_length]
Explanation: Inference
This is a generative model: run a trained RNN cell in a loop
End of explanation
QYEAR = 365//(RESAMPLE_BY*4)
YEAR = 365//(RESAMPLE_BY)
# Try starting predictions from January / March / July (resp. OFFSET = YEAR or YEAR+QYEAR or YEAR+2*QYEAR)
# Some start dates are more challenging for the model than others.
OFFSET = 30*YEAR+1*QYEAR
PRIMELEN=5*YEAR
RUNLEN=3*YEAR
PRIMELEN=512
RUNLEN=256
RMSELEN=3*365//(RESAMPLE_BY*2) # accuracy of predictions 1.5 years in advance
# Restore the model from the last checkpoint saved previously.
# Alternative checkpoints:
# Once you have trained on all 1666 weather stations on Google Cloud ML Engine, you can load the checkpoint from there.
# SAVEDMODEL = "gs://{BUCKET}/sinejobs/sines_XXXXXX_XXXXXX/ckptXXXXXXXX"
# A sample checkpoint is provided with the lab. You can try loading it for comparison.
# SAVEDMODEL = "temperatures_best_checkpoint"
predict_fn = tf.contrib.predictor.from_saved_model(SAVEDMODEL)
for evaldata in evaltemps:
prime_data = evaldata[OFFSET:OFFSET+PRIMELEN]
results = prediction_run(predict_fn, prime_data, RUNLEN)
utils_display.picture_this_6(evaldata, evaldates, prime_data, results, PRIMELEN, RUNLEN, OFFSET, RMSELEN)
rmses = []
bad_ones = 0
for offset in [YEAR, YEAR+QYEAR, YEAR+2*QYEAR]:
for evaldata in evaltemps:
prime_data = evaldata[offset:offset+PRIMELEN]
results = prediction_run(predict_fn, prime_data, RUNLEN)
rmse = math.sqrt(np.mean((evaldata[offset+PRIMELEN:offset+PRIMELEN+RMSELEN] - results[:RMSELEN])**2))
rmses.append(rmse)
if rmse>7: bad_ones += 1
print("RMSE on {} predictions (shaded area): {}".format(RMSELEN, rmse))
print("Average RMSE on {} weather stations: {} ({} really bad ones, i.e. >7.0)".format(len(evaltemps), np.mean(rmses), bad_ones))
sys.stdout.flush()
Explanation: Validation
End of explanation |
2,649 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Zollinger
https://gist.github.com/zollinger/1722663
Step1: pixelogik
https://github.com/pixelogik/ColorCube
Step2: ColorThief
https://github.com/fengsp/color-thief-py
Step3: OpenCV pixel count
Step4: Back to Zollinger
Step5: OpenCV's kmeans
http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_ml/py_kmeans/py_kmeans_opencv/py_kmeans_opencv.html
Step6: pyimagesearch
http://www.pyimagesearch.com/2014/05/26/opencv-python-k-means-color-clustering/
Step7: Final Straw - Back to color mapping | Python Code:
from PIL import Image  # Pillow is required below; `images` and the display helpers (colshow, imshow, imshowall) are assumed to be defined in earlier notebook cells

def get_colors(img, numcolors=5):
#image = image.resize((resize, resize))
result = img.convert('P', palette=Image.ADAPTIVE, colors=numcolors)
result.putalpha(0)
return result.getcolors()
image = Image.open(images[0])
%time colors = get_colors(image)
colors = get_colors(Image.open(images[0]))
colshow([col[:3] for count, col in colors])
colors = get_colors(Image.open(images[1]))
colshow([col[:3] for count, col in colors])
colors = get_colors(Image.open(images[2]))
colshow([col[:3] for count, col in colors])
colors = get_colors(Image.open(images[3]))
colshow([col[:3] for count, col in colors])
Explanation: Zollinger
https://gist.github.com/zollinger/1722663
End of explanation
from ColorCube import ColorCube
cc = ColorCube()
image = Image.open(images[0])
%time colors = cc.get_colors(image)
colors = cc.get_colors(Image.open(images[0]))
colors = [tuple(color) for color in colors[:5]]
colshow(colors)
colors = cc.get_colors(Image.open(images[1]))
colors = [tuple(color) for color in colors[:5]]
colshow(colors)
Explanation: pixelogik
https://github.com/pixelogik/ColorCube
End of explanation
from colorthief import ColorThief
color_thief = ColorThief(images[0])
# get the dominant color
%time colors = color_thief.get_color(quality=10)
Explanation: ColorThief
https://github.com/fengsp/color-thief-py
End of explanation
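# Hedged follow-up (not in the original cell): colorthief can also return a small
# palette of dominant colors via get_palette(); `colshow` is assumed to be the
# notebook's own display helper used elsewhere in this notebook.
palette_colors = color_thief.get_palette(color_count=5, quality=10)
colshow(palette_colors)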
import cv2
import numpy as np
class Palette(object):
def hex_to_rgb(self, value):
value = value.lstrip('#').lower()
lv = len(value)
return [int(value[i:i + lv // 3], 16) for i in range(0, lv, lv // 3)]
def rgb_to_hex(self, rgb):
return '#%02x%02x%02x' % tuple(rgb)
def closest_node(self, node):
deltas = self.labs - node
dist_2 = np.einsum('ij,ij->i', deltas, deltas)
return np.argmin(dist_2)
def rgb_to_lab(self, rgb):
pix = np.array([[rgb]])
return cv2.cvtColor(pix, cv2.COLOR_RGB2LAB)[0][0]
def dominant(self, img, mask):
img = cv2.cvtColor(img, cv2.COLOR_RGB2LAB)
Z = img[np.where(mask==1)]
return np.apply_along_axis(self.closest_node, 1, Z)
def rgb_to_closest_name(self, rgb):
idx = self.closest_node(self.rgb_to_lab(rgb))
return self.names[idx]
def __init__(self, f):
self.names = []
self.labs = []
with open(f) as fr:
for line in fr:
line = line.lower().split(',')
name = line[0].strip()
val = line[1].strip()
self.names.append(name)
rgb = np.uint8(np.array(self.hex_to_rgb(val)))
lab = self.rgb_to_lab(rgb)
self.labs.append(lab)
palette = Palette('palette.csv')
Explanation: OpenCV pixel count
End of explanation
import cv2
import numpy as np
from glob import glob
from utils import background_mask, resize
images = glob('sample/medium/*')
imgs = [cv2.imread(img, cv2.IMREAD_UNCHANGED) for img in sorted(images)]
imgs = [resize(img, max_height=300., max_width=300) for img in imgs]
masks = [background_mask(img) for img in imgs]
imgs = [cv2.cvtColor(img, cv2.COLOR_BGR2RGB) for img in imgs]
palettes = []
pimgs = []
for img, mask in zip(imgs, masks):
img = Image.fromarray(img).convert('P', palette=Image.ADAPTIVE, colors=10)
mask = Image.fromarray(mask).convert('L')
img.putalpha(mask)
colors = img.getcolors()
colors = [(count, color[:3]) for count, color in colors if color[3]>0]
colors.sort(key=lambda tup: tup[0], reverse=True)
tot = sum([count for count, color in colors])
colors = [(100*float(count)/tot, color) for count, color in colors]
colors = [("%0.2f"%per, color, palette.rgb_to_closest_name(np.uint8(color))) for per, color in colors]
palettes.append(colors)
pimgs = []
for img, mask in zip(imgs, masks):
img = Image.fromarray(img).convert('P', palette=Image.ADAPTIVE, colors=10)
mask = Image.fromarray(mask).convert('L')
img.putalpha(mask)
pimgs.append(img)
imshowall(pimgs)
palettes = []
pimgs = []
for psize in [10, 50, 100, 150, 200]:
img = imgs[15]
mask = masks[15]
img = Image.fromarray(img).convert('P', palette=Image.ADAPTIVE, colors=psize)
mask = Image.fromarray(mask).convert('L')
img.putalpha(mask)
colors = img.getcolors()
colors = [(count, color[:3]) for count, color in colors if color[3]>0]
colors.sort(key=lambda tup: tup[0], reverse=True)
tot = sum([count for count, color in colors])
colors = [(100*float(count)/tot, color) for count, color in colors]
colors = [("%0.2f"%per, color, str(color)) for per, color in colors]
palettes.append(colors)
Explanation: Back to Zollinger
End of explanation
import numpy as np
import cv2
from utils import background_mask
img = cv2.cvtColor(cv2.imread('sample/medium/16.jpeg'), cv2.COLOR_BGR2RGB)
mask = background_mask(img)
img = cv2.GaussianBlur(img, (5,5), 0)
img = cv2.bitwise_and(img, img, mask=mask)
Z = img.reshape((-1,3))
# convert to np.float32
Z = np.float32(Z)
# define criteria, number of clusters(K) and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 5
ret,label,center=cv2.kmeans(Z, K, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
print("%.2f"%(ret/1000000))
# Now convert back into uint8, and make original image
center = np.uint8(center)
res = center[label.flatten()]
res2 = res.reshape((img.shape))
imshow(res2)
Explanation: OpenCV's kmeans
http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_ml/py_kmeans/py_kmeans_opencv/py_kmeans_opencv.html
End of explanation
def centroid_histogram(clt):
# grab the number of different clusters and create a histogram
# based on the number of pixels assigned to each cluster
numLabels = np.arange(0, len(np.unique(clt.labels_)) + 1)
(hist, _) = np.histogram(clt.labels_, bins = numLabels)
# normalize the histogram, such that it sums to one
hist = hist.astype("float")
hist /= hist.sum()
# return the histogram
return hist
def plot_colors(hist, centroids):
# initialize the bar chart representing the relative frequency
# of each of the colors
bar = np.zeros((50, 300, 3), dtype = "uint8")
startX = 0
# loop over the percentage of each cluster and the color of
# each cluster
for (percent, color) in zip(hist, centroids):
# plot the relative percentage of each cluster
endX = startX + (percent * 300)
cv2.rectangle(bar, (int(startX), 0), (int(endX), 50),
color.astype("uint8").tolist(), -1)
startX = endX
# return the bar chart
return bar
from sklearn.cluster import KMeans
img = cv2.imread('sample/medium/26.png', cv2.IMREAD_UNCHANGED)
mask = background_mask(img)
#img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
#img = cv2.GaussianBlur(img, (5,5), 0)
#img = cv2.bitwise_and(img, img, mask=mask)
#Z = img.reshape((-1,3))
Z = img[np.where(mask>0)]
# convert to np.float32
Z = np.float32(Z)
clt = KMeans(n_clusters = 3)
clt.fit(Z)
hist = centroid_histogram(clt)
bar = plot_colors(hist, clt.cluster_centers_)
# show our color bart
plt.figure()
plt.axis("off")
plt.imshow(bar)
plt.show()
Explanation: pyimagesearch
http://www.pyimagesearch.com/2014/05/26/opencv-python-k-means-color-clustering/
End of explanation
import cv2
import numpy as np
class Palette(object):
tolerance = 30
def hex_to_rgb(self, value):
value = value.lstrip('#').lower()
lv = len(value)
return [int(value[i:i + lv // 3], 16) for i in range(0, lv, lv // 3)]
def rgb_to_hex(self, rgb):
return '#%02x%02x%02x' % tuple(rgb)
def rgb_to_closest_name(self, rgb):
idx = self.closest_node(self.rgb_to_lab(rgb))
return self.names[idx]
def build_boundaries(self):
self.boundaries = []
for rgb in self.rgbs:
mins = np.array([max(0, rgb[0]-self.tolerance),
max(0, rgb[1]-self.tolerance),
max(0, rgb[2]-self.tolerance)], dtype=np.uint8)
maxs = np.array([min(255, rgb[0]+self.tolerance),
min(255, rgb[1]+self.tolerance),
min(255, rgb[2]+self.tolerance)], dtype=np.uint8)
self.boundaries.append((mins, maxs))
def __init__(self, f):
self.names = []
self.rgbs = []
with open(f) as fr:
for line in fr:
line = line.lower().split(',')
name = line[0].strip()
val = line[1].strip()
self.names.append(name)
rgb = np.uint8(np.array(self.hex_to_rgb(val)))
self.rgbs.append(rgb)
self.build_boundaries()
palette = Palette('palette.csv')
img = cv2.imread('sample/medium/26.png', cv2.IMREAD_UNCHANGED)
mask = background_mask(img)
img = cv2.cvtColor(img, cv2.COLOR_BGRA2RGB)
img = cv2.GaussianBlur(img, (5,5), 0)
#img = cv2.bitwise_and(img, img, mask=mask)
#Z = img.reshape((-1,3))
#Z = img[np.where(mask>0)]
outputs = []
dominants = []
images = sorted(glob('sample/medium/*'))
img = cv2.imread(images[12], cv2.IMREAD_UNCHANGED)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
mask = background_mask(img)
tot = len(img[np.where(mask>0)])
for counter, (lower, upper) in enumerate(palette.boundaries):
col_mask = cv2.inRange(img, lower, upper)
col_mask = cv2.bitwise_and(mask, col_mask)
area = len(img[np.where(col_mask>0)])
outputs.append(cv2.bitwise_and(img, img, mask = col_mask))
dominants.append((float(area)*100/tot, counter, palette.names[counter]))
print(sorted(dominants, reverse=True)[:10])
imshow(img)
imshow(mask)
Explanation: Final Straw - Back to color mapping
End of explanation |
2,650 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Topics discussed in knesset committees.
Based on transcripts of the knesset committees.<br/>
The work was done in the 'public knowledge workshop' hackathon and won the 3rd place prize.
Analyze the most discussed topics in the knesset committees
Step1: Create Graphs
Time-based graph - plot the most discussed topics by month
Step2: Summary graph - plot the most discussed topics in a knesset overall
Step3: Explaining the results - the lexicon words that appear most often in the knesset
Using this, we filtered the words 'כספים' and 'אוצר' from the Economics lexicon, since they represent budgets more than economy discussions.
Step4: Show correlation between discussed topics
We mainly want to know which topics are being neglected when other topics are getting special attention.
Step5: Graphs!
Knesset 17
Step6: We can see two interesting things in this graph
Step7: Knesset 18
Step8: We were able to find
Step9: Knesset 20
Step10: This graph basically shows inertia, so it's interesting to see what that 'normal' subject distribution is.
Step11: Correlation between topics | Python Code:
import pandas as pd
from matplotlib import pyplot as plt
import warnings
warnings.filterwarnings('ignore')
# Normalize the topics' scores
def normalize_scores(scores):
max_i = (0, -1)
second_i = (0, -1)
third_i = (0, -1)
for i in range(len(scores)):
if scores[i] != 0:
if scores[i] > max_i[0]:
third_i = second_i
second_i = max_i
max_i = (scores[i], i)
elif scores[i] > second_i[0]:
third_i = second_i
second_i = (scores[i], i)
elif scores[i] > third_i[0]:
third_i = (scores[i], i)
scores = [0] * len(scores)
if max_i[1] != -1:
scores[max_i[1]] = 3
if second_i[1] != -1:
scores[second_i[1]] = 2
if third_i[1] != -1:
scores[third_i[1]] = 1
return scores
def get_knesset_topics(knesset_num):
# Get knesset collected data
df_knesset = pd.read_csv("Extracted_data/meetings_topics_knesset_" + str(knesset_num) + ".csv")
smaller_df = df_knesset[['KnessetNum', 'Year', 'Month',
'Diplomacy_score', 'Ecologics_score', 'Economics_score', 'Education_score',
'Health_score', 'Security_score']]
# Normalize scores
topics = smaller_df.apply(lambda row: normalize_scores(row[3:]), axis=1)
topics_df = pd.DataFrame(topics)
smaller_df[['Diplomacy_score', 'Ecologics_score', 'Economics_score', 'Education_score',
'Health_score', 'Security_score']] = pd.DataFrame(topics_df[0].values.tolist(), index= topics_df.index)
smaller_df['Year.Month'] = smaller_df['Year'] + (smaller_df['Month'] -1)/12.0
return smaller_df
Explanation: Topics discussed in knesset committees.
Based on transcripts of the knesset committees.<br/>
The work was done in the 'public knowledge workshop' hackathon and won the 3rd place prize.
Analyze the most discussed topics in the knesset committees
End of explanation
def draw_knesset_topics_over_time(knesset_num):
df = get_knesset_topics(knesset_num)
# Aggredate per month and year
by_month = df.groupby(['Year.Month']).mean()
# Plot topics graph
by_month.plot(y=['Diplomacy_score', 'Ecologics_score', 'Economics_score',
'Education_score', 'Health_score', 'Security_score'],
title="Knesset " + str(knesset_num) + " - Most popular topics over time",
figsize=(10,5))
Explanation: Create Graphs
Time-based graph - plot the most discussed topics by month
End of explanation
def draw_knesset_topics(knesset_num):
df = get_knesset_topics(knesset_num)
# Aggredate per month and year
df_mean = df[['Diplomacy_score', 'Ecologics_score', 'Economics_score',
'Education_score', 'Health_score', 'Security_score']].mean()
# Plot topics graph
df_mean.plot(y=['Diplomacy_score', 'Ecologics_score', 'Economics_score',
'Education_score', 'Health_score', 'Security_score'],
title="Knesset " + str(knesset_num) + " - Most popular topics",
kind='barh', figsize=(10,5))
Explanation: Summary graph - plot the most discussed topics in a knesset overall
End of explanation
import csv
from wordcloud import WordCloud
def get_and_flip_freq_dictionary_for_knesset(knesset_num):
freq_dictionary = dict()
with open("Extracted_data/words_freq_knesset_" + str(knesset_num) + ".csv", 'r', encoding="utf-8") as csvFile:
reader = csv.reader(csvFile)
first = True
for row in reader:
if first:
first = False
continue
# we flip the word since it's in hebrew and not supported
flipped_word = ''.join(reversed(row[1]))
freq_dictionary[flipped_word] = int(row[2])
return freq_dictionary
def draw_lexicon_word_cloud_per_knesset(knesset_num):
wordcloud = WordCloud(width = 800, height = 800,
background_color ='black',
min_font_size = 10,
font_path='C:/Windows/Fonts/Gisha.ttf')
freq_dictionary = get_and_flip_freq_dictionary_for_knesset(knesset_num)
wordcloud.generate_from_frequencies(frequencies=freq_dictionary, )
plt.figure(figsize = (8, 8), facecolor = None)
plt.imshow(wordcloud)
Explanation: Explaining the results - the lexicon words that appear most often in the knesset
Using this, we filtered the words 'כספים' and 'אוצר' from the Economics lexicon, since they represent budgets more than economy discussions.
End of explanation
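# Hypothetical sketch (names are illustrative, not from the original pipeline):
# dropping specific words from a topic lexicon before scoring, as was done above
# for 'כספים' and 'אוצר' in the Economics lexicon.
def filter_lexicon(lexicon_words, words_to_drop):
    drop = set(words_to_drop)
    return [w for w in lexicon_words if w not in drop]

# Example (assuming `economics_lexicon` is a list of the lexicon's words):
# economics_lexicon = filter_lexicon(economics_lexicon, ['כספים', 'אוצר'])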
import seaborn as sns
def get_correlation_between_topics(knesset_nums_list):
dfs = [get_knesset_topics(knesset_num) for knesset_num in knesset_nums_list]
total_df = pd.concat(dfs)
topics_df = total_df[['Diplomacy_score', 'Ecologics_score', 'Economics_score',
'Education_score', 'Health_score', 'Security_score']]
labels = ['Diplomacy', 'Ecologics', 'Economics',
'Education', 'Health', 'Security']
corr_df = topics_df.corr()
sns.set(rc={'figure.figsize':(8,8)})
sns.heatmap(corr_df,
xticklabels=labels,
yticklabels=labels)
return corr_df
Explanation: Show correlation between discussed topics
We mainly want to know which topics are being neglected when other topics are getting special attention.
End of explanation
draw_knesset_topics_over_time(17)
Explanation: Graphs!
Knesset 17 : 2006-2009
End of explanation
draw_knesset_topics(17)
draw_lexicon_word_cloud_per_knesset(17)
Explanation: We can see two interesting things in this graph:
The financial crisis of 2007-2008: from 2007 onward, the economics topic drew the highest interest in that Knesset.
Operation Summer Rains and the kidnapping of Gilad Shalit (June 25, 2006) caused a sudden jump in the security topic.
End of explanation
draw_knesset_topics_over_time(18)
Explanation: Knesset 18 : 2009-2013
End of explanation
draw_knesset_topics(18)
draw_lexicon_word_cloud_per_knesset(18)
Explanation: We were able to find:
- The tent protest (מחאת האוהלים) in the summer of 2011
- The Arab Spring (winter of 2010), which caused a gain of interest in the diplomacy, security, and economy topics.
- The European debt crisis, which made interest in economics spike (especially after 2008).
- The doctors' collective wage agreement, which came just after the European debt crisis.
End of explanation
draw_knesset_topics_over_time(20)
Explanation: Knesset 20 : 2015-2019
The data from the 19th Knesset was missing from the public knowledge workshop's dataset.
End of explanation
draw_knesset_topics(20)
draw_lexicon_word_cloud_per_knesset(20)
Explanation: This graph basically shows inertia, so it's interesting to see what that 'normal' subject distribution is.
End of explanation
get_correlation_between_topics([17,18,20])
Explanation: Correlation between topics
End of explanation |
2,651 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/csdms_logo.jpg">
Using a BMI
Step1: Import the Cem class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
Step2: Even though we can't run our waves model yet, we can still get some information about it. Just don't try to run it. Some things we can do with our model are get the names of the input variables.
Step3: We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is,
"sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity"
Quite a mouthful, I know. With that name we can get information about that variable and the grid that it is on (it's actually not one).
OK. We're finally ready to run the model. Well not quite. First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass None, which tells Cem to use some defaults.
Step4: Here I define a convenience function for plotting the water depth and making it look pretty. You don't need to worry too much about it's internals for this tutorial. It just saves us some typing later on.
Step5: It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain.
Step6: Allocate memory for the sediment discharge array and set the discharge at the coastal cell to some value.
Step7: The CSDMS Standard Name for this variable is
Step8: Set the bedload flux and run the model.
Step9: Let's add another sediment source with a different flux and update the model.
Step10: Here we shut off the sediment supply completely. | Python Code:
%matplotlib inline
import numpy as np
Explanation: <img src="images/csdms_logo.jpg">
Using a BMI: Coupling Waves and Coastline Evolution Model
This example explores how to use a BMI implementation to couple the Waves component with the Coastline Evolution Model component.
Links
CEM source code: Look at the files that have deltas in their name.
CEM description on CSDMS: Detailed information on the CEM model.
Interacting with the Coastline Evolution Model BMI using Python
Some magic that allows us to view images within the notebook.
End of explanation
from cmt.components import Cem, Waves
cem, waves = Cem(), Waves()
Explanation: Import the Cem class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
End of explanation
waves.get_output_var_names()
cem.get_input_var_names()
Explanation: Even though we can't run our waves model yet, we can still get some information about it. Just don't try to run it. Some things we can do with our model are get the names of the input variables.
End of explanation
cem.initialize(None)
waves.initialize(None)
Explanation: We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is,
"sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity"
Quite a mouthful, I know. With that name we can get information about that variable and the grid that it is on (it's actually not one).
OK. We're finally ready to run the model. Well not quite. First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass None, which tells Cem to use some defaults.
End of explanation
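# Illustrative check (uses the same BMI calls that appear later in this notebook):
# ask the component about the wave-angle input variable by its CSDMS standard name.
angle_name = 'sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity'
print(cem.get_var_units(angle_name))
print(cem.get_var_grid(angle_name))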
def plot_coast(spacing, z):
import matplotlib.pyplot as plt
xmin, xmax = 0., z.shape[1] * spacing[1] * 1e-3
ymin, ymax = 0., z.shape[0] * spacing[0] * 1e-3
plt.imshow(z, extent=[xmin, xmax, ymin, ymax], origin='lower', cmap='ocean')
plt.colorbar().ax.set_ylabel('Water Depth (m)')
plt.xlabel('Along shore (km)')
plt.ylabel('Cross shore (km)')
Explanation: Here I define a convenience function for plotting the water depth and making it look pretty. You don't need to worry too much about it's internals for this tutorial. It just saves us some typing later on.
End of explanation
grid_id = cem.get_var_grid('sea_water__depth')
spacing = cem.get_grid_spacing(grid_id)
shape = cem.get_grid_shape(grid_id)
z = np.empty(shape)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
Explanation: It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain.
End of explanation
qs = np.zeros_like(z)
qs[0, 100] = 750
Explanation: Allocate memory for the sediment discharge array and set the discharge at the coastal cell to some value.
End of explanation
cem.get_var_units('land_surface_water_sediment~bedload__mass_flow_rate')
waves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_asymmetry_parameter', .3)
waves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_highness_parameter', .7)
cem.set_value("sea_surface_water_wave__height", 2.)
cem.set_value("sea_surface_water_wave__period", 7.)
Explanation: The CSDMS Standard Name for this variable is:
"land_surface_water_sediment~bedload__mass_flow_rate"
You can get an idea of the units based on the quantity part of the name. "mass_flow_rate" indicates mass per time. You can double-check this with the BMI method function get_var_units.
End of explanation
for time in xrange(3000):
waves.update()
angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')
cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update()
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
Explanation: Set the bedload flux and run the model.
End of explanation
qs[0, 150] = 500
for time in xrange(3750):
waves.update()
angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')
cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update()
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
Explanation: Let's add another sediment source with a different flux and update the model.
End of explanation
qs.fill(0.)
for time in xrange(4000):
waves.update()
angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')
cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update()
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
Explanation: Here we shut off the sediment supply completely.
End of explanation |
2,652 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DiscreteDP Example
Step2: Here, in the state space we include states that are not reached
due to the constraint that the asset can be serviced at most one per year,
i.e., those pairs of the age of asset $a$ and the number of services $s$
such that $s \geq a$.
One can alternatively define the state space excluding those states;
see the section Alternative formulation below.
Step3: Alternative formulation
Define the state space excluding the age-serv pairs that do not realize
Step4: We follow the state-action pairs formulation approach. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import quantecon as qe
from quantecon.markov import DiscreteDP
maxage = 5 # Maximum asset age
repcost = 75 # Replacement cost
mancost = 10 # Maintainance cost
beta = 0.9 # Discount factor
m = 3 # Number of actions; 0: keep, 1: service, 2: replace
# Construct the state space which is two-dimensional
s0 = np.arange(1, maxage+1) # Possible ages
s1 = np.arange(maxage) # Possible servicings
S = qe.cartesian([s0, s1]) # State space
n = len(S) # Number of states
S
Explanation: DiscreteDP Example: Asset Replacement with Maintenance
Daisuke Oyama
Faculty of Economics, University of Tokyo
From Miranda and Fackler, <i>Applied Computational Economics and Finance</i>, 2002,
Section 7.6.3
End of explanation
# We need a routine to get the index of an age-serv pair
def getindex(age, serv, S):
    """Get the index of [age, serv] in S.

    We know that elements in S are aligned in a lexicographic order.
    """
n = len(S)
for i in range(n):
if S[i, 0] == age:
for k in range(n-i):
if S[i+k, 1] == serv:
return i+k
# Profit function as a function of the age and the number of service
def p(age, serv):
return (1 - (age - serv)/5) * (50 - 2.5 * age - 2.5 * age**2)
# Reward array
R = np.empty((n, m))
R[:, 0] = p(S[:, 0], S[:, 1])
R[:, 1] = p(S[:, 0], S[:, 1]+1) - mancost
R[:, 2] = p(0, 0) - repcost
# Infeasible actions
for serv in range(maxage):
R[getindex(maxage, serv, S), [0, 1]] = -np.inf
R
# (Degenerate) transition probability array
Q = np.zeros((n, m, n))
for i in range(n):
Q[i, 0, getindex(min(S[i, 0]+1, maxage), S[i, 1], S)] = 1
Q[i, 1, getindex(min(S[i, 0]+1, maxage), min(S[i, 1]+1, maxage-1), S)] = 1
Q[i, 2, getindex(1, 0, S)] = 1
# Create a DiscreteDP
ddp = DiscreteDP(R, Q, beta)
# Solve the dynamic optimization problem (by policy iteration)
res = ddp.solve()
# Number of iterations
res.num_iter
# Optimal policy
res.sigma
# Optimal actions for reachable states
for i in range(n):
if S[i, 0] > S[i, 1]:
print(S[i], res.sigma[i])
# Simulate the controlled Markov chain
res.mc.state_values = S # Set the state values
initial_state_value = [1, 0]
nyrs = 12
spath = res.mc.simulate(nyrs+1, init=initial_state_value)
# Plot sample paths of the age of asset (0th coordinate of `spath`)
# and the number of services (1st coordinate of `spath`)
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
captions = ['Age of Asset', 'Number of Services']
for i, caption in zip(range(2), captions):
axes[i].plot(spath[:, i])
axes[i].set_xlim(0, 12)
axes[i].set_xlabel('Year')
axes[i].set_ylabel(caption)
axes[i].set_title('Optimal State Path: ' + caption)
axes[0].set_yticks(np.linspace(1, 4, 4, endpoint=True))
axes[0].set_ylim(1, 4)
axes[1].set_yticks(np.linspace(0, 2, 3, endpoint=True))
axes[1].set_ylim(0, 2.25)
plt.show()
Explanation: Here, in the state space we include states that are not reached
due to the constraint that the asset can be serviced at most one per year,
i.e., those pairs of the age of asset $a$ and the number of services $s$
such that $s \geq a$.
One can alternatively define the state space excluding those states;
see the section Alternative formulation below.
End of explanation
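# Optional check (relies on quantecon's MarkovChain.stationary_distributions):
# the long-run distribution of the optimally controlled chain shows which
# (age, services) states are actually visited under the optimal policy.
stationary = res.mc.stationary_distributions[0]
for state, prob in zip(S, stationary):
    if prob > 1e-9:
        print(state, round(prob, 3))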
# Construct the state space which is two-dimensional
s0 = np.arange(1, maxage+1) # Possible ages
s1 = np.arange(maxage) # Possible servicings
S = qe.cartesian([s0, s1]) # Including infeasible pairs as previously
S = S[S[:, 0] > S[:, 1]] # Exclude infeasible pairs
n = len(S) # Number of states
S
Explanation: Alternative formulation
Define the state space excluding the age-serv pairs that do not realize:
End of explanation
# Reward array
R = np.empty((n, m))
for i, (age, serv) in enumerate(S):
R[i, 0] = p(age, serv) if age < maxage else -np.infty
R[i, 1] = p(age, serv+1) - mancost if age < maxage else -np.infty
R[i, 2] = p(0, 0) - repcost
R
# Remove the state-action pairs yielding a reward negative infinity
s_indices, a_indices = np.where(R > -np.infty)
R = R[s_indices, a_indices]
R
# Number of feasible state-action pairs
L = len(R)
# (Degenerate) transition probability array
Q = np.zeros((L, n)) # One may use a scipy.sparse matrix for a larger problem
it = np.nditer((s_indices, a_indices), flags=['c_index'])
for s, a in it:
i = it.index
if a == 0:
Q[i, getindex(min(S[s, 0]+1, maxage), S[s, 1], S)] = 1
elif a == 1:
Q[i, getindex(min(S[s, 0]+1, maxage), min(S[s, 1]+1, maxage-1), S)] = 1
else:
Q[i, getindex(1, 0, S)] = 1
# Create a DiscreteDP
ddp = DiscreteDP(R, Q, beta, s_indices, a_indices)
# Solve the dynamic optimization problem (by policy iteration)
res = ddp.solve()
# Number of iterations
res.num_iter
# Optimal policy
res.sigma
# Simulate the controlled Markov chain
res.mc.state_values = S # Set the state values
initial_state_value = [1, 0]
nyrs = 12
spath = res.mc.simulate(nyrs+1, init=initial_state_value)
# Plot sample paths of the age of asset (0th coordinate of `spath`)
# and the number of services (1st coordinate of `spath`)
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
captions = ['Age of Asset', 'Number of Services']
for i, caption in zip(range(2), captions):
axes[i].plot(spath[:, i])
axes[i].set_xlim(0, 12)
axes[i].set_xlabel('Year')
axes[i].set_ylabel(caption)
axes[i].set_title('Optimal State Path: ' + caption)
axes[0].set_yticks(np.linspace(1, 4, 4, endpoint=True))
axes[0].set_ylim(1, 4)
axes[1].set_yticks(np.linspace(0, 2, 3, endpoint=True))
axes[1].set_ylim(0, 2.25)
plt.show()
Explanation: We follow the state-action pairs formulation approach.
End of explanation |
2,653 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Don't forget to delete the hdmi_out and hdmi_in when finished
Motion Blur Filter Example
In this notebook, we will demonstrate how to use the motion blur filter. This filter shows that partially reconfigurable modules can use Xilinx IP cores. This filter blurs the video feed horizontally. The length of the blur is determined by a register in the module. This register is controlled by a Python slider widget.
<img src="data/motion.jpg"/>
This filter works by adding up the RGB values of the pixels to the left of the pixel being displayed and then dividing by the number of pixels. The number of pixels to blur is determined by a register. A Xilinx divider core is needed to perform the division, because both the numerator and the denominator are variables.
1. Download base overlay to the board
Ensure that the camera is not connected to the board. Run the following script to provide the PYNQ with its base overlay.
Step1: 2. Connect camera
Physically connect the camera to the HDMI-in port of the PYNQ. Run the following code to instruct the PYNQ to capture the video from the camera and to begin streaming video to your monitor (connected to the HDMI-out port).
Step2: 3. Program board
Run the following script to download the Motion Blur Filter to the PYNQ.
Step3: 4. Create a user interface
We will communicate with the filter using a nice user interface. Run the following code to activate that interface.
Step4: 5. Exploration
Move the slider above to change the length of the blur. When the slider is set to zero there is no blur. Notice how quickly the filter responds to the movement of the slider.
6. Clean up
When you are done with the filter, run the following code to stop the video stream | Python Code:
from pynq.drivers.video import HDMI
from pynq import Bitstream_Part
from pynq.board import Register
from pynq import Overlay
Overlay("demo.bit").download()
Explanation: Don't forget to delete the hdmi_out and hdmi_in when finished
Motion Blur Filter Example
In this notebook, we will demonstrate how to use the motion blur filter. This filter shows that partially reconfigurable modules can use Xilinx IP cores. This filter blurs the video feed horizontally. The length of the blur is determined by a register in the module. This register is controlled by a Python slider widget.
<img src="data/motion.jpg"/>
This filter works by adding up the RGB values of the pixels to the left of the pixel being displayed and then dividing by the number of pixels. The number of pixels to blur is determined by a register. A Xilinx divider core is needed to perform the division, because both the numerator and the denominator are variables.
1. Download base overlay to the board
Ensure that the camera is not connected to the board. Run the following script to provide the PYNQ with its base overlay.
End of explanation
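# Software sketch (not the FPGA implementation) of the blur described above:
# each output pixel is the average of itself and the `length` pixels to its
# left, clamped at the image border. `frame` is assumed to be an HxWx3 array.
import numpy as np

def motion_blur_rows(frame, length):
    blurred = np.zeros_like(frame, dtype=np.float32)
    f = frame.astype(np.float32)
    for x in range(f.shape[1]):
        lo = max(0, x - length)
        blurred[:, x] = f[:, lo:x + 1].mean(axis=1)  # average over the horizontal window
    return blurred.astype(frame.dtype)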
hdmi_in = HDMI('in')
hdmi_out = HDMI('out', frame_list=hdmi_in.frame_list)
hdmi_out.mode(2)
hdmi_out.start()
hdmi_in.start()
Explanation: 2. Connect camera
Physically connect the camera to the HDMI-in port of the PYNQ. Run the following code to instruct the PYNQ to capture the video from the camera and to begin streaming video to your monitor (connected to the HDMI-out port).
End of explanation
Bitstream_Part("motion_p.bit").download()
Explanation: 3. Program board
Run the following script to download the Motion Blur Filter to the PYNQ.
End of explanation
import ipywidgets as widgets
R0 =Register(0)
R0.write(255)
R0_s = widgets.IntSlider(
value=255,
min=0,
max=511,
step=1,
description='Blur:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='red'
)
def update_r0(*args):
R0.write(R0_s.value)
R0_s.observe(update_r0, 'value')
widgets.HBox([R0_s])
Explanation: 4. Create a user interface
We will communicate with the filter using a nice user interface. Run the following code to activate that interface.
End of explanation
hdmi_out.stop()
hdmi_in.stop()
del hdmi_out
del hdmi_in
Explanation: 5. Exploration
Move the slider above to change the length of the blur. When the slider is set to zero there is no blur. Notice how quickly the filter responds to the movement of the slider.
6. Clean up
When you are done with the filter, run the following code to stop the video stream
End of explanation |
2,654 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Protobuf Serialisation
This notebook documents how Acton serialises protobufs.
Protobufs can be serialised and deserialised individually using the built-in methods SerializeToString and ParseFromString
Step1: To serialise multiple protobufs into one file, we serialise each to a string, write the length of this string to a file, then write the string to the file. The length is needed because protobufs are not self-delimiting. We use an unsigned long long with the struct library to store the length.
Step2: We also want to store metadata in the resulting file. This is achieved by encoding the metadata as a bytestring and writing it before we write any protobufs. As with protobufs, we must store the length of the metadata before the metadata itself, and we again use an unsigned long long.
Reading the files back in is the inverse of the above; we simply unpack instead of packing and call ParseFromString. | Python Code:
# Serialising.
with open(path, 'wb') as proto_file:
proto_file.write(proto.SerializeToString())
# Deserialising. (from acton.proto.io)
proto = Proto()
with open(path, 'rb') as proto_file:
proto.ParseFromString(proto_file.read())
Explanation: Protobuf Serialisation
This notebook documents how Acton serialises protobufs.
Protobufs can be serialised and deserialised individually using the built-in methods SerializeToString and ParseFromString:
End of explanation
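# Quick round-trip sanity check (illustrative; `Proto` stands in for any
# generated message class, as it does elsewhere in this document).
original = Proto()
data = original.SerializeToString()
restored = Proto()
restored.ParseFromString(data)
assert restored == original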
for proto in protos:
proto = proto.SerializeToString()
length = struct.pack('<Q', len(proto))
proto_file.write(length)
proto_file.write(proto)
Explanation: To serialise multiple protobufs into one file, we serialise each to a string, write the length of this string to a file, then write the string to the file. The length is needed because protobufs are not self-delimiting. We use an unsigned long long with the struct library to store the length.
End of explanation
length = proto_file.read(8) # 8 = long long
while length:
length, = struct.unpack('<Q', length)
proto = Proto()
proto.ParseFromString(proto_file.read(length))
length = proto_file.read(8)
Explanation: We also want to store metadata in the resulting file. This is achieved by encoding the metadata as a bytestring and writing it before we write any protobufs. As with protobufs, we must store the length of the metadata before the metadata itself, and we again use an unsigned long long.
Reading the files back in is the inverse of the above; we simply unpack instead of packing and call ParseFromString.
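# Hedged sketch of the metadata handling described above (function names are
# illustrative): the metadata bytestring is length-prefixed with an unsigned
# long long, exactly like each protobuf that follows it.
import struct

def write_metadata(proto_file, metadata):
    proto_file.write(struct.pack('<Q', len(metadata)))
    proto_file.write(metadata)

def read_metadata(proto_file):
    length, = struct.unpack('<Q', proto_file.read(8))
    return proto_file.read(length)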
End of explanation |
2,655 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<H2>A normally distributed random variable</H2>
<P>
Assume $X$ is a random variable which is normally distributed
Step1: <H2>Sum of 2 normally distributed random variables</H2>
<P>
Assume $X$ and $Y$ are <B>independent</B> random variables and are both normally distributed; then the sum of the two random variables $X+Y$ will also be a random variable $Z$ that has a normal distribution with the following properties
Step2: <H3>Resulted mean and standard deviation</H3>
Step3: <H3>Theoretical mean and standard deviation</H3> | Python Code:
# Imports (assumed to come from an earlier notebook cell; added here so the block runs standalone)
import numpy as np
import matplotlib.pyplot as plt
from numpy import sqrt
from scipy.stats import norm

# create a normally distributed random variable with mu and sigma
mu = 28.74
sigma = 8.33 # standard deviation!
rv_X = norm(loc = mu, scale = sigma)
# plot the theoretical and empirical distributions
x = np.linspace(start = rv_X.ppf(0.001), stop = rv_X.ppf(0.999), num = 100)
plt.plot(x, rv_X.pdf(x), color = 'r', lw=2, label='theoretical');
plt.hist( rv_X.rvs(size = 1000), rwidth=.85, facecolor='k', normed=1, label='empirical');
plt.legend(frameon=0);
Explanation: <H2>A normally distributed random variable</H2>
<P>
Assume $X$ is a random variable which is normally distributed:
</P>
$X \sim N(\mu_X, \sigma_X^2),$
where $\mu_X \in \mathbb{R} $ is the mean (or location) and $\sigma_X^2 > 0$ is the variance (squared scale)
End of explanation
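# A small follow-up using the same frozen distribution: scipy exposes the CDF and
# related helpers directly on rv_X.
print(rv_X.cdf(mu))          # 0.5, by symmetry around the mean
print(rv_X.interval(0.95))   # central interval containing ~95% of the mass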
# create a second normally distributed random variable with mu and sigma
mu = 28.74
sigma = 8.33 # standard deviation!
rv_Y = norm(loc = mu, scale = sigma)
# The theoretical distrubution of Z
rv_Z = norm(loc = mu+mu, scale = sqrt(sigma**2+sigma**2))
# The empirical distribution of Z based on the sum of two random variables
data = rv_X.rvs(100) + rv_Y.rvs(100)
# Plot resulting distributions
x = np.linspace(start = rv_Z.ppf(0.001), stop = rv_Z.ppf(0.999), num = 100)
plt.plot(x, rv_Z.pdf(x), color = 'r', lw=2, label='theoretical');
plt.hist( data, rwidth=.85, facecolor='k', normed=1, label='empirical');
plt.legend(frameon=0);
Explanation: <H2>Sum of 2 normally distributed random variables</H2>
<P>
Assume $X$ and $Y$ are <B>independent</B> random variables and are both normally distributed; then the sum of the two random variables $X+Y$ will also be a random variable $Z$ that has a normal distribution with the following properties:
</P>
$Z \sim N(\mu_Z, \sigma_Z^2),$
The resulting mean is simply the sum of the two means: $\mu_Z= \mu_X+\mu_Y,$
and the variance is the sum of the two variances: $\sigma_Z^2 = \sigma_X^2 + \sigma_Y^2,$
or alternatively, the standard deviation: $\sigma_Z = \sqrt{\sigma_X^2 + \sigma_Y^2}$
End of explanation
# Location and scale from data
print('Location = %f, scale = %f'%norm.fit(data))
Explanation: <H3>Resulted mean and standard deviation</H3>
End of explanation
# Theoretical location and scale
mu_Z = mu + mu
sigma_Z = sqrt(sigma**2 + sigma**2)
print('Location = %f, scale =%f'%(mu_Z , sigma_Z))
Explanation: <H3>Theoretical mean and standard deviation</H3>
End of explanation |
2,656 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
tridesclous example with olfactory bulb dataset
Step1: DataIO = define datasource and working dir
tridesclous provides some datasets that can be downloaded.
Note this dataset contains 3 trials in 3 different files. (the original contains more!)
Each file is considered as a segment; tridesclous deals with this automatically.
These 3 files are in RawData format, which means a binary format with interleaved channels.
Step2: CatalogueConstructor
Step3: Use automatic parameters and apply the whole chain
tridesclous proposes an automatic parameter choice and can apply all the steps in one function.
Step4: apply all catalogue steps
Step5: Open CatalogueWindow for visual check
At the end we can save the catalogue.
Step6: Peeler
Use automatic parameters.
Step7: Open PeelerWindow for visual checking | Python Code:
%matplotlib inline
import time
import numpy as np
import matplotlib.pyplot as plt
import tridesclous as tdc
from tridesclous import DataIO, CatalogueConstructor, Peeler
Explanation: tridesclous example with olfactory bulb dataset
End of explanation
#download dataset
localdir, filenames, params = tdc.download_dataset(name='olfactory_bulb')
print(filenames)
print(params)
print()
#create a DataIO
import os, shutil
dirname = 'tridesclous_olfactory_bulb'
if os.path.exists(dirname):
#remove is already exists
shutil.rmtree(dirname)
dataio = DataIO(dirname=dirname)
# feed DataIO
dataio.set_data_source(type='RawData', filenames=filenames, **params)
dataio.add_one_channel_group(channels=list(range(14)))
print(dataio)
Explanation: DataIO = define datasource and working dir
tridesclous provides some datasets that can be downloaded.
Note this dataset contains 3 trials in 3 different files. (the original contains more!)
Each file is considered as a segment; tridesclous deals with this automatically.
These 3 files are in RawData format, which means a binary format with interleaved channels.
End of explanation
cc = CatalogueConstructor(dataio=dataio)
print(cc)
Explanation: CatalogueConstructor
End of explanation
from pprint import pprint
params = tdc.get_auto_params_for_catalogue(dataio, chan_grp=0)
pprint(params)
Explanation: Use automatic parameters and apply the whole chain
tridesclous proposes an automatic parameter choice and can apply all the steps in one function.
End of explanation
cc.apply_all_steps(params, verbose=True)
print(cc)
Explanation: apply all catalogue steps
End of explanation
%gui qt5
import pyqtgraph as pg
app = pg.mkQApp()
win = tdc.CatalogueWindow(cc)
win.show()
app.exec_()
# necessary if manual change
cc.make_catalogue_for_peeler()
Explanation: Open CatalogueWindow for visual check
At the end we can save the catalogue.
End of explanation
peeler_params = tdc.get_auto_params_for_peelers(dataio, chan_grp=0)
pprint(peeler_params)
catalogue = dataio.load_catalogue()
peeler = Peeler(dataio)
peeler.change_params(catalogue=catalogue, **peeler_params)
t1 = time.perf_counter()
peeler.run()
t2 = time.perf_counter()
print('peeler.run', t2-t1)
print()
for seg_num in range(dataio.nb_segment):
spikes = dataio.get_spikes(seg_num)
print('seg_num', seg_num, 'nb_spikes', spikes.size)
print(spikes[:3])
Explanation: Peeler
Use automatic parameters.
End of explanation
%gui qt5
import pyqtgraph as pg
app = pg.mkQApp()
win = tdc.PeelerWindow(dataio=dataio, catalogue=catalogue)
win.show()
app.exec_()
Explanation: Open PeelerWindow for visual checking
End of explanation |
2,657 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An OrderedDict is a dictionary subclass that remembers the order in which its contents are added.
Step1: A regular dict does not track the insertion order, and iterating over it produces the values in order based on how the keys are stored in the hash table, which is in turn influenced by a random value to reduce collisions. In an OrderedDict, by contrast, the order in which the items are inserted is remembered and used when creating an iterator.
Equality
A regular dict looks at its contents when testing for equality. An OrderedDict also considers the order in which the items were added.
Step2: In this case, since the two ordered dictionaries are created from values in a different order, they are considered to be different.
Reordering
It is possible to change the order of the keys in an OrderedDict by moving them to either the beginning or the end of the sequence using move_to_end(). | Python Code:
import collections
print('Regular dictionary:')
d = {}
d['a'] = 'A'
d['b'] = 'B'
d['c'] = 'C'
for k, v in d.items():
print(k, v)
print('\nOrderedDict:')
d = collections.OrderedDict()
d['a'] = 'A'
d['b'] = 'B'
d['c'] = 'C'
for k, v in d.items():
print(k, v)
Explanation: An OrderedDict is a dictionary subclass that remembers the order in which its contents are added.
End of explanation
import collections
print('dict :', end=' ')
d1 = {}
d1['a'] = 'A'
d1['b'] = 'B'
d1['c'] = 'C'
d2 = {}
d2['c'] = 'C'
d2['b'] = 'B'
d2['a'] = 'A'
print(d1 == d2)
print('OrderedDict:', end=' ')
d1 = collections.OrderedDict()
d1['a'] = 'A'
d1['b'] = 'B'
d1['c'] = 'C'
d2 = collections.OrderedDict()
d2['c'] = 'C'
d2['b'] = 'B'
d2['a'] = 'A'
print(d1 == d2)
Explanation: A regular dict does not track the insertion order, and iterating over it produces the values in order based on how the keys are stored in the hash table, which is in turn influenced by a random value to reduce collisions. In an OrderedDict, by contrast, the order in which the items are inserted is remembered and used when creating an iterator.
Equality
A regular dict looks at its contents when testing for equality. An OrderedDict also considers the order in which the items were added.
End of explanation
import collections
d = collections.OrderedDict(
[('a', 'A'), ('b', 'B'), ('c', 'C')]
)
print('Before:')
for k, v in d.items():
print(k, v)
d.move_to_end('b')
print('\nmove_to_end():')
for k, v in d.items():
print(k, v)
d.move_to_end('b', last=False)
print('\nmove_to_end(last=False):')
for k, v in d.items():
print(k, v)
Explanation: In this case, since the two ordered dictionaries are created from values in a different order, they are considered to be different.
Reordering
It is possible to change the order of the keys in an OrderedDict by moving them to either the beginning or the end of the sequence using move_to_end().
End of explanation |
2,658 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DV360 Report To Sheets
Move existing DV360 report into a Sheets tab.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter DV360 Report To Sheets Recipe Parameters
Specify either report name or report id to move a report.
The most recent valid file will be moved to the sheet.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute DV360 Report To Sheets
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: DV360 Report To Sheets
Move existing DV360 report into a Sheets tab.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'report_id':'', # DV360 report ID given in UI, not needed if name used.
'report_name':'', # Name of report, not needed if ID used.
'sheet':'', # Full URL to sheet being written to.
'tab':'', # Existing tab in sheet to write to.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter DV360 Report To Sheets Recipe Parameters
Specify either report name or report id to move a report.
The most recent valid file will be moved to the sheet.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dbm':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'report':{
'report_id':{'field':{'name':'report_id','kind':'integer','order':1,'default':'','description':'DV360 report ID given in UI, not needed if name used.'}},
'name':{'field':{'name':'report_name','kind':'string','order':2,'default':'','description':'Name of report, not needed if ID used.'}}
},
'out':{
'sheets':{
'sheet':{'field':{'name':'sheet','kind':'string','order':3,'default':'','description':'Full URL to sheet being written to.'}},
'tab':{'field':{'name':'tab','kind':'string','order':4,'default':'','description':'Existing tab in sheet to write to.'}},
'range':'A1'
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute DV360 Report To Sheets
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
2,659 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Functions
Storing individual Python commands for re-use is one thing. Creating a function that can be repeatedly applied to different input data is quite another, and of huge importance in coding.
In VBA there are two related concepts
Step3: We see that the line add(1, 2) is outside the function and so is executed. We can also call the function repeatedly
Step4: The lengthy comment at the start of the function is very useful to remind yourself later what the function should do. You can see this information by typing
Step5: You can also view this in spyder by typing add in the Object window of the Help tab in the top right.
We can save the function to a file and re-use the function by importing the file. Create a new file in the spyder editor containing
```python
def add(x, y)
Step7: if statements and flow control
We often need to make a decision whether to do something, or to do something else. In Visual Basic this uses an If statement
Step8: We see that the Visual Basic If statement becomes the lower case if, and the ElseIf is contracted to elif. The condition (in this case!) compares a variable, count, to a number, using the equality comparison ==. Once again, as in the case of functions, the line containing the if definition is ended with a colon (
Step9: Loops
We will often want to run the same code many times on similar input. Let us suppose we want to add $n$ to $3$, where $n$ is every number between $1$ and $5$. We could do
Step10: This is tedious and there's a high chance of errors.
In VBA you can define a loop that repeats commands as, for example
VB.NET
For n = 1 To 5
Z = 3 + n
Next n
In Python there is also a for loop
Step11: The syntax has similarities to the syntax for functions. The line defining the loop starts with for, specifies the values that n takes, and ends with a colon. The code that is executed inside the loop is indented.
As a short-hand for integer loops, we can use the range function
Step12: We see that
if two numbers are given, range returns all integers from the first number up to but not including the second in steps of $1$;
if one number is given, range starts from $0$;
if three numbers are given, the third is the step.
In fact Python will iterate over any collection of objects
Step13: This is very often used in Python code
Step14: We see that to access individual entries we use square brackets and the number of the entry, starting from $0$. All Python tuples and lists start from $0$. To check that it cannot be modified
Step15: We can use slicing to access many entries at once
Step16: As with the range function, the notation <start>
Step17: Lists
A list is a sequence with a size that can change, and whose entries can be modified
Step18: The same slicing notation can be used, and now can be used to assignment
Step19: Crucially, lists and tuples can contain anything. As with loops, there is no restriction on types, and things can be nested
Step20: Dictionaries
Both lists and tuples are ordered
Step21: As there is no order we access dictionaries using the key. To loop over a dictionary, we take advantage of Python's loose iteration rules
Step22: There is a shortcut to allow you to get both key and value in one go
Step23: Exercise
Write a dictionary with the structure
Step24: Numpy arrays
We've seen python's built-in lists for storing data, however for the numpy library contains the more powerful array datatype. Arrays are essentially a more powerful form of lists which make it easier to handle data. Most importantly, they allow us to apply operations to all elements of an array at once, rather than looping over the elements one-by-one.
To see this, let's create a list and a numpy array, both containing the same data.
Step25: Accessing elements of numpy arrays is very similar to accessing elements of lists, but with slightly less typing. To access elements from an n-dimensional list, we have to use multiple square brackets, e.g. l[0][4][7][8]. For a numpy array, we separate the indices using a comma
Step26: Let's say we now want to square every element of the array. For this 2d list, we would need a for loop
Step27: Note that here we used the function deepcopy from the copy module the copy the list l. If we had simply used squared = l, when we the assigned the elements of squared new values, this would also have changed the values in l. This is in contrast to the simple variables we saw before, where changing the value of one will leave the values of others unchanged.
For numpy arrays, applying operations across the entire array is much simpler
Step28: Numpy has a range of array manipulation routines for rearranging and manipulating elements, such as those below.
Step29: If you've used Matlab before, you may be familiar with logical indexing. This is a way of accessing elements of a array that satisfy some criteria, e.g. all the elements which are greater than 0. We can also do this with numpy arrays using boolean array indexing | Python Code:
def add(x, y):
Add two numbers
Parameters
----------
x : float
First input
y : float
Second input
Returns
-------
x + y : float
return x + y
add(1, 2)
Explanation: Functions
Storing individual Python commands for re-use is one thing. Creating a function that can be repeatedly applied to different input data is quite another, and of huge importance in coding.
In VBA there are two related concepts: subroutines and functions. Subroutines perform actions, functions return results (given inputs). In Python there is no distinction: any function can both return results and perform actions.
In VBA there is a standard layout. For subroutines we have
VB.NET
Sub name()
'
' Comments
'
Code
End Sub
For functions we have
VB.NET
Function name(arguments)
'
' Comments
'
name = ...
End Function
A similar structure holds in Python. Here we have
python
def name(arguments):
Comments
return value
The def keyword says that what follows is a function. Again, the name of the function follows the same rules and conventions as variables and files. The colon : at the end of the first line is essential: everything that follows that is indented will be the code to be executed when the function is called. The indentation is also essential. As soon as the indentation stops, the function stops (like End Function in VBA).
Here is a simple example, that you can type directly into the console or into a file:
End of explanation
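To emphasise how indentation delimits the function body, here is one extra tiny sketch (an illustration, not part of the original notes):
```python
# The indented lines form the body of double(); the first unindented line is outside it.
def double(x):
    result = 2 * x
    return result

print(double(5))   # executed outside the function
```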
print(add(3, 4))
print(add(10.61, 5.99))
Explanation: We see that the line add(1, 2) is outside the function and so is executed. We can also call the function repeatedly:
End of explanation
help(add)
Explanation: The lengthy comment at the start of the function is very useful to remind yourself later what the function should do. You can see this information by typing
End of explanation
import script2
script2.add(1, 2)
Explanation: You can also view this in spyder by typing add in the Object window of the Help tab in the top right.
We can save the function to a file and re-use the function by importing the file. Create a new file in the spyder editor containing
```python
def add(x, y):
Add two numbers
Parameters
----------
x : float
First input
y : float
Second input
Returns
-------
x + y : float
return x + y
```
and save it as script2.py. Then in the console check that it works as expected:
End of explanation
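As an aside, the function can also be imported by name rather than through the module prefix; a short sketch, assuming script2.py is in the current working directory (or elsewhere on the Python path):
```python
from script2 import add   # bring the function itself into the namespace
add(2.5, 3.5)
```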
count = 0
if count == 0:
message = "There are no items."
elif count == 1:
message = "There is 1 item."
else:
    message = "There are " + str(count) + " items."
print(message)
Explanation: if statements and flow control
We often need to make a decision whether to do something, or to do something else. In Visual Basic this uses an If statement:
```VB.Net
Dim count As Integer = 0
Dim message As String
If count = 0 Then
message = "There are no items."
ElseIf count = 1 Then
message = "There is 1 item."
Else
message = "There are " & count & " items."
End If
```
The equivalent Python code is similar:
End of explanation
def fibonacci(n):
if n == 1 or n == 2:
return 1
else:
return fibonacci(n-1) + fibonacci(n-2)
print('F_1 = ', fibonacci(1))
print('F_2 = ', fibonacci(2))
print('F_5 = ', fibonacci(5))
print('F_10 = ', fibonacci(10))
Explanation: We see that the Visual Basic If statement becomes the lower case if, and the ElseIf is contracted to elif. The condition (in this case!) compares a variable, count, to a number, using the equality comparison ==. Once again, as in the case of functions, the line containing the if definition is ended with a colon (:), and the commands to be executed are indented.
We can include as many branches of the if statement as we like using multiple elif statements. We do not need to use any elif statements, nor an else, unless we want (or need) to. We can nest if statements inside each other.
Exercise
Write a function that, given an integer $n$, returns the $n^{\text{th}}$ Fibonacci number $F_n = F_{n-1} + F_{n-2}$, where $F_1 = 1 = F_2$. Check that it works for $n = 1, 2, 5, 10$ ($F_5 = 5$ and $F_{10} = 55$).
End of explanation
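For comparison, here is one possible iterative answer to the same exercise (a sketch; other solutions are equally valid). It avoids the repeated work performed by the recursive calls above:
```python
def fibonacci_iterative(n):
    a, b = 1, 1
    for _ in range(n - 2):
        a, b = b, a + b
    return 1 if n <= 2 else b

print(fibonacci_iterative(5), fibonacci_iterative(10))   # 5 55
```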
print(add(3, 1))
print(add(3, 2))
print(add(3, 3))
print(add(3, 4))
print(add(3, 5))
Explanation: Loops
We will often want to run the same code many times on similar input. Let us suppose we want to add $n$ to $3$, where $n$ is every number between $1$ and $5$. We could do:
End of explanation
for n in 1, 2, 3, 4, 5:
print(add(3, n))
print("Loop has ended")
Explanation: This is tedious and there's a high chance of errors.
In VBA you can define a loop that repeats commands as, for example
VB.NET
For n = 1 To 5
Z = 3 + n
Next n
In Python there is also a for loop:
End of explanation
for n in range(1, 6):
print("n =", n)
for m in range(3):
print("m =", m)
for k in range(2, 7, 2):
print("k =", k)
Explanation: The syntax has similarities to the syntax for functions. The line defining the loop starts with for, specifies the values that n takes, and ends with a colon. The code that is executed inside the loop is indented.
As a short-hand for integer loops, we can use the range function:
End of explanation
for thing in 1, 2.5, "hello", add:
print("thing is ", thing)
Explanation: We see that
if two numbers are given, range returns all integers from the first number up to but not including the second in steps of $1$;
if one number is given, range starts from $0$;
if three numbers are given, the third is the step.
In fact Python will iterate over any collection of objects: they do not have to be integers:
End of explanation
t1 = (0, 1, 2, 3, 4, 5)
print(t1[0])
print(t1[3])
Explanation: This is very often used in Python code: if you have some way of collecting things together, Python will happily iterate over them all.
Containers, sequences, lists, arrays
So what are the Python ways of collecting things together? In VBA, there are arrays:
VB.NET
Dim A(2) AS DOUBLE
defines an array, or vector, of length $3$, starting from $0$, of double precision floating point numbers.
VB.NET
Dim B() AS DOUBLE
defines an array, or vector, of arbitrary length, starting from $0$, of double precision floating point numbers. You can also start arrays from values other than $0$. The individual entries are accessed and modified using, for example, A(0).
In Python there are many ways of collecting objects together. The closest to VBA are tuples and lists.
Tuples
A tuple is a sequence with fixed size, whose entries cannot be modified:
End of explanation
t1[0] = 1
Explanation: We see that to access individual entries we use square brackets and the number of the entry, starting from $0$. All Python tuples and lists start from $0$. To check that it cannot be modified:
End of explanation
print(t1[1:4])
Explanation: We can use slicing to access many entries at once:
End of explanation
print(t1[-1])
Explanation: As with the range function, the notation <start>:<end> returns the entries from (and including) <start> up to, but not including, <end>.
We can use negative numbers to access from the right of the sequence: -1 is the last entry, -2 the next-to-last, and so on:
End of explanation
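Slices also accept an optional step, and negative positions work inside slices just as they do for single entries; a quick sketch:
```python
t2 = (0, 1, 2, 3, 4, 5)
print(t2[::2])    # every second entry: (0, 2, 4)
print(t2[1:-1])   # drop the first and last entries: (1, 2, 3, 4)
```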
l1 = [0, 1, 2, 3, 4, 5]
print(l1[3])
l1[3] = 7
print(l1[3])
l1.append(6)
print(l1)
Explanation: Lists
A list is a sequence with a size that can change, and whose entries can be modified:
End of explanation
l1[0:2] = l1[4:6]
print(l1)
Explanation: The same slicing notation can be used, and now can be used to assignment:
End of explanation
l2 = [0, 1.2, "hello", ["a", 3, 4.5], (0, (1.1, 2.3, 4))]
print(l2[1])
print(l2[3][0])
Explanation: Crucially, lists and tuples can contain anything. As with loops, there is no restriction on types, and things can be nested:
End of explanation
d1 = {"omega": 1.0, "Gamma": 5.7, "N": 100}
print(d1["Gamma"])
Explanation: Dictionaries
Both lists and tuples are ordered: there are accessed by an integer giving there location in the sequence. This doesn't always make sense. Consider an algorithm which depends on parameters $\omega, \Gamma, N$. We want to keep the parameters together, but there's no logical order to them. Instead we can use a dictionary, which is an unordered Python container:
End of explanation
for key in d1:
print("Key is", key, "value is", d1[key])
Explanation: As there is no order we access dictionaries using the key. To loop over a dictionary, we take advantage of Python's loose iteration rules:
End of explanation
for key, value in d1.items():
print("Key is", key, "value is", value)
Explanation: There is a shortcut to allow you to get both key and value in one go:
End of explanation
boaty = {'first name' : 'Boaty',
'last name' : 'McBoatface',
'student ID' : 123456,
'project' : 'Surveying the arctic ocean'}
def f_name(d):
print("My name is {} {}".format(d['first name'], d['last name']))
def f_project(d):
print("Student {} is doing project {}".format(d['student ID'], d['project']))
f_name(boaty)
f_project(boaty)
Explanation: Exercise
Write a dictionary with the structure:
d = {'first name' : ...,
'last name' : ...,
'student ID' : ...,
'project' : ...}
Fill it in with suitable values. Write two functions f_name and f_project. Each should take as input a dictionary.
f_name should print "My name is <first name> <last name>"
f_project should print "Student <student ID> is doing project <project>"
where <X> should fill in the appropriate value from the dictionary.
End of explanation
import numpy
# python list
l = [[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]]
a = numpy.array([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]])
print('list l = {}'.format(l))
print('numpy array a = {}'.format(a))
Explanation: Numpy arrays
We've seen Python's built-in lists for storing data; however, the numpy library contains the more powerful array datatype. Arrays are essentially a more powerful form of lists which make it easier to handle data. Most importantly, they allow us to apply operations to all elements of an array at once, rather than looping over the elements one-by-one.
To see this, let's create a list and a numpy array, both containing the same data.
End of explanation
print(l[1][2])
print(a[1,2])
Explanation: Accessing elements of numpy arrays is very similar to accessing elements of lists, but with slightly less typing. To access elements from an n-dimensional list, we have to use multiple square brackets, e.g. l[0][4][7][8]. For a numpy array, we separate the indices using a comma: a[0, 4, 7, 8].
End of explanation
import copy
squared = copy.deepcopy(l)
for i in range(3):
for j in range(3):
squared[i][j] = l[i][j]**2
print(squared)
Explanation: Let's say we now want to square every element of the array. For this 2d list, we would need a for loop:
End of explanation
print(a**2)
Explanation: Note that here we used the function deepcopy from the copy module to copy the list l. If we had simply used squared = l, then when we assigned the elements of squared new values, this would also have changed the values in l. This is in contrast to the simple variables we saw before, where changing the value of one will leave the values of others unchanged.
For numpy arrays, applying operations across the entire array is much simpler:
End of explanation
# transpose
a.T
# reshape
numpy.reshape(a, (1,9))
# stack arrays horizontally
numpy.hstack((a,a,a))
Explanation: Numpy has a range of array manipulation routines for rearranging and manipulating elements, such as those below.
End of explanation
a[a > 5]
Explanation: If you've used Matlab before, you may be familiar with logical indexing. This is a way of accessing the elements of an array that satisfy some criterion, e.g. all the elements which are greater than 0. We can also do this with numpy arrays using boolean array indexing:
End of explanation |
2,660 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preprocessing Pipeline
Create a BIDSDataGrabber Node to read data files
Create a IdentityInterface Node to iterate over multiple Subjects
Create following Nodes for preprocessing
Step1: Define Paths
Let's set the directory names
Step2: Adding module to read the parameters and paths from json file
Step3: Checking the Data directory Structure
Step4: To get the metadata associated with a subject. [Takes as argumment the filename of subject ]
Create a list of subjects
Step5: Create our own custom function - BIDSDataGrabber using a Function Interface.
Step6: Wrap it inside a Node
Step7: Return TR
Step8: Skipping 4 starting scans
Extract ROI for skipping first 4 scans of the functional data
Arguments
Step9: Slice time correction
Created a Node that does slice time correction
Arguments
Step10: Motion Correction
Motion correction is done using fsl's mcflirt. It alligns all the volumes of a functional scan to each other
Step11: %%bash
cat /tmp/tmpc2wmdeci/mcflirt/sub-28741_task-rest_run-1_bold_mcf.nii.par
Skull striping
I used fsl's BET
Step12: Note
Step13: Atlas
Step14: Resample
I needed to resample the anatomical file from 1mm to 2mm. Because registering a 1mm file was taking a huge amount of time.
Step15: Matrix operations
For concatenating the transformation matrices
Step16: For finding the inverse of a transformation matrix
Step17: Extracting the mean brain
Step18: Creating mask using the mean brain
Step19: Apply Mask
Step20: Datasink
I needed to define the structure of what files are saved and where.
Step21: To create the substitutions I looked the datasink folder where I was redirecting the output. I manually selected the part of file/folder name that I wanted to change and copied below to be substituted.
TODO
Step22: Apply Mask to functional data
Mean file of the motion corrected functional scan is sent to skullStrip to get just the brain and the mask_image. Mask_image is just a binary file (containing 1 where brain is present and 0 where it isn't).
After getting the mask_image form skullStrip, apply that mask to aligned functional image to extract its brain and remove the skull
Step23: Things learnt
Step24: Some nodes needed for Co-registration and Normalization
I observed using fslsyes that the brain is enlarged if you Normalize a brain resampled to 2mm brain. This in turn causes the functional data to enlarge as well after normalization. So, I will apply MNI152_2mm brain mask to the resample brain after it has been normalized.
For that let's first create a Node - anat2std_reg_masking that applies the MNI152_2mm brain mask to the Output of anat2std_reg.
Step25: I wanted to use the MNI file as input to the workflow so I created an Identity Node that reads the MNI file path and outputs the same MNI file path. Then I connected this node to whereever it was needed.
Step26: Band Pass Filtering
Let's do a band pass filtering on the data using the code from https
Step27: Following is a Join Node that collects the preprocessed file paths and saves them in a file
Step28: AFNI's filter is working good
Step29: Lets see the number of regions in the atlas and display the atlas.
Step30: Workflow for atlas registration from std to functional
Step31: Co-Registration, Normalization and Bandpass Workflow
Co-registration means alligning the func to anat
Normalization means aligning func/anat to standard
Applied band pass filtering in range - highpass=0.008, lowpass=0.08
Step32: Observation
Step33: Main Workflow
Step34: Summary
Step35: Summary [Incomplete]
wf.connect([(infosource, BIDSDataGrabber, [('data_dir','data_dir'), ('subject_id', 'subject_id'),]),
(BIDSDataGrabber, extract, [('func_file_path','in_file')]),
(extract,slicetimer,[('roi_file','in_file')]),
(slicetimer,mcflirt,[('slice_time_corrected_file','in_file')]),
(mcflirt, skullStrip, [('mean_img', 'in_file')]),
(mcflirt,applyMask,[('out_file','brain_file')]),
(skullStrip, applyMask, [('mask_file', 'mask_file')]),
])
In the above created workflow the infosource node iterates over the subject_id, it creates a Node and for each Subject ID it sends data_dir (path where the data resides) and the subject specific subject_id to BIDSDataGrabber Node.
BIDSDataGrabber Node accepts the above 2 parameters, calls the function get_nifti_filenames(subject_id,data_dir)which returns the path of the anatomical and BOLD files of the subject with given subject_id and hence the Node produces output that I call func_file_path and anat_file_path. I have used only func_file_pathright now.
The file path denoted by 'func_file_path' is then fed as input to extract that removes 4 initial brain volumes of the functional scan.
Its output is called - slice_time_corrected_file which is fed to mcflirt node to correct the movion between volumes of an individual subject. This is called Motion Correction.
In next step the mean_image from mcflirt is sent to skullStrip to get the mask. The role of skullStrip is just to obtain mask from the mean EPI image.
The mask got above is then applied to the functional volume to get rif of skull.
The final results are stored in the directory
Step36: $ \alpha_2 $ -- Just checking latex embedding | Python Code:
from bids.grabbids import BIDSLayout
from nipype.interfaces.fsl import (BET, ExtractROI, FAST, FLIRT, ImageMaths,
MCFLIRT, SliceTimer, Threshold,Info, ConvertXFM,MotionOutliers)
from nipype.interfaces.afni import Resample
from nipype.interfaces.io import DataSink
from nipype.pipeline import Node, MapNode, Workflow, JoinNode
from nipype.interfaces.utility import IdentityInterface, Function
import os
from os.path import join as opj
from nipype.interfaces import afni
import nibabel as nib
Explanation: Preprocessing Pipeline
Create a BIDSDataGrabber Node to read data files
Create a IdentityInterface Node to iterate over multiple Subjects
Create following Nodes for preprocessing: (Based on Nan-kuei Chen's resting state analysis pipeline:
[-] convert data to nii in LAS orientation (Skip if NYU is already in LAS Orientation)
[x] Exclude 4 volumes from the functional scan
[x] slice time correction
[x] motion correction, {[Save motion parameter]}
[x] Skull stripping and mask generation using mean of functional scan got using mcflirt
[x] Apply mask to Functional image
[x] Co-Registration with Anatomical Image
[x] normalize functional data
[-] regress out WM/CSF - Not doing coz of the debate that WM also has some activations
[x] bandpass filter
Embed them into a workflow
Do the Preprocessing of 4 subjects
End of explanation
os.chdir('/home1/varunk/Autism-Connectome-Analysis-brain_connectivity/notebooks/')
!pwd
Explanation: Define Paths
Let's set the directory names:
1. base_directory : The directory where all the output of my program will be saved
2. I have created 2 workflows, one inside another:
3. parent_wf_directory: The name of the folder where the top level workflow's output is saved
4. child_wf_directory: The name of the folder where the second level workflow's output is saved
5. data_directory: Directory where the BIDS data is stored.
End of explanation
import json
# Paths
path_cwd = os.getcwd()
path_split_list = path_cwd.split('/')
s = path_split_list[0:-2] # for getting to the parent dir of pwd
s = opj('/',*s) # *s converts list to path, # very important to add '/' in the begining so it is read as directory later
# json_path = opj(data_directory,'task-rest_bold.json')
json_path = '../scripts/json/paths.json'
with open(json_path, 'rt') as fp:
task_info = json.load(fp)
# base_directory = opj(s,'result')
# parent_wf_directory = 'preprocessPipeline_ABIDE2_GU1_withfloat'
# child_wf_directory = 'coregistrationPipeline'
# data_directory = opj(s,"data/ABIDE2-BIDS/GU1")
# datasink_name = 'datasink_preprocessed_ABIDE2_GU1_withfloat'
base_directory = opj(s,task_info["base_directory_for_results"])
motion_correction_bet_directory = task_info["motion_correction_bet_directory"]
parent_wf_directory = task_info["parent_wf_directory"]
coreg_reg_directory = task_info["coreg_reg_directory"]
atlas_resize_reg_directory = task_info["atlas_resize_reg_directory"]
# data_directory = opj(s,task_info["data_directory"])
data_directory = opj(s,'data/NYU_Cocaine-BIDS')
datasink_name = task_info["datasink_name"]
atlasPath = opj(s,task_info["atlas_path"])
# mask_file = '/media/varun/LENOVO4/Projects/result/preprocessPipeline/coregistrationPipeline/_subject_id_0050952/skullStrip/sub-0050952_T1w_resample_brain_mask.nii.gz'
# os.chdir(path)
s# data_directory# path_cwd
layout = BIDSLayout(data_directory)
# number_of_subjects = 4 # Number of subjects you wish to preprocess
number_of_subjects = len(layout.get_subjects())
Explanation: Adding module to read the parameters and paths from json file
End of explanation
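For reference, a hedged sketch of what ../scripts/json/paths.json is expected to contain: the key names are exactly the ones read above, but every value here is only a placeholder:
```python
# Placeholder example of paths.json (values are illustrative only).
example_task_info = {
    "base_directory_for_results": "result",
    "motion_correction_bet_directory": "motion_correction_bet",
    "parent_wf_directory": "preprocessPipeline",
    "coreg_reg_directory": "coregistrationPipeline",
    "atlas_resize_reg_directory": "atlas_resize_reg",
    "data_directory": "data/ABIDE1/RawDataBIDs",
    "datasink_name": "datasink_preprocessed",
    "atlas_path": "atlas/fullbrain_atlas_thr0-2mm.nii.gz",
}
```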
# !tree /home/jovyan/work/preprocess/data/ABIDE-BIDS/NYU/
len(layout.get_subjects()) # working! Gives us the list of all the subjects
layout.get_subjects()
Explanation: Checking the Data directory Structure
End of explanation
subject_list = (layout.get_subjects())[0:number_of_subjects]
layout.get()
# subject_list[960:980]
# To debug some error
for subject_id in subject_list:
# subject_id = '50273'
anat_file_path = [f.filename for f in layout.get(subject=subject_id, type='T1w', extensions=['nii', 'nii.gz'])]
func_file_path = [f.filename for f in layout.get(subject=subject_id, type='bold', extensions=['nii', 'nii.gz'])]
print('In Subject: ',subject_id)
x = anat_file_path[0]
y = func_file_path[0]
# anat_file_path,func_file_path
Explanation: To get the metadata associated with a subject. [Takes as argument the filename of the subject]
Create a list of subjects
End of explanation
def get_nifti_filenames(subject_id,data_dir):
# Remember that all the necesary imports need to be INSIDE the function for the Function Interface to work!
from bids.grabbids import BIDSLayout
layout = BIDSLayout(data_dir)
anat_file_path = [f.filename for f in layout.get(subject=subject_id, type='T1w', extensions=['nii', 'nii.gz'])]
func_file_path = [f.filename for f in layout.get(subject=subject_id, type='bold', run='1', extensions=['nii', 'nii.gz'])]
return anat_file_path[0],func_file_path[0]
# Refer to Supplementary material section One for info on arguments for layout.get()
Explanation: Create our own custom function - BIDSDataGrabber using a Function Interface.
End of explanation
BIDSDataGrabber = Node(Function(function=get_nifti_filenames, input_names=['subject_id','data_dir'],
output_names=['anat_file_path','func_file_path']), name='BIDSDataGrabber')
# BIDSDataGrabber.iterables = [('subject_id',subject_list)]
BIDSDataGrabber.inputs.data_dir = data_directory
# To test the function wrapped in the node
# os.chdir('/home1/varunk/Autism-Connectome-Analysis-bids-related')
# BIDSDataGrabber.inputs.data_dir = data_directory
# BIDSDataGrabber.inputs.subject_id = layout.get_subjects()[0] # gives the first subject's ID
# res = BIDSDataGrabber.run()
# res.outputs
Explanation: Wrap it inside a Node
End of explanation
def get_TR(in_file):
from bids.grabbids import BIDSLayout
data_directory = '/home1/varunk/data/ABIDE1/RawDataBIDs'
layout = BIDSLayout(data_directory)
metadata = layout.get_metadata(path=in_file)
TR = metadata['RepetitionTime']
return TR
# type(get_TR('/home1/varunk/data/ABIDE1/RawDataBIDs/Pitt/sub-0050002/func/sub-0050002_task-rest_run-1_bold.nii.gz'))
# in_file = '/home1/varunk/data/ABIDE1/RawDataBIDs/Pitt/sub-0050002/func/sub-0050002_task-rest_run-1_bold.nii.gz'
in_file = '/home1/varunk/data/ABIDE1/RawDataBIDs/SBL/sub-0051556/func/sub-0051556_task-rest_run-1_bold.nii.gz'
metadata = layout.get_metadata(path=in_file)
metadata['RepetitionTime'], metadata['SliceAcquisitionOrder']
def _getMetadata(in_file):
from bids.grabbids import BIDSLayout
interleaved = True
index_dir = False
data_directory = '/home1/varunk/data/ABIDE1/RawDataBIDs'
layout = BIDSLayout(data_directory)
metadata = layout.get_metadata(path=in_file)
tr = metadata['RepetitionTime']
slice_order = metadata['SliceAcquisitionOrder']
if slice_order.split(' ')[0] == 'Sequential':
interleaved = False
if slice_order.split(' ')[1] == 'Descending':
index_dir = True
return tr, index_dir, interleaved
getMetadata = Node(Function(function=_getMetadata, input_names=['in_file'],
output_names=['tr','index_dir','interleaved']), name='getMetadata')
# Test run
# getMetadata.inputs.in_file = in_file
# res = getMetadata.run()
# res.outputs
Explanation: Return TR
End of explanation
# ExtractROI - skip dummy scans
extract = Node(ExtractROI(t_min=4, t_size=-1),
output_type='NIFTI',
name="extract")
Explanation: Skipping 4 starting scans
Extract ROI for skipping first 4 scans of the functional data
Arguments:
t_min: (corresponds to time dimension) Denotes the starting time of the inclusion
t_size: Denotes the number of scans to include
The logic behind skipping the 4 initial scans is to take scans after the subject has stabilized in the scanner.
End of explanation
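For intuition only, a nibabel sketch of the same trimming done outside FSL (the file path is a placeholder, not a real file from this dataset):
```python
import nibabel as nib
func_img = nib.load('/path/to/sub-01_task-rest_bold.nii.gz')   # placeholder path
trimmed = func_img.get_data()[..., 4:]                          # keep volume 4 onwards
print(func_img.shape, '->', trimmed.shape)
```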
slicetimer = Node(SliceTimer(
output_type='NIFTI'
),
name="slicetimer")
# index_dir=False, interleaved=True,
# To test Slicetimer
# subject_id = layout.get_subjects()[0] # gives the first subject's ID
# func_file_path = [f.filename for f in layout.get(subject=subject_id, type='bold', extensions=['nii', 'nii.gz'])]
# slicetimer.inputs.in_file = func_file_path[0]
# res = slicetimer.run()
# res.outputs
Explanation: Slice time correction
Created a Node that does slice time correction
Arguments:
index_dir=False -> Slices were taken bottom to top i.e. in ascending order
interleaved=True means odd slices were acquired first and then even slices [or vice versa(Not sure)]
End of explanation
# MCFLIRT - motion correction
mcflirt = Node(MCFLIRT( mean_vol=True,
save_plots=True,
output_type='NIFTI'),
name="mcflirt")
# ref_vol = 1,
# To test mcflirt
# subject_id = layout.get_subjects()[0] # gives the first subject's ID
# func_file_path = [f.filename for f in layout.get(subject=subject_id, type='bold', extensions=['nii', 'nii.gz'])]
# mcflirt.inputs.in_file = func_file_path[0]
# res_mcflirt = mcflirt.run()
# res_mcflirt.outputs
Explanation: Motion Correction
Motion correction is done using fsl's mcflirt. It aligns all the volumes of a functional scan to each other
End of explanation
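A hedged sketch of how the saved motion parameters could be summarised afterwards; mcflirt's .par file holds six columns per volume (three rotations in radians followed by three translations in mm), and the path below is a placeholder:
```python
import numpy as np
par = np.loadtxt('/path/to/sub-01_task-rest_bold_mcf.nii.par')   # placeholder path
print('max |rotation| (rad):  ', np.abs(par[:, :3]).max())
print('max |translation| (mm):', np.abs(par[:, 3:]).max())
```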
skullStrip = Node(BET(mask=False, frac=0.3, robust=True ),name='skullStrip')
Explanation: %%bash
cat /tmp/tmpc2wmdeci/mcflirt/sub-28741_task-rest_run-1_bold_mcf.nii.par
Skull stripping
I used fsl's BET
End of explanation
# BET.help(); # Useful to see what are the parameters taken by BET
Explanation: Note: Do not include special characters in the name field above, because wf.write_graph will then cause issues
End of explanation
# Put in the path of atlas you wish to use
# atlasPath = opj(s,'atlas/Full_brain_atlas_thr0-2mm/fullbrain_atlas_thr0-2mm.nii.gz')
# # Read the atlas
# atlasObject = nib.load(atlasPath)
# atlas = atlasObject.get_data()
Explanation: Atlas
End of explanation
# Resample - resample anatomy to 3x3x3 voxel resolution
resample_mni = Node(Resample(voxel_size=(3, 3, 3), resample_mode='Cu', # cubic interpolation
outputtype='NIFTI'),
name="resample_mni")
resample_anat = Node(Resample(voxel_size=(3, 3, 3), resample_mode='Cu', # cubic interpolation
outputtype='NIFTI'),
name="resample_anat")
resample_atlas = Node(Resample(voxel_size=(3, 3, 3), resample_mode='NN', # cubic interpolation
outputtype='NIFTI'),
name="resample_atlas")
resample_atlas.inputs.in_file = atlasPath
# Resample.help() # To understand what all parameters Resample supports
# resample.outputs
Explanation: Resample
I needed to resample the anatomical file from 1mm to a coarser voxel size (3mm here, as set in the Resample nodes above), because registering a 1mm file was taking a huge amount of time.
End of explanation
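A quick hedged sanity check that the resampling produced the intended grid (the path stands in for the resample_anat output):
```python
import nibabel as nib
img = nib.load('/path/to/T1w_resample.nii')    # placeholder path
print(img.header.get_zooms()[:3])              # expected to be (3.0, 3.0, 3.0)
```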
concat_xform = Node(ConvertXFM(concat_xfm=True),name='concat_xform')
# .cmdline
Explanation: Matrix operations
For concatenating the transformation matrices
End of explanation
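Conceptually, concatenating two FLIRT .mat files is just an affine matrix product; a plain-numpy illustration with toy matrices (not real FSL output):
```python
import numpy as np
func2anat = np.eye(4); func2anat[:3, 3] = [2.0, 0.0, 0.0]   # toy translation
anat2std = np.eye(4);  anat2std[:3, :3] *= 1.1              # toy scaling
func2std = anat2std @ func2anat                             # func2anat is applied first
print(func2std)
```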
# Node to calculate the inverse of func2std matrix
inv_mat = Node(ConvertXFM(invert_xfm=True), name='inv_mat')
# inv_mat.inputs
Explanation: For finding the inverse of a transformation matrix
End of explanation
meanfunc = Node(interface=ImageMaths(op_string='-Tmean',
suffix='_mean'),
name='meanfunc')
# preproc.connect(motion_correct, ('out_file', pickfirst), meanfunc, 'in_file')
# in_file = '/home1/varunk/data/ABIDE1/RawDataBIDs/Pitt/sub-0050002/func/sub-0050002_task-rest_run-1_bold.nii.gz'
# meanfunc.inputs.in_file = in_file
# res = meanfunc.run()
Explanation: Extracting the mean brain
End of explanation
meanfuncmask = Node(interface=BET(mask=True,
no_output=True,
frac=0.3),
name='meanfuncmask')
# in_file = '/home1/varunk/data/ABIDE1/RawDataBIDs/Pitt/sub-0050002/func/sub-0050002_task-rest_run-1_bold.nii.gz'
# meanfuncmask.inputs.in_file = in_file
# res = meanfuncmask.run()
Explanation: Creating mask using the mean brain
End of explanation
# Does BET (masking) on the whole func scan [Not using this, creates bug for join node]
maskfunc = Node(interface=ImageMaths(suffix='_bet',
op_string='-mas'),
name='maskfunc')
# Does BET (masking) on the mean func scan
maskfunc4mean = Node(interface=ImageMaths(suffix='_bet',
op_string='-mas'),
name='maskfunc4mean')
# in_file = '/home1/varunk/data/ABIDE1/RawDataBIDs/Pitt/sub-0050002/func/sub-0050002_task-rest_run-1_bold.nii.gz'
# in_file = '/usr/local/fsl/data/standard/MNI152_T1_2mm.nii.gz'
# in_file2 = '/usr/local/fsl/data/standard/MNI152_T1_2mm_brain_mask.nii.gz'
# maskfunc.inputs.in_file = in_file
# maskfunc.inputs.in_file2 = in_file2
# res = maskfunc.run()
# res.outputs.out_file
Explanation: Apply Mask
End of explanation
# Create DataSink object
dataSink = Node(DataSink(), name='datasink')
# Name of the output folder
dataSink.inputs.base_directory = opj(base_directory,datasink_name)
base_directory
Explanation: Datasink
I needed to define the structure of what files are saved and where.
End of explanation
# Define substitution strings so that the data is similar to BIDS
substitutions = [('_subject_id_', 'sub-'),
('_resample_brain_flirt.nii_brain', ''),
('_roi_st_mcf_flirt.nii_brain_flirt', ''),
('task-rest_run-1_bold_roi_st_mcf.nii','motion_params'),
('T1w_resample_brain_flirt_sub-0050002_task-rest_run-1_bold_roi_st_mcf_mean_bet_flirt','fun2std')
]
# Feed the substitution strings to the DataSink node
dataSink.inputs.substitutions = substitutions
Explanation: To create the substitutions I looked at the datasink folder where I was redirecting the output. I manually selected the parts of the file/folder names that I wanted to change and copied them below to be substituted.
TODO: Using datasink create a hierarchical directory structure i.e. folder in folder - to exactly match BIDS.
End of explanation
# Function
# in_file: The file on which you want to apply mask
# in_file2 = mask_file: The mask you want to use. Make sure that mask_file has same size as in_file
# out_file : Result of applying mask in in_file -> Gives the path of the output file
def applyMask_func(in_file, in_file2):
import numpy as np
import nibabel as nib
import os
from os.path import join as opj
# convert from unicode to string : u'/tmp/tmp8daO2Q/..' -> '/tmp/tmp8daO2Q/..' i.e. removes the prefix 'u'
mask_file = in_file2
brain_data = nib.load(in_file)
mask_data = nib.load(mask_file)
brain = brain_data.get_data().astype('float32')
mask = mask_data.get_data()
# applying mask by multiplying elementwise to the binary mask
if len(brain.shape) == 3: # Anat file
brain = np.multiply(brain,mask)
elif len(brain.shape) > 3: # Functional File
for t in range(brain.shape[-1]):
brain[:,:,:,t] = np.multiply(brain[:,:,:,t],mask)
else:
pass
# Saving the brain file
path = os.getcwd()
in_file_split_list = in_file.split('/')
in_file_name = in_file_split_list[-1]
out_file = in_file_name + '_brain.nii.gz' # changing name
brain_with_header = nib.Nifti1Image(brain, affine=brain_data.affine,header = brain_data.header)
nib.save(brain_with_header,out_file)
out_file = opj(path,out_file)
out_file2 = in_file2
return out_file, out_file2
Explanation: Apply Mask to functional data
Mean file of the motion corrected functional scan is sent to skullStrip to get just the brain and the mask_image. Mask_image is just a binary file (containing 1 where brain is present and 0 where it isn't).
After getting the mask_image from skullStrip, apply that mask to the aligned functional image to extract its brain and remove the skull
End of explanation
applyMask = Node(Function(function=applyMask_func, input_names=['in_file','in_file2'],
output_names=['out_file','out_file2']), name='applyMask')
Explanation: Things learnt:
I found out that whenever a node is being executed, its folder becomes the current working directory, so whatever file you create at that point will be stored there.
from IPython.core.debugger import Tracer; Tracer()() # Debugger doesn't work in nipype
Wrap the above function inside a Node
End of explanation
# FLIRT.help()
# Node for getting the xformation matrix
func2anat_reg = Node(FLIRT(output_type='NIFTI'), name="func2anat_reg")
# Node for applying xformation matrix to functional data
func2std_xform = Node(FLIRT(output_type='NIFTI',
apply_xfm=True), name="func2std_xform")
# Node for applying xformation matrix to standard space brain data
std2func_xform = Node(FLIRT(output_type='NIFTI',
apply_xfm=True, interp='nearestneighbour'), name="std2func_xform")
# Node for Normalizing/Standardizing the anatomical and getting the xformation matrix
anat2std_reg = Node(FLIRT(output_type='NIFTI'), name="anat2std_reg")
Explanation: Some nodes needed for Co-registration and Normalization
I observed using fsleyes that the brain is enlarged if you normalize a brain that has been resampled to the 2mm grid. This in turn causes the functional data to enlarge as well after normalization. So, I will apply the MNI152_2mm brain mask to the resampled brain after it has been normalized.
For that let's first create a Node - anat2std_reg_masking that applies the MNI152_2mm brain mask to the Output of anat2std_reg.
End of explanation
MNI152_2mm = Node(IdentityInterface(fields=['standard_file','mask_file']),
name="MNI152_2mm")
# Set the mask_file and standard_file input in the Node. This setting sets the input mask_file permanently.
MNI152_2mm.inputs.mask_file = os.path.expandvars('$FSLDIR/data/standard/MNI152_T1_2mm_brain_mask.nii.gz')
MNI152_2mm.inputs.standard_file = os.path.expandvars('$FSLDIR/data/standard/MNI152_T1_2mm_brain.nii.gz')
# MNI152_2mm.inputs.mask_file = '/usr/share/fsl/5.0/data/standard/MNI152_T1_2mm_brain_mask.nii.gz'
# MNI152_2mm.inputs.standard_file = '/usr/share/fsl/5.0/data/standard/MNI152_T1_2mm_brain.nii.gz'
# /usr/local/fsl/data/standard/
# Testing
# res = MNI152_2mm_mask.run()
# res.outputs
# afni.Bandpass.help()
Explanation: I wanted to use the MNI file as input to the workflow so I created an Identity Node that reads the MNI file path and outputs the same MNI file path. Then I connected this node to wherever it was needed.
End of explanation
### AFNI
# bandpass = Node(afni.Bandpass(highpass=0.008, lowpass=0.08,
# despike=False, no_detrend=True, notrans=True,
# outputtype='NIFTI_GZ'),name='bandpass')
bandpass = Node(afni.Bandpass(highpass=0.01, lowpass=0.1,
despike=False, no_detrend=True, notrans=True,
tr=2.0,outputtype='NIFTI_GZ'),name='bandpass')
# bandpass.inputs.mask = MNI152_2mm.outputs.mask_file
# Testing bandpass on the func data in subject's space
# First comment out the bandpass.inputs.mask as it is in standard space.
# subject_id = layout.get_subjects()[0] # gives the first subject's ID
# func_file_path = [f.filename for f in layout.get(subject=subject_id, type='bold', extensions=['nii', 'nii.gz'])]
# bandpass.inputs.in_file = func_file_path[0]
# res = bandpass.run();
# res.outputs.out_file
# To view in fsl I need to save this file. You can change the the location as per your need.
# First run utility functions section. It contains the load_and_save function
# load_and_save(res.outputs.out_file,'/home/jovyan/work/preprocess/result/filtered_func.nii')
# afni.Bandpass.help() # to see what all parameters are supported by Bandpass filter of afni
Explanation: Band Pass Filtering
Let's do band-pass filtering on the data using the code from https://neurostars.org/t/bandpass-filtering-different-outputs-from-fsl-and-nipype-custom-function/824/2
End of explanation
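For intuition, a numpy-only sketch of what keeping 0.01-0.1 Hz means for a single voxel's time series at TR = 2 s; AFNI's 3dBandpass applies this kind of frequency-domain filtering (plus optional detrending) to every voxel:
```python
import numpy as np
tr = 2.0
ts = np.random.randn(200)                     # toy time series of 200 volumes
freqs = np.fft.rfftfreq(ts.size, d=tr)
spec = np.fft.rfft(ts)
spec[(freqs < 0.01) | (freqs > 0.1)] = 0.0    # zero the frequencies outside the band
filtered = np.fft.irfft(spec, n=ts.size)
print(filtered.shape)
```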
def save_file_list_function(in_brain, in_mask, in_motion_params, in_motion_outliers, in_joint_xformation_matrix, in_tr, in_atlas):
# Imports
import numpy as np
import os
from os.path import join as opj
file_list = np.asarray(in_brain)
print('######################## File List ######################: \n',file_list)
np.save('brain_file_list',file_list)
file_name = 'brain_file_list.npy'
out_brain = opj(os.getcwd(),file_name) # path
file_list2 = np.asarray(in_mask)
print('######################## File List ######################: \n',file_list2)
np.save('mask_file_list',file_list2)
file_name2 = 'mask_file_list.npy'
out_mask = opj(os.getcwd(),file_name2) # path
file_list3 = np.asarray(in_motion_params)
print('######################## File List ######################: \n',file_list3)
np.save('motion_params_file_list',file_list3)
file_name3 = 'motion_params_file_list.npy'
out_motion_params = opj(os.getcwd(),file_name3) # path
file_list4 = np.asarray(in_motion_outliers)
print('######################## File List ######################: \n',file_list4)
np.save('motion_outliers_file_list',file_list4)
file_name4 = 'motion_outliers_file_list.npy'
out_motion_outliers = opj(os.getcwd(),file_name4) # path
file_list5 = np.asarray(in_joint_xformation_matrix)
print('######################## File List ######################: \n',file_list5)
np.save('joint_xformation_matrix_file_list',file_list5)
file_name5 = 'joint_xformation_matrix_file_list.npy'
out_joint_xformation_matrix = opj(os.getcwd(),file_name5) # path
tr_list = np.asarray(in_tr)
print('######################## TR List ######################: \n',tr_list)
np.save('tr_list',tr_list)
file_name6 = 'tr_list.npy'
out_tr = opj(os.getcwd(),file_name6) # path
file_list7 = np.asarray(in_atlas)
print('######################## File List ######################: \n',file_list7)
np.save('atlas_file_list',file_list7)
file_name7 = 'atlas_file_list.npy'
out_atlas = opj(os.getcwd(),file_name7) # path
return out_brain, out_mask, out_motion_params, out_motion_outliers, out_joint_xformation_matrix, out_tr , out_atlas
save_file_list = JoinNode(Function(function=save_file_list_function, input_names=['in_brain', 'in_mask', 'in_motion_params','in_motion_outliers','in_joint_xformation_matrix', 'in_tr', 'in_atlas'],
output_names=['out_brain','out_mask','out_motion_params','out_motion_outliers','out_joint_xformation_matrix','out_tr', 'out_atlas']),
joinsource="infosource",
joinfield=['in_brain', 'in_mask', 'in_motion_params','in_motion_outliers','in_joint_xformation_matrix','in_tr', 'in_atlas'],
name="save_file_list")
# ------------------Change it in the program below -- all the names of parameters iin the workflow..
Explanation: Following is a Join Node that collects the preprocessed file paths and saves them in a file
End of explanation
motionOutliers = Node(MotionOutliers(no_motion_correction=True, out_metric_plot = 'refrms_plot.png',
out_metric_values='refrms_raw.txt'),name='motionOutliers')
# (MotionOutliers(in_file = 'var.nii',no_motion_correction=False, out_metric_plot = 'refrms_plot',
# out_metric_values='refrms_raw')).cmdline
Explanation: AFNI's filter is working well:
Next:
[x] Add the mask as parameter to the afni Node
[] Add the Node to the workflow
[x] Improve the data sink
[] Create Voxel pair FC map
Motion outliers
End of explanation
# num_ROIs = int((np.max(atlas) - np.min(atlas) ))
# print('Min Index:', np.min(atlas),'Max Index', np.max(atlas))
# print('Total Number of Parcellations = ',num_ROIs)
Explanation: Let's see the number of regions in the atlas and display the atlas.
End of explanation
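A self-contained, hedged version of the commented-out check above, using the atlasPath defined earlier (label 0 is assumed to be background):
```python
import nibabel as nib
import numpy as np
atlas_data = nib.load(atlasPath).get_data()
labels = np.unique(atlas_data)
print('number of non-background labels:', len(labels) - 1)
```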
wf_atlas_resize_reg = Workflow(name=atlas_resize_reg_directory)
wf_atlas_resize_reg.connect([
# Apply the inverse matrix to the 3mm Atlas to transform it to func space
(maskfunc4mean, std2func_xform, [(('out_file','reference'))]),
(resample_atlas, std2func_xform, [('out_file','in_file')] ),
# Now, applying the inverse matrix
(inv_mat, std2func_xform, [('out_file','in_matrix_file')]), # output: Atlas in func space
(std2func_xform, save_file_list, [('out_file','in_atlas')]),
# ---------------------------Save the required files --------------------------------------------
(save_file_list, dataSink, [('out_motion_params','motion_params_paths.@out_motion_params')]),
(save_file_list, dataSink, [('out_motion_outliers','motion_outliers_paths.@out_motion_outliers')]),
(save_file_list, dataSink, [('out_brain','preprocessed_brain_paths.@out_brain')]),
(save_file_list, dataSink, [('out_mask','preprocessed_mask_paths.@out_mask')]),
(save_file_list, dataSink, [('out_joint_xformation_matrix',
'joint_xformation_matrix_paths.@out_joint_xformation_matrix')]),
(save_file_list, dataSink, [('out_tr','tr_paths.@out_tr')]),
(save_file_list, dataSink, [('out_atlas','atlas_paths.@out_atlas')])
])
wf_coreg_reg = Workflow(name=coreg_reg_directory)
# wf_coreg_reg.base_dir = base_directory
# Dir where all the outputs will be stored(inside coregistrationPipeline folder).
wf_coreg_reg.connect([
(BIDSDataGrabber,skullStrip,[('anat_file_path','in_file')]), # Resampled the anat file to 3mm
(skullStrip,resample_anat,[('out_file','in_file')]),
(resample_anat,func2anat_reg,[('out_file','reference')]), # Make the resampled file as reference in func2anat_reg
# Sec 1. The above 3 steps registers the mean image to resampled anat image and
# calculates the xformation matrix .. I hope the xformation matrix will be saved
(MNI152_2mm, resample_mni, [('standard_file','in_file')]),
(resample_mni, anat2std_reg, [('out_file','reference')]),
(resample_anat, anat2std_reg, [('out_file','in_file')]),
# Calculates the Xformationmatrix from anat3mm to MNI 3mm
# We can get those matrices by refering to func2anat_reg.outputs.out_matrix_file and similarly for anat2std_reg
(func2anat_reg, concat_xform, [('out_matrix_file','in_file')]),
(anat2std_reg, concat_xform, [('out_matrix_file','in_file2')]),
(concat_xform, dataSink, [('out_file', 'tranformation_matrix_fun2std.@out_file')]),
(concat_xform, save_file_list, [('out_file', 'in_joint_xformation_matrix')]), #func2std xformation mat files
# Now inverse the func2std MAT to std2func
(concat_xform, wf_atlas_resize_reg, [('out_file','inv_mat.in_file')])
])
# wf_coreg_reg = Workflow(name=coreg_reg_directory)
# # wf_coreg_reg.base_dir = base_directory
# # Dir where all the outputs will be stored(inside coregistrationPipeline folder).
# wf_coreg_reg.connect([
# (BIDSDataGrabber,resample_anat,[('anat_file_path','in_file')]), # Resampled the anat file to 3mm
# (resample_anat,skullStrip,[('out_file','in_file')]),
# (skullStrip,func2anat_reg,[('out_file','reference')]), # Make the resampled file as reference in func2anat_reg
# # Sec 1. The above 3 steps registers the mean image to resampled anat image and
# # calculates the xformation matrix .. I hope the xformation matrix will be saved
# (MNI152_2mm, resample_mni, [('standard_file','in_file')]),
# (resample_mni, anat2std_reg, [('out_file','reference')]),
# (skullStrip, anat2std_reg, [('out_file','in_file')]),
# # Calculates the Xformationmatrix from anat3mm to MNI 3mm
# # We can get those matrices by refering to func2anat_reg.outputs.out_matrix_file and similarly for anat2std_reg
# (func2anat_reg, concat_xform, [('out_matrix_file','in_file')]),
# (anat2std_reg, concat_xform, [('out_matrix_file','in_file2')]),
# (concat_xform, dataSink, [('out_file', 'tranformation_matrix_fun2std.@out_file')]),
# (concat_xform, save_file_list, [('out_file', 'in_joint_xformation_matrix')]), #func2std xformation mat files
# # Now inverse the func2std MAT to std2func
# (concat_xform, wf_atlas_resize_reg, [('out_file','inv_mat.in_file')])
# -------------------------------------------------
# # Apply the transformation to 3mm Atlas to transform it to func space
# (maskfunc4mean, std2func_xform, [(('standard_file','reference'))]),
# (resample_atlas, std2func_xform, [('out_file','in_file')] ), # Applies the transform to the ...
# # ... atlas
# # Now, apply the inverse matrix to the atlas
# (inv_mat, std2func_xform, [('out_file','in_matrix_file')]),
# ------------------------------------------
# (save_file_list, dataSink, [('out_motion_params','motion_params_paths.@out_motion_params')]),
# (save_file_list, dataSink, [('out_motion_outliers','motion_outliers_paths.@out_motion_outliers')]),
# (save_file_list, dataSink, [('out_brain','preprocessed_brain_paths.@out_brain')]),
# (save_file_list, dataSink, [('out_mask','preprocessed_mask_paths.@out_mask')]),
# (save_file_list, dataSink, [('out_joint_xformation_matrix',
# 'joint_xformation_matrix_paths.@out_joint_xformation_matrix')]),
# (save_file_list, dataSink, [('out_tr',
# 'tr_paths.@out_tr')])
# ])
Explanation: Workflow for atlas registration from std to functional
End of explanation
wf_motion_correction_bet = Workflow(name=motion_correction_bet_directory)
# wf_motion_correction_bet.base_dir = base_directory
wf_motion_correction_bet.connect([
(mcflirt,dataSink,[('par_file','motion_params.@par_file')]), # saves the motion parameters calculated before
(mcflirt,save_file_list,[('par_file','in_motion_params')]),
# (save_file_list, dataSink, [('out_motion_params','motion_params_paths.@out_motion_params')]),
(mcflirt, meanfunc, [('out_file','in_file')]),
(meanfunc, meanfuncmask, [('out_file','in_file')]),
(slicetimer,motionOutliers,[('slice_time_corrected_file','in_file')]),
# (mcflirt, motionOutliers, [('out_file','in_file')]),
(meanfuncmask, motionOutliers, [('mask_file','mask')]),
(motionOutliers, dataSink, [('out_file','motionOutliers.@out_file')]),
(motionOutliers, dataSink, [('out_metric_plot','motionOutliers.@out_metric_plot')]),
(motionOutliers, dataSink, [('out_metric_values','motionOutliers.@out_metric_values')]),
(motionOutliers, save_file_list, [('out_file','in_motion_outliers')]),
# (save_file_list, dataSink, [('out_motion_outliers','motion_outliers_paths.@out_motion_outliers')]),
(mcflirt,applyMask , [('out_file','in_file')]), # 1
(meanfuncmask, applyMask, [('mask_file','in_file2')]), # 2 output: 1&2, BET on coregistered fmri scan
(meanfunc, maskfunc4mean, [('out_file', 'in_file')]), # 3
(meanfuncmask, maskfunc4mean, [('mask_file','in_file2')]), # 4 output: 3&4, BET on mean func scan
(applyMask, save_file_list, [('out_file', 'in_brain')]),
(applyMask, save_file_list, [('out_file2', 'in_mask')]),
# (save_file_list, dataSink, [('out_brain','preprocessed_brain_paths.@out_brain')]),
# (save_file_list, dataSink, [('out_mask','preprocessed_mask_paths.@out_mask')]),
(maskfunc4mean, wf_coreg_reg, [('out_file','func2anat_reg.in_file')])
])
Explanation: Co-Registration, Normalization and Bandpass Workflow
Co-registration means aligning the func to anat
Normalization means aligning func/anat to standard
Applied band-pass filtering in the range highpass = 0.008 Hz, lowpass = 0.08 Hz
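A minimal sketch (not part of the original pipeline wiring) of how such a band-pass step could look with FSL's TemporalFilter via nipype; FSL expects the cut-offs as sigmas in volumes, so the Hz values are converted with sigma = 1 / (2 * TR * f_cutoff). The TR of 2.0 s is only an illustrative assumption, and Node is the nipype Node class imported earlier.
from nipype.interfaces import fsl
def hz_to_sigma(f_cutoff_hz, tr):
    return 1.0 / (2.0 * tr * f_cutoff_hz)
bandpass = Node(fsl.TemporalFilter(), name='bandpass')
bandpass.inputs.highpass_sigma = hz_to_sigma(0.008, 2.0)  # ~31.25 volumes at TR = 2.0 s
bandpass.inputs.lowpass_sigma = hz_to_sigma(0.08, 2.0)    # ~3.125 volumes at TR = 2.0 s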
End of explanation
import pandas as pd
import numpy as np
df = pd.read_csv('/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv') # , index_col='SUB_ID'
df = df.sort_values(['SUB_ID'])
df
selected_participants = df.loc[(df['SITE_ID'] == 'KKI') | (df['SITE_ID'] == 'Leuven_1') | (df['SITE_ID'] == 'Leuven_2') \
| (df['SITE_ID'] == 'SBL') | (df['SITE_ID'] == 'Trinity') | (df['SITE_ID'] == 'UM_1') \
| (df['SITE_ID'] == 'UM_2')]
selected_participants = list(map(str, selected_participants.as_matrix(['SUB_ID']).squeeze()))
# selected_participants[0]
Explanation: Observation:
Applying masking again on the normalized func file greatly reduced the size from ~600MB -> ~150MB. I think normalizing might have generated some extra voxels in the region of 'no brain'. Masking again got rid of them. Hence, the reduced size.
Adding a module to keep only the selected participants from the following sites (a more compact selection using isin is sketched after this list):
KKI
Leuven_1
Leuven_2
SBL
Stanford (N.A)
Trinity
UM_1
UM_2
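The chained boolean comparisons above can be written more compactly; a sketch of an equivalent selection (df, SITE_ID and SUB_ID are the objects used above, and .as_matrix() is deprecated in newer pandas in favour of .values):
sites = ['KKI', 'Leuven_1', 'Leuven_2', 'SBL', 'Trinity', 'UM_1', 'UM_2']
selected = df.loc[df['SITE_ID'].isin(sites)]  # same rows as the chained ORs
selected_participants = list(map(str, selected['SUB_ID'].values.squeeze()))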
End of explanation
subject_list = selected_participants[0:2]
subject_list = [str(item).zfill(7) for item in subject_list]
subject_list
infosource = Node(IdentityInterface(fields=['subject_id']),
name="infosource")
infosource.iterables = [('subject_id',subject_list)]
# infosource.inputs.subject_id = subject_list[0]
# res = infosource.run()
# res.outputs
# Create the workflow
# Refer to Supplementary material's Section Two. for more on workspaces
wf = Workflow(name=parent_wf_directory)
# base_dir = opj(s,'result')
wf.base_dir = base_directory # Dir where all the outputs will be stored(inside BETFlow folder).
wf.connect([ (infosource, BIDSDataGrabber, [('subject_id','subject_id')]),
(BIDSDataGrabber, extract, [('func_file_path','in_file')]),
(BIDSDataGrabber,getMetadata, [('func_file_path','in_file')]),
(getMetadata,slicetimer, [('tr','time_repetition')]),
(getMetadata,slicetimer, [('index_dir','index_dir')]),
(getMetadata,slicetimer, [('interleaved','interleaved')]),
(getMetadata,save_file_list, [('tr','in_tr')]),
(extract,slicetimer,[('roi_file','in_file')]),
(slicetimer,wf_motion_correction_bet,[('slice_time_corrected_file','mcflirt.in_file')])
])
# Run it in parallel
%time wf.run('MultiProc', plugin_args={'n_procs': 6})
# (BIDSDataGrabber,slicetimer, [(('func_file_path', get_TR ),'time_repetition')]),
Explanation: Main Workflow
End of explanation
error  # undefined name, presumably left here deliberately to stop a 'Run All' before the inspection cells below
# Visualize the detailed graph
from IPython.display import Image
wf.write_graph(graph2use='exec', format='png', simple_form=True)
file_name = opj(base_directory,parent_wf_directory,'graph_detailed.dot.png')
Image(filename=file_name)
Explanation: Summary:
End of explanation
# os.chdir('../results/ABIDE1_Preprocess/motion_correction_bet/_subject_id_0050002/applyMask')
# in_file = 'sub-0050002_task-rest_run-1_bold_roi_st_mcf.nii_brain.nii.gz'
# in_matrix_file = 'sub-0050002_T1w_resample_brain_flirt_sub-0050002_task-rest_run-1_bold_roi_st_mcf_mean_bet_flirt.mat'
# func2std_xform.inputs
import numpy as np
X = np.load('../results_again_again/ABIDE1_Preprocess_Datasink/preprocessed_brain_paths/brain_file_list.npy')
X
X = np.load('../results_again_again/ABIDE1_Preprocess_Datasink/preprocessed_mask_paths/mask_file_list.npy')
X
! cat ../results_again_again/ABIDE1_Preprocess_Datasink/motion_params/sub-0050002/sub-0050002_motion_params.par
! cat ../results_again_again/ABIDE1_Preprocess_Datasink/tranformation_matrix_fun2std/sub-0050002/sub-0050002_task-rest_run-1_bold_roi_st_mcf_mean_bet_flirt_sub-0050002_T1w_resample_brain_flirt.mat
! cat ../results_again_again/ABIDE1_Preprocess_Datasink/tranformation_matrix_fun2std/sub-0050003/sub-0050003_task-rest_run-1_bold_roi_st_mcf_mean_bet_flirt_sub-0050003_T1w_resample_brain_flirt.mat
Explanation: Summary [Incomplete]
wf.connect([(infosource, BIDSDataGrabber, [('data_dir','data_dir'), ('subject_id', 'subject_id'),]),
(BIDSDataGrabber, extract, [('func_file_path','in_file')]),
(extract,slicetimer,[('roi_file','in_file')]),
(slicetimer,mcflirt,[('slice_time_corrected_file','in_file')]),
(mcflirt, skullStrip, [('mean_img', 'in_file')]),
(mcflirt,applyMask,[('out_file','brain_file')]),
(skullStrip, applyMask, [('mask_file', 'mask_file')]),
])
In the above created workflow the infosource node iterates over subject_id: it creates a node and, for each subject ID, sends data_dir (the path where the data resides) and the subject-specific subject_id to the BIDSDataGrabber node.
BIDSDataGrabber accepts these two parameters and calls the function get_nifti_filenames(subject_id, data_dir), which returns the paths of the anatomical and BOLD files of the subject with the given subject_id; hence the node produces the outputs that I call func_file_path and anat_file_path. I have used only func_file_path right now.
The file path denoted by 'func_file_path' is then fed as input to extract that removes 4 initial brain volumes of the functional scan.
Its output (the extracted roi_file) is slice-time corrected by the slicetimer node, whose output slice_time_corrected_file is fed to the mcflirt node to correct for motion between the volumes of an individual subject. This is called motion correction.
In the next step the mean image from mcflirt is sent to skullStrip to get the mask; the role of skullStrip is just to obtain a mask from the mean EPI image.
The mask obtained above is then applied to the functional volume to get rid of the skull.
The final results are stored in the directory : /home/jovyan/work/preprocess/result/BETFlow. Every node has its own folder where its results are stored.
TODO
Make a single workflow
See how the file looks after the transformation is applied
Change the FC code such that FC maps are calculated instead of just matrices.
Transform these FC maps to Standard space (a sketch of this step follows this list)
Do FDR correction (Check it with the MATLAB Fdr code)
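A hedged sketch for the 'Transform these FC maps to Standard space' item; the file names are hypothetical, the .mat file stands for the func2std matrices saved to the tranformation_matrix_fun2std sink, and ConvertXFM gives the inverse matrix used in the commented std2func_xform block near the top.
from nipype.interfaces import fsl
func2std_map = Node(fsl.FLIRT(apply_xfm=True, interp='trilinear'), name='func2std_map')
func2std_map.inputs.in_file = 'fc_map_sub-0050002.nii.gz'        # hypothetical FC map
func2std_map.inputs.reference = 'MNI152_T1_2mm_brain.nii.gz'     # hypothetical standard-space reference
func2std_map.inputs.in_matrix_file = 'sub-0050002_func2std.mat'  # hypothetical saved func2std matrix
inv_mat = Node(fsl.ConvertXFM(invert_xfm=True), name='inv_mat')  # std2func direction, if needed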
End of explanation
#TR
X = np.load('../results_again_again/ABIDE1_Preprocess_Datasink/tr_paths/tr_list.npy')
X
X = np.load('../results_again_again//ABIDE1_Preprocess_Datasink/atlas_paths/atlas_file_list.npy')
X, X.shape
# Read the std atlas
atlasObject = nib.load(atlasPath)
atlas = atlasObject.get_data()
num_ROIs = int((np.max(atlas) - np.min(atlas) ))
num_ROIs
# Read the func atlas
atlaspath2 = '/home1/varunk/results_again/ABIDE1_Preprocess/motion_correction_bet/coreg_reg/atlas_resize_reg_directory/_subject_id_0050002/std2func_xform/fullbrain_atlas_thr0-2mm_resample_flirt.nii'
atlasObject = nib.load(atlaspath2)
atlas = atlasObject.get_data()
num_ROIs = int((np.max(atlas) - np.min(atlas) ))
num_ROIs
'/home1/varunk/results_again_again/ABIDE1_Preprocess/motion_correction_bet/coreg_reg/atlas_resize_reg_directory/_subject_id_0050004/std2func_xform/fullbrain_atlas_thr0-2mm_resample_flirt.nii'
Explanation: $ \alpha_2 $ -- Just checking latex embedding
End of explanation |
2,661 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. There for, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Intializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisified with your hyperparameters, you can run the network on the test set to measure it's performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
from collections import Counter
total_counts = Counter()
for _, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {word: i for i, word in enumerate(vocab)}
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9  # note: despite its name, this is the fraction used for training (90% train / 10% test)
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split, 0], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Inputs
net = tflearn.input_data([None, 10000])
# Hidden layer(s)
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
# Output layer
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd',
learning_rate=0.1,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
Explanation: Try out your own text!
End of explanation |
2,662 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loopless FBA
The goal of this procedure is identification of a thermodynamically consistent flux state without loops, as implied by the name. You can find a more detailed description in the method section at the end of the notebook.
Step1: Loopless solution
Classical loopless approaches as described below are computationally expensive to solve due to the added mixed-integer constraints. A much faster, and pragmatic approach is instead to post-process flux distributions to simply set fluxes to zero wherever they can be zero without changing the fluxes of any exchange reactions in the model. CycleFreeFlux is an algorithm that can be used to achieve this and in cobrapy it is implemented in the cobra.flux_analysis.loopless_solution function. loopless_solution will identify the closest flux distribution (using only loopless elementary flux modes) to the original one. Note that this will not remove loops which you explicitly requested, for instance by forcing a loop reaction to carry non-zero flux.
Using a larger model than the simple example above, this can be demonstrated as follows
Step2: This functionality can also be used in FVA by using the loopless=True argument to avoid getting high flux ranges for reactions that essentially only can reach high fluxes if they are allowed to participate in loops (see the simulation notebook) leading to much narrower flux ranges.
Loopless model
Cobrapy also includes the "classical" loopless formulation by Schellenberger et. al. implemented in cobra.flux_analysis.add_loopless modify the model with additional mixed-integer constraints that make thermodynamically infeasible loops impossible. This is much slower than the strategy provided above and should only be used if one of the two following cases applies
Step3: While this model contains a loop, a flux state exists which has no flux through reaction v$_3$, and is identified by loopless FBA.
Step4: If there is no forced flux through a loopless reaction, parsimonious FBA will also have no flux through the loop.
Step5: However, if flux is forced through v$_3$, then there is no longer a feasible loopless solution, but the parsimonious solution will still exist. | Python Code:
%matplotlib inline
import plot_helper
import cobra.test
from cobra import Reaction, Metabolite, Model
from cobra.flux_analysis.loopless import add_loopless, loopless_solution
from cobra.flux_analysis import pfba
Explanation: Loopless FBA
The goal of this procedure is identification of a thermodynamically consistent flux state without loops, as implied by the name. You can find a more detailed description in the method section at the end of the notebook.
End of explanation
salmonella = cobra.test.create_test_model('salmonella')
nominal = salmonella.optimize()
loopless = loopless_solution(salmonella)
import pandas
df = pandas.DataFrame(dict(loopless=loopless.fluxes, nominal=nominal.fluxes))
df.plot.scatter(x='loopless', y='nominal')
Explanation: Loopless solution
Classical loopless approaches as described below are computationally expensive to solve due to the added mixed-integer constraints. A much faster, and pragmatic approach is instead to post-process flux distributions to simply set fluxes to zero wherever they can be zero without changing the fluxes of any exchange reactions in the model. CycleFreeFlux is an algorithm that can be used to achieve this and in cobrapy it is implemented in the cobra.flux_analysis.loopless_solution function. loopless_solution will identify the closest flux distribution (using only loopless elementary flux modes) to the original one. Note that this will not remove loops which you explicitly requested, for instance by forcing a loop reaction to carry non-zero flux.
Using a larger model than the simple example above, this can be demonstrated as follows
End of explanation
plot_helper.plot_loop()
model = Model()
model.add_metabolites([Metabolite(i) for i in "ABC"])
model.add_reactions([Reaction(i) for i in ["EX_A", "DM_C", "v1", "v2", "v3"]])
model.reactions.EX_A.add_metabolites({"A": 1})
model.reactions.DM_C.add_metabolites({"C": -1})
model.reactions.v1.add_metabolites({"A": -1, "B": 1})
model.reactions.v2.add_metabolites({"B": -1, "C": 1})
model.reactions.v3.add_metabolites({"C": -1, "A": 1})
model.objective = 'DM_C'
Explanation: This functionality can also be used in FVA by using the loopless=True argument to avoid getting high flux ranges for reactions that essentially only can reach high fluxes if they are allowed to participate in loops (see the simulation notebook) leading to much narrower flux ranges.
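A hedged sketch of that FVA call (not run here); in recent cobrapy versions flux_variability_analysis accepts a loopless flag and returns a pandas DataFrame of minimum/maximum fluxes.
from cobra.flux_analysis import flux_variability_analysis
fva_loopless = flux_variability_analysis(salmonella, loopless=True, fraction_of_optimum=0.9)
fva_loopless.head()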
Loopless model
Cobrapy also includes the "classical" loopless formulation by Schellenberger et al., implemented in cobra.flux_analysis.add_loopless, which modifies the model with additional mixed-integer constraints that make thermodynamically infeasible loops impossible. This is much slower than the strategy provided above and should only be used if one of the two following cases applies:
You want to combine a non-linear (e.g. quadratic) objective with the loopless condition
You want to force the model to be infeasible in the presence of loops independent of the set reaction bounds.
We will demonstrate this with a toy model which has a simple loop cycling A $\rightarrow$ B $\rightarrow$ C $\rightarrow$ A, with A allowed to enter the system and C allowed to leave. A graphical view of the system is drawn below:
End of explanation
with model:
add_loopless(model)
solution = model.optimize()
print("loopless solution: status = " + solution.status)
print("loopless solution flux: v3 = %.1f" % solution.fluxes["v3"])
Explanation: While this model contains a loop, a flux state exists which has no flux through reaction v$_3$, and is identified by loopless FBA.
End of explanation
solution = pfba(model)
print("parsimonious solution: status = " + solution.status)
print("loopless solution flux: v3 = %.1f" % solution.fluxes["v3"])
Explanation: If there is no forced flux through a loopless reaction, parsimonious FBA will also have no flux through the loop.
End of explanation
model.reactions.v3.lower_bound = 1
with model:
add_loopless(model)
try:
solution = model.optimize()
except:
print('model is infeasible')
solution = pfba(model)
print("parsimonious solution: status = " + solution.status)
print("loopless solution flux: v3 = %.1f" % solution.fluxes["v3"])
Explanation: However, if flux is forced through v$_3$, then there is no longer a feasible loopless solution, but the parsimonious solution will still exist.
End of explanation |
2,663 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logging results and uploading models to Comet ML
In this example, we train a simple XGBoost model and log the training
results to Comet ML. We also save the resulting model checkpoints
as artifacts.
Let's start with installing our dependencies
Step1: Then we need some imports
Step3: We define a simple function that returns our training dataset as a Ray Dataset
Step5: Now we define a simple training function. All the magic happens within the CometLoggerCallback
Step6: Let's kick off a run | Python Code:
!pip install -qU "ray[tune]" sklearn xgboost_ray comet_ml
Explanation: Logging results and uploading models to Comet ML
In this example, we train a simple XGBoost model and log the training
results to Comet ML. We also save the resulting model checkpoints
as artifacts.
Let's start with installing our dependencies:
End of explanation
import ray
from ray.air import RunConfig
from ray.air.result import Result
from ray.train.xgboost import XGBoostTrainer
from ray.tune.integration.comet import CometLoggerCallback
from sklearn.datasets import load_breast_cancer
Explanation: Then we need some imports:
End of explanation
def get_train_dataset() -> ray.data.Dataset:
Return the "Breast cancer" dataset as a Ray dataset.
data_raw = load_breast_cancer(as_frame=True)
df = data_raw["data"]
df["target"] = data_raw["target"]
return ray.data.from_pandas(df)
Explanation: We define a simple function that returns our training dataset as a Ray Dataset:
End of explanation
def train_model(train_dataset: ray.data.Dataset, comet_project: str) -> Result:
Train a simple XGBoost model and return the result.
trainer = XGBoostTrainer(
scaling_config={"num_workers": 2},
params={"tree_method": "auto"},
label_column="target",
datasets={"train": train_dataset},
num_boost_round=10,
run_config=RunConfig(
callbacks=[
# This is the part needed to enable logging to Comet ML.
# It assumes Comet ML can find a valid API (e.g. by setting
# the ``COMET_API_KEY`` environment variable).
CometLoggerCallback(
project_name=comet_project,
save_checkpoints=True,
)
]
),
)
result = trainer.fit()
return result
Explanation: Now we define a simple training function. All the magic happens within the CometLoggerCallback:
python
CometLoggerCallback(
project_name=comet_project,
save_checkpoints=True,
)
It will automatically log all results to Comet ML and upload the checkpoints as artifacts. It assumes you're logged in to Comet via an API key or your ~/.comet.config file.
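One hedged way to provide the key when it is not stored in ~/.comet.config is to export it as an environment variable before the trainer (and its CometLoggerCallback) is created; the value below is a placeholder.
import os
os.environ.setdefault("COMET_API_KEY", "<your-comet-api-key>")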
End of explanation
comet_project = "ray_air_example"
train_dataset = get_train_dataset()
result = train_model(train_dataset=train_dataset, comet_project=comet_project)
Explanation: Let's kick off a run:
End of explanation |
2,664 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Purpose
Provide Thread-Safe FIFO Implementation
multi-producer, multi-consumer queue
Basic FIFO Queue
The Queue class implements a basic first-in, first-out container. Element are added to one "end" of the sequence using put(), and removed from the other using get()
Step1: This example uses a single thread to illustrate that elements are removed from the queue in the same order in which they are inserted
LIFO Queue
In contrast to the standard FIFO implementation of Queue, the LifoQueue uses last-in, first-out ordering (normally associated with a stack data structure)
Step2: Priority Queue
Sometimes the processing order of the items in a queue needs to be based on characteristics of those items, rather than just the order they are created or added to the queue. For example, print jobs from the payroll department may take precedence over a code listing that a developer wants to print. PriorityQueue uses the sort order of the contents of the queue to decide which item to retrieve.
Step4: This example has multiple threads consuming the jobs, which are processed based on the priority of items in the queue at the time get() was called. The order of processing for items added to the queue while the consumer threads are running depends on thread context switching.
Building a Threaded Podcast Client | Python Code:
import queue
q = queue.Queue()
for i in range(5):
q.put(i)
while not q.empty():
print(q.get(), end=' ')
Explanation: Purpose
Provide Thread-Safe FIFO Implementation
multi-producer, multi-consumer queue
Basic FIFO Queue
The Queue class implements a basic first-in, first-out container. Elements are added to one "end" of the sequence using put(), and removed from the other using get()
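The examples below use a single producing thread; a minimal sketch of the multi-producer, multi-consumer pattern mentioned above, where join() only returns once every item has been marked done with task_done():
import queue
import threading

q = queue.Queue()

def producer(name, n):
    for i in range(n):
        q.put('{}-{}'.format(name, i))

def consumer():
    while True:
        item = q.get()
        print('consumed', item)
        q.task_done()

producers = [threading.Thread(target=producer, args=(name, 3)) for name in ('p1', 'p2')]
consumers = [threading.Thread(target=consumer, daemon=True) for _ in range(2)]
for t in producers + consumers:
    t.start()
for t in producers:
    t.join()  # make sure everything has been put() ...
q.join()      # ... then wait until every queued item has been processed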
End of explanation
import queue
q = queue.LifoQueue()
for i in range(5):
q.put(i)
while not q.empty():
print(q.get(), end=' ')
Explanation: This example uses a single thread to illustrate that elements are removed from the queue in the same order in which they are inserted
LIFO Queue
In contrast to the standard FIFO implementation of Queue, the LifoQueue uses last-in, first-out ordering (normally associated with a stack data structure)
End of explanation
import functools
import queue
import threading
import time
@functools.total_ordering
class Job:
def __init__(self, priority, description):
self.priority = priority
self.description = description
print('New job:', description)
return
def __eq__(self, other):
try:
return self.priority == other.priority
except AttributeError:
return NotImplemented
def __lt__(self, other):
try:
return self.priority < other.priority
except AttributeError:
return NotImplemented
q = queue.PriorityQueue()
q.put(Job(3, 'Mid-level job'))
q.put(Job(10, 'Low-level job'))
q.put(Job(1, 'Important job'))
time.sleep(3)
def process_job(q):
while True:
next_job = q.get()
print('Processing job:', next_job.description)
q.task_done()
workers = [
threading.Thread(target=process_job, args=(q,)),
# threading.Thread(target=process_job, args=(q,)),
]
for w in workers:
w.setDaemon(True)
w.start()
q.join()
Explanation: Priority Queue
Sometimes the processing order of the items in a queue needs to be based on characteristics of those items, rather than just the order they are created or added to the queue. For example, print jobs from the payroll department may take precedence over a code listing that a developer wants to print. PriorityQueue uses the sort order of the contents of the queue to decide which item to retrieve.
End of explanation
# %load fetch_podcasts.py
# First, some operating parameters are established.
# Usually, these would come from user inputs
# (e.g., preferences or a database). The example uses hard-coded
# values for the number of threads and list of URLs to fetch.
from queue import Queue
import threading
import time
import urllib
from urllib.parse import urlparse
import feedparser
# Set up some global variables
num_fetch_threads = 2
enclosure_queue = Queue()
# A real app wouldn't use hard-coded data...
feed_urls = [
'http://talkpython.fm/episodes/rss',
]
def message(s):
print('{}: {}'.format(threading.current_thread().name, s))
# The function download_enclosures() runs in the worker thread
# and processes the downloads using urllib.
def download_enclosures(q):
This is the worker thread function.
It processes items in the queue one after
another. These daemon threads go into an
infinite loop, and exit only when
the main thread ends.
while True:
message('looking for the next enclosure')
url = q.get()
filename = url.rpartition('/')[-1]
message('downloading {}'.format(filename))
response = urllib.request.urlopen(url)
data = response.read()
# Save the downloaded file to the current directory
message('writing to {}'.format(filename))
with open(filename, 'wb') as outfile:
outfile.write(data)
q.task_done()
# Set up some threads to fetch the enclosures
for i in range(num_fetch_threads):
worker = threading.Thread(
target=download_enclosures,
args=(enclosure_queue,),
name='worker-{}'.format(i),
)
worker.setDaemon(True)
worker.start()
# Download the feed(s) and put the enclosure URLs into
# the queue.
for url in feed_urls:
response = feedparser.parse(url, agent='fetch_podcasts.py')
for entry in response['entries'][:5]:
for enclosure in entry.get('enclosures', []):
parsed_url = urlparse(enclosure['url'])
message('queuing {}'.format(
parsed_url.path.rpartition('/')[-1]))
enclosure_queue.put(enclosure['url'])
# Now wait for the queue to be empty, indicating that we have
# processed all of the downloads.
message('*** main thread waiting')
enclosure_queue.join()
message('*** done')
!python fetch_podcasts.py
# remove all download files
!rm *.mp3
Explanation: This example has multiple threads consuming the jobs, which are processed based on the priority of items in the queue at the time get() was called. The order of processing for items added to the queue while the consumer threads are running depends on thread context switching.
Building a Threaded Podcast Client
End of explanation |
2,665 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CNN HandsOn with Keras
Problem Definition
Recognize handwritten digits
Data
The MNIST database (link) has a database of handwritten digits.
The training set has $60,000$ samples.
The test set has $10,000$ samples.
The digits are size-normalized and centered in a fixed-size image.
The data page has a description of how the data was collected. It also reports the benchmarks of various algorithms on the test dataset.
Load the data
The data is available in the repo's data folder. Let's load that using the keras library.
For now, let's load the data and see how it looks.
Step1: Basic data analysis on the dataset
Step2: Display Images
Let's now display some of the images and see how they look
We will be using matplotlib library for displaying the image | Python Code:
import numpy as np
import keras
from keras.datasets import mnist
# Load the datasets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
Explanation: CNN HandsOn with Keras
Problem Definition
Recognize handwritten digits
Data
The MNIST database (link) has a database of handwritten digits.
The training set has $60,000$ samples.
The test set has $10,000$ samples.
The digits are size-normalized and centered in a fixed-size image.
The data page has a description of how the data was collected. It also reports the benchmarks of various algorithms on the test dataset.
Load the data
The data is available in the repo's data folder. Let's load that using the keras library.
For now, let's load the data and see how it looks.
End of explanation
# What is the type of X_train?
# What is the type of y_train?
# Find number of observations in training data
# Find number of observations in test data
# Display first 2 records of X_train
# Display the first 10 records of y_train
# Find the number of observations for each digit in the y_train dataset
# Find the number of observations for each digit in the y_test dataset
# What is the dimension of X_train?. What does that mean?
Explanation: Basic data analysis on the dataset
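One possible sketch for a few of the checks asked for above (kept out of the exercise cell so the prompts remain blank for practice):
print(type(X_train), type(y_train))       # both are numpy.ndarray
print(X_train.shape[0], X_test.shape[0])  # 60000 training and 10000 test observations
digits, counts = np.unique(y_train, return_counts=True)
print(dict(zip(digits, counts)))          # observations per digit in y_train
print(X_train.shape)                      # (60000, 28, 28): 60000 images of 28x28 pixels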
End of explanation
from matplotlib import pyplot
import matplotlib as mpl
%matplotlib inline
# Displaying the first training data
fig = pyplot.figure()
ax = fig.add_subplot(1,1,1)
imgplot = ax.imshow(X_train[0], cmap=mpl.cm.Greys)
imgplot.set_interpolation('nearest')
ax.xaxis.set_ticks_position('top')
ax.yaxis.set_ticks_position('left')
pyplot.show()
# Let's now display the 11th record
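# One possible sketch for the prompt above (the 11th record is index 10):
fig = pyplot.figure()
ax = fig.add_subplot(1, 1, 1)
ax.imshow(X_train[10], cmap=mpl.cm.Greys, interpolation='nearest')
pyplot.show()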
Explanation: Display Images
Let's now display some of the images and see how they look
We will be using matplotlib library for displaying the image
End of explanation |
2,666 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title of Database
Step1: Import and basic data inspection
Step2: The dataframe consists of only positive values and the classes are encoded as strings in the variable with index 24
Step3: Whats the distribution of the classes?
Step4: The Move_Forward and the Sharp-Right-Turn Class combine nearly 80% of all observated classes. So it might happen, that the accuracy may still be high with around 75% although most of the features are eliminated.
Preprocessing
0. Mapping integer values to the classes.
Step5: 1. Take a random sample of 90% of the rows from the dataframe. To ensure reproducability the random_state variable is set. The other 10% are placed aside for validation after training. The last column is the class column and is stored in the y variables respectively.
Step6: 2. Normalization between 0 and 1
Step7: 3. Make useful categorical variables out of the single column data by one-hot encoding it.
Step8: 4. Set Global Parameters
Step10: Train Neural Net
Due to a tight schedule we will not perform any cross validation. So it might happen that our accuracy estimators lack a little bit in potential of generalization. We shall live with that. Another setup of experiments would be, that we loop over some different dataframes samples up in the preprocessing steps and repeat all the steps below to finally average the results.
The dimension of the hidden layers are set arbitrarily but some runs have shown that 30 is a good number. The input_dim Variable is set to 24 because initially there are 24 features. The aim is to build the best possible neural net.
Optimizer
RMSprop is a mini-batch gradient descent algorithm which divides the gradient by a running average of its recent magnitude. More information
Step11: Comparison
The following data is from a paper published in March 2017. You can find that here
Step12: One can easily see that our results are better.
Dimensionality Reduction with Single Hidden Layer Autoencoder
Step13: Dimensionality reduction with stacked (multi hidden layer) autoencoder
Step14: Whats the best dimensionality reduction with single autoencoder?
Step15: Prediction for a classifier with a dimension of 16 (accuracy 0.9084)
Step16: Experimental area (not for presentation)
The indices of the extracted features are
Step17: Check the neural net performance with new selected features
Step18: Finding good features
"Is there a good number of features for the Robo- Dataset?" For that we create a loop over some numbers (i.e. dimension of hidden layer from the single autoencoder) and get the respective result.
The architecture within those loops is quite special. For that we recommend to have a look at the neural net visualization below.
The parameters for the ensemble training
Step19: The idea is
Step20: Summary
Step21: Architecture
Step22: Results | Python Code:
# modules
from keras.layers import Input, Dense, Dropout
from keras.models import Model
from keras.datasets import mnist
from keras.models import Sequential, load_model
from keras.optimizers import RMSprop
from keras.callbacks import TensorBoard
from __future__ import print_function
from keras.utils import plot_model
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from sklearn import preprocessing
from keras import layers
from keras import initializers
from matplotlib import axes
from matplotlib import rc
import keras
import matplotlib.pyplot as plt
import numpy as np
import math
import pydot
import graphviz
import pandas as pd
import IPython
%matplotlib inline
font = {'family' : 'monospace',
'weight' : 'bold',
'size' : 20}
rc('font', **font)
Explanation: Title of Database: Wall-Following navigation task with mobile robot SCITOS-G5
The data were collected as the SCITOS G5 navigates through the room following the wall in a clockwise
direction, for 4 rounds. To navigate, the robot uses 24 ultrasound sensors arranged circularly around its "waist".
The numbering of the ultrasound sensors starts at the front of the robot and increases in clockwise direction.
End of explanation
# import
data_raw = pd.read_csv('data/sensor_readings_24.csv', sep=",", header=None)
data = data_raw.copy()
Explanation: Import and basic data inspection
End of explanation
data.head()
Explanation: The dataframe consists of only positive values and the classes are encoded as strings in the variable with index 24
End of explanation
df_tab = data_raw
df_tab[24] = df_tab[24].astype('category')
tab = pd.crosstab(index=df_tab[24], columns="frequency")
tab.index.name = 'Class/Direction'
tab/tab.sum()
Explanation: What's the distribution of the classes?
End of explanation
mapping = {key: value for (key, value) in zip(data[24].unique(), range(len(data[24].unique())))}
print(mapping)
data.replace({24:mapping}, inplace=True)
data[24].unique()
Explanation: The Move_Forward and the Sharp-Right-Turn class together make up nearly 80% of all observed classes. So it might happen that the accuracy stays high at around 75% even though most of the features are eliminated.
Preprocessing
0. Mapping integer values to the classes.
End of explanation
data_train = data.sample(frac=0.9, random_state=42)
data_val = data.drop(data_train.index)
df_x_train = data_train.iloc[:,:-1]
df_y_train = data_train.iloc[:,-1]
df_x_val = data_val.iloc[:,:-1]
df_y_val = data_val.iloc[:,-1]
Explanation: 1. Take a random sample of 90% of the rows from the dataframe. To ensure reproducibility the random_state variable is set. The other 10% are placed aside for validation after training. The last column is the class column and is stored in the y variables respectively.
End of explanation
x_train = df_x_train.values
x_train = (x_train - x_train.min()) / (x_train.max() - x_train.min())
y_train = df_y_train.values
x_val = df_x_val.values
x_val = (x_val - x_val.min()) / (x_val.max() - x_val.min())
y_val = df_y_val.values
y_eval = y_val
Explanation: 2. Normalization between 0 and 1
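Note that the cell above rescales each array with its global minimum and maximum (and the validation data with its own statistics). A hedged per-feature alternative using the sklearn preprocessing module imported above, fitted on the training data only, would be:
scaler = preprocessing.MinMaxScaler()
x_train_scaled = scaler.fit_transform(df_x_train.values)  # fit on training data only
x_val_scaled = scaler.transform(df_x_val.values)          # reuse the same scale for validation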
End of explanation
y_train = keras.utils.to_categorical(y_train, 4)
y_val = keras.utils.to_categorical(y_val, 4)
Explanation: 3. Make useful categorical variables out of the single column data by one-hot encoding it.
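A tiny illustration of what the one-hot encoding produces for the four classes (each label becomes a row of floats with a single 1 in the column of its class):
print(keras.utils.to_categorical([0, 2, 1, 3], 4))
# [[1. 0. 0. 0.]
#  [0. 0. 1. 0.]
#  [0. 1. 0. 0.]
#  [0. 0. 0. 1.]]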
End of explanation
epochsize = 150
batchsize = 24
shuffle = False
dropout = 0.1
num_classes = 4
input_dim = x_train.shape[1]
hidden1_dim = 30
hidden2_dim = 30
class_names = mapping.keys()
Explanation: 4. Set Global Parameters
End of explanation
input_data = Input(shape=(input_dim,), dtype='float32', name='main_input')
hidden_layer1 = Dense(hidden1_dim, activation='relu', input_shape=(input_dim,), kernel_initializer='normal')(input_data)
dropout1 = Dropout(dropout)(hidden_layer1)
hidden_layer2 = Dense(hidden2_dim, activation='relu', input_shape=(input_dim,), kernel_initializer='normal')(dropout1)
dropout2 = Dropout(dropout)(hidden_layer2)
output_layer = Dense(num_classes, activation='softmax', kernel_initializer='normal')(dropout2)
model = Model(inputs=input_data, outputs=output_layer)
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
plot_model(model, to_file='images/robo1_nn.png', show_shapes=True, show_layer_names=True)
IPython.display.Image("images/robo1_nn.png")
model.fit(x_train, y_train,
batch_size=batchsize,
epochs=epochsize,
verbose=0,
shuffle=shuffle,
validation_split=0.05)
nn_score = model.evaluate(x_val, y_val)[1]
print(nn_score)
fig = plt.figure(figsize=(20,10))
plt.plot(model.history.history['val_acc'])
plt.plot(model.history.history['acc'])
plt.axhline(y=nn_score, c="red")
plt.text(0, nn_score, "test: " + str(round(nn_score, 4)), fontdict=font)
plt.title('model accuracy for neural net with 2 hidden layers')
plt.ylabel('accuracy')
plt.xlabel('epochs')
plt.legend(['train', 'valid'], loc='lower right')
plt.show()
import itertools
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_eval, model.predict(x_val).argmax(axis=-1))
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure(figsize=(20,10))
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure(figsize=(20,10))
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized confusion matrix')
Explanation: Train Neural Net
Due to a tight schedule we will not perform any cross-validation, so our accuracy estimates might lack a little in terms of generalization; we shall live with that. An alternative experimental setup would be to loop over several different dataframe samples in the preprocessing steps, repeat all the steps below, and finally average the results.
The dimensions of the hidden layers are set arbitrarily, but some runs have shown that 30 is a good number. The input_dim variable is set to 24 because there are initially 24 features. The aim is to build the best possible neural net.
Optimizer
RMSprop is a mini-batch gradient descent algorithm which divides the gradient by a running average of its recent magnitude. More information: http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf
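For reference, a standard formulation of the update RMSprop performs for each weight $w$ (not taken from this notebook) is
$$ E[g^2]_t = \rho\, E[g^2]_{t-1} + (1-\rho)\, g_t^2, \qquad w_{t+1} = w_t - \frac{\eta}{\sqrt{E[g^2]_t + \epsilon}}\, g_t $$
with gradient $g_t$, decay rate $\rho$ (0.9 in Keras' default RMSprop) and learning rate $\eta$.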
The weights are initialized by a normal distribution with mean 0 and standard deviation of 0.05.
End of explanation
IPython.display.Image("images/2018-01-25 18_44_01-PubMed Central, Table 2_ Sensors (Basel). 2017 Mar; 17(3)_ 549. Published online.png")
Explanation: Comparison
The following data is from a paper published in March 2017. You can find that here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5375835/
End of explanation
encoder_dim = 8
hidden1_dim = 30
hidden2_dim = 30
main_input = Input(shape=(input_dim,), dtype='float32', name='main_input')
encoding_layer = Dense(encoder_dim, activation='relu', kernel_initializer='normal')
encoding_layer_output = encoding_layer(main_input)
decoding_layer_output = Dense(input_dim
,activation='sigmoid'
,name='decoder_output'
,kernel_initializer='normal')(encoding_layer_output)
x = Dense(hidden1_dim, activation='relu', kernel_initializer='normal')(encoding_layer_output)
x = Dropout(dropout)(x)
x = Dense(hidden2_dim, activation='relu', kernel_initializer='normal')(x)
x = Dropout(dropout)(x)
classifier_output = Dense(num_classes
,activation='softmax'
,name='main_output'
,kernel_initializer='normal')(x)
auto_classifier = Model(inputs=main_input, outputs=[classifier_output, decoding_layer_output])
auto_classifier.compile(optimizer=RMSprop(),
loss={'main_output': 'categorical_crossentropy', 'decoder_output': 'mean_squared_error'},
loss_weights={'main_output': 1., 'decoder_output': 1.},
metrics=['accuracy'])
plot_model(auto_classifier, to_file='images/robo4_auto_class_LR.png', show_shapes=True, show_layer_names=True)
IPython.display.Image("images/robo4_auto_class_LR.png")
auto_classifier.fit({'main_input': x_train},
{'main_output': y_train, 'decoder_output': x_train},
epochs=epochsize,
batch_size=batchsize,
shuffle=shuffle,
validation_split=0.05,
verbose=0)
score = auto_classifier.evaluate(x=x_val, y=[y_val, x_val], verbose=1)[3]
print(score)
fig = plt.figure(figsize=(20,10))
plt.plot(auto_classifier.history.history['val_main_output_acc'])
plt.plot(auto_classifier.history.history['main_output_acc'])
plt.axhline(y=score, c="red")
plt.text(0, score, "test: " + str(round(score, 4)), fontdict=font)
plt.title('model accuracy for ' + str(round(input_dim/encoder_dim, 2)) + ' x compression with single layer autoencoder')
plt.ylabel('accuracy')
plt.xlabel('epochs')
plt.legend(['train', 'valid'], loc='lower right')
plt.show()
Explanation: One can easily see that our results are better.
Dimensionality Reduction with Single Hidden Layer Autoencoder
End of explanation
encoder_dim1 = 16
encoder_dim2 = 8
decoder_dim1 = 16
main_input = Input(shape=(input_dim,), dtype='float32', name='main_input')
encoding_layer1 = Dense(encoder_dim1, activation='relu', kernel_initializer='normal')(main_input)
encoding_layer2 = Dense(encoder_dim2, activation='relu', kernel_initializer='normal')(encoding_layer1)
decoding_layer1 = Dense(decoder_dim1
,activation='relu'
,kernel_initializer='normal')(encoding_layer2)
decoding_layer2 = Dense(input_dim
,activation='sigmoid'
,name='decoder_output'
,kernel_initializer='normal')(decoding_layer1)
x = Dense(hidden1_dim, activation='relu', kernel_initializer='normal')(encoding_layer2)
x = Dropout(dropout)(x)
x = Dense(hidden2_dim, activation='relu', kernel_initializer='normal')(x)
x = Dropout(dropout)(x)
classifier_output = Dense(num_classes
,activation='softmax'
,name='main_output'
,kernel_initializer='normal')(x)
stacked_auto_classifier = Model(inputs=main_input, outputs=[classifier_output, decoding_layer2])
stacked_auto_classifier.compile(optimizer=RMSprop(),
loss={'main_output': 'categorical_crossentropy', 'decoder_output': 'mean_squared_error'},
loss_weights={'main_output': 1., 'decoder_output': 1.},
metrics=['accuracy'])
plot_model(stacked_auto_classifier, to_file='images/stacked__auto_class.png', show_shapes=True, show_layer_names=True)
IPython.display.Image("images/stacked__auto_class.png")
stacked_auto_classifier.fit({'main_input': x_train},
{'main_output': y_train, 'decoder_output': x_train},
epochs=epochsize,
batch_size=batchsize,
shuffle=shuffle,
validation_split=0.05,
verbose=0)
stacked_score = stacked_auto_classifier.evaluate(x=x_val, y=[y_val, x_val], verbose=1)[3]
print(stacked_score)
fig = plt.figure(figsize=(20,10))
plt.plot(stacked_auto_classifier.history.history['val_main_output_acc'])
plt.plot(stacked_auto_classifier.history.history['main_output_acc'])
plt.axhline(y=stacked_score, c="red")
plt.text(0, stacked_score, "test: " + str(round(stacked_score, 4)), fontdict=font)
plt.title('model accuracy for ' + str(round(input_dim/encoder_dim, 2)) + ' x compression with stacked autoencoder')
plt.ylabel('accuracy')
plt.xlabel('epochs')
plt.legend(['train', 'valid'], loc='lower right')
plt.show()
Explanation: Dimensionality reduction with stacked (multi hidden layer) autoencoder
End of explanation
# the initial coding dimension s.t. there is no feature selection at the beginning
encoding_dim = input_dim
result3 = {'encoding_dim': []
,'auto_classifier_acc': []}
while encoding_dim > 0:
main_input = Input(shape=(input_dim,), dtype='float32', name='main_input')
encoding_layer = Dense(encoding_dim, activation='relu', name='encoder', kernel_initializer='normal')
encoding_layer_output = encoding_layer(main_input)
decoding_layer_output = Dense(input_dim, activation='sigmoid'
,name='decoder_output'
,kernel_initializer='normal')(encoding_layer_output)
x = Dense(hidden1_dim, activation='relu', kernel_initializer='normal')(encoding_layer_output)
x = Dropout(dropout)(x)
x = Dense(hidden2_dim, activation='relu', kernel_initializer='normal')(x)
x = Dropout(dropout)(x)
classifier_output = Dense(num_classes, activation='softmax', name='main_output', kernel_initializer='normal')(x)
auto_classifier = Model(inputs=main_input, outputs=[classifier_output, decoding_layer_output])
auto_classifier.compile(optimizer=RMSprop(),
loss={'main_output': 'categorical_crossentropy', 'decoder_output': 'mean_squared_error'},
loss_weights={'main_output': 1., 'decoder_output': 1.},
metrics=['accuracy'])
auto_classifier.fit({'main_input': x_train},
{'main_output': y_train, 'decoder_output': x_train},
epochs=epochsize,
batch_size=batchsize,
shuffle=shuffle,
validation_split=0.05,
verbose=0)
accuracy = auto_classifier.evaluate(x=x_val, y=[y_val, x_val], verbose=1)[3]
result3['encoding_dim'].append(encoding_dim)
result3['auto_classifier_acc'].append(accuracy)
encoding_dim -=1
print(result3)
result_df = pd.DataFrame(result3)
result_df['neural_net_acc'] = nn_score
result_df
fig = plt.figure(figsize=(20,10))
plt.bar(result_df['encoding_dim'], result_df['auto_classifier_acc'])
plt.axhline(y=result_df['neural_net_acc'][0], c="red")
plt.text(0, result_df['neural_net_acc'][0], "best neural net: " + str(round(result_df['neural_net_acc'][0], 4))
,fontdict=font)
plt.title('model accuracy for different encoding dimensions')
plt.ylabel('accuracy')
plt.xlabel('dimension')
plt.ylim(0.6, 1)
Explanation: What's the best dimensionality reduction with a single autoencoder?
End of explanation
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_eval, auto_classifier.predict(x_val)[0].argmax(axis=-1))
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure(figsize=(20,10))
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure(figsize=(20,10))
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized confusion matrix')
result_df.to_csv('results/robo_results.csv')
Explanation: Prediction for a classifier with a dimension of 16 (accuracy 0.9084)
End of explanation
encoding_weights = encoding_layer.get_weights()
sum_of_weights = {index: item.sum() for (index, item) in enumerate(encoding_weights[0])}
weights = sum_of_weights
features = []
for i in range(encoder_dim1):
max_key = max(weights, key=lambda key: weights[key])
features.append(max_key)
del weights[max_key]
print(features)
Explanation: Experimental area (not for presentation)
The indices of the extracted features are:
End of explanation
x_train_selected = np.array([x[features] for x in x_train])
x_val_selected = np.array([x[features] for x in x_val])
input_dim = x_train_selected.shape[1]
hidden1_dim = 26
hidden2_dim = 26
result3 = []
for i in range(1,4):
model_new = Sequential()
model_new.add(Dense(hidden1_dim, activation='relu', input_shape=(input_dim,)))
model_new.add(Dense(hidden2_dim, activation='relu'))
model_new.add(Dense(num_classes, activation='softmax'))
model_new.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
model_new.fit(x_train_selected, y_train,
batch_size=batchsize,
epochs=epochsize,
verbose=0,
shuffle=shuffle,
validation_split=0.1)
score = model_new.evaluate(x_val_selected, y_val)[1]
result3.append(score)
print(result3)
print(np.mean(result3))
Explanation: Check the neural net performance with the newly selected features
End of explanation
# the initial coding dimension s.t. there is no feature selection at the beginning
encoding_dim = 24
# dimension of the neural net layer1
hidden1_dim = 30
# dimension of the second neural net layer
hidden2_dim = 30
epoch_size = 150
batch_size = 24
shuffle = False
result2 = {'encoding_dim/features': []
,'compression_level': []
,'auto_classifier_acc': []
,'selected_classifier_acc': []
,'features': []}
Explanation: Finding good features
"Is there a good number of features for the Robo- Dataset?" For that we create a loop over some numbers (i.e. dimension of hidden layer from the single autoencoder) and get the respective result.
The architecture within those loops is quite special. For that we recommend to have a look at the neural net visualization below.
The parameters for the ensemble training
End of explanation
while encoding_dim > 0:
main_input = Input(shape=(input_dim,), dtype='float32', name='main_input')
encoding_layer = Dense(encoding_dim, activation='relu', name='encoder', kernel_initializer='normal')
encoding_layer_output = encoding_layer(main_input)
decoding_layer_output = Dense(input_dim, activation='sigmoid'
,name='decoder_output'
,kernel_initializer='normal')(encoding_layer_output)
x = Dense(hidden1_dim, activation='relu', kernel_initializer='normal')(encoding_layer_output)
x = Dense(hidden2_dim, activation='relu', kernel_initializer='normal')(x)
classifier_output = Dense(num_classes, activation='softmax', name='main_output', kernel_initializer='normal')(x)
auto_classifier = Model(inputs=main_input, outputs=[classifier_output, decoding_layer_output])
auto_classifier.compile(optimizer=RMSprop(),
loss={'main_output': 'categorical_crossentropy', 'decoder_output': 'mean_squared_error'},
loss_weights={'main_output': 1., 'decoder_output': 1.},
metrics=['accuracy'])
auto_classifier.fit({'main_input': x_train},
{'main_output': y_train, 'decoder_output': x_train},
epochs=epoch_size,
batch_size=batch_size,
shuffle=shuffle,
validation_split=0.1,
verbose=0)
accuracy = auto_classifier.evaluate(x=x_val, y=[y_val, x_val], verbose=1)[3]
result2['encoding_dim/features'].append(encoding_dim)
result2['compression_level'].append(1 - encoding_dim/24)
result2['auto_classifier_acc'].append(accuracy)
encoding_weights = encoding_layer.get_weights()
sum_of_weights = {index: item.sum() for (index, item) in enumerate(encoding_weights[0])}
weights = sum_of_weights
features = []
for i in range(encoding_dim):
max_key = max(weights, key=lambda key: weights[key])
features.append(max_key)
del weights[max_key]
result2['features'].append(features)
x_train_selected = np.array([x[features] for x in x_train])
x_val_selected = np.array([x[features] for x in x_val])
input_dim_new = x_train_selected.shape[1]
accuracy_list = []
for i in range(1):
model_new = Sequential()
model_new.add(Dense(hidden1_dim, activation='relu', input_shape=(input_dim_new,), kernel_initializer='normal'))
model_new.add(Dense(hidden2_dim, activation='relu', kernel_initializer='normal'))
model_new.add(Dense(num_classes, activation='softmax', kernel_initializer='normal'))
model_new.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
model_new.fit(x_train_selected, y_train,
batch_size=batch_size,
epochs=epoch_size,
verbose=0,
shuffle=shuffle,
validation_split=0.1)
score = model_new.evaluate(x_val_selected, y_val)[1]
accuracy_list.append(score)
result2['selected_classifier_acc'].append(np.mean(accuracy_list))
encoding_dim -=1
print(result2)
Explanation: The idea is:
- reduce the number of features encoding_dim iteratively by training an auto_classifier network
- after training, analyze the weights of the first layer
- sum up the weights for each input node and keep the encoding_dim inputs with the largest sums as the new features (see the sketch after this list)
- test the new features with the pre-defined neural net and see which selection is best.
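A compact sketch of that ranking step, assuming encoding_weights has already been obtained via encoding_layer.get_weights() as in the loop above:
import numpy as np
kernel = encoding_weights[0]                  # shape (input_dim, encoding_dim)
scores = kernel.sum(axis=1)                   # summed outgoing weight per input feature
features = np.argsort(scores)[::-1][:encoding_dim].tolist()  # indices of the top-scoring inputs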
End of explanation
auto_classifier.summary()
Explanation: Summary
End of explanation
plot_model(auto_classifier, to_file='images/robo2_auto_class_LR.png', show_shapes=True, show_layer_names=True, rankdir='LR')
IPython.display.Image("images/robo2_auto_class_LR.png")
Explanation: Architecture
End of explanation
result_df = pd.DataFrame(result2)
result_df['neural_net_acc'] = 0.949938949939
result_df
result_df.to_csv('results/robo_results.csv')
result_df.plot(x='encoding_dim/features', y=['selected_classifier_acc', 'neural_net_acc'], kind='bar', figsize=(20,10))
Explanation: Results
End of explanation |
2,667 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IPython 3 (jupyter)
Video Toturial
Step1: $$Julia + Python + R = jupyter$$
This is not to indicate that jupyter only supports these languages. But it is a reference to a talk by Fernando Perez. The complete list has more than 15 languages.
But there is another reason
Galileo documented his observations about Jupiter's moons and published them in the Sidereus Nuncius (Starry Messenger) back in 1610 which was a pamphlet they he published. This was the first paper ever published about astronomy that used observations using a telescope. He has proven that the Earth orbits the Sun like Jupiter's moons orbit Jupiter.
"I therefore concluded and decided unhesitatingly, that there are three stars in the heavens moving about Jupiter, as Venus and Mercury round the Sun; which at length was established as clear as daylight by numerous subsequent observations. These observations also established that there are not only three, but four, erratic sidereal bodies performing their revolutions round Jupiter...the revolutions are so swift that an observer may generally get differences of position every hour."
Galileo trans Carlos, 1880, p47.
<img src="http | Python Code:
from IPython.display import Image
Image(filename='kernel.png')
Explanation: IPython 3 (jupyter)
Video Tutorial: https://www.youtube.com/user/roshanRush
Jupyter is a web-based interactive development environment. It supports multiple programming languages like Julia, Octave, Python and R (in alphabetical order).
Upgrade to the new version
sudo pip install --upgrade "ipython[all]"
If you use sudo pip install --upgrade ipython, you might get "Terminal Unavailable" on New menu.
Enabling Python3
You can add Python 3 kernel to jupyter by using this command:
sudo ipython3 kernelspec install-self
Why is it called jupyter?
<img src="http://jupyter.org/images/jupyter-sq-text.svg" style="width:50%;height:50%">
The name represents the new direction of IPython development. They are moving away from linking the IPython platform with a single language. The way it currently works is as follows:
End of explanation
α = 1 #\alpha
β = 2 #\beta
λ = 0 #\lambda
Λ = 0 #\Lambda
𝔞 = 0 #\mfraka - Notice it is not monospaced - Thinner
𝖰 = 0 #\msansQ - Notice it is not monospaced - Wider
α / β
Explanation: $$Julia + Python + R = jupyter$$
This is not to indicate that jupyter only supports these languages. But it is a reference to a talk by Fernando Perez. The complete list has more than 15 languages.
But there is another reason
Galileo documented his observations about Jupiter's moons and published them in the Sidereus Nuncius (Starry Messenger) back in 1610, a pamphlet that he published himself. This was the first astronomy paper ever published that was based on observations made with a telescope. He took it as proof that the Earth orbits the Sun, just as Jupiter's moons orbit Jupiter.
"I therefore concluded and decided unhesitatingly, that there are three stars in the heavens moving about Jupiter, as Venus and Mercury round the Sun; which at length was established as clear as daylight by numerous subsequent observations. These observations also established that there are not only three, but four, erratic sidereal bodies performing their revolutions round Jupiter...the revolutions are so swift that an observer may generally get differences of position every hour."
Galileo trans Carlos, 1880, p47.
<img src="http://upload.wikimedia.org/wikipedia/commons/d/d0/Sidereus_Nuncius_Medicean_Stars.jpg">
<center>Image(s) courtesy of the History of Science Collections, University of Oklahoma Libraries.</center>
Sharing these observations and conclusions in a way that can be replicated and verified inspired the modern scientific method. This is what IPython Notebooks is used for; sharing the complete process with others to verify your process and replicate the results.
So what's new?
http://ipython.org/ipython-doc/3/whatsnew/version3.html
3.x will be the last monolithic release of IPython, as the next release cycle will see the growing project split into its Python-specific and language-agnostic components.
http://ipython.org/ipython-doc/3/whatsnew/version3.html
1. Notebooks with different kernels
https://github.com/ipython/ipython/wiki/IPython-kernels-for-other-languages
2. Unicode Identifiers
Works with Python3 and Julia
To get α variable, type \alpha Then Press Tab.
End of explanation |
2,668 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Variational Autoencoder to Generate Faces
In this example, we are going to use VAE to generate faces. The dataset we are going to use is CelebA. The dataset consists of more than 200K celebrity face images. You have to download the Align&Cropped Images from the above website to run this example.
Step1: Define the Model
Here, we define a slightly more complicate CNN networks using convolution, batchnorm, and leakyRelu.
Step2: Load the Dataset
Step3: Define the Training Objective
Step4: Define the Optimizer
Step5: Spin Up the Training
This could take a while. It took about 2 hours on a desktop with a intel i7-6700 cpu and 40GB java heap memory. You can reduce the training time by using less data (some changes in the "Load the Dataset" section), but the performce may not as good.
Step6: Random Sample Some Images | Python Code:
from bigdl.nn.layer import *
from bigdl.nn.criterion import *
from bigdl.optim.optimizer import *
from bigdl.dataset import mnist
import datetime as dt
from glob import glob
import os
import numpy as np
from utils import *
import imageio
image_size = 148
Z_DIM = 128
ENCODER_FILTER_NUM = 32
# download the CelebA data; you may replace this with your own data path
DATA_PATH = os.getenv("ANALYTICS_ZOO_HOME") + "/apps/variational-autoencoder/img_align_celeba"
from zoo.common.nncontext import *
sc = init_nncontext("Variational Autoencoder Example")
sc.addFile(os.getenv("ANALYTICS_ZOO_HOME")+"/apps/variational-autoencoder/utils.py")
Explanation: Using Variational Autoencoder to Generate Faces
In this example, we are going to use VAE to generate faces. The dataset we are going to use is CelebA. The dataset consists of more than 200K celebrity face images. You have to download the Align&Cropped Images from the above website to run this example.
End of explanation
def conv_bn_lrelu(in_channels, out_channles, kw=4, kh=4, sw=2, sh=2, pw=-1, ph=-1):
model = Sequential()
model.add(SpatialConvolution(in_channels, out_channles, kw, kh, sw, sh, pw, ph))
model.add(SpatialBatchNormalization(out_channles))
model.add(LeakyReLU(0.2))
return model
def upsample_conv_bn_lrelu(in_channels, out_channles, out_width, out_height, kw=3, kh=3, sw=1, sh=1, pw=-1, ph=-1):
model = Sequential()
model.add(ResizeBilinear(out_width, out_height))
model.add(SpatialConvolution(in_channels, out_channles, kw, kh, sw, sh, pw, ph))
model.add(SpatialBatchNormalization(out_channles))
model.add(LeakyReLU(0.2))
return model
def get_encoder_cnn():
input0 = Input()
#CONV
conv1 = conv_bn_lrelu(3, ENCODER_FILTER_NUM)(input0) # 32 * 32 * 32
conv2 = conv_bn_lrelu(ENCODER_FILTER_NUM, ENCODER_FILTER_NUM * 2)(conv1) # 16 * 16 * 64
conv3 = conv_bn_lrelu(ENCODER_FILTER_NUM * 2, ENCODER_FILTER_NUM * 4)(conv2) # 8 * 8 * 128
conv4 = conv_bn_lrelu(ENCODER_FILTER_NUM * 4, ENCODER_FILTER_NUM * 8)(conv3) # 4 * 4 * 256
view = View([4*4*ENCODER_FILTER_NUM*8])(conv4)
inter = Linear(4*4*ENCODER_FILTER_NUM*8, 2048)(view)
inter = BatchNormalization(2048)(inter)
inter = ReLU()(inter)
# fully connected to generate mean and log-variance
mean = Linear(2048, Z_DIM)(inter)
log_variance = Linear(2048, Z_DIM)(inter)
model = Model([input0], [mean, log_variance])
return model
def get_decoder_cnn():
input0 = Input()
linear = Linear(Z_DIM, 2048)(input0)
linear = Linear(2048, 4*4*ENCODER_FILTER_NUM * 8)(linear)
reshape = Reshape([ENCODER_FILTER_NUM * 8, 4, 4])(linear)
bn = SpatialBatchNormalization(ENCODER_FILTER_NUM * 8)(reshape)
# upsampling
up1 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM*8, ENCODER_FILTER_NUM*4, 8, 8)(bn) # 8 * 8 * 128
up2 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM*4, ENCODER_FILTER_NUM*2, 16, 16)(up1) # 16 * 16 * 64
up3 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM*2, ENCODER_FILTER_NUM, 32, 32)(up2) # 32 * 32 * 32
up4 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM, 3, 64, 64)(up3) # 64 * 64 * 3
output = Sigmoid()(up4)
model = Model([input0], [output])
return model
def get_autoencoder_cnn():
input0 = Input()
encoder = get_encoder_cnn()(input0)
sampler = GaussianSampler()(encoder)
decoder_model = get_decoder_cnn()
decoder = decoder_model(sampler)
model = Model([input0], [encoder, decoder])
return model, decoder_model
model, decoder = get_autoencoder_cnn()
Explanation: Define the Model
Here, we define a slightly more complicated CNN network using convolution, batch normalization, and LeakyReLU.
End of explanation
def get_data():
data_files = glob(os.path.join(DATA_PATH, "*.jpg"))
rdd_train_images = sc.parallelize(data_files[:100000]) \
.map(lambda path: inverse_transform(get_image(path, image_size)).transpose(2, 0, 1))
rdd_train_sample = rdd_train_images.map(lambda img: Sample.from_ndarray(img, [np.array(0.0), img]))
return rdd_train_sample
train_data = get_data()
Explanation: Load the Dataset
End of explanation
criterion = ParallelCriterion()
criterion.add(KLDCriterion(), 1.0) # You may want to tweak this parameter
criterion.add(BCECriterion(size_average=False), 1.0 / 64)
Explanation: Define the Training Objective
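Together, the two criteria above form the usual variational autoencoder objective (shown only as a sketch of the idea):
\begin{equation}
\mathcal{L} = D_{KL}\big(q(z \mid x)\,\|\,\mathcal{N}(0, I)\big) + \mathrm{BCE}(\hat{x}, x)
\end{equation}
The KLDCriterion term keeps the latent code drawn by the GaussianSampler close to a standard normal, while the BCECriterion term measures how well the decoder reconstructs the input; the 1.0 / 64 factor is simply the weight given to the summed reconstruction error.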
End of explanation
batch_size = 100
# Create an Optimizer
optimizer = Optimizer(
model=model,
training_rdd=train_data,
criterion=criterion,
optim_method=Adam(0.001, beta1=0.5),
end_trigger=MaxEpoch(1),
batch_size=batch_size)
app_name='vea-'+dt.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary = TrainSummary(log_dir='/tmp/vae',
app_name=app_name)
train_summary.set_summary_trigger("LearningRate", SeveralIteration(10))
train_summary.set_summary_trigger("Parameters", EveryEpoch())
optimizer.set_train_summary(train_summary)
print ("saving logs to ",app_name)
Explanation: Define the Optimizer
End of explanation
redire_spark_logs()
show_bigdl_info_logs()
def gen_image_row():
decoder.evaluate()
return np.column_stack([decoder.forward(np.random.randn(1, Z_DIM)).reshape(3, 64,64).transpose(1, 2, 0) for s in range(8)])
def gen_image():
return np.row_stack([gen_image_row() for i in range(8)])
for i in range(1, 6):
optimizer.set_end_when(MaxEpoch(i))
trained_model = optimizer.optimize()
image = gen_image()
if not os.path.exists("./images"):
os.makedirs("./images")
if not os.path.exists("./models"):
os.makedirs("./models")
# you may change the following directory accordingly and make sure the directory
# you are writing to exists
imageio.imwrite("./images/image_%s.png" % i , image)
decoder.saveModel("./models/decoder_%s.model" % i, over_write = True)
import matplotlib
matplotlib.use('Agg')
%pylab inline
import numpy as np
import datetime as dt
import matplotlib.pyplot as plt
loss = np.array(train_summary.read_scalar("Loss"))
plt.figure(figsize = (12,12))
plt.plot(loss[:,0],loss[:,1],label='loss')
plt.xlim(0,loss.shape[0]+10)
plt.grid(True)
plt.title("loss")
Explanation: Spin Up the Training
This could take a while. It took about 2 hours on a desktop with an Intel i7-6700 CPU and 40GB of Java heap memory. You can reduce the training time by using less data (some changes in the "Load the Dataset" section), but the performance may not be as good.
End of explanation
from matplotlib.pyplot import imshow
img = gen_image()
imshow(img)
Explanation: Random Sample Some Images
End of explanation |
2,669 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started with TensorFlow (Graph Mode)
Learning Objectives
- Understand the difference between Tensorflow's two modes
Step1: Graph Execution
Adding Two Tensors
Build the Graph
Unlike eager mode, no concrete value will be returned yet. Just a name, shape and type are printed. Behind the scenes a directed graph is being created.
Step2: Run the Graph
A graph can be executed in the context of a tf.Session(). Think of a session as the bridge between the front-end Python API and the back-end C++ execution engine.
Within a session, passing a tensor operation to run() will cause Tensorflow to execute all upstream operations in the graph required to calculate that value.
Step3: Can you mix eager and graph execution together?
Parameterizing the Graph
What if values of a and b keep changing? How can you parameterize them so they can be fed in at runtime?
Step 1
Step4: Linear Regression
Toy Dataset
We'll model the following
Step5: Loss Function
Using mean squared error, our loss function is
Step6: Optimizer
An optimizer in TensorFlow both calculates gradients and updates weights. In addition to basic gradient descent, TF provides implementations of several more advanced optimizers such as ADAM and FTRL. They can all be found in the tf.train module.
Note below we're not explicitly telling the optimizer which tensors are our weight tensors. So how does it know what to update? Optimizers will update all variables in the tf.GraphKeys.TRAINABLE_VARIABLES collection. All variables are added to this collection by default. Since our only variables are w0 and w1, this is the behavior we want. If we had a variable that we didn't want added to the collection, we would set trainable=False when creating it.
Exercise 2
When performing gradient descent, we must specify the learning rate and which optimizer to use. In the training loop we will create below, we'll pass the learning rate to the optimzer using a feed dictionary. Thus, we need to create a placeholder for the value of the learning rate. You can read more about placeholders in Tensorflow here. Placeholders are used for values that will be fed to the operation later. Complete the code below to create a placeholder for the learning rate.
We also want to specify the optimizer for the training loop we'll perform below. There are Tensorflow implementations of various optimizers. Complete the code below to create an optimizer. You can find the available optimizers in the tf.train Module.
Step7: Training Loop
Note our results are identical to what we found in Eager mode.
Exercise 3
Finally we are ready to evaluate our training loop in Graph mode. As before, we need to calculate the gradients and update the weights via our optimizer. Complete the code below to call the optimizer using sess.run. You can read more about using tf.Session() to execute operations here. Note that you will need to also pass a feed_dict to specify the learning rate of the optimizer you created above.
After completing this Exercise, compare with the training loop we made for Eager mode in the previous lab. | Python Code:
import tensorflow as tf
print(tf.__version__)
Explanation: Getting started with TensorFlow (Graph Mode)
Learning Objectives
- Understand the difference between Tensorflow's two modes: Eager Execution and Graph Execution
- Get used to deferred execution paradigm: first define a graph then run it in a tf.Session()
- Understand how to parameterize a graph using tf.placeholder() and feed_dict
- Understand the difference between constant Tensors and variable Tensors, and how to define each
- Practice using mid-level tf.train module for gradient descent
Introduction
Eager Execution
Eager mode evaluates operations and returns concrete values immediately. To enable eager mode simply place tf.enable_eager_execution() at the top of your code. We recommend using eager execution when prototyping as it is intuitive, easier to debug, and requires less boilerplate code.
Graph Execution
Graph mode is TensorFlow's default execution mode (although it will change to eager in TF 2.0). In graph mode, operations only produce a symbolic graph which doesn't get executed until run within the context of a tf.Session(). This style of coding is less intuitive and has more boilerplate, however it can lead to performance optimizations and is particularly suited for distributing training across multiple devices. We recommend using deferred execution for performance-sensitive production code.
End of explanation
a = tf.constant(value = [5, 3, 8], dtype = tf.int32)
b = tf.constant(value = [3, -1, 2], dtype = tf.int32)
c = tf.add(x = a, y = b)
print(c)
Explanation: Graph Execution
Adding Two Tensors
Build the Graph
Unlike eager mode, no concrete value will be returned yet. Just a name, shape and type are printed. Behind the scenes a directed graph is being created.
End of explanation
with tf.Session() as sess:
result = sess.run(fetches = c)
print(result)
Explanation: Run the Graph
A graph can be executed in the context of a tf.Session(). Think of a session as the bridge between the front-end Python API and the back-end C++ execution engine.
Within a session, passing a tensor operation to run() will cause Tensorflow to execute all upstream operations in the graph required to calculate that value.
End of explanation
a = tf.placeholder(dtype = tf.int32, shape = [None])
b = tf.placeholder(dtype = tf.int32, shape = [None])
c = tf.add(x = a, y = b)
with tf.Session() as sess:
result = sess.run(fetches = c, feed_dict = {
a: [3, 4, 5],
b: [-1, 2, 3]
})
print(result)
Explanation: Can you mix eager and graph execution together?
Parameterizing the Graph
What if values of a and b keep changing? How can you parameterize them so they can be fed in at runtime?
Step 1: Define Placeholders
Define a and b using tf.placeholder(). You'll need to specify the data type of the placeholder, and optionally a tensor shape.
Step 2: Provide feed_dict
Now when invoking run() within the tf.Session(), in addition to providing a tensor operation to evaluate, you also provide a dictionary whose keys are the names of the placeholders.
End of explanation
X = tf.constant(value = [1,2,3,4,5,6,7,8,9,10], dtype = tf.float32)
Y = 2 * X + 10
print("X:{}".format(X))
print("Y:{}".format(Y))
Explanation: Linear Regression
Toy Dataset
We'll model the following:
\begin{equation}
y= 2x + 10
\end{equation}
End of explanation
with tf.variable_scope(name_or_scope = "training", reuse = tf.AUTO_REUSE):
w0 = # TODO: Your code goes here
w1 = # TODO: Your code goes here
Y_hat = w0 * X + w1
loss_mse = tf.reduce_mean(input_tensor = (Y_hat - Y)**2)
Explanation: Loss Function
Using mean squared error, our loss function is:
\begin{equation}
MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2
\end{equation}
$\hat{Y}$ represents the vector containing our model's predictions:
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
Note below we introduce TF variables for the first time. Unlike constants, variables are mutable.
Browse the official TensorFlow guide on variables for more information on when/how to use them.
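For illustration only (a generic sketch, not the exercise solution), the general pattern for creating a mutable tensor in graph mode looks like this; the initial value of 0.0 is an arbitrary choice:
weight = tf.get_variable(name = "weight", initializer = tf.constant(0.0))   # shape inferred from the initializer
bias = tf.Variable(initial_value = 0.0, name = "bias")                      # both are trainable by default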
Exercise 1
Because the parameters $w_0$ and $w_1$ will be updated through gradient descent, we need to tell Tensorflow that these values are variables and initialize them accordingly. Have a look at the Tensorflow usage for variables here. Complete the code below to define and initialize the variables w0 and w1.
End of explanation
LEARNING_RATE = # TODO: Your code goes here
optimizer = # TODO: Your code goes here
Explanation: Optimizer
An optimizer in TensorFlow both calculates gradients and updates weights. In addition to basic gradient descent, TF provides implementations of several more advanced optimizers such as ADAM and FTRL. They can all be found in the tf.train module.
Note below we're not explicitly telling the optimizer which tensors are our weight tensors. So how does it know what to update? Optimizers will update all variables in the tf.GraphKeys.TRAINABLE_VARIABLES collection. All variables are added to this collection by default. Since our only variables are w0 and w1, this is the behavior we want. If we had a variable that we didn't want added to the collection, we would set trainable=False when creating it.
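As a small aside (a sketch, not needed for the exercises), a variable you want excluded from optimization would be created like this:
step_counter = tf.Variable(0, trainable = False, name = "step_counter")  # left out of tf.GraphKeys.TRAINABLE_VARIABLES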
Exercise 2
When performing gradient descent, we must specify the learning rate and which optimizer to use. In the training loop we will create below, we'll pass the learning rate to the optimzer using a feed dictionary. Thus, we need to create a placeholder for the value of the learning rate. You can read more about placeholders in Tensorflow here. Placeholders are used for values that will be fed to the operation later. Complete the code below to create a placeholder for the learning rate.
We also want to specify the optimizer for the training loop we'll perform below. There are Tensorflow implementations of various optimizers. Complete the code below to create an optimizer. You can find the available optimizers in the tf.train Module.
End of explanation
STEPS = 1000
with tf.Session() as sess:
sess.run(tf.global_variables_initializer()) # initialize variables
for step in range(STEPS):
#1. Calculate gradients and update weights
# TODO: Your code goes here
#2. Periodically print MSE
if step % 100 == 0:
print("STEP: {} MSE: {}".format(step, sess.run(fetches = loss_mse)))
# Print final MSE and weights
print("STEP: {} MSE: {}".format(STEPS, sess.run(loss_mse)))
print("w0:{}".format(round(float(sess.run(w0)), 4)))
print("w1:{}".format(round(float(sess.run(w1)), 4)))
Explanation: Training Loop
Note our results are identical to what we found in Eager mode.
Exercise 3
Finally we are ready to evaluate our training loop in Graph mode. As before, we need to calculate the gradients and update the weights via our optimizer. Complete the code below to call the optimizer using sess.run. You can read more about using tf.Session() to execute operations here. Note that you will need to also pass a feed_dict to specify the learning rate of the optimizer you created above.
After completing this Exercise, compare with the training loop we made for Eager mode in the previous lab.
End of explanation |
2,670 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using pretrained GloVe Embedding
Step1: Download data
Step2: With pre-defined and fixed embeddings, we can not be better than just guessing
Step3: Embedding trainable, but still pre-set
Step4: Embeddings trained from scratch | Python Code:
# Based on
# https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/6.1-using-word-embeddings.ipynb
# https://machinelearningmastery.com/develop-word-embeddings-python-gensim/
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
Explanation: Using pretrained GloVe Embedding
End of explanation
import os
imdb_dir = 'C:/Users/olive/Development/data/aclImdb'
train_dir = os.path.join(imdb_dir, 'train')
labels = []
texts = []
for label_type in ['neg', 'pos']:
dir_name = os.path.join(train_dir, label_type)
for fname in os.listdir(dir_name):
if fname[-4:] == '.txt':
f = open(os.path.join(dir_name, fname), encoding='UTF-8')
texts.append(f.read())
f.close()
if label_type == 'neg':
labels.append(0)
else:
labels.append(1)
len(texts)
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np
maxlen = 500 # We will cut reviews after 100 words
training_samples = 15000 # We will be training on 200 samples
validation_samples = 10000 # We will be validating on 10000 samples
max_words = 10000 # We will only consider the top 10,000 words in the dataset
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
data = pad_sequences(sequences, maxlen=maxlen)
labels = np.asarray(labels)
print('Shape of data tensor:', data.shape)
print('Shape of label tensor:', labels.shape)
# But first, shuffle the data, since we started from data
# where sample are ordered (all negative first, then all positive).
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(data, labels, test_size=0.2, random_state=42, stratify=labels)
x_train.shape
Explanation: Download data:
original imdb database from http://ai.stanford.edu/~amaas/data/sentiment/
pre-computed GloVe embeddings from on Wikipedia data (6B) https://nlp.stanford.edu/projects/glove/
End of explanation
glove_dir = 'C:/Users/olive/Development/data/glove.6B'
embeddings_index = {}
f = open(os.path.join(glove_dir, 'glove.6B.100d.txt'), encoding='UTF-8')
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
embedding_dim = 100
embedding_matrix = np.zeros((max_words, embedding_dim))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if i < max_words:
if embedding_vector is not None:
# Words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense
model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['acc'])
model.summary()
batch_size=1000
model.fit(x_train, y_train,
epochs=10,
batch_size=batch_size,
validation_split=0.2)
Explanation: With pre-defined and fixed embeddings, we cannot do better than just guessing
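As a quick sanity check on the vectors loaded earlier (a minimal sketch; 'movie' is just an example token assumed to be in the GloVe vocabulary):
vec = embeddings_index.get('movie')   # returns None if the token is not in GloVe
if vec is not None:
    print(vec.shape)                  # (100,) for the glove.6B.100d vectors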
End of explanation
model.layers[0].trainable = True
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['acc'])
model.summary()
batch_size=1000
model.fit(x_train, y_train,
epochs=20,
batch_size=batch_size,
validation_split=0.2)
Explanation: Embedding trainable, but still pre-set
End of explanation
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense
model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['acc'])
model.summary()
batch_size=1000
model.fit(x_train, y_train,
epochs=10,
batch_size=batch_size,
validation_split=0.2)
train_loss, train_accuracy = model.evaluate(x_train, y_train, batch_size=batch_size)
train_accuracy
test_loss, test_accuracy = model.evaluate(x_test, y_test, batch_size=batch_size)
test_accuracy
Explanation: Embeddings trained from scratch
End of explanation |
2,671 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'mpiesm-1-2-ham', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: MPIESM-1-2-HAM
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnotic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
2,672 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stack Overflow Network Analysis
Claas Brüß, Simon Romanski and Maximilian Rünz
NOTE
Step1: 2 Data processing
The data behind this project was provided by Stack Overflow itself. They release frequent data dumps on archive.org. We have analyzed all posts from Stack Overflow with the stackoverflow.com-Posts.7z file. The compressed file contains a list of all questions and answeres formatted as xml.
Note
Step2: 2.1 Extract meaningful features
Before we started our analysis we extracted the features of the posts which are interesting for us. The provided features can be looked up in this text file. We selected the following features for our analysis
Step3: 2.2 Create edge list
Having selected the meaningful features, we started to create the graph. Each answer is matched with its corresponding question. The result are two nodes and one edge
Step4: 2.3 Split networks by tags
After we tried to analyze the whole Stack Overflow network and came to the conclusion, that it is simpy too big, we decided to split the network by tags. Therefore we created one edge list per Tag (e.g numpy).
Step5: 2.4 Order edges by time
In order to simplify the analysis of the network evolution, we ordered the edges based on the creation date of the answer.
Step6: 2.5 Format edge list to txt files
Last but not least, the json files are converted into txt edge list, so that they can be read easily by networkx.
Step7: 3 Data Cleaning
During the network analysis we noticed that it makes sense to clean the created network. Therefore, we implemented several filters
Step8: The whole stackoverflow community has more than 8 million users, 15 million questions and 23 million answers on different aspects of different libraries, programming languages and operating systems.
Hence, we decided to focus on specific widely-used libraries for our investigation. In our case we perform data analysis for a commonly used python library Numpy and compare it to another python library Matlplotlib as well as to a heavily used library for C++ called Eigen. Ultimately we will compare the entire Python community with the Numpy community.
For numpy we then create two networks
Step9: Filter by attributes
Step10: Filter by node degree
Step11: Only use giant component
Step12: Remove self loops
Step13: 3 Data Exploration
Step14: We can see that we have roughly 20.000 users.
Step15: And we have roughly 37.231 edges corresponding to one answer each.
Step16: The number of connected components describes the amount of completely seperated groups in the community that do not interfere with each other. A deeper analysis shows that there is in fact one big group and many very small groups.
Step17: The self-loops represent answers that users have given to their own questions. As this seems to be counterintuitive, we have removed those in our data cleaning.
Step18: The average degree for the numpy network is 3.5, we will evaluate this in the exploitation part of this report.
Step19: The cluster coefficient is relatively low due to the model of our network. More details on that are also following in the data exploitation chapter.
Step20: The maximal degree of a node is tremendously higher than the average degree. This is a little suspicous. Therefore, we will have a look at the distribution of the node degrees.
Degree distribution
To get insights about the user behaviour, i.e., how many question do users ask and answer we war plotting a degree distribution in the following
Step21: These plots show a distribution that demonstrates a high number of highly connected nodes. This superlineaer distribution plot imply a hub-and-spoke topology of this network. Note the double-logarithmic scaling of the scatter plot.
Step22: Like before can observe that incoming and outgoing degree distributions both demonstrate bevaior associated with networks with hub-and-spoke topology. The outgoing distribution exhibits this even stronger than the incoming distribution indicating that certain members of the community carry an overpropotional workload in answering questions posed on the platform.
Attribute Distribution
Step23: Another interesting property is the votes per edge as their distribution is another valuable metric to understand how user activity is concentrated on certain areas.
Step24: In general it is interesting to see the evoulution of a network over time. We can see that in this case the amount of edges is conitnuously increasing.
Comparison
It is instructive to compare the characteristics of the different subcommunities
Step25: At first one can notice, that the ratio between number of nodes and size of the giant component is quite large for all networks. Only a few inactive users are not connected to the real community. Furthermore, the number of connected components is correlated to the number of nodes of the given network.
For the average degree one can notice, that the libraries have an average degree around 2-3, but python as programing language has an average degree of more than 6!
The cluster coefficient does not seem to be related to network types.
All in all we can say, that libraries behave similary even if the programming language is different (eigen is written in c++). Built networks of programming language have higher degree.
4 Data Exploitation
It is possible to create different graph models based on the actual graph to understand how our real world model fits into these theoretical concepts. This would be a first helpful step to understand the underlying network model. We were planning to build an Erdős–Rényi and a Barabási-Albert graph based on the following assumptions
Step26: The plots yield that the underlying graph of the Numpy subcommunity is neither a scale-free network nor a random network. Its degree distribution follows a super linear network.
We can still calculate γ of the distribution to compare it to other networks
Scale Free Networks
In scale free networks there exists a linear dependency between the logarithm of the probability and the logarithm of the degree
Step27: For now we will keep these numbers in mind. We will come back to them later.
Network evolution
In most cases the growth of a network is clearly correlated with time. Most models simply regard the time until a new node joins the network as a time step. In real networks this time can widely differ and therefore we decided to plot over time to look at the changes in network in a certain timeframe rather than a certain magnitude of change. Nonetheless it is implied that more and more nodes join the network over the weeks.
Step28: <img src="files/imgs/WWWYearNodes.png"/>
Number of Nodes over Time
In order to gain a better understanding of how these networks evolve over time we observed various network attributes over time to network models such as Barabási-Albert Model.
These two curves depict the number of nodes present in the network plotted over time. It’s clear that the growth of the network is accelerating in both cases.
Step29: ⟨ k⟩ over time
The average degree of the nodes within the network closely follows a logistic curve converging at an average degree value of about 3.6.
Step30: k<sub>max</sub> over Time
The growing number of nodes accelerates the rise in maximum degree .The acelleration or stronger than linear growth of the maximum degree suggests that new nodes show a tendency to connect to already highly connected nodes. This superlinear preferential attachment incates that we will see high values of α in the following plots.
Step31: <img src="files/imgs/EvolClustering.png"/>
Step32: <img src="files/imgs/EvolHubs.png"/>
k<sub>max</sub> over ⟨ k⟩
As indicated by the superlinear growth of maximum degree over time, we also see superlinear behvaior in the plot of maximum degree over average degree with values of α > 2.5.
Step33: <img src="files/imgs/EvolDegree.png"/>
Degree Dynamic and Degree Distribution in different Stages
In this section we compare the degree dynamics of the network nodes between the network implied from the StackOverflow data and network closely following the bevaiour of the Barabási-Albert Model. For this we selected degree distribution with similiar node counts N. While the Barabási-Albert network shows linearity in the degree distributions plots and the rise in the degree of the node plot lines, the and distributon plots of the StackOverflow community network show clear superlinear behaviour. The degree dynamic plot is prefiltered and only shows the plot lines for nodes that eventually reach a degree higher than 100. In these highly connected nodes we see a very quick development towards them becoming hubs in the network instagating the a topology that leans towards hub-and-spoke. In contrast to the Barabási-Albert network with a scale-free topology and power law dominated distributions.
Comparison to other real world networks
<img src="files/imgs/OtherNetworks.png"/>
The Stack Overflow Numpy network behaves reagaring the ratio bewteen edges and nodes similary to the smaller networks as Power Grid and E. Coli Metabolism. It also behaves similiar to their gamma.
But if one has a look at the parameters for the Python network, they are much closer to communication networks as the Science collaboration and Citation Network. This is most probably due to our tag selection. It is likely, that the complete Stack Overflow Networks behaves very similar to the Communication Networks with a gamma of up to 5.
Classifier
In order to automatically detect super users we will train an unsupervised k-means clustering. This clustering is based on the attributes of nodes like
Step34: We can see that there are very active users with label 0, who have answered ten questions and asked fourteen questions in average.
The users from label 4 are also asking and answering several times, but they ask very good questions which score in average 85 votes.
Users with label 1 are very inactive. They were only active once. These users are colored in green, located very close to the coordinates origin.
The super users are further away from the origin.
Group work models
In the beginning of this report we were aiming for an intuitive understanding of how collaboration networks like Stackoverflow work. Consequently, we will try to compare our extracted information with two common models for group work theory.
First of all we have to define what exactly “work” is in Stackoverflow. As it is impossible to infer information about the actual project people are working on, we cannot measure the actual project work outcomes of individuals. However, we can define the knowledge transfer, i.e. answering of questions, as actual work.
The Belbin Team Inventory describes different roles of people that is emerging from the formation of the group and was presented in Management Teams | Python Code:
%matplotlib inline
import os
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
from scipy import sparse
# Own modules
import DataProcessing as proc
import DataCleaning as clean
import NetworkAnalysis as analysis
import Classification as classification
import NetworkEvolution as evol
Explanation: Stack Overflow Network Analysis
Claas Brüß, Simon Romanski and Maximilian Rünz
NOTE: Originally we claimed that the data visualization used in this project will be credited for the Data Visualization course. As we finally chose a very different approach in Data Visualization, this claim does not hold anymore. Apart from using the same data set, these are two clearly separated projects.
1 Introduction
There are comprehensive studies on how groups form and work together in a face-to-face team work scenario. However, even though the development of the internet has allowed open platforms for collaboration in a massive scale, less research has been conducted on patterns of collaboration in that domain.
This project analyzes the stackoverflow community representing it as a graph. We are applying network analysis methods to subcommunities for libraries like Numpy in order to understand the structure of the community.
In a next stage we are comparing the communities to theoretical network models as well as to real network models to obtain insights about work patterns in the community. Finally we will compare those insights with proven psychological models of group work theory to gain intuition about knowledge transfer and work in these communities. We will show that the shape of group work and knowledge transfer changes by means of online communities.
End of explanation
# Paths that will be used
posts_path = os.path.join("Posts.xml")
questions_path = os.path.join("Questions.json")
answers_path = os.path.join("Answers.json")
edge_list_path = os.path.join("Edges.json")
edge_list_tag_path = os.path.join("Tags")
Explanation: 2 Data processing
The data behind this project was provided by Stack Overflow itself. They release frequent data dumps on archive.org. We have analyzed all posts from Stack Overflow with the stackoverflow.com-Posts.7z file. The compressed file contains a list of all questions and answers formatted as XML.
Note: As the analyzed xml file is more than 50 GB big, the data processing takes several hours. The data processing part can be skipped, the uploaded zip contains the constructed edge lists.
End of explanation
%%time
# Create JSON for questions and answers
proc.split_qa_json_all(questions_path, answers_path, posts_path)
Explanation: 2.1 Extract meaningful features
Before we started our analysis we extracted the features of the posts which are interesting for us. The provided features can be looked up in this text file. We selected the following features for our analysis:
* PostTypeId (Question/Answer)
* Id
* ParentId
* AcceptedAnswerId
* CreationDate
* Score
* OwnerUserId
* Tags
Based on the PostType, the posts are then stored in the questions or answers JSON file.
End of explanation
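proc.split_qa_json_all above is part of the authors' DataProcessing module; as a rough, hypothetical sketch of the per-post extraction it performs (the Posts.xml dump stores one post per <row> element, carrying the attributes listed above):
import xml.etree.ElementTree as ET

SELECTED_FIELDS = ["PostTypeId", "Id", "ParentId", "AcceptedAnswerId",
                   "CreationDate", "Score", "OwnerUserId", "Tags"]

def extract_post(row_line):
    # Parse a single '<row ... />' line of the dump and keep only the selected attributes.
    elem = ET.fromstring(row_line)
    return {field: elem.attrib.get(field) for field in SELECTED_FIELDS}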
%%time
# create edge list
proc.create_edge_list_all(questions_path, answers_path, edge_list_path)
Explanation: 2.2 Create edge list
Having selected the meaningful features, we started to create the graph. Each answer is matched with its corresponding question. The result is two nodes and one edge: the nodes represent Stack Overflow users, and the edge connects the inquirer to the respondent.
End of explanation
%%time
# split in file for each tag
proc.split_edge_list_tags(edge_list_tag_path, edge_list_path)
Explanation: 2.3 Split networks by tags
After we tried to analyze the whole Stack Overflow network and came to the conclusion that it is simply too big, we decided to split the network by tags. Therefore we created one edge list per tag (e.g. numpy).
End of explanation
%%time
# order by time
proc.order_edge_lists_tags_time(edge_list_tag_path)
Explanation: 2.4 Order edges by time
In order to simplify the analysis of the network evolution, we ordered the edges based on the creation date of the answer.
End of explanation
%%time
proc.edge_lists_to_txt(edge_list_tag_path)
Explanation: 2.5 Format edge list to txt files
Last but not least, the json files are converted into txt edge list, so that they can be read easily by networkx.
End of explanation
network_path = os.path.join("Tags", "numpy_complete_ordered_list.txt")
network = nx.read_edgelist(network_path,nodetype=int, data=(('time',int),('votes_q', int),('votes_a', int),('accepted', bool)))
network_directed = nx.read_edgelist(network_path, create_using=nx.DiGraph(), nodetype=int, data=(('time',int),('votes_q', int),('votes_a', int),('accepted', bool)))
Explanation: 3 Data Cleaning
During the network analysis we noticed that it makes sense to clean the created network. Therefore, we implemented several filters:
* Filter by attribute
* Filter by degree
* Filter by component
* Remove self loops
The most frequently used filter is the filter for attributes. Using this filter we remove questions and answers with negative votes, as they are not helpful for the community.
Furthermore this filter will be used to analyse the evolution of the network.
End of explanation
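The filters themselves live in the authors' DataCleaning module; a minimal sketch of the attribute filter described above (dropping answers with negative votes), assuming the votes_q and votes_a edge attributes used when the edge list is read:
def filter_nonnegative_votes(g):
    # Illustrative only -- the notebook itself uses clean.filter_network_attributes.
    kept = [(u, v) for u, v, d in g.edges(data=True)
            if d.get('votes_q', 0) >= 0 and d.get('votes_a', 0) >= 0]
    return g.edge_subgraph(kept).copy()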
# in epoche
min_time = -1
max_time = -1
min_q_votes = 0
max_q_votes = -1
min_a_votes = 0
max_a_votes = -1
accepted = -1
min_degree = -1
max_degree = -1
only_gc = False
no_self_loops = True
Explanation: The whole stackoverflow community has more than 8 million users, 15 million questions and 23 million answers on different aspects of different libraries, programming languages and operating systems.
Hence, we decided to focus on specific widely-used libraries for our investigation. In our case we perform data analysis for a commonly used Python library, Numpy, and compare it to another Python library, Matplotlib, as well as to a heavily used library for C++ called Eigen. Ultimately we will compare the entire Python community with the Numpy community.
For numpy we then create two networks: one directed and one undirected for different analysis purposes.
End of explanation
network_cleaned = clean.filter_network_attributes(network, min_time, max_time,\
min_q_votes, max_q_votes, min_a_votes, max_a_votes, accepted)
network_direted_cleaned = clean.filter_network_attributes(network_directed, min_time, max_time,\
min_q_votes, max_q_votes, min_a_votes, max_a_votes, accepted, directed=True)
Explanation: Filter by attributes
End of explanation
network_cleaned = clean.filter_network_node_degree(network_cleaned, min_degree, max_degree)
network_direted_cleaned = clean.filter_network_node_degree(network_direted_cleaned, min_degree, max_degree)
Explanation: Filter by node degree
End of explanation
if only_gc:
network_cleaned = clean.filter_network_gc(network_cleaned)
network_direted_cleaned = clean.filter_network_gc(network_direted_cleaned)
Explanation: Only use giant component
End of explanation
if no_self_loops:
network_cleaned = clean.filter_selfloops(network_cleaned)
network_direted_cleaned = clean.filter_selfloops(network_direted_cleaned)
Explanation: Remove self loops
End of explanation
analysis.get_number_nodes(network_cleaned)
Explanation: 3 Data Exploration: Network properties
We are starting with some basic properties of the subcommunity. Each node in our graph represents one user.
End of explanation
analysis.get_number_edges(network_cleaned)
Explanation: We can see that we have roughly 20,000 users.
End of explanation
analysis.get_number_connected_components(network_cleaned)
Explanation: And we have roughly 37,231 edges corresponding to one answer each.
End of explanation
analysis.get_size_giant_component(network_cleaned)
analysis.plot_ranking_component_size(network_cleaned)
analysis.get_number_self_loops(network_cleaned)
Explanation: The number of connected components describes the amount of completely separated groups in the community that do not interfere with each other. A deeper analysis shows that there is in fact one big group and many very small groups.
End of explanation
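As a quick cross-check of the "one big group, many tiny groups" observation, independent of the authors' plotting helper (assumes the undirected network_cleaned defined above):
component_sizes = sorted((len(c) for c in nx.connected_components(network_cleaned)), reverse=True)
print("five largest components:", component_sizes[:5])
print("share of nodes in the giant component:", component_sizes[0] / network_cleaned.number_of_nodes())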
analysis.get_avg_degree(network_cleaned)
Explanation: The self-loops represent answers that users have given to their own questions. As this seems to be counterintuitive, we have removed those in our data cleaning.
End of explanation
analysis.get_cluster_coefficient(network_cleaned)
Explanation: The average degree for the numpy network is 3.5, we will evaluate this in the exploitation part of this report.
End of explanation
analysis.get_max_degree(network_cleaned)
Explanation: The cluster coefficient is relatively low due to the model of our network. More details on that are also following in the data exploitation chapter.
End of explanation
analysis.plot_degree_hist(network_cleaned)
analysis.plot_degree_scatter(network_cleaned)
Explanation: The maximal degree of a node is tremendously higher than the average degree. This is a little suspicious. Therefore, we will have a look at the distribution of the node degrees.
Degree distribution
To get insights about user behaviour, i.e., how many questions users ask and answer, we plot the degree distribution in the following:
End of explanation
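plot_degree_hist and plot_degree_scatter are part of the authors' NetworkAnalysis module; a minimal stand-alone version of the log-log degree scatter could look like this:
degrees = [d for _, d in network_cleaned.degree()]
values, counts = np.unique(degrees, return_counts=True)
plt.scatter(values, counts / counts.sum())   # empirical p(k) against k
plt.xscale('log')
plt.yscale('log')
plt.xlabel('degree k')
plt.ylabel('p(k)')
plt.show()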
analysis.plot_in_degree_hist(network_direted_cleaned)
analysis.plot_in_degree_scatter(network_direted_cleaned)
analysis.plot_out_degree_hist(network_direted_cleaned)
analysis.plot_out_degree_scatter(network_direted_cleaned)
Explanation: These plots show a distribution that demonstrates a high number of highly connected nodes. This superlinear distribution plot implies a hub-and-spoke topology of this network. Note the double-logarithmic scaling of the scatter plot.
End of explanation
analysis.analyze_attribute_q_votes(network_cleaned)
analysis.analyze_attribute_a_votes(network_cleaned)
Explanation: As before, we can observe that the incoming and outgoing degree distributions both demonstrate behavior associated with networks with a hub-and-spoke topology. The outgoing distribution exhibits this even more strongly than the incoming distribution, indicating that certain members of the community carry a disproportionate workload in answering questions posed on the platform.
Attribute Distribution
End of explanation
analysis.analyze_attribute_time(network_cleaned)
Explanation: Another interesting property is the votes per edge as their distribution is another valuable metric to understand how user activity is concentrated on certain areas.
End of explanation
for file in ["python_complete_ordered_list.txt",\
"matplotlib_complete_ordered_list.txt",\
"eigen_complete_ordered_list.txt"]:
print(file)
analysis.analyze_basic_file(file)
print()
Explanation: In general it is interesting to see the evolution of a network over time. We can see that in this case the number of edges is continuously increasing.
Comparison
It is instructive to compare the characteristics of the different subcommunities:
End of explanation
analysis.plot_degree_scatter(network_cleaned)
Explanation: At first one can notice that the size of the giant component is close to the number of nodes for all networks. Only a few inactive users are not connected to the real community. Furthermore, the number of connected components is correlated with the number of nodes of the given network.
For the average degree one can notice that the libraries have an average degree of around 2-3, but Python as a programming language has an average degree of more than 6!
The cluster coefficient does not seem to be related to the network type.
All in all we can say that the libraries behave similarly even if the programming language is different (Eigen is written in C++). Networks built for programming languages have a higher degree.
4 Data Exploitation
It is possible to create different graph models based on the actual graph to understand how our real world model fits into these theoretical concepts. This would be a first helpful step to understand the underlying network model. We were planning to build an Erdős–Rényi and a Barabási-Albert graph based on the following assumptions:
If $N$ denotes the number of nodes and $L$ the number of edges,
we can calculate the Erdős–Rényi graph parameter $p$ as follows:
$p = \frac{2L}{N(N-1)}$
Correspondingly, we can calculate the parameter $m$ for the Barabási-Albert graph:
$m = \frac{L}{N} + 1$
We noticed that building the graphs for all the subcommunities takes a lot of time as our graphs are relatively big. Hence, we are aiming for a more efficient approach to understand the network model.
The key difference between the two models is that the Barabási-Albert model describes a scale-free network and the Erdős–Rényi model describes a random network. Hence, the degree distribution is more likely to show how similar each model is to the original graph.
This is the abstraction that we would like to draw after the comparison whatsoever. As a result we can also compare the degree distribution with the two distributions representing the random network model and the scale free model. That is the Poisson distribution and the power-law distribution respectively.
<img src="files/imgs/RandomNetworkDegreeDistribution.png"/>
<img src="files/imgs/ScaleFreeNetworkDegreeDistribution.png"/>
They can be a bit hard to distinguish when we are handling real data in a linearly scaled graph. However, if we take the logarithm of both axes, they are clearly distinguishable.
End of explanation
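As a sketch of how the two reference models could be generated with networkx from the parameters above (N and L taken from the cleaned network; as noted, this becomes slow for the larger subcommunities):
N = network_cleaned.number_of_nodes()
L = network_cleaned.number_of_edges()
p = 2 * L / (N * (N - 1))        # Erdős–Rényi edge probability
m = int(round(L / N)) + 1        # Barabási-Albert edges attached per new node
er_graph = nx.gnp_random_graph(N, p, seed=42)
ba_graph = nx.barabasi_albert_graph(N, m, seed=42)
print(er_graph.number_of_edges(), ba_graph.number_of_edges())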
analysis.get_gamma_power_law(network_cleaned)
analysis.get_in_gamma_power_law(network_direted_cleaned)
analysis.get_out_gamma_power_law(network_direted_cleaned)
Explanation: The plots yield that the underlying graph of the Numpy subcommunity is neither a scale-free network nor a random network. Its degree distribution follows a super linear network.
We can still calculate γ of the distribution to compare it to other networks
Scale Free Networks
In scale free networks there exists a linear dependency between the logarithm of the probability and the logarithm of the degree:
$\log(p(k)) \sim -\gamma \log(k)$
Gamma can then be calculated by fitting a linear regression between $log(p(k))$ and $log(k)$. The slope of the regression line is the gamma of the scale free network.
End of explanation
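get_gamma_power_law and its in/out variants are implemented in the authors' module; one plausible way to estimate gamma exactly as described above is a least-squares fit in log-log space:
degrees = np.array([d for _, d in network_cleaned.degree()])
k, counts = np.unique(degrees[degrees > 0], return_counts=True)
pk = counts / counts.sum()                      # empirical degree distribution
slope, _ = np.polyfit(np.log(k), np.log(pk), 1) # linear fit in log-log space
print("estimated gamma:", -slope)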
networks = evol.split_network(network)
evol.plot_t_n(networks)
Explanation: For now we will keep these numbers in mind. We will come back to them later.
Network evolution
In most cases the growth of a network is clearly correlated with time. Most models simply regard the time until a new node joins the network as a time step. In real networks this time can widely differ and therefore we decided to plot over time to look at the changes in network in a certain timeframe rather than a certain magnitude of change. Nonetheless it is implied that more and more nodes join the network over the weeks.
End of explanation
evol.plot_t_k_avg(networks)
Explanation: <img src="files/imgs/WWWYearNodes.png"/>
Number of Nodes over Time
In order to gain a better understanding of how these networks evolve over time, we observed various network attributes over time and compared them to network models such as the Barabási-Albert model.
These two curves depict the number of nodes present in the network plotted over time. It’s clear that the growth of the network is accelerating in both cases.
End of explanation
evol.plot_t_k_max(networks)
Explanation: ⟨ k⟩ over time
The average degree of the nodes within the network closely follows a logistic curve converging at an average degree value of about 3.6.
End of explanation
evol.plot_n_c(networks)
Explanation: k<sub>max</sub> over Time
The growing number of nodes accelerates the rise in maximum degree. This acceleration, or stronger-than-linear growth, of the maximum degree suggests that new nodes show a tendency to connect to already highly connected nodes. This superlinear preferential attachment indicates that we will see high values of α in the following plots.
End of explanation
evol.plot_k_avg_k_max(networks)
Explanation: <img src="files/imgs/EvolClustering.png"/>
End of explanation
evol.DegreeDynamics(network_cleaned, 20)
# n = 100
analysis.plot_degree_scatter(networks[1246320000000])
# n = 1000
analysis.plot_degree_scatter(networks[1302566400000])
# n = 10000
analysis.plot_degree_scatter(networks[1430179200000])
# all
analysis.plot_degree_scatter(network_cleaned)
Explanation: <img src="files/imgs/EvolHubs.png"/>
k<sub>max</sub> over ⟨ k⟩
As indicated by the superlinear growth of the maximum degree over time, we also see superlinear behavior in the plot of maximum degree over average degree, with values of α > 2.5.
End of explanation
classification.classify_users(network_directed)
Explanation: <img src="files/imgs/EvolDegree.png"/>
Degree Dynamic and Degree Distribution in different Stages
In this section we compare the degree dynamics of the network nodes between the network implied by the StackOverflow data and a network closely following the behaviour of the Barabási-Albert model. For this we selected degree distributions with similar node counts N. While the Barabási-Albert network shows linearity in the degree distribution plots and in the rise of the degree plot lines, the distribution plots of the StackOverflow community network show clear superlinear behaviour. The degree dynamic plot is prefiltered and only shows the plot lines for nodes that eventually reach a degree higher than 100. In these highly connected nodes we see a very quick development towards them becoming hubs in the network, instigating a topology that leans towards hub-and-spoke, in contrast to the Barabási-Albert network with a scale-free topology and power-law dominated distributions.
Comparison to other real world networks
<img src="files/imgs/OtherNetworks.png"/>
The Stack Overflow Numpy network behaves, regarding the ratio between edges and nodes, similarly to smaller networks such as the Power Grid and E. Coli Metabolism networks. Its gamma is also similar to theirs.
But if one looks at the parameters for the Python network, they are much closer to communication networks such as the Science Collaboration and Citation networks. This is most probably due to our tag selection. It is likely that the complete Stack Overflow network behaves very similarly to the communication networks, with a gamma of up to 5.
Classifier
In order to automatically detect super users we will train an unsupervised k-means clustering. This clustering is based on the attributes of nodes like:
* in degree
* out degree
* average question votes
* average answer votes
K-means follows an iterative approach:
In the first step, the data points are assigned the label of the nearest cluster centre:
$S_i^{(t)} = \big\{ x_p : \big\| x_p - \mu^{(t)}_i \big\|^2 \le \big\| x_p - \mu^{(t)}_j \big\|^2 \ \forall j, 1 \le j \le k \big\}$
After that, a new centre for each cluster is computed:
$\mu^{(t+1)}_i = \frac{1}{|S^{(t)}_i|} \sum_{x_j \in S^{(t)}_i} x_j$
This procedure is repeated until the locations of the clusters no longer change.
End of explanation
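classify_users is part of the authors' Classification module; a minimal version of the clustering it describes could be put together with scikit-learn roughly as follows (only the two degree features are used here; the vote averages would be added the same way):
from sklearn.cluster import KMeans

nodes = list(network_direted_cleaned.nodes())
features = np.array([[network_direted_cleaned.in_degree(n),   # answers given (edges point inquirer -> respondent)
                      network_direted_cleaned.out_degree(n)]  # questions asked
                     for n in nodes])
labels = KMeans(n_clusters=5, random_state=0).fit_predict(features)  # 5 clusters to match labels 0-4 above
print(np.bincount(labels))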
print("Gamma total: {}".format(analysis.get_gamma_power_law(network_cleaned)))
print("Gamma in: {}".format(analysis.get_in_gamma_power_law(network_direted_cleaned)))
print("Gamma out: {}".format(analysis.get_out_gamma_power_law(network_direted_cleaned)))
Explanation: We can see that there are very active users with label 0, who have answered ten questions and asked fourteen questions on average.
The users from label 4 are also asking and answering several times, but they ask very good questions which score 85 votes on average.
Users with label 1 are very inactive. They were only active once. These users are colored in green, located very close to the coordinates origin.
The super users are further away from the origin.
Group work models
In the beginning of this report we were aiming for an intuitive understanding of how collaboration networks like Stackoverflow work. Consequently, we will try to compare our extracted information with two common models for group work theory.
First of all we have to define what exactly “work” is in Stackoverflow. As it is impossible to infer information about the actual project people are working on, we cannot measure the actual project work outcomes of individuals. However, we can define the knowledge transfer, i.e. answering of questions, as actual work.
The Belbin Team Inventory describes different roles of people that emerge from the formation of the group and was presented in Management Teams: Why They Succeed or Fail (1981). The extended Belbin Team Inventory consists of the following types:
Plants are creative generators of ideas.
Resource Investigators provide enthusiasm at the start of a project and seize contacts and opportunities.
Coordinators have a talent for seeing the big picture and are therefore likely to become the leader of the team.
Shapers are driven by a lot of energy and the urge to perform. Therefore they usually make sure that all possibilities are considered and shake things up if necessary.
Monitor Evaluators are unemotional observers of the project and team.
Teamworkers ensure that the team is running effectively and without friction.
Implementers take suggestions and ideas and turn them into action.
Completers are perfectionists and double-check the final outcome of the work.
Specialists are experts in their own particular field and typically transfer this knowledge to others. Usually they stick to their domain of expertise.
While it is hard to identify some of the types, e.g. the identification of a Completer would require analysis of the whole answer text, it is possible to find some similarities between our users and these roles.
We calculated the different γ for incoming and outgoing degrees, i.e. how many hubs do we have for users based on asking questions and how many hubs do we have for answering questions.
End of explanation |
2,673 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nonlinear Classification and Regression with Decision Trees
Decision trees
Decision trees are commonly learned by recursively splitting the set of training
instances into subsets based on the instances' values for the explanatory variables.
In classification tasks, the leaf nodes
of the decision tree represent classes. In regression tasks, the values of the response
variable for the instances contained in a leaf node may be averaged to produce the
estimate for the response variable. After the decision tree has been constructed,
making a prediction for a test instance requires only following the edges until a
leaf node is reached.
Let's create a decision tree using an algorithm called Iterative Dichotomiser 3 (ID3).
Invented by Ross Quinlan, ID3 was one of the first algorithms used to train decision
trees.
But how to choose the first variable on which we have to divide the data so that we can have smaller tree.
Measured in bits, entropy quantifies the amount of uncertainty in a variable. Entropy
is given by the following equation, where n is the number of outcomes and ( ) i P x is
the probability of the outcome i. Common values for b are 2, e, and 10. Because the
log of a number less than one will be negative, the entire sum is negated to return a
positive value.
entropy $$ H(X) = -\sum_{i=1}^{n} P(x_i)log_b P(x_i) $$
Information gain
Selecting the test that produces the subsets with the lowest average entropy can produce a suboptimal tree.
we will measure the reduction in entropy using a metric called information gain.
Calculated with the following equation, information gain is the difference between the entropy of the parent
node, H (T ), and the weighted average of the children nodes' entropies.
For creating Decision Tree, Algo ID3 is the one mostly used. C4.5 is a modified version of ID3
that can be used with continuous explanatory variables and can accommodate
missing values for features. C4.5 also can prune trees.
Pruning reduces the size of a tree by replacing branches that classify few instances with leaf nodes. Used by
scikit-learn's implementation of decision trees, CART is another learning algorithm
that supports pruning.
Gini impurity
Gini impurity measures the proportions of classes in a set. Gini impurity
is given by the following equation, where j is the number of classes, t is the subset
of instances for the node, and P(i|t) is the probability of selecting an element of
class i from the node's subset
Step1: Tree ensembles (RandomForestClassifier)
Ensemble learning methods combine a set of models to produce an estimator that
has better predictive performance than its individual components. A random forest
is a collection of decision trees that have been trained on randomly selected subsets
of the training instances and explanatory variables. Random forests usually make
predictions by returning the mode or mean of the predictions of their constituent
trees.
Random forests are less prone to overfitting than decision trees because no single
tree can learn from all of the instances and explanatory variables; no single tree can
memorize all of the noise in the representation | Python Code:
# import
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
df = pd.read_csv("data/ad.data", header=None)
explanatory_variable_columns = set(df.columns.values)
response_variable_column = df[len(df.columns.values)-1]
# The last column describes the targets
explanatory_variable_columns.remove(len(df.columns.values)-1)
y = [1 if e == 'ad.' else 0 for e in response_variable_column]
X = df[list(explanatory_variable_columns)]
#X.replace(to_replace=' *\?', value=-1, regex=True, inplace=True)
X.replace(['?'], [-1])
X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline = Pipeline([
('clf', DecisionTreeClassifier(criterion='entropy'))
])
parameters = {
'clf__max_depth': (150, 155, 160),
'clf__min_samples_split': (1, 2, 3),
'clf__min_samples_leaf': (1, 2, 3)
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='f1')
#grid_search.fit(X_train, y_train)
print( 'Best score: %0.3f' % grid_search.best_score_)
print( 'Best parameters set:')
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print( '\t%s: %r' % (param_name, best_parameters[param_name]))
predictions = grid_search.predict(X_test)
print ('Accuracy:', accuracy_score(y_test, predictions))
print ('Confusion Matrix:', confusion_matrix(y_test, predictions))
print ('Classification Report:', classification_report(y_test, predictions))
Explanation: Nonlinear Classification and Regression with Decision Trees
Decision trees
Decision trees are commonly learned by recursively splitting the set of training
instances into subsets based on the instances' values for the explanatory variables.
In classification tasks, the leaf nodes
of the decision tree represent classes. In regression tasks, the values of the response
variable for the instances contained in a leaf node may be averaged to produce the
estimate for the response variable. After the decision tree has been constructed,
making a prediction for a test instance requires only following the edges until a
leaf node is reached.
Let's create a decision tree using an algorithm called Iterative Dichotomiser 3 (ID3).
Invented by Ross Quinlan, ID3 was one of the first algorithms used to train decision
trees.
But how to choose the first variable on which we have to divide the data so that we can have smaller tree.
Measured in bits, entropy quantifies the amount of uncertainty in a variable. Entropy
is given by the following equation, where n is the number of outcomes and ( ) i P x is
the probability of the outcome i. Common values for b are 2, e, and 10. Because the
log of a number less than one will be negative, the entire sum is negated to return a
positive value.
entropy $$ H(X) = -\sum_{i=1}^{n} P(x_i)log_b P(x_i) $$
Information gain
Selecting the test that produces the subsets with the lowest average entropy can produce a suboptimal tree.
we will measure the reduction in entropy using a metric called information gain.
Calculated with the following equation, information gain is the difference between the entropy of the parent
node, H (T ), and the weighted average of the children nodes' entropies.
For creating Decision Tree, Algo ID3 is the one mostly used. C4.5 is a modified version of ID3
that can be used with continuous explanatory variables and can accommodate
missing values for features. C4.5 also can prune trees.
Pruning reduces the size of a tree by replacing branches that classify few instances with leaf nodes. Used by
scikit-learn's implementation of decision trees, CART is another learning algorithm
that supports pruning.
Gini impurity
Gini impurity measures the proportions of classes in a set. Gini impurity
is given by the following equation, where j is the number of classes, t is the subset
of instances for the node, and P(i|t) is the probability of selecting an element of
class i from the node's subset:
$$ Gini (t) = 1 - \sum_{i=1}^{j} P(i|t)^2 $$
Intuitively, Gini impurity is zero when all of the elements of the set are the same
class, as the probability of selecting an element of that class is equal to one. Like
entropy, Gini impurity is greatest when each class has an equal probability of being
selected. The maximum value of Gini impurity depends on the number of possible
classes, and it is given by the following equation:
$$ Gini_{max} = 1 - \frac{1}{n} $$
End of explanation
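A small numeric illustration of the two impurity measures defined above (added for clarity; not part of the original text): for a node that contains 8 instances of one class and 2 of the other,
import numpy as np
counts = np.array([8, 2])
p = counts / counts.sum()
entropy = -np.sum(p * np.log2(p))   # about 0.722 bits
gini = 1 - np.sum(p ** 2)           # about 0.32
print(entropy, gini)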
pipeline = Pipeline([
('clf', RandomForestClassifier(criterion='entropy'))
])
parameters = {
'clf__n_estimators': (5, 10, 20, 50),
'clf__max_depth': (50, 150, 250),
'clf__min_samples_split': (2, 3, 4),  # min_samples_split must be >= 2 in scikit-learn
'clf__min_samples_leaf': (1, 2, 3)
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='f1')
grid_search.fit(X_train, y_train)
Explanation: Tree ensembles (RandomForestClassifier)
Ensemble learning methods combine a set of models to produce an estimator that
has better predictive performance than its individual components. A random forest
is a collection of decision trees that have been trained on randomly selected subsets
of the training instances and explanatory variables. Random forests usually make
predictions by returning the mode or mean of the predictions of their constituent
trees.
Random forests are less prone to overfitting than decision trees because no single
tree can learn from all of the instances and explanatory variables; no single tree can
memorize all of the noise in the representation
End of explanation |
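Once the random-forest grid search has been fitted, evaluating it mirrors the decision-tree block above; a short sketch:
from sklearn.metrics import accuracy_score, classification_report
predictions = grid_search.predict(X_test)
print('Accuracy:', accuracy_score(y_test, predictions))
print('Classification Report:', classification_report(y_test, predictions))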
2,674 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ejercicio
Simplifica los cocientes entre factoriales
Step1: Ejercicio
Calcula las siguientes operaciones | Python Code:
enunciado = list([r'\frac{7!}{6!}',r'\frac{{8!}}{{9!}}',r'\frac{{9!}}{{5!\cdot 4!}}',r'\frac{{m!}}{{(m - 1)!}}', r'\frac{{( {m + 1} )!}}{{( {m - 1} )!}}'])
enunciado
enunciado = list([r'\frac{7!}{6!}',r'\frac{{8!}}{{9!}}',r'\frac{{9!}}{{5!\cdot 4!}}',r'\frac{{m!}}{{(m - 1)!}}', r'\frac{{( {m + 1} )!}}{{( {m - 1} )!}}'])
enunciado_sympy=[]
for i in enunciado :
enunciado_sympy.append(parse_latex(i));
enunciado_sympy
for i in range(len(enunciado_sympy)) :
display(md("$"+enunciado[i]+" \\rightarrow "+latex(simplify(enunciado_sympy[i]))+"$"))
Explanation: Exercise
Simplify the quotients of factorials:
- $\frac{7!}{6!}$
- $\frac{{8!}}{{9!}}$
- $\frac{{9!}}{{5!.4!}}$
- $\frac{{m!}}{{(m - 1)!}}$
- $\frac{{\left( {m + 1} \right)!}}{{\left( {m - 1} \right)!}}$
End of explanation
from sympy.functions.combinatorial.numbers import nC, nP, nT
nC(5,3)
from sympy import *
expr = sympify("nC(5,3)")
display(expr.expand())
enunciado = [[252,250], [25,3], [25,4]]
for i in range(len(enunciado)):
display(nC(enunciado[i][0],enunciado[i][1]))
nC(enunciado[0][0],enunciado[0][1])
factorial(252)/(factorial(250)*factorial(2))
Explanation: Exercise
Calculate the following operations:
- $\binom{252}{250}$
- $\binom{25}{3} + \binom{25}{4} = \binom{26}{4}$
- $\binom{9}{6} + \binom{9}{7} + \binom{10}{2}=\binom{10}{7}+\binom{10}{8}=\binom{11}{8}$
- $\binom{4}{2} + \binom{4}{3} + \binom{5}{4}+\binom{6}{5} + \binom{7}{6} + \binom{8}{7}=\binom{9}{7}$
- $\binom{4}{0} + \binom{4}{1} + \binom{4}{2}+\binom{4}{3} = 2^4-1$
$\binom{25}{3} + \binom{25}{4}$, iii) $\binom{9}{6} + \binom{9}{7} + \binom{10}{2}$,
iv) $\binom{4}{2} + \binom{4}{3} + \binom{5}{4} + \binom{6}{5} + \binom{7}{6} + \binom{8}{7}$, v) $\binom{4}{0} + \binom{4}{1} + \binom{4}{2} + \binom{4}{3}$
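As a quick numerical check of the identities listed above, both sides can be evaluated with sympy's binomial function (this snippet is an added illustration, not part of the original notebook):
from sympy import binomial

# Pascal's rule: C(n, k) + C(n, k+1) = C(n+1, k+1)
print(binomial(25, 3) + binomial(25, 4) == binomial(26, 4))                    # True
print(binomial(9, 6) + binomial(9, 7) + binomial(10, 2) == binomial(11, 8))    # True
# telescoping sum of binomials
print(binomial(4, 2) + binomial(4, 3) + binomial(5, 4) + binomial(6, 5)
      + binomial(7, 6) + binomial(8, 7) == binomial(9, 7))                     # True
# a row of Pascal's triangle minus its last term
print(sum(binomial(4, k) for k in range(4)) == 2**4 - 1)                       # True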
End of explanation |
2,675 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Solvers
Step2: Underdetermined case (expect gradient descent to give sublinear convergence)
Step3: Question for thought | Python Code:
import numpy as np
from numpy.linalg import norm
from matplotlib import pyplot as plt
rng = np.random.default_rng()
Explanation: <a href="https://colab.research.google.com/github/stephenbeckr/convex-optimization-class/blob/master/Demos/ConvergenceRateDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Convergence rate demo
Minimize $f(x)$ without any constraints. For APPM 5360, Spr '21, Becker
We'll compare three methods
1. Gradient descent, assuming $\nabla f$ is $L$-Lipschitz continuous (so use step $t=1/L$)
2. Nesterov accelerated gradient descent (same assumptions, same stepsize)
3. sub-gradient descent, assuming $f$ is $\rho$-Lipschitz, that $\|x_0-x^\star\|\le B$. If we run $k$ iterations, use stepsize $t=\frac{B}{\rho\sqrt{k}}$
We'll run this to solve a quadratic problem
$$\min_x f(x) = \frac12\|Ax-b\|_2^2$$
which satisfies $\nabla f(x) = A^T(Ax-b)$ is $L$-Lipschitz continuous with $L=\|A\|^2$,
as well as a monotonic transformation of that problem
$$\min_x g(x) = \|Ax-b\|_2$$
which satisfies $g$ is $\rho$-Lipschitz with $\rho=\|A\|$, and
$\partial g(x) = \frac{A^T(Ax-b)}{\|Ax-b\|_2}$ if $Ax-b\neq 0$.
Let $A$ be $m\times n$ in size. We'll consider two cases:
1. Underdetermined, $m<n$, then $f$ is not strongly convex
2. Overdetermined, $m>n$, then if $A$ is a Gaussian, with probability 1 $f$ is strongly convex.
We expect different types of convergence:
1. sublinear, where error $\propto 1/k^\alpha$ for $\alpha>0$
2. linear, where error $\propto \rho^k$ for some $\rho = 1-1/\kappa$ or $\rho = 1- 1/\sqrt{\kappa}$.
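As an added illustration (not part of the original demo), the empirical rate can be estimated directly from a history of objective values fHist: a log-log fit gives the sublinear exponent $\alpha$, and the geometric mean of successive error ratios estimates the linear factor $\rho$:
import numpy as np

def estimate_rates(fHist, fStar=0.0, burn_in=10):
    err = np.asarray(fHist, dtype=float) - fStar
    k = np.arange(1, len(err) + 1)
    mask = (err > 0) & (k > burn_in)
    # sublinear model err ~ C/k**alpha : slope of log(err) vs log(k) is -alpha
    alpha = -np.polyfit(np.log(k[mask]), np.log(err[mask]), 1)[0]
    # linear model err ~ C*rho**k : geometric mean of successive error ratios
    rho = np.exp(np.mean(np.diff(np.log(err[mask]))))
    return alpha, rho

# e.g. estimate_rates(fHist_gd) after one of the runs below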
End of explanation
def gradientDescent(f, grad, stepsize, x0, maxiter=1e3):
    # plain gradient descent with a constant stepsize; records f(x_k) at every iteration
    x = x0.copy()
    fHist = []
    for k in range(int(maxiter)):
        x -= stepsize*grad(x)
        fHist.append( f(x) )
    return x, fHist
def NesterovGradientDescent(f, grad, stepsize, x0, maxiter=1e3, restart=np.inf):
    # Nesterov's accelerated gradient descent; the momentum counter kk is reset
    # every `restart` iterations (restarts help in the strongly convex case).
    # np.inf (lowercase) is the spelling that remains valid in NumPy 2.x.
    x = x0.copy()
    y = x.copy()
    fHist = []
    kk = 0
    for k in range(int(maxiter)):
        xOld = x.copy()
        x = y - stepsize*grad(y)
        kk = kk + 1
        if kk > restart:
            kk = 0
        y = x + kk/(kk+3)*(x-xOld)
        fHist.append( f(x) )
    return x, fHist
Explanation: Solvers
End of explanation
rng = np.random.default_rng(1)
m = 49
n = 50
A = rng.normal( size=(m,n) )
xStar = np.ones( (n,1) )
#b = rng.normal( size=(m,1) )
b = A@xStar
L = norm(A,ord=2)**2
# For gradient descent on 1/2||Ax-b||^2
f = lambda x : norm(A@x-b)**2/2
grad= lambda x : A.T@( A@x-b )
fStar = 0
x0 = np.zeros((n,1))
# And if we do subgradient descent on ||Ax-b||
# (Note: if we measure f2(x) convergence, since it's not squared
# we'd of course at least expect sqrt() slower... )
f2 = lambda x : norm(A@x-b)
def subgrad(x):
    r = A@x-b
    return A.T@(r/norm(r))
rho = norm(A,ord=2)
B = norm(xStar-x0)
maxiter = 1e4
x_gd, fHist_gd = gradientDescent(f,grad,1/L,x0,maxiter=maxiter)
x_Nest, fHist_Nest = NesterovGradientDescent(f,grad,1/L,x0,maxiter=maxiter)
# subgradient descent:
step = B/rho/np.sqrt(maxiter)
x_sgd, fHist_sgd = gradientDescent(f2,subgrad,step,x0,maxiter=maxiter)
plt.figure(figsize=(12,7))
plt.loglog( fHist_gd, label='Gradient Descent' )
plt.loglog( fHist_Nest, label='Nesterov Acceleration' )
plt.loglog( fHist_sgd, label='Subgradient Descent' )
k = np.arange(1,maxiter)
plt.loglog(k,90/k,'--',label='$1/k$')
plt.loglog(k,90/k**2,'--',label='$1/k^2$')
plt.legend()
plt.grid()
plt.show()
Explanation: Underdetermined case (expect gradient descent to give sublinear convergence)
End of explanation
rng = np.random.default_rng(1)
m = 55
n = 50
A = rng.normal( size=(m,n) )
xStar = np.ones( (n,1) )
b = A@xStar
L = norm(A,ord=2)**2
# For gradient descent on 1/2||Ax-b||^2
f = lambda x : norm(A@x-b)**2/2
grad= lambda x : A.T@( A@x-b )
fStar = 0
x0 = np.zeros((n,1))
evals = np.linalg.eigvals(A.T@A)
L = np.max(evals)
mu = np.min(evals)
kappa = L/mu
print(f"L is {L:.2f}, mu is {mu:.2f}, condition number is {kappa:.2e}")
f2 = lambda x : norm(A@x-b)
def subgrad(x):
    r = A@x-b
    return A.T@(r/norm(r))
rho = norm(A,ord=2)
B = norm(xStar-x0)
maxiter = 1e4
x_gd, fHist_gd = gradientDescent(f,grad,1/L,x0,maxiter=maxiter)
x_Nest, fHist_Nest = NesterovGradientDescent(f,grad,1/L,x0,maxiter=maxiter)
x_Nest2, fHist_Nest2 = NesterovGradientDescent(f,grad,1/L,x0,maxiter=maxiter,restart=500)
# subgradient descent:
step = 1e0*(B/rho)/np.sqrt(maxiter)
x_sgd, fHist_sgd = gradientDescent(f2,subgrad,step,x0,maxiter=maxiter)
plt.figure(figsize=(12,7))
plt.semilogy( fHist_gd, label='Gradient Descent' )
plt.semilogy( fHist_Nest, label='Nesterov Acceleration' )
plt.semilogy( fHist_Nest2, label='Nesterov Acceleration w/ restarts' )
plt.semilogy( fHist_sgd, label='Subgradient Descent' )
k = np.arange(1,maxiter)
plt.semilogy(k,1e-4*(1-1/kappa)**k,'--',label='$(1-\kappa^{-1})^k$')
plt.semilogy(k,1e3*(1-1/np.sqrt(kappa))**k,'--',label='$(1-\kappa^{-1/2})^k$')
plt.ylim(bottom=1e-29,top=1e2)
plt.legend()
plt.grid()
plt.show()
Explanation: Question for thought:
The convergence rate of gradient descent and Nesterov acceleration seems to improve eventually. Why?
A: This is probably because the algorithm has identified the right manifold (e.g., the support of $x^\star$) which effectively reduces the dimensionality, and hence we start acting like an over-determined least-squares problem, hence strongly convex. See Are we there yet? Manifold identification of gradient-related proximal methods ICML '19.
Over-determined case, expect linear convergence
End of explanation |
2,676 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy
Step1: Multidimensional array type
Step2: Creating arrays
Step3: See also
Step4: All array creation functions accept an optional dtype argument
Step5: You can use the astype method to create a copy of the array with a given dtype
Step6: IPython's tab completion is useful for exploring the various available dtypes
Step7: The NumPy documentation on dtypes describes the many other ways of specifying dtypes.
Array operations
Basic mathematical operations are elementwise for
Step8: Indexing and slicing
Indexing and slicing provide an efficient way of getting the values in an array and modifying them.
Step9: The enable function is part of vizarray and enables a nice display of arrays
Step10: Extract the 0th column
Step11: The last row
Step12: You can also slice ranges
Step13: Assignment also works with slices
Step14: Note how even though we assigned the value to the slice, the original array was changed. This clarifies that slices are views of the same data, not a copy.
Boolean indexing
Step15: You can use a boolean array to index into the original or another array
Step16: Reshaping, transposing
Step17: Universal functions
Universal functions, or "ufuncs," are functions that take and return arrays or scalars
Step18: Basic data processing
Step19: Numpy has a basic set of methods and function for computing basic quantities about data.
Step20: The cumsum and cumprod methods compute cumulative sums and products
Step21: Most of the functions and methods above take an axis argument that will apply the action along a particular axis
Step22: With axis=0 the action takes place along rows
Step23: With axis=1 the action takes place along columns
Step24: The unique function is extremely useful in working with categorical data
Step25: The where function allows you to apply conditional logic to arrays. Here is a rough sketch of how it works
Step26: The if_false and if_true values can be arrays themselves
Step27: File IO
NumPy has a number of different functions for reading and writing arrays to and from disk.
Single array, binary format
Step28: Using %pycat to look at the file shows that it is binary
Step29: Single array, text format
Step30: Using %pycat to look at the contents shows that the files is indeed a plain text file
Step31: Multiple arrays, binary format
Step32: Linear algebra
NumPy has excellent linear algebra capabilities.
Step33: Remember that array operations are elementwise. Thus, this is not matrix multiplication
Step34: To get matrix multiplication use np.dot
Step35: Or, NumPy as a matrix subclass for which matrix operations are the default
Step36: The np.linalg package has a wide range of fast linear algebra operations.
Here is determinant
Step37: Matrix inverse
Step38: Eigenvalues
Step39: NumPy can be built against fast BLAS/LAPACK implementation for these linear algebra operations.
Step40: Random numbers
NumPy has functions for creating arrays of random numbers from different distributions in np.random, as well as handling things like permutation and shuffling.
Here is the numpy.random documentation.
Step41: The shuffle function shuffles an array in place
Step42: The permutation function does the same thing but first makes a copy | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
plt.style.use('ggplot')
Explanation: NumPy: Numerical Arrays for Python
Learning Objectives: Learn how to create, transform and visualize multidimensional data of a single type using Numpy.
NumPy is the foundation for scientific computing and data science in Python.
Any number of dimensions
All elements of an array have the same data type
Array elements are usually native data dtype
The memory for an array is a contiguous block that can be easily passed to other numerical libraries (BLAS, LAPACK, etc.).
Most of NumPy is implemented in C, so it is fast
NumPy arrays are the foundational data type that the entire Python numerical computing stack is built upon
Plotting
While this notebook doesn't focus on plotting, matplotlib will be used to make a few basic plots.
End of explanation
import numpy as np
import vizarray as vz
data = [0,2,4,6]
a = np.array(data)
type(a)
a
vz.vizarray(a)
a.shape
a.ndim
a.size
a.nbytes
a.dtype
Explanation: Multidimensional array type
End of explanation
data = [[0.0,2.0,4.0,6.0],[1.0,3.0,5.0,7.0]]
b = np.array(data)
b
vz.vizarray(b)
b.shape, b.ndim, b.size, b.nbytes
c = np.arange(0.0, 10.0, 1.0) # Step size of 1.0
c
e = np.linspace(0.0, 5.0, 11) # 11 points
e
np.empty((4,4))
np.zeros((3,3))
np.ones((3,3))
Explanation: Creating arrays
End of explanation
a = np.array([0,1,2,3])
a, a.dtype
Explanation: See also:
empty_like, ones_like, zeros_like
eye, identity
dtype
Arrays have a dtype attribute that encapsulates the data type of each element. It can be set:
Implicitly by the element type
By passing the dtype argument to an array creation function
End of explanation
b = np.zeros((2,2), dtype=np.complex64)
b
c = np.arange(0, 10, 2, dtype=float) # the builtin float replaces the np.float alias removed in recent NumPy
c
Explanation: All array creation functions accept an optional dtype argument:
End of explanation
d = c.astype(dtype=int) # likewise, the builtin int replaces the removed np.int alias
d
Explanation: You can use the astype method to create a copy of the array with a given dtype:
End of explanation
np.float*?
Explanation: IPython's tab completion is useful for exploring the various available dtypes:
End of explanation
a = np.empty((3,3))
a.fill(0.1)
a
b = np.ones((3,3))
b
a+b
b/a
a**2
np.pi*b
Explanation: The NumPy documentation on dtypes describes the many other ways of specifying dtypes.
Array operations
Basic mathematical operations are elementwise for:
Scalars and arrays
Arrays and arrays
End of explanation
a = np.random.rand(10,10)
Explanation: Indexing and slicing
Indexing and slicing provide an efficient way of getting the values in an array and modifying them.
End of explanation
vz.enable()
a
a[0,0]
a[-1,-1] == a[9,9]
Explanation: The enable function is part of vizarray and enables a nice display of arrays:
End of explanation
a[:,0]
Explanation: Extract the 0th column:
End of explanation
a[-1,:]
Explanation: The last row:
End of explanation
a[0:2,0:2]
Explanation: You can also slice ranges:
End of explanation
a[0:5,0:5] = 1.0
a
vz.disable()
Explanation: Assignment also works with slices:
End of explanation
ages = np.array([23,56,67,89,23,56,27,12,8,72])
genders = np.array(['m','m','f','f','m','f','m','m','m','f'])
ages > 30
genders == 'm'
(ages > 10) & (ages < 50)
Explanation: Note how even though we assigned the value to the slice, the original array was changed. This clarifies that slices are views of the same data, not a copy.
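If an independent array is needed rather than a view, the slice can be copied explicitly. A small added illustration (not in the original notebook):
arr = np.arange(10)
view = arr[0:5]         # a view: shares memory with arr
copy = arr[0:5].copy()  # an independent copy
view[:] = -1
print(arr)              # the first five entries of arr have changed
print(copy)             # the copy is unaffected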
Boolean indexing
End of explanation
mask = (genders == 'f')
ages[mask]
ages[ages>30]
Explanation: You can use a boolean array to index into the original or another array:
End of explanation
vz.enable()
a = np.random.rand(3,4)
a
a.T
a.reshape(2,6)
a.reshape(6,2)
a.ravel()
vz.disable()
Explanation: Reshaping, transposing
End of explanation
vz.set_block_size(5)
vz.enable()
t = np.linspace(0.0, 4*np.pi, 100)
t
np.sin(t)
np.exp(t)
vz.disable()
vz.set_block_size(30)
plt.plot(t, np.exp(-0.1*t)*np.sin(t))
Explanation: Universal functions
Universal functions, or "ufuncs," are functions that take and return arrays or scalars:
Vectorized C implementations, much faster than hand written loops in Python
Allow for concise Pythonic code
Here is a complete list of the available NumPy ufuncs.
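To see why ufuncs matter for performance, one can compare a ufunc with an explicit Python loop (an added illustration; exact timings depend on the machine):
tt = np.linspace(0, 4*np.pi, 100000)
%timeit np.sin(tt)                         # vectorized ufunc
%timeit np.array([np.sin(x) for x in tt])  # element-by-element Python loop, far slower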
End of explanation
ages = np.array([23,56,67,89,23,56,27,12,8,72])
genders = np.array(['m','m','f','f','m','f','m','m','m','f'])
Explanation: Basic data processing
End of explanation
ages.min(), ages.max()
ages.mean()
ages.var(), ages.std()
np.bincount(ages)
Explanation: Numpy has a basic set of methods and functions for computing basic quantities about data.
End of explanation
ages.cumsum()
ages.cumprod()
Explanation: The cumsum and cumprod methods compute cumulative sums and products:
End of explanation
a = np.random.randint(0,10,(3,4))
a
Explanation: Most of the functions and methods above take an axis argument that will apply the action along a particular axis:
End of explanation
a.sum(axis=0)
Explanation: With axis=0 the action takes place along rows:
End of explanation
a.sum(axis=1)
Explanation: With axis=1 the action takes place along columns:
End of explanation
np.unique(genders)
np.unique(genders, return_counts=True)
Explanation: The unique function is extremely useful in working with categorical data:
End of explanation
np.where(ages>30, 0, 1)
Explanation: The where function allows you to apply conditional logic to arrays. Here is a rough sketch of how it works:
python
def where(condition, if_true, if_false):
    # rough, elementwise sketch: pick if_true where the condition holds, if_false otherwise
    # (note that np.where returns its second argument wherever the condition is True)
    return np.array([t if c else f
                     for c, t, f in np.broadcast(condition, if_true, if_false)])
End of explanation
np.where(ages<30, 0, ages)
Explanation: The if_false and if_true values can be arrays themselves:
End of explanation
a = np.random.rand(10)
a
np.save('array1', a)
ls
Explanation: File IO
NumPy has a number of different functions for reading and writing arrays to and from disk.
Single array, binary format
End of explanation
%pycat array1.npy
a_copy = np.load('array1.npy')
a_copy
Explanation: Using %pycat to look at the file shows that it is binary:
End of explanation
b = np.random.randint(0,10,(5,3))
b
np.savetxt('array2.txt', b)
ls
Explanation: Single array, text format
End of explanation
%pycat array2.txt
np.loadtxt('array2.txt')
Explanation: Using %pycat to look at the contents shows that the file is indeed a plain text file:
End of explanation
np.savez('arrays.npz', a=a, b=b)
a_and_b = np.load('arrays.npz')
a_and_b['a']
a_and_b['b']
Explanation: Multiple arrays, binary format
End of explanation
a = np.random.rand(5,5)
b = np.random.rand(5,5)
Explanation: Linear algebra
NumPy has excellent linear algebra capabilities.
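For example (an added illustration), linear systems are solved with np.linalg.solve, which is generally preferable to forming the inverse explicitly:
M = np.random.rand(3, 3)
rhs = np.random.rand(3)
sol = np.linalg.solve(M, rhs)     # solves M @ sol = rhs without computing inv(M)
print(np.allclose(M @ sol, rhs))  # True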
End of explanation
a*b
Explanation: Remember that array operations are elementwise. Thus, this is not matrix multiplication:
End of explanation
np.dot(a, b)
Explanation: To get matrix multiplication use np.dot:
End of explanation
m1 = np.matrix(a)
m2 = np.matrix(b)
m1*m2
Explanation: Or, NumPy as a matrix subclass for which matrix operations are the default:
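As an added side note: on recent Python/NumPy versions the @ operator performs matrix multiplication on plain arrays, so np.matrix is rarely needed:
print(np.allclose(a @ b, np.dot(a, b)))  # the @ operator is equivalent to np.dot for 2-D arrays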
End of explanation
np.linalg.det(a)
Explanation: The np.linalg package has a wide range of fast linear algebra operations.
Here is determinant:
End of explanation
np.linalg.inv(a)
Explanation: Matrix inverse:
End of explanation
np.linalg.eigvals(a)
Explanation: Eigenvalues:
End of explanation
c = np.random.rand(2000,2000)
%timeit -n1 -r1 evs = np.linalg.eigvals(c)
Explanation: NumPy can be built against fast BLAS/LAPACK implementations for these linear algebra operations.
End of explanation
plt.hist(np.random.random(250))
plt.title('Uniform Random Distribution $[0,1]$')
plt.xlabel('value')
plt.ylabel('count')
plt.hist(np.random.randn(250))
plt.title('Standard Normal Distribution')
plt.xlabel('value')
plt.ylabel('count')
Explanation: Random numbers
NumPy has functions for creating arrays of random numbers from different distributions in np.random, as well as handling things like permutation and shuffling.
Here is the numpy.random documentation.
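Newer NumPy versions also recommend the Generator interface for reproducible random numbers; a brief added example:
rng = np.random.default_rng(42)   # a seeded Generator
print(rng.random(3))              # uniform samples on [0, 1)
print(rng.normal(size=3))         # standard normal samples
print(rng.permutation(5))         # a random permutation of range(5)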
End of explanation
a = np.arange(0,10)
np.random.shuffle(a)
a
Explanation: The shuffle function shuffles an array in place:
End of explanation
a = np.arange(0,10)
print(np.random.permutation(a))
print(a)
Explanation: The permutation function does the same thing but first makes a copy:
End of explanation |
2,677 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to pyrpl
1) Introduction
The RedPitaya is an affordable FPGA board with fast analog inputs and outputs. This makes it interesting also for quantum optics experiments. The software package PyRPL (Python RedPitaya Lockbox) is an implementation of many devices that are needed for optics experiments every day. The user interface and all high-level functionality is written in python, but an essential part of the software is hidden in a custom FPGA design (based on the official RedPitaya software version 0.95). While most users probably never want to touch the FPGA design, the Verilog source code is provided together with this package and may be modified to customize the software to your needs.
2) Table of contents
In this document, you will find the following sections
Step1: Should the directory not be the one of your local github installation, you might have an older version of pyrpl installed. Just delete any such directories other than your principal github clone and everything should work.
Option 2
Step2: Compiling the server application (optional)
The software comes with a precompiled version of the server application (written in C) that runs on the RedPitaya. This application is uploaded automatically when you start the connection. If you made changes to this file, you can recompile it by typing
$\texttt{python setup.py compile_server}$
For this to work, you must have gcc and the cross-compiling libraries installed. Basically, if you can compile any of the official RedPitaya software written in C, then this should work, too.
If you do not have a working cross-compiler installed on your UserPC, you can also compile directly on the RedPitaya (tested with ecosystem v0.95). To do so, you must upload the directory pyrpl/monitor_server on the redpitaya, and launch the compilation with the command
$\texttt{make CROSS_COMPILE=}$
Compiling the FPGA bitfile (optional)
If you would like to modify the FPGA code or just make sure that it can be compiled, you should have a working installation of Vivado 2015.4. For windows users it is recommended to set up a virtual machine with Ubuntu on which the compiler can be run in order to avoid any compatibility problems. For the FPGA part, you only need the /fpga subdirectory of this software. Make sure it is somewhere in the file system of the machine with the vivado installation. Then type the following commands. You should adapt the path in the first and second commands to the locations of the Vivado installation / the fpga directory in your filesystem
Step3: Sometimes, python has problems finding the path to pyrplockbox. In that case you should add the pyrplockbox directory to your pythonpath environment variable (http
Step4: Now retry to load the module. It should really work now.
Step5: Connecting to the RedPitaya
You should have a working SD card (any version of the SD card content is okay) in your RedPitaya (for instructions see http
Step6: If you see at least one '>' symbol, your computer has successfully connected to your RedPitaya via SSH. This means that your connection works. The message 'Server application started on port 2222' means that your computer has sucessfully installed and started a server application on your RedPitaya. Once you get 'Client started with success', your python session has successfully connected to that server and all things are in place to get started.
Basic communication with your RedPitaya
Step7: With the last command, you have successfully retrieved a value from an FPGA register. This operation takes about 300 µs on my computer. So there is enough time to repeat the reading n times.
Step8: You see that the input values are not exactly zero. This is normal with all RedPitayas as some offsets are hard to keep zero when the environment changes (temperature etc.). So we will have to compensate for the offsets with our software. Another thing is that you see quite a bit of scatter beetween the points - almost as much that you do not see that the datapoints are quantized. The conclusion here is that the input noise is typically not totally negligible. Therefore we will need to use every trick at hand to get optimal noise performance.
After reading from the RedPitaya, let's now try to write to the register controlling the first 8 yellow LED's on the board. The number written to the LED register is displayed on the LED array in binary representation. You should see some fast flashing of the yellow leds for a few seconds when you execute the next block.
Step9: 5) RedPitaya modules
Let's now look a bit closer at the class RedPitaya. Besides managing the communication with your board, it contains different modules that represent the different sections of the FPGA. You already encountered two of them in the example above
Step10: ASG and Scope module
Arbitrary Signal Generator
There are two Arbitrary Signal Generator modules
Step11: Let's set up the ASG to output a sawtooth signal of amplitude 0.8 V (peak-to-peak 1.6 V) at 1 MHz on output 2
Step12: Oscilloscope
The scope works similar to the ASG but in reverse
Step13: Let's have a look at a signal generated by asg1. Later we will use convenience functions to reduce the amount of code necessary to set up the scope
Step14: What do we see? The blue trace for channel 1 shows just the output signal of the asg. The time=0 corresponds to the trigger event. One can see that the trigger was not activated by the constant signal of 0 at the beginning, since it did not cross the hysteresis interval. One can also see a 'bug'
Step15: PID module
We have already seen some use of the pid module above. There are four PID modules available
Step16: Proportional and integral gain
Step17: Control with the integral value register
Step18: Again, what do we see? We set up the pid module with a constant (positive) input from the ASG. We then turned on the integrator (with negative gain), which will inevitably lead to a slow drift of the output towards negative voltages (blue trace). We had set the integral value above the positive saturation voltage, such that it takes longer until it reaches the negative saturation voltage. The output of the pid module is bound to saturate at +- 1 Volts, which is clearly visible in the green trace. The value of the integral is internally represented by a 32 bit number, so it can practically take arbitrarily large values compared to the 14 bit output. You can set it within the range from +4 to -4V, for example if you want to exloit the delay, or even if you want to compensate it with proportional gain.
Input filters
The pid module has one more feature
Step19: You should now go back to the Scope and ASG example above and play around with the setting of these filters to convince yourself that they do what they are supposed to.
IQ module
Demodulation of a signal means convolving it with a sine and cosine at the 'carrier frequency'. The two resulting signals are usually low-pass filtered and called 'quadrature I' and and 'quadrature Q'. Based on this simple idea, the IQ module of pyrpl can implement several functionalities, depending on the particular setting of the various registers. In most cases, the configuration can be completely carried out through the setup function of the module.
<img src="IQmodule.png">
Lock-in detection / PDH / synchronous detection
Step20: After this setup, the demodulated quadrature is available as the output_signal of iq0, and can serve for example as the input of a PID module to stabilize the frequency of a laser to a reference cavity. The module was tested and is in daily use in our lab. Frequencies as low as 20 Hz and as high as 50 MHz have been used for this technique. At the present time, the functionality of a PDH-like detection as the one set up above cannot be conveniently tested internally. We plan to upgrade the IQ-module to VCO functionality in the near future, which will also enable testing the PDH functionality.
Network analyzer
When implementing complex functionality in the RedPitaya, the network analyzer module is by far the most useful tool for diagnostics. The network analyzer is able to probe the transfer function of any other module or external device by exciting the device with a sine of variable frequency and analyzing the resulting output from that device. This is done by demodulating the device output (=network analyzer input) with the same sine that was used for the excitation and a corresponding cosine, lowpass-filtering, and averaging the two quadratures for a well-defined number of cycles. From the two quadratures, one can extract the magnitude and phase shift of the device's transfer function at the probed frequencies. Let's illustrate the behaviour. For this example, you should connect output 1 to input 1 of your RedPitaya, such that we can compare the analog transfer function to a reference. Make sure you put a 50 Ohm terminator in parallel with input 1.
Step21: If your cable is properly connected, you will see that both magnitudes are near 0 dB over most of the frequency range. Near the Nyquist frequency (62.5 MHz), one can see that the internal signal remains flat while the analog signal is strongly attenuated, as it should be to avoid aliasing. One can also see that the delay (phase lag) of the internal signal is much less than the one through the analog signal path.
If you have executed the last example (PDH detection) in this python session, iq0 should still send a modulation to out1, which is added to the signal of the network analyzer, and sampled by input1. In this case, you should see a little peak near the PDH modulation frequency, which was 25 MHz in the example above.
Lorentzian bandpass filter
The iq module can also be used as a bandpass filter with continuously tunable phase. Let's measure the transfer function of such a bandpass with the network analyzer
Step22: Frequency comparator module
To lock the frequency of a VCO (Voltage controlled oscillator) to a frequency reference defined by the RedPitaya, the IQ module contains the frequency comparator block. This is how you set it up. You have to feed the output of this module through a PID block to send it to the analog output. As you will see, if your feedback is not already enabled when you turn on the module, its integrator will rapidly saturate (-585 is the maximum value here, while a value of the order of 1e-3 indicates a reasonable frequency lock).
Step23: IIR module
Sometimes it is interesting to realize even more complicated filters. This is the case, for example, when a piezo resonance limits the maximum gain of a feedback loop. For these situations, the IIR module can implement filters with 'Infinite Impulse Response' (https
Step24: If you try changing a few coefficients, you will see that your design filter is not always properly realized. The bottleneck here is the conversion from the analytical expression (poles and zeros) to the filter coefficients, not the FPGA performance. This conversion is (among other things) limited by floating point precision. We hope to provide a more robust algorithm in future versions. If you can obtain filter coefficients by another, preferrably analytical method, this might lead to better results than our generic algorithm.
Let's check if the filter is really working as it is supposed
Step25: As you can see, the filter has trouble to realize large dynamic ranges. With the current standard design software, it takes some 'practice' to design transfer functions which are properly implemented by the code. While most zeros are properly realized by the filter, you see that the first two poles suffer from some kind of saturation. We are working on an automatic rescaling of the coefficients to allow for optimum dynamic range. From the overflow register printed above the plot, you can also see that the network analyzer scan caused an internal overflow in the filter. All these are signs that different parameters should be tried.
A straightforward way to improve filter performance is to adjust the DC-gain and compensate it later with the gain of a subsequent PID module. See for yourself what the parameter g=0.1 (instead of the default value g=1.0) does here
Step26: You see that we have improved the second peak (and avoided internal overflows) at the cost of increased noise in other regions. Of course this noise can be reduced by increasing the NA averaging time. But maybe it will be detrimental to your application? After all, IIR filter design is far from trivial, but this tutorial should have given you enough information to get started and maybe to improve the way we have implemented the filter in pyrpl (e.g. by implementing automated filter coefficient scaling).
If you plan to play more with the filter, these are the remaining internal iir registers
Step27: 6) The Pyrpl class
The RedPitayas in our lab are mostly used to stabilize one item or another in quantum optics experiments. To do so, the experimenter usually does not want to bother with the detailed implementation on the RedPitaya while trying to understand the physics going on in her/his experiment. For this situation, we have developed the Pyrpl class, which provides an API with high-level functions such as
Step28: We need to inform our RedPitaya about which connections we want to make. The cabling discussed above translates into
Step29: Finally, we need to define a setpoint. Lets first measure the offset when the laser is away from the resonance, and then measure or estimate how much light gets through on resonance.
Step30: Now lets start to approach the resonance. We need to figure out from which side we are coming. The choice is made such that a simple integrator will naturally drift into the resonance and stay there
Step31: Questions to users
Step32: 7) The Graphical User Interface
Most of the modules described in section 5 can be controlled via a graphical user interface. The graphical window can be displayed with the following | Python Code:
import pyrpl
print(pyrpl.__file__)
Explanation: Introduction to pyrpl
1) Introduction
The RedPitaya is an affordable FPGA board with fast analog inputs and outputs. This makes it interesting also for quantum optics experiments. The software package PyRPL (Python RedPitaya Lockbox) is an implementation of many devices that are needed for optics experiments every day. The user interface and all high-level functionality is written in python, but an essential part of the software is hidden in a custom FPGA design (based on the official RedPitaya software version 0.95). While most users probably never want to touch the FPGA design, the Verilog source code is provided together with this package and may be modified to customize the software to your needs.
2) Table of contents
In this document, you will find the following sections:
1. Introduction
2. ToC
3. Installation
4. First steps
5. RedPitaya Modules
6. The Pyrpl class
7. The Graphical User Interface
If you are using Pyrpl for the first time, you should read sections 1-4. This will take about 15 minutes and should leave you able to communicate with your RedPitaya via python.
If you plan to use Pyrpl for a project that is not related to quantum optics, you probably want to go to section 5 then and omit section 6 altogether. Inversely, if you are only interested in a powerful tool for quantum optics and dont care about the details of the implementation, go to section 6. If you plan to contribute to the repository, you should definitely read section 5 to get an idea of what this software package realy does, and where help is needed. Finaly, Pyrpl also comes with a Graphical User Interface (GUI) to interactively control the modules described in section 5. Please, read section 7 for a quick description of the GUI.
3) Installation
Option 3: Simple clone from GitHub (developers)
If instead you plan to synchronize with github on a regular basis, you can also leave the downloaded code where it is and add the parent directory of the pyrpl folder to the PYTHONPATH environment variable as described in this thread: http://stackoverflow.com/questions/3402168/permanently-add-a-directory-to-pythonpath. For all beta-testers and developers, this is the preferred option. So the typical PYTHONPATH environment variable should look somewhat like this:
$\texttt{PYTHONPATH=C:\OTHER_MODULE;C:\GITHUB\PYRPL}$
If you are experiencing problems with the dependencies on other python packages, executing the following command in the pyrpl directory might help:
$\texttt{python setup.py install develop}$
If at a later point, you have the impression that updates from github are not reflected in the program's behavior, try this:
End of explanation
#no-test
!pip install pyrpl #if you look at this file in ipython notebook, just execute this cell to install pyrplockbox
Explanation: Should the directory not be the one of your local github installation, you might have an older version of pyrpl installed. Just delete any such directories other than your principal github clone and everything should work.
Option 2: from GitHub using setuptools (beta version)
Download the code manually from https://github.com/lneuhaus/pyrpl/archive/master.zip and unzip it or get it directly from git by typing
$\texttt{git clone https://github.com/lneuhaus/pyrpl.git YOUR_DESTINATIONFOLDER}$
In a command line shell, navigate into your new local pyrplockbox directory and execute
$\texttt{python setup.py install}$
This copies the files into the side-package directory of python. The setup should make sure that you have the python libraries paramiko (http://www.paramiko.org/installing.html) and scp (https://pypi.python.org/pypi/scp) installed. If this is not the case you will get a corresponding error message in a later step of this tutorial.
Option 1: with pip (coming soon)
If you have pip correctly installed, executing the following line in a command line should install pyrplockbox and all dependencies:
$\texttt{pip install pyrpl}$
End of explanation
from pyrpl import Pyrpl
Explanation: Compiling the server application (optional)
The software comes with a precompiled version of the server application (written in C) that runs on the RedPitaya. This application is uploaded automatically when you start the connection. If you made changes to this file, you can recompile it by typing
$\texttt{python setup.py compile_server}$
For this to work, you must have gcc and the cross-compiling libraries installed. Basically, if you can compile any of the official RedPitaya software written in C, then this should work, too.
If you do not have a working cross-compiler installed on your UserPC, you can also compile directly on the RedPitaya (tested with ecosystem v0.95). To do so, you must upload the directory pyrpl/monitor_server on the redpitaya, and launch the compilation with the command
$\texttt{make CROSS_COMPILE=}$
Compiling the FPGA bitfile (optional)
If you would like to modify the FPGA code or just make sure that it can be compiled, you should have a working installation of Vivado 2015.4. For windows users it is recommended to set up a virtual machine with Ubuntu on which the compiler can be run in order to avoid any compatibility problems. For the FPGA part, you only need the /fpga subdirectory of this software. Make sure it is somewhere in the file system of the machine with the vivado installation. Then type the following commands. You should adapt the path in the first and second commands to the locations of the Vivado installation / the fpga directory in your filesystem:
$\texttt{source /opt/Xilinx/Vivado/2015.4/settings64.sh}$
$\texttt{cd /home/myusername/fpga}$
$\texttt{make}$
The compilation should take between 15 and 30 minutes. The result will be the file $\texttt{fpga/red_pitaya.bin}$. To test the new FPGA design, make sure that this file in the fpga subdirectory of your pyrpl code directory. That is, if you used a virtual machine for the compilation, you must copy the file back to the original machine on which you run pyrpl.
Unitary tests (optional)
In order to make sure that any recent changes do not affect prior functionality, a large number of automated tests have been implemented. Every push to the github repository is automatically installed tested on an empty virtual linux system. However, the testing server has currently no RedPitaya available to run tests directly on the FPGA. Therefore it is also useful to run these tests on your local machine in case you modified the code.
Currently, the tests confirm that
- all pyrpl modules can be loaded in python
- all designated registers can be read and written
- future: functionality of all major submodules against reference benchmarks
To run the test, navigate in command line into the pyrpl directory and type
$\texttt{set REDPITAYA=192.168.1.100}$ (in windows) or
$\texttt{export REDPITAYA=192.168.1.100}$ (in linux)
$\texttt{python setup.py nosetests}$
The first command tells the test at which IP address it can find a RedPitaya. The last command runs the actual test. After a few seconds, there should be some output saying that the software has passed more than 140 tests.
After you have implemented additional features, you are encouraged to add unitary tests to consolidate the changes. If you immediately validate your changes with unitary tests, this will result in a huge productivity improvement for you. You can find all test files in the folder $\texttt{pyrpl/pyrpl/test}$, and the existing examples (notably $\texttt{test_example.py}$) should give you a good point to start. As long as you add a function starting with 'test_' in one of these files, your test should automatically run along with the others. As you add more tests, you will see the number of total tests increase when you run the test launcher.
Workflow to submit code changes (for developers)
As soon as the code will have reached version 0.9.0.3 (high-level unitary tests implemented and passing, approx. end of May 2016), we will consider the master branch of the github repository as the stable pre-release version. The goal is that the master branch will guarantee functionality at all times.
Any changes to the code, if they do not pass the unitary tests or have not been tested, are to be submitted as pull-requests in order not to endanger the stability of the master branch. We will briefly desribe how to properly submit your changes in that scenario.
Let's say you already changed the code of your local clone of pyrpl. Instead of directly committing the change to the master branch, you should create your own branch. In the windows application of github, when you are looking at the pyrpl repository, there is a small symbol looking like a steet bifurcation in the upper left corner, that says "Create new branch" when you hold the cursor over it. Click it and enter the name of your branch "leos development branch" or similar. The program will automatically switch to that branch. Now you can commit your changes, and then hit the "publish" or "sync" button in the upper right. That will upload your changes so everyone can see and test them.
You can continue working on your branch, add more commits and sync them with the online repository until your change is working. If the master branch has changed in the meantime, just click 'sync' to download them, and then the button "update from master" (upper left corner of the window) that will insert the most recent changes of the master branch into your branch. If the button doesn't work, that means that there are no changes available. This way you can benefit from the updates of the stable pre-release version, as long as they don't conflict with the changes you have been working on. If there are conflicts, github will wait for you to resolve them. In case you have been recompiling the fpga, there will always be a conflict w.r.t. the file 'red_pitaya.bin' (since it is a binary file, github cannot simply merge the differences you implemented). The best way to deal with this problem is to recompile the fpga bitfile after the 'update from master'. This way the binary file in your repository will correspond to the fpga code of the merged verilog files, and github will understand from the most recent modification date of the file that your local version of red_pitaya.bin is the one to keep.
At some point, you might want to insert your changes into the master branch, because they have been well-tested and are going to be useful for everyone else, too. To do so, after having committed and synced all recent changes to your branch, click on "Pull request" in the upper right corner, enter a title and description concerning the changes you have made, and click "Send pull request". Now your job is done. I will review and test the modifications of your code once again, possibly fix incompatibility issues, and merge it into the master branch once all is well. After the merge, you can delete your development branch. If you plan to continue working on related changes, you can also keep the branch and send pull requests later on. If you plan to work on a different feature, I recommend you create a new branch with a name related to the new feature, since this will make the evolution history of the feature more understandable for others. Or, if you would like to go back to following the master branch, click on the little downward arrow besides the name of your branch close to the street bifurcation symbol in the upper left of the github window. You will be able to choose which branch to work on, and to select master.
Let's all try to stick to this protocol. It might seem a little complicated at first, but you will quikly appreciate the fact that other people's mistakes won't be able to endanger your working code, and that by following the commits of the master branch alone, you will realize if an update is incompatible with your work.
4) First steps
If the installation went well, you should now be able to load the package in python. If that works you can pass directly to the next section 'Connecting to the RedPitaya'.
End of explanation
#no-test
cd c:\lneuhaus\github\pyrpl
Explanation: Sometimes, python has problems finding the path to pyrplockbox. In that case you should add the pyrplockbox directory to your pythonpath environment variable (http://stackoverflow.com/questions/3402168/permanently-add-a-directory-to-pythonpath). If you do not know how to do that, just manually navigate the ipython console to the directory, for example:
End of explanation
from pyrpl import Pyrpl
Explanation: Now retry to load the module. It should really work now.
End of explanation
#define hostname
HOSTNAME = ""
from pyrpl import Pyrpl
p = Pyrpl(config='', # do not use a config file
#config='tutorial', # this would continuously save the current redpitaya state to a file "tutorial.yml"
hostname=HOSTNAME)
Explanation: Connecting to the RedPitaya
You should have a working SD card (any version of the SD card content is okay) in your RedPitaya (for instructions see http://redpitaya.com/quick-start/). The RedPitaya should be connected via ethernet to your computer. To set this up, there is plenty of instructions on the RedPitaya website (http://redpitaya.com/quick-start/). If you type the ip address of your module in a browser, you should be able to start the different apps from the manufacturer. The default address is http://192.168.1.100.
If this works, we can load the python interface of pyrplockbox by specifying the RedPitaya's ip address. If you leave the HOSTNAME blanck, a popup window will open up to let you choose among the various connected Redpitayas on your local network.
End of explanation
#check the value of input1
print(p.rp.scope.voltage_in1)
Explanation: If you see at least one '>' symbol, your computer has successfully connected to your RedPitaya via SSH. This means that your connection works. The message 'Server application started on port 2222' means that your computer has sucessfully installed and started a server application on your RedPitaya. Once you get 'Client started with success', your python session has successfully connected to that server and all things are in place to get started.
Basic communication with your RedPitaya
End of explanation
#see how the adc reading fluctuates over time
import time
from matplotlib import pyplot as plt
times,data = [],[]
t0 = time.time()
n = 3000
for i in range(n):
times.append(time.time()-t0)
data.append(p.rp.scope.voltage_in1)
print("Rough time to read one FPGA register: ", (time.time()-t0)/n*1e6, "µs")
%matplotlib inline
f, axarr = plt.subplots(1,2, sharey=True)
axarr[0].plot(times, data, "+");
axarr[0].set_title("ADC voltage vs time");
axarr[1].hist(data, bins=10, density=True, orientation="horizontal");  # 'normed' was removed in newer matplotlib; 'density' is its replacement
axarr[1].set_title("ADC voltage histogram");
Explanation: With the last command, you have successfully retrieved a value from an FPGA register. This operation takes about 300 µs on my computer. So there is enough time to repeat the reading n times.
End of explanation
#blink some leds for 5 seconds
from time import sleep
for i in range(1025):
p.rp.hk.led=i
sleep(0.005)
# now feel free to play around a little to get familiar with binary representation by looking at the leds.
from time import sleep
p.rp.hk.led = 0b00000001
for i in range(10):
p.rp.hk.led = ~p.rp.hk.led>>1
sleep(0.2)
import random
for i in range(100):
p.rp.hk.led = random.randint(0,255)
sleep(0.02)
Explanation: You see that the input values are not exactly zero. This is normal with all RedPitayas as some offsets are hard to keep zero when the environment changes (temperature etc.). So we will have to compensate for the offsets with our software. Another thing is that you see quite a bit of scatter between the points - almost so much that you cannot see that the datapoints are quantized. The conclusion here is that the input noise is typically not totally negligible. Therefore we will need to use every trick at hand to get optimal noise performance.
After reading from the RedPitaya, let's now try to write to the register controlling the first 8 yellow LED's on the board. The number written to the LED register is displayed on the LED array in binary representation. You should see some fast flashing of the yellow leds for a few seconds when you execute the next block.
End of explanation
r = p.rp #redpitaya object
r.hk #"housekeeping" = LEDs and digital inputs/outputs
r.ams #"analog mixed signals" = auxiliary ADCs and DACs.
r.scope #oscilloscope interface
r.asg0 #"arbitrary signal generator" channel 1
r.asg1 #"arbitrary signal generator" channel 2
r.pid0 #first of four PID modules
r.pid1
r.pid2
r.iq0 #first of three I+Q quadrature demodulation/modulation modules
r.iq1
r.iq2
r.iir #"infinite impules response" filter module that can realize complex transfer functions
Explanation: 5) RedPitaya modules
Let's now look a bit closer at the class RedPitaya. Besides managing the communication with your board, it contains different modules that represent the different sections of the FPGA. You already encountered two of them in the example above: "hk" and "scope". Here is the full list of modules:
End of explanation
asg = r.asg0 # make a shortcut
print("Trigger sources:", asg.trigger_sources)
print("Output options: ", asg.output_directs)
Explanation: ASG and Scope module
Arbitrary Signal Generator
There are two Arbitrary Signal Generator modules: asg1 and asg2. For these modules, any waveform composed of $2^{14}$ programmable points is sent to the output with arbitrary frequency and start phase upon a trigger event.
End of explanation
asg.output_direct = 'out2'
asg.setup(waveform='halframp', frequency=20e4, amplitude=0.8, offset=0, trigger_source='immediately')
Explanation: Let's set up the ASG to output a sawtooth signal of amplitude 0.8 V (peak-to-peak 1.6 V) at 1 MHz on output 2:
End of explanation
s = p.rp.scope # shortcut
print("Available decimation factors:", s.decimations)
print("Trigger sources:", s.trigger_sources)
print("Available inputs: ", s.inputs)
s.inputs
Explanation: Oscilloscope
The scope works similarly to the ASG but in reverse: Two channels are available. A table of $2^{14}$ datapoints for each channel is filled with the time series of incoming data. Downloading a full trace takes about 10 ms over standard ethernet. The rate at which the memory is filled is the sampling rate (125 MHz) divided by the value of 'decimation'. The property 'average' decides whether each datapoint is a single sample or the average of all samples over the decimation interval.
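As a quick added sanity check of these numbers, the duration of a full trace can be computed from the values stated above:
decimation = 64
trace_duration = 2**14 * decimation / 125e6   # points per trace * time per sample
print(trace_duration)                          # about 8.4 ms for decimation = 64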
End of explanation
from pyrpl.async_utils import sleep
from pyrpl import RedPitaya
#reload everything
r = p.rp #redpitaya object
asg = r.asg1
s = r.scope
# turn off asg so the scope has a chance to measure its "off-state" as well
asg.output_direct = "off"
# setup scope
s.input1 = 'asg1'
# pass asg signal through pid0 with a simple integrator - just for fun (detailed explanations for pid will follow)
r.pid0.input = 'asg1'
r.pid0.ival = 0 # reset the integrator to zero
r.pid0.i = 1000 # unity gain frequency of 1000 hz
r.pid0.p = 1.0 # proportional gain of 1.0
r.pid0.inputfilter = [0,0,0,0] # leave input filter disabled for now
# show pid output on channel2
s.input2 = 'pid0'
# trig at zero volt crossing
s.threshold_ch1 = 0
# positive/negative slope is detected by waiting for input to
# sweep through hysteresis around the trigger threshold in
# the right direction
s.hysteresis_ch1 = 0.01
# trigger on the input signal positive slope
s.trigger_source = 'ch1_positive_edge'
# take data symmetrically around the trigger event
s.trigger_delay = 0
# set decimation factor to 64 -> full scope trace is 8ns * 2^14 * decimation = 8.3 ms long
s.decimation = 64
# only 1 trace average
s.trace_average = 1
# setup the scope for an acquisition
curve = s.single_async()
sleep(0.001)
print("\nBefore turning on asg:")
print("Curve ready:", s.curve_ready()) # trigger should still be armed
# turn on asg and leave enough time for the scope to record the data
asg.setup(frequency=1e3, amplitude=0.3, start_phase=90, waveform='halframp', trigger_source='immediately')
sleep(0.010)
# check that the trigger has been disarmed
print("\nAfter turning on asg:")
print("Curve ready:", s.curve_ready())
print("Trigger event age [ms]:",8e-9*((s.current_timestamp&0xFFFFFFFFFFFFFFFF) - s.trigger_timestamp)*1000)
# plot the data
%matplotlib inline
curve = curve.result()
plt.plot(s.times*1e3, curve[0], s.times*1e3, curve[1]);
plt.xlabel("Time [ms]");
plt.ylabel("Voltage");
Explanation: Let's have a look at a signal generated by asg1. Later we will use convenience functions to reduce the amount of code necessary to set up the scope:
End of explanation
# useful functions for scope diagnostics
print("Curve ready:", s.curve_ready())
print("Trigger source:",s.trigger_source)
print("Trigger threshold [V]:",s.threshold_ch1)
print("Averaging:",s.average)
print("Trigger delay [s]:",s.trigger_delay)
print("Trace duration [s]: ",s.duration)
print("Trigger hysteresis [V]", s.hysteresis_ch1)
print("Current scope time [cycles]:",hex(s.current_timestamp))
print("Trigger time [cycles]:",hex(s.trigger_timestamp))
print("Current voltage on channel 1 [V]:", r.scope.voltage_in1)
print("First point in data buffer 1 [V]:", s.ch1_firstpoint)
Explanation: What do we see? The blue trace for channel 1 shows just the output signal of the asg. The time=0 corresponds to the trigger event. One can see that the trigger was not activated by the constant signal of 0 at the beginning, since it did not cross the hysteresis interval. One can also see a 'bug': After setting up the asg, it outputs the first value of its data table until its waveform output is triggered. For the halframp signal, as it is implemented in pyrpl, this is the maximally negative value. However, we passed the argument start_phase=90 to the asg.setup function, which shifts the first point by a quarter period. Can you guess what happens when we set start_phase=180? You should try it out!
In green, we see the same signal, filtered through the pid module. The nonzero proportional gain leads to instant jumps along with the asg signal. The integrator is responsible for the constant decrease rate at the beginning, and the low-pass that smoothens the asg waveform a little. One can also foresee that, if we are not paying attention, too large an integrator gain will quickly saturate the outputs.
End of explanation
print(r.pid0.help())
Explanation: PID module
We have already seen some use of the pid module above. There are four PID modules available: pid0 to pid3.
End of explanation
#make shortcut
pid = r.pid0
#turn off by setting gains to zero
pid.p,pid.i = 0,0
print("P/I gain when turned off:", pid.i,pid.p)
# small nonzero numbers set gain to minimum value - avoids rounding off to zero gain
pid.p = 1e-100
pid.i = 1e-100
print("Minimum proportional gain: ",pid.p)
print("Minimum integral unity-gain frequency [Hz]: ",pid.i)
# saturation at maximum values
pid.p = 1e100
pid.i = 1e100
print("Maximum proportional gain: ",pid.p)
print("Maximum integral unity-gain frequency [Hz]: ",pid.i)
Explanation: Proportional and integral gain
End of explanation
import numpy as np
#make shortcut
pid = r.pid0
# set input to asg1
pid.input = "asg1"
# set asg to constant 0.1 Volts
r.asg1.setup(waveform="dc", offset = 0.1)
# set scope ch1 to pid0
r.scope.input1 = 'pid0'
#turn off the gains for now
pid.p,pid.i = 0, 0
#set integral value to zero
pid.ival = 0
#prepare data recording
from time import time
times, ivals, outputs = [], [], []
# turn on integrator to whatever negative gain
pid.i = -10
# set integral value above the maximum positive voltage
pid.ival = 1.5
#take 1000 points - jitter of the ethernet delay will add a noise here but we dont care
for n in range(1000):
times.append(time())
ivals.append(pid.ival)
outputs.append(r.scope.voltage_in1)
#plot
import matplotlib.pyplot as plt
%matplotlib inline
times = np.array(times)-min(times)
plt.plot(times,ivals,times,outputs);
plt.xlabel("Time [s]");
plt.ylabel("Voltage");
Explanation: Control with the integral value register
End of explanation
# off by default
r.pid0.inputfilter
# minimum cutoff frequency is 2 Hz, maximum 77 kHz (for now)
r.pid0.inputfilter = [1,1e10,-1,-1e10]
print(r.pid0.inputfilter)
# not setting a coefficient turns that filter off
r.pid0.inputfilter = [0,4,8]
print(r.pid0.inputfilter)
# setting without list also works
r.pid0.inputfilter = -2000
print(r.pid0.inputfilter)
# turn off again
r.pid0.inputfilter = []
print(r.pid0.inputfilter)
Explanation: Again, what do we see? We set up the pid module with a constant (positive) input from the ASG. We then turned on the integrator (with negative gain), which will inevitably lead to a slow drift of the output towards negative voltages (blue trace). We had set the integral value above the positive saturation voltage, such that it takes longer until it reaches the negative saturation voltage. The output of the pid module is bound to saturate at +- 1 Volts, which is clearly visible in the green trace. The value of the integral is internally represented by a 32 bit number, so it can practically take arbitrarily large values compared to the 14 bit output. You can set it within the range from +4 to -4V, for example if you want to exploit the delay, or even if you want to compensate it with proportional gain.
Input filters
The pid module has one more feature: A bank of 4 input filters in series. These filters can be either off (bandwidth=0), lowpass (bandwidth positive) or highpass (bandwidth negative). The way these filters were implemented demands that the filter bandwidths can only take values that scale as the powers of 2.
End of explanation
#reload to make sure settings are default ones
#from pyrpl import Pyrpl
#r = Pyrpl(hostname=HOSTNAME, config='tutorial').rp
#shortcut
iq = r.iq0
# modulation/demodulation frequency 25 MHz
# two lowpass filters with 10 and 20 kHz bandwidth
# input signal is analog input 1
# input AC-coupled with cutoff frequency near 50 kHz
# modulation amplitude 0.1 V
# modulation goes to out1
# output_signal is the demodulated quadrature 1
# quadrature_1 is amplified by 10
iq.setup(frequency=25e6, bandwidth=[10e3,20e3], gain=0.0,
phase=0, acbandwidth=50000, amplitude=0.5,
input='in1', output_direct='out1',
output_signal='quadrature', quadrature_factor=10)
Explanation: You should now go back to the Scope and ASG example above and play around with the setting of these filters to convince yourself that they do what they are supposed to.
IQ module
Demodulation of a signal means convolving it with a sine and cosine at the 'carrier frequency'. The two resulting signals are usually low-pass filtered and called 'quadrature I' and and 'quadrature Q'. Based on this simple idea, the IQ module of pyrpl can implement several functionalities, depending on the particular setting of the various registers. In most cases, the configuration can be completely carried out through the setup function of the module.
<img src="IQmodule.png">
Lock-in detection / PDH / synchronous detection
End of explanation
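To make the demodulation idea above concrete, here is a minimal, pure-numpy illustration (independent of the Red Pitaya hardware; all numbers are arbitrary test values):
import numpy as np
fs = 125e6                      # sampling rate of the simulated trace
f_carrier = 25e6                # demodulation frequency
t = np.arange(16384) / fs
signal = 0.3 * np.cos(2 * np.pi * f_carrier * t + 0.4) + 0.05 * np.random.randn(t.size)
# multiply by cosine and sine at the carrier frequency, then low-pass by simple averaging
i_quad = np.mean(2 * signal * np.cos(2 * np.pi * f_carrier * t))
q_quad = np.mean(2 * signal * np.sin(2 * np.pi * f_carrier * t))
print("amplitude:", np.hypot(i_quad, q_quad))        # close to 0.3
print("phase [rad]:", np.arctan2(q_quad, i_quad))    # close to -0.4 (sign depends on the convention)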
# shortcut for na
na = p.networkanalyzer
na.iq_name = 'iq1'
#take transfer functions. first: iq1 -> iq1, second iq1->out1->(your cable)->adc1
na.setup(start=1e3,stop=62.5e6,points=1001,rbw=1000,amplitude=0.2,input='iq1',output_direct='off', acbandwidth=0, trace_average=1)
iq1 = na.single()
na.setup(start=1e3,stop=62.5e6,points=1001,rbw=1000,amplitude=0.2,input='in1',output_direct='out1', acbandwidth=0, trace_average=1)
adc1 = na.single()
f = na.data_x
#plot
from pyrpl.hardware_modules.iir.iir_theory import bodeplot
%matplotlib inline
bodeplot([(f, iq1, "iq1->iq1"), (f, adc1, "iq1->out1->in1->iq1")], xlog=True)
Explanation: After this setup, the demodulated quadrature is available as the output_signal of iq0, and can serve for example as the input of a PID module to stabilize the frequency of a laser to a reference cavity. The module was tested and is in daily use in our lab. Frequencies as low as 20 Hz and as high as 50 MHz have been used for this technique. At the present time, the functionality of a PDH-like detection as the one set up above cannot be conveniently tested internally. We plan to upgrade the IQ-module to VCO functionality in the near future, which will also enable testing the PDH functionality.
Network analyzer
When implementing complex functionality in the RedPitaya, the network analyzer module is by far the most useful tool for diagnostics. The network analyzer is able to probe the transfer function of any other module or external device by exciting the device with a sine of variable frequency and analyzing the resulting output from that device. This is done by demodulating the device output (=network analyzer input) with the same sine that was used for the excitation and a corresponding cosine, lowpass-filtering, and averaging the two quadratures for a well-defined number of cycles. From the two quadratures, one can extract the magnitude and phase shift of the device's transfer function at the probed frequencies. Let's illustrate the behaviour. For this example, you should connect output 1 to input 1 of your RedPitaya, such that we can compare the analog transfer function to a reference. Make sure you put a 50 Ohm terminator in parallel with input 1.
End of explanation
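For reference, the complex transfer functions returned by na.single() above can be converted to the magnitude and phase that bodeplot displays with plain numpy (assuming, as the plots suggest, that the returned arrays are complex):
mag_db = 20 * np.log10(np.abs(adc1))     # magnitude in dB
phase_deg = np.angle(adc1, deg=True)     # phase in degrees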
# shortcut for na and bpf (bandpass filter)
na = p.networkanalyzer
na.iq_name = 'iq1'
bpf = r.iq2
# setup bandpass
bpf.setup(frequency = 2.5e6, #center frequency
Q=10.0, # the filter quality factor
acbandwidth = 10e5, # ac filter to remove pot. input offsets
phase=0, # nominal phase at center frequency (propagation phase lags not accounted for)
gain=2.0, # peak gain = +6 dB
output_direct='off',
output_signal='output_direct',
input='iq1')
# take transfer function
na.setup(start=1e5, stop=4e6, points=201, rbw=100, avg=3,
amplitude=0.2, input='iq2',output_direct='off', trace_average=1)
tf1 = na.single()
# add a phase advance of 82.3 degrees and measure transfer function
bpf.phase = 82.3
na.setup(start=1e5, stop=4e6, points=201, rbw=100, avg=3,
amplitude=0.2, input='iq2',output_direct='off', trace_average=1)
tf2 = na.single()
f = na.data_x
#plot
from pyrpl.hardware_modules.iir.iir_theory import bodeplot
%matplotlib inline
bodeplot([(f, tf1, "phase = 0.0"), (f, tf2, "phase = %.1f"%bpf.phase)])
Explanation: If your cable is properly connected, you will see that both magnitudes are near 0 dB over most of the frequency range. Near the Nyquist frequency (62.5 MHz), one can see that the internal signal remains flat while the analog signal is strongly attenuated, as it should be to avoid aliasing. One can also see that the delay (phase lag) of the internal signal is much less than the one through the analog signal path.
If you have executed the last example (PDH detection) in this python session, iq0 should still send a modulation to out1, which is added to the signal of the network analyzer, and sampled by input1. In this case, you should see a little peak near the PDH modulation frequency, which was 25 MHz in the example above.
Lorentzian bandpass filter
The iq module can also be used as a bandpass filter with continuously tunable phase. Let's measure the transfer function of such a bandpass with the network analyzer:
End of explanation
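As a point of comparison for the measured curves, the ideal response of a second-order bandpass with the same center frequency, quality factor and peak gain can be written down analytically (a sketch; it ignores propagation delays and the phase register of the iq module):
import numpy as np
f0, Q, peak_gain = 2.5e6, 10.0, 2.0
f_axis = np.linspace(1e5, 4e6, 201)
h_ideal = peak_gain / (1 + 1j * Q * (f_axis / f0 - f0 / f_axis))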
iq = r.iq0
# turn off pfd module for settings
iq.pfd_on = False
# local oscillator frequency
iq.frequency = 33.7e6
# local oscillator phase
iq.phase = 0
iq.input = 'in1'
iq.output_direct = 'off'
iq.output_signal = 'pfd'
print("Before turning on:")
print("Frequency difference error integral", iq.pfd_integral)
print("After turning on:")
iq.pfd_on = True
for i in range(10):
print("Frequency difference error integral", iq.pfd_integral)
Explanation: Frequency comparator module
To lock the frequency of a VCO (Voltage controlled oscillator) to a frequency reference defined by the RedPitaya, the IQ module contains the frequency comparator block. This is how you set it up. You have to feed the output of this module through a PID block to send it to the analog output. As you will see, if your feedback is not already enabled when you turn on the module, its integrator will rapidly saturate (-585 is the maximum value here, while a value of the order of 1e-3 indicates a reasonable frequency lock).
End of explanation
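One possible wiring for the feedback loop mentioned above is sketched here; note that it is only an assumption that the pid input named 'iq0' carries the pfd output signal selected below, and the gain sign and magnitude are placeholders that depend on your VCO:
# a sketch only - route the frequency error through a pid module to analog output 1
pid_vco = r.pid1
pid_vco.input = 'iq0'            # assumed to pick up the pfd signal selected as iq0's output
pid_vco.output_direct = 'out1'
pid_vco.ival = 0
pid_vco.i = -0.1                 # placeholder gain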
#shortcut
iir = r.iir
#print docstring of the setup function
print(iir.setup.__doc__)
#prepare plot parameters
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10, 6)
#setup a complicated transfer function
zeros = [ +4e4j-300,-2e5j-1000]
#[ -4e4j-300, +4e4j-300,-2e5j-1000, +2e5j-1000, -2e6j-3000, +2e6j-3000]
poles = [ -1e6, +5e4j-300]
#[ -1e6, -5e4j-300, +5e4j-300, -1e5j-3000, +1e5j-3000, -1e6j-30000, +1e6j-30000]
designdata = iir.setup(zeros=zeros, poles=poles, loops=None, plot=True);
print("Filter sampling frequency: ", 125./iir.loops,"MHz")
Explanation: IIR module
Sometimes it is interesting to realize even more complicated filters. This is the case, for example, when a piezo resonance limits the maximum gain of a feedback loop. For these situations, the IIR module can implement filters with 'Infinite Impulse Response' (https://en.wikipedia.org/wiki/Infinite_impulse_response). It is the your task to choose the filter to be implemented by specifying the complex values of the poles and zeros of the filter. In the current version of pyrpl, the IIR module can implement IIR filters with the following properties:
- strictly proper transfer function (number of poles > number of zeros)
- poles (zeros) either real or complex-conjugate pairs
- no three or more identical real poles (zeros)
- no two or more identical pairs of complex conjugate poles (zeros)
- pole and zero frequencies should be larger than $\frac{f_\mathrm{nyquist}}{1000}$ (but you can optimize the Nyquist frequency of your filter by tuning the 'loops' parameter)
- the DC-gain of the filter must be 1.0. Despite the FPGA implementation being more flexible, we found this constraint rather practical. If you need different behavior, pass the IIR signal through a PID module and use its input filter and proportional gain. If you still need different behaviour, the file iir.py is a good starting point.
- total filter order <= 16 (realizable with 8 parallel biquads)
- a remaining bug limits the dynamic range to about 30 dB before internal saturation interferes with filter performance
Filters whose poles have a positive real part are unstable by design. Zeros with positive real part lead to non-minimum phase lag. Nevertheless, the IIR module will let you implement these filters.
In general the IIR module is still fragile in the sense that you should verify the correct implementation of each filter you design. Usually you can trust the simulated transfer function. It is nevertheless a good idea to use the internal network analyzer module to actually measure the IIR transfer function with an amplitude comparable to the signal you expect to go through the filter, as to verify that no saturation of internal filter signals limits its performance.
End of explanation
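A few of the constraints listed above can be checked with a couple of lines before calling iir.setup (a rough sketch; the order counting here simply counts each listed pole or zero once):
import numpy as np
zeros = [+4e4j - 300, -2e5j - 1000]
poles = [-1e6, +5e4j - 300]
assert len(poles) > len(zeros), "transfer function must be strictly proper"
assert len(poles) <= 16, "total filter order must not exceed 16"
assert all(np.real(p) < 0 for p in poles), "poles with positive real part give an unstable filter"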
# first thing to check if the filter is not ok
print("IIR overflows before:", bool(iir.overflow))
# measure tf of iir filter
p.rp.iir.input = 'iq1'
p.networkanalyzer.setup(iq_name='iq1', start=1e4, stop=3e6, points = 301, rbw=100, trace_average=1,
amplitude=0.1, input='iir', output_direct='off', logscale=True)
tf = p.networkanalyzer.single()
f = p.networkanalyzer.data_x
# first thing to check if the filter is not ok
print("IIR overflows after:", bool(iir.overflow))
#plot with design data
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10, 6)
from pyrpl.hardware_modules.iir.iir_theory import bodeplot
bodeplot([(f, iir.transfer_function(f),"designed system")] + [(f,tf,"measured system")],xlog=True)
Explanation: If you try changing a few coefficients, you will see that your design filter is not always properly realized. The bottleneck here is the conversion from the analytical expression (poles and zeros) to the filter coefficients, not the FPGA performance. This conversion is (among other things) limited by floating point precision. We hope to provide a more robust algorithm in future versions. If you can obtain filter coefficients by another, preferrably analytical method, this might lead to better results than our generic algorithm.
Let's check if the filter is really working as it is supposed:
End of explanation
#rescale the filter by 20fold reduction of DC gain
iir.setup(zeros=zeros, poles=poles, g=0.1,loops=None,plot=False);
# first thing to check if the filter is not ok
print("IIR overflows before:", bool(iir.overflow))
# measure tf of iir filter
p.rp.iir.input = 'networkanalyzer'
p.networkanalyzer.setup(start=1e4, stop=3e6, points= 301, rbw=100, trace_average=1,
amplitude=0.1, input='iir', output_direct='off', logscale=True)
tf = p.networkanalyzer.single()
f = p.networkanalyzer.data_x
# first thing to check if the filter is not ok
print("IIR overflows after:", bool(iir.overflow))
#plot with design data
%matplotlib inline
import pylab
pylab.rcParams['figure.figsize'] = (10, 6)
from pyrpl.hardware_modules.iir.iir_theory import bodeplot
bodeplot([(f, p.rp.iir.transfer_function(f), "design")]+[(f,tf,"measured system")],xlog=True)
Explanation: As you can see, the filter has trouble realizing large dynamic ranges. With the current standard design software, it takes some 'practice' to design transfer functions which are properly implemented by the code. While most zeros are properly realized by the filter, you see that the first two poles suffer from some kind of saturation. We are working on an automatic rescaling of the coefficients to allow for optimum dynamic range. From the overflow register printed above the plot, you can also see that the network analyzer scan caused an internal overflow in the filter. All these are signs that different parameters should be tried.
A straightforward way to improve filter performance is to adjust the DC-gain and compensate it later with the gain of a subsequent PID module. See for yourself what the parameter g=0.1 (instead of the default value g=1.0) does here:
End of explanation
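If the reduced DC-gain of g=0.1 has to be compensated downstream, a pid module can provide the missing factor of 10 (a sketch; pid1 is an arbitrary choice and its other gains are left off):
p.rp.pid1.input = 'iir'    # the iir signal is a valid input, as used for the network analyzer above
p.rp.pid1.p = 10.0         # compensates the factor-10 reduction of the IIR DC gain
p.rp.pid1.i = 0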
iir = p.rp.iir
# useful diagnostic functions
print("IIR on:", iir.on)
#print("IIR bypassed:", iir.shortcut)
#print("IIR copydata:", iir.copydata)
print("IIR loops:", iir.loops)
print("IIR overflows:", iir.overflow)
print("\nCoefficients (6 per biquad):")
print(iir.coefficients)
# set the unity transfer function to the filter
iir._setup_unity()
Explanation: You see that we have improved the second peak (and avoided internal overflows) at the cost of increased noise in other regions. Of course this noise can be reduced by increasing the NA averaging time. But maybe it will be detrimental to your application? After all, IIR filter design is far from trivial, but this tutorial should have given you enough information to get started and maybe to improve the way we have implemented the filter in pyrpl (e.g. by implementing automated filter coefficient scaling).
If you plan to play more with the filter, these are the remaining internal iir registers:
End of explanation
pid = p.rp.pid0
print(pid.help())
pid.ival #bug: help forgets about pid.ival: current integrator value [volts]
Explanation: 6) The Pyrpl class
The RedPitayas in our lab are mostly used to stabilize one item or another in quantum optics experiments. To do so, the experimenter usually does not want to bother with the detailed implementation on the RedPitaya while trying to understand the physics going on in her/his experiment. For this situation, we have developed the Pyrpl class, which provides an API with high-level functions such as:
# optimal pdh-lock with setpoint 0.1 cavity bandwidth away from resonance
cavity.lock(method='pdh',detuning=0.1)
# unlock the cavity
cavity.unlock()
# calibrate the fringe height of an interferometer, and lock it at local oscillator phase 45 degrees
interferometer.lock(phase=45.0)
First attempts at locking
SECTION NOT READY YET, BECAUSE CODE NOT CLEANED YET
Now let's go for a first attempt to lock something. Say you connect the error signal (transmission or reflection) of your setup to input 1. Make sure that the peak-to-peak of the error signal coincides with the maximum voltages the RedPitaya can handle (-1 to +1 V if the jumpers are set to LV). This is important for getting optimal noise performance. If your signal is too low, amplify it. If it is too high, you should build a voltage divider with 2 resistors of the order of a few kOhm (that way, the input impedance of the RedPitaya of 1 MOhm does not interfere).
Next, connect output 1 to the standard actuator at your hand, e.g. a piezo. Again, you should try to exploit the full -1 to +1 V output range. If the voltage at the actuator must be kept below 0.5V for example, you should make another voltage divider for this. Make sure that you take the input impedance of your actuator into consideration here. If your output needs to be amplified, it is best practice to put the voltage divider after the amplifier so as to also attenuate the noise added by the amplifier. However, when this poses a problem (limited bandwidth because of the capacitance of the actuator), you have to put the voltage divider before the amplifier. Also, this is the moment when you should think about low-pass filtering the actuator voltage. Because of DAC noise, analog low-pass filters are usually more effective than digital ones. A 3dB bandwidth of the order of 100 Hz is a good starting point for most piezos.
You often need two actuators to control your cavity. This is because the output resolution of 14 bits can only realize 16384 different values. This would mean that with a finesse of 15000, you would only be able to set it to resonance or a linewidth away from it, but nothing in between. To solve this, use a coarse actuator to cover at least one free spectral range which brings you near the resonance, and a fine one whose range is 1000 or 10000 times smaller and which gives you fine gradation around the resonance. The coarse actuator should be strongly low-pass filtered (typical bandwidth of 1Hz or even less), the fine actuator can have 100 Hz or even higher bandwidth. Do not get confused here: the unity-gain frequency of your final lock can be 10- or even 100-fold above the 3dB bandwidth of the analog filter at the output - it suffices to increase the proportional gain of the RedPitaya Lockbox.
Once everything is connected, let's grab a PID module, make a shortcut to it and print its helpstring. All modules have a method help() which prints all available registers and their description:
End of explanation
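For the voltage dividers mentioned above, the attenuation is just the usual resistive ratio; for example, mapping the full +/-1 V output onto +/-0.5 V at the actuator (the resistor values are arbitrary placeholders in the few-kOhm range):
r_top, r_bottom = 2.2e3, 2.2e3                  # Ohms
attenuation = r_bottom / (r_top + r_bottom)
print(attenuation)                              # 0.5, i.e. +/-1 V becomes +/-0.5 V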
pid.input = 'in1'
pid.output_direct = 'out1'
#see other available options just for curiosity:
print(pid.inputs)
print(pid.output_directs)
Explanation: We need to inform our RedPitaya about which connections we want to make. The cabling discussed above translates into:
End of explanation
# turn on the laser
offresonant = p.rp.scope.voltage_in1 #volts at analog input 1 with the unlocked cavity
# make a guess of what voltage you will measure at an optical resonance
resonant = 0.5 #Volts at analog input 1
# set the setpoint at relative reflection of 0.75 / rel. transmission of 0.25
pid.setpoint = 0.75*offresonant + 0.25*resonant
Explanation: Finally, we need to define a setpoint. Let's first measure the offset when the laser is away from the resonance, and then measure or estimate how much light gets through on resonance.
End of explanation
pid.i = 0 # make sure gain is off
pid.p = 0
#errorsignal = adc1 - setpoint
if resonant > offresonant: # when we are away from resonance, error is negative.
slopesign = 1.0 # therefore, near resonance, the slope is positive as the error crosses zero.
else:
slopesign = -1.0
gainsign = -slopesign #the gain must be the opposite to stabilize
# the effective gain will in any case be slopesign*gainsign = -1.
#Therefore we must start at the maximum positive voltage, so the negative effective gain leads to a decreasing output
pid.ival = 1.0 #sets the integrator value = output voltage to maximum
from time import sleep
sleep(1.0) #wait for the voltage to stabilize (adjust for a few times the lowpass filter bandwidth)
#finally, turn on the integrator
pid.i = gainsign * 0.1
#no-test
#with a bit of luck, this should work
from time import time
t0 = time()
while True:
relative_error = abs((p.rp.scope.voltage_in1-pid.setpoint)/(offresonant-resonant))
if time()-t0 > 2: #diagnostics every 2 seconds
print("relative error:",relative_error)
t0 = time()
if relative_error < 0.1:
break
sleep(0.01)
if pid.ival <= -1:
print("Resonance missed. Trying again slower..")
pid.ival = 1.2 #overshoot a little
pid.i /= 2
print("Resonance approch successful")
Explanation: Now let's start to approach the resonance. We need to figure out from which side we are coming. The choice is made such that a simple integrator will naturally drift into the resonance and stay there:
End of explanation
#shortcut
iq = p.rp.iq0
iq.setup(frequency=1000e3, bandwidth=[10e3,20e3], gain=0.0,
phase=0, acbandwidth=50000, amplitude=0.4,
input='in1', output_direct='out1',
output_signal='output_direct', quadrature_factor=0)
iq.frequency=10
p.rp.scope.input1='in1'
# shortcut for na
na = p.networkanalyzer
na.iq_name = "iq1"
# pid1 will be our device under test
pid = p.rp.pid0
pid.input = 'iq1'
pid.i = 0
pid.ival = 0
pid.p = 1.0
pid.setpoint = 0
pid.inputfilter = []#[-1e3, 5e3, 20e3, 80e3]
# take the transfer function through pid1, this will take a few seconds...
na.setup(start=0,stop=200e3,points=101,rbw=100,avg=1,amplitude=0.5,input='iq1',output_direct='off', acbandwidth=0)
y = na.single()
x = na.data_x
#plot
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
plt.plot(x*1e-3,np.abs(y)**2);
plt.xlabel("Frequency [kHz]");
plt.ylabel("|S21|");
Explanation: Questions to users: what parameters do you know?
finesse of the cavity? 1000
length? 1.57m
what error signals are available? direct transmission, AC reflection -> directly an analog PDH signal
are modulators available? n/a
what cavity length / laser frequency actuators are available? PZT Mephisto DC - 10kHz, 48MHz opt./V, V_rp amplified x20
laser temperature <1 Hz, 2.5 GHz/V, after the AOM
what is known about them (displacement, bandwidth, amplifiers)?
what analog filters are present? YAG PZT at 10kHz
impose the design of the outputs
More to come
End of explanation
#no-test
from pyrpl import Pyrpl
p = Pyrpl(hostname=HOSTNAME, config='tutorial')
Explanation: 7) The Graphical User Interface
Most of the modules described in section 5 can be controlled via a graphical user interface. The graphical window can be displayed with the following:
WARNING: For the GUI to work fine within an ipython session, the option --gui=qt has to be given to the command launching ipython. This makes sure that an event loop is running.
End of explanation |
2,678 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
How to batch convert sentence lengths to masks in PyTorch? | Problem:
import numpy as np
import pandas as pd
import torch
lens = load_data()
max_len = max(lens)
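# broadcast a [0, max_len) index row against each length: positions smaller than the length become True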
mask = torch.arange(max_len).expand(len(lens), max_len) < lens.unsqueeze(1)
mask = mask.type(torch.LongTensor) |
2,679 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spatial Model fitting in GLS
In this exercise we will fit a linear model using a Spatial structure as covariance matrix.
We will use GLS to get better estimators.
As always we will need to load the necessary libraries.
Step1: Use this to automate the process. Be carefull it can overwrite current results
run ../HEC_runs/fit_fia_logbiomass_logspp_GLS.py /RawDataCSV/idiv_share/plotsClimateData_11092017.csv /apps/external_plugins/spystats/HEC_runs/results/logbiomas_logsppn_res.csv -85 -80 30 35
Importing data
We will use the FIA dataset and for exemplary purposes we will take a subsample of this data.
Also important.
The empirical variogram has been calculated for the entire data set using the residuals of an OLS model.
We will use some auxiliary functions defined in the fit_fia_logbiomass_logspp_GLS.
You can inspect the functions using the ?? symbol.
Step2: Now we will obtain the data from the calculated empirical variogram.
Step3: Instantiating the variogram object
Step4: Instantiating theoretical variogram model | Python Code:
# Load Biospytial modules and etc.
%matplotlib inline
import sys
sys.path.append('/apps')
sys.path.append('..')
sys.path.append('../spystats')
import django
django.setup()
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
## Use the ggplot style
plt.style.use('ggplot')
import tools
Explanation: Spatial Model fitting in GLS
In this exercise we will fit a linear model using a Spatial structure as covariance matrix.
We will use GLS to get better estimators.
As always we will need to load the necessary libraries.
End of explanation
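To spell out what fitting a linear model with a spatial covariance by GLS means, here is a small self-contained sketch (plain numpy/scipy plus statsmodels, which is assumed to be available; the covariance parameters and data are placeholders, not the FIA values used below):
import numpy as np
from scipy.spatial.distance import cdist
import statsmodels.api as sm
rng = np.random.RandomState(0)
coords = rng.uniform(0, 100, size=(200, 2))        # synthetic site coordinates
X = sm.add_constant(rng.normal(size=200))          # intercept + one covariate
sill, range_a, nugget = 0.3, 25.0, 0.1             # placeholder covariance parameters
Sigma = sill * np.exp(-cdist(coords, coords) / range_a) + nugget * np.eye(len(coords))
y = X @ np.array([1.0, 0.5]) + rng.multivariate_normal(np.zeros(len(coords)), Sigma)
gls_fit = sm.GLS(y, X, sigma=Sigma).fit()          # GLS estimate with the spatial covariance
print(gls_fit.params)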
from HEC_runs.fit_fia_logbiomass_logspp_GLS import prepareDataFrame,loadVariogramFromData,buildSpatialStructure, calculateGLS, initAnalysis, fitGLSRobust
section = initAnalysis("/RawDataCSV/idiv_share/FIA_Plots_Biomass_11092017.csv",
"/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",
-130,-60,30,40)
#section = initAnalysis("/RawDataCSV/idiv_share/plotsClimateData_11092017.csv",
# "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",
# -85,-80,30,35)
# IN HEC
#section = initAnalysis("/home/hpc/28/escamill/csv_data/idiv/FIA_Plots_Biomass_11092017.csv","/home/hpc/28/escamill/spystats/HEC_runs/results/variogram/data_envelope.csv",-85,-80,30,35)
section.shape
Explanation: Use this to automate the process. Be careful, it can overwrite current results
run ../HEC_runs/fit_fia_logbiomass_logspp_GLS.py /RawDataCSV/idiv_share/plotsClimateData_11092017.csv /apps/external_plugins/spystats/HEC_runs/results/logbiomas_logsppn_res.csv -85 -80 30 35
Importing data
We will use the FIA dataset and for exemplary purposes we will take a subsample of this data.
Also important:
The empirical variogram has been calculated for the entire data set using the residuals of an OLS model.
We will use some auxiliary functions defined in the fit_fia_logbiomass_logspp_GLS.
You can inspect the functions using the ?? symbol.
End of explanation
gvg,tt = loadVariogramFromData("/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",section)
gvg.plot(refresh=False,with_envelope=True)
resum,gvgn,resultspd,results = fitGLSRobust(section,gvg,num_iterations=10,distance_threshold=1000000)
resum.as_text
plt.plot(resultspd.rsq)
plt.title("GLS feedback algorithm")
plt.xlabel("Number of iterations")
plt.ylabel("R-sq fitness estimator")
resultspd.columns
a = map(lambda x : x.to_dict(), resultspd['params'])
paramsd = pd.DataFrame(a)
paramsd
plt.plot(paramsd.Intercept.loc[1:])
plt.gca().get_yaxis().get_major_formatter().set_useOffset(False)
fig = plt.figure(figsize=(10,10))
plt.plot(paramsd.logSppN.iloc[1:])
variogram_data_path = "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv"
thrs_dist = 100000
emp_var_log_log = pd.read_csv(variogram_data_path)
Explanation: Now we will obtain the data from the calculated empirical variogram.
End of explanation
gvg = tools.Variogram(section,'logBiomass',using_distance_threshold=thrs_dist)
gvg.envelope = emp_var_log_log
gvg.empirical = emp_var_log_log.variogram
gvg.lags = emp_var_log_log.lags
#emp_var_log_log = emp_var_log_log.dropna()
#vdata = gvg.envelope.dropna()
Explanation: Instantiating the variogram object
End of explanation
matern_model = tools.MaternVariogram(sill=0.34,range_a=100000,nugget=0.33,kappa=4)
whittle_model = tools.WhittleVariogram(sill=0.34,range_a=100000,nugget=0.0,alpha=3)
exp_model = tools.ExponentialVariogram(sill=0.34,range_a=100000,nugget=0.33)
gaussian_model = tools.GaussianVariogram(sill=0.34,range_a=100000,nugget=0.33)
spherical_model = tools.SphericalVariogram(sill=0.34,range_a=100000,nugget=0.33)
gvg.model = whittle_model
#gvg.model = matern_model
#models = map(lambda model : gvg.fitVariogramModel(model),[matern_model,whittle_model,exp_model,gaussian_model,spherical_model])
gvg.fitVariogramModel(whittle_model)
import numpy as np
xx = np.linspace(0,1000000,1000)
gvg.plot(refresh=False,with_envelope=True)
plt.plot(xx,whittle_model.f(xx),lw=2.0,c='k')
plt.title("Empirical Variogram with fitted Whittle Model")
def randomSelection(n,p):
    # assuming the `section` dataframe loaded above is the data to subsample
    idxs = np.random.choice(n,p,replace=False)
    random_sample = section.iloc[idxs]
    return random_sample
#################
n = len(section)
p = 3000 # The amount of samples taken (let's do it without replacement)
random_sample = randomSelection(n,100)
Explanation: Instantiating theoretical variogram model
End of explanation |
2,680 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy and Scipy Tutorial
Numpy and Scipy are the most common Python package for mathematical and numerical routines in precompiled, fast functions. The NumPy package provides basic routines for manipulating large arrays and matrices of numeric data.
The SciPy package extends the functionality of NumPy with a substantial collection of useful algorithms, like minimization, Fourier transformation, regression, and other applied mathematical techniques.
References
* Scipy tutorial
* Python for Data Analysis Data Wrangling with Pandas, NumPy, and IPython
First we use the following convention when importing numpy and scipy
Step1: ndarray
Step2: Each array has its own type, data type and shape
Step3: In this case, our array is a 1-dimensional array, thus the shape (4,). Surely we can have multi-dimensional array like this
Step4: Note that len function returns the first dimension. ndim property returns the number of dimensions.
If you want to get the number of elements in an array, you can use the size property.
Step5: You can specify the type of the array elements in the constructor like this
Step6: You can convert array type from one type to another
Step7: You can reshape array using tuples that define new dimension
Step8: Convert Array to other data structure
Convert to list
Step9: Convert to binary string
Step10: Other functions to create array
The arange function is similar to the range function but returns an array
Step11: Array can be copied from existing ones
Step12: The zeros_like and ones_like functions create a new array with the same dimensions
and type of an existing one
Step13: To create an identity matrix of a given size
Step14: The eye function returns matrices with ones along the kth diagonal
Step15: Array Manipulation
Fill array with single value
Step16: Transpose array
Step17: Flatten a multidimensional array to a 1 dimensional
Step18: Concatenate 1-dimensional arrays
Step19: If an array has more than one dimension, it is possible to specify the axis along which multiple arrays are concatenated. By default (without specifying the axis), NumPy concatenates along
the first dimension
Step20: Finally, the dimensionality of an array can be increased using the newaxis constant in bracket notation
Step21: The in statement can be used to test if values are present in an array
Step22: Array Mathematics
Step23: For two-dimensional arrays, multiplication remains elementwise and does not correspond to
matrix multiplication.
Step24: Errors are thrown if arrays do not match in size
Step25: NumPy offers a large library of common mathematical functions that can be applied elementwise to arrays. abs,
sign, sqrt, log, log10, exp, sin, cos, tan, arcsin, arccos,
arctan, sinh, cosh, tanh, arcsinh, arccosh, arctanh, floor, ceil, rint
Step26: Also included in the NumPy module are two important mathematical constants
Step27: Other array manipulation functions
Step28: Comparison operator and value testing
Step29: Some testing functions
Step30: Array item selection
Step31: Array selection using integer array
Step32: Take and put can be used as well
Step33: Scipy
scipy can create polynomial
Step34: Do integral and derivative
Step35: Statistics
numpy provide basic statistics function
Step36: The median can be found
Step37: The covariance for data can be found for multiple variables
Step38: scipy provide more advanced functions
Step39: Random Numbers
Set random seed
Step40: An array of random numbers in the half-open interval [0.0, 1.0) can be generated
Step41: NumPy also includes generators for many other distributions
Step42: The random module can also be used to randomly shuffle the order of items in a list | Python Code:
import numpy as np
import scipy as cp
Explanation: Numpy and Scipy Tutorial
Numpy and Scipy are the most common Python package for mathematical and numerical routines in precompiled, fast functions. The NumPy package provides basic routines for manipulating large arrays and matrices of numeric data.
The SciPy package extends the functionality of NumPy with a substantial collection of useful algorithms, like minimization, Fourier transformation, regression, and other applied mathematical techniques.
References
* Scipy tutorial
* Python for Data Analysis Data Wrangling with Pandas, NumPy, and IPython
First we use the following convention when importing numpy and scipy
End of explanation
a = np.array([1,2,3,4])
b = np.array([5,6])
print(a)
print(b)
Explanation: ndarray: N-dimensional array
We can create a ndarray from a Python list like this
End of explanation
print(type(a))
print(a.dtype)
a.shape
Explanation: Each array has its own type, data type and shape:
End of explanation
a2 = np.array ([[1,2,3], [4,5,6]])
print(a2.shape)
print(len(a2))
print(a2.ndim)
Explanation: In this case, our array is a 1-dimensional array, thus the shape (4,). Surely we can have multi-dimensional array like this:
End of explanation
a2.size
Explanation: Note that len function returns the first dimension. ndim property returns the number of dimensions.
If you want to get the number of elements in an array, you can use the size property.
End of explanation
a3 = np.array([4.5, 7.1, 6.2], float)
a3.dtype
Explanation: You can specify the type of the array elements in the constructor like this:
End of explanation
print(a3.astype(str))
Explanation: You can convert array type from one type to another:
End of explanation
a = np.array(range(10), float)
a = a.reshape((5, 2))
a
Explanation: You can reshape array using tuples that define new dimension
End of explanation
a.tolist()
Explanation: Convert Array to other data structure
Convert to list:
End of explanation
binary_string = a.tobytes()  # tostring() is deprecated in favor of tobytes()
print(binary_string)
a = np.frombuffer(binary_string)  # frombuffer() replaces the deprecated fromstring() for binary data
Explanation: Convert to binary string
End of explanation
np.arange(5, dtype=float)
Explanation: Other functions to create array
The arange function is similar to the range function but returns an array:
End of explanation
b = a.copy ()
b
print(np.zeros(10))
print(np.ones(20))
print(np.zeros((3,5)))
print(np.ones((5,7)))
np.empty((2,3,2))
Explanation: Arrays can be copied from existing ones
End of explanation
a = np.array([[1, 2, 3], [4, 5, 6]], float)
np.zeros_like(a)
Explanation: The zeros_like and ones_like functions create a new array with the same dimensions
and type of an existing one:
End of explanation
np.identity(4, dtype=float)
Explanation: To create an identity matrix of a given size:
End of explanation
np.eye(4, k=1, dtype=float)
Explanation: The eye function returns matrices with ones along the kth diagonal:
End of explanation
a.fill(0)
a
Explanation: Array Manipulation
Fill array with single value
End of explanation
a = np.array([[1, 2, 3], [4, 5, 6]], float)
print(a.transpose())
print(a.T) # shorthand
print(a.T.shape)
Explanation: Transpose array:
End of explanation
a.flatten()
Explanation: Flatten a multidimensional array into a one-dimensional one
End of explanation
a = np.array([1,2], float)
b = np.array([3,4,5,6], float)
c = np.array([7,8,9], float)
np.concatenate((a, b, c))
Explanation: Concatenate 1-dimensional arrays
End of explanation
a = np.array([[1, 2], [3, 4]], float)
b = np.array([[5, 6], [7,8]], float)
print(np.concatenate((a,b)))
print(np.concatenate((a,b), axis=0))
print(np.concatenate((a,b), axis=1))
Explanation: If an array has more than one dimension, it is possible to specify the axis along which multiple arrays are concatenated. By default (without specifying the axis), NumPy concatenates along
the first dimension:
End of explanation
a = np.array([1, 2, 3], float)
print(a)
print (a[:, np.newaxis])
print (a[:,np.newaxis].shape)
print(a[np.newaxis,:])
print(a[np.newaxis,:].shape)
Explanation: Finally, the dimensionality of an array can be increased using the newaxis constant in bracket notation
End of explanation
print(7.1 in a3)
print(2 in a3)
Explanation: The in statement can be used to test if values are present in an array:
End of explanation
a = np.array([1,2,3], float)
b = np.array([5,2,6], float)
print (a + b)
print (a * b)
print (b / a)
print (a % b)
print (b**a)
Explanation: Array Mathematics
End of explanation
a = np.array([[1,2], [3,4]], float)
b = np.array([[2,0], [1,3]], float)
a * b
Explanation: For two-dimensional arrays, multiplication remains elementwise and does not correspond to
matrix multiplication.
End of explanation
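For contrast, a true matrix product of the same two arrays is obtained with np.dot (or the @ operator in Python 3.5+):
a = np.array([[1,2], [3,4]], float)
b = np.array([[2,0], [1,3]], float)
print (np.dot(a, b))   # matrix multiplication
print (a @ b)          # equivalent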
a = np.array([1,2,3], float)
b = np.array([4,5], float)
# a + b should throw error
Explanation: Errors are thrown if arrays do not match in size
End of explanation
a = np.array( [0, 1.5 ,2.4,3.7] )
print (np.sqrt(a))
print (np.floor(a))
print (np.sin(a))
print (np.cos(a))
Explanation: NumPy offers a large library of common mathematical functions that can be applied elementwise to arrays. abs,
sign, sqrt, log, log10, exp, sin, cos, tan, arcsin, arccos,
arctan, sinh, cosh, tanh, arcsinh, arccosh, arctanh, floor, ceil, rint
End of explanation
print (np.pi)
print (np.e)
Explanation: Also included in the NumPy module are two important mathematical constants:
End of explanation
# iterate array like a list
a = np.array([1, 4, 5], int)
for x in a:
print(x)
# loop through multi dimensional array
a = np.array([[1, 2], [3, 4], [5, 6]], float)
for x in a:
print(x)
for x,y in a:
print (x * y)
# basic array operation
print (a.sum())
print (np.sum(a))
print (a.prod())
print (np.prod(a))
print (a.min())
print (a.max())
# print the indice of the min and max value
print (a.argmin())
print (a.argmax())
# multidimensional array can specify the axis
a = np.array([[0, 2], [3, -1], [3, 5]], float)
print (a.min(axis=1))
print (a.max(axis=0))
# sort, clip
a = np.array([6, 2, 5, -1, 0, 3, 7], float)
print (sorted(a))
print (a.clip(0, 5))
# uniq
print (np.unique(a))
# diagonal
a = np.array([[0, 2], [3, -1], [3, 5]], float)
print (a.diagonal())
Explanation: Other array manipulation functions:
End of explanation
a = np.array([1, 3, 0], float)
b = np.array([0, 3, 2], float)
print (a > b)
print (a < b)
print (a == b)
print (np.any(a > b))
print (np.all(a > b))
a = np.array([1, 3, 0], float)
print (np.logical_and(a > 0, a < 3))
print (np.logical_not(b))
c = np.array([False, True, False], bool)
print (np.logical_or(b, c))
Explanation: Comparison operator and value testing
End of explanation
a = np.array([1, 3, 0], float)
print (np.where(a != 0, 1 / a, a))
a = np.array([[0, 1], [3, 0]], float)
print (a.nonzero())
a = np.array([1, np.NaN, np.Inf], float)
print (np.isnan(a))
print (np.isfinite(a))
Explanation: Some testing functions
End of explanation
a = np.array([[6, 4], [5, 9]], float)
print (a[a >= 6])
Explanation: Array item selection:
End of explanation
a = np.array([2, 4, 6, 8], float)
b = np.array([0, 0, 1, 3, 2, 1], int)
print (a[b])
Explanation: Array selection using integer array
End of explanation
a = np.array([2, 4, 6, 8], float)
b = np.array([0, 0, 1, 3, 2, 1], int)
print (a.take(b))
a = np.array([0, 1, 2, 3, 4, 5], float)
b = np.array([9, 8, 7], float)
a.put([0, 3], b)
print (a)
Explanation: Take and put can be used as well
End of explanation
p = cp.poly1d([3,4,5])
print(p)
print (p(1))
print (p([2,3]))
Explanation: Scipy
scipy can create polynomial
End of explanation
print(p.integ())
print(p.deriv())
Explanation: Do integral and derivative
End of explanation
a = np.random.randn(100)
print(np.mean(a))
print(np.var(a))
print(np.std(a))
print(np.cov(a))
Explanation: Statistics
numpy provides basic statistics functions: mean, var, std, cov
End of explanation
np.median(a)
Explanation: The median can be found:
End of explanation
a = np.array([[1,2,3,4,5], [6,5,2,2,2]], float)
print(np.cov(a))
Explanation: The covariance for data can be found for multiple variables:
End of explanation
# Create a standard normal distribution (mean 0, variance 1)
import scipy.stats
rv = cp.stats.norm()
print (rv.stats())
print (rv.mean(), rv.std(), rv.var())
# get cdf, pdf of a particular point
print(rv.pdf(0))
print(rv.cdf(0))
Explanation: scipy provides more advanced functions
End of explanation
np.random.seed(213412)
Explanation: Random Numbers
Set random seed
End of explanation
np.random.randn ()
print (np.random.random())
print( np.random.randint(5, 10))
print ( np.random.rand(5))
print (np.random.rand (2,4))
Explanation: An array of random numbers in the half-open interval [0.0, 1.0) can be generated:
End of explanation
print(np.random.poisson(6.0))
print (np.random.normal())
print ( np.random.normal(1.5, 4.0))
print ( np.random.normal(size=5))
Explanation: NumPy also includes generators for many other distributions: Beta, binomial, chi-square, Dirichlet, exponential, F, Gamma, geometric, Gumbel, hypergeometric, Laplace, logistic, lognormal, logarithmic, multinomial, multivariate, negative binomial, noncentral chi-square, noncentral F, normal, Pareto, Poisson, power, Rayleigh, Cauchy, student's t, triangular, von Mises, Wald, Weibull, and Zipf distributions
End of explanation
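A few more of the generators named above, just to show the call pattern:
print (np.random.binomial(10, 0.5, size=5))
print (np.random.beta(2.0, 5.0, size=3))
print (np.random.exponential(1.5, size=3))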
a = list(range(10))
np.random.shuffle(a)
a
Explanation: The random module can also be used to randomly shuffle the order of items in a list
End of explanation |
2,681 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
End-to-end Recommender System with NVIDIA Merlin and Vertex AI.
This notebook shows how to deploy and execute an end-to-end recommender system on Vertex Pipelines using NVIDIA Merlin.
The notebook covers the following
Step1: Change the following variables according to your definitions.
Step2: Change the following variables ONLY if necessary.
You can leave the default variables.
Step3: 2. Set Pipeline Configurations
Step4: The following cell lists the configuration values in config.py
Step5: 3. Build Pipeline Container Images
The following three commands build the NVTabular preprocessing, HugeCTR training, and Triton serving container images using Cloud Build, and store the container images in Container Registry.
Build NVTabular preprocessing container image
Step6: Build HugeCTR training container image
Step7: Build Triton serving container image
Step8: 4. Configure pipeline parameters
Change the following variables according to your definitions.
Step9: 5. Compile KFP pipeline
Step10: 6. Submit pipeline to Vertex AI | Python Code:
import os
import json
from datetime import datetime
from google.cloud import aiplatform as vertex_ai
from kfp.v2 import compiler
Explanation: End-to-end Recommender System with NVIDIA Merlin and Vertex AI.
This notebook shows how to deploy and execute an end-to-end recommender system on Vertex Pipelines using NVIDIA Merlin.
The notebook covers the following:
Training pipeline overview.
Set pipeline configurations.
Build pipeline container images.
Configure pipeline parameters.
Compile KFP pipeline.
Submit pipeline to Vertex AI.
1. Training Pipeline Overview
The following diagram shows the end-to-end pipeline for preprocessing, training, and serving an NVIDIA Merlin recommender system using Vertex AI.
The pipeline is defined in the src/training_pipelines.py module.
The training_bq pipeline function reads the criteo data from Cloud Storage and performs the following steps:
Preprocess the data using NVTabular, as described in the 01-dataset-preprocessing.ipynb notebook:
Convert CSV data to Parquet and write to Cloud Storage.
Transform the data using an NVTabular workflow.
Write the transformed data as parquet files and the workflow object to Cloud Storage.
Train a DeepFM model using HugeCTR. This step submits a Custom Training Job to Vertex AI training, as described in 02-model-training-hugectr.ipynb.
Export the model as a Triton Ensemble to be served using Triton server. The ensemble consists of the NVTabular preprocessing workflow and a HugeCTR model.
The exported Triton ensemble model is uploaded to Vertex AI model resources.
Once the model is uploaded to Vertex AI, along with a reference to its serving Triton container, it can be deployed to Vertex AI Prediction, as described in 03-model-inference-hugectr.ipynb.
All the components of the pipelines are defined in the src/pipelines/components.py module.
<img src="images/merlin-vertex-e2e.png" alt="Pipeline"/>
Setup
In this section of the notebook you configure your environment settings, including a GCP project, a GCP compute region, a Vertex AI service account and a Vertex AI staging bucket.
Make sure to update the below cells with the values reflecting your environment.
First import all the necessary python packages.
End of explanation
# Project definitions
PROJECT_ID = '<YOUR PROJECT ID>' # Change to your project.
REGION = '<LOCATION OF RESOURCES>' # Change to your region.
# Service Account address
VERTEX_SA = f'vertex-sa@{PROJECT_ID}.iam.gserviceaccount.com' # Change to your service account with Vertex AI Admin permissions.
# Bucket definitions
BUCKET = '<YOUR BUCKET NAME>' # Change to your bucket. All the files will be stored here.
Explanation: Change the following variables according to your definitions.
End of explanation
# Bucket definitions
MODEL_NAME = 'deepfm'
MODEL_VERSION = 'v01'
MODEL_DISPLAY_NAME = f'criteo-hugectr-{MODEL_NAME}-{MODEL_VERSION}'
WORKSPACE = f'gs://{BUCKET}/{MODEL_DISPLAY_NAME}'
TRAINING_PIPELINE_NAME = f'merlin-training-pipeline'
# Docker definitions for data preprocessing
NVT_IMAGE_NAME = 'nvt-preprocessing'
NVT_IMAGE_URI = f'gcr.io/{PROJECT_ID}/{NVT_IMAGE_NAME}'
NVT_DOCKERNAME = 'nvtabular'
# Docker definitions for model training
HUGECTR_IMAGE_NAME = 'hugectr-training'
HUGECTR_IMAGE_URI = f'gcr.io/{PROJECT_ID}/{HUGECTR_IMAGE_NAME}'
HUGECTR_DOCKERNAME = 'hugectr'
# Docker definitions for model serving
TRITON_IMAGE_NAME = f'triton-serving'
TRITON_IMAGE_URI = f'gcr.io/{PROJECT_ID}/{TRITON_IMAGE_NAME}'
TRITON_DOCKERNAME = 'triton'
Explanation: Change the following variables ONLY if necessary.
You can leave the default variables.
End of explanation
os.environ['PROJECT_ID'] = PROJECT_ID
os.environ['REGION'] = REGION
os.environ['BUCKET'] = BUCKET
os.environ['WORKSPACE'] = WORKSPACE
os.environ['TRAINING_PIPELINE_NAME'] = TRAINING_PIPELINE_NAME
os.environ['MODEL_NAME'] = MODEL_NAME
os.environ['MODEL_VERSION'] = MODEL_VERSION
os.environ['MODEL_DISPLAY_NAME'] = MODEL_DISPLAY_NAME
os.environ['MEMORY_LIMIT'] = '680'
os.environ['CPU_LIMIT'] = '96'
os.environ['GPU_LIMIT'] = '8'
os.environ['GPU_TYPE'] = 'NVIDIA_TESLA_A100'
os.environ['MACHINE_TYPE'] = 'a2-highgpu-1g'
os.environ['ACCELERATOR_TYPE'] = 'NVIDIA_TESLA_A100'
os.environ['ACCELERATOR_NUM'] = '1'
os.environ['NUM_WORKERS'] = '12'
os.environ['NUM_SLOTS'] = '26'
os.environ['MAX_NNZ'] = '2'
os.environ['EMBEDDING_VECTOR_SIZE'] = '11'
os.environ['MAX_BATCH_SIZE'] = '64'
os.environ['MODEL_REPOSITORY_PATH'] = '/model'
os.environ['NVT_IMAGE_URI'] = NVT_IMAGE_URI
os.environ['HUGECTR_IMAGE_URI'] = HUGECTR_IMAGE_URI
os.environ['TRITON_IMAGE_URI'] = TRITON_IMAGE_URI
Explanation: 2. Set Pipeline Configurations
End of explanation
from src.pipelines import config
import importlib
importlib.reload(config)
for key, value in config.__dict__.items():
if key.isupper(): print(f'{key}: {value}')
Explanation: The following cell lists the configuration values in config.py
End of explanation
FILE_LOCATION = './src'
! gcloud builds submit --config src/cloudbuild.yaml --substitutions _DOCKERNAME=$NVT_DOCKERNAME,_IMAGE_URI=$NVT_IMAGE_URI,_FILE_LOCATION=$FILE_LOCATION --timeout=2h --machine-type=e2-highcpu-8
Explanation: 3. Build Pipeline Container Images
The following three commands build the NVTabular preprocessing, HugeCTR training, and Triton serving container images using Cloud Build, and store the container images in Container Registry.
Build NVTabular preprocessing container image
End of explanation
FILE_LOCATION = './src'
! gcloud builds submit --config src/cloudbuild.yaml --substitutions _DOCKERNAME=$HUGECTR_DOCKERNAME,_IMAGE_URI=$HUGECTR_IMAGE_URI,_FILE_LOCATION=$FILE_LOCATION --timeout=2h --machine-type=e2-highcpu-8
Explanation: Build HugeCTR training container image
End of explanation
FILE_LOCATION = './src'
! gcloud builds submit --config src/cloudbuild.yaml --substitutions _DOCKERNAME=$TRITON_DOCKERNAME,_IMAGE_URI=$TRITON_IMAGE_URI,_FILE_LOCATION=$FILE_LOCATION --timeout=24h --machine-type=e2-highcpu-8
Explanation: Build Triton serving container image
End of explanation
# List of path(s) to criteo file(s) or folder(s) in GCS.
# Training files
TRAIN_PATHS = ['gs://renatoleite-criteo-full/'] # Training CSV file to be preprocessed.
# Validation files
VALID_PATHS = ['gs://renatoleite-criteo-full/day_0'] # Validation CSV file to be preprocessed.
# Data preprocessing parameters
num_output_files_train = 24 # Number of output files after converting CSV to Parquet
num_output_files_valid = 1 # Number of output files after converting CSV to Parquet
# Training parameters
NUM_EPOCHS = 0
MAX_ITERATIONS = 25000
EVAL_INTERVAL = 1000
EVAL_BATCHES = 500
EVAL_BATCHES_FINAL = 2500
DISPLAY_INTERVAL = 200
SNAPSHOT_INTERVAL = 0
PER_GPU_BATCHSIZE = 2048
LR = 0.001
DROPOUT_RATE = 0.5
parameter_values = {
'train_paths': TRAIN_PATHS,
'valid_paths': VALID_PATHS,
'shuffle': json.dumps(None), # select PER_PARTITION, PER_WORKER, FULL, or None.
'num_output_files_train': num_output_files_train,
'num_output_files_valid': num_output_files_valid,
'per_gpu_batch_size': PER_GPU_BATCHSIZE,
'max_iter': MAX_ITERATIONS,
'max_eval_batches': EVAL_BATCHES ,
'eval_batches': EVAL_BATCHES_FINAL ,
'dropout_rate': DROPOUT_RATE,
'lr': LR ,
'num_epochs': NUM_EPOCHS,
'eval_interval': EVAL_INTERVAL,
'snapshot': SNAPSHOT_INTERVAL,
'display_interval': DISPLAY_INTERVAL
}
Explanation: 4. Configure pipeline parameters
Change the following variables according to your definitions.
End of explanation
from src.pipelines import training_pipelines
compiled_pipeline_path = 'merlin_training_pipeline.json'
compiler.Compiler().compile(
pipeline_func=training_pipelines.training_pipeline,
package_path=compiled_pipeline_path
)
Explanation: 5. Compile KFP pipeline
End of explanation
job_name = f'merlin_training_{datetime.now().strftime("%Y%m%d%H%M%S")}'
pipeline_job = vertex_ai.PipelineJob(
display_name=job_name,
template_path=compiled_pipeline_path,
enable_caching=False,
parameter_values=parameter_values,
)
pipeline_job.submit(service_account=VERTEX_SA)
Explanation: 6. Submit pipeline to Vertex AI
End of explanation |
2,682 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Derivatives of a TPS
Step1: We start by defining the source and target landmarks. Notice that, in this first example source = target!!!
Step2: The warp can be effectively computed, although the rendering will not appear to be correct...
Step3: The next step is to define the set of points at which the derivative of the previous TPS warp must be evaluated. In this case, we use the function meshgrid to generate points inside the convex hull defined by the source landmarks.
Step4: We evaluate the derivative, reshape the output, and visualize the result.
Step5: If everything goes as expected, the upper corner of the images defining the derivative of the warp wrt the x and y coordinates of the first of the source landmarks should both contain values close to 1.
Step6: The sum of all the derivatives wrt the x coordinates should produce an all 1 image
Step7: and so should the sum of all derivatives wrt the y coordinates.
Step8: Finally, the derivatives with respect to the x and y coordinates should be in this case exactly the same!!! | Python Code:
import os
import numpy as np
import scipy.io as sio
import matplotlib.pyplot as plt
from menpo.shape import PointCloud
import menpo.io as mio
from menpofit.transform import DifferentiableThinPlateSplines
Explanation: Derivatives of a TPS
End of explanation
src_landmarks = PointCloud(np.array([[-1, -1],
[-1, 1],
[ 1, -1],
[ 1, 1]]))
tgt_landmarks = PointCloud(np.array([[-1, -1],
[-1, 1],
[ 1, -1],
[ 1, 1]]))
Explanation: We start by defining the source and target landmarks. Notice that, in this first example source = target!!!
End of explanation
tps = DifferentiableThinPlateSplines(src_landmarks, tgt_landmarks)
np.allclose(tps.apply(src_landmarks).points, tgt_landmarks.points)
Explanation: The warp can be effectively computed, although the rendering will not appear to be correct...
End of explanation
x = np.arange(-1, 1, 0.01)
y = np.arange(-1, 1, 0.01)
xx, yy = np.meshgrid(x, y)
points = np.array([xx.flatten('F'), yy.flatten('F')]).T  # 'F' = column-major order; current NumPy requires a string order argument
Explanation: The next step is to define the set of points at which the derivative of the previous TPS warp must be evaluated. In this case, we use the function meshgrid to generate points inside the convex hull defined by the source landmarks.
End of explanation
%matplotlib inline
dW_dxy = tps.d_dl(points)
reshaped = dW_dxy.reshape(xx.shape + (4,2))
#dW_dx
plt.subplot(241)
plt.imshow(reshaped[:,:,0,0])
plt.subplot(242)
plt.imshow(reshaped[:,:,1,0])
plt.subplot(243)
plt.imshow(reshaped[:,:,2,0])
plt.subplot(244)
plt.imshow(reshaped[:,:,3,0])
#dW_dy
plt.subplot(245)
plt.imshow(reshaped[:,:,0,1])
plt.subplot(246)
plt.imshow(reshaped[:,:,1,1])
plt.subplot(247)
plt.imshow(reshaped[:,:,2,1])
plt.subplot(248)
plt.imshow(reshaped[:,:,3,1])
Explanation: We evaluate the derivative, reshape the output, and visualize the result.
End of explanation
print(reshaped[1:5,1:5,0,0])
print(reshaped[1:5,1:5,0,1])
Explanation: If everything goes as expected, the upper corner of the images defining the derivative of the warp wrt the x and y coordinates of the first of the source landmarks should both contain values close to 1.
End of explanation
summed_x = np.sum(reshaped[:,:,:,0], axis=-1)
np.allclose(np.ones(xx.shape), summed_x)
plt.imshow(summed_x)
Explanation: The sum of all the derivatives wrt the x coordinates should produce an all 1 image
End of explanation
summed_y = np.sum(reshaped[:,:,:,1], axis=-1)
np.allclose(np.ones(xx.shape), summed_y)
plt.imshow(summed_y)
Explanation: and so should the sum of all derivatives wrt the y coordinates.
End of explanation
np.allclose(reshaped[:,:,:,0], reshaped[:,:,:,1])
Explanation: Finally, the derivatives with respect to the x and y coordinates should be in this case exactly the same!!!
End of explanation |
2,683 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: Describing subsidence due to fault motion
From here, we can think about the subsidence rate of the hangingwall as a function of horizontal extension velocity, $u$. We can think of the hangingwall as an enormous, floppy sled that glides down the slope of the fault plane. Consider a point on the hangingwall. In the reference frame of the footwall, the thickness of the underlying hangingwall block shrinks over time as the hangingwall moves to the "right". If the fault plane is fixed, then the vertical rate of change of surface elevation, $v$, in a reference frame fixed to the footwall, is equal to the rate of change of local hangingwall thickness. The time rate of change of hangingwall thickness, $H_h$, is the product of the spatial gradient in thickness times the extension rate, $u$,
$$v = \frac{dH_h}{dt} = -u \frac{dH_h}{dx}$$
If the footwall is rigid (which we'll assume for now), the time rate of change of surface elevation due to hangingwall motion---again, in the reference frame of the footwall---equals the rate of change of hangingwall thickness.
The hangingwall thickness equals its surface elevation, $\eta(x,t)$, minus the fault-plane elevation, $z(x)$
Step2: Numerical implementation
The numerical approach is to divide the problem into two parts
Step3: Example 2
Step4: Example 3
Step5: Example 4
Step6: Example 5 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
alpha0 = 60.0 # fault dip at surface, degrees
z0 = 0.0 # elevation of surface trace
h = 10.0 # detachment depth, km
G0 = np.tan(np.deg2rad(60.0))
x = np.arange(0, 41.0)
z = z0 - h * (1.0 - np.exp(-x * G0 / h))
plt.plot(x, z, "k")
plt.xlabel("Distance (km)")
plt.ylabel("Fault plane elevation (km)")
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Using the Landlab ListricKinematicExtender component
(Greg Tucker, University of Colorado Boulder, March 2021)
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This tutorial demonstrates how to use the ListricKinematicExtender component. ListricKinematicExtender models the vertical subsidence and lateral tectonic motion associated with a listric detachment fault. A listric fault is one that shallows with depth, such that the fault plane has a concave-upward profile. The word "kinematic" indicates that this component does not calculate the mechanics of stress and strain involved in an extensional fault; it simply aims to mimic them geometrically. The basic concept, described in detail below, is to divide the resulting tectonics into a vertical component and a horizontal component. The vertical component is modeled by imposing a subsidence rate that decays exponentially with distance from the fault's initial surface location. The horizontal component is modeled by shifting elevation values (and optionally other fields) by one cell at regular time intervals, based on a given extension rate.
Theory
Describing a listric fault plane
Consider a fault plane with dip angle $\alpha$ relative to the horizontal. The fault plane has a listric shape, in which the dip angle at the surface is $\alpha_0$, and it becomes increasingly shallow with depth, ultimately asymptoting to horizontal at depth $h$ (we'll refer to $h$ as the detachment depth). We can express the dip angle in terms of gradient $G = \tan\alpha$, and $G_0 = \tan\alpha_0$. Let the gradient decay exponentially with distance from its surface trace, $x$, starting from the surface value $G_0$:
$$G(x) = G_0 e^{-x/\lambda}$$
where $\lambda$ is a length scale that we'll define in a moment. Because $G$ is the rate of change of fault plane elevation, $z$ with distance $x$, we can write:
$$\frac{dz}{dx} = -G_0 e^{-x/\lambda}\hskip1em\mbox{(1)}$$
Integrating,
$$z(x) = G_0\lambda e^{-x/\lambda} + C$$
Evaluate constant of integration by noting that $z = z_0$ (the elevation of the initial surface trace) at $x = 0$,
$$z_0 = G_0\lambda + C$$
so
$$z(x) = z_0 - G_0\lambda (1 - e^{-x/\lambda})$$
Note that the fault elevation asymptotes to a detachment depth $h = G_0\lambda$. This gives us a physical basis for $\lambda$, and means we can express our fault plane geometry by $h$ instead of $\lambda$:
$$\boxed{z(x) = z_0 - h \left(1 - e^{-x G_0 / h}\right)}$$
Let's plot it:
End of explanation
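The same geometry can be wrapped in a small reusable helper. This is only a sketch with a hypothetical function name of my own (it is not part of the Landlab API); it uses the same kilometre units as the cell above and confirms that the plane asymptotes to z0 minus the detachment depth far from the trace.
import numpy as np

def listric_fault_elevation(x, z0=0.0, fault_dip_deg=60.0, detachment_depth=10.0):
    # gradient of the fault plane at its surface trace, tan(alpha0)
    g0 = np.tan(np.deg2rad(fault_dip_deg))
    return z0 - detachment_depth * (1.0 - np.exp(-x * g0 / detachment_depth))

# far from the trace the fault plane should approach z0 - detachment_depth
print(listric_fault_elevation(np.array([0.0, 5.0, 200.0])))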
dt = 100000.0 # time span, y
xf = 10000.0 # initial location of surface trace of fault, m
u = 0.01 # extension rate, m/y
h = 10000.0 # detachment depth, m
nprofiles = 5
x = np.arange(0.0, 40100.0, 100.0)
dist_from_fault = np.maximum(x - xf, 0.0)
z = z0 - h * (1.0 - np.exp(-dist_from_fault * G0 / h))
plt.plot(x, z, "r", label="Fault plane")
for i in range(nprofiles):
t = i * dt
shifted_dist_from_fault = np.maximum(dist_from_fault - u * t, 0.0)
# Calculate the surface topography
eta = h * (
np.exp(-dist_from_fault * G0 / h) - np.exp(-shifted_dist_from_fault * G0 / h)
)
# Calculate thickness
# thickness = h * (1.0 - np.exp(-shifted_dist_from_fault * G0 / h))
# eta won't be less than the fault-plane elevation
eta[eta < z] = z[eta < z]
plt.plot(x, eta, "k", label="Surface elevation " + str(i))
# plt.plot(x, thickness, 'b', label='Thickness' + str(i))
plt.xlabel("Distance (km)")
plt.ylabel("Elevation (km)")
plt.legend()
Explanation: Describing subsidence due to fault motion
From here, we can think about the subsidence rate of the hangingwall as a function of horizontal extension velocity, $u$. We can think of the hangingwall as an enormous, floppy sled that glides down the slope of the fault plane. Consider a point on the hangingwall. In the reference frame of the footwall, the thickness of the underlying hangingwall block shrinks over time as the hangingwall moves to the "right". If the fault plane is fixed, then the vertical rate of change of surface elevation, $v$, in a reference frame fixed to the footwall, is equal to the rate of change of local hangingwall thickness. The time rate of change of hangingwall thickness, $H_h$, is the product of the spatial gradient in thickness times the extension rate, $u$,
$$v = \frac{dH_h}{dt} = -u \frac{dH_h}{dx}$$
If the footwall is rigid (which we'll assume for now), the time rate of change of surface elevation due to hangingwall motion---again, in the reference frame of the footwall---equals the rate of change of hangingwall thickness.
The hangingwall thickness equals its surface elevation, $\eta(x,t)$, minus the fault-plane elevation, $z(x)$:
$$H_h(x,t) = \eta(x,t) - (z_0 - h (1 - e^{-x G_0 / h}))$$
where again $x$ is the initial location of the fault's surface trace. Suppose that there were no erosion or sedimentation. We can rewrite the above as
$$H_h(x,t) = \eta(x-ut, 0) - (z_0 - h (1 - e^{-(x-ut) G_0 / h}))$$
As an illustration, suppose the topographic surface is initially level and equal to zero. In that case,
$$H_h(x,t) = h (1 - e^{-(x-ut) G_0 / h}))$$
The corresponding height of the topographic surface at a given position and time is
$$\boxed{\eta(x,t) = z(x) + H_h(x,t) = h e^{-x G_0 / h} - h e^{-(x-ut) G_0 / h}}$$
Our implementation trick will be to apply this subsidence to grid cells in an Eulerian frame, but also capture the horizontal component of motion by shifting hangingwall grid cells every time the cumulative horizontal displacement equals or exceeds one grid cell width.
The block of code below shows an example of an initially level topographic surface that has accumulated subsidence over time according to the above equation. Note how the subsidence profile reflects the "rightward" motion of the hangingwall relative to the (fixed) footwall.
End of explanation
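The boxed closed form can also be written as a small function, which is handy for spot-checking the looped profiles above. This is a sketch of my own (the helper name and defaults are assumptions, chosen to match the parameter values used in the cell above), not part of Landlab.
def surface_elevation(x, t, xf=10000.0, u=0.01, h=10000.0, dip_deg=60.0):
    # closed-form eta(x, t) for an initially level surface, clipped at the exposed fault plane
    g0 = np.tan(np.deg2rad(dip_deg))
    d = np.maximum(x - xf, 0.0)              # distance from the fault's surface trace
    d_shift = np.maximum(d - u * t, 0.0)     # the same distance in the shifted hangingwall frame
    eta = h * (np.exp(-d * g0 / h) - np.exp(-d_shift * g0 / h))
    fault = -h * (1.0 - np.exp(-d * g0 / h))
    return np.maximum(eta, fault)

# spot-check a few points on the final (t = 4 * dt) profile plotted above
print(surface_elevation(np.array([10000.0, 20000.0, 40000.0]), t=400000.0))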
import numpy as np
import matplotlib.pyplot as plt
from landlab import RasterModelGrid, imshow_grid
from landlab.components import ListricKinematicExtender
# parameters
nrows = 3
ncols = 51
dx = 1000.0 # grid spacing, m
nsteps = 20 # number of iterations
dt = 2.5e5 # time step, y
extension_rate = 0.001 # m/y
detachment_depth = 10000.0 # m
fault_dip = 60.0 # fault dip angle, degrees
fault_loc = 10000.0 # m from left side of model
# Create grid and elevation field
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
elev = grid.add_zeros("topographic__elevation", at="node")
# Instantiate component
extender = ListricKinematicExtender(
grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
)
# Plot the starting elevations, in cross-section (middle row)
midrow = np.arange(ncols, 2 * ncols, dtype=int)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], "k")
plt.xlabel("Distance (km)")
plt.ylabel("Elevation (m)")
plt.xlim([10.0, 40.0])
# Add a plot of the fault plane
dist_from_fault = grid.x_of_node - fault_loc
dist_from_fault[dist_from_fault < 0.0] = 0.0
x0 = detachment_depth / np.tan(np.deg2rad(fault_dip))
fault_plane = -(detachment_depth * (1.0 - np.exp(-dist_from_fault / x0)))
plt.plot(grid.x_of_node[midrow] / 1000.0, fault_plane[midrow], "r")
for i in range(nsteps):
extender.run_one_step(dt)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], "k")
# Add the analytic solution
total_time = nsteps * dt
G0 = np.tan(np.deg2rad(fault_dip))
shifted_dist_from_fault = np.maximum(dist_from_fault - extension_rate * total_time, 0.0)
elev_pred = detachment_depth * (
    np.exp(-dist_from_fault * G0 / detachment_depth)
    - np.exp(-shifted_dist_from_fault * G0 / detachment_depth)
)
elev_pred = np.maximum(elev_pred, fault_plane)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev_pred[midrow], "b:")
Explanation: Numerical implementation
The numerical approach is to divide the problem into two parts: subsidence that results from the descent of the hangingwall as it moves along the fault plane, and lateral translation of topography. The mathematical basis for this starts with expressing the hangingwall thickness, $H_h$, in terms of surface topography, $\eta$, and fault plane elevation, $z$:
$$H_h = \eta - z$$
We can therefore decompose the local rate of hangingwall subsidence (in the footwall frame of reference) into two components:
$$v = -u \left( \frac{d\eta}{dx} - \frac{dz}{dx}\right)$$
The second term represents subsidence of hangingwall rock that occurs because of downward motion along the fault plane. Substituting equation (1), this component is:
$$v_s = -u G_0 \exp(-x G_0 / h)$$
where $x$ is defined as distance from the original position of the surface fault trace. However, it only applies where the hangingwall is still present, and not to those locations where the hangingwall has slipped off to reveal the fault plane at the surface. Therefore, we will track the $x$ coordinate of the "left" edge of the hangingwall, and only apply this component of subsidence to those locations. The subsidence rate component $v_s$ is applied continuously to the topography, i.e., at every time step.
The second component, represented by $-u d\eta / dx$, represents the local subsidence that occurs because the topography is translating lateral with respect to the footwall. This component we do not want to apply continuously, because it would result in artificial diffusion of the topography. Instead, the algorithm periodically shifts the topography in the entire hangingwall portion of the grid by one cell to the "right". To accomplish this, the algorithm keeps track of cumulative lateral motion since the last shift, executing a new shift whenever that value exceeds one grid-cell width, and decrementing the cumulative lateral motion by one cell width. This method preserves the hangingwall topography (and any other associated fields), at the expense of introducing episodic lateral tectonic motion. However, because of the direct translation, the relative change in topography between adjacent cells is minimized.
Fields
The ListricKinematicExtender requires topographic__elevation as a field; it applies subsidence to this field. It creates one output field: subsidence_rate records the latest subsidence rate at grid nodes.
There are also two optional fields that are used only if the user selects the track_thickness option, which is designed to support combining this component with lithosphere flexure by also tracking changes in crustal thickness that result from extension. upper_crust_thickness is an input-and-output field that contains the current thickness of the upper crust (however defined), and the cumulative_subsidence_depth field records the accumulated subsidence since the most recent horizontal shift (see below).
Vertical subsidence
The run_one_step() method calculates the subsidence rate field at nodes using the exponential function above, then multiplies this by the given time-step duration dt and subtracts this value from the node elevations.
Alternatively, a user may wish to calculate the subsidence rates without having the compent actually apply them to the elevation field. To accomplish this, the component provides a public function update_subsidence_rate. This function updates the subsidence rate field without changing elevations.
Horizontal motion
To represent horizontal motion of the hangingwall relative to the footwall (which is the fixed datum), the component keeps track of cumulative horizontal motion, updating it each time run_one_step is called. When the cumulative motion equals or exceeds one grid-cell width, the component shifts the elevation values in the hangingwall portion of the domain to the "right", representing offset of one cell width. The cumulative horizontal offset is then decremented by one grid cell. The position of the "left" edge of the hangingwall is also increased by one cell width (its initial position is the user-specified fault position). This means that the boundary between the footwall and hangingwall also migrates to the "right" at the specified extension rate, and that the area of active subsidence gradually shrinks over time. However, the subsidence rate profile is still calculated using fault position. Mathematically, this can be expressed as:
$$v(x, t) = \begin{cases}
-u G_0 \exp ( -(x - x_f) G_0 / h ) & \mbox{if } x > x_h(t) \
0 & \mbox{otherwise}
\end{cases}$$
$$x_h(t) = x_f + u t$$
where $x_f$ is the initial $x$ position of the surface fault trace, and $x_h$ represents the "left" edge of the hangingwall.
In addition to "shifting" elevation values, the user may pass a list of node field names in the fields_to_shift parameter, and these will also be shifted.
Integrating with flexure
By itself, ListricKinematicExtender does not include rift-shoulder uplift, which in nature (at least in the author's understanding) occurs as a result of flexural isostatic uplift in response to extensional thinning of the crust, and also possibly as a result of thermal isostatic uplift in the underlying mantle. To handle the first of these, ListricKinematicExtender is designed to work together with a flexural isostasy component. The basic idea is to calculate explicitly the thinning of the crustal column that results from extension, so that this reduction in crustal thickness can be used by an isostasy component such as Flexure.
The basic concept behind ListricKinematicExtender is that thinning occurs when the hangingwall block is dragged away from the footwall, in effect sliding down the fault plane, as illustrated in the plot of topography and fault plane above. In order to combine with a flexural isostasy component, we need to keep track of the progressive reduction in crustal thickness. This tracking is activated when the track_crustal_thickness option is set to True (the default is False). The user must provide an upper_crust_thickness node field. As noted above, the algorithm separates the vertical and horizontal components of motion, with horizontal motion only explicitly implemented when the cumulative displacement equals or exceeds a full grid-cell width. In keeping with this approach, the thickness field is only modified when a cell-shift occurs. But that approach could cause a problem if one wishes to incorporate flexural isostasy: a natural approach to flexural isostasy is to keep track of an evolving crustal thickness field (which thins under erosion and thickens under deposition), and calculate surface topography as the sum of a crustal datum, flexural offset, and crustal thickness above the datum. To enable this approach, we somehow need to keep track of the extensional subsidence that occurs between horizontal offsets. To do this, the ListricKinematicExtender keeps track of cumulative subsidence since the last horizontal shift. This quantity is tracked by the optional output field cumulative_subsidence_depth (the field is created only if the user sets track_crustal_thickness to True). One can then calculate elevation at any time step by summing a crustal datum elevation, the thickness of crust above this datum, the isostatic deflection, and the cumulative extensional subsidence. Whenever a shift occurs, the thickness field is included in the shift: those crustal columns to the "right" of the hangingwall edge are shifted by one cell, along with the topography. The cumulative subsidence since the last shift is then subtracted from the thickness field to record the accumulated thinning associated with that shift. This method effectively captures the thinning of crust along a listric fault plane without needing to explicitly track the fault plane or of separate hangingwall and footwall columns.
Examples
Example 1: Quasi-1D
The first example uses a quasi-1D setup to represent an initially level topography on which subsidence progressively accumulates.
End of explanation
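The cumulative-offset bookkeeping described above can be sketched in a few lines. This is only an illustration of the idea and not the component's actual internals; the helper name is mine, and it assumes elev_2d is a 2-D view of the elevation field and hangingwall_cols is an integer array of the hangingwall column indices.
def advance_extension(elev_2d, hangingwall_cols, cum_offset, dt,
                      dx=1000.0, extension_rate=0.001):
    # accumulate horizontal motion; shift the hangingwall one cell per full cell width
    cum_offset += extension_rate * dt
    while cum_offset >= dx:
        # integer-array indexing copies the right-hand side first, so this is a safe one-cell shift
        elev_2d[:, hangingwall_cols[1:]] = elev_2d[:, hangingwall_cols[:-1]]
        cum_offset -= dx
    return cum_offset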
period = 15000.0 # period of sinusoidal variations in initial topography, m
ampl = 500.0 # amplitude of variations, m
# Create grid and elevation field
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
elev = grid.add_zeros("topographic__elevation", at="node")
elev[:] = ampl * np.sin(2 * np.pi * grid.x_of_node / period)
# Instantiate component
extender = ListricKinematicExtender(
grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
)
# Plot the starting elevations, in cross-section (middle row)
midrow = np.arange(ncols, 2 * ncols, dtype=int)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], "k")
plt.xlabel("Distance (km)")
plt.ylabel("Elevation (m)")
plt.grid(True)
# Add a plot of the fault plane
dist_from_fault = grid.x_of_node - fault_loc
dist_from_fault[dist_from_fault < 0.0] = 0.0
x0 = detachment_depth / np.tan(np.deg2rad(fault_dip))
fault_plane = -(detachment_depth * (1.0 - np.exp(-dist_from_fault / x0)))
plt.plot(grid.x_of_node[midrow] / 1000.0, fault_plane[midrow], "r")
for i in range(nsteps):
extender.run_one_step(dt)
c = 1.0 - i / nsteps
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], color=[c, c, c])
Explanation: Example 2: quasi-1D with topography
End of explanation
from landlab import imshow_grid
# parameters
nrows = 31
ncols = 51
dx = 1000.0 # grid spacing, m
nsteps = 20 # number of iterations
dt = 2.5e5 # time step, y
extension_rate = 0.001 # m/y
detachment_depth = 10000.0 # m
fault_dip = 60.0 # fault dip angle, degrees
fault_loc = 10000.0 # m from left side of model
period = 15000.0 # period of sinusoidal variations in initial topography, m
ampl = 500.0 # amplitude of variations, m
# Create grid and elevation field
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
elev = grid.add_zeros("topographic__elevation", at="node")
elev[:] = (
ampl
* np.sin(2 * np.pi * grid.x_of_node / period)
* np.sin(2 * np.pi * grid.y_of_node / period)
)
# Instantiate component
extender = ListricKinematicExtender(
grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
)
# Plot the starting topography
imshow_grid(grid, elev)
for i in range(nsteps // 2):
extender.run_one_step(dt)
imshow_grid(grid, elev)
for i in range(nsteps // 2):
extender.run_one_step(dt)
imshow_grid(grid, elev)
imshow_grid(grid, extender._fault_normal_coord)
# Plot a cross-section
start_node = 6 * ncols
end_node = start_node + ncols
midrow = np.arange(start_node, end_node, dtype=int)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], "k")
plt.xlabel("Distance (km)")
plt.ylabel("Elevation (m)")
plt.grid(True)
# Add a plot of the fault plane
dist_from_fault = grid.x_of_node - fault_loc
dist_from_fault[dist_from_fault < 0.0] = 0.0
x0 = detachment_depth / np.tan(np.deg2rad(fault_dip))
fault_plane = -(detachment_depth * (1.0 - np.exp(-dist_from_fault / x0)))
plt.plot(grid.x_of_node[midrow] / 1000.0, fault_plane[midrow], "r")
Explanation: Example 3: extending to 2D
End of explanation
from landlab import HexModelGrid
# parameters
nrows = 31
ncols = 51
dx = 1000.0 # grid spacing, m
nsteps = 20 # number of iterations
dt = 2.5e5 # time step, y
extension_rate = 0.001 # m/y
detachment_depth = 10000.0 # m
fault_dip = 60.0 # fault dip angle, degrees
fault_loc = 10000.0 # m from left side of model
period = 15000.0 # period of sinusoidal variations in initial topography, m
ampl = 500.0 # amplitude of variations, m
# Create grid and elevation field
grid = HexModelGrid((nrows, ncols), spacing=dx, node_layout="rect")
elev = grid.add_zeros("topographic__elevation", at="node")
elev[:] = (
ampl
* np.sin(2 * np.pi * grid.x_of_node / period)
* np.sin(2 * np.pi * grid.y_of_node / period)
)
# Instantiate component
extender = ListricKinematicExtender(
grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
)
# Plot the starting topography
imshow_grid(grid, elev)
for i in range(nsteps // 2):
extender.run_one_step(dt)
imshow_grid(grid, elev)
for i in range(nsteps // 2):
extender.run_one_step(dt)
imshow_grid(grid, elev)
# Plot a cross-section
start_node = 6 * ncols
end_node = start_node + ncols
midrow = np.arange(start_node, end_node, dtype=int)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], "k")
plt.xlabel("Distance (km)")
plt.ylabel("Elevation (m)")
plt.grid(True)
# Add a plot of the fault plane
dist_from_fault = grid.x_of_node - fault_loc
dist_from_fault[dist_from_fault < 0.0] = 0.0
x0 = detachment_depth / np.tan(np.deg2rad(fault_dip))
fault_plane = -(detachment_depth * (1.0 - np.exp(-dist_from_fault / x0)))
plt.plot(grid.x_of_node[midrow] / 1000.0, fault_plane[midrow], "r")
Explanation: Example 4: hex grid
End of explanation
from landlab.components import Flexure
# parameters
nrows = 31
ncols = 51
dx = 1000.0 # grid spacing, m
nsteps = 20 # number of iterations
dt = 2.5e5 # time step, y
extension_rate = 0.001 # m/y
detachment_depth = 10000.0 # m
fault_dip = 60.0 # fault dip angle, degrees
fault_loc = 10000.0 # m from left side of model
period = 15000.0 # period of sinusoidal variations in initial topography, m
ampl = 500.0 # amplitude of variations, m
# flexural parameters
eet = 5000.0 # effective elastic thickness, m (here very thin)
crust_datum = -10000.0 # elevation of crustal reference datum, m
rhoc = 2700.0 # crust density, kg/m3
g = 9.8 # guess what?
# Create grid and elevation field
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
elev = grid.add_zeros("topographic__elevation", at="node")
elev[:] = (
ampl
* np.sin(2 * np.pi * grid.x_of_node / period)
* np.sin(2 * np.pi * grid.y_of_node / period)
)
thickness = grid.add_zeros("upper_crust_thickness", at="node")
load = grid.add_zeros("lithosphere__overlying_pressure_increment", at="node")
# Instantiate components
extender = ListricKinematicExtender(
grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
track_crustal_thickness=True,
)
cum_subs = grid.at_node["cumulative_subsidence_depth"]
flexer = Flexure(grid, eet=eet, method="flexure")
deflection = grid.at_node["lithosphere_surface__elevation_increment"]
# set up thickness and flexure
unit_wt = rhoc * g
thickness[:] = elev - crust_datum
load[:] = unit_wt * thickness
flexer.update()
init_flex = deflection.copy()
# show initial deflection field (positive downward)
imshow_grid(grid, init_flex)
for i in range(nsteps):
extender.run_one_step(dt)
load[:] = unit_wt * thickness
flexer.update()
net_deflection = deflection - init_flex
elev[:] = crust_datum + thickness - (cum_subs + net_deflection)
imshow_grid(grid, thickness)
imshow_grid(grid, net_deflection)
imshow_grid(grid, cum_subs)
imshow_grid(grid, elev)
plt.plot(elev.reshape(31, 51)[:, 10], label="Rift shoulder")
plt.plot(elev.reshape(31, 51)[:, 12], label="Rift basin")
plt.plot(-net_deflection.reshape(31, 51)[:, 10], label="Isostatic uplift profile")
plt.xlabel("North-south distance (km)")
plt.ylabel("Height (m)")
plt.legend()
Explanation: Example 5: combining with lithosphere flexure
End of explanation |
2,684 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Product of independent complex-circular Gaussian Random Variable
Let $z\sim \mathcal{CN}(0, \sigma^2)$ and if $z = x + {\rm j}y$, both $x$ and $y$ are zero mean Gaussian r.v. with variance $\sigma^2/2$.
Step1: Express $z = |z|e^{j\phi}$. The magnitude $|z|$ is Rayleigh distributed while the phase $\phi = \operatorname{arg}(z)$ is uniform over the interval $[-\pi, \pi)$.
Step2: Consider two independent complex Gaussian r.v. $w$ and $z$, both $\mathcal{CN}(0, \sigma^2)$. Let $w = |w|e^{j\theta}$ and write the product $p = wz = |p|e^{j\omega}$.
Q What is the distribution of the phase of the product $p = wz$? | Python Code:
# magic
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# prettyplot stuff
import seaborn as sns
sns.set(style='ticks', palette='Set2')
sns.despine()
mu = 0
sigmasq = 1
sd = np.sqrt(sigmasq)
# Generate complex gaussian r.v. samples
x = np.random.normal(loc = mu, scale = sd/np.sqrt(2), size = 1000)
y = np.random.normal(loc = mu, scale = sd/np.sqrt(2), size = 1000)
z = x + 1j*y
h = plt.plot(np.real(z), np.imag(z), 'o')
plt.axis('equal')
plt.grid(True)
Explanation: Product of independent complex-circular Gaussian Random Variable
Let $z\sim \mathcal{CN}(0, \sigma^2)$ and if $z = x + {\rm j}y$, both $x$ and $y$ are zero mean Gaussian r.v. with variance $\sigma^2/2$.
End of explanation
z_mag = np.abs(z)
z_arg = np.angle(z)
plt.subplot(211)
plt.hist(z_mag, 20)
plt.ylabel('Histogram of |z|')
plt.subplot(212)
plt.hist(z_arg, 20)
plt.ylabel('Histogram of arg(z)')
Explanation: Express $z = |z|e^{j\phi}$. The magnitude $|z|$ is Rayleigh distributed while the phase $\phi = \operatorname{arg}(z)$ is uniform over the interval $[-\pi, \pi)$.
End of explanation
# Generate complex gaussian r.v. samples for w
u = np.random.normal(loc = mu, scale = sd/np.sqrt(2), size = 1000)
v = np.random.normal(loc = mu, scale = sd/np.sqrt(2), size = 1000)
w = u + 1j*v
p = w*z
p_arg = np.angle(p)
h = plt.hist(p_arg)
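A quick quantitative check of the histogram above, assuming SciPy is available (it is not imported elsewhere in this notebook): rescale the product phases to the unit interval and run a Kolmogorov-Smirnov test against the uniform distribution.
from scipy import stats
# map the angles from [-pi, pi) onto [0, 1) and test against uniform(0, 1)
pval = stats.kstest((p_arg + np.pi) / (2 * np.pi), 'uniform').pvalue
print("KS p-value (large values are consistent with a uniform phase):", pval)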
Explanation: Consider two independent complex Gaussian r.v. $w$ and $z$, both $\mathcal{CN}(0, \sigma^2)$. Let $w = |w|e^{j\theta}$ and write the product $p = wz = |p|e^{j\omega}$.
Q: What is the distribution of the phase $\omega$ of the product $p = wz$?
End of explanation |
2,685 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solutions to the proposed exercises
Basic level
1.
Write a small program that asks the user to enter two numbers ($x_1$ and $x_2$), computes the following operation and displays its result ($x$)
Step1: 2.
Write a program that asks the user for a number (of ninjas). If that number is less than 50 and is even, the program will print "puedo con ellos!"; otherwise it will print "no me vendría mal una ayudita..."
Note
Step2: 3.
Write a while loop that prints every number from 0 up to a number entered by the user. If the number entered is negative you can make one of two choices
Step3: 4.
Use range to generate the even numbers from 0 to 10, both inclusive. What would you change to generate from 2 to 10?
Step4: 5.
What is the difference between the break statement and the continue statement?
When a break or a continue instruction is reached inside a loop, the current iteration is interrupted. With break the loop is abandoned altogether, whereas with continue execution moves on to the next iteration. For example, the following loop prints whether a number is even or odd
Step5: 6.
Make a shopping list and print the following elements
Step6: 7.
Create a list with all the even numbers from 0 to 10 in a single line.
Step7: 8.
Create the following matrix in one line
Step8: 9.
Make the shopping list from the previous exercise again, but this time store each item of the shopping list together with its price. Then print the following elements
Step9: 10.
Is it a good idea to use the set function to remove the duplicate elements of a list?
Using the set function to remove the duplicate elements of a list loses the original order of the list. Moreover, it will not work if the list contains dictionaries or lists, because those are not hashable objects.
11.
Using the tuple you created in the exercise about tuples, build a dictionary from your shopping list. Once you have created the dictionary
Step10: Intermediate level
1.
Now that we have seen how to create arrays from an object, and other functions that create arrays with preset types, create different arrays with the functions above for 1D, 2D and 3D and print them. Try using different dtypes to see how the arrays change. If you have doubts about how to use them, you can check the official documentation.
Step11: 2.
Thanks to the different ways NumPy lets us index an array, we can perform operations in a vectorized way, avoiding loops. This means more efficient code, and code that is shorter and more readable. With that in mind, let's do the following exercise.
Generate a random square matrix of size 1000. Once it is created, generate a new matrix where rows and columns 0 and $n-1$ are repeated 500 times and the centre of the matrix stays exactly equal to the original. An example of this can be seen below
Step12: 3.
A rotation matrix $R$ is a matrix that represents a rotation in Euclidean space. This matrix $R$ is written as
$$ R = \left( \begin{matrix} \cos\theta & -\sin\theta \
\sin\theta & \cos\theta
\end{matrix} \right) $$
where $\theta$ is the angle rotated counterclockwise.
These matrices are widely used in geometry, computer science and physics. Examples of their use include computing the rotation of an object in a graphics system, rotating a camera about a point in space, etc.
These matrices have the properties that they are orthogonal (their inverse equals their transpose) and their determinant is 1. So, generate an array and check whether that array is a rotation matrix.
Step13: 4.
Given the array shown below, complete the following items
Step14: Multiply array1 by $\frac{\pi}{4}$ and compute the sine of the resulting array.
Generate a new array whose value is twice the previous result plus the vector array1.
Compute the norm of the resulting vector. To do this, check the documentation to see which function performs this task, and pay attention to the parameters it takes.
Step15: 5.
Given the following matrix, complete the following items
Step16: Compute the mean and the standard deviation of the matrix.
Get the minimum and maximum elements of the matrix.
Compute the determinant, the trace and the transpose of the matrix.
Compute the singular value decomposition of the matrix.
Compute the sum of the elements on the main diagonal of the matrix.
Step17: 6.
Sometimes a problem requires removing the repeated elements of a list, keeping only one occurrence of each. It is very common for users to call the set function for this task, turning the list into a set without repeated elements, sorting them, and then converting the result back into a list. This is not entirely wrong, but in the worst case we may be wasting memory, time and computation only to end up, when there are no repeated elements, with nothing more than a sorted list.
Es por ello, por lo que existe otra forma de hacerlo. Utilizando lo ya visto, obtén una lista sin elementos repetidos que mantengan el orden de la lista original. Para hacerlo aún más divertido, no uses más de 4 líneas. | Python Code:
x1 = int(input("Introduce un número: "))
x2 = int(input("Y ahora otro: "))
x = (20 * x1 - x2)/(x2 + 3)
print("x =",x)
Explanation: Solutions to the proposed exercises
Basic level
1.
Write a small program that asks the user to enter two numbers ($x_1$ and $x_2$), computes the following operation and displays its result ($x$):
$$ x = \frac{20 * x_1 - x_2}{x_2 + 3} $$
If you try to operate on the result of the input function you will get an error telling you that two str values cannot be subtracted. Use the int function to convert the data entered at the keyboard into numeric data.
End of explanation
num = int(input("Introduce número de ninjas: "))
if num < 50 and num%2==0:
print("Puedo con ellos!")
else:
print("No me vendría mal una ayudita...")
Explanation: 2.
Write a program that asks the user for a number (of ninjas). If that number is less than 50 and is even, the program will print "puedo con ellos!"; otherwise it will print "no me vendría mal una ayudita..."
Note: to know whether a number is even or not you must use the $\%$ operator, and to check that two conditions hold at the same time, the logical operator and
End of explanation
num = int(input("Intoduce un número: "))
# Opción 1: si el usuario introduce un número negativo pedir otro número
while num < 0:
num = int(input("Introduce un número: "))
i = 0
while i <= num:
print(i)
i += 1
num = int(input("Intoduce un número: "))
# Opción 2: si el usuario introduce un número negativo, contar hacia atrás
sign = lambda x: (1, -1)[x < 0]
i = 0
s = sign(num)
while i*s <= num*s:
print(i)
i += s
Explanation: 3.
Write a while loop that prints every number from 0 up to a number entered by the user. If the number entered is negative you can make one of two choices: ask the user for a positive number instead, or count backwards. You choose!
End of explanation
# Even numbers from 0 to 10, both inclusive:
for i in range(0, 11, 2):
print(i)
# To generate from 2 to 10, simply change the start of the range to 2
for i in range(2, 11, 2):
print(i)
Explanation: 4.
Use range to generate the even numbers from 0 to 10, both inclusive. What would you change to generate from 2 to 10?
End of explanation
for num in range(2,10):
if num % 2 == 0:
print(num, "es par!")
continue
print(num, "es impar!")
Explanation: 5.
What is the difference between the break statement and the continue statement?
When a break or a continue instruction is reached inside a loop, the current iteration is interrupted. With break the loop is abandoned altogether, whereas with continue execution moves on to the next iteration. For example, the following loop prints whether a number is even or odd:
End of explanation
lista_compra = ['Leche', 'Chocolate', 'Arroz', 'Macarrones']
print("Penúltimo elemento: ", lista_compra[-2])
print("Del segundo al cuarto elemento: ", lista_compra[1:5])
print("Los tres últimos elementos: ", lista_compra[-3:])
print("Todos: ", lista_compra)
del lista_compra[2]
print(lista_compra)
Explanation: 6.
Make a shopping list and print the following elements:
The next-to-last element
From the second to the fourth element
The last three
All of them!
Finally, remove the third element of the list using the del statement
End of explanation
# solution 1:
[x for x in range(11) if x % 2 == 0]
# solution 2:
list(range(0, 11, 2))
Explanation: 7.
Create a list with all the even numbers from 0 to 10 in a single line.
End of explanation
[[j for j in range(i*i, i*i+3)] for i in range(1,3)]
Explanation: 8.
Create the following matrix in one line:
$$ M_{2 \times 3} = \left( \begin{matrix} 1 & 2 & 3 \
4 & 5 & 6 \end{matrix} \right)$$
End of explanation
tuplas_compra = [('Leche', 2), ('Chocolate', 1), ('Arroz', 1.5),
('Macarrones', 2.1)]
print("Precio del tercer elemento: ", tuplas_compra[2][1])
print("Nombre del último elemento: ", tuplas_compra[-1][0])
print("Nombre y precio del primer elemento", tuplas_compra[0])
Explanation: 9.
Make the shopping list from the previous exercise again, but this time store each item of the shopping list together with its price. Then print the following elements:
The price of the third element.
The name of the last element.
Both the name and the price of the first element.
End of explanation
dict_compra = dict(tuplas_compra)
for compra in dict_compra.items():
print("he comprado {} y me ha costado {}".format(compra[0], compra[1]))
print('He comprado leche?', 'Leche' in dict_compra)
del dict_compra['Arroz']
print(dict_compra)
Explanation: 10.
Is it a good idea to use the set function to remove the duplicate elements of a list?
Using the set function to remove the duplicate elements of a list loses the original order of the list. Moreover, it will not work if the list contains dictionaries or lists, because those are not hashable objects.
11.
Using the tuple you created in the exercise about tuples, build a dictionary from your shopping list. Once you have created the dictionary:
Print every item you are going to buy, building the sentence "he comprado __ y me ha costado __" ("I bought __ and it cost me __") with the format function.
Check whether you added a particular item (for example a carton of milk) to the shopping list
Remove an element using del
End of explanation
import numpy as np
print(np.ones(5, dtype=np.int8))
print(np.random.random(5))
print(np.full(shape=(3,3), fill_value=4, dtype=np.int8))
print(np.arange(6))
print(np.linspace(start=1, stop=6, num=10))
print(np.eye(N=2))
print(np.identity(n=3, dtype=np.int8))
Explanation: Intermediate level
1.
Now that we have seen how to create arrays from an object, and other functions that create arrays with preset types, create different arrays with the functions above for 1D, 2D and 3D and print them. Try using different dtypes to see how the arrays change. If you have doubts about how to use them, you can check the official documentation.
End of explanation
from time import time
def clona_cols_rows(size=1000, clone=500, print_matrix=False,
create_random=True):
if create_random:
m = np.random.random((size,size))
else:
m = np.arange(size*size).reshape(size,size)
n = np.zeros((size+clone*2, size+clone*2))
antes = time()
# en primer lugar, copiamos m en el centro de n
for i in range(size):
for j in range(size):
n[i+clone, j+clone] = m[i,j]
# después, copiamos la primera fila/columna en las
# primeras clone filas/columnas
for i in range(clone):
n[i,clone:clone+size] = m[0]
n[clone:clone+size, i] = m[:,0]
# una vez copiada la primera fila/columna, pasamos a
# copiar la última/columna
for i in range(clone+size, size+clone*2):
n[i, clone:clone+size] = m[-1]
n[clone:clone+size, i] = m[:,-1]
# por último, copiamos los valores de los extremos en las esquinas
for i in range(clone):
n[i, :clone] = np.full(clone, m[0,0])
n[i, size+clone:] = np.full(clone, m[0,-1])
n[i+size+clone, :clone] = np.full(clone, m[-1,0])
n[i+size+clone, size+clone:] = np.full(clone, m[-1,-1])
despues = time()
if print_matrix:
print(m)
print(n)
return despues-antes
clona_cols_rows(size=3, clone=2, print_matrix=True, create_random=False)
print("Tiempo con bucle for: ", clona_cols_rows(), " s")
def clona_vec_cols_rows(size=1000, clone=500, print_matrix=False,
create_random=True):
if create_random:
m = np.random.random((size,size))
else:
m = np.arange(size*size).reshape(size,size)
n = np.zeros((size+clone*2, size+clone*2))
antes=time()
# en primer lugar, insertamos m en el centro de n
n[clone:clone+size, clone:clone+size] = m
# Copiamos la primera fila de m, en las primeras filas
# de n, y la última fila de m en las últimas filas de n
n[:clone, clone:clone+size] = m[0]
n[size+clone:, clone:size+clone] = m[-1]
# Lo mismo para las columnas
n[:, :clone] = np.repeat(n[:,clone],clone).reshape(2*clone+size, clone)
n[:, size+clone:] = np.repeat(n[:,-(clone+1)],clone).reshape(2*clone+size, clone)
despues=time()
if print_matrix:
print(m)
print(n)
return despues-antes
clona_vec_cols_rows(size=3, clone=2, print_matrix=True, create_random=False)
print("Tiempo vectorizando: ", clona_vec_cols_rows(), " s")
Explanation: 2.
Thanks to the different ways NumPy lets us index an array, we can perform operations in a vectorized way, avoiding loops. This means more efficient code, and code that is shorter and more readable. With that in mind, let's do the following exercise.
Generate a random square matrix of size 1000. Once it is created, generate a new matrix where rows and columns 0 and $n-1$ are repeated 500 times and the centre of the matrix stays exactly equal to the original. An example of this can be seen below:
$$ \left( \begin{matrix}
1 & 2 & 3 \
2 & 3 & 4 \
3 & 4 & 5
\end{matrix} \right) \Longrightarrow \left( \begin{matrix}
1 & 1 & 1 & 2 & 3 & 3 & 3 \
1 & 1 & 1 & 2 & 3 & 3 & 3 \
1 & 1 & 1 & 2 & 3 & 3 & 3 \
2 & 2 & 2 & 3 & 4 & 4 & 4 \
3 & 3 & 3 & 4 & 5 & 5 & 5 \
3 & 3 & 3 & 4 & 5 & 5 & 5 \
3 & 3 & 3 & 4 & 5 & 5 & 5 \end{matrix} \right) $$
Implement it both with a for loop and by vectorizing the computation with what we have seen so far, to compare the timings of the two variants. To measure the time you can use the time module.
End of explanation
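For reference, NumPy ships a function that does exactly this edge replication, which can serve as a sanity check and a third timing point. This is a sketch of my own (the helper name is hypothetical); it relies on np and time already imported above.
def clona_pad(size=1000, clone=500, create_random=True):
    # same result as the two implementations above, using NumPy's edge-replication padding
    if create_random:
        m = np.random.random((size, size))
    else:
        m = np.arange(size * size).reshape(size, size)
    t0 = time()
    n = np.pad(m, clone, mode='edge')   # repeat the border values `clone` times on every side
    t1 = time()
    return t1 - t0

print("np.pad time:", clona_pad(), "s")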
R = np.random.random((2,2))
if np.allclose(R.T, np.linalg.inv(R)) and np.isclose(np.linalg.det(R), 1.0):
print("Matriz de rotación!")
else:
print("No es matriz de rotación u_u")
Explanation: 3.
A rotation matrix $R$ is a matrix that represents a rotation in Euclidean space. This matrix $R$ is written as
$$ R = \left( \begin{matrix} \cos\theta & -\sin\theta \
\sin\theta & \cos\theta
\end{matrix} \right) $$
where $\theta$ is the angle rotated counterclockwise.
These matrices are widely used in geometry, computer science and physics. Examples of their use include computing the rotation of an object in a graphics system, rotating a camera about a point in space, etc.
These matrices have the properties that they are orthogonal (their inverse equals their transpose) and their determinant is 1. So, generate an array and check whether that array is a rotation matrix.
End of explanation
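For comparison, here is a short sketch that builds an actual rotation matrix from an angle theta; both checks should come back True for it (a random matrix will almost never pass them).
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
# orthogonality and unit determinant, checked with floating-point tolerances
print(np.allclose(R_true.T, np.linalg.inv(R_true)), np.isclose(np.linalg.det(R_true), 1.0))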
array1 = np.array([ -1., 4., -9.])
Explanation: 4.
Given the array shown below, complete the following items:
End of explanation
array2 = np.sin(array1 * np.pi/4)
array2
array3 = array2 * 2 + array1
array3
np.linalg.norm(array3)
Explanation: Multiply array1 by $\frac{\pi}{4}$ and compute the sine of the resulting array.
Generate a new array whose value is twice the previous result plus the vector array1.
Compute the norm of the resulting vector. To do this, check the documentation to see which function performs this task, and pay attention to the parameters it takes.
End of explanation
n_array1 = np.array([[ 1., 3., 5.], [7., -9., 2.], [4., 6., 8.]])
Explanation: 5.
Given the following matrix, complete the following items:
End of explanation
media = np.mean(n_array1)
desv_tipica = np.std(n_array1)
print("Media =", media, " y desv típica =", desv_tipica)
maximo = np.max(n_array1)
minimo = np.min(n_array1)
print("Máximo =", maximo, " y minimo =", minimo)
det = np.linalg.det(n_array1)
traza = np.trace(n_array1)
traspuesta = n_array1.T
U, S, V = np.linalg.svd(n_array1)
print(U)
print(S)
print(V)
result = np.diag(n_array1).sum()  # sum of the main diagonal of the matrix (equals its trace)
print("Resultado: ", result)
Explanation: Compute the mean and the standard deviation of the matrix.
Get the minimum and maximum elements of the matrix.
Compute the determinant, the trace and the transpose of the matrix.
Compute the singular value decomposition of the matrix.
Compute the sum of the elements on the main diagonal of the matrix.
End of explanation
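A quick check that the decomposition reproduces the original matrix: np.linalg.svd returns U, the singular values S, and V already transposed, so multiplying them back should recover n_array1.
# U @ diag(S) @ V should reproduce the original matrix
print(np.allclose(U @ np.diag(S) @ V, n_array1))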
a = [1,1,1,2,5,3,4,8,5,8]
b = []
list(filter(lambda x: b.append(x) if not x in b else False, a))
print("Lista original:\t\t", a)
print("Lista sin repetidos:\t", b)
Explanation: 6.
Sometimes a problem requires removing the repeated elements of a list, keeping only one occurrence of each. It is very common for users to call the set function for this task, turning the list into a set without repeated elements, sorting them, and then converting the result back into a list. This is not entirely wrong, but in the worst case we may be wasting memory, time and computation only to end up, when there are no repeated elements, with nothing more than a sorted list.
That is why there is another way to do it. Using what we have already seen, obtain a list without repeated elements that keeps the order of the original list. To make it even more fun, don't use more than 4 lines.
End of explanation |
2,686 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sebastian Raschka
back to the matplotlib-gallery at https
Step1: <font size="1.5em">More info about the %watermark extension</font>
Step2: <br>
<br>
Matplotlib Formatting III
Step3: <br>
<br>
Let's get fancy
⬆
Step4: <br>
<br>
Thinking outside the box
⬆
Step5: <br>
<br>
I love when things are transparent, free and clear
⬆
Step6: <br>
<br>
Markers -- All good things come in threes!
⬆ | Python Code:
%load_ext watermark
%watermark -u -v -d -p matplotlib,numpy
Explanation: Sebastian Raschka
back to the matplotlib-gallery at https://github.com/rasbt/matplotlib-gallery
End of explanation
%matplotlib inline
Explanation: <font size="1.5em">More info about the %watermark extension</font>
End of explanation
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(10)
for i in range(1, 4):
plt.plot(x, i * x**2, label='Group %d' % i)
plt.legend(loc='best')
plt.show()
Explanation: <br>
<br>
Matplotlib Formatting III: What it takes to become a legend
I won't be a rock star. I will be a legend.
-- Freddie Mercury
<br>
<br>
Sections
Back to square one
Let's get fancy
Thinking outside the box
I love when things are transparent, free and clear
Markers -- All good things come in threes!
<br>
<br>
Back to square one
⬆
End of explanation
x = np.arange(10)
for i in range(1, 4):
plt.plot(x, i * x**2, label='Group %d' % i)
plt.legend(loc='best', fancybox=True, shadow=True)
plt.show()
Explanation: <br>
<br>
Let's get fancy
⬆
End of explanation
fig = plt.figure()
ax = plt.subplot(111)
x = np.arange(10)
for i in range(1, 4):
ax.plot(x, i * x**2, label='Group %d' % i)
ax.legend(loc='upper center',
bbox_to_anchor=(0.5, # horizontal
1.15),# vertical
ncol=3, fancybox=True)
plt.show()
fig = plt.figure()
ax = plt.subplot(111)
x = np.arange(10)
for i in range(1, 4):
ax.plot(x, i * x**2, label='Group %d' % i)
ax.legend(loc='upper center',
bbox_to_anchor=(1.15, 1.02),
ncol=1, fancybox=True)
plt.show()
Explanation: <br>
<br>
Thinking outside the box
⬆
End of explanation
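One practical note, shown as a sketch: a legend placed outside the axes like this can get clipped when the figure is saved. Passing the legend as an extra artist together with a tight bounding box usually avoids that (the file name here is just an example).
fig = plt.figure()
ax = plt.subplot(111)
x = np.arange(10)
for i in range(1, 4):
    ax.plot(x, i * x**2, label='Group %d' % i)
lgd = ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.15), ncol=3, fancybox=True)
# bbox_extra_artists + bbox_inches='tight' keep the outside legend inside the saved file
fig.savefig('legend_outside.png', bbox_extra_artists=(lgd,), bbox_inches='tight')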
x = np.arange(10)
for i in range(1, 4):
plt.plot(x, i * x**2, label='Group %d' % i)
plt.legend(loc='upper right', framealpha=0.1)
plt.show()
Explanation: <br>
<br>
I love when things are transparent, free and clear
⬆
End of explanation
from itertools import cycle
x = np.arange(10)
colors = ['blue', 'red', 'green']
color_gen = cycle(colors)
for i in range(1, 4):
plt.scatter(x, i * x**2, label='Group %d' % i, color=next(color_gen))
plt.legend(loc='upper left')
plt.show()
from itertools import cycle
x = np.arange(10)
colors = ['blue', 'red', 'green']
color_gen = cycle(colors)
for i in range(1, 4):
plt.scatter(x, i * x**2, label='Group %d' % i, color=next(color_gen))
plt.legend(loc='upper left', scatterpoints=1)
plt.show()
from itertools import cycle
x = np.arange(10)
colors = ['blue', 'red', 'green']
color_gen = cycle(colors)
for i in range(1, 4):
plt.plot(x, i * x**2, label='Group %d' % i, marker='o')
plt.legend(loc='upper left')
plt.show()
from itertools import cycle
x = np.arange(10)
colors = ['blue', 'red', 'green']
color_gen = cycle(colors)
for i in range(1, 4):
plt.plot(x, i * x**2, label='Group %d' % i, marker='o')
plt.legend(loc='upper left', numpoints=1)
plt.show()
Explanation: <br>
<br>
Markers -- All good things come in threes!
⬆
End of explanation |
2,687 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Webscraping with Beautiful Soup
Intro
In this tutorial, we'll be scraping information on the state senators of Illinois, available here, as well as the list of bills each senator has sponsored (e.g., here.
The Tools
Requests
Beautiful Soup
Step1: Part 1
Step2: 1.2 Soup it
Now we use the BeautifulSoup function to parse the response into an HTML tree. This returns an object (called a soup object) which contains all of the HTML in the original document.
Step3: 1.3 Find Elements
BeautifulSoup has a number of functions to find things on a page. Like other webscraping tools, Beautiful Soup lets you find elements by their
Step4: NB
Step5: That's a lot! Many elements on a page will have the same html tag. For instance, if you search for everything with the a tag, you're likely to get a lot of stuff, much of which you don't want. What if we wanted to search for HTML tags ONLY with certain attributes, like particular CSS classes?
We can do this by adding an additional argument to the find_all
In the example below, we are finding all the a tags, and then filtering those with class = "sidemenu".
Step6: Oftentimes a more efficient way to search and find things on a website is by CSS selector. For this we have to use a different method, select(). Just pass a string into the .select() to get all elements with that string as a valid CSS selector.
In the example above, we can use "a.sidemenu" as a CSS selector, which returns all a tags with class sidemenu.
Step7: Challenge 1
Find all the <a> elements in class mainmenu
Step8: 1.4 Get Attributes and Text of Elements
Once we identify elements, we want to access the information in that element. Oftentimes this means two things
Step9: It's a tag! Which means it has a text member
Step10: Sometimes we want the value of certain attributes. This is particularly relevant for a tags, or links, where the href attribute tells us where the link goes.
You can access a tag’s attributes by treating the tag like a dictionary
Step11: Challenge 2
Find all the href attributes (url) from the mainmenu.
Step12: Part 2
Believe it or not, that's all you need to scrape a website. Let's apply these skills to scrape http
Step13: 2.2 Find the right elements and text.
Now let's try to get a list of rows in that table. Remember that rows are identified by the tr tag.
Step14: But remember, find_all gets all the elements with the tr tag. We can use smart CSS selectors to get only the rows we want.
Step15: We can use the select method on anything. Let's say we want to find everything with the CSS selector td.detail in an item of the list we created above.
Step16: Most of the time, we're interested in the actual text of a website, not its tags. Remember, to get the text of an HTML element, use the text member.
Step17: Now we can combine the beautifulsoup tools with our basic python skills to scrape an entire web page.
Step18: 2.3 Loop it all together
Let's use a for loop to get 'em all!
Step19: Challege 3
Step20: Challenge 4
Step21: Part 3
Step22: 3.2 Get all the bills
Finally, create a dictionary bills_dict which maps a district number (the key) onto a list_of_bills (the value) emanating from that district. You can do this by looping over all of the senate members in members_dict and calling get_bills() for each of their associated bill URLs.
NOTE | Python Code:
# import required modules
import requests
from bs4 import BeautifulSoup
from datetime import datetime
import time
import re
import sys
Explanation: Webscraping with Beautiful Soup
Intro
In this tutorial, we'll be scraping information on the state senators of Illinois, available here, as well as the list of bills each senator has sponsored (e.g., here.
The Tools
Requests
Beautiful Soup
End of explanation
# make a GET request
req = requests.get('http://www.ilga.gov/senate/default.asp')
# read the content of the server’s response
src = req.text
Explanation: Part 1: Using Beautiful Soup
1.1 Make a Get Request and Read in HTML
We use requests library to:
1. make a GET request to the page
2. read in the html of the page
End of explanation
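Before parsing, it can be worth confirming that the request actually succeeded. A minimal sketch using the req object created above:
# a 200 status code means the GET request succeeded
print(req.status_code)
# or raise an exception automatically on a 4xx/5xx response
req.raise_for_status()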
# parse the response into an HTML tree
soup = BeautifulSoup(src, 'lxml')
# take a look
print(soup.prettify()[:1000])
Explanation: 1.2 Soup it
Now we use the BeautifulSoup function to parse the response into an HTML tree. This returns an object (called a soup object) which contains all of the HTML in the original document.
End of explanation
# find all elements in a certain tag
# these two lines of code are equivalent
# soup.find_all("a")
Explanation: 1.3 Find Elements
BeautifulSoup has a number of functions to find things on a page. Like other webscraping tools, Beautiful Soup lets you find elements by their:
HTML tags
HTML Attributes
CSS Selectors
Let's search first for HTML tags.
The function find_all searches the soup tree to find all the elements with a particular HTML tag, and returns all of those elements.
What does the example below do?
End of explanation
# soup.find_all("a")
# soup("a")
Explanation: NB: Because find_all() is the most popular method in the Beautiful Soup search API, you can use a shortcut for it. If you treat the BeautifulSoup object as though it were a function, then it’s the same as calling find_all() on that object.
These two lines of code are equivalent:
End of explanation
# Get only the 'a' tags in 'sidemenu' class
soup("a", class_="sidemenu")
Explanation: That's a lot! Many elements on a page will have the same html tag. For instance, if you search for everything with the a tag, you're likely to get a lot of stuff, much of which you don't want. What if we wanted to search for HTML tags ONLY with certain attributes, like particular CSS classes?
We can do this by adding an additional argument to the find_all
In the example below, we are finding all the a tags, and then filtering those with class = "sidemenu".
End of explanation
# get elements with "a.sidemenu" CSS Selector.
soup.select("a.sidemenu")
Explanation: Oftentimes a more efficient way to search and find things on a website is by CSS selector. For this we have to use a different method, select(). Just pass a string into the .select() to get all elements with that string as a valid CSS selector.
In the example above, we can use "a.sidemenu" as a CSS selector, which returns all a tags with class sidemenu.
End of explanation
# YOUR CODE HERE
Explanation: Challenge 1
Find all the <a> elements in class mainmenu
End of explanation
# this is a list
soup.select("a.sidemenu")
# we first want to get an individual tag object
first_link = soup.select("a.sidemenu")[0]
# check out its class
type(first_link)
Explanation: 1.4 Get Attributes and Text of Elements
Once we identify elements, we want to access the information in that element. Oftentimes this means two things:
Text
Attributes
Getting the text inside an element is easy. All we have to do is use the text member of a tag object:
End of explanation
print(first_link.text)
Explanation: It's a tag! Which means it has a text member:
End of explanation
print(first_link['href'])
Explanation: Sometimes we want the value of certain attributes. This is particularly relevant for a tags, or links, where the href attribute tells us where the link goes.
You can access a tag’s attributes by treating the tag like a dictionary:
End of explanation
# YOUR CODE HERE
Explanation: Challenge 2
Find all the href attributes (url) from the mainmenu.
End of explanation
# make a GET request
req = requests.get('http://www.ilga.gov/senate/default.asp?GA=98')
# read the content of the server’s response
src = req.text
# soup it
soup = BeautifulSoup(src, "lxml")
Explanation: Part 2
Believe it or not, that's all you need to scrape a website. Let's apply these skills to scrape http://www.ilga.gov/senate/default.asp?GA=98
NB: we're just going to scrape the 98th general assembly.
Our goal is to scrape information on each senator, including their:
- name
- district
- party
2.1 First, make the get request and soup it.
End of explanation
# get all tr elements
rows = soup.find_all("tr")
len(rows)
Explanation: 2.2 Find the right elements and text.
Now let's try to get a list of rows in that table. Remember that rows are identified by the tr tag.
End of explanation
# returns every ‘tr tr tr’ css selector in the page
rows = soup.select('tr tr tr')
print(rows[2].prettify())
Explanation: But remember, find_all gets all the elements with the tr tag. We can use smart CSS selectors to get only the rows we want.
End of explanation
# select only those 'td' tags with class 'detail'
row = rows[2]
detailCells = row.select('td.detail')
detailCells
Explanation: We can use the select method on anything. Let's say we want to find everything with the CSS selector td.detail in an item of the list we created above.
End of explanation
# Keep only the text in each of those cells
rowData = [cell.text for cell in detailCells]
Explanation: Most of the time, we're interested in the actual text of a website, not its tags. Remember, to get the text of an HTML element, use the text member.
End of explanation
# check em out
print(rowData[0]) # Name
print(rowData[3]) # district
print(rowData[4]) # party
Explanation: Now we can combine the Beautiful Soup tools with our basic Python skills to scrape an entire web page.
End of explanation
# make a GET request
req = requests.get('http://www.ilga.gov/senate/default.asp?GA=98')
# read the content of the server’s response
src = req.text
# soup it
soup = BeautifulSoup(src, "lxml")
# Create empty list to store our data
members = []
# returns every ‘tr tr tr’ css selector in the page
rows = soup.select('tr tr tr')
# loop through all rows
for row in rows:
# select only those 'td' tags with class 'detail'
detailCells = row.select('td.detail')
# get rid of junk rows
    if len(detailCells) != 5:
continue
# Keep only the text in each of those cells
rowData = [cell.text for cell in detailCells]
# Collect information
name = rowData[0]
district = int(rowData[3])
party = rowData[4]
# Store in a tuple
tup = (name,district,party)
# Append to list
members.append(tup)
len(members)
Explanation: 2.3 Loop it all together
Let's use a for loop to get 'em all!
End of explanation
# make a GET request
req = requests.get('http://www.ilga.gov/senate/default.asp?GA=98')
# read the content of the server’s response
src = req.text
# soup it
soup = BeautifulSoup(src, "lxml")
# Create empty list to store our data
members = []
# returns every ‘tr tr tr’ css selector in the page
rows = soup.select('tr tr tr')
# loop through all rows
for row in rows:
# select only those 'td' tags with class 'detail'
detailCells = row.select('td.detail')
# get rid of junk rows
if len(detailCells) != 5:
continue
# Keep only the text in each of those cells
rowData = [cell.text for cell in detailCells]
# Collect information
name = rowData[0]
district = int(rowData[3])
party = rowData[4]
# YOUR CODE HERE.
# Store in a tuple
tup = (name, district, party, full_path)
# Append to list
members.append(tup)
# Uncomment to test
# members[:5]
Explanation: Challenge 3: Get the HREF element pointing to members' bills.
The code above retrieves information on:
- the senator's name
- their district number
- and their party
We now want to retrieve the URL for each senator's list of bills. The format for the list of bills for a given senator is:
http://www.ilga.gov/senate/SenatorBills.asp + ? + GA=98 + &MemberID=memberID + &Primary=True
to get something like:
http://www.ilga.gov/senate/SenatorBills.asp?MemberID=1911&GA=98&Primary=True
You should be able to see that, unfortunately, memberID is not currently something pulled out in our scraping code.
Your initial task is to modify the code above so that we also retrieve the full URL which points to the corresponding page of primary-sponsored bills, for each member, and return it along with their name, district, and party.
Tips:
To do this, you will want to get the appropriate anchor element (<a>) in each legislator's row of the table. You can again use the .select() method on the row object in the loop to do this — similar to the command that finds all of the td.detail cells in the row. Remember that we only want the link to the legislator's bills, not the committees or the legislator's profile page.
The anchor elements' HTML will look like <a href="/senate/Senator.asp/...">Bills</a>. The string in the href attribute contains the relative link we are after. You can access an attribute of a BeatifulSoup Tag object the same way you access a Python dictionary: anchor['attributeName']. (See the <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#tag">documentation</a> for more details).
NOTE: There are a lot of different ways to use BeautifulSoup to get things done; whatever you need to do to pull that HREF out is fine. Posting on the etherpad is recommended for discussing different strategies.
I've started out the code for you. Fill it in where it says # YOUR CODE HERE (save the path into an object called full_path).
End of explanation
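# A sketch of one way to get the bills URL for a senator's row (an assumption, not the official
# solution): the row contains an anchor whose text is "Bills"; its relative href is joined onto
# the site root.
def get_bills_url(row):
    bills_anchor = [a for a in row.select('a') if a.text.strip() == 'Bills'][0]
    return "http://www.ilga.gov" + bills_anchor['href']
# inside the scraping loop you would then set: full_path = get_bills_url(row)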
# YOUR FUNCTION HERE
# Uncomment to test your code!
# senateMembers = get_members('http://www.ilga.gov/senate/default.asp?GA=98')
# len(senateMembers)
Explanation: Challenge 4: Make a function
Turn the code above into a function that accepts a URL, scrapes the URL for its senators, and returns a list of tuples containing information about each senator.
End of explanation
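# A sketch of one possible get_members implementation, assembled from the cells above
# (an illustration, not the official solution; the "Bills" anchor text is an assumption):
def get_members(url):
    src = requests.get(url).text
    soup = BeautifulSoup(src, "lxml")
    members = []
    for row in soup.select('tr tr tr'):
        detailCells = row.select('td.detail')
        if len(detailCells) != 5:
            continue
        rowData = [cell.text for cell in detailCells]
        name, district, party = rowData[0], int(rowData[3]), rowData[4]
        # link to the member's primary-sponsored bills
        bills_anchor = [a for a in row.select('a') if a.text.strip() == 'Bills'][0]
        full_path = "http://www.ilga.gov" + bills_anchor['href']
        members.append((name, district, party, full_path))
    return members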
# COMPLETE THIS FUNCTION
def get_bills(url):
src = requests.get(url).text
soup = BeautifulSoup(src, "lxml")
rows = soup.select('tr')
bills = []
for row in rows:
# YOUR CODE HERE
tup = (bill_id, description, chamber, last_action, last_action_date)
bills.append(tup)
return(bills)
# uncomment to test your code:
# test_url = senateMembers[0][3]
# get_bills(test_url)[0:5]
Explanation: Part 3: Scrape Bills
3.1 Writing a Scraper Function
Now we want to scrape the webpages corresponding to bills sponsored by each senator.
Write a function called get_bills(url) to parse a given Bills URL. This will involve:
requesting the URL using the <a href="http://docs.python-requests.org/en/latest/">requests</a> library
using the features of the BeautifulSoup library to find all of the <td> elements with the class billlist
return a list of tuples, each with:
the bill ID (1st column)
description (2nd column)
chamber (S or H) (3rd column)
the last action (4th column)
the last action date (5th column)
I've started the function for you. Fill in the rest.
End of explanation
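# A sketch of how the loop body of get_bills above could look, assuming each bill row
# holds five 'td.billlist' cells (an illustration, not the official solution):
def parse_bill_row(row):
    cells = row.select('td.billlist')
    if len(cells) != 5:
        return None
    rowData = [cell.text for cell in cells]
    # (bill_id, description, chamber, last_action, last_action_date)
    return (rowData[0], rowData[1], rowData[2], rowData[3], rowData[4])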
# YOUR CODE HERE
# Uncomment to test
# bills_dict[52]
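# A sketch of one way to build bills_dict (an illustration, not the official solution;
# it assumes senateMembers from Challenge 4, with the bills URL in position 3):
import time
def build_bills_dict(members):
    bills_dict = {}
    for name, district, party, bills_url in members:
        bills_dict[district] = get_bills(bills_url)
        time.sleep(0.5)  # be gentle with the state's web site
    return bills_dict
# bills_dict = build_bills_dict(senateMembers)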
Explanation: 3.2 Get all the bills
Finally, create a dictionary bills_dict which maps a district number (the key) onto a list_of_bills (the value) emanating from that district. You can do this by looping over all of the senate members in senateMembers and calling get_bills() for each of their associated bill URLs.
NOTE: please call the function time.sleep(0.5) for each iteration of the loop, so that we don't destroy the state's web site.
End of explanation |
2,688 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Binary vector generator
Version 1
Type checking
Step5: Binary vector generator
Version 2 - via Itertool
Step9: Accumulator Inputs ##
Step11: Label the data
Step13: Create dataset
Namedtuple
Step14: Pickling
Step15: We now pickle the named_tuple
cfr. When to pickle
See http
Step16: Accumulator inputs - Verguts& Fias##
Numerosity from 1 to 5, where unity is represented by 3 repeated ones. (e.g. 2 is represented as
[1,1,1,1,1,1,0,0,0,0,0,0,0,0,0]).
No zero vector. | Python Code:
from scipy.special import comb
import numpy as np
def how_many(max_n = 6, length = 16):
Compute how many different binary vectors of a given length can be formed up to a given number.
If a list is passed, compute the vectors as specified in the list.
if isinstance(max_n, int):
indexes = range(1,max_n+1)
elif isinstance(max_n, list):
indexes = max_n
else:
raise TypeError("how_many(x,y) requires x to be either list or int")
rows_n=0
for i in indexes:
rows_n = rows_n + comb(length,i, exact=True)
return(rows_n)
def binary_vectors(length = 16, max_n = 6, one_hot = False):
Return an array of size [how_many(max_n, length), length]
Each row is a binary vector with up to max_n ones.
Labels are not returned here; they can be generated separately (e.g. with find_labels), either as integers or as a one-hot representation.
The function computes all possibilities by converting successive integers into
binary representation and then extracts those within range
#Compute the dimension of the matrix for memory allocation
# number of columns
columns_n = length
# number of rows
rows_n = 2**columns_n
#location matrix
locations = np.zeros((rows_n, columns_n))
#populate the location matrix
for i in range(rows_n):
bin_string = np.binary_repr(i,length)
# we need to convert the binary string into a "boolean vector"
# http://stackoverflow.com/questions/29091869/convert-bitstring-string-of-1-and-0s-to-numpy-array
bin_array = np.fromstring(bin_string,'u1') - ord('0')
locations[i,:] = bin_array
# Extract the vectors within range
locations = locations[np.sum(locations, axis=1)<=max_n]
return locations
# The 50.000 inputs
# Repeat the matrix 4 times and cut the excess
# inputs = np.tile(locations,(4,1))
# inputs = inputs[0:50000,:]
# labels = np.sum(inputs, axis=1).reshape(50000,1)
# First we store the
# print("vector {} has label {}".format(inputs[2532,:], labels[2532,:]))
Explanation: Binary vector generator
Version 1
Type checking
End of explanation
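# A quick, illustrative sanity check of the helpers above:
# how_many(2, 4) counts the 4-bit vectors with one or two ones: 4 + 6 = 10
print(how_many(2, 4))
# binary_vectors keeps every length-16 vector with at most max_n ones (the zero vector included)
vectors = binary_vectors(length=16, max_n=2)
print(vectors.shape)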
# def binary_vector_2(rows_n = [2,4,6,8,10], columns_n = 10):
# rows = how_many(rows_n, 10)
# index = 0
# locations = np.zeros((rows, columns_n))
# for i in rows_n:
# for bin_string in kbits(10,i):
# bin_array = np.fromstring(bin_string,'u1') - ord('0')
# locations[index,:] = bin_array
# index = index+1
# return locations
# inputs = binary_vector_2()
# labels = find_labels(inputs, one_hot=True)
# #dataset_ver = Dataset(inputs, labels)
# #pickle_test(dataset_ver)
# inputs.shape
import numpy as np
import itertools
from scipy.special import comb
def kbits(n, k):
Generate a list of ordered binary strings representing all the possible
ways of choosing k elements out of n.
Args:
n (int): set cardinality
k (int): subset cardinality
Returns:
result (string): list of binary strings
result = []
for bits in itertools.combinations(range(n), k):
s = ['0'] * n
for bit in bits:
s[bit] = '1'
result.append(''.join(s))
return result
def binary_vector_2(rows_n = [2,4,6,8,10], distribution=[45], columns_n = 10):
Matrix of binary vectors from distribution.
Args:
rows_n (int, ndarray): nx1
distribution (int, ndarray): nx1
Returns:
ndarray of dimension rows_n * distribution, columns_n
TODO: check inputs, here given as list, but should it be a ndarray?
remove index accumulator and rewrite via len(kbit)
Examples:
Should be written in doctest format and should illustrate how
to use the function.
distribution=comb(columns_n, row)
returns all possible combinations: in reality not, should remove randomness: or better set flag
replacement = False
rows_n = np.array(rows_n)
distribution = np.array(distribution)
assert np.all(rows_n >0)
assert np.all(distribution >0), "Distribution values must be positive. {} provided".format(distribution)
if len(distribution) == 1:
distribution = np.repeat(distribution, len(rows_n))
assert len(distribution) == len(rows_n)
rows = np.sum(distribution)
index = 0
locations = np.zeros((rows, columns_n))
cluster_size = comb(columns_n,rows_n)
for i in range(len(rows_n)):
kbit = kbits(10,rows_n[i])
take_this = np.random.randint(cluster_size[i], size=distribution[i])
lista =[]
for indices in take_this:
lista.append(kbit[indices])
kbit = lista
for bin_string in kbit:
bin_array = np.fromstring(bin_string,'u1') - ord('0')
locations[index,:] = bin_array
index = index+1
return locations
Explanation: Binary vector generator
Version 2 - via Itertool
End of explanation
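# Illustrative usage of the itertools-based generator above:
print(kbits(3, 2))   # all 3-choose-2 binary strings: ['110', '101', '011']
# binary_vector_2 then samples such strings per requested numerosity, e.g. (uncomment to try)
# sample = binary_vector_2(rows_n=[2, 3], distribution=[5, 5])
# print(sample.shape)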
import numpy as np
class accumulatorMatrix(object):
Generate a matrix which row vectors correspond to accumulated numerosity, where each number
is coded by repeating 1 times times. If zero = true, the zero vector is included.
Args:
max_number (int): the greatest number to be represented
length (int): vectors length, if not provided is computed as the minimum length compatible
times (int): length of unity representation
zero (bool): whether the zero vector is included or excluded
Returns:
outputs (int, ndarray): max_number x length ndarray
def __init__(self, max_number, length=None, times=2, zero=False):
self.max_number = max_number
self.length = length
self.times = times
self.zero = zero
if not length:
self.length = self.times * self.max_number
assert self.max_number == self.length/times
if self.zero:
self.max_number = self.max_number + 1
add = 0
else:
add = 1
self.outputs = np.zeros((self.max_number, self.length), dtype=int)
for i in range(0,self.max_number):
self.outputs[i,:self.times * (i+add)].fill(1)
def shuffle_(self):
np.random.shuffle(self.outputs)
#def unshuffle(self):
# We want to access the random shuffle in order to have the list
# http://stackoverflow.com/questions/19306976/python-shuffling-with-a-parameter-to-get-the-same-result
def replicate(self, times=1):
self.outputs = np.tile(self.outputs, [times, 1])
import warnings
def accumulator_matrix(max_number, length=None, times=2, zero=False):
Generate a matrix which row vectors correspond to accumulated numerosity, where each number
is coded by repeating 1 times times. If zero = true, the zero vector is included.
Args:
max_number (int): the greatest number to be represented
length (int): vectors length, if not provided is computed as the minimum length compatible
times (int): length of unity representation
zero (bool): whether the zero vector is included or excluded
Returns:
outputs (int, ndarray): max_number x length ndarray
warnings.warn("shouldn't use this function anymore! Now use the class accumulatorMatrix.",DeprecationWarning)
if not length:
length = times * max_number
assert max_number == length/times
if zero:
max_number = max_number + 1
add = 0
else:
add = 1
outputs = np.zeros((max_number, length), dtype=int)
for i in range(0,max_number):
outputs[i,:times * (i+add)].fill(1)
return outputs
# np.random.seed(105)
# Weights = np.random.rand(5,10)
Explanation: Accumulator Inputs ##
End of explanation
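# Illustrative usage of the accumulator coding defined above:
acc = accumulatorMatrix(3, times=2)
print(acc.outputs)
# rows: [1 1 0 0 0 0], [1 1 1 1 0 0], [1 1 1 1 1 1] for numerosities 1, 2, 3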
def find_labels(inputs, multiple=1, one_hot=False):
Generate the labels corresponding to binary vectors. If one_hot = True, the labels are
one-hot encoded, otherwise integers.
Args:
inputs (int, ndarray): ndarray row samples
multiple (int): length of unity representation
one_hot (bool): False for integer labels, True for one hot encoded labels
Returns:
labels (int): integer or one hot encoded labels
labels = (np.sum(inputs, axis=1)/multiple).astype(int)
if one_hot:
size = np.max(labels)
label_matrix = np.zeros((labels.shape[0], size+1))
label_matrix[np.arange(labels.shape[0]), labels] = 1
labels = label_matrix
return labels
Explanation: Label the data
End of explanation
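# Illustrative check of find_labels on the accumulator coding (multiple matches times=2):
demo = accumulatorMatrix(3, times=2).outputs
print(find_labels(demo, multiple=2))                # integer labels: [1 2 3]
print(find_labels(demo, multiple=2, one_hot=True))  # one-hot rows of length 4 (labels 0..3)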
from collections import namedtuple
def Dataset(inputs, labels):
Creates dataset
Args:
inputs (array):
labels (array): corresponding labels
Returns:
Datasets: named tuple
Dataset = namedtuple('Dataset', ['data', 'labels'])
Datasets = Dataset(inputs, labels)
return Datasets
Explanation: Create dataset
Namedtuple
End of explanation
from collections import namedtuple
Dataset = namedtuple('Dataset', ['data', 'labels'])
#data_verguts = Dataset(inputs, labels)
import pickle
def pickle_test(Data, name):
f = open(name+'.pickle', 'ab')
pickle.dump(Data, f)
f.close()
#pickle_test(data_verguts, "verguts")
# # Test opening the pickle
# pickle_in = open("Data.pickle", "rb")
# ex = pickle.load(pickle_in)
# ex.labels[25]
Explanation: Pickling
End of explanation
rows_n = [2,4,6,8,10]
#comb(10, rows_n)
inputs = binary_vector_2(distribution=comb(10, rows_n).astype(int))
labels = find_labels(inputs, multiple=2, one_hot=True)
count = 0
for i in inputs:
print(count, i, int(np.sum(i)/2), labels[count])
count +=1
Explanation: We now pickle the named_tuple
cfr. When to pickle
See http://localhost:8888/notebooks/Dropbox/Programming/Jupyter/Competitive-Unsupervised/NNTf.ipynb
for creating a panda dataframe out of the namedtuple
http://stackoverflow.com/questions/16377215/how-to-pickle-a-namedtuple-instance-correctly
https://blog.hartleybrody.com/python-serialize/
Simon and Peterson's 2000 Input Dataset
The dataset consists of vectors of length 16, with one-hot encoded vectors of length 6 as labels.
50,000 input patterns are generated:
A numerosity in range(6) is picked randomly.
Then the active locations are randomly selected.
Verguts and Fias: Inputs ##
Uniformly distributed input
The outlier 5 is represented only 10 times; this allows the net to see it a reasonable number of times, but not too often, considering that it can only have one shape.
End of explanation
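# A minimal sketch of the Simon & Peterson-style dataset described above (an assumption about
# the exact sampling, not the author's code): 50,000 length-16 patterns, each with a numerosity
# drawn from range(6) and that many randomly chosen active locations.
def simon_peterson_inputs(n_samples=50000, length=16, max_n=5, rng=np.random):
    patterns = np.zeros((n_samples, length), dtype=int)
    for i in range(n_samples):
        k = rng.randint(0, max_n + 1)                      # numerosity in range(6)
        locations = rng.choice(length, size=k, replace=False)
        patterns[i, locations] = 1
    return patterns
# sp_inputs = simon_peterson_inputs()
# sp_labels = find_labels(sp_inputs, one_hot=True)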
inputs = accumulatorMatrix(5, times=2).outputs
labels = find_labels(inputs, multiple=2, one_hot=True)
Dataset = namedtuple('Dataset', ['data', 'labels'])
verguts2004 = Dataset(inputs, labels)
pickle_test(verguts2004, "verguts_accumulator")
verguts2004.labels
Explanation: Accumulator inputs - Verguts& Fias##
Numerosity from 1 to 5, where unity is represented by 3 repeated ones. (e.g. 2 is represented as
[1,1,1,1,1,1,0,0,0,0,0,0,0,0,0]).
No zero vector.
End of explanation |
2,689 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Análisis de los datos obtenidos
Uso de ipython para el análsis y muestra de los datos obtenidos durante la producción.Se implementa un regulador experto. Los datos analizados son del día 11 de Agosto del 2015
Los datos del experimento
Step1: Representamos ambos diámetro y la velocidad de la tractora en la misma gráfica
Step2: En el boxplot, se ve como la mayoría de los datos están por encima de la media (primer cuartil). Se va a tratar de bajar ese porcentaje. La primera aproximación que vamos a realizar será la de hacer mayores incrementos al subir la velocidad en los tramos que el diámetro se encuentre entre $1.80mm$ y $1.75 mm$(caso 5) haremos incrementos de $d_v2$ en lugar de $d_v1$
Comparativa de Diametro X frente a Diametro Y para ver el ratio del filamento
Step3: Filtrado de datos
Las muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas.
Step4: Representación de X/Y
Step5: Analizamos datos del ratio
Step6: Límites de calidad
Calculamos el número de veces que traspasamos unos límites de calidad.
$Th^+ = 1.85$ and $Th^- = 1.65$ | Python Code:
# Import the libraries we use
import numpy as np
import pandas as pd
import seaborn as sns
# Show the version of each library
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
# Open the CSV file with the sample data
datos = pd.read_csv('ensayo1.CSV')
%pylab inline
# Store in a list the columns of the file we will work with
columns = ['Diametro X', 'RPM TRAC']
# Show a summary of the collected data
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
Explanation: Analysis of the collected data
Using IPython to analyse and display the data collected during production. An expert-system controller is implemented. The data analysed are from 11 August 2015.
The experiment data:
* Duration: 30 min
* Extruded filament: 537 cm
* $T: 150ºC$
* $V_{min}$ tractor: $1.5 mm/s$
* $V_{max}$ tractor: $3.4 mm/s$
* The speed increments in the expert-system rules are all the same.
End of explanation
graf=datos.ix[:, "Diametro X"].plot(figsize=(16,10),ylim=(0.5,3))
graf.axhspan(1.65,1.85, alpha=0.2)
graf.set_xlabel('Tiempo (s)')
graf.set_ylabel('Diámetro (mm)')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
box = datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
box.axhspan(1.65,1.85, alpha=0.2)
Explanation: We plot both diameters and the tractor speed in the same figure
End of explanation
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
Explanation: In the boxplot we can see that most of the data lie above the mean (first quartile). We will try to lower that percentage. As a first approach, we will apply larger speed increments when the diameter is between $1.80mm$ and $1.75 mm$ (case 5): increments of $d_v2$ instead of $d_v1$.
A comparison of Diametro X against Diametro Y to check the filament ratio
End of explanation
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: Data filtering
Samples where $d_x < 0.9$ or $d_y < 0.9$ are assumed to be sensor errors, so we filter them out of the data set.
End of explanation
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
Explanation: X/Y representation
End of explanation
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
Explanation: We analyse the ratio data
End of explanation
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
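# A small illustrative extra (not part of the original analysis): the share of samples
# that fall outside the quality limits
violation_ratio = len(data_violations) / float(len(datos))
print("Samples outside limits: {:.1f}%".format(100 * violation_ratio))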
Explanation: Quality limits
We count how many times the diameter crosses the quality limits,
$Th^+ = 1.85$ and $Th^- = 1.65$
End of explanation |
2,690 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing Supervised Machine Learning
Experiments
Logistic Regression
http
Step1: Loading and exploring our data set
This is a database of customers of an insurance company. Each data point is one customer. The group represents the number of accidents the customer has been involved with in the past
0 - red
Step2: Logistic Regression using the one-vs-rest (OvR) scheme
http
Step3: Cross Validation splits the train data in different ways and performs a number of training runs (3 in this case) | Python Code:
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import matplotlib.pyplot as plt
plt.xkcd()
# if this is true, all images are saved to disk
global_print_flag = False
!mkdir tmp_figures
Explanation: Visualizing Supervised Machine Learning
Experiments
Logistic Regression
http://scikit-learn.org/stable/auto_examples/linear_model/plot_logistic_multinomial.html
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
PCA feature selection
End of explanation
# Choose one of the two following data sets, the larger one gives better results, but might clutter the visualization depending on resolution
# !curl -O https://raw.githubusercontent.com/DJCordhose/ai/master/notebooks/scipy/data/insurance-customers-1500.csv
# !curl -O https://raw.githubusercontent.com/DJCordhose/ai/master/notebooks/scipy/data/insurance-customers-300.csv
import pandas as pd
# df = pd.read_csv('./insurance-customers-300.csv', sep=';')
df = pd.read_csv('./insurance-customers-1500.csv', sep=';')
# we deliberately decide this is going to be our label, it is often called lower case y
y=df['group']
# since 'group' is now the label we want to predict, we need to remove it from the training data
df.drop('group', axis='columns', inplace=True)
# input data often is named upper case X, the upper case indicates, that each row is a vector
X = df.as_matrix()
# ignore this, it is just technical code to plot decision boundaries
# Adapted from:
# http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
# http://jponttuset.cat/xkcd-deep-learning/
from matplotlib.colors import ListedColormap
cmap_print = ListedColormap(['#AA8888', '#004000', '#FFFFDD'])
cmap_bold = ListedColormap(['#AA4444', '#006000', '#EEEE44'])
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#FFFFDD'])
font_size=25
title_font_size=40
def meshGrid(x_data, y_data):
h = 1 # step size in the mesh
x_min, x_max = x_data.min() - 1, x_data.max() + 1
y_min, y_max = y_data.min() - 1, y_data.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return (xx,yy)
def plotPrediction(clf, x_data, y_data, x_label, y_label, ground_truth, title="",
mesh=True, fname=None, print=False):
xx,yy = meshGrid(x_data, y_data)
fig, ax = plt.subplots(figsize=(20,10))
if clf and mesh:
Z = clf.predict(np.c_[yy.ravel(), xx.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
ax.pcolormesh(xx, yy, Z, cmap=cmap_light)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
if print:
ax.scatter(x_data, y_data, c=ground_truth, cmap=cmap_print, s=200, marker='o', edgecolors='k')
else:
ax.scatter(x_data, y_data, c=ground_truth, cmap=cmap_bold, s=100, marker='o', edgecolors='k')
ax.set_xlabel(x_label, fontsize=font_size)
ax.set_ylabel(y_label, fontsize=font_size)
ax.set_title(title, fontsize=title_font_size)
if fname and global_print_flag:
fig.savefig('tmp_figures/'+fname)
def plot_keras_prediction(clf, x_data, y_data, x_label, y_label, ground_truth, title="",
mesh=True, fixed=None, fname=None, print=False):
xx,yy = meshGrid(x_data, y_data)
fig, ax = plt.subplots(figsize=(20,10))
if clf and mesh:
grid_X = np.array(np.c_[yy.ravel(), xx.ravel()])
if fixed:
fill_values = np.full((len(grid_X), 1), fixed)
grid_X = np.append(grid_X, fill_values, axis=1)
Z = clf.predict(grid_X)
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
ax.pcolormesh(xx, yy, Z, cmap=cmap_light)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
if print:
ax.scatter(x_data, y_data, c=ground_truth, cmap=cmap_print, s=200, marker='o', edgecolors='k')
else:
ax.scatter(x_data, y_data, c=ground_truth, cmap=cmap_bold, s=100, marker='o', edgecolors='k')
ax.set_xlabel(x_label, fontsize=font_size)
ax.set_ylabel(y_label, fontsize=font_size)
ax.set_title(title, fontsize=title_font_size)
if fname and global_print_flag:
fig.savefig('tmp_figures/'+fname)
from sklearn.model_selection import train_test_split
# using stratify we get a balanced number of samples per category (important!)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42, stratify=y)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
X_train_2_dim = X_train[:, :2]
X_test_2_dim = X_test[:, :2]
Explanation: Loading and exploring our data set
This is a database of customers of an insurance company. Each data point is one customer. The group represents the number of accidents the customer has been involved with in the past
0 - red: many accidents
1 - green: few or no accidents
2 - yellow: in the middle
End of explanation
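# Purely illustrative: plotPrediction also works without a classifier, so we can first look at
# the raw training data on the Age / Max Speed plane (clf=None simply skips the decision mesh)
plotPrediction(None, X_train_2_dim[:, 1], X_train_2_dim[:, 0],
               'Age', 'Max Speed', y_train,
               title="Train Data only (no classifier)")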
from sklearn.linear_model import LogisticRegression
lg_clf = LogisticRegression()
%time lg_clf.fit(X_train_2_dim, y_train)
plotPrediction(lg_clf, X_train_2_dim[:, 1], X_train_2_dim[:, 0],
'Age', 'Max Speed', y_train,
title="Train Data, Logistic Regression",
fname='logistic-regression-train.png')
lg_clf.score(X_train_2_dim, y_train)
plotPrediction(lg_clf, X_test_2_dim[:, 1], X_test_2_dim[:, 0],
'Age', 'Max Speed', y_test,
title="Test Data, Logistic Regression",
fname='logistic-regression-test.png')
lg_clf.score(X_test_2_dim, y_test)
# http://scikit-learn.org/stable/modules/cross_validation.html
from sklearn.model_selection import cross_val_score
# cross_val_score?
Explanation: Logistic Regression using the one-vs-rest (OvR) scheme
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
End of explanation
scores = cross_val_score(lg_clf, X_train_2_dim, y_train, n_jobs=-1)
scores
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
Explanation: Cross Validation splits the train data in different ways and performs a number of training runs (3 in this case)
End of explanation |
2,691 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex AI Custom Image Classification Model for Batch Prediction
Overview
In this notebook, you learn how to use the Vertex SDK for Python to train and deploy a custom image classification model for batch prediction.
Learning Objective
Create a Vertex AI custom job for training a model.
Train a TensorFlow model.
Make a batch prediction.
Clean up resources.
Introduction
In this notebook, you will create a custom-trained model from a Python script in a Docker container using the Vertex SDK for Python, and then do a prediction on the deployed model by sending data. Alternatively, you can create custom-trained models using gcloud command-line tool, or online using the Cloud Console.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Make sure to enable the Vertex AI API and Compute Engine API.
Installation
Install the latest (preview) version of Vertex SDK for Python.
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Install the pillow library for loading images.
Step3: Install the numpy library for manipulation of image data.
Step4: Please ignore the incompatible errors.
Restart the kernel
Once you've installed everything, you need to restart the notebook kernel so it can find the packages.
Step5: Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
Step6: Otherwise, set your project ID here.
Step7: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step8: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model resources.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
Step9: Only if your bucket doesn't already exist
Step10: Finally, validate access to your Cloud Storage bucket by examining its contents
Step11: Set up variables
Next, set up some variables used throughout the tutorial.
Import Vertex SDK for Python
Import the Vertex SDK for Python into your Python environment and initialize it.
Step12: Set hardware accelerators
You can set hardware accelerators for both training and prediction.
Set the variables TRAIN_CPU/TRAIN_NCPU and DEPLOY_CPU/DEPLOY_NCPU to use a container image supporting a CPU and the number of CPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify
Step13: Set pre-built containers
Vertex AI provides pre-built containers to run training and prediction.
For the latest list, see Pre-built containers for training and Pre-built containers for prediction
Step14: Set machine types
Next, set the machine types to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure your compute resources for training and prediction.
machine type
n1-standard
Step15: Tutorial
Now you are ready to start creating your own custom-trained model with CIFAR10.
Train a model
There are two ways you can train a custom model using a container image
Step16: Training script
In the next cell, you will write the contents of the training script, task.py. In summary
Step17: Train the model
Define your custom training job on Vertex AI.
Use the CustomTrainingJob class to define the job, which takes the following parameters
Step18: Make a batch prediction request
Send a batch prediction request to your deployed model.
Get test data
Download images from the CIFAR dataset and preprocess them.
Download the test images
Download the provided set of images from the CIFAR dataset
Step19: Preprocess the images
Before you can run the data through the endpoint, you need to preprocess it to match the format that your custom model defined in task.py expects.
x_test
Step20: Prepare data for batch prediction
Before you can run the data through batch prediction, you need to save the data into one of a few possible formats.
For this tutorial, use JSONL as it's compatible with the 3-dimensional list that each image is currently represented in. To do this
Step21: Send the prediction request
To make a batch prediction request, call the model object's batch_predict method with the following parameters
Step22: Retrieve batch prediction results
When the batch prediction is done processing, you can finally view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated when you created the batch prediction job. The predictions are located in a subdirectory starting with the name prediction. Within that directory, there is a file named prediction.results-xxxx-of-xxxx.
Let's display the contents. You will get a row for each prediction. The row is the softmax probability distribution for the corresponding CIFAR10 classes.
Step23: Evaluate results
You can then run a quick evaluation on the prediction results
Step24: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
# Setup your dependencies
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
# Upgrade the specified package to the newest available version
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
Explanation: Vertex AI Custom Image Classification Model for Batch Prediction
Overview
In this notebook, you learn how to use the Vertex SDK for Python to train and deploy a custom image classification model for batch prediction.
Learning Objective
Create a Vertex AI custom job for training a model.
Train a TensorFlow model.
Make a batch prediction.
Clean up resources.
Introduction
In this notebook, you will create a custom-trained model from a Python script in a Docker container using the Vertex SDK for Python, and then do a prediction on the deployed model by sending data. Alternatively, you can create custom-trained models using gcloud command-line tool, or online using the Cloud Console.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Make sure to enable the Vertex AI API and Compute Engine API.
Installation
Install the latest (preview) version of Vertex SDK for Python.
End of explanation
# Upgrade the specified package to the newest available version
! pip install {USER_FLAG} --upgrade google-cloud-storage
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
# Upgrade the specified package to the newest available version
! pip install {USER_FLAG} --upgrade pillow
Explanation: Install the pillow library for loading images.
End of explanation
# Upgrade the specified package to the newest available version
! pip install {USER_FLAG} --upgrade numpy
Explanation: Install the numpy library for manipulation of image data.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Please ignore the incompatible errors.
Restart the kernel
Once you've installed everything, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = ""
if not os.getenv("IS_TESTING"):
# Get your Google Cloud project ID from gcloud
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "qwiklabs-gcp-00-f25b80479c89" # @param {type:"string"}
Explanation: Otherwise, set your project ID here.
End of explanation
# Import necessary libraries
from datetime import datetime
# Use a timestamp to ensure unique resources
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
# Fill in your bucket name and region
BUCKET_NAME = "gs://qwiklabs-gcp-00-f25b80479c89" # @param {type:"string"}
REGION = "us-central1" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://qwiklabs-gcp-00-f25b80479c89":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model resources.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
# Import necessary libraries
import os
import sys
from google.cloud import aiplatform
from google.cloud.aiplatform import gapic as aip
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import Vertex SDK for Python
Import the Vertex SDK for Python into your Python environment and initialize it.
End of explanation
TRAIN_CPU, TRAIN_NCPU = (None, None)
DEPLOY_CPU, DEPLOY_NCPU = (None, None)
Explanation: Set hardware accelerators
You can set hardware accelerators for both training and prediction.
Set the variables TRAIN_CPU/TRAIN_NCPU and DEPLOY_CPU/DEPLOY_NCPU to use a container image supporting a CPU and the number of CPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
See the locations where accelerators are available.
Otherwise specify (None, None) to use a container image to run on a CPU.
Note: TensorFlow releases earlier than 2.3 for GPU support fail to load the custom model in this tutorial. This issue is caused by static graph operations that are generated in the serving function. This is a known issue, which is fixed in TensorFlow 2.3. If you encounter this issue with your own custom models, use a container image for TensorFlow 2.3 or later with GPU support.
For this lab we will use a container image to run on a CPU.
End of explanation
TRAIN_VERSION = "tf-cpu.2-1"
DEPLOY_VERSION = "tf2-cpu.2-1"
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_CPU, TRAIN_NCPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_CPU, DEPLOY_NCPU)
Explanation: Set pre-built containers
Vertex AI provides pre-built containers to run training and prediction.
For the latest list, see Pre-built containers for training and Pre-built containers for prediction
End of explanation
# Set the machine type
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine types
Next, set the machine types to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure your compute resources for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
# Define the command arguments for the training script
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NCPU or TRAIN_NCPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
Explanation: Tutorial
Now you are ready to start creating your own custom-trained model with CIFAR10.
Train a model
There are two ways you can train a custom model using a container image:
Use a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.
Use your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.
Define the command args for the training script
Prepare the command-line arguments to pass to your training script.
- args: The command line arguments to pass to the corresponding Python module. In this example, they will be:
- "--epochs=" + EPOCHS: The number of epochs for training.
- "--steps=" + STEPS: The number of steps (batches) per epoch.
- "--distribute=" + TRAIN_STRATEGY" : The training distribution strategy to use for single or distributed training.
- "single": single device.
- "mirror": all GPU devices on a single compute instance.
- "multi": all GPU devices on all compute instances.
End of explanation
%%writefile task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
MODEL_DIR = os.getenv("AIP_MODEL_DIR")
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(MODEL_DIR)
Explanation: Training script
In the next cell, you will write the contents of the training script, task.py. In summary:
Get the directory where to save the model artifacts from the environment variable AIP_MODEL_DIR. This variable is set by the training service.
Loads CIFAR10 dataset from TF Datasets (tfds).
Builds a model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps
Saves the trained model (save(MODEL_DIR)) to the specified model directory.
End of explanation
# TODO
# Define your custom training job and use the run function to start the training
job = aiplatform.CustomTrainingJob(
display_name=JOB_NAME,
script_path="task.py",
container_uri=TRAIN_IMAGE,
requirements=["tensorflow_datasets==1.3.0"],
model_serving_container_image_uri=DEPLOY_IMAGE,
)
MODEL_DISPLAY_NAME = "cifar10-" + TIMESTAMP
# TODO
# Start the training
if TRAIN_CPU:
model = job.run(
model_display_name=MODEL_DISPLAY_NAME,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_CPU.name,
accelerator_count=TRAIN_NCPU,
)
else:
model = job.run(
model_display_name=MODEL_DISPLAY_NAME,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_count=0,
)
Explanation: Train the model
Define your custom training job on Vertex AI.
Use the CustomTrainingJob class to define the job, which takes the following parameters:
display_name: The user-defined name of this training pipeline.
script_path: The local path to the training script.
container_uri: The URI of the training container image.
requirements: The list of Python package dependencies of the script.
model_serving_container_image_uri: The URI of a container that can serve predictions for your model — either a prebuilt container or a custom container.
Use the run function to start training, which takes the following parameters:
args: The command line arguments to be passed to the Python script.
replica_count: The number of worker replicas.
model_display_name: The display name of the Model if the script produces a managed Model.
machine_type: The type of machine to use for training.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
The run function creates a training pipeline that trains and creates a Model object. After the training pipeline completes, the run function returns the Model object.
End of explanation
# Download the images
! gsutil -m cp -r gs://cloud-samples-data/ai-platform-unified/cifar_test_images .
Explanation: Make a batch prediction request
Send a batch prediction request to your deployed model.
Get test data
Download images from the CIFAR dataset and preprocess them.
Download the test images
Download the provided set of images from the CIFAR dataset:
End of explanation
import numpy as np
from PIL import Image
# Load image data
IMAGE_DIRECTORY = "cifar_test_images"
image_files = [file for file in os.listdir(IMAGE_DIRECTORY) if file.endswith(".jpg")]
# Decode JPEG images into numpy arrays
image_data = [
np.asarray(Image.open(os.path.join(IMAGE_DIRECTORY, file))) for file in image_files
]
# Scale and convert to expected format
x_test = [(image / 255.0).astype(np.float32).tolist() for image in image_data]
# Extract labels from image name
y_test = [int(file.split("_")[1]) for file in image_files]
Explanation: Preprocess the images
Before you can run the data through the endpoint, you need to preprocess it to match the format that your custom model defined in task.py expects.
x_test:
Normalize (rescale) the pixel data by dividing each pixel by 255. This replaces each single byte integer pixel with a 32-bit floating point number between 0 and 1.
y_test:
You can extract the labels from the image filenames. Each image's filename format is "image_{LABEL}_{IMAGE_NUMBER}.jpg"
End of explanation
import json
BATCH_PREDICTION_INSTANCES_FILE = "batch_prediction_instances.jsonl"
BATCH_PREDICTION_GCS_SOURCE = (
BUCKET_NAME + "/batch_prediction_instances/" + BATCH_PREDICTION_INSTANCES_FILE
)
# Write instances at JSONL
with open(BATCH_PREDICTION_INSTANCES_FILE, "w") as f:
for x in x_test:
f.write(json.dumps(x) + "\n")
# Upload to Cloud Storage bucket
! gsutil cp $BATCH_PREDICTION_INSTANCES_FILE $BATCH_PREDICTION_GCS_SOURCE
print("Uploaded instances to: ", BATCH_PREDICTION_GCS_SOURCE)
Explanation: Prepare data for batch prediction
Before you can run the data through batch prediction, you need to save the data into one of a few possible formats.
For this tutorial, use JSONL as it's compatible with the 3-dimensional list that each image is currently represented in. To do this:
In a file, write each instance as JSON on its own line.
Upload this file to Cloud Storage.
For more details on batch prediction input formats: https://cloud.google.com/vertex-ai/docs/predictions/batch-predictions#batch_request_input
End of explanation
MIN_NODES = 1
MAX_NODES = 1
# The name of the job
BATCH_PREDICTION_JOB_NAME = "cifar10_batch-" + TIMESTAMP
# Folder in the bucket to write results to
DESTINATION_FOLDER = "batch_prediction_results"
# The Cloud Storage bucket to upload results to
BATCH_PREDICTION_GCS_DEST_PREFIX = BUCKET_NAME + "/" + DESTINATION_FOLDER
# TODO
# Make SDK batch_predict method call
batch_prediction_job = model.batch_predict(
instances_format="jsonl",
predictions_format="jsonl",
job_display_name=BATCH_PREDICTION_JOB_NAME,
gcs_source=BATCH_PREDICTION_GCS_SOURCE,
gcs_destination_prefix=BATCH_PREDICTION_GCS_DEST_PREFIX,
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_CPU,
accelerator_count=DEPLOY_NCPU,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=True,
)
Explanation: Send the prediction request
To make a batch prediction request, call the model object's batch_predict method with the following parameters:
- instances_format: The format of the batch prediction request file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- prediction_format: The format of the batch prediction response file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- job_display_name: The human readable name for the prediction job.
- gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.
- gcs_destination_prefix: The Cloud Storage path that the service will write the predictions to.
- model_parameters: Additional filtering parameters for serving prediction results.
- machine_type: The type of machine to use for training.
- accelerator_type: The hardware accelerator type.
- accelerator_count: The number of accelerators to attach to a worker replica.
- starting_replica_count: The number of compute instances to initially provision.
- max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
Compute instance scaling
You can specify a single instance (or node) to process your batch prediction request. This tutorial uses a single node, so the variables MIN_NODES and MAX_NODES are both set to 1.
If you want to use multiple nodes to process your batch prediction request, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. Refer to the pricing page to understand the costs of autoscaling with multiple nodes.
End of explanation
RESULTS_DIRECTORY = "prediction_results"
RESULTS_DIRECTORY_FULL = RESULTS_DIRECTORY + "/" + DESTINATION_FOLDER
# Create missing directories
os.makedirs(RESULTS_DIRECTORY, exist_ok=True)
# Get the Cloud Storage paths for each result
! gsutil -m cp -r $BATCH_PREDICTION_GCS_DEST_PREFIX $RESULTS_DIRECTORY
# Get most recently modified directory
latest_directory = max(
[
os.path.join(RESULTS_DIRECTORY_FULL, d)
for d in os.listdir(RESULTS_DIRECTORY_FULL)
],
key=os.path.getmtime,
)
# Get downloaded results in directory
results_files = []
for dirpath, subdirs, files in os.walk(latest_directory):
for file in files:
if file.startswith("prediction.results"):
results_files.append(os.path.join(dirpath, file))
# Consolidate all the results into a list
results = []
for results_file in results_files:
# Download each result
with open(results_file, "r") as file:
results.extend([json.loads(line) for line in file.readlines()])
Explanation: Retrieve batch prediction results
When the batch prediction is done processing, you can finally view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated when you created the batch prediction job. The predictions are located in a subdirectory starting with the name prediction. Within that directory, there is a file named prediction.results-xxxx-of-xxxx.
Let's display the contents. You will get a row for each prediction. The row is the softmax probability distribution for the corresponding CIFAR10 classes.
End of explanation
# Evaluate the results
y_predicted = [np.argmax(result["prediction"]) for result in results]
correct = sum(y_predicted == np.array(y_test))
accuracy = len(y_predicted)
print(
f"Correct predictions = {correct}, Total predictions = {accuracy}, Accuracy = {correct/accuracy}"
)
Explanation: Evaluate results
You can then run a quick evaluation on the prediction results:
np.argmax: Convert each list of confidence levels to a label
Compare the predicted labels to the actual labels
Calculate accuracy as correct/total
To improve the accuracy, try training for a higher number of epochs.
End of explanation
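# Optional, purely illustrative: per-class correct counts (assumes y_test and y_predicted as above)
import numpy as np
y_true_arr = np.array(y_test)
y_pred_arr = np.array(y_predicted)
for cls in range(10):
    mask = y_true_arr == cls
    if mask.sum() > 0:
        print(f"class {cls}: {np.sum(y_pred_arr[mask] == cls)} / {mask.sum()} correct")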
delete_training_job = True
delete_model = True
# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
# TODO
# Delete the training job
job.delete()
# TODO
# Delete the model
model.delete()
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil -m rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Training Job
Model
Cloud Storage Bucket
End of explanation |
2,692 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting a resistivity model from ModEM on a basemap
In this example we will plot a resistivity model on a basemap. This example is a bit more complex than previous examples, as, unlike the previous examples, the basemap plotting functionality is not contained within MTPy. This has the benefit that it makes it easier to customise the plot. But it may mean it takes a bit longer to become familiar with the functionality.
The first step is to import the required modules needed. We have three MTPy imports - Model, Data and gis_tools. Then there is some standard matplotlib functionality and importantly the basemap module which creates coastlines and the nice borders.
Step1: The next step is to create a function that will draw an inset map showing the survey boundaries on Australia.
Step2: We now need to define our file paths for the response and data files
Step3: We can now create the plot! | Python Code:
from mtpy.modeling.modem import Model, Data
from mtpy.utils import gis_tools
import matplotlib.pyplot as plt
from matplotlib import colors
from mpl_toolkits.basemap import Basemap
from shapely.geometry import Polygon
from descartes import PolygonPatch
import numpy as np
Explanation: Plotting a resistivity model from ModEM on a basemap
In this example we will plot a resistivity model on a basemap. This example is a bit more complex than previous examples, as, unlike the previous examples, the basemap plotting functionality is not contained within MTPy. This has the benefit that it makes it easier to customise the plot. But it may mean it takes a bit longer to become familiar with the functionality.
The first step is to import the required modules needed. We have three MTPy imports - Model, Data and gis_tools. Then there is some standard matplotlib functionality and importantly the basemap module which creates coastlines and the nice borders.
End of explanation
# function to draw a bounding box
def drawBBox( minLon, minLat, maxLon, maxLat, bm, **kwargs):
bblons = np.array([minLon, maxLon, maxLon, minLon, minLon])
bblats = np.array([minLat, minLat, maxLat, maxLat, minLat])
x, y = bm( bblons, bblats )
xy = list(zip(x, y))
poly = Polygon(xy)
bm.ax.add_patch(PolygonPatch(poly, **kwargs))
Explanation: The next step is to create a function that will draw an inset map showing the survey boundaries on Australia.
End of explanation
# define paths
data_fn = r'C:/mtpywin/mtpy/examples/model_files/ModEM/ModEM_Data.dat'
model_fn = r'C:/mtpywin/mtpy/examples/model_files/ModEM/Modular_MPI_NLCG_004.rho'
# define extents
minLat = -30.4
maxLat = -30.
minLon = 133.45
maxLon = 134
# position of inset axes (bottom,left,width,height)
inset_ax_position = [0.6,0.2,0.3,0.2]
Explanation: We now need to define our file paths for the response and data files
End of explanation
# read in ModEM data to phase tensor object
mObj = Model()
mObj.read_model_file(model_fn = model_fn)
dObj = Data()
dObj.read_data_file(data_fn = data_fn)
# get easting and northing of model grid
east = mObj.grid_east + dObj.center_point['east']
north = mObj.grid_north + dObj.center_point['north']
gcx,gcy = [[np.mean(arr[i:i+2]) for i in range(len(arr)-1)] for arr in [east,north]]
# make a meshgrid, save the shape
east_grid,north_grid = np.meshgrid(east,north)
shape = east_grid.shape
# project to lat, lon
lonr,latr = gis_tools.epsg_project(east_grid,north_grid,28353,4326)
# define resistivity model and station locations
resvals = mObj.res_model.copy()
sloc = dObj.station_locations
# make a figure
fig, ax = plt.subplots(figsize=(10,10))
# make a basemap
m = Basemap(resolution='c', # c, l, i, h, f or None
ax=ax,
projection='merc',
lat_0=-20.5, lon_0=138, # central lat/lon for projection
llcrnrlon=minLon, llcrnrlat=minLat, urcrnrlon=maxLon, urcrnrlat=maxLat)
# draw lat-lon grids
m.drawparallels(np.linspace(minLat, maxLat, 5), labels=[1,1,0,0], linewidth=0.1)
m.drawmeridians(np.linspace(minLon, maxLon, 5), labels=[0,0,1,1], linewidth=0.1)
m.drawcoastlines()
# plot the resistivity model
mpldict={}
mpldict['cmap'] = 'jet_r'
mpldict['norm'] = colors.LogNorm()
mpldict['vmin'] = 2
mpldict['vmax'] = 5e3
x,y = m(lonr,latr)
mappable = m.pcolormesh(x,y,resvals[:,:,20],**mpldict)
xp,yp=m(sloc.lon,sloc.lat)
plt.plot(xp,yp,'k+')
# plot inset map ==================================================================
insetAx = fig.add_axes(inset_ax_position)
mInset = Basemap(resolution='c', # c, l, i, h, f or None
ax=insetAx,
projection='merc',
lat_0=-20, lon_0=132,
llcrnrlon=110, llcrnrlat=-40, urcrnrlon=155, urcrnrlat=-10)
mInset.fillcontinents(color='lightgray')
mInset.drawstates(color="grey")
drawBBox(minLon, minLat, maxLon, maxLat, mInset, fill=True, facecolor='k')
# make a colour bar
cbax = fig.add_axes([1.,0.5,0.025,.25])
cbar = plt.colorbar(mappable,ax=ax,cax=cbax)
cbar.set_label('Resistivity, ohm-m')
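# Optionally save the figure before displaying it (the file name below is just an example)
plt.savefig('modem_resistivity_basemap.png', dpi=300, bbox_inches='tight')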
Explanation: We can now create the plot!
End of explanation |
2,693 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predict with Model
View Config
Step1: Predict with Model (CLI)
Step2: Predict with Model under Mini-Load (CLI)
This is a mini load test to provide instant feedback on relative performance.
Step3: Predict with Model (REST)
Setup Prediction Inputs | Python Code:
%%bash
pio init-model \
--model-server-url http://prediction-python3.community.pipeline.io \
--model-type python3 \
--model-namespace default \
--model-name python3_zscore \
--model-version v1 \
--model-path .
Explanation: Predict with Model
View Config
End of explanation
%%bash
pio predict \
--model-test-request-path ./data/test_request.json
Explanation: Predict with Model (CLI)
End of explanation
%%bash
pio predict_many \
--model-test-request-path ./data/test_request.json \
--num-iterations 5
Explanation: Predict with Model under Mini-Load (CLI)
This is a mini load test to provide instant feedback on relative performance.
End of explanation
import requests
model_type = 'python3'
model_namespace = 'default'
model_name = 'python3_zscore'
model_version = 'v1'
deploy_url = 'http://prediction-%s.community.pipeline.io/api/v1/model/predict/%s/%s/%s/%s' % (model_type, model_type, model_namespace, model_name, model_version)
print(deploy_url)
with open('./data/test_request.json', 'rb') as fh:
model_input_binary = fh.read()
response = requests.post(url=deploy_url,
data=model_input_binary,
timeout=30)
print("Success!\n\n%s" % response.text)
Explanation: Predict with Model (REST)
Setup Prediction Inputs
End of explanation |
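For reference, roughly the same REST request could presumably be issued from the command line with curl against the endpoint constructed above (a sketch only; adjust the URL if your deployment differs):
%%bash
curl -X POST \
  --data-binary @./data/test_request.json \
  http://prediction-python3.community.pipeline.io/api/v1/model/predict/python3/default/python3_zscore/v1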
2,694 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Running ProjectQ code on AWS Braket service provided devices
Compiling code for AWS Braket Service
In this tutorial we will see how to run code on some of the devices provided by the Amazon AWS Braket service. The AWS Braket devices supported are
Step1: Prior to the instantiation of the backend we need to configure the credentials, the S3 storage folder and the device to be used (in the example the State Vector Simulator SV1)
Step2: Next we instantiate the engine with the AWSBraketBackend including the credentials and S3 configuration. By setting the 'use_hardware' parameter to False we indicate the use of the Simulator. In addition we set the number of times we want to run the circuit and the interval in seconds to ask for the results. For a complete list of parameters and descriptions, please check the documentation.
Step3: We can now allocate the required qubits and create the circuit to be run. With the last instruction we ask the backend to run the circuit.
Step4: The backend will automatically create the task and generate a unique identifier (the task Arn) that can be used to recover the status of the task and results later on.
Once the circuit is executed the indicated number of times, the results are stored in the S3 folder configured previously and can be recovered to obtain the probabilities of each of the states.
Step5: Retrieve results from a previous execution
We can retrieve the result later on (of this job or a previously executed one) using the task Arn provided when it was run. In addition, you have to remember the amount of qubits involved in the job and the order you used. The latter is required since we need to set up a mapping for the qubits when retrieving results of a previously executed job.
To retrieve the results we need to configure the backend including the parameter 'retrieve_execution' set to the Task Arn of the job. To be able to get the probabilities of each state we need to configure the qubits and ask the backend to get the results.
Step6: We can plot an histogram with the probabilities as well. | Python Code:
from projectq import MainEngine
from projectq.backends import AWSBraketBackend
from projectq.ops import Measure, H, C, X, All
Explanation: Running ProjectQ code on AWS Braket service provided devices
Compiling code for AWS Braket Service
In this tutorial we will see how to run code on some of the devices provided by the Amazon AWS Braket service. The AWS Braket devices supported are: the State Vector Simulator 'SV1', the Rigetti device 'Aspen-8' and the IonQ device 'IonQ'
You need to have a valid AWS account, created a pair of access key/secret key, and have activated the braket service. As part of the activation of the service, a specific S3 bucket and folder associated to the service should be configured.
First we need to do the required imports. That includes the main compiler engine (MainEngine), the backend (AWSBraketBackend in this case) and the operations to be used in the circuit.
End of explanation
creds = {
'AWS_ACCESS_KEY_ID': 'aws_access_key_id',
'AWS_SECRET_KEY': 'aws_secret_key',
} # replace with your Access key and Secret key
s3_folder = ['S3Bucket', 'S3Directory'] # replace with your S3 bucket and directory
device = 'SV1' # replace by the device you want to use
Explanation: Prior to the instantiation of the backend we need to configure the credentials, the S3 storage folder and the device to be used (in the example the State Vector Simulator SV1)
End of explanation
eng = MainEngine(AWSBraketBackend(use_hardware=False,
credentials=creds,
s3_folder=s3_folder,
num_runs=10,
interval=10))
Explanation: Next we instantiate the engine with the AWSBraketBackend including the credentials and S3 configuration. By setting the 'use_hardware' parameter to False we indicate the use of the Simulator. In addition we set the number of times we want to run the circuit and the interval in seconds to ask for the results. For a complete list of parameters and descriptions, please check the documentation.
End of explanation
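If you later want to target real hardware instead of the simulator, the call would presumably look like the sketch below, reusing the device name defined above (note that hardware runs are billed by AWS and may wait in a queue):
# Sketch only - run on a real device instead of the SV1 simulator.
eng_hw = MainEngine(AWSBraketBackend(use_hardware=True,
                                     credentials=creds,
                                     s3_folder=s3_folder,
                                     device=device,  # e.g. 'Aspen-8' or 'IonQ'
                                     num_runs=10,
                                     interval=10))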
# Allocate the required qubits
qureg = eng.allocate_qureg(3)
# Create the circuit. In this example a quantum teleportation algorithm that teleports the first qubit to the third one.
H | qureg[0]
H | qureg[1]
C(X) | (qureg[1], qureg[2])
C(X) | (qureg[0], qureg[1])
H | qureg[0]
C(X) | (qureg[1], qureg[2])
# At the end we measure the qubits to get the results; should be all-0 or all-1
All(Measure) | qureg
# And run the circuit
eng.flush()
Explanation: We can now allocate the required qubits and create the circuit to be run. With the last instruction we ask the backend to run the circuit.
End of explanation
# Obtain and print the probabilities of the states
prob_dict = eng.backend.get_probabilities(qureg)
print("Probabilites for each of the results: ", prob_dict)
Explanation: The backend will automatically create the task and generate a unique identifier (the task Arn) that can be used to recover the status of the task and results later on.
Once the circuit is executed the indicated number of times, the results are stored in the S3 folder configured previously and can be recovered to obtain the probabilities of each of the states.
End of explanation
# Set the Task Arn of the job to be retrieved and instantiate the engine with the AWSBraketBackend
task_arn = 'your_task_arn' # replace with the actual TaskArn you want to use
eng1 = MainEngine(AWSBraketBackend(retrieve_execution=task_arn, credentials=creds, num_retries=2, verbose=True))
# Configure the qubits to get the states probabilities
qureg1 = eng1.allocate_qureg(3)
# Ask the backend to retrieve the results
eng1.flush()
# Obtain and print the probabilities of the states
prob_dict1 = eng1.backend.get_probabilities(qureg1)
print("Probabilities ", prob_dict1)
Explanation: Retrieve results from a previous execution
We can retrieve the result later on (of this job or a previously executed one) using the task Arn provided when it was run. In addition, you have to remember the amount of qubits involved in the job and the order you used. The latter is required since we need to set up a mapping for the qubits when retrieving results of a previously executed job.
To retrieve the results we need to configure the backend including the parameter 'retrieve_execution' set to the Task Arn of the job. To be able to get the probabilities of each state we need to configure the qubits and ask the backend to get the results.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
from projectq.libs.hist import histogram
histogram(eng1.backend, qureg1)
plt.show()
Explanation: We can plot an histogram with the probabilities as well.
End of explanation |
2,695 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Attribution
Step2: For Machine Learning, we are mainly interested in unconstrained minimization of multivariate scalar functions (typically where gradient information is available). In addition to several algorithms for unconstrained minimization of multivariate scalar functions (e.g. BFGS, Nelder-Mead simplex, Newton Conjugate Gradient, etc.) the module also contains
Step3: The simplex method is a simple way to minimize a fairly well-behaved function. It only requires function evaluations and is a good choice for simple minimization problems. However, because it does not use any gradient evaluations, it may take longer to find the minimum.
Broyden-Fletcher-Goldfarb-Shanno algorithm (method='BFGS')
In order to converge more quickly to the solution, this routine uses the gradient of the objective function. If the gradient is not given by the user, then it is estimated using first-differences. The Broyden-Fletcher-Golfarb-Shanno (BFGS) method typically requires fewer calls than the simplex algorithm even when the gradient must be estimated.
To demonstrate this algorithm, the Rosenbrock function is used again. The gradient of the Rosenbrock function is the vector
Step4: This gradient information is specified in the minimize function through the jac parameter
Step5: Machine learning libraries (e.g. Tensorflow, Theano, Torch etc.) will provide a similar interface. When they provide auto-differentiation capabilities, you will not need to worry about writing the derivative function yourself. You will need to provide the "forward" computational graph and an objective.
Black-box function optimization with skopt
Scikit-Optimize, or skopt, is a simple and efficient library to minimize (very) expensive and noisy black-box functions. It implements several methods for sequential model-based optimization.
Alternative libraries include Spearmint, PyBO, and Hyperopt.
Black-box algorithms do not need any knowledge of the gradient. These libraries provide algorithms that are more powerful and scale better than the Nelder-Mead simplex algorithm above. Modern black-box (or sequential model-based) optimization algorithms are increasingly popular for optimizing the hyperparameters (user-tuned "knobs") of machine learning models. We'll talk more about this later.
For now, just a brief example, which is taken from the skopt Bayesian Optimization tutorial
Step6: Let's assume the following noisy function $f$
Step7: In skopt, functions $f$ are assumed to take as input a 1D vector $x$ represented as an array-like and to return a scalar $f(x)$
Step8: Bayesian Optimization based on Gaussian Process regression is implemented in skopt.gp_minimize and can be carried out as follows
Step9: Accordingly, the approximated minimum is found to be
Step10: For further inspection of the results, attributes of the res named tuple provide the following information
Step11: Together these attributes can be used to visually inspect the results of the minimization, such as the convergence trace or the acquisition function at the last iteration | Python Code:
import scipy.optimize

help(scipy.optimize)
Explanation: Attribution: These examples are taken from the Scipy Tutorial
The scipy.optimize package provides several commonly used optimization algorithms. A detailed listing can be found by:
End of explanation
import numpy as np
from scipy.optimize import minimize
def rosen(x):
"""The Rosenbrock function"""
return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)
x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='nelder-mead',
options={'xtol': 1e-8, 'disp': True})
print(res.x)
Explanation: For Machine Learning, we are mainly interested in unconstrained minimization of multivariate scalar functions (typically where gradient information is available). In addition to several algorithms for unconstrained minimization of multivariate scalar functions (e.g. BFGS, Nelder-Mead simplex, Newton Conjugate Gradient, etc.) the module also contains:
- Global (brute-force) optimization routines
- Least-squares minimization (which we saw before in the Linear Algebra Notebook)
- Scalar univariate function minimizers and root finders; and
- Multivariate equation system solvers using a variety of algorithms
Unconstrained minimization of multivariate scalar functions (minimize)
The minimize function provides a common interface to unconstrained and constrained minimization algorithms for multivariate scalar functions. To demonstrate the minimization function, let's consider the problem of minimizing the Rosenbrock function of $N$ variables:
$$ f\left(\mathbf{x}\right)=\sum_{i=1}^{N-1}100\left(x_{i}-x_{i-1}^{2}\right)^{2}+\left(1-x_{i-1}\right)^{2}.$$
The minimum value of this function is 0 which is achieved when $x_i=1$.
Note that the Rosenbrock function and its derivatives are included in scipy.optimize. The implementations in the following provide examples of how to define an objective function as well as its Jacobian and Hessian functions.
Nelder-Mead Simplex algorithm (method='Nelder-Mead')
In the example below, the minimize routine is used with the Nelder-Mead simplex algorithm (selected through the method parameter):
End of explanation
# note the special handling of the exterior derivatives
def rosen_der(x):
xm = x[1:-1]
xm_m1 = x[:-2]
xm_p1 = x[2:]
der = np.zeros_like(x)
der[1:-1] = 200*(xm-xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1-xm)
der[0] = -400*x[0]*(x[1]-x[0]**2) - 2*(1-x[0])
der[-1] = 200*(x[-1]-x[-2]**2)
return der
Explanation: The simplex method is a simple way to minimize a fairly well-behaved function. It only requires function evaluations and is a good choice for simple minimization problems. However, because it does not use any gradient evaluations, it may take longer to find the minimum.
Broyden-Fletcher-Goldfarb-Shanno algorithm (method='BFGS')
In order to converge more quickly to the solution, this routine uses the gradient of the objective function. If the gradient is not given by the user, then it is estimated using first-differences. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method typically requires fewer function calls than the simplex algorithm, even when the gradient must be estimated.
To demonstrate this algorithm, the Rosenbrock function is used again. The gradient of the Rosenbrock function is the vector:
$$ \begin{eqnarray} \frac{\partial f}{\partial x_{j}} & = & \sum_{i=1}^{N}200\left(x_{i}-x_{i-1}^{2}\right)\left(\delta_{i,j}-2x_{i-1}\delta_{i-1,j}\right)-2\left(1-x_{i-1}\right)\delta_{i-1,j} \\ & = & 200\left(x_{j}-x_{j-1}^{2}\right)-400x_{j}\left(x_{j+1}-x_{j}^{2}\right)-2\left(1-x_{j}\right).\end{eqnarray}$$
This expression is valid for the interior derivatives. Special cases are:
$$ \begin{eqnarray} \frac{\partial f}{\partial x_{0}} & = & -400x_{0}\left(x_{1}-x_{0}^{2}\right)-2\left(1-x_{0}\right), \\ \frac{\partial f}{\partial x_{N-1}} & = & 200\left(x_{N-1}-x_{N-2}^{2}\right).\end{eqnarray} $$
A function which computes this gradient is:
End of explanation
res = minimize(rosen, x0, method='BFGS', jac=rosen_der,
options={'disp': True})
print(res.x)
Explanation: This gradient information is specified in the minimize function through the jac parameter:
End of explanation
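Since scipy.optimize also ships the Rosenbrock function together with its gradient and Hessian (rosen, rosen_der, rosen_hess), a Hessian-based method can be cross-checked with the built-ins; a quick sketch:
from scipy.optimize import rosen, rosen_der, rosen_hess

res_ncg = minimize(rosen, x0, method='Newton-CG',
                   jac=rosen_der, hess=rosen_hess,
                   options={'xtol': 1e-8, 'disp': True})
print(res_ncg.x)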
import numpy as np
from skopt import gp_minimize
Explanation: Machine learning libraries (e.g. Tensorflow, Theano, Torch etc.) will provide a similar interface. When they provide auto-differentiation capabilities, you will not need to worry about writing the derivative function yourself. You will need to provide the "forward" computational graph and an objective.
Black-box function optimization with skopt
Scikit-Optimize, or skopt, is a simple and efficient library to minimize (very) expensive and noisy black-box functions. It implements several methods for sequential model-based optimization.
Alternative libraries include Spearmint, PyBO, and Hyperopt.
Black-box algorithms do not need any knowledge of the gradient. These libraries provide algorithms that are more powerful and scale better than the Nelder-Mead simplex algorithm above. Modern black-box (or sequential model-based) optimization algorithms are increasingly popular for optimizing the hyperparameters (user-tuned "knobs") of machine learning models. We'll talk more about this later.
For now, just a brief example, which is taken from the skopt Bayesian Optimization tutorial:
End of explanation
noise_level = 0.1
def f(x, noise_level=noise_level):
return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) + np.random.randn() * noise_level
Explanation: Let's assume the following noisy function $f$:
End of explanation
import matplotlib.pyplot as plt

# Plot f(x) + contours
x = np.linspace(-2, 2, 400).reshape(-1, 1)
fx = [f(x_i, noise_level=0.0) for x_i in x]
plt.plot(x, fx, "r--", label="True (unknown)")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate(([fx_i - 1.9600 * noise_level for fx_i in fx],
[fx_i + 1.9600 * noise_level for fx_i in fx[::-1]])),
alpha=.2, fc="r", ec="None")
plt.legend()
plt.grid()
Explanation: In skopt, functions $f$ are assumed to take as input a 1D vector $x$ represented as an array-like and to return a scalar $f(x)$:
End of explanation
res = gp_minimize(f, # the function to minimize
[(-2.0, 2.0)], # the bounds on each dimension of x
acq_func="EI", # the acquisition function
n_calls=15, # the number of evaluations of f
n_random_starts=5, # the number of random initialization points
noise=0.1**2, # the noise level (optional)
random_state=123) # the random seed
Explanation: Bayesian Optimization based on Gaussian Process regression is implemented in skopt.gp_minimize and can be carried out as follows:
End of explanation
print "x^*=%.4f, f(x^*)=%.4f" % (res.x[0], res.fun)
Explanation: Accordingly, the approximated minimum is found to be:
End of explanation
print(res)
Explanation: For further inspection of the results, attributes of the res named tuple provide the following information:
x [float]: location of the minimum.
fun [float]: function value at the minimum.
models: surrogate models used for each iteration.
x_iters [array]: location of function evaluation for each iteration.
func_vals [array]: function value for each iteration.
space [Space]: the optimization space.
specs [dict]: parameters passed to the function.
End of explanation
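Before plotting, the evaluation history can also be inspected directly through the x_iters and func_vals attributes listed above, for example:
# Print each evaluated point together with its (noisy) function value.
for x_i, f_i in zip(res.x_iters, res.func_vals):
    print(f"x = {x_i[0]:+.4f}   f(x) = {f_i:+.4f}")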
from skopt.plots import plot_convergence
plot_convergence(res);
Explanation: Together these attributes can be used to visually inspect the results of the minimization, such as the convergence trace or the acquisition function at the last iteration:
End of explanation |
2,696 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Portfolio Optimization Using Signals
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
We are interested in combining multiple stocks into a single investment portfolio. This can be done naively by having equal weights for all the stocks, or we can try and do it more intelligently so as to maximize some performance measure, which is called portfolio optimization.
Academia teaches only one method of portfolio optimization in investment finance
Step1: Load Data
We now define and load all the financial data we will be using.
Step2: Data is also included for the stocks GPC and WMT but doing the analysis in the first paper on those two stocks shows that their P/Sales ratios are not a good predictor for their returns, so those two stocks are omitted here.
Step3: Predictive Signals
We saw in the first paper that the P/Sales ratio was a strong predictor for the long-term returns of the S&P 500 and some individual stocks, so we will be using the P/Sales ratio as the predictive signal for those stocks.
For the S&P 400 and S&P 600 indices we do not have the P/Sales data but we do have the Dividend Yield, which is also a good predictor for the long-term returns of those indices. You should confirm this by doing the analysis in the first paper for those two indices.
It is extremely important that you analyse the relationship between a given signal and the stock returns before you use that signal, because the optimizer will try and uncover a relationship whether there really is one or not. It may work very well on the training-set, but it may not generalize to the future.
You should therefore always do analysis like the first paper before you use a predictive signal!
Similarly, you should be careful that your mathematical model only maps signals to stock-weights where the signal has a causal relationship with that particular stock. For example, a fully-connected neural network would allow all signals to be used to determine the weights for all the stocks, but the P/Sales for one company clearly has nothing to do with the future return of another company's stock - except for perhaps a historical coincidence in the training-set, which is unlikely to occur again in the future.
Note that we make a simple implementation here which only allows one signal per stock.
Step4: Daily Returns
We need the daily returns for all the stocks because we will rebalance the portfolio every day according to the stock-weights produced by our portfolio model.
We use interpolated data so we have the stock-prices and their daily returns for all days including weekends and holidays. This makes the programming easier for this demonstration, but a real implementation should probably only use real data points to avoid distortions.
Step5: Weight Bounds
We only allow for so-called "long" investing without margin, which means that the stock-weights are bounded between 0.0 and 1.0, and their sum must be less or equal to 1.0.
Step6: Furthermore, we limit the weights for the individual stocks to 0.2 (or 20%) of the portfolio, while allowing the stock-indices to have weights up to 1.0 (or 100%) of the portfolio. This is an attempt to regularize the portfolio model so it does not overfit to the training-set. Ideally we would have many more stocks to choose from so the stock-weights could be limited to e.g. 5% of the portfolio, but when we only have a small number of stocks, that would essentially mean that the portfolio would mostly be allocted to the indices and hence track those closely.
Step7: Split Training- and Test-Sets
We now split the data-set evenly into a training-set and a test-set.
Step8: Portfolio with Equal Weights
The most basic portfolio model uses equal weights for all the stocks and indices. It serves as a baseline for comparison to more advanced portfolio models.
Step9: Portfolio with Fixed Weights
Another baseline for use in comparison, is a portfolio model where the stock-weights are held fixed for all days, but the stock-weights need not be equal as they are found through optimization.
Step10: Portfolio with Adaptive Weights
The portfolio model with adaptive weights uses the signals for the stocks to determine the stock-weights in the portfolio. We use a simple linear function wrapped in a sigmoid-function to softly limit the result between 0.0 and 1.0. For example, the weight for the CLX stock is
Step11: Print Adaptive Weight Formulas
We can print the linear formulas for the stock-weights that we have found in the optimization above. We ignore the sigmoid and other scalings here and show more detailed plots of the stock-weights further below.
Step13: Plot Adaptive Weights
Step14: We can now plot the stock-weights from the adaptive portfolio model. There is a sub-plot for each stock or index. The black fill is the stock-weight with its scale given on the y-axis. The red line is the predictive signal used for that particular stock, and its scale is always between 0 and 1 because we show the normalized signals.
All stocks and indices use the P/Sales ratio as the signal, except for the S&P 400 and S&P 600 which use the Dividend Yield. We expect a high P/Sales to correspond to a low stock-weight, because a high P/Sales means the stock is expensive and its future returns are most likely low. Conversely, we expect a high Dividend Yield to correspond to a high stock-weight, because a high Dividend Yield means the stock is cheap so its future returns are most likely high. See the first paper in this series for more details.
Because the stock-weights must sum to 1.0 or less, there is internal competition amongst the stocks, so there is not always a clear relationship between the stock's signal and its weight. But in general it looks like our portfolio model has learned to map the signals to stock-weights.
We first show the stock-weights and signals for the training-set
Step15: We now show the stock-weights and signals for the test-set. This is "out-of-sample" data that has not been seen during optimization of the portfolio model. We see that the portfolio model still scales the stock-weights according to their signals, so it seems to do what we expect.
Note that the weights will most likely change every time you run this, because we are using a stochastic heuristic optimizer to find the parameters for the portfolio model.
Step17: Compare Returns
Step18: We can now plot a comparison of the different portfolio models to the stock indices. First we show it for the training-set, so this is the period of data used during optimization of the portfolio models. We see that the equal-weight portfolio performed better than all three stock indices. The fixed-weight portfolio performed even better, and the adaptive-weight portfolio performed best of all.
Step19: Let us now show the same comparison but for the test-set. This is out-of-sample data that was not seen during training of the portfolio models. The results change slightly each time you run this, because the optimization is heuristic and stochastic. But in general, the adaptive-weight portfolio performs slightly better than the fixed-weight and equal-weight portfolios. But the difference is probably too small to conclude the adaptive-weight portfolio is better, especially when taking trading costs and taxes into consideration, as discussed below. | Python Code:
%matplotlib inline
# Imports from Python packages.
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
# Imports from FinanceOps.
from data_keys import *
from data import load_stock_data, load_index_data
from portfolio import EqualWeights, FixedWeights, AdaptiveWeights
from returns import daily_returns
Explanation: Portfolio Optimization Using Signals
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
We are interested in combining multiple stocks into a single investment portfolio. This can be done naively by having equal weights for all the stocks, or we can try and do it more intelligently so as to maximize some performance measure, which is called portfolio optimization.
Academia teaches only one method of portfolio optimization in investment finance: Markowitz Portfolio Theory, also known as Mean-Variance portfolio optimization because it seeks to maximize the portfolio's mean return while minimizing its variance. However, there are several problems with this method:
Firstly, the method assumes that historical returns and their covariances will continue in the future and does not use any predictive signals to adjust the future return distributions.
Secondly and perhaps more importantly, the method uses the variance (or equivalently the standard deviation) of the return distribution as a risk measure that must be minimized. But this is a fundamental misunderstanding of both finance, statistics and even the English language, because the variance measures the spread of a distribution, while the word "risk" of course means "the chance of injury or loss" which is not measured by the variance.
I have previously made YouTube videos explaining this problem here and here. The following cartoon is a funny but very accurate criticism of this fatal flaw of Markowitz portfolio theory:
In the first paper of this series, we showed that the P/Sales ratio could be used as a predictor for long-term returns of the S&P 500 and some individual stocks. In this paper we will develop a method for allocating portfolios using such predictive signals. Although we cannot test the method properly because of a lack of financial data, we do show that the stock-weights are allocated as we would expect from the predictive signals, which indicates that the method could work quite well if we had more financial data. The method should also work with signals predicting short and mid-term returns.
Flowchart
The overall idea is to create a portfolio model which is basically just a mathematical function that maps the predictive signals to stock-weights. The portfolio model has a number of parameters that are found using a heuristic optimizer so as to result in good investment performance.
The flowchart shows roughly how the optimizer produces new model parameters, which are then used by the portfolio model to map the predictive signals to stock-weights, which are then multiplied with the stock returns to produce the cumulative value of the portfolio. We can then calculate various performance measures from this and feed the result back into the optimizer to produce a new set of model parameters to try. This loop is repeated until satisfactory model parameters are found.
Portfolio Models
We will use three different portfolio models:
Equal-weights portfolio which does not require any optimization because its stock-weights are all equal, so the portfolio is merely rebalanced with equal weights every day.
Fixed-weights portfolio which uses fixed but possibly non-equal stock-weights for the daily rebalancing. The best stock-weights are found through optimization as shown in the flowchart.
Adaptive-weights portfolio which adapts the stock-weights using the predictive signals. The mapping is a mathematical function whose parameters are found through optimization as shown in the flowchart.
Direct vs. Indirect Portfolio Allocation
We might consider this method to be a form of "direct" portfolio allocation because we are mapping directly from the predictive signals to the stock-weights.
An "indirect" method of portfolio allocation would first have to estimate the return distributions for the stocks and then use those distributions to find an optimal portfolio allocation.
The advantage of our "direct" method, is that it avoids the complexity of having to model the return distributions, which would probably also have significant estimation errors. It also avoids a complicated optimization problem, which would have to be solved every time the portfolio should be rebalanced.
Instead our "direct" way of doing portfolio allocation simply maps the predictive signals to stock-weights in a manner that has been found to work well on the training-data.
Fitness Measure
The fitness function measures how well the portfolio model performs on the training-set. This is used to guide the optimization procedure to find better parameters for the portfolio model.
There are many ways of defining the fitness function. It is implemented in the function portfolio.Model._fitness() which gives an example. The main fitness measure is the mean-log return for all 5-year investment periods, which is also known as the Kelly Criterion. The main fitness is then severely penalized if more than 7% of all 1-year investment periods had losses. This means we strongly prefer portfolio models with few 1-year losses, possibly at the cost of lower long-term returns.
There are infinitely many ways of defining the fitness function. This can be used to shape the return distribution in different ways. Because we use a heuristic optimizer we do not need a gradient for the fitness function, so it is very easy to implement and experiment with.
Python Imports
This Jupyter Notebook is implemented in Python v. 3.6 and requires various packages for numerical computations and plotting. See the installation instructions in the README-file.
End of explanation
# Ticker-names for the stock indices.
ticker_SP500 = "S&P 500"
ticker_SP400 = "S&P 400"
ticker_SP600 = "S&P 600"
tickers_indices = [ticker_SP500, ticker_SP400, ticker_SP600]
Explanation: Load Data
We now define and load all the financial data we will be using.
End of explanation
# Ticker-names for the stocks.
tickers_stocks = [ 'CLX', 'CPB', 'DE', 'DIS', 'GIS',
'HSY', 'JNJ', 'K', 'PG']
# List of all tickers for the indices and stocks.
tickers = tickers_indices + tickers_stocks
# Load the financial data for the stock indices.
df_SP500 = load_index_data(ticker=ticker_SP500)
df_SP400 = load_index_data(ticker=ticker_SP400,
sales=False, book_value=False)
df_SP600 = load_index_data(ticker=ticker_SP600,
sales=False, book_value=False)
# DataFrames for all stock indices.
dfs_indices = [df_SP500, df_SP400, df_SP600]
# Load financial data for all stocks to a list of DataFrames.
dfs_stocks = [load_stock_data(ticker=ticker)
for ticker in tickers_stocks]
# List of all DataFrames for the indices and stocks.
dfs = dfs_indices + dfs_stocks
# Total number of stocks and indices.
num_stocks = len(dfs)
num_stocks
Explanation: Data is also included for the stocks GPC and WMT but doing the analysis in the first paper on those two stocks shows that their P/Sales ratios are not a good predictor for their returns, so those two stocks are omitted here.
End of explanation
# Predictive signals for the stock indices.
signals_indices = [df_SP500[PSALES],
df_SP400[DIVIDEND_YIELD],
df_SP600[DIVIDEND_YIELD]]
# Predictive signals for the stocks.
signals_stocks = [df[PSALES] for df in dfs_stocks]
# Combine all the signals into a single list.
signals = signals_indices + signals_stocks
# Create a Pandas DataFrame.
df_signals = pd.concat(signals, axis=1)
# Remove rows with missing data.
df_signals.dropna(inplace=True)
# Show the top rows of the valid signals.
df_signals.head()
# Period for which we have valid signals.
start_date, end_date = df_signals.index[[0, -1]]
# The raw unscaled signals.
signals_raw = df_signals.values
# The signals could have widely different ranges,
# so we scale the signals to be between 0.0 and 1.0
signals_scaler = MinMaxScaler()
signals_scaled = signals_scaler.fit_transform(signals_raw)
Explanation: Predictive Signals
We saw in the first paper that the P/Sales ratio was a strong predictor for the long-term returns of the S&P 500 and some individual stocks, so we will be using the P/Sales ratio as the predictive signal for those stocks.
For the S&P 400 and S&P 600 indices we do not have the P/Sales data but we do have the Dividend Yield, which is also a good predictor for the long-term returns of those indices. You should confirm this by doing the analysis in the first paper for those two indices.
It is extremely important that you analyse the relationship between a given signal and the stock returns before you use that signal, because the optimizer will try and uncover a relationship whether there really is one or not. It may work very well on the training-set, but it may not generalize to the future.
You should therefore always do analysis like the first paper before you use a predictive signal!
Similarly, you should be careful that your mathematical model only maps signals to stock-weights where the signal has a causal relationship with that particular stock. For example, a fully-connected neural network would allow all signals to be used to determine the weights for all the stocks, but the P/Sales for one company clearly has nothing to do with the future return of another company's stock - except for perhaps a historical coincidence in the training-set, which is unlikely to occur again in the future.
Note that we make a simple implementation here which only allows one signal per stock.
End of explanation
# Add a day to the period because we most likely have the
# Total Return data for that day as well, and that means
# we calculate the daily return for the end_date as well.
end_date_plus1 = end_date + pd.DateOffset(days=1)
# Calculate daily returns using the Total Return for all stocks.
daily_rets = np.array([daily_returns(df, start_date, end_date_plus1)
for df in dfs])
# Transpose the 2-dim array.
daily_rets = daily_rets.T
daily_rets.shape
# Remove the last row which only contains NAN.
daily_rets = daily_rets[0:-1, :]
daily_rets.shape
Explanation: Daily Returns
We need the daily returns for all the stocks because we will rebalance the portfolio every day according to the stock-weights produced by our portfolio model.
We use interpolated data so we have the stock-prices and their daily returns for all days including weekends and holidays. This makes the programming easier for this demonstration, but a real implementation should probably only use real data points to avoid distortions.
End of explanation
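For intuition, one common definition of the daily return is the day-over-day change of the Total Return series, r_t = TR_t / TR_{t-1} - 1. The helper daily_returns() may use a slightly different convention (e.g. ratios instead of ratios minus one), so the sketch below is for illustration only:
# Illustrative sketch (assumption about the return convention, see note above).
tot_ret_sp500 = df_SP500[TOTAL_RETURN][start_date:end_date_plus1]
daily_rets_sp500 = tot_ret_sp500.pct_change().dropna()
print(daily_rets_sp500.head())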
min_weights = np.zeros(num_stocks, dtype=np.float)
max_weights = np.ones(num_stocks, dtype=np.float)
Explanation: Weight Bounds
We only allow for so-called "long" investing without margin, which means that the stock-weights are bounded between 0.0 and 1.0, and their sum must be less or equal to 1.0.
End of explanation
max_weights[len(dfs_indices):] = 0.2
Explanation: Furthermore, we limit the weights for the individual stocks to 0.2 (or 20%) of the portfolio, while allowing the stock-indices to have weights up to 1.0 (or 100%) of the portfolio. This is an attempt to regularize the portfolio model so it does not overfit to the training-set. Ideally we would have many more stocks to choose from so the stock-weights could be limited to e.g. 5% of the portfolio, but when we only have a small number of stocks, that would essentially mean that the portfolio would mostly be allocted to the indices and hence track those closely.
End of explanation
# Total number of data-points.
num_data = len(daily_rets)
# Number of data-points in the training-set.
num_train = int(num_data * 0.5)
# Indices for the training-set.
# These are used to lookup dates, daily returns, signals, etc.
idx_train = range(0, num_train)
# Indices for the test-set.
idx_test = range(num_train, num_data)
# All dates for the entire period.
dates = df_SP500[start_date:end_date].index
# Dates for the training-set.
dates_train = dates[0:num_train]
# Dates for the test-set.
dates_test = dates[num_train:]
# Daily returns for the training-set.
daily_rets_train = daily_rets[0:num_train]
# Daily returns for the test-set.
daily_rets_test = daily_rets[num_train:]
# Signals for the training-set.
signals_train = signals_scaled[0:num_train]
# Signals for the test-set.
signals_test = signals_scaled[num_train:]
Explanation: Split Training- and Test-Sets
We now split the data-set evenly into a training-set and a test-set.
End of explanation
# Create portfolio model for equal weights.
portfolio_equal = EqualWeights(num_stocks=num_stocks, use_cash=False)
# Get the weights.
weights_equal, weights_cash_equal = portfolio_equal.get_weights(signals_test)
weights_equal
weights_cash_equal
Explanation: Portfolio with Equal Weights
The most basic portfolio model uses equal weights for all the stocks and indices. It serves as a baseline for comparison to more advanced portfolio models.
End of explanation
%%time
# Create portfolio model for fixed weights and
# find the weights that perform best on the training-set.
portfolio_fixed = FixedWeights(signals_train=signals_train,
daily_rets_train=daily_rets_train,
min_weights=min_weights,
max_weights=max_weights)
# Get the weights.
weights_fixed, weights_cash_fixed = portfolio_fixed.get_weights(signals_test)
# Print the stock-weights. The remaining cash-weight is not shown.
for ticker, weight in zip(tickers, weights_fixed[0]):
print("{0:7} {1:7.2%}".format(ticker, weight))
print("Sum = {0:7.2%}".format(np.sum(weights_fixed)))
Explanation: Portfolio with Fixed Weights
Another baseline for use in comparison, is a portfolio model where the stock-weights are held fixed for all days, but the stock-weights need not be equal as they are found through optimization.
End of explanation
%%time
# Create portfolio model for adaptive weights and find the
# model-parameters that perform best on the training-set.
portfolio_adapt = AdaptiveWeights(signals_train=signals_train,
daily_rets_train=daily_rets_train,
min_weights=min_weights,
max_weights=max_weights)
Explanation: Portfolio with Adaptive Weights
The portfolio model with adaptive weights uses the signals for the stocks to determine the stock-weights in the portfolio. We use a simple linear function wrapped in a sigmoid-function to softly limit the result between 0.0 and 1.0. For example, the weight for the CLX stock is:
$$
Weight_{CLX} = Sigmoid( a_{CLX} \cdot Signal_{CLX} + b_{CLX} )
$$
During optimization we are trying to find the parameters $a_{CLX}$ and $b_{CLX}$ that maximize the investment performance according to our fitness-function. We are trying to find the parameters for all stocks in the portfolio simultaneously.
There is some additional scaling on the weights to limit them between the valid boundaries, as well as ensuring that the weights for the entire portfolio are less or equal to 1.
End of explanation
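To make the mapping concrete, here is a minimal sketch of how a single normalized signal is turned into a weight; the parameter values below are made up for illustration, while the fitted values are printed from the model in the next cell:
# Minimal sketch of the signal -> weight mapping described above.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a_clx, b_clx = -5.0, 2.0   # hypothetical parameters (not the fitted ones)
signal_clx = 0.4           # a normalized P/Sales signal between 0 and 1
print(sigmoid(a_clx * signal_clx + b_clx))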
for ticker, a, b in zip(tickers,
portfolio_adapt._a,
portfolio_adapt._b):
msg = "{0:8} weight = {1:6.2f} x signal + {2:6.2f}"
print(msg.format(ticker, a, b))
Explanation: Print Adaptive Weight Formulas
We can print the linear formulas for the stock-weights that we have found in the optimization above. We ignore the sigmoid and other scalings here and show more detailed plots of the stock-weights further below.
End of explanation
def plot_weights_signals(idx):
"""Plot stock-weights (black fills) and signals (red lines)
for the adaptive portfolio model, for the given period.
:param idx: List of integer indices into dates, signals, etc.
:return: None.
"""
# Get the data for the period given by the indices.
dates_idx = dates[idx]
signals_idx = signals_scaled[idx, :]
# Setup plotting.
# A plot for each stock and a plot for the cash-weight.
fig, axes = plt.subplots(num_stocks+1, 1, sharex=True,
figsize=(10, 20))
# Get the weights for the stock and cash for this period.
weights, weights_cash = portfolio_adapt.get_weights(signals_idx)
# Make a plot for each stock-weight and signal.
for i, ax in enumerate(axes[:-1]):
# The range of the y-axis is set to the max
# allowed weight for this stock.
ylim = max_weights[i]
ax.set_ylim([0, ylim])
# Plot the stock-weights as a solid black fill.
ax.fill_between(dates_idx, 0.0, weights[:, i],
color="black", edgecolor="black")
# Plot the signal as a red line.
# Note that signals_scaled is between 0.0 and 1.0
# so we scale it by the ylim to use the full range
# of the y-axis.
signal = signals_scaled[idx, i] * ylim
ax.plot(dates_idx, signal, color="red")
# Plot a grid.
ax.grid()
# Print ticker-name in the middle of each sub-plot.
ax.text(0.5, 0.9, tickers[i], color="green", weight="bold",
ha="center", va="center", transform=ax.transAxes)
# Plot the cash weights. This is similar to a stock-plot.
ax = axes[-1]
ax.set_ylim([0, 1])
ax.fill_between(dates_idx, 0.0, weights_cash,
color="black", edgecolor="black")
ax.text(0.5, 0.9, "Cash", color="green", weight="bold",
ha="center", va="center", transform=ax.transAxes)
ax.grid()
# Plot with a compact layout.
plt.tight_layout()
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Plot Adaptive Weights
End of explanation
plot_weights_signals(idx=idx_train)
Explanation: We can now plot the stock-weights from the adaptive portfolio model. There is a sub-plot for each stock or index. The black fill is the stock-weight with its scale given on the y-axis. The red line is the predictive signal used for that particular stock, and its scale is always between 0 and 1 because we show the normalized signals.
All stocks and indices use the P/Sales ratio as the signal, except for the S&P 400 and S&P 600 which use the Dividend Yield. We expect a high P/Sales to correspond to a low stock-weight, because a high P/Sales means the stock is expensive and its future returns are most likely low. Conversely, we expect a high Dividend Yield to correspond to a high stock-weight, because a high Dividend Yield means the stock is cheap so its future returns are most likely high. See the first paper in this series for more details.
Because the stock-weights must sum to 1.0 or less, there is internal competition amongst the stocks, so there is not always a clear relationship between the stock's signal and its weight. But in general it looks like our portfolio model has learned to map the signals to stock-weights.
We first show the stock-weights and signals for the training-set:
End of explanation
plot_weights_signals(idx=idx_test)
Explanation: We now show the stock-weights and signals for the test-set. This is "out-of-sample" data that has not been seen during optimization of the portfolio model. We see that the portfolio model still scales the stock-weights according to their signals, so it seems to do what we expect.
Note that the weights will most likely change every time you run this, because we are using a stochastic heuristic optimizer to find the parameters for the portfolio model.
End of explanation
def plot_comparison(title, idx, tickers, dfs):
"""Plot comparison of Total Return for the given tickers
and the different portfolios: Equal, fixed and adaptive.
:param title: Title of the plot.
:param idx: List of integer indices into dates, signals, etc.
:param tickers: List of strings for the tickers.
:param dfs: List of DataFrames corresponding to the tickers.
"""
# Create a single plot.
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(211)
# Get the data for the period given by the indices.
dates_idx = dates[idx]
daily_rets_idx = daily_rets[idx]
signals_idx = signals_scaled[idx, :]
# Plot the Total Return for the given stocks.
for ticker, df in zip(tickers, dfs):
# Get the Total Return for the given period.
tot_ret = df[TOTAL_RETURN][dates_idx].dropna()
# Normalize to begin at 1.0
tot_ret /= tot_ret[0]
# Plot it.
ax.plot(tot_ret, label=ticker)
# Plot portfolio value using equal weights.
value = portfolio_equal.value(daily_rets=daily_rets_idx)
ax.plot(dates_idx, value, label="Equal Weights")
# Plot portfolio value using fixed weights.
value = portfolio_fixed.value(daily_rets=daily_rets_idx)
ax.plot(dates_idx, value, label="Fixed Weights")
# Plot portfolio value using adaptive weights.
value = portfolio_adapt.value(daily_rets=daily_rets_idx,
signals=signals_idx)
ax.plot(dates_idx, value, label="Adapt Weights")
# Set the axis-labels.
ax.set_ylabel("Total Return")
# Add legend to plot.
ax.legend(loc=0)
# Add grid to plot.
ax.grid()
# Set the plot's title.
ax.set_title(title)
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Compare Returns
End of explanation
plot_comparison(title="Training-Set (In-Sample)", idx=idx_train,
tickers=[ticker_SP500, ticker_SP400, ticker_SP600],
dfs=[df_SP500, df_SP400, df_SP600])
Explanation: We can now plot a comparison of the different portfolio models to the stock indices. First we show it for the training-set, so this is the period of data used during optimization of the portfolio models. We see that the equal-weight portfolio performed better than all three stock indices. The fixed-weight portfolio performed even better, and the adaptive-weight portfolio performed best of all.
End of explanation
plot_comparison(title="Test-Set (Out-of-Sample)", idx=idx_test,
tickers=[ticker_SP500, ticker_SP400, ticker_SP600],
dfs=[df_SP500, df_SP400, df_SP600])
Explanation: Let us now show the same comparison but for the test-set. This is out-of-sample data that was not seen during training of the portfolio models. The results change slightly each time you run this, because the optimization is heuristic and stochastic. But in general, the adaptive-weight portfolio performs slightly better than the fixed-weight and equal-weight portfolios. But the difference is probably too small to conclude the adaptive-weight portfolio is better, especially when taking trading costs and taxes into consideration, as discussed below.
End of explanation |
2,697 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Convolutional Neural Networks
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: Download and prepare the CIFAR10 dataset
The CIFAR10 dataset contains 60,000 color images in 10 classes, with 6,000 images in each class. The dataset is divided into 50,000 training images and 10,000 testing images. The classes are mutually exclusive and there is no overlap between them.
Step3: Verify the data
To verify that the dataset looks correct, let's plot the first 25 images from the training set and display the class name below each image.
Step4: Create the convolutional base
The six lines of code below define the convolutional base using a common pattern
Step5: Let's display the architecture of the model so far.
Step6: Above, you can see that the output of every Conv2D and MaxPooling2D layer is a 3D tensor of shape (height, width, channels). The width and height dimensions tend to shrink as you go deeper in the network. The number of output channels for each Conv2D layer is controlled by the first argument (e.g.
Step7: Here's the complete architecture of the model.
Step8: The network summary shows that the (4, 4, 64) outputs were flattened into vectors of shape (1024) before going through two Dense layers.
Compile and train the model
Step9: Evaluate the model | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
Explanation: Convolutional Neural Networks
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tutorials/images/cnn"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/images/cnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colabで実行</a>
</td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/images/cnn.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/images/cnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
This tutorial demonstrates training a simple Convolutional Neural Network (CNN) to classify CIFAR images. Because this tutorial uses the Keras Sequential API, creating and training the model takes just a few lines of code. <br>Note: Training runs much faster on a GPU. If you are running this notebook in Colab, you can enable a free GPU via Edit -> Notebook settings -> Hardware accelerator -> GPU.
Import TensorFlow
End of explanation
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0
Explanation: Download and prepare the CIFAR10 dataset
The CIFAR10 dataset contains 60,000 color images in 10 classes, with 6,000 images in each class. The dataset is divided into 50,000 training images and 10,000 testing images. The classes are mutually exclusive and there is no overlap between them.
End of explanation
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i])
# The CIFAR labels happen to be arrays,
# which is why you need the extra index
plt.xlabel(class_names[train_labels[i][0]])
plt.show()
Explanation: Verify the data
To verify that the dataset looks correct, let's plot the first 25 images from the training set and display the class name below each image.
End of explanation
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
Explanation: Create the convolutional base
The six lines of code below define the convolutional base using a common pattern: a stack of Conv2D and MaxPooling2D layers.
As input, a CNN takes tensors of shape (image_height, image_width, color_channels), ignoring the batch size. If you are new to these dimensions, color_channels refers to (R,G,B). In this example, you will configure the CNN to process inputs of shape (32, 32, 3), which is the format of CIFAR images. You can do this by passing the argument input_shape to the first layer.
End of explanation
model.summary()
Explanation: Let's display the architecture of the model so far.
End of explanation
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
Explanation: Above, you can see that the output of every Conv2D and MaxPooling2D layer is a 3D tensor of shape (height, width, channels). The width and height dimensions tend to shrink as you go deeper in the network. The number of output channels for each Conv2D layer is controlled by the first argument (e.g. 32 or 64). Typically, as the width and height shrink, you can afford (computationally) to add more output channels in each Conv2D layer.
Add Dense layers on top
To complete the model, you will feed the last output tensor from the convolutional base (of shape (3, 3, 64)) into one or more Dense layers to perform classification. Dense layers take vectors (1D) as input, while the current output is a 3D tensor. First, flatten (or unroll) the 3D output to 1D, then add one or more Dense layers on top. CIFAR has 10 output classes, so the final Dense layer has 10 outputs.
End of explanation
model.summary()
Explanation: Here's the complete architecture of the model.
End of explanation
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_images, train_labels, epochs=10,
validation_data=(test_images, test_labels))
Explanation: The network summary shows that the (4, 4, 64) output was flattened into vectors of shape (1024) before going through two Dense layers.
Compile and train the model
End of explanation
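# Optional aside: the final Dense layer returns raw logits (the loss above is built
# with from_logits=True), so to read off class probabilities we append a Softmax
# layer. This reuses the model, test_images and class_names defined earlier.
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
example_probs = probability_model.predict(test_images[:5])
for probs in example_probs:
    print(class_names[probs.argmax()], float(probs.max()))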
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)
Explanation: Evaluate the model
End of explanation |
2,698 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Boosting to Uniformity
In physical applications we frequently need to achieve uniformity of predictions along some features.
For instance, when testing for the existence of a new particle, we need the classifier to be uniform in background along the mass (otherwise one can get a false discovery due to a peaking background).
This notebook contains a comparison of classifiers. The target is to obtain a flat efficiency in signal (without significantly losing classification quality) in the Dalitz features.
The classifiers compared are
plain GradientBoosting
uBoost
gradient boosting with knn-Ada loss (UGB+knnAda)
gradient boosting with FlatnessLoss (UGB+FlatnessLoss)
We use the dataset from the uBoost paper for demonstration purposes.
We have plenty of data here, so the results are quite stable
Step1: Loading data
Step3: Distribution of events in different files in the Dalitz features
As we can see, the background is distributed mostly in the corners of the Dalitz plot, <br />
and for traditional classifiers this results in poor signal efficiency in the corners.
Step4: Preparation of train/test datasets
Step5: Setting up classifiers, training
Step6: uBoost training takes a long time, so we reduce the number of efficiency_steps, use prediction smoothing and run uBoost in threads
Step7: Let's look at the results of training
dependence of quality on the number of trees built (ROC AUC - area under the ROC curve; higher is better)
Step8: SDE (squared deviation of efficiency) learning curve
SDE vs the number of trees built. SDE is a metric of non-uniformity; lower is better.
Step9: CvM learning curve
CvM is a metric of non-uniformity based on the Cramer-von Mises distance. We use the kNN-based version here.
Step10: ROC curves after training
Step11: Signal efficiency
The global cut corresponds to an average signal efficiency of 0.5. In the ideal case the picture should be white.
Step12: the same for global efficiency = 0.7 | Python Code:
# downloading data
!wget -O ../data/dalitzdata.root -nc https://github.com/arogozhnikov/hep_ml/blob/data/data_to_download/dalitzdata.root?raw=true
%pylab inline
import pandas, numpy
from sklearn.model_selection import train_test_split  # shadowed by the hep_ml train_test_split imported below
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
# this wrapper makes it possible to train on subset of features
from rep.estimators import SklearnClassifier
from hep_ml.commonutils import train_test_split
from hep_ml import uboost, gradientboosting as ugb, losses
Explanation: Boosting to Uniformity
In physical applications we frequently need to achieve uniformity of predictions along some features.
For instance, when testing for the existence of a new particle, we need the classifier to be uniform in background along the mass (otherwise one can get a false discovery due to a peaking background).
This notebook contains a comparison of classifiers. The target is to obtain a flat efficiency in signal (without significantly losing classification quality) in the Dalitz features.
The classifiers compared are
plain GradientBoosting
uBoost
gradient boosting with knn-Ada loss (UGB+knnAda)
gradient boosting with FlatnessLoss (UGB+FlatnessLoss)
We use the dataset from the uBoost paper for demonstration purposes.
We have plenty of data here, so the results are quite stable
End of explanation
import root_numpy
used_columns = ["Y1", "Y2", "Y3", "M2AB", "M2AC"]
data = pandas.DataFrame(root_numpy.root2array('../data/dalitzdata.root', treename='tree'))
labels = data['labels']
data = data.drop('labels', axis=1)
Explanation: Loading data
End of explanation
def plot_distribution(data_frame, var_name1='M2AB', var_name2='M2AC', bins=40):
    """Plot a 2D histogram of two Dalitz variables."""
    pylab.hist2d(data_frame[var_name1], data_frame[var_name2], bins=bins, cmap=cm.Blues)
    pylab.xlabel(var_name1)
    pylab.ylabel(var_name2)
    pylab.colorbar()
pylab.figure(figsize=(12, 6))
subplot(1, 2, 1), pylab.title("signal"), plot_distribution(data[labels==1])
subplot(1, 2, 2), pylab.title("background"), plot_distribution(data[labels==0])
pass
Explanation: Distribution of events in different files in the Dalitz features
As we can see, the background is distributed mostly in the corners of the Dalitz plot, <br />
and for traditional classifiers this results in poor signal efficiency in the corners.
End of explanation
trainX, testX, trainY, testY = train_test_split(data, labels, random_state=42)
Explanation: Preparation of train/test datasets
End of explanation
uniform_features = ["M2AB", "M2AC"]
train_features = ["Y1", "Y2", "Y3"]
n_estimators = 150
base_estimator = DecisionTreeClassifier(max_depth=4)
Explanation: Setting up classifiers, training
End of explanation
from rep.metaml import ClassifiersFactory
classifiers = ClassifiersFactory()
base_ada = GradientBoostingClassifier(max_depth=4, n_estimators=n_estimators, learning_rate=0.1)
classifiers['AdaBoost'] = SklearnClassifier(base_ada, features=train_features)
knnloss = ugb.KnnAdaLossFunction(uniform_features, knn=10, uniform_label=1)
ugbKnn = ugb.UGradientBoostingClassifier(loss=knnloss, max_depth=4, n_estimators=n_estimators,
learning_rate=0.4, train_features=train_features)
classifiers['uGB+knnAda'] = SklearnClassifier(ugbKnn)
uboost_clf = uboost.uBoostClassifier(uniform_features=uniform_features, uniform_label=1,
base_estimator=base_estimator,
n_estimators=n_estimators, train_features=train_features,
efficiency_steps=12, n_threads=4)
classifiers['uBoost'] = SklearnClassifier(uboost_clf)
flatnessloss = ugb.KnnFlatnessLossFunction(uniform_features, fl_coefficient=3., power=1.3, uniform_label=1)
ugbFL = ugb.UGradientBoostingClassifier(loss=flatnessloss, max_depth=4,
n_estimators=n_estimators,
learning_rate=0.1, train_features=train_features)
classifiers['uGB+FL'] = SklearnClassifier(ugbFL)
classifiers.fit(trainX, trainY, parallel_profile='threads-4')
pass
Explanation: uBoost training takes a long time, so we reduce the number of efficiency_steps, use prediction smoothing and run uBoost in threads
End of explanation
from rep.report.metrics import RocAuc
report = classifiers.test_on(testX, testY)
ylim(0.88, 0.94)
report.learning_curve(RocAuc(), steps=1)
Explanation: Let's look at the results of training
dependence of quality on the number of trees built (ROC AUC - area under the ROC curve; higher is better)
End of explanation
from hep_ml.metrics import BinBasedSDE, KnnBasedCvM
report.learning_curve(BinBasedSDE(uniform_features, uniform_label=1))
Explanation: SDE (squared deviation of efficiency) learning curve
SDE vs the number of trees built. SDE is a metric of non-uniformity; lower is better.
End of explanation
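# Optional aside: a simplified, unweighted sketch of the idea behind SDE. Fix a
# global signal-efficiency threshold, bin the signal events along one uniform
# feature, and take the RMS deviation of the per-bin efficiencies from the global
# efficiency. hep_ml's BinBasedSDE does this properly (with event weights and
# several efficiency levels); the array arguments here are generic placeholders.
import numpy as np

def sde_sketch(signal_scores, uniform_feature, global_eff=0.5, n_bins=10):
    cut = np.percentile(signal_scores, 100 * (1 - global_eff))
    edges = np.percentile(uniform_feature, np.linspace(0, 100, n_bins + 1))
    bin_idx = np.clip(np.digitize(uniform_feature, edges[1:-1]), 0, n_bins - 1)
    per_bin_eff = np.array([np.mean(signal_scores[bin_idx == b] > cut)
                            for b in range(n_bins)])
    return np.sqrt(np.mean((per_bin_eff - global_eff) ** 2))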
report.learning_curve(KnnBasedCvM(uniform_features, uniform_label=1))
Explanation: CvM learning curve
CvM is a metric of non-uniformity based on the Cramer-von Mises distance. We use the kNN-based version here.
End of explanation
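# Optional aside: the intuition behind the CvM flatness metric, in a simplified
# bin-based (rather than kNN-based) and unweighted form: compare the empirical CDF
# of classifier scores in each local region with the global CDF and average the
# squared difference. Argument names are generic placeholders, as above.
def cvm_sketch(signal_scores, uniform_feature, n_bins=10, n_grid=100):
    grid = np.linspace(signal_scores.min(), signal_scores.max(), n_grid)
    global_cdf = np.mean(signal_scores[:, None] <= grid[None, :], axis=0)
    edges = np.percentile(uniform_feature, np.linspace(0, 100, n_bins + 1))
    bin_idx = np.clip(np.digitize(uniform_feature, edges[1:-1]), 0, n_bins - 1)
    cvm = 0.0
    for b in range(n_bins):
        local_cdf = np.mean(signal_scores[bin_idx == b][:, None] <= grid[None, :], axis=0)
        cvm += np.mean((local_cdf - global_cdf) ** 2) / n_bins
    return cvm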
report.roc().plot(new_plot=True, figsize=[10, 9])
Explanation: ROC curves after training
End of explanation
report.efficiencies_2d(uniform_features, efficiency=0.5, signal_label=1, n_bins=15,
labels_dict={1: 'signal'})
Explanation: Signal efficiency
The global cut corresponds to an average signal efficiency of 0.5. In the ideal case the picture should be white.
End of explanation
report.efficiencies_2d(uniform_features, efficiency=0.7, signal_label=1, n_bins=15,
labels_dict={1: 'signal'})
Explanation: the same for global efficiency = 0.7
End of explanation |
2,699 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Novel-taxa and simulated community generation
This notebook describes the generation of reference databases for both novel-taxa and simulated community analyses. Novel-taxa analysis is a form of cross-validated taxonomic classification, wherein random unique sequences are sampled from the reference database as a test set; all sequences sharing taxonomic affiliation at a given taxonomic level are removed from the reference database (training set); and taxonomy is assigned to the query sequences at the given taxonomic level. Thus, this test interrogates the behavior of a taxonomy classifier when challenged with "novel" sequences that are not represented by close matches within the reference sequence database. Such an analysis is performed to assess the degree to which "overassignment" occurs for sequences that are not represented in a reference database.
Simulated community analysis represents more conventional cross-validated classification, wherein unique sequences are randomly sampled from a reference dataset and used as a test set for taxonomic classification, using a training set that has those sequences removed, but not other sequences that share taxonomic affiliation. Instead, the training set must contain identical taxonomies to those represented by the test sequences.
Novel-taxa reference data set generation
This section describes the preparation of the data sets necessary for "novel taxa" analysis. The goals of this step are
Step1: Now we will import these to a dataframe and view it. You should not need to modify the following cell.
Step2: Generate "clean" reference taxonomy and sequence database by removing taxonomy strings with empty or ambiguous levels
Set simulated community parameters, including amplicon length and the number of iterations to perform. Iterations will split our query sequence files into N chunks.
This will take a few minutes to run. Get some coffee.
Step3: Data Leakage
First check that train/train and train/test distances are similarly distributed for the cross validation data sets.
Step4: Now check that the novel taxa distance distributions are ok.
Step5: For peace of mind, we can test our novel taxa and simulated community datasets to confirm that
Step6: As a sanity check, confirm that novel taxa were generated successfully. | Python Code:
from tax_credit.framework_functions import \
generate_simulated_datasets, distance_comparison, \
test_cross_validated_sequences, \
test_novel_taxa_datasets
from os.path import expandvars, join
import pandas as pd
%matplotlib inline
project_dir = expandvars("../..")
data_dir = join(project_dir, "data")
# List databases as fasta/taxonomy file pairs
databases = {'B1-REF': ['../../data/ref_dbs/gg_13_8_otus/99_otus.fasta',
'../../data/ref_dbs/gg_13_8_otus/99_otu_taxonomy.txt',
"gg_13_8_otus", "GTGCCAGCMGCCGCGGTAA", "GGACTACHVGGGTWTCTAAT", "515f", "806r"],
'F1-REF': ['../../data/ref_dbs/unite_20.11.2016/sh_refs_qiime_ver7_99_20.11.2016_dev.fasta',
'../../data/ref_dbs/unite_20.11.2016/sh_taxonomy_qiime_ver7_99_20.11.2016_dev.txt',
"unite_20.11.2016", "ACCTGCGGARGGATCA", "GAGATCCRTTGYTRAAAGTT", "BITSf", "B58S3r"]
}
Explanation: Novel-taxa and simulated community generation
This notebook describes the generation of reference databases for both novel-taxa and simulated community analyses. Novel-taxa analysis is a form of cross-validated taxonomic classification, wherein random unique sequences are sampled from the reference database as a test set; all sequences sharing taxonomic affiliation at a given taxonomic level are removed from the reference database (training set); and taxonomy is assigned to the query sequences at the given taxonomic level. Thus, this test interrogates the behavior of a taxonomy classifier when challenged with "novel" sequences that are not represented by close matches within the reference sequence database. Such an analysis is performed to assess the degree to which "overassignment" occurs for sequences that are not represented in a reference database.
Simulated community analysis represents more conventional cross-validated classification, wherein unique sequences are randomly sampled from a reference dataset and used as a test set for taxonomic classification, using a training set that has those sequences removed, but not other sequences that share taxonomic affiliation. Instead, the training set must contain identical taxonomies to those represented by the test sequences.
Novel-taxa reference data set generation
This section describes the preparation of the data sets necessary for "novel taxa" analysis. The goals of this step are:
1. Create a "clean" reference database that can be used for evaluation of "novel taxa" from phylum to species level.
2. Generate simulated amplicons and randomly subsample query sequences to use as "novel taxa"
3. Create modified sequence reference databases for taxonomic classification of "novel taxa" sequences
In this first cell, we describe the data set/database characteristics as a dictionary: the dataset name is the key, and the values are the reference sequence fasta, taxonomy, database name, forward primer sequence, reverse primer sequence, forward primer name, and reverse primer name.
MODIFY these values to generate novel-taxa files on a new reference database
End of explanation
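# Optional aside: a minimal, hypothetical sketch of the core "novel taxa" filtering
# step. For a held-out query, every reference sequence sharing its taxonomy down to
# the chosen level is dropped from the training set. generate_simulated_datasets
# (used below) does this for real, together with amplicon trimming and subsampling;
# the toy taxonomy map here is made up.
def novel_taxa_training_ids(taxonomy_map, query_id, level):
    prefix = ';'.join(taxonomy_map[query_id].split(';')[:level + 1])
    return [seq_id for seq_id, tax in taxonomy_map.items()
            if ';'.join(tax.split(';')[:level + 1]) != prefix]

toy_taxa = {'seq1': 'k__Bacteria; p__Firmicutes; c__Bacilli',
            'seq2': 'k__Bacteria; p__Firmicutes; c__Clostridia',
            'seq3': 'k__Bacteria; p__Proteobacteria; c__Gammaproteobacteria'}
# Holding out seq1 as "novel" at the class level removes same-class sequences from
# the training set, while sister taxa in the same phylum (seq2) remain.
print(novel_taxa_training_ids(toy_taxa, 'seq1', level=2))  # ['seq2', 'seq3']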
# Arrange data set / database info in data frame
simulated_community_definitions = pd.DataFrame.from_dict(databases, orient="index")
simulated_community_definitions.columns = ["Reference file path", "Reference tax path", "Reference id",
"Fwd primer", "Rev primer", "Fwd primer id", "Rev primer id"]
simulated_community_definitions
Explanation: Now we will import these to a dataframe and view it. You should not need to modify the following cell.
End of explanation
read_length = 250
iterations = 10
min_read_length = 80
generate_simulated_datasets(simulated_community_definitions, data_dir,
read_length, iterations, min_read_length=min_read_length,
levelrange=range(6, 1, -1), force=True)
Explanation: Generate "clean" reference taxonomy and sequence database by removing taxonomy strings with empty or ambiguous levels
Set simulated community parameters, including amplicon length and the number of iterations to perform. Iterations will split our query sequence files into N chunks.
This will take a few minutes to run. Get some coffee.
End of explanation
distance_comparison(simulated_community_definitions, data_dir, 'cross-validated')
Explanation: Data Leakage
First check that train/train and train/test distances are similarly distributed for the cross validation data sets.
End of explanation
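# Optional aside: if a single number is preferred over the plots, a two-sample
# Kolmogorov-Smirnov test is one generic way to ask whether two distance samples
# could come from the same distribution. This is not what distance_comparison does
# internally; the beta-distributed arrays below are toy stand-ins for the
# train/train and train/test distance samples.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.RandomState(0)
train_train_distances = rng.beta(2, 5, size=1000)
train_test_distances = rng.beta(2, 5, size=1000)
statistic, p_value = ks_2samp(train_train_distances, train_test_distances)
print(statistic, p_value)  # a large p-value is consistent with similar distributions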
distance_comparison(simulated_community_definitions, data_dir, 'novel-taxa-simulations', samples=100)
Explanation: Now check that the novel taxa distance distributions are ok.
End of explanation
test_cross_validated_sequences(data_dir)
Explanation: For peace of mind, we can test our novel taxa and simulated community datasets to confirm that:
1) For simulated communities, test (query) taxa IDs are not in training (ref) set, but all taxonomy strings are
2) For novel taxa, test taxa IDs and taxonomies are not in training (ref) set, but sister branch taxa are
If no errors print, all tests pass.
End of explanation
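# Optional aside: a hypothetical sketch of what check (1) amounts to, given the
# query and reference taxonomy maps as {sequence_id: taxonomy_string} dicts. The
# dict arguments are placeholders, not tax_credit objects; the real
# test_cross_validated_sequences call above works directly on the files on disk.
def check_simulated_community(query_taxa, ref_taxa):
    leaked = set(query_taxa) & set(ref_taxa)
    assert not leaked, "query sequence IDs leaked into the reference: %r" % leaked
    missing = set(query_taxa.values()) - set(ref_taxa.values())
    assert not missing, "query taxonomies missing from the reference: %r" % missing

check_simulated_community(
    {'q1': 'k__Bacteria; p__Firmicutes'},
    {'r1': 'k__Bacteria; p__Firmicutes', 'r2': 'k__Bacteria; p__Proteobacteria'})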
test_novel_taxa_datasets(data_dir)
Explanation: As a sanity check, confirm that novel taxa were generated successfully.
End of explanation |