Santiago Casas committed
Commit bc65052 · 1 Parent(s): 8343c13

add prompt and class data
.gitattributes CHANGED
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 *.png filter=lfs diff=lfs merge=lfs -text
+*.pdf filter=lfs diff=lfs merge=lfs -text
class-data/CLASS_MANUAL.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ed39459dc63c60b35c46be7131e80bbdf61e12468a643e0aee48dc42e574431e
+size 1069275
class-data/CPU.py ADDED
@@ -0,0 +1,622 @@
+#!/usr/bin/env python
+"""
+.. module:: CPU
+    :synopsis: CPU, a CLASS Plotting Utility
+.. moduleauthor:: Benjamin Audren <benjamin.audren@gmail.com>
+.. credits:: Benjamin Audren, Jesus Torrado
+.. version:: 2.0
+
+This is a small python program aimed to gain time when comparing two spectra,
+e.g. from CAMB and CLASS, or a non-linear spectrum to a linear one.
+
+It is designed to be used in a command-line fashion, and is not restricted to
+your CLASS directory, though it mainly recognizes the CLASS output format. Far
+from perfect or complete, it welcomes any suggestion for enhancement, if only
+to spare others from losing time on the same issues.
+
+Be warned that, when comparing with other formats, the following is assumed:
+there are no empty lines (especially at the end of the file); Gnuplot comment
+lines (starting with a #) are allowed. Violating this will cause a
+not-very-descriptive error in CPU; any suggestion for testing it is welcome.
+
+Example of use:
+- To superimpose two different spectra and see their global shape:
+  python CPU.py output/lcdm_z2_pk.dat output/lncdm_z2_pk.dat
+- To see their ratio in detail:
+  python CPU.py output/lcdm_z2_pk.dat output/lncdm_z2_pk.dat -r
+
+The "PlanckScale" is taken with permission from Jesus Torrado's
+cosmo_mini_toolbox, available under GPLv3 at
+https://github.com/JesusTorrado/cosmo_mini_toolbox
+
+"""
+
+from __future__ import unicode_literals, print_function
+
+# System imports
+import os
+import sys
+import argparse
+
+# Numerics
+import numpy as np
+from numpy import ma
+from scipy.interpolate import InterpolatedUnivariateSpline
+from math import floor
+
+# Plotting
+import matplotlib.pyplot as plt
+from matplotlib import scale as mscale
+from matplotlib.transforms import Transform
+from matplotlib.ticker import FixedLocator
+
+
+def CPU_parser():
+    parser = argparse.ArgumentParser(
+        description=(
+            'CPU, a CLASS Plotting Utility: specify whether you want\n'
+            'to superimpose, or plot the ratio of, different files.'),
+        epilog=(
+            'A standard usage would be, for instance:\n'
+            'python CPU.py output/test_pk.dat output/test_pk_nl_density.dat'
+            ' -r\npython CPU.py output/wmap_cl.dat output/planck_cl.dat'),
+        formatter_class=argparse.RawDescriptionHelpFormatter)
+
+    parser.add_argument(
+        'files', type=str, nargs='*', help='Files to plot')
+    parser.add_argument('-r', '--ratio', dest='ratio', action='store_true',
+                        help='Plot the ratio of the spectra')
+    parser.add_argument('-y', '--y-axis', dest='y_axis', nargs='+',
+                        help='Specify the fields you want to plot.')
+    parser.add_argument('-x', '--x-axis', dest='x_axis', type=str,
+                        help='Specify the field to be used on the x-axis')
+    parser.add_argument('--scale', type=str,
+                        choices=['lin', 'loglog', 'loglin', 'george'],
+                        help='Specify the scale to use for the plot')
+    parser.add_argument('--xlim', dest='xlim', nargs='+', type=float,
+                        default=[], help='Specify the x range')
+    parser.add_argument('--ylim', dest='ylim', nargs='+', type=float,
+                        default=[], help='Specify the y range')
+    parser.add_argument(
+        '-p', '--print',
+        dest='printfile', default='',
+        help=('Print the graph directly to a file. If no name is specified, '
+              'it uses the name of the first input file'))
+    parser.add_argument(
+        '--repeat',
+        dest='repeat', action='store_true', default=False,
+        help='Repeat the step for all redshifts with the same base name')
+    return parser
+
+
+def plot_CLASS_output(files, x_axis, y_axis, ratio=False, printing='',
+                      output_name='', extension='', x_variable='',
+                      scale='lin', xlim=[], ylim=[]):
+    """
+    Load the data into numpy arrays, write all the commands for plotting to a
+    Python script for further refinement, and display the plot.
+
+    Inspired heavily by the Matlab version by Thomas Tram.
+
+    Parameters
+    ----------
+    files : list
+        List of files to plot
+    x_axis : string
+        Name of the column to use as the x coordinate
+    y_axis : list, str
+        List of items to plot, which should match the way they appear in the
+        file, for instance: ['TT', 'BB']
+
+    Keyword Arguments
+    -----------------
+    ratio : bool
+        If set, plots the ratio of the files, taking the first one as the
+        reference
+    output_name : str
+        Specify a different name for the produced figure (by default, it takes
+        the name of the first file, replacing the .dat by .pdf)
+    extension : str
+
+    """
+    # Define the python script name, and the pdf path
+    python_script_path = os.path.splitext(files[0])[0]+'.py'
+
+    # The variable text will contain all the lines to be printed in the end to
+    # the python script path, joined with newline characters. Beware of the
+    # indentation.
+    text = ['import matplotlib.pyplot as plt',
+            'import numpy as np',
+            'import itertools', '']
+
+    # Load all the graphs
+    data = []
+    for data_file in files:
+        data.append(np.loadtxt(data_file))
+
+    # Create the full_path_files list, that contains the absolute path, so that
+    # the future python script can import them directly.
+    full_path_files = [os.path.abspath(elem) for elem in files]
+
+    text += ['files = %s' % full_path_files]
+    text += ['data = []',
+             'for data_file in files:',
+             '    data.append(np.loadtxt(data_file))']
+
+    # Recover the base name of the files, everything before the dot
+    roots = [elem.split(os.path.sep)[-1].split('.')[0] for elem in files]
+    text += ['roots = [%s]' % ', '.join(["'%s'" % root for root in roots])]
+
+    # Create the figure and ax objects
+    fig, ax = plt.subplots()
+    text += ['', 'fig, ax = plt.subplots()']
+
+    # If ratio is not set, then simply plot them all
+    original_y_axis = y_axis
+    legend = []
+    if not ratio:
+        for index, curve in enumerate(data):
+            # Recover the number of columns in the current file, as well as
+            # their title.
+            num_columns, names, tex_names = extract_headers(files[index])
+
+            text += ['', 'index, curve = %i, data[%i]' % (index, index)]
+            # Check if everything is in order
+            if num_columns == 2:
+                y_axis = [names[1]]
+            elif num_columns > 2:
+                # in case y_axis was only a string, cast it to a list
+                if isinstance(original_y_axis, str):
+                    y_axis = [original_y_axis]
+                else:
+                    y_axis = original_y_axis
+
+            # Store the selected names and tex_names to the script
+            selected = []
+            for elem in y_axis:
+                selected.extend(
+                    [name for name in names if name.find(elem) != -1 and
+                     name not in selected])
+            if not y_axis:
+                selected = names[1:]
+            y_axis = selected
+
+            # Decide on the x_axis; by default the index will be set to zero
+            x_index = 0
+            if x_axis:
+                for index_name, name in enumerate(names):
+                    if name.find(x_axis) != -1:
+                        x_index = index_name
+                        break
+            # Store to text
+            text += ['y_axis = %s' % selected]
+            text += ['tex_names = %s' % [elem for (elem, name) in
+                                         zip(tex_names, names)
+                                         if name in selected]]
+            text += ["x_axis = '%s'" % tex_names[x_index]]
+            text += ["ylim = %s" % ylim]
+            text += ["xlim = %s" % xlim]
+
+            for selec in y_axis:
+                index_selec = names.index(selec)
+                plot_line = 'ax.'
+                if scale == 'lin':
+                    plot_line += 'plot(curve[:, %i], curve[:, %i])' % (
+                        x_index, index_selec)
+                    ax.plot(curve[:, x_index], curve[:, index_selec])
+                elif scale == 'loglog':
+                    plot_line += 'loglog(curve[:, %i], abs(curve[:, %i]))' % (
+                        x_index, index_selec)
+                    ax.loglog(curve[:, x_index], abs(curve[:, index_selec]))
+                elif scale == 'loglin':
+                    plot_line += 'semilogx(curve[:, %i], curve[:, %i])' % (
+                        x_index, index_selec)
+                    ax.semilogx(curve[:, x_index], curve[:, index_selec])
+                elif scale == 'george':
+                    plot_line += 'plot(curve[:, %i], curve[:, %i])' % (
+                        x_index, index_selec)
+                    ax.plot(curve[:, x_index], curve[:, index_selec])
+                    ax.set_xscale('planck')
+                text += [plot_line]
+
+            legend.extend([roots[index]+': '+elem for elem in y_axis])
+
+        ax.legend(legend, loc='best')
+        text += ["",
+                 "ax.legend([root+': '+elem for (root, elem) in",
+                 "           itertools.product(roots, y_axis)], loc='best')",
+                 ""]
+    else:
+        ref = data[0]
+        num_columns, ref_curve_names, ref_tex_names = extract_headers(files[0])
+        # Check if everything is in order
+        if num_columns == 2:
+            y_axis_ref = [ref_curve_names[1]]
+        elif num_columns > 2:
+            # in case y_axis was only a string, cast it to a list
+            if isinstance(original_y_axis, str):
+                y_axis_ref = [original_y_axis]
+            else:
+                y_axis_ref = original_y_axis
+
+        # Store the selected names and tex_names to the script
+        selected = []
+        for elem in y_axis_ref:
+            selected.extend([name for name in ref_curve_names
+                             if name.find(elem) != -1 and
+                             name not in selected])
+        y_axis_ref = selected
+
+        # Decide on the x_axis; by default the index will be set to zero
+        x_index_ref = 0
+        if x_axis:
+            for index_name, name in enumerate(ref_curve_names):
+                if name.find(x_axis) != -1:
+                    x_index_ref = index_name
+                    break
+
+        for idx in range(1, len(data)):
+            current = data[idx]
+            num_columns, names, tex_names = extract_headers(files[idx])
+
+            # Check if everything is in order
+            if num_columns == 2:
+                y_axis = [names[1]]
+            elif num_columns > 2:
+                # in case y_axis was only a string, cast it to a list
+                if isinstance(original_y_axis, str):
+                    y_axis = [original_y_axis]
+                else:
+                    y_axis = original_y_axis
+
+            # Store the selected names and tex_names to the script
+            selected = []
+            for elem in y_axis:
+                selected.extend([name for name in names
+                                 if name.find(elem) != -1 and
+                                 name not in selected])
+            y_axis = selected
+
+            text += ['y_axis = %s' % selected]
+            text += ['tex_names = %s' % [elem for (elem, name) in
+                                         zip(tex_names, names)
+                                         if name in selected]]
+
+            # Decide on the x_axis; by default the index will be set to zero
+            x_index = 0
+            if x_axis:
+                for index_name, name in enumerate(names):
+                    if name.find(x_axis) != -1:
+                        x_index = index_name
+                        break
+
+            text += ["x_axis = '%s'" % tex_names[x_index]]
+            for selec in y_axis:
+                # Do the interpolation
+                axis = ref[:, x_index_ref]
+                reference = ref[:, ref_curve_names.index(selec)]
+                interpolated = InterpolatedUnivariateSpline(
+                    current[:, x_index], current[:, names.index(selec)])
+                if scale == 'lin':
+                    ax.plot(axis, interpolated(ref[:, x_index_ref])/reference-1)
+                elif scale == 'loglin':
+                    ax.semilogx(axis,
+                                interpolated(ref[:, x_index_ref])/reference-1)
+                elif scale == 'loglog':
+                    raise InputError(
+                        "loglog plot is not available for ratios")
+
+    if 'TT' in names:
+        ax.set_xlabel(r'$\ell$', fontsize=16)
+        text += [r"ax.set_xlabel('$\ell$', fontsize=16)"]
+    elif 'P' in names:
+        ax.set_xlabel('$k$ [$h$/Mpc]', fontsize=16)
+        text += ["ax.set_xlabel('$k$ [$h$/Mpc]', fontsize=16)"]
+    else:
+        ax.set_xlabel(tex_names[x_index], fontsize=16)
+        text += ["ax.set_xlabel('%s', fontsize=16)" % tex_names[x_index]]
+    if xlim:
+        if len(xlim) > 1:
+            ax.set_xlim(xlim)
+            text += ["ax.set_xlim(xlim)"]
+        else:
+            ax.set_xlim(xlim[0])
+            text += ["ax.set_xlim(xlim[0])"]
+        ax.set_ylim()
+        text += ["ax.set_ylim()"]
+    if ylim:
+        if len(ylim) > 1:
+            ax.set_ylim(ylim)
+            text += ["ax.set_ylim(ylim)"]
+        else:
+            ax.set_ylim(ylim[0])
+            text += ["ax.set_ylim(ylim[0])"]
+    text += ['plt.show()']
+    plt.show()
+
+    # If the user wants to print the figure to a file
+    if printing:
+        fig.savefig(printing)
+        text += ["fig.savefig('%s')" % printing]
+
+    # Write to the python file all the issued commands. You can then reproduce
+    # the plot by running "python output/something_cl.dat.py"
+    with open(python_script_path, 'w') as python_script:
+        print('Creating a python script to reproduce the figure')
+        print('--> stored in %s' % python_script_path)
+        python_script.write('\n'.join(text))
+
+
+class FormatError(Exception):
+    """Format not recognised"""
+    pass
+
+
+class TypeError(Exception):
+    """Spectrum type not recognised"""
+    pass
+
+
+class NumberOfFilesError(Exception):
+    """Invalid number of files"""
+    pass
+
+
+class InputError(Exception):
+    """Incompatible input requirements"""
+    pass
+
+
+def replace_scale(string):
+    """
+    This assumes that the string starts with "(.)", which will be replaced by
+    (8piG/3)
+
+    >>> print(replace_scale('(.)toto'))
+    '(8\\pi G/3)toto'
+    """
+    string_list = list(string)
+    string_list.pop(1)
+    string_list[1:1] = list('8\\pi G/3')
+    return ''.join(string_list)
+
+
+def process_long_names(long_names):
+    """
+    Given the names extracted from the header, return two arrays: one with the
+    short version, and one with the tex version
+
+    >>> names, tex_names = process_long_names(['(.)toto', 'proper time [Gyr]'])
+    >>> print(names)
+    ['toto', 'propertime']
+    >>> print(tex_names)
+    ['(8\\pi G/3)toto', 'proper time [Gyr]']
+
+    """
+    names = []
+    tex_names = []
+    # First pass, to remove the leading scales
+    for name in long_names:
+        # This can happen in the background file
+        if name.startswith('(.)'):
+            temp_name = name[3:]
+            names.append(temp_name)
+            tex_names.append(replace_scale(name))
+        # Otherwise, keep the name as it is
+        else:
+            names.append(name)
+            tex_names.append(name)
+
+    # Finally, remove any extra spacing
+    names = [''.join(elem.split()) for elem in names]
+    return names, tex_names
+
+
+def extract_headers(header_path):
+    with open(header_path, 'r') as header_file:
+        header = [line for line in header_file if line[0] == '#']
+        header = header[-1]
+
+    # Count the number of columns in the file, and recover their name. Thanks
+    # to Thomas Tram for the trick.
+    indices = [i+1 for i in range(len(header)) if
+               header.startswith(':', i)]
+    num_columns = len(indices)
+    long_names = [header[indices[i]:indices[(i+1)]-3].strip()
+                  if i < num_columns-1
+                  else header[indices[i]:].strip()
+                  for i in range(num_columns)]
+
+    # Process long_names further to handle special cases, and extract names,
+    # which will correspond to the tags specified in "y_axis".
+    names, tex_names = process_long_names(long_names)
+
+    return num_columns, names, tex_names
+
+
+def main():
+    print('~~~ Running CPU, a CLASS Plotting Utility ~~~')
+    parser = CPU_parser()
+    # Parse the command line arguments
+    args = parser.parse_args()
+
+    # If there are no arguments in the input, print usage
+    if len(args.files) == 0:
+        parser.print_usage()
+        return
+
+    # If the first file name contains cl or pk, infer the type of desired
+    # spectrum
+    if not args.y_axis:
+        if args.files[0].rfind('cl') != -1:
+            scale = 'loglog'
+        elif args.files[0].rfind('pk') != -1:
+            scale = 'loglog'
+        else:
+            scale = 'lin'
+        args.y_axis = []
+    else:
+        scale = ''
+    if not args.scale:
+        if scale:
+            args.scale = scale
+        else:
+            args.scale = 'lin'
+
+    # Remove extra spacing in the y_axis list
+    args.y_axis = [''.join(elem.split()) for elem in args.y_axis]
+    # If a ratio is asked for, but only one file was passed as argument,
+    # politely complain
+    if args.ratio:
+        if len(args.files) < 2:
+            raise NumberOfFilesError(
+                "If you want me to compute a ratio between two files, "
+                "I strongly encourage you to give me at least two of them.")
+    # Actual plotting. By default, a simple superposition of the graphs is
+    # performed. If asked to be divided, the ratio is shown - whether a need
+    # for interpolation arises or not.
+    if args.ratio and args.scale == 'loglog':
+        print("Defaulting to loglin scale")
+        args.scale = 'loglin'
+
+    plot_CLASS_output(args.files, args.x_axis, args.y_axis,
+                      ratio=args.ratio, printing=args.printfile,
+                      scale=args.scale, xlim=args.xlim, ylim=args.ylim)
+
+
+# Helper code from cosmo_mini_toolbox, by Jesus Torrado, available fully at
+# https://github.com/JesusTorrado/cosmo_mini_toolbox, to use the log then
+# linear scale for the multipole axis when plotting Cl.
+nonpos = "mask"
+change = 50.0
+factor = 500.
+
+
+def _mask_nonpos(a):
+    """
+    Return a Numpy masked array where all non-positive values are
+    masked. If there are no non-positive values, the original array
+    is returned.
+    """
+    mask = a <= 0.0
+    if mask.any():
+        return ma.MaskedArray(a, mask=mask)
+    return a
+
+
+def _clip_nonpos(a):
+    """Clip non-positive values to a tiny positive number."""
+    a[a <= 0.0] = 1e-300
+    return a
+
+
+class PlanckScale(mscale.ScaleBase):
+    """
+    Scale used by the Planck collaboration to plot Temperature power spectra:
+    base-10 logarithmic up to l=50, and linear from there on.
+
+    Care is taken so non-positive values are not plotted.
+    """
+    name = 'planck'
+
+    def __init__(self, axis, **kwargs):
+        pass
+
+    def set_default_locators_and_formatters(self, axis):
+        axis.set_major_locator(
+            FixedLocator(
+                np.concatenate((np.array([2, 10, change]),
+                                np.arange(500, 2500, 500)))))
+        axis.set_minor_locator(
+            FixedLocator(
+                np.concatenate((np.arange(2, 10),
+                                np.arange(10, 50, 10),
+                                np.arange(floor(change/100), 2500, 100)))))
+
+    def get_transform(self):
+        """
+        Return a :class:`~matplotlib.transforms.Transform` instance
+        appropriate for the given logarithm base.
+        """
+        return self.PlanckTransform(nonpos)
+
+    def limit_range_for_scale(self, vmin, vmax, minpos):
+        """
+        Limit the domain to positive values.
+        """
+        return (vmin <= 0.0 and minpos or vmin,
+                vmax <= 0.0 and minpos or vmax)
+
+    class PlanckTransform(Transform):
+        input_dims = 1
+        output_dims = 1
+        is_separable = True
+        has_inverse = True
+
+        def __init__(self, nonpos):
+            Transform.__init__(self)
+            if nonpos == 'mask':
+                self._handle_nonpos = _mask_nonpos
+            else:
+                self._handle_nonpos = _clip_nonpos
+
+        def transform_non_affine(self, a):
+            lower = a[np.where(a <= change)]
+            greater = a[np.where(a > change)]
+            if lower.size:
+                lower = self._handle_nonpos(lower * 10.0)/10.0
+                if isinstance(lower, ma.MaskedArray):
+                    lower = ma.log10(lower)
+                else:
+                    lower = np.log10(lower)
+                lower = factor*lower
+            if greater.size:
+                greater = (factor*np.log10(change) + (greater-change))
+            # Only low
+            if not greater.size:
+                return lower
+            # Only high
+            if not lower.size:
+                return greater
+            return np.concatenate((lower, greater))
+
+        def inverted(self):
+            return PlanckScale.InvertedPlanckTransform()
+
+    class InvertedPlanckTransform(Transform):
+        input_dims = 1
+        output_dims = 1
+        is_separable = True
+        has_inverse = True
+
+        def transform_non_affine(self, a):
+            lower = a[np.where(a <= factor*np.log10(change))]
+            greater = a[np.where(a > factor*np.log10(change))]
+            if lower.size:
+                if isinstance(lower, ma.MaskedArray):
+                    lower = ma.power(10.0, lower/float(factor))
+                else:
+                    lower = np.power(10.0, lower/float(factor))
+            if greater.size:
+                greater = (greater + change - factor*np.log10(change))
+            # Only low
+            if not greater.size:
+                return lower
+            # Only high
+            if not lower.size:
+                return greater
+            return np.concatenate((lower, greater))
+
+        def inverted(self):
+            return PlanckScale.PlanckTransform(nonpos)
+
+# Finished. Register the scale!
+mscale.register_scale(PlanckScale)
+
+if __name__ == '__main__':
+    sys.exit(main())
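The piecewise mapping that PlanckTransform implements (base-10 logarithmic up to `change` = 50, stretched by `factor` = 500, then linear) can be sketched without matplotlib. This is a minimal illustration of the same arithmetic; the function names `planck_forward`/`planck_inverse` are illustrative and not part of CPU.py:

```python
import numpy as np

CHANGE = 50.0   # multipole where the axis switches from log to linear
FACTOR = 500.0  # stretch applied to the logarithmic segment

def planck_forward(ell):
    """Map positive multipoles to plot coordinates:
    FACTOR*log10(ell) below CHANGE, linear (shifted) above."""
    ell = np.asarray(ell, dtype=float)
    return np.where(ell <= CHANGE,
                    FACTOR * np.log10(ell),
                    FACTOR * np.log10(CHANGE) + (ell - CHANGE))

def planck_inverse(x):
    """Inverse of planck_forward, mirroring InvertedPlanckTransform."""
    x = np.asarray(x, dtype=float)
    split = FACTOR * np.log10(CHANGE)
    return np.where(x <= split,
                    10.0 ** (x / FACTOR),
                    CHANGE + (x - split))
```

The two branches agree at ell = 50, so the composite axis is continuous, and one unit of multipole above 50 maps to one unit of plot coordinate.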
class-data/Growth_with_w.py ADDED
@@ -0,0 +1,307 @@
+#!/usr/bin/env python
+# coding: utf-8
+
+# In[ ]:
+
+
+get_ipython().run_line_magic('matplotlib', 'inline')
+import matplotlib
+import matplotlib.pyplot as plt
+import numpy as np
+from classy import Class
+from scipy import interpolate
+
+
+# In[ ]:
+
+
+w0vec = [-0.7, -1.0, -1.3]
+wavec = [-0.2, 0.0, 0.2]
+#w0vec = [-1.0]
+#wavec = [0.0]
+
+cosmo = {}
+for w0 in w0vec:
+    for wa in wavec:
+        if w0 == -1.0 and wa == 0.0:
+            M = 'LCDM'
+        else:
+            M = '('+str(w0)+','+str(wa)+')'
+        cosmo[M] = Class()
+        cosmo[M].set({'input_verbose': 1, 'background_verbose': 1,
+                      'gauge': 'Newtonian'})
+        if M != 'LCDM':
+            cosmo[M].set({'Omega_Lambda': 0., 'w0_fld': w0, 'wa_fld': wa})
+        cosmo[M].compute()
+
+
+# In[ ]:
+
+
+import scipy
+import scipy.special
+import scipy.integrate
+
+def D_hypergeom(avec, csm):
+    bg = csm.get_background()
+    Om = csm.Omega0_m()
+    if '(.)rho_lambda' in bg:
+        Ol = bg['(.)rho_lambda'][-1]/bg['(.)rho_crit'][-1]
+    else:
+        Ol = bg['(.)rho_fld'][-1]/bg['(.)rho_crit'][-1]
+
+    x = Ol/Om*avec**3
+    D = avec*scipy.special.hyp2f1(1./3., 1, 11./6., -x)
+    D_today = scipy.special.hyp2f1(1./3., 1, 11./6., -Ol/Om)
+    return D/D_today
+
+def f_hypergeom(avec, csm):
+    bg = csm.get_background()
+    Om = csm.Omega0_m()
+    if '(.)rho_lambda' in bg:
+        Ol = bg['(.)rho_lambda'][-1]/bg['(.)rho_crit'][-1]
+    else:
+        Ol = bg['(.)rho_fld'][-1]/bg['(.)rho_crit'][-1]
+
+    x = Ol/Om*avec**3
+    D = avec*scipy.special.hyp2f1(1./3., 1, 11./6., -x)
+    f = 1.-6./11.*x*avec/D*scipy.special.hyp2f1(4./3., 2, 17./6., -x)
+    return f
+
+def D_integral2(avec, csm):
+    bg = csm.get_background()
+    Om = csm.Omega0_m()
+    if '(.)rho_lambda' in bg:
+        Ol = bg['(.)rho_lambda'][-1]/bg['(.)rho_crit'][-1]
+        w0 = -1
+        wa = 0.0
+    else:
+        Ol = bg['(.)rho_fld'][-1]/bg['(.)rho_crit'][-1]
+        w0 = csm.pars['w0_fld']
+        wa = csm.pars['wa_fld']
+    D = np.zeros(avec.shape)
+    Dintegrand2 = lambda a: (a*np.sqrt(Om/a**3 + Ol*a**(-3*(1+w0+wa))
+                                       * np.exp(-3.*(1.0-a)*wa)))**(-3)
+    for idx, a in enumerate(avec):
+        Hc = a*np.sqrt(Om/a**3 + Ol*a**(-3*(1+w0+wa))*np.exp(-3.*(1.0-a)*wa))
+        I = scipy.integrate.quad(Dintegrand2, 1e-15, a)
+        D[idx] = Hc/a*I[0]
+    D = D/scipy.integrate.quad(Dintegrand2, 1e-15, 1)[0]
+    return D
+
+def D_integral(avec, csm):
+    bg = csm.get_background()
+    Om = csm.Omega0_m()
+    Ol = bg['(.)rho_lambda'][-1]/bg['(.)rho_crit'][-1]
+    Or = 1-Om-Ol
+    def Dintegrand(a):
+        Hc = np.sqrt(Om/a+Ol*a*a+Or/a/a)
+        return Hc**(-3)
+    D = np.zeros(avec.shape)
+    for idx, a in enumerate(avec):
+        Hc = np.sqrt(Om/a+Ol*a*a+Or/a/a)
+        I = scipy.integrate.quad(Dintegrand, 1e-15, a)
+        D[idx] = Hc/a*I[0]
+    D = D/scipy.integrate.quad(Dintegrand, 1e-15, 1)[0]
+    return D
+
+def D_linder(avec, csm):
+    bg = csm.get_background()
+    if '(.)rho_lambda' in bg:
+        Ol = bg['(.)rho_lambda'][-1]/bg['(.)rho_crit'][-1]
+        w0 = -1
+        wa = 0.0
+    else:
+        Ol = bg['(.)rho_fld'][-1]/bg['(.)rho_crit'][-1]
+        w0 = csm.pars['w0_fld']
+        wa = csm.pars['wa_fld']
+
+    Om_of_a = (bg['(.)rho_cdm']+bg['(.)rho_b'])/bg['H [1/Mpc]']**2
+    gamma = 0.55+0.05*(w0+0.5*wa)
+    a_bg = 1./(1.+bg['z'])
+
+    integ = (Om_of_a**gamma-1.)/a_bg
+
+    integ_interp = interpolate.interp1d(a_bg, integ)
+    D = np.zeros(avec.shape)
+    amin = 1e-3
+    for idx, a in enumerate(avec):
+        if a < amin:
+            D[idx] = a
+        else:
+            I = scipy.integrate.quad(integ_interp, amin, a)
+            D[idx] = a*np.exp(I[0])
+    return D
+
+def D_linder2(avec, csm):
+    bg = csm.get_background()
+    if '(.)rho_lambda' in bg:
+        Ol = bg['(.)rho_lambda'][-1]/bg['(.)rho_crit'][-1]
+        w0 = -1
+        wa = 0.0
+        rho_de = bg['(.)rho_lambda']
+    else:
+        Ol = bg['(.)rho_fld'][-1]/bg['(.)rho_crit'][-1]
+        w0 = csm.pars['w0_fld']
+        wa = csm.pars['wa_fld']
+        rho_de = bg['(.)rho_fld']
+
+    rho_M = bg['(.)rho_cdm']+bg['(.)rho_b']
+    Om_of_a = rho_M/(rho_M+rho_de)
+    gamma = 0.55+0.05*(1+w0+0.5*wa)
+    a_bg = avec
+    integ = (Om_of_a**gamma-1.)/a_bg
+    D = np.zeros(avec.shape)
+    for idx, a in enumerate(avec):
+        if idx < 2:
+            I = 0
+        else:
+            I = np.trapz(integ[:idx], x=avec[:idx])
+        D[idx] = a*np.exp(I)
+    return D/D[-1]
+
171
+ def draw_vertical_redshift(csm, theaxis, var='tau',z=99,ls='-.',label='$z=99$'):
172
+ if var=='z':
173
+ xval = z
174
+ elif var=='a':
175
+ xval = 1./(z+1)
176
+ elif var=='tau':
177
+ bg = csm.get_background()
178
+ f = interpolate.interp1d(bg['z'],bg['conf. time [Mpc]'])
179
+ xval = f(z)
180
+ theaxis.axvline(xval,lw=1,ls=ls,color='k',label=label)
181
+
182
+
183
+
184
+ # In[ ]:
185
+
186
+
187
+ figwidth1 = 4.4 #=0.7*6.3
188
+ figwidth2 = 6.3
189
+ figwidth15 = 0.5*(figwidth1+figwidth2)
190
+ ratio = 8.3/11.7
191
+ figheight1 = figwidth1*ratio
192
+ figheight2 = figwidth2*ratio
193
+ figheight15 = figwidth15*ratio
194
+
195
+ lw=2
196
+ fs=12
197
+ labelfs=16
198
+
199
+ fig, (ax1, ax2) = plt.subplots(2,1,figsize=(1.2*figwidth1,figheight1/(3./5.)),sharex=True,
200
+ gridspec_kw = {'height_ratios':[3, 2]})
201
+
202
+ if False:
203
+ aminexp = -13
204
+ amin = 10**aminexp
205
+ ymin = 10**(aminexp/2.)
206
+ ymax = 10**(-aminexp/2.)
207
+ elif False:
208
+ aminexp = -7
209
+ amin = 10**aminexp
210
+ ymin = 10**(aminexp)
211
+ ymax = 10**(-aminexp)
212
+ else:
213
+ aminexp = -4
214
+ amin = 10**aminexp
215
+ ymin = 10**(aminexp-1)
216
+ ymax = 10**(-aminexp+1)
217
+
218
+
219
+ bg = cosmo['LCDM'].get_background()
220
+
221
+ a = 1./(bg['z']+1)
222
+ H = bg['H [1/Mpc]']
223
+ D = bg['gr.fac. D']
224
+ f = bg['gr.fac. f']
+
+ ax1.loglog(a,D,lw=lw,label=r'$D_+^\mathrm{approx}$')
+ ax1.loglog(a,D_hypergeom(a,cosmo['LCDM']),lw=lw,label=r'$D_+^\mathrm{analytic}$')
+
+ ax1.loglog(a,a*ymax,'k--',lw=lw,label=r'$\propto a$')
+ ax1.loglog(a,1./a*ymin,'k:',lw=lw,label=r'$\propto a^{-1}$')
+
+ ax2.semilogx(a,D/D_hypergeom(a,cosmo['LCDM']),lw=lw,label=r'$D_+/D_+^\mathrm{analytic}$')
+ #ax2.semilogx(a,grow/grow[-1]/D_integral(a,cosmo['CDM']),'--',lw=5)
+ ax2.semilogx(a,f/f_hypergeom(a,cosmo['LCDM']),lw=lw,label=r'$f/f^{\,\mathrm{analytic}}$')
+
+
+ draw_vertical_redshift(cosmo['LCDM'], ax1, var='a',z=99,label='$z=99$')
+ draw_vertical_redshift(cosmo['LCDM'], ax1, var='a',z=49,label='$z=49$',ls='-')
+ draw_vertical_redshift(cosmo['LCDM'], ax2, var='a',z=99,label=None)
+ draw_vertical_redshift(cosmo['LCDM'], ax2, var='a',z=49,label=None,ls='-')
+
+ lgd1 = ax1.legend(fontsize=fs,ncol=1,loc='upper left',
+                   bbox_to_anchor=(1.02, 1.035))
+
+ #lgd2 = ax2.legend([r'$D_+/D_+^\mathrm{analytic}$','$z=99$'],
+ #                  fontsize=fs,ncol=1,loc='upper left',
+ #                  bbox_to_anchor=(1.0, 1.08))
+ lgd2 = ax2.legend(fontsize=fs,ncol=1,loc='upper left',
+                   bbox_to_anchor=(1.02, 0.83))
+
+ ax1.set_xlim([10**aminexp,1])
+ ax2.set_xlabel(r'$a$',fontsize=fs)
+ ax1.set_ylim([ymin,ymax])
+ ax2.set_ylim([0.9,1.099])
+
+ ax2.axhline(1,color='k')
+
+
+ fig.tight_layout()
+ fig.subplots_adjust(hspace=0.0)
+ fig.savefig('NewtonianGrowthFactor.pdf',bbox_extra_artists=(lgd1,lgd2), bbox_inches='tight')
+
+
+ # In[ ]:
+
+
+ lw=2
+ fs=14
+ fig, (ax1, ax2) = plt.subplots(2,1,figsize=(6,8),sharex=True,)
+ #                              gridspec_kw = {'height_ratios':[2, 1]})
+ for M, csm in iter(cosmo.items()):
+     if M!='LCDM':
+         w0, wa = M.strip('()').split(',')
+         if float(wa)!=0.0:
+             continue
+     bg = csm.get_background()
+     a = 1./(bg['z']+1)
+     H = bg['H [1/Mpc]']
+     #grow = bg['grow']
+     #grow_prime = bg['grow_prime']
+     D = bg['gr.fac. D']
+     f = bg['gr.fac. f']
+     #grow_interp = interpolate.interp1d(a,grow)
+     #p = ax1.semilogx(a,grow/grow[-1]/a,lw=lw,label=M)
+     #colour = p[0].get_color()
+
+     p=ax1.semilogx(a,D_linder2(a,csm)/a,lw=lw,ls='--',label=M)
+     colour = p[0].get_color()
+     ax1.semilogx(a,D/a,lw=lw,ls='-',color=colour)
+     ax1.semilogx(a,D_hypergeom(a,csm)/a,lw=lw,ls=':',color=colour)
+
+     ax2.semilogx(a,D/D_integral2(a,csm),lw=lw,ls='-',color=colour)
+     ax2.semilogx(a,D/D_hypergeom(a,csm),lw=lw,ls=':',color=colour)
+     ax2.semilogx(a,D/D_linder2(a,csm),lw=lw,ls='--',color=colour)
+
+ ax1.set_xlim([1e-3,1])
+ ax2.set_xlabel(r'$a$',fontsize=fs)
+ ax1.set_ylim([0,2])
+ ax2.set_ylim([0.9,1.3])
+
+ lgd1 = ax1.legend(fontsize=fs,ncol=1,loc='lower left')
+ #                 bbox_to_anchor=(1.0, 1.035))
+
+ fig.tight_layout()
+ fig.subplots_adjust(hspace=0.0)
+ fig.savefig('Growthrate_w0.pdf')
+
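The cell above compares the CLASS background growth factor `D` against analytic approximations (`D_linder2`, `D_hypergeom`, defined earlier in `CPU.py`). As a self-contained illustration of the Linder-style parameterization such approximations build on, here is a minimal sketch of the growth rate f = d ln D / d ln a ≈ Ω_m(a)^γ in flat ΛCDM. The helper names and the fiducial Ω_m0 = 0.31 are our own assumptions, not functions or values taken from the file:

```python
import numpy as np

def omega_m_of_a(a, Omega_m0=0.31):
    # Matter density parameter Omega_m(a) in flat LCDM (radiation neglected).
    a = np.asarray(a, dtype=float)
    return Omega_m0 * a**-3 / (Omega_m0 * a**-3 + (1.0 - Omega_m0))

def f_linder(a, Omega_m0=0.31, gamma=0.55):
    # Linder-style approximation to the growth rate: f ~ Omega_m(a)^gamma,
    # with gamma ~ 0.55 appropriate for LCDM.
    return omega_m_of_a(a, Omega_m0) ** gamma
```

At early times matter dominates, so f → 1; today f drops to roughly Ω_m0^0.55, which is the behaviour the ratio panels above are testing against the exact CLASS output.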
class-data/base_2015_plikHM_TT_lowTEB_lensing.ini ADDED
@@ -0,0 +1,63 @@
+ # *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*
+ # * CLASS input parameter file *
+ # *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*
+
+ # Best fit parameters from Planck 2015
+ # Case 2.59 of:
+ # https://wiki.cosmos.esa.int/planckpla2015/images/f/f7/Baseline_params_table_2015_limit68.pdf
+ # (but with more significant digits, directly from the chains)
+
+ #----------------------------
+ #----> background parameters:
+ #----------------------------
+
+ H0 = 67.86682
+ omega_b = 0.02227716
+ N_ur = 2.046
+ omega_cdm = 0.1184293
+ N_ncdm = 1
+ m_ncdm = 0.06
+ T_ncdm = 0.7137658555036082 # (4/11)^(1/3)
+
+ #--------------------------------
+ #----> thermodynamics parameters:
+ #--------------------------------
+
+ YHe = 0.245352
+ tau_reio = 0.06664549
+
+ #-------------------------------------
+ #----> primordial spectrum parameters:
+ #-------------------------------------
+
+ n_s = 0.9682903
+ A_s = 2.140509e-09
+
+ #-----------------------------
+ #----> non linear corrections:
+ #-----------------------------
+
+ non linear = halofit
+
+ #----------------------------------------
+ #----> parameters controlling the output:
+ #----------------------------------------
+
+ output = tCl,pCl,lCl,mPk
+ lensing = yes
+
+ root = output/base_2015_plikHM_TT_lowTEB_lensing
+
+ write warnings = yes
+ write parameters = yes
+
+ input_verbose = 1
+ background_verbose = 1
+ thermodynamics_verbose = 1
+ perturbations_verbose = 1
+ transfer_verbose = 1
+ primordial_verbose = 1
+ harmonic_verbose = 1
+ fourier_verbose = 1
+ lensing_verbose = 1
+ output_verbose = 1
class-data/base_2018_plikHM_TTTEEE_lowl_lowE_lensing.ini ADDED
@@ -0,0 +1,63 @@
+ # *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*
+ # * CLASS input parameter file *
+ # *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*
+
+ # Best fit parameters from Planck 2018
+ # Case 2.17 of:
+ # https://wiki.cosmos.esa.int/planck-legacy-archive/images/b/be/Baseline_params_table_2018_68pc.pdf
+ # (but with more significant digits, directly from the chains)
+
+ #----------------------------
+ #----> background parameters:
+ #----------------------------
+
+ H0 = 67.32117
+ omega_b = 0.02238280
+ N_ur = 2.046
+ omega_cdm = 0.1201075
+ N_ncdm = 1
+ m_ncdm = 0.06
+ T_ncdm = 0.7137658555036082 # (4/11)^(1/3)
+
+ #--------------------------------
+ #----> thermodynamics parameters:
+ #--------------------------------
+
+ YHe = 0.2454006
+ tau_reio = 0.05430842
+
+ #-------------------------------------
+ #----> primordial spectrum parameters:
+ #-------------------------------------
+
+ n_s = 0.9660499
+ A_s = 2.100549e-09
+
+ #-----------------------------
+ #----> non linear corrections:
+ #-----------------------------
+
+ non linear = halofit
+
+ #----------------------------------------
+ #----> parameters controlling the output:
+ #----------------------------------------
+
+ output = tCl,pCl,lCl,mPk
+ lensing = yes
+
+ root = output/base_2018_plikHM_TTTEEE_lowl_lowE_lensing
+
+ write warnings = yes
+ write parameters = yes
+
+ input_verbose = 1
+ background_verbose = 1
+ thermodynamics_verbose = 1
+ perturbations_verbose = 1
+ transfer_verbose = 1
+ primordial_verbose = 1
+ harmonic_verbose = 1
+ fourier_verbose = 1
+ lensing_verbose = 1
+ output_verbose = 1
class-data/check_PPF_approx.py ADDED
@@ -0,0 +1,238 @@
+ #!/usr/bin/env python
+ # coding: utf-8
+
+ # In[ ]:
+
+
+ get_ipython().run_line_magic('matplotlib', 'inline')
+ import matplotlib
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from classy import Class
+
+
+ # In[ ]:
+
+
+ k_out = [5e-5, 5e-4, 5e-3]
+ models = ['PPF1','PPF2','FLD1','FLD1S']
+ w0 = {'PPF1':-0.7,'PPF2':-1.15,'FLD1':-0.7,'FLD1S':-0.7}
+ wa = {'PPF1':0.,'PPF2':0.5,'FLD1':0.,'FLD1S':0.}
+ omega_cdm = {'PPF1':0.104976,'PPF2':0.120376,'FLD1':0.104976,'FLD1S':0.104976}
+ omega_b = 0.022
+ ##Omega_cdm = {'PPF1':0.26,'PPF2':0.21,'FLD1':0.26,'FLD1S':0.26}
+ ##Omega_b = 0.05
+ h = {'PPF1':0.64,'PPF2':0.74,'FLD1':0.64,'FLD1S':0.64}
+ cosmo = {}
+
+ for M in models:
+     use_ppf = 'yes'
+     gauge = 'Newtonian'
+     if 'FLD' in M:
+         use_ppf = 'no'
+         if 'S' in M:
+             gauge = 'Synchronous'
+
+     cosmo[M] = Class()
+
+     cosmo[M].set({'output':'tCl mPk dTk vTk','k_output_values':str(k_out).strip('[]'),
+                   'h':h[M],
+                   'omega_b':omega_b,'omega_cdm':omega_cdm[M],
+                   ##'Omega_b':Omega_b,'omega_cdm':Omega_cdm[M],
+                   'cs2_fld':1.,
+                   'w0_fld':w0[M],'wa_fld':wa[M],'Omega_Lambda':0.,'gauge':gauge,
+                   'use_ppf':use_ppf})
+     cosmo[M].compute()
+
+
+ # In[ ]:
+
+
+ colours = ['r','k','g','m']
+ for i,M in enumerate(models):
+     cl = cosmo[M].raw_cl()
+     l = cl['ell']
+
+     plt.loglog(l,cl['tt']*l*(l+1)/(2.*np.pi),label=M,color=colours[i])
+
+ plt.legend(loc='upper left')
+ plt.xlim([2,300])
+ plt.ylim([6e-11,1e-9])
+ plt.xlabel(r'$\ell$')
+ plt.ylabel(r'$[\ell(\ell+1)/2\pi] C_\ell^\mathrm{TT}$')
+
+ plt.savefig('check_PPF_clTT.pdf')
+
+
+ # In[ ]:
+
+
+ for M in ['PPF1','FLD1']:
+     csm = cosmo[M]
+     pt = csm.get_perturbations()
+     pts = pt['scalar']
+     for i,k in enumerate(k_out):
+         ptk = pts[i]
+         a = ptk['a']
+         phi = ptk['phi']
+         psi = ptk['psi']
+         if 'FLD' in M:
+             ls = ':'
+             lw=5
+         else:
+             ls = '-'
+             lw=1
+         plt.semilogx(a,0.5*(phi+psi),label=M+' '+'$k='+str(k)+'Mpc^{-1}$',ls=ls,lw=lw)
+
+ plt.legend(loc='lower left')
+ plt.xlim([1e-2,1])
+ plt.ylim([0.3,0.63])
+ plt.xlabel(r'$a/a_0$')
+ plt.ylabel(r'$\frac{1}{2}~(\Phi+\Psi)$')
+
+ plt.savefig('check_PPF_metric.pdf')
+
+
+ # In[ ]:
+
+
+ #kminclosed = sqrt(-8*Omega_k)*(70/3e5) Mpc^(-1)
+
+ k_out = [1e-3] #[1e-4, 1e-3, 1e-2]
+ #models = ['PPF1','PPF2','FLD1']
+ models = ['PPF1','FLD1']
+ w0 = {'PPF1':-0.7,'PPF2':-1.15,'FLD1':-0.7,'FLD1S':-0.7}
+ wa = {'PPF1':0.,'PPF2':0.5,'FLD1':0.,'FLD1S':0.}
+ omega_cdm = {'PPF1':0.104976,'PPF2':0.120376,'FLD1':0.104976,'FLD1S':0.104976}
+ omega_b = 0.022
+ ##Omega_cdm = {'PPF1':0.26,'PPF2':0.21,'FLD1':0.26,'FLD1S':0.26}
+ ##Omega_b = 0.05
+ h = {'PPF1':0.64,'PPF2':0.74,'FLD1':0.64}
+
+ fig, axes = plt.subplots(1,2,figsize=(16,5))
+ for Omega_K in [-0.1, 0.0, 0.1]:
+     for gauge in ['Synchronous','Newtonian']:
+         cosmo = {}
+         for M in models:
+             use_ppf = 'yes'
+             if 'FLD' in M:
+                 use_ppf = 'no'
+
+             cosmo[M] = Class()
+
+             cosmo[M].set({'output':'tCl mPk dTk vTk','k_output_values':str(k_out).strip('[]'),
+                           'h':h[M],
+                           'omega_b':omega_b,'omega_cdm':omega_cdm[M],'Omega_k':Omega_K,
+                           ##'Omega_b':Omega_b,'omega_cdm':Omega_cdm[M],
+                           'cs2_fld':1.,
+                           'w0_fld':w0[M],'wa_fld':wa[M],'Omega_Lambda':0.,'gauge':gauge,
+                           'use_ppf':use_ppf,'hyper_sampling_curved_low_nu':10.0})
+             cosmo[M].compute()
+
+         label = r'$\Omega_k='+str(Omega_K)+'$, '+gauge[0]
+         clfld = cosmo['FLD1'].raw_cl()
+         clppf = cosmo['PPF1'].raw_cl()
+
+         axes[0].semilogx(clfld['ell'][2:],clppf['tt'][2:]/clfld['tt'][2:],label=label)
+
+         ptfld = cosmo['FLD1'].get_perturbations()['scalar']
+         ptppf = cosmo['PPF1'].get_perturbations()['scalar']
+         for i,k in enumerate(k_out):
+             ptkfld = ptfld[i]
+             a = ptkfld['a']
+             phi_plus_phi_fld = ptkfld['phi']+ptkfld['psi']
+             ptkppf = ptppf[i]
+             phi_plus_phi_ppf = ptkppf['phi']+ptkppf['psi']
+             axes[1].semilogx(ptkppf['a'],phi_plus_phi_ppf,label=label+'_ppf')
+             axes[1].semilogx(ptkfld['a'],phi_plus_phi_fld,label=label+'_fld')
+             print(len(ptkppf['a']),len(ptkfld['a']))
+
+ axes[0].legend(loc='lower left',ncol=2)
+ axes[0].set_xlim([2,300])
+ axes[0].set_ylim([0.98,1.02])
+ axes[0].set_xlabel(r'$\ell$')
+ axes[0].set_ylabel(r'$C_\ell^\mathrm{FLD1}/C_\ell^\mathrm{PPF1}$')
+
+ axes[1].legend(loc='lower left',ncol=2)
+ axes[1].set_xlim([1e-2,1])
+ axes[1].set_xlabel(r'$a/a_0$')
+ axes[1].set_ylabel(r'$(\Phi+\Psi)$')
+
+ fig.savefig('check_PPF_Omegak.pdf')
+
+
+ # In[ ]:
+
+
+ colours = ['r','k','g','m']
+
+ k_out = [1e-1] #[1e-4, 1e-3, 1e-2]
+ #models = ['PPF1','PPF2','FLD1']
+ models = ['PPF1','FLD1']
+ w0 = {'PPF1':-0.7,'PPF2':-1.15,'FLD1':-0.7,'FLD1S':-0.7}
+ wa = {'PPF1':0.,'PPF2':0.5,'FLD1':0.,'FLD1S':0.}
+ omega_cdm = {'PPF1':0.104976,'PPF2':0.120376,'FLD1':0.104976,'FLD1S':0.104976}
+ omega_b = 0.022
+ ##Omega_cdm = {'PPF1':0.26,'PPF2':0.21,'FLD1':0.26,'FLD1S':0.26}
+ ##Omega_b = 0.05
+ h = {'PPF1':0.64,'PPF2':0.74,'FLD1':0.64}
+
+ fig, axes = plt.subplots(1,2,figsize=(18,8))
+
+ for Omega_K in [-0.1, 0.0, 0.1]:
+     for ppfgauge in ['Synchronous','Newtonian']:
+         cosmo = {}
+         for M in models:
+             use_ppf = 'yes'
+             gauge = ppfgauge
+             if 'FLD' in M:
+                 use_ppf = 'no'
+
+             cosmo[M] = Class()
+
+             cosmo[M].set({'output':'tCl mPk dTk vTk','k_output_values':str(k_out).strip('[]'),
+                           'h':h[M],
+                           'omega_b':omega_b,'omega_cdm':omega_cdm[M],'Omega_k':Omega_K,
+                           ##'Omega_b':Omega_b,'omega_cdm':Omega_cdm[M],
+                           'cs2_fld':1.,
+                           'w0_fld':w0[M],'wa_fld':wa[M],'Omega_Lambda':0.,'gauge':gauge,
+                           'use_ppf':use_ppf,'hyper_sampling_curved_low_nu':6.1})
+             cosmo[M].compute()
+
+         #fig, axes = plt.subplots(1,2,figsize=(16,5))
+         for j,M in enumerate(models):
+             cl = cosmo[M].raw_cl()
+             l = cl['ell']
+             label = M+r'$\Omega_k='+str(Omega_K)+'$, '+gauge[0]
+             axes[0].loglog(l,cl['tt']*l*(l+1)/(2.*np.pi),label=label,color=colours[j])
+
+             csm = cosmo[M]
+             pt = csm.get_perturbations()
+             pts = pt['scalar']
+             for i,k in enumerate(k_out):
+                 ptk = pts[i]
+                 a = ptk['a']
+                 phi = ptk['phi']
+                 psi = ptk['psi']
+                 if 'FLD' in M:
+                     ls = ':'
+                     lw=5
+                 else:
+                     ls = '-'
+                     lw=1
+                 axes[1].semilogx(a,0.5*abs(phi+psi),label=label+' '+'$k='+str(k)+'Mpc^{-1}$',ls=ls,lw=lw)
+
+ axes[0].legend(loc='upper left')
+ axes[0].set_xlim([2,300])
+ axes[0].set_ylim([6e-11,1e-9])
+ axes[0].set_xlabel(r'$\ell$')
+ axes[0].set_ylabel(r'$[\ell(\ell+1)/2\pi] C_\ell^\mathrm{TT}$')
+
+ axes[1].legend(loc='upper right')
+ #axes[1].set_xlim([1e-2,1])
+ #axes[1].set_ylim([0.3,0.63])
+ axes[1].set_xlabel(r'$a/a_0$')
+ axes[1].set_ylabel(r'$\frac{1}{2}~(\Phi+\Psi)$')
+
+ fig.savefig('check_PPF_Omegak2.pdf')
+
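The script above sets the dark-energy fluid via `w0_fld` and `wa_fld`, i.e. the CPL parameterization w(a) = w0 + wa(1 − a). A minimal standalone sketch of that equation of state (the helper name `w_cpl` is ours; the numbers in the comment are the `PPF1` and `PPF2` values used above):

```python
import numpy as np

def w_cpl(a, w0=-0.7, wa=0.0):
    # CPL (Chevallier-Polarski-Linder) equation of state: w(a) = w0 + wa*(1 - a).
    # E.g. PPF1 uses (w0, wa) = (-0.7, 0.0); PPF2 uses (-1.15, 0.5).
    a = np.asarray(a, dtype=float)
    return w0 + wa * (1.0 - a)
```

At a = 1 (today) w reduces to w0, and at early times (a → 0) it tends to w0 + wa, which is why the `PPF2` model crosses w = −1 during its evolution and genuinely needs the PPF scheme.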
class-data/cl_ST.py ADDED
@@ -0,0 +1,118 @@
+ #!/usr/bin/env python
+ # coding: utf-8
+
+ # In[ ]:
+
+
+ # import necessary modules
+ from classy import Class
+ from math import pi
+ get_ipython().run_line_magic('matplotlib', 'inline')
+ import matplotlib
+ import matplotlib.pyplot as plt
+
+
+ # In[ ]:
+
+
+ #####################################################
+ #
+ # Cosmological parameters and other CLASS parameters
+ #
+ #####################################################
+ common_settings = {# LambdaCDM parameters
+                    'h':0.67810,
+                    'omega_b':0.02238280,
+                    'omega_cdm': 0.1201075,
+                    'A_s':2.100549e-09,
+                    'tau_reio': 0.05430842}
+
+ l_max_scalars = 3000
+ l_max_tensors = 600
+
+ # Note that for l_max_tensors = 600 we can keep default precision,
+ # while for l_max_tensors = 3000 we would need to import many high precision settings from the file cl_ref.pre
+
+
+ # In[ ]:
+
+
+ ###############
+ #
+ # call CLASS : scalars only
+ #
+ ###############
+ #
+ M = Class()
+ M.set(common_settings)
+ M.set({'output':'tCl,pCl','modes':'s','lensing':'no','n_s':0.9660499,
+        'l_max_scalars':l_max_scalars})
+ M.compute()
+ cls = M.raw_cl(l_max_scalars)
+
+
+ # In[ ]:
+
+
+ ###############
+ #
+ # call CLASS : tensors only
+ #
+ ###############
+ #
+ M.empty() # reset input parameters to default, before passing a new parameter set
+ M.set(common_settings)
+ M.set({'output':'tCl,pCl','modes':'t','lensing':'no','r':0.1,'n_t':0,
+        'l_max_tensors':l_max_tensors})
+ M.compute()
+ clt = M.raw_cl(l_max_tensors)
+
+
+ # In[ ]:
+
+
+ ###############
+ #
+ # call CLASS : scalars + tensors (only in this case we can get the correct lensed ClBB)
+ #
+ ###############
+ #
+ M.empty() # reset input parameters to default, before passing a new parameter set
+ M.set(common_settings)
+ M.set({'output':'tCl,pCl,lCl','modes':'s,t','lensing':'yes','n_s':0.9660499,'r':0.1,'n_t':0,
+        'l_max_scalars':l_max_scalars,'l_max_tensors':l_max_tensors})
+ M.compute()
+ cl_tot = M.raw_cl(l_max_scalars)
+ cl_lensed = M.lensed_cl(l_max_scalars)
+
+
+ # In[ ]:
+
+
+ #################
+ #
+ # plotting
+ #
+ #################
+ #
+ plt.xlim([2,l_max_scalars])
+ plt.ylim([1.e-8,10])
+ plt.xlabel(r"$\ell$")
+ plt.ylabel(r"$\ell (\ell+1) C_l^{XY} / 2 \pi \,\,\, [\times 10^{10}]$")
+ plt.title(r"$r=0.1$")
+ plt.grid()
+ #
+ ell = cl_tot['ell']
+ ellt = clt['ell']
+ factor = 1.e10*ell*(ell+1.)/2./pi
+ factort = 1.e10*ellt*(ellt+1.)/2./pi
+ #
+ plt.loglog(ell,factor*cls['tt'],'r-',label=r'$\mathrm{TT(s)}$')
+ plt.loglog(ellt,factort*clt['tt'],'r:',label=r'$\mathrm{TT(t)}$')
+ plt.loglog(ell,factor*cls['ee'],'b-',label=r'$\mathrm{EE(s)}$')
+ plt.loglog(ellt,factort*clt['ee'],'b:',label=r'$\mathrm{EE(t)}$')
+ plt.loglog(ellt,factort*clt['bb'],'g:',label=r'$\mathrm{BB(t)}$')
+ plt.loglog(ell,factor*(cl_lensed['bb']-cl_tot['bb']),'g-',label=r'$\mathrm{BB(lensing)}$')
+ plt.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
+ plt.savefig('cl_ST.pdf',bbox_inches='tight')
+
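The plotting cell above converts raw C_ℓ into the conventional band power D_ℓ = ℓ(ℓ+1)C_ℓ/2π (additionally scaled by 10^10 in the script). A minimal standalone sketch of that normalization, runnable without `classy`; the helper name `dl_factor` and the toy spectrum are our own:

```python
import numpy as np

def dl_factor(ell):
    # Conversion factor from raw C_ell to band power D_ell = ell(ell+1) C_ell / (2 pi).
    ell = np.asarray(ell, dtype=float)
    return ell * (ell + 1.0) / (2.0 * np.pi)

# Toy spectrum C_ell ~ 1/[ell(ell+1)], which is exactly flat in D_ell.
ell = np.arange(2, 2501)
toy_cl = 1.0 / (ell * (ell + 1.0))
toy_dl = dl_factor(ell) * toy_cl
```

Plotting D_ℓ rather than C_ℓ flattens the large-scale (Sachs-Wolfe) part of the spectrum, which is why all the panels in these scripts use this normalization.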
class-data/classy-current-docstrings.txt ADDED
@@ -0,0 +1,821 @@
1
+ Function: _check_task_dependency(self, level)
2
+ Docstring:
3
+
4
+ Fill the level list with all the needed modules
5
+
6
+ .. warning::
7
+
8
+ the ordering of modules is obviously dependent on CLASS module order
9
+ in the main.c file. This has to be updated in case of a change to
10
+ this file.
11
+
12
+ Parameters
13
+ ----------
14
+
15
+ level : list
16
+ list of strings, containing initially only the last module required.
17
+ For instance, to recover all the modules, the input should be
18
+ ['lensing']
19
+
20
+
21
+ ---------------------------------
22
+ Function: compute(self, level=["distortions"])
23
+ Docstring:
24
+
25
+ compute(level=["distortions"])
26
+
27
+ Main function, execute all the _init methods for all desired modules.
28
+ This is called in MontePython, and this ensures that the Class instance
29
+ of this class contains all the relevant quantities. Then, one can deduce
30
+ Pk, Cl, etc...
31
+
32
+ Parameters
33
+ ----------
34
+ level : list
35
+ list of the last module desired. The internal function
36
+ _check_task_dependency will then add to this list all the
37
+ necessary modules to compute in order to initialize this last
38
+ one. The default last module is "lensing".
39
+
40
+ .. warning::
41
+
42
+ level default value should be left as an array (it was creating
43
+ problem when casting as a set later on, in _check_task_dependency)
44
+
45
+
46
+ ---------------------------------
47
+ Function: density_factor(self)
48
+ Docstring:
49
+
50
+ The density factor required to convert from the class-units of density to kg/m^3 (SI units)
51
+
52
+ ---------------------------------
53
+ Function: kgm3_to_eVMpc3(self)
54
+ Docstring:
55
+
56
+ Convert from kg/m^3 to eV/Mpc^3
57
+
58
+ ---------------------------------
59
+ Function: kgm3_to_MsolMpc3(self)
60
+ Docstring:
61
+
62
+ Convert from kg/m^3 to Msol/Mpc^3
63
+
64
+ ---------------------------------
65
+ Function: raw_cl(self, lmax=-1, nofail=False)
66
+ Docstring:
67
+
68
+ raw_cl(lmax=-1, nofail=False)
69
+
70
+ Return a dictionary of the primary C_l
71
+
72
+ Parameters
73
+ ----------
74
+ lmax : int, optional
75
+ Define the maximum l for which the C_l will be returned
76
+ (inclusively). This number will be checked against the maximum l
77
+ at which they were actually computed by CLASS, and an error will
78
+ be raised if the desired lmax is bigger than what CLASS can
79
+ give.
80
+ nofail: bool, optional
81
+ Check and enforce the computation of the harmonic module
82
+ beforehand, with the desired lmax.
83
+
84
+ Returns
85
+ -------
86
+ cl : dict
87
+ Dictionary that contains the power spectrum for 'tt', 'te', etc... The
88
+ index associated with each is defined wrt. Class convention, and are non
89
+ important from the python point of view. It also returns now the
90
+ ell array.
91
+
92
+ ---------------------------------
93
+ Function: lensed_cl(self, lmax=-1,nofail=False)
94
+ Docstring:
95
+
96
+ lensed_cl(lmax=-1, nofail=False)
97
+
98
+ Return a dictionary of the lensed C_l, computed by CLASS, without the
99
+ density C_ls. They must be asked separately with the function aptly
100
+ named density_cl
101
+
102
+ Parameters
103
+ ----------
104
+ lmax : int, optional
105
+ Define the maximum l for which the C_l will be returned (inclusively)
106
+ nofail: bool, optional
107
+ Check and enforce the computation of the lensing module beforehand
108
+
109
+ Returns
110
+ -------
111
+ cl : dict
112
+ Dictionary that contains the power spectrum for 'tt', 'te', etc... The
113
+ index associated with each is defined wrt. Class convention, and are non
114
+ important from the python point of view.
115
+
116
+ ---------------------------------
117
+ Function: density_cl(self, lmax=-1, nofail=False)
118
+ Docstring:
119
+
120
+ density_cl(lmax=-1, nofail=False)
121
+
122
+ Return a dictionary of the primary C_l for the matter
123
+
124
+ Parameters
125
+ ----------
126
+ lmax : int, optional
127
+ Define the maximum l for which the C_l will be returned (inclusively)
128
+ nofail: bool, optional
129
+ Check and enforce the computation of the lensing module beforehand
130
+
131
+ Returns
132
+ -------
133
+ cl : numpy array of numpy.ndarrays
134
+ Array that contains the list (in this order) of self correlation of
135
+ 1st bin, then successive correlations (set by non_diagonal) to the
136
+ following bins, then self correlation of 2nd bin, etc. The array
137
+ starts at index_ct_dd.
138
+
139
+ ---------------------------------
140
+ Function: luminosity_distance(self, z)
141
+ Docstring:
142
+
143
+ luminosity_distance(z)
144
+
145
+ ---------------------------------
146
+ Function: pk(self,double k,double z)
147
+ Docstring:
148
+
149
+ Gives the total matter pk (in Mpc**3) for a given k (in 1/Mpc) and z (will be non linear if requested to Class, linear otherwise)
150
+
151
+ .. note::
152
+
153
+ there is an additional check that output contains `mPk`,
154
+ because otherwise a segfault will occur
155
+
156
+
157
+ ---------------------------------
158
+ Function: pk_cb(self,double k,double z)
159
+ Docstring:
160
+
161
+ Gives the cdm+b pk (in Mpc**3) for a given k (in 1/Mpc) and z (will be non linear if requested to Class, linear otherwise)
162
+
163
+ .. note::
164
+
165
+ there is an additional check that output contains `mPk`,
166
+ because otherwise a segfault will occur
167
+
168
+
169
+ ---------------------------------
170
+ Function: pk_lin(self,double k,double z)
171
+ Docstring:
172
+
173
+ Gives the linear total matter pk (in Mpc**3) for a given k (in 1/Mpc) and z
174
+
175
+ .. note::
176
+
177
+ there is an additional check that output contains `mPk`,
178
+ because otherwise a segfault will occur
179
+
180
+
181
+ ---------------------------------
182
+ Function: pk_cb_lin(self,double k,double z)
183
+ Docstring:
184
+
185
+ Gives the linear cdm+b pk (in Mpc**3) for a given k (in 1/Mpc) and z
186
+
187
+ .. note::
188
+
189
+ there is an additional check that output contains `mPk`,
190
+ because otherwise a segfault will occur
191
+
192
+
193
+ ---------------------------------
194
+ Function: pk_numerical_nw(self,double k,double z)
195
+ Docstring:
196
+
197
+ Gives the nowiggle (smoothed) linear total matter pk (in Mpc**3) for a given k (in 1/Mpc) and z
198
+
199
+ .. note::
200
+
201
+ there is an additional check that `numerical_nowiggle` was set to `yes`,
202
+ because otherwise a segfault will occur
203
+
204
+
205
+ ---------------------------------
206
+ Function: pk_analytic_nw(self,double k)
207
+ Docstring:
208
+
209
+ Gives the linear total matter pk (in Mpc**3) for a given k (in 1/Mpc) and z
210
+
211
+ .. note::
212
+
213
+ there is an additional check that `analytic_nowiggle` was set to `yes`,
214
+ because otherwise a segfault will occur
215
+
216
+
217
+ ---------------------------------
218
+ Function: get_pk(self, np.ndarray[DTYPE_t,ndim=3] k, np.ndarray[DTYPE_t,ndim=1] z, int k_size, int z_size, int mu_size)
219
+ Docstring:
220
+ Fast function to get the power spectrum on a k and z array
221
+ ---------------------------------
222
+ Function: get_pk_cb(self, np.ndarray[DTYPE_t,ndim=3] k, np.ndarray[DTYPE_t,ndim=1] z, int k_size, int z_size, int mu_size)
223
+ Docstring:
224
+ Fast function to get the power spectrum on a k and z array
225
+ ---------------------------------
226
+ Function: get_pk_lin(self, np.ndarray[DTYPE_t,ndim=3] k, np.ndarray[DTYPE_t,ndim=1] z, int k_size, int z_size, int mu_size)
227
+ Docstring:
228
+ Fast function to get the linear power spectrum on a k and z array
229
+ ---------------------------------
230
+ Function: get_pk_cb_lin(self, np.ndarray[DTYPE_t,ndim=3] k, np.ndarray[DTYPE_t,ndim=1] z, int k_size, int z_size, int mu_size)
231
+ Docstring:
232
+ Fast function to get the linear power spectrum on a k and z array
233
+ ---------------------------------
234
+ Function: get_pk_all(self, k, z, nonlinear = True, cdmbar = False, z_axis_in_k_arr = 0, interpolation_kind='cubic')
235
+ Docstring:
236
+ General function to get the P(k,z) for ARBITRARY shapes of k,z
237
+ Additionally, it includes the functionality of selecting wether to use the non-linear parts or not,
238
+ and wether to use the cdm baryon power spectrum only
239
+ For Multi-Dimensional k-arrays, it assumes that one of the dimensions is the z-axis
240
+ This is handled by the z_axis_in_k_arr integer, as described in the source code
241
+ ---------------------------------
242
+ Function: get_pk_and_k_and_z(self, nonlinear=True, only_clustering_species = False, h_units=False)
243
+ Docstring:
244
+
245
+ Returns a grid of matter power spectrum values and the z and k
246
+ at which it has been fully computed. Useful for creating interpolators.
247
+
248
+ Parameters
249
+ ----------
250
+ nonlinear : bool
251
+ Whether the returned power spectrum values are linear or non-linear (default)
252
+ only_clustering_species : bool
253
+ Whether the returned power spectrum is for galaxy clustering and excludes massive neutrinos, or always includes everything (default)
254
+ h_units : bool
255
+ Whether the units of k in output are h/Mpc or 1/Mpc (default)
256
+
257
+ Returns
258
+ -------
259
+ pk : grid of power spectrum values, pk[index_k,index_z]
260
+ k : vector of k values, k[index_k] (in units of 1/Mpc by default, or h/Mpc when setting h_units to True)
261
+ z : vector of z values, z[index_z]
262
+
263
+ ---------------------------------
264
+ Function: get_transfer_and_k_and_z(self, output_format='class', h_units=False)
265
+ Docstring:
266
+
267
+ Returns a dictionary of grids of density and/or velocity transfer function values and the z and k at which it has been fully computed.
268
+ Useful for creating interpolators.
269
+ When setting CLASS input parameters, include at least one of 'dTk' (for density transfer functions) or 'vTk' (for velocity transfer functions).
270
+ Following the default output_format='class', all transfer functions will be normalised to 'curvature R=1' at initial time
271
+ (and not 'curvature R = -1/k^2' like in CAMB).
272
+ You may switch to output_format='camb' for the CAMB definition and normalisation of transfer functions.
273
+ (Then, 'dTk' must be in the input: the CAMB format only outputs density transfer functions).
274
+ When sticking to output_format='class', you also get the newtonian metric fluctuations phi and psi.
275
+ If you set the CLASS input parameter 'extra_metric_transfer_functions' to 'yes',
276
+ you get additional metric fluctuations in the synchronous and N-body gauges.
277
+
278
+ Parameters
279
+ ----------
280
+ output_format : ('class' or 'camb')
281
+ Format transfer functions according to CLASS (default) or CAMB
282
+ h_units : bool
283
+ Whether the units of k in output are h/Mpc or 1/Mpc (default)
284
+
285
+ Returns
286
+ -------
287
+ tk : dictionary containing all transfer functions.
288
+ For instance, the grid of values of 'd_c' (= delta_cdm) is available in tk['d_c']
289
+ All these grids have indices [index_k,index,z], for instance tk['d_c'][index_k,index,z]
290
+ k : vector of k values (in units of 1/Mpc by default, or h/Mpc when setting h_units to True)
291
+ z : vector of z values
292
+
293
+ ---------------------------------
294
+ Function: get_Weyl_pk_and_k_and_z(self, nonlinear=False, h_units=False)
295
+ Docstring:
296
+
297
+ Returns a grid of Weyl potential (phi+psi) power spectrum values and the z and k
298
+ at which it has been fully computed. Useful for creating interpolators.
299
+ Note that this function just calls get_pk_and_k_and_z and corrects the output
300
+ by the ratio of transfer functions [(phi+psi)/d_m]^2.
301
+
302
+ Parameters
303
+ ----------
304
+ nonlinear : bool
305
+ Whether the returned power spectrum values are linear or non-linear (default)
306
+ h_units : bool
307
+ Whether the units of k in output are h/Mpc or 1/Mpc (default)
308
+
309
+ Returns
310
+ -------
311
+ Weyl_pk : grid of Weyl potential (phi+psi) spectrum values, Weyl_pk[index_k,index_z]
312
+ k : vector of k values, k[index_k] (in units of 1/Mpc by default, or h/Mpc when setting h_units to True)
+ z : vector of z values, z[index_z]
+
+ ---------------------------------
+ Function: sigma(self,R,z, h_units = False)
+ Docstring:
+
+ Gives sigma (total matter) for a given R and z
+ (R is the radius in units of Mpc, so if R=8/h this will be the usual sigma8(z).
+ This is unless h_units is set to true, in which case R is the radius in units of Mpc/h,
+ and R=8 corresponds to sigma8(z))
+
+ .. note::
+
+ there is an additional check to verify whether output contains `mPk`,
+ and whether k_max > ...
+ because otherwise a segfault will occur
+
+
+ ---------------------------------
+ Function: sigma_cb(self,double R,double z, h_units = False)
+ Docstring:
+
+ Gives sigma (cdm+b) for a given R and z
+ (R is the radius in units of Mpc, so if R=8/h this will be the usual sigma8(z).
+ This is unless h_units is set to true, in which case R is the radius in units of Mpc/h,
+ and R=8 corresponds to sigma8(z))
+
+ .. note::
+
+ there is an additional check to verify whether output contains `mPk`,
+ and whether k_max > ...
+ because otherwise a segfault will occur
+
+
+ ---------------------------------
+ Function: pk_tilt(self,double k,double z)
+ Docstring:
+
+ Gives the effective logarithmic slope of P_L(k,z) (total matter) for a given k and z
+ (k is the wavenumber in units of 1/Mpc, z is the redshift, the output is dimensionless)
+
+ .. note::
+
+ there is an additional check to verify whether output contains `mPk` and whether k is in the right range
+
+
+ ---------------------------------
+ Function: angular_distance(self, z)
+ Docstring:
+
+ angular_distance(z)
+
+ Return the angular diameter distance (exactly, the quantity defined by Class
+ as index_bg_ang_distance in the background module)
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+
+ ---------------------------------
+ Function: angular_distance_from_to(self, z1, z2)
+ Docstring:
+
+ angular_distance_from_to(z1, z2)
+
+ Return the angular diameter distance of an object at z2 as seen by an observer at z1,
+ that is, sin_K((chi2-chi1)*np.sqrt(|k|))/np.sqrt(|k|)/(1+z2).
+ If z1>z2, returns zero.
+
+ Parameters
+ ----------
+ z1 : float
+ Observer redshift
+ z2 : float
+ Source redshift
+
+ Returns
+ -------
+ d_A(z1,z2) in Mpc
+
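In the flat limit (K → 0) the sin_K expression above reduces to (chi2-chi1)/(1+z2). A minimal stdlib sketch of the formula, not the CLASS implementation; the function name and the use of comoving distances as direct inputs are illustrative assumptions:

```python
from math import sin, sinh, sqrt

def d_a_from_to(chi1, chi2, z2, curvature_k=0.0):
    """d_A(z1,z2) = sin_K((chi2-chi1)*sqrt(|K|)) / sqrt(|K|) / (1+z2),
    where sin_K is sin for K>0, sinh for K<0, and the identity for K=0.
    chi1, chi2 are comoving distances in Mpc (toy inputs, not from CLASS)."""
    dchi = chi2 - chi1
    if dchi <= 0.0:
        return 0.0  # mirrors the documented behaviour: zero when z1 > z2
    if curvature_k > 0.0:
        s = sqrt(curvature_k)
        return sin(dchi * s) / s / (1.0 + z2)
    if curvature_k < 0.0:
        s = sqrt(-curvature_k)
        return sinh(dchi * s) / s / (1.0 + z2)
    return dchi / (1.0 + z2)

# Flat case: simply (chi2 - chi1)/(1+z2)
assert abs(d_a_from_to(1000.0, 3000.0, 1.0) - 1000.0) < 1e-9
```

A small positive or negative curvature smoothly approaches the flat result, which is a quick sanity check on the sign conventions.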
+ ---------------------------------
+ Function: comoving_distance(self, z)
+ Docstring:
+
+ comoving_distance(z)
+
+ Return the comoving distance
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+
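CLASS evaluates the comoving distance from its tabulated background. As a cross-check of the underlying integral chi(z) = ∫₀ᶻ c dz'/H(z'), here is a stdlib-only sketch for a flat, matter-only (Einstein-de Sitter) toy cosmology, where the analytic answer is chi = (2c/H0)(1 - 1/sqrt(1+z)); all names and the H0 value are illustrative:

```python
from math import sqrt

C_KM_S = 299792.458  # speed of light in km/s

def comoving_distance_eds(z, h0=67.81, n=10000):
    """Comoving distance in Mpc for a flat, matter-only toy cosmology,
    H(z) = H0 (1+z)^{3/2}, via trapezoidal integration of c/H(z')."""
    integrand = lambda zp: C_KM_S / (h0 * (1.0 + zp) ** 1.5)
    dz = z / n
    total = 0.5 * (integrand(0.0) + integrand(z))
    total += sum(integrand(i * dz) for i in range(1, n))
    return total * dz

# Analytic EdS result: chi = (2c/H0) * (1 - 1/sqrt(1+z))
z = 1.0
analytic = 2 * C_KM_S / 67.81 * (1 - 1 / sqrt(1 + z))
assert abs(comoving_distance_eds(z) - analytic) / analytic < 1e-3
```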
+ ---------------------------------
+ Function: scale_independent_growth_factor(self, z)
+ Docstring:
+
+ scale_independent_growth_factor(z)
+
+ Return the scale independent growth factor D(a) for CDM perturbations
+ (exactly, the quantity defined by Class as index_bg_D in the background module)
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+
+ ---------------------------------
+ Function: scale_independent_growth_factor_f(self, z)
+ Docstring:
+
+ scale_independent_growth_factor_f(z)
+
+ Return the scale independent growth factor f(z)=d ln D / d ln a for CDM perturbations
+ (exactly, the quantity defined by Class as index_bg_f in the background module)
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+
+ ---------------------------------
+ Function: scale_dependent_growth_factor_f(self, k, z, h_units=False, nonlinear=False, Nz=20)
+ Docstring:
+
+ scale_dependent_growth_factor_f(k,z)
+
+ Return the scale dependent growth factor
+ f(z)= 1/2 * [d ln P(k,a) / d ln a]
+ = - 0.5 * (1+z) * [d ln P(k,z) / d z]
+ where P(k,z) is the total matter power spectrum
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+ k : float
+ Desired wavenumber in 1/Mpc (if h_units=False) or h/Mpc (if h_units=True)
+
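The finite-difference version of the formula above, f = -0.5 (1+z) d ln P/dz, is easy to sketch with a toy spectrum. In matter domination D(z) = 1/(1+z) and P ∝ D², so f = 1 exactly, which makes the sketch self-checking (function names and the toy spectrum are assumptions, not CLASS internals):

```python
from math import log

def growth_rate_from_pk(pk, k, z, dz=1e-4):
    """f(k,z) = -0.5*(1+z) * d ln P(k,z)/dz, via a two-sided finite difference."""
    dlnp_dz = (log(pk(k, z + dz)) - log(pk(k, z - dz))) / (2 * dz)
    return -0.5 * (1 + z) * dlnp_dz

# Toy spectrum: matter domination, D(z) = 1/(1+z), P ∝ D^2, so f = 1 exactly.
toy_pk = lambda k, z: 2.0e4 * k / (1 + z) ** 2
f = growth_rate_from_pk(toy_pk, k=0.1, z=0.5)
assert abs(f - 1.0) < 1e-5
```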
+ ---------------------------------
+ Function: scale_dependent_growth_factor_f_cb(self, k, z, h_units=False, nonlinear=False, Nz=20)
+ Docstring:
+
+ scale_dependent_growth_factor_f_cb(k,z)
+
+ Return the scale dependent growth factor calculated from the CDM+baryon power spectrum P_cb(k,z)
+ f(z)= 1/2 * [d ln P_cb(k,a) / d ln a]
+ = - 0.5 * (1+z) * [d ln P_cb(k,z) / d z]
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+ k : float
+ Desired wavenumber in 1/Mpc (if h_units=False) or h/Mpc (if h_units=True)
+
+ ---------------------------------
+ Function: scale_independent_f_sigma8(self, z)
+ Docstring:
+
+ scale_independent_f_sigma8(z)
+
+ Return the scale independent growth factor f(z) multiplied by sigma8(z)
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+
+ Returns
+ -------
+ f(z)*sigma8(z) (dimensionless)
+
+ ---------------------------------
+ Function: effective_f_sigma8(self, z, z_step=0.1)
+ Docstring:
+
+ effective_f_sigma8(z)
+
+ Returns the time derivative of sigma8(z), computed as (d sigma8/d ln a)
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+ z_step : float
+ Default step used for the numerical two-sided derivative. For z < z_step the step is reduced progressively down to z_step/10 while sticking to a double-sided derivative. For z < z_step/10 a single-sided derivative is used instead.
+
+ Returns
+ -------
+ (d sigma8/d ln a)(z) (dimensionless)
+
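The step-reduction rule described for z_step can be sketched in plain Python. This is an illustrative reimplementation, not the CLASS code: a toy sigma8(z) = 0.8/(1+z) stands in for the internal spline, chosen because then d sigma8/d ln a = sigma8(z) exactly:

```python
def d_sigma8_d_lna(sigma8, z, z_step=0.1):
    """Numerical (d sigma8 / d ln a)(z) = -(1+z) * d sigma8/dz, with a
    two-sided step that shrinks near z=0 (so z - step stays >= 0) and a
    one-sided fallback for z < z_step/10."""
    if z < z_step / 10.0:
        # one-sided derivative at very low z
        dsdz = (sigma8(z + z_step) - sigma8(z)) / z_step
    else:
        step = min(z_step, z)  # reduce the two-sided step so z - step >= 0
        dsdz = (sigma8(z + step) - sigma8(z - step)) / (2.0 * step)
    return -(1.0 + z) * dsdz

# Toy: sigma8(z) = 0.8/(1+z)  =>  d sigma8/d ln a = 0.8/(1+z) = sigma8(z)
s8 = lambda z: 0.8 / (1.0 + z)
assert abs(d_sigma8_d_lna(s8, 0.5) - s8(0.5)) < 1e-2
```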
+ ---------------------------------
+ Function: effective_f_sigma8_spline(self, z, Nz=20)
+ Docstring:
+
+ effective_f_sigma8_spline(z)
+
+ Returns the time derivative of sigma8(z), computed as (d sigma8/d ln a)
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+ Nz : integer
+ Number of values used to spline sigma8(z) in the range [z-0.1,z+0.1]
+
+ Returns
+ -------
+ (d sigma8/d ln a)(z) (dimensionless)
+
+ ---------------------------------
+ Function: z_of_tau(self, tau)
+ Docstring:
+
+ Redshift corresponding to a given conformal time.
+
+ Parameters
+ ----------
+ tau : float
+ Conformal time
+
+ ---------------------------------
+ Function: Hubble(self, z)
+ Docstring:
+
+ Hubble(z)
+
+ Return the Hubble rate (exactly, the quantity defined by Class as index_bg_H
+ in the background module)
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+
+ ---------------------------------
+ Function: Om_m(self, z)
+ Docstring:
+
+ Omega_m(z)
+
+ Return the matter density fraction (exactly, the quantity defined by Class as index_bg_Omega_m
+ in the background module)
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+
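For a flat LCDM background the quantity Om_m returns reduces to a closed form, Omega_m(z) = Omega_m0 (1+z)³ / E²(z) with E² = Omega_m0 (1+z)³ + (1 - Omega_m0). A stdlib sketch of that limit (CLASS itself uses the full background table; names and the 0.31 value are illustrative):

```python
def omega_m_of_z(omega_m0, z):
    """Omega_m(z) for a flat LCDM toy background:
    Omega_m(z) = Omega_m0 (1+z)^3 / E^2(z),
    E^2(z) = Omega_m0 (1+z)^3 + (1 - Omega_m0)."""
    e2 = omega_m0 * (1 + z) ** 3 + (1.0 - omega_m0)
    return omega_m0 * (1 + z) ** 3 / e2

assert abs(omega_m_of_z(0.31, 0.0) - 0.31) < 1e-12  # today: Omega_m0
assert omega_m_of_z(0.31, 10.0) > 0.99              # matter dominates at high z
```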
+ ---------------------------------
+ Function: Om_b(self, z)
+ Docstring:
+
+ Omega_b(z)
+
+ Return the baryon density fraction (exactly, the ratio of quantities defined by Class as
+ index_bg_rho_b and index_bg_rho_crit in the background module)
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+
+ ---------------------------------
+ Function: Om_cdm(self, z)
+ Docstring:
+
+ Omega_cdm(z)
+
+ Return the cdm density fraction (exactly, the ratio of quantities defined by Class as
+ index_bg_rho_cdm and index_bg_rho_crit in the background module)
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+
+ ---------------------------------
+ Function: Om_ncdm(self, z)
+ Docstring:
+
+ Omega_ncdm(z)
+
+ Return the ncdm density fraction (exactly, the ratio of quantities defined by Class as
+ Sum_m [ index_bg_rho_ncdm1 + n ], with n=0...N_ncdm-1, and index_bg_rho_crit in the background module)
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+
+ ---------------------------------
+ Function: ionization_fraction(self, z)
+ Docstring:
+
+ ionization_fraction(z)
+
+ Return the ionization fraction for a given redshift z
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+
+ ---------------------------------
+ Function: baryon_temperature(self, z)
+ Docstring:
+
+ baryon_temperature(z)
+
+ Give the baryon temperature for a given redshift z
+
+ Parameters
+ ----------
+ z : float
+ Desired redshift
+
+ ---------------------------------
+ Function: T_cmb(self)
+ Docstring:
+
+ Return the CMB temperature
+
+ ---------------------------------
+ Function: Omega0_m(self)
+ Docstring:
+
+ Return the sum of Omega0 for all non-relativistic components
+
+ ---------------------------------
+ Function: get_background(self)
+ Docstring:
+
+ Return an array of the background quantities at all times.
+
+ Returns
+ -------
+ background : dictionary containing background quantities.
+
+ ---------------------------------
+ Function: get_thermodynamics(self)
+ Docstring:
+
+ Return the thermodynamics quantities.
+
+ Returns
+ -------
+ thermodynamics : dictionary containing thermodynamics quantities.
+
+ ---------------------------------
+ Function: get_primordial(self)
+ Docstring:
+
+ Return the primordial scalar and/or tensor spectrum depending on 'modes'.
+ 'output' must be set to something, e.g. 'tCl'.
+
+ Returns
+ -------
+ primordial : dictionary containing the k-vector and the primordial scalar and tensor P(k).
+
+ ---------------------------------
+ Function: get_perturbations(self, return_copy=True)
+ Docstring:
+
+ Return scalar, vector and/or tensor perturbations as arrays for requested
+ k-values.
+
+ .. note::
+
+ you need to both specify 'k_output_values' and have some
+ perturbations computed, for instance by setting 'output' to 'tCl'.
+
+ Do not set 'return_copy=False' unless you know exactly what you are doing.
+ It gives you access to the direct C pointers inside CLASS.
+ That also means that if Class is deallocated,
+ your perturbations array will become invalid. Beware!
+
+ Returns
+ -------
+ perturbations : dict of array of dicts
+ perturbations['scalar'] is an array of length 'k_output_values' of
+ dictionaries containing scalar perturbations.
+ Similar for perturbations['vector'] and perturbations['tensor'].
+
+ ---------------------------------
+ Function: get_transfer(self, z=0., output_format='class')
+ Docstring:
+
+ Return the density and/or velocity transfer functions for all initial
+ conditions today. You must include 'mTk' and/or 'vCTk' in the list of
+ 'output'. The transfer functions can also be computed at higher redshift z
+ provided that 'z_pk' has been set and that 0<z<z_pk.
+
+ Parameters
+ ----------
+ z : redshift (default = 0)
+ output_format : ('class' or 'camb') Format transfer functions according to
+ CLASS convention (default) or CAMB convention.
+
+ Returns
+ -------
+ tk : dictionary containing transfer functions.
+
+ ---------------------------------
+ Function: get_current_derived_parameters(self, names)
+ Docstring:
+
+ get_current_derived_parameters(names)
+
+ Return a dictionary containing an entry for all the names defined in the
+ input list.
+
+ Parameters
+ ----------
+ names : list
+ Derived parameters that can be asked from Monte Python, or
+ elsewhere.
+
+ Returns
+ -------
+ derived : dict
+
+ .. warning::
+
+ This method used to take as an argument directly the data class from
+ Monte Python. To maintain compatibility with this old feature, a
+ check is performed to verify that names is indeed a list. If not, it
+ raises a TypeError. The old version of this function, when called
+ with the new argument, will raise an AttributeError.
+
+
+ ---------------------------------
+ Function: nonlinear_scale(self, np.ndarray[DTYPE_t,ndim=1] z, int z_size)
+ Docstring:
+
+ nonlinear_scale(z, z_size)
+
+ Return the nonlinear scale for all the redshifts specified in z, of size
+ z_size
+
+ Parameters
+ ----------
+ z : numpy array
+ Array of requested redshifts
+ z_size : int
+ Size of the redshift array
+
+ ---------------------------------
+ Function: nonlinear_scale_cb(self, np.ndarray[DTYPE_t,ndim=1] z, int z_size)
+ Docstring:
+
+ nonlinear_scale_cb(z, z_size)
+
+ Return the nonlinear scale (cdm+b) for all the redshifts specified in z, of size
+ z_size
+
+ Parameters
+ ----------
+ z : numpy array
+ Array of requested redshifts
+ z_size : int
+ Size of the redshift array
+
+ ---------------------------------
+ Function: __call__(self, ctx)
+ Docstring:
+
+ Function to interface with CosmoHammer
+
+ Parameters
+ ----------
+ ctx : context
+ Contains several dictionaries storing data and cosmological
+ information
+
+
+ ---------------------------------
+ Function: get_pk_array(self, np.ndarray[DTYPE_t,ndim=1] k, np.ndarray[DTYPE_t,ndim=1] z, int k_size, int z_size, nonlinear)
+ Docstring:
+ Fast function to get the power spectrum on a k and z array
+ ---------------------------------
+ Function: get_pk_cb_array(self, np.ndarray[DTYPE_t,ndim=1] k, np.ndarray[DTYPE_t,ndim=1] z, int k_size, int z_size, nonlinear)
+ Docstring:
+ Fast function to get the cdm+b power spectrum on a k and z array
+ ---------------------------------
+ Function: Omega0_k(self)
+ Docstring:
+ Curvature contribution
+ ---------------------------------
+ Function: get_sources(self)
+ Docstring:
+
+ Return the source functions for all k, tau in the grid.
+
+ Returns
+ -------
+ sources : dictionary containing source functions.
+ k_array : numpy array containing k values.
+ tau_array: numpy array containing tau values.
+
class-data/classy-generated-docstrings.txt ADDED
@@ -0,0 +1,370 @@
+ ```python
+ """
+ The following docstrings were extracted from classy-py.py
+ """
+ def viewdictitems(d):
+ """Return items from a dictionary for Python 2 and 3 compatibility.
+
+ Args:
+ d (dict): The dictionary to retrieve items from.
+
+ Returns:
+ A view of the dictionary's items.
+ """
+ def _check_task_dependency(self, level):
+ """Fill the level list with all the needed modules.
+
+ Warning:
+ The ordering of modules is obviously dependent on CLASS module order
+ in the main.c file. This has to be updated in case of a change to
+ this file.
+
+ Args:
+ level (list): List of strings, containing initially only the last module required.
+ For instance, to recover all the modules, the input should be
+ ['lensing'].
+ """
+ def _pars_check(self, key, value, contains=False, add=""):
+ """Check parameters (implementation detail, no docstring provided)."""
+ def compute(self, level=["distortions"]):
+ """Compute the CLASS cosmology.
+
+ Main function, execute all the _init methods for all desired modules.
+ This is called in MontePython, and this ensures that the Class instance
+ of this class contains all the relevant quantities. Then, one can deduce
+ Pk, Cl, etc...
+
+ Args:
+ level (list, optional): List of the last module desired. The internal function
+ _check_task_dependency will then add to this list all the
+ necessary modules to compute in order to initialize this last
+ one. The default last module is "lensing". Defaults to ["distortions"].
+
+ Warning:
+ level default value should be left as an array (it was creating
+ problems when casting as a set later on, in _check_task_dependency)
+ """
+ def raw_cl(self, lmax=-1, nofail=False):
+ """Return a dictionary of the primary C_l.
+
+ Args:
+ lmax (int, optional): Define the maximum l for which the C_l will be returned
+ (inclusively). This number will be checked against the maximum l
+ at which they were actually computed by CLASS, and an error will
+ be raised if the desired lmax is bigger than what CLASS can
+ give. Defaults to -1.
+ nofail (bool, optional): Check and enforce the computation of the harmonic module
+ beforehand, with the desired lmax. Defaults to False.
+
+ Returns:
+ dict: Dictionary that contains the power spectrum for 'tt', 'te', etc... The
+ index associated with each is defined wrt. Class convention, and is not
+ important from the python point of view. It now also returns the
+ ell array.
+ """
+ def lensed_cl(self, lmax=-1,nofail=False):
+ """Return a dictionary of the lensed C_l.
+
+ Return a dictionary of the lensed C_l, computed by CLASS, without the
+ density C_ls. They must be asked separately with the function aptly
+ named density_cl
+
+ Args:
+ lmax (int, optional): Define the maximum l for which the C_l will be returned (inclusively). Defaults to -1.
+ nofail (bool, optional): Check and enforce the computation of the lensing module beforehand. Defaults to False.
+
+ Returns:
+ dict: Dictionary that contains the power spectrum for 'tt', 'te', etc... The
+ index associated with each is defined wrt. Class convention, and is not
+ important from the python point of view.
+ """
+ def density_cl(self, lmax=-1, nofail=False):
+ """Return a dictionary of the primary C_l for the matter.
+
+ Args:
+ lmax (int, optional): Define the maximum l for which the C_l will be returned (inclusively). Defaults to -1.
+ nofail (bool, optional): Check and enforce the computation of the lensing module beforehand. Defaults to False.
+
+ Returns:
+ numpy array of numpy.ndarrays: Array that contains the list (in this order) of self correlation of
+ 1st bin, then successive correlations (set by non_diagonal) to the
+ following bins, then self correlation of 2nd bin, etc. The array
+ starts at index_ct_dd.
+ """
+ def sigma(self,R,z, h_units = False):
+ """Give sigma (total matter) for a given R and z.
+
+ (R is the radius in units of Mpc, so if R=8/h this will be the usual sigma8(z).
+ This is unless h_units is set to true, in which case R is the radius in units of Mpc/h,
+ and R=8 corresponds to sigma8(z))
+
+ Note:
+ there is an additional check to verify whether output contains `mPk`,
+ and whether k_max > ...
+ because otherwise a segfault will occur
+
+ Args:
+ R:
+ z:
+ h_units:
+ """
+ def sigma_cb(self,double R,double z, h_units = False):
+ """Give sigma (cdm+b) for a given R and z.
+
+ (R is the radius in units of Mpc, so if R=8/h this will be the usual sigma8(z).
+ This is unless h_units is set to true, in which case R is the radius in units of Mpc/h,
+ and R=8 corresponds to sigma8(z))
+
+ Note:
+ there is an additional check to verify whether output contains `mPk`,
+ because otherwise a segfault will occur
+
+ Args:
+ R:
+ z:
+ h_units:
+ """
+ def pk_tilt(self,double k,double z):
+ """Give the effective logarithmic slope of P_L(k,z) (total matter) for a given k and z.
+
+ (k is the wavenumber in units of 1/Mpc, z is the redshift, the output is dimensionless)
+
+ Note:
+ there is an additional check that output contains `mPk` and whether k is in the right range
+
+ Args:
+ k:
+ z:
+ """
139
+ def age(self):
140
+ """Return the age of the Universe (implementation detail, no docstring provided)."""
141
+ def h(self):
142
+ """Return the Hubble parameter (implementation detail, no docstring provided)."""
143
+ def n_s(self):
144
+ """Return the scalar spectral index (implementation detail, no docstring provided)."""
145
+ def tau_reio(self):
146
+ """Return the reionization optical depth (implementation detail, no docstring provided)."""
147
+ def Omega_m(self):
148
+ """Return the matter density parameter (implementation detail, no docstring provided)."""
149
+ def Omega_r(self):
150
+ """Return the radiation density parameter (implementation detail, no docstring provided)."""
151
+ def theta_s_100(self):
152
+ """Return the sound horizon angle (implementation detail, no docstring provided)."""
153
+ def theta_star_100(self):
154
+ """Return the sound horizon angle at decoupling (implementation detail, no docstring provided)."""
155
+ def Omega_Lambda(self):
156
+ """Return the cosmological constant density parameter (implementation detail, no docstring provided)."""
157
+ def Omega_g(self):
158
+ """Return the photon density parameter (implementation detail, no docstring provided)."""
159
+ def r(self):
160
+ """Return the tensor-to-scalar ratio (implementation detail, no docstring provided)."""
161
+ def A_s(self):
162
+ """Return the primordial power spectrum amplitude (implementation detail, no docstring provided)."""
163
+ def ln_A_s_1e10(self):
164
+ """Return the natural logarithm of 10^10 times the primordial power spectrum amplitude (implementation detail, no docstring provided)."""
165
+ def lnAs_1e10(self):
166
+ """Return the natural logarithm of 10^10 times the primordial power spectrum amplitude (implementation detail, no docstring provided)."""
167
+ def Neff(self):
168
+ """Return the effective number of neutrino species (implementation detail, no docstring provided)."""
169
+ def get_transfer(self, z=0., output_format='class'):
170
+ """Return the density and/or velocity transfer functions.
171
+
172
+ Return the density and/or velocity transfer functions for all initial
173
+ conditions, at a given value of z.
174
+ By default, all transfer functions will be normalised to 'curvature R=1'
175
+ at initial time (and not 'curvature R = -1/k^2' like in CAMB).
176
+ You may switch to output_format='camb' for the CAMB definition and normalisation
177
+ of transfer functions.
178
+ When setting CLASS input parameters, include at least one of 'dTk' (for density transfer functions)
179
+ or 'vTk' (for velocity transfer functions).
180
+ For more details, see section II of the CLASS notebook.
181
+
182
+ Args:
183
+ z (float, optional): Redshift. Defaults to 0..
184
+ output_format (str, optional): Format transfer functions according to CLASS (default) or CAMB. Defaults to 'class'.
185
+
186
+ Returns:
187
+ dict: Dictionary containing an entry for each transfer function. For a
188
+ given transfer function, say, delta_tot, transfers['d_tot'] will be
189
+ an array containing delta_tot(k) at the k values defined in the
190
+ 'k_output_values' list. When there are non-adiabatic conditions,
191
+ the transfer dictionary will have keys like transfers['d_tot[name]'], where
192
+ name is the name of the isocurvature mode.
193
+ """
194
+ def get_current_derived_parameters(self, names):
195
+ """Return a dictionary containing an entry for all the names defined in the input list.
196
+
197
+ Args:
198
+ names (list): Derived parameters that can be asked from Monte Python, or
199
+ elsewhere.
200
+
201
+ Returns:
202
+ dict: A dictionary of derived parameters.
203
+
204
+ Raises:
205
+ TypeError: If `names` is not a list.
206
+ """
207
+ def get_perturbations(self, return_copy=True):
208
+ """Return scalar, vector and/or tensor perturbations as arrays for requested k-values.
209
+
210
+ Note:
211
+ you need to specify both 'k_output_values', and have some
212
+ perturbations computed, for instance by setting 'output' to 'tCl'.
213
+ Do not enable 'return_copy=False' unless you know exactly what you are doing.
214
+ This will mean that you get access to the direct C pointers inside CLASS.
215
+ That also means that if class is deallocated,
216
+ your perturbations array will become invalid. Beware!
217
+
218
+ Args:
219
+ return_copy (bool, optional): Whether to return a copy of the data. Defaults to True.
220
+
221
+ Returns:
222
+ dict of array of dicts: perturbations['scalar'] is an array of length 'k_output_values' of
223
+ dictionary containing scalar perturbations.
224
+ Similar for perturbations['vector'] and perturbations['tensor'].
225
+ """
226
+ def scale_dependent_growth_factor_f(self, k, z, Nz=50, h_units = False, evolution=False):
227
+ """Return the scale-dependent growth factor, f(k,z) = d ln delta(k,z) / d ln a, at a given k and z, for total matter fluctuations.
228
+
229
+ Args:
230
+ k (float or array): wavenumber in units of 1/Mpc
231
+ z (float or array): redshift
232
+ Nz (int, optional): number of points for computing sigma(R,z) splines, default to 50. Defaults to 50.
233
+ h_units (bool, optional): If true, returns k in h/Mpc. Defaults to False.
234
+ evolution (bool, optional): . Defaults to False.
235
+ """
236
+ def pk(self, k, z, lAccuracy=10):
237
+ """Return the total matter power spectrum for a given k and z.
238
+
239
+ Return the total matter power spectrum for a given k and z (will be
240
+ non linear if requested to Class, linear otherwise)
241
+
242
+ Args:
243
+ k (float): wavenumber in units of 1/Mpc
244
+ z (float): redshift
245
+ lAccuracy (int, optional): Level of accuracy of the integration. Defaults to 10.
246
+ """
247
+ def pk_cb(self,double k,double z):
248
+ """Give the cdm+b pk (in Mpc**3) for a given k (in 1/Mpc) and z (will be non linear if requested to Class, linear otherwise).
249
+
250
+ Note:
251
+ there is an additional check that output contains `mPk`,
252
+ because otherwise a segfault will occur
253
+
254
+ Args:
255
+ k:
256
+ z:
257
+ """
258
+ def pk_lin(self, k, z, lAccuracy=10):
259
+ """Return the LINEAR total matter power spectrum for a given k and z.
260
+
261
+ Args:
262
+ k (float): wavenumber in units of 1/Mpc
263
+ z (float): redshift
264
+ lAccuracy (int, optional): Level of accuracy of the integration. Defaults to 10.
265
+ """
266
+ def pk_cb_lin(self,double k,double z):
267
+ """Give the LINEAR cdm+b pk (in Mpc**3) for a given k (in 1/Mpc) and z.
268
+
269
+ Note:
270
+ there is an additional check that output contains `mPk`,
271
+ because otherwise a segfault will occur
272
+
273
+ Args:
274
+ k:
275
+ z:
276
+ """
277
+ def log_pk(self, k, z, lAccuracy=10):
278
+ """Return the log of the total matter power spectrum for a given k and z.
279
+
280
+ Args:
281
+ k (float): wavenumber in units of 1/Mpc
282
+ z (float): redshift
283
+ lAccuracy (int, optional): Level of accuracy of the integration. Defaults to 10.
284
+ """
285
+ def transfer(self, k, z, idx_T=1, lAccuracy=10):
286
+ """Return a transfer function for a given k and z.
287
+
288
+ Args:
289
+ k (float): wavenumber in units of 1/Mpc
290
+ z (float): redshift
291
+ idx_T (int, optional): index of transfer function to return, with 0=delta_g, 1=delta_b,
292
+ 2=delta_cdm, 3=delta_ncdm[0], etc.... Defaults to 1.
293
+ lAccuracy (int, optional): Level of accuracy of the integration. Defaults to 10.
294
+ """
295
+ def rho_crit(self, z, lAccuracy=10):
296
+ """Return the critical density at redshift z.
297
+
298
+ Args:
299
+ z (float): redshift
300
+ lAccuracy (int, optional): Level of accuracy of the integration. Defaults to 10.
301
+ """
302
+ def scale_independent_f_sigma8(self, z, Nz=50):
303
+ """Return an interpolator for f \\sigma_8 (scale-INdependent), as a function of z.
304
+
305
+ This will compute f(z) = d ln delta / d ln a,
306
+ approximating this quantity with the scale-INdependent growth rate.
307
+ For the scale-dependent one, use the proper function.
308
+
309
+ Args:
310
+ z (array): Redshift
311
+ """
312
+ def scale_independent_growth_factor(self, z, Nz=50):
313
+ """Return the linear growth factor by taking the ratio of Delta(z)/Delta(z=0).
314
+
315
+ Args:
316
+ z (array): Redshift
317
+ """
318
+ def has_idr(self):
319
+ """Check for interacting dark radiation (implementation detail, no docstring provided)."""
320
+ def has_dr(self):
321
+ """Check for dark radiation (implementation detail, no docstring provided)."""
322
+ def spectral_distortion_amplitudes(self):
323
+ """Distortion amplitudes (implementation detail, no docstring provided)."""
324
+ def get_transfer_functions_at_z(self, z, k_values, output_format='class'):
325
+ """Return the density and velocity transfer functions.
326
+
327
+ Return the density and velocity transfer functions for all initial
328
+ conditions, at a given value of z.
329
+ For more details, see section II of the CLASS notebook.
330
+ By default, all transfer functions will be normalised to 'curvature R=1' at initial time
331
+ (and not 'curvature R = -1/k^2' like in CAMB).
332
+ You may switch to output_format='camb' for the CAMB definition and normalisation of transfer functions.
333
+ When setting CLASS input parameters, include at least one of 'dTk' (for density transfer functions)
334
+ or 'vTk' (for velocity transfer functions).
335
+
336
+ Args:
337
+ z (float): Redshift
338
+ k_values:
339
+ output_format (str, optional): Format transfer functions according to CLASS (default) or CAMB. Defaults to 'class'.
340
+ """
341
+ def primordial_spec(self, k, return_power=True, lAccuracy=10):
342
+ """Return the primordial power spectrum.
343
+
344
+ This function switches between the scalar and tensor primordial power
345
+ spectrum, and accepts as an argument a scale k.
346
+
347
+ Args:
348
+ k (float): wavenumber in units of 1/Mpc
349
+ return_power (bool, optional): default value is true, which returns the power spectrum, otherwise the value of the scale is returned. Defaults to True.
350
+ lAccuracy (int, optional): Level of accuracy of the integration. Defaults to 10.
351
+ """
352
+ def primordial_scalar_pk(self, k, lAccuracy=10):
353
+ """Return the primordial SCALAR power spectrum for a given k.
354
+
355
+ Args:
356
+ k (float): wavenumber in units of 1/Mpc
357
+ lAccuracy (int, optional): Level of accuracy of the integration. Defaults to 10.
358
+ """
359
+ def primordial_tensor_pk(self, k, lAccuracy=10):
360
+ """Return the primordial TENSOR power spectrum for a given k.
361
+
362
+ Args:
363
+ k (float): wavenumber in units of 1/Mpc
364
+ lAccuracy (int, optional): Level of accuracy of the integration. Defaults to 10.
365
+ """
+ def angular_diamater_distance(self,z):
+ """Return the angular diameter distance."""
+ def tangential_critical_density(self,z_l,z_s):
+ """Return the critical density for tangential shear."""
+ ```
class-data/classy-py.py ADDED
The diff for this file is too large to render. See raw diff
 
class-data/cltt_terms.py ADDED
@@ -0,0 +1,98 @@
+ #!/usr/bin/env python
+ # coding: utf-8
+
+ # In[ ]:
+
+
+ # import necessary modules
+ from classy import Class
+ from math import pi
+ get_ipython().run_line_magic('matplotlib', 'inline')
+ import matplotlib
+ import matplotlib.pyplot as plt
+
+
+ # In[ ]:
+
+
+ #############################################
+ #
+ # Cosmological parameters and other CLASS parameters
+ #
+ common_settings = {# LambdaCDM parameters
+ 'h':0.67810,
+ 'omega_b':0.02238280,
+ 'omega_cdm':0.1201075,
+ 'A_s':2.100549e-09,
+ 'n_s':0.9660499,
+ 'tau_reio':0.05430842 ,
+ # output and precision parameters
+ 'output':'tCl,pCl,lCl',
+ 'lensing':'yes',
+ 'l_max_scalars':5000}
+ #
+ M = Class()
+ #
+ ###############
+ #
+ # call CLASS for the total Cl's and then for each contribution
+ #
+ ###############
+ #
+ M.set(common_settings)
+ M.compute()
+ cl_tot = M.raw_cl(3000)
+ cl_lensed = M.lensed_cl(3000)
+ M.empty() # reset input
+ #
+ M.set(common_settings) # new input
+ M.set({'temperature contributions':'tsw'})
+ M.compute()
+ cl_tsw = M.raw_cl(3000)
+ M.empty()
+ #
+ M.set(common_settings)
+ M.set({'temperature contributions':'eisw'})
+ M.compute()
+ cl_eisw = M.raw_cl(3000)
+ M.empty()
+ #
+ M.set(common_settings)
+ M.set({'temperature contributions':'lisw'})
+ M.compute()
+ cl_lisw = M.raw_cl(3000)
+ M.empty()
+ #
+ M.set(common_settings)
+ M.set({'temperature contributions':'dop'})
+ M.compute()
+ cl_dop = M.raw_cl(3000)
+ M.empty()
+
+
+ # In[ ]:
+
+
+ #################
+ #
+ # start plotting
+ #
+ #################
+ #
+ plt.xlim([2,3000])
+ plt.xlabel(r"$\ell$")
+ plt.ylabel(r"$\ell (\ell+1) C_l^{TT} / 2 \pi \,\,\, [\times 10^{10}]$")
+ plt.grid()
+ #
+ ell = cl_tot['ell']
+ factor = 1.e10*ell*(ell+1.)/2./pi
+ plt.semilogx(ell,factor*cl_tsw['tt'],'c-',label=r'$\mathrm{T+SW}$')
+ plt.semilogx(ell,factor*cl_eisw['tt'],'r-',label=r'$\mathrm{early-ISW}$')
+ plt.semilogx(ell,factor*cl_lisw['tt'],'y-',label=r'$\mathrm{late-ISW}$')
+ plt.semilogx(ell,factor*cl_dop['tt'],'g-',label=r'$\mathrm{Doppler}$')
+ plt.semilogx(ell,factor*cl_tot['tt'],'r-',label=r'$\mathrm{total}$')
+ plt.semilogx(ell,factor*cl_lensed['tt'],'k-',label=r'$\mathrm{lensed}$')
+ #
+ plt.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
+ plt.savefig('cltt_terms.pdf',bbox_inches='tight')
+
class-data/cosmology_validation.py ADDED
@@ -0,0 +1,502 @@
+ #!/usr/bin/env python
+ # coding: utf-8
+
+ # In[ ]:
+
+
+ ## Examples of successful runs with CLASS from the AI assistant
+
+
+ # In[1]:
+
+
+ import numpy as np
+ import matplotlib.pyplot as plt
+ from classy import Class
+
+ # Initialize CLASS
+ cosmo = Class()
+
+ # Set parameters using a dictionary
+ params = {
+     'output': 'mPk',
+     'N_ncdm': 1,        # Number of sterile neutrinos
+     'm_ncdm': 0.2,      # Mass of the sterile neutrino in eV
+     'h': 0.7,           # Hubble parameter
+     'Omega_b': 0.05,    # Baryon density
+     'Omega_cdm': 0.25,  # Cold dark matter density
+     'Omega_k': 0,       # Curvature density
+     'A_s': 2.1e-9,      # Amplitude of the primordial power spectrum
+     'n_s': 0.965,       # Spectral index
+     'z_max_pk': 3.0
+ }
+
+ cosmo.set(params)
+
+ # Compute the background and perturbations
+ cosmo.compute()
+
+ # Define k values and redshifts
+ k_values = np.logspace(-3, -1, 100)  # k values in 1/Mpc
+ z_values = [0, 1, 2]                 # Redshifts to plot
+
+ # Plotting the power spectrum
+ plt.figure(figsize=(10, 6))
+ for z in z_values:
+     pk_values = [cosmo.pk(k, z) for k in k_values]
+     plt.loglog(k_values, pk_values, label=f'z={z}')
+
+ plt.xlabel('k [1/Mpc]')
+ plt.ylabel('P(k) [Mpc^3]')
+ plt.title('Power Spectrum from Sterile Neutrinos')
+ plt.legend()
+ plt.grid()
+ plt.show()
+
+ # Clean up
+ cosmo.struct_cleanup()
+ cosmo.empty()
+
+
+ # In[2]:
+
+
+ # Import necessary modules
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from classy import Class
+
+ # Initialize the CLASS instance for ΛCDM
+ LCDM = Class()
+ LCDM.set({'Omega_cdm': 0.25, 'Omega_b': 0.05, 'h': 0.7})
+ LCDM.compute()
+
+ # Get background quantities
+ background = LCDM.get_background()
+
+ # Extract scale factor, redshift, and growth factor
+ a = 1 / (1 + background['z'])
+ D = background['gr.fac. D']  # Growth factor D
+ f = background['gr.fac. f']  # Growth rate f
+
+ # Plot the growth rate as a function of redshift
+ plt.figure(figsize=(8, 6))
+ plt.plot(background['z'], f, label='Growth rate $f(z)$')
+ plt.xlabel('Redshift $z$')
+ plt.ylabel('Growth rate $f$')
+ plt.title('Growth Rate as a Function of Redshift for ΛCDM')
+ plt.xscale('log')
+ plt.yscale('log')
+ plt.gca().invert_xaxis()  # Invert x-axis to show high z on the left
+ plt.legend()
+ plt.grid(True)
+ plt.show()
+
+
+ # In[3]:
+
+
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from classy import Class
+
+ # Initialize the CLASS instance for ΛCDM
+ LCDM = Class()
+ LCDM.set({'Omega_cdm': 0.25, 'Omega_b': 0.05, 'h': 0.7})
+ LCDM.compute()
+
+ # Extract background quantities
+ background = LCDM.get_background()
+
+ # Extract scale factor, growth factor, and growth rate
+ a = 1. / (background['z'] + 1)
+ D = background['gr.fac. D']
+ f = background['gr.fac. f']
+
+ # Plot the growth rate
+ plt.figure(figsize=(8, 6))
+ plt.plot(background['z'], f, label='Growth Rate $f$', color='b')
+ plt.xlabel('Redshift $z$')
+ plt.ylabel('Growth Rate $f$')
+ plt.title('Growth Rate for ΛCDM Model')
+ plt.gca().invert_xaxis()  # Invert x-axis to have redshift decreasing
+ plt.legend()
+ plt.grid(True)
+ plt.show()
+
+
+ # In[4]:
+
+
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from classy import Class
+
+ # Define common settings for the ΛCDM model
+ common_settings = {
+     'h': 0.67810,
+     'omega_b': 0.02238280,
+     'omega_cdm': 0.1201075,
+     'A_s': 2.100549e-09,
+     'n_s': 0.9660499,
+     'tau_reio': 0.05430842,
+     'output': 'tCl,pCl,lCl',
+     'lensing': 'yes',
+     'l_max_scalars': 5000
+ }
+
+ # Initialize CLASS
+ M = Class()
+
+ # Function to compute and return Cls for a given contribution
+ def compute_cls(contribution):
+     M.empty()
+     M.set(common_settings)
+     M.set({'temperature contributions': contribution})
+     M.compute()
+     return M.raw_cl(3000)
+
+ # Compute total Cls
+ M.set(common_settings)
+ M.compute()
+ cl_tot = M.raw_cl(3000)
+
+ # Compute individual contributions
+ cl_tsw = compute_cls('tsw')
+ cl_eisw = compute_cls('eisw')
+ cl_lisw = compute_cls('lisw')
+ cl_doppler = compute_cls('dop')
+
+ # Plotting
+ plt.figure(figsize=(10, 6))
+ ell = cl_tot['ell']
+ factor = 1.e10 * ell * (ell + 1) / (2 * np.pi)
+
+ plt.semilogx(ell, factor * cl_tot['tt'], 'k-', label='Total')
+ plt.semilogx(ell, factor * cl_tsw['tt'], 'c-', label='T+SW')
+ plt.semilogx(ell, factor * cl_eisw['tt'], 'r-', label='Early ISW')
+ plt.semilogx(ell, factor * cl_lisw['tt'], 'y-', label='Late ISW')
+ plt.semilogx(ell, factor * cl_doppler['tt'], 'g-', label='Doppler')
+
+ plt.xlabel(r'Multipole $\ell$')
+ plt.ylabel(r'$\ell (\ell+1) C_\ell^{TT} / 2 \pi \,\,\, [\times 10^{10}]$')
+ plt.legend(loc='upper right')
+ plt.grid(True)
+ plt.title('CMB Temperature Anisotropy Contributions')
+ plt.show()
+
+
+ # In[16]:
+
+
+ # Import necessary modules
+ from classy import Class
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from math import pi
+
+ # Initialize the CLASS instance
+ M = Class()
+
+ # Define common settings (example settings)
+ common_settings = {
+     'omega_b': 0.0223828,
+     'omega_cdm': 0.1201075,
+     'h': 0.67810,
+     'A_s': 2.100549e-09,
+     'n_s': 0.9660499,
+     'tau_reio': 0.05430842,
+     'output': 'tCl,pCl,lCl',
+     'lensing': 'yes',
+     'l_max_scalars': 5000,
+ }
+
+ # Function to compute Cls for a given temperature contribution
+ def compute_lensed_cls(contribution=None):
+     M.empty()                # Clean input
+     M.set(common_settings)   # Set common input
+     if contribution is not None:
+         M.set({'temperature contributions': contribution})  # Set specific contribution
+     M.compute()              # Compute
+     return M.raw_cl(common_settings['l_max_scalars'])  # Return raw Cls
+
+ # Compute contributions
+ cl_SW = compute_lensed_cls('tsw')    # Sachs-Wolfe
+ cl_eISW = compute_lensed_cls('eisw') # Early ISW
+ cl_lISW = compute_lensed_cls('lisw') # Late ISW
+
+ # Total Cls (optional, if needed)
+ cl_tot = compute_lensed_cls()  # Total including all contributions
+
+ # Plotting
+ fig, ax_Cl = plt.subplots(figsize=(10, 6))
+ tau_0_minus_tau_rec_hMpc = 1  # Example value, replace with actual calculation
+
+ # Plot SW contribution
+ ax_Cl.semilogx(cl_SW['ell']/tau_0_minus_tau_rec_hMpc,
+                1.e10 * cl_SW['ell'] * (cl_SW['ell'] + 1.) * cl_SW['tt'] / (2. * pi),
+                'c-', label=r'$\mathrm{SW}$')
+
+ # Plot early ISW contribution
+ ax_Cl.semilogx(cl_eISW['ell']/tau_0_minus_tau_rec_hMpc,
+                1.e10 * cl_eISW['ell'] * (cl_eISW['ell'] + 1.) * cl_eISW['tt'] / (2. * pi),
+                'r-', label=r'$\mathrm{early} \,\, \mathrm{ISW}$')
+
+ # Plot late ISW contribution
+ ax_Cl.semilogx(cl_lISW['ell']/tau_0_minus_tau_rec_hMpc,
+                1.e10 * cl_lISW['ell'] * (cl_lISW['ell'] + 1.) * cl_lISW['tt'] / (2. * pi),
+                'y-', label=r'$\mathrm{late} \,\, \mathrm{ISW}$')
+
+ # Plot total Cls (optional)
+ ax_Cl.semilogx(cl_tot['ell']/tau_0_minus_tau_rec_hMpc,
+                1.e10 * cl_tot['ell'] * (cl_tot['ell'] + 1.) * cl_tot['tt'] / (2. * pi),
+                'k-', label=r'$\mathrm{Total}$')
+
+ # Finalize the plot
+ ax_Cl.set_xlim([3, common_settings['l_max_scalars']])
+ #ax_Cl.set_ylim([0., 8.])
+ ax_Cl.set_xlabel(r'$\ell/(\tau_0-\tau_{rec}) \,\,\, \mathrm{[h/Mpc]}$')
+ ax_Cl.set_ylabel(r'$\ell (\ell+1) C_l^{TT} / 2 \pi \,\,\, [\times 10^{10}]$')
+ ax_Cl.legend(loc='right', bbox_to_anchor=(1.4, 0.5))
+ ax_Cl.grid()
+
+ # Save the figure
+ fig.savefig('decomposed_cl_contributions.pdf', bbox_inches='tight')
+ plt.show()
+
+
+ # In[17]:
+
+
+ # Import necessary modules
+ from classy import Class
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from math import pi
+
+ # Function to compute raw Cls for a given set of parameters
+ def compute_lensed_cls(params):
+     # Initialize CLASS
+     M = Class()
+     M.set(params)          # Set cosmological parameters
+     M.compute()            # Compute the power spectra
+     cls = M.raw_cl(5000)   # Get raw Cls
+     M.struct_cleanup()     # Clean up
+     return cls
+
+ # Define cosmological parameters
+ params = {
+     'omega_b': 0.0223828,
+     'omega_cdm': 0.1201075,
+     'h': 0.67810,
+     'A_s': 2.100549e-09,
+     'n_s': 0.9660499,
+     'tau_reio': 0.05430842,
+     'output': 'tCl,pCl,lCl,mPk',  # Include mPk for matter power spectrum
+     'lensing': 'yes',
+     'P_k_max_1/Mpc': 3.0,
+     'l_max_scalars': 5000,
+ }
+
+ # Compute contributions
+ cl_total = compute_lensed_cls(params)  # Total Cls
+
+ # Extract the contributions
+ ell = cl_total['ell']
+ cl_TT = cl_total['tt']
+
+ # Compute SW and ISW contributions
+ # For simplicity, we will assume that the contributions can be approximated.
+ # Here we just use the total Cls for demonstration purposes.
+ # In a real scenario, you would need to compute these separately.
+ cl_SW = cl_TT * 0.5   # Placeholder for SW contribution
+ cl_ISW = cl_TT * 0.5  # Placeholder for ISW contribution
+
+ # Plotting
+ plt.figure(figsize=(10, 6))
+
+ # Plot total Cls
+ plt.plot(ell, cl_TT * ell * (ell + 1) / (2 * pi), label=r'Total $C_\ell^{TT}$', color='k')
+
+ # Plot SW contribution
+ plt.plot(ell, cl_SW * ell * (ell + 1) / (2 * pi), label='Sachs-Wolfe Contribution', color='c')
+
+ # Plot ISW contribution
+ plt.plot(ell, cl_ISW * ell * (ell + 1) / (2 * pi), label='Integrated Sachs-Wolfe Contribution', color='r')
+
+ # Finalize the plot
+ plt.xscale('log')
+ plt.xlim(2, 5000)
+ #plt.ylim(0, 8)
+ plt.xlabel(r'$\ell$')
+ plt.ylabel(r'$\ell(\ell+1)C_\ell^{TT}/(2\pi)$')
+ plt.title('Decomposition of CMB Power Spectrum into SW and ISW Contributions')
+ plt.legend()
+ plt.grid()
+ plt.show()
+
+
+ # In[20]:
+
+
+ # Import necessary modules
+ from classy import Class
+ import matplotlib.pyplot as plt
+ import numpy as np
+
+ # Define parameters for different models
+ k_out = [1e-3]  # k values for output
+ models = ['PPF1', 'FLD1']
+ w0 = {'PPF1': -0.7, 'FLD1': -1}
+ wa = {'PPF1': -0.8, 'FLD1': 0.}
+ omega_cdm = {'PPF1': 0.104976, 'FLD1': 0.104976}
+ omega_b = 0.022
+ h = {'PPF1': 0.64, 'FLD1': 0.64}
+
+ # Initialize a dictionary to hold CLASS instances for each model
+ cosmo = {}
+
+ # Loop over each model to set up CLASS
+ for M in models:
+     use_ppf = 'yes'      # Default to using PPF
+     gauge = 'Newtonian'  # Default gauge
+
+     # Initialize CLASS for the model
+     cosmo[M] = Class()
+
+     # Set parameters for CLASS
+     cosmo[M].set({
+         'output': 'tCl mPk dTk vTk',
+         'k_output_values': str(k_out).strip('[]'),
+         'h': h[M],
+         'omega_b': omega_b,
+         'omega_cdm': omega_cdm[M],
+         'cs2_fld': 1.0,
+         'w0_fld': w0[M],
+         'wa_fld': wa[M],
+         'Omega_Lambda': 0.0,
+         'gauge': gauge,
+         'use_ppf': use_ppf  # Set use_ppf parameter
+     })
+
+     # Compute the power spectra
+     cosmo[M].compute()
+
+ # Plotting the results
+ colours = ['r', 'k', 'g', 'm']
+ plt.figure(figsize=(10, 6))
+
+ for i, M in enumerate(models):
+     cl = cosmo[M].raw_cl()  # Get the raw power spectra
+     l = cl['ell']           # Multipole moments
+
+     # Plot the TT power spectrum
+     plt.loglog(l, cl['tt'] * l * (l + 1) / (2. * np.pi), label=M, color=colours[i])
+
+ # Finalize the plot
+ plt.legend(loc='upper left')
+ plt.xlim([2, 300])
+ plt.ylim([6e-11, 1e-9])
+ plt.xlabel(r'$\ell$')
+ plt.ylabel(r'$[\ell(\ell+1)/2\pi] C_\ell^\mathrm{TT}$')
+ plt.title('CMB Power Spectrum for Different Models')
+
+ # Save the plot
+ plt.savefig('check_PPF_clTT.pdf')
+ plt.show()
+
+
+ # In[21]:
+
+
+ # Import necessary modules
+ from classy import Class
+ import matplotlib.pyplot as plt
+ import numpy as np
+
+ # Function to compute lensed Cls for a given cosmology
+ def compute_lensed_cls(params):
+     cosmology = Class()
+     cosmology.set(params)
+     cosmology.compute()
+     cls = cosmology.lensed_cl(2500)
+     cosmology.struct_cleanup()
+     return cls['ell'], cls['tt'], cls['ee'], cls['te']
+
+ # Define parameters for the model with 1 massive neutrino and 2 massless ones
+ params_massive_nu = {
+     'omega_b': 0.0223828,
+     'omega_cdm': 0.1201075,
+     #'m_ncdm': '0.06,0.0,0.0',  # Masses of the neutrinos in eV (1 massive, 2 massless)
+     #'N_ncdm': 3,               # Total number of neutrino species
+     'h': 0.67810,
+     'A_s': 2.100549e-09,
+     'n_s': 0.9660499,
+     'tau_reio': 0.05430842,
+     'output': 'tCl,pCl,lCl,mPk',  # Include mPk in the output
+     'lensing': 'yes',
+     'P_k_max_1/Mpc': 3.0,
+     'z_max_pk': 2.0,
+     'YHe': 0.24  # Fix the helium fraction to a specific value (e.g., 0.24)
+ }
+
+ # Define parameters for the PPF cosmology with massless neutrinos
+ params_ppf = {
+     'omega_b': 0.0223828,
+     'omega_cdm': 0.1201075,
+     'w0_fld': -0.77,     # Dark energy equation of state
+     'wa_fld': -0.82,     # Dark energy equation of state
+     'Omega_Lambda': 0.,  # Density of dark energy
+     'h': 0.67810,
+     'A_s': 2.100549e-09,
+     'n_s': 0.9660499,
+     'tau_reio': 0.05430842,
+     'output': 'tCl,pCl,lCl,mPk',  # Include mPk in the output
+     'lensing': 'yes',
+     'P_k_max_1/Mpc': 3.0,
+     'z_max_pk': 2.0,
+     'YHe': 0.24  # Fix the helium fraction to a specific value (e.g., 0.24)
+ }
+
+ # Compute lensed Cls for both cosmologies
+ ell_massive_nu, clTT_massive_nu, clEE_massive_nu, clTE_massive_nu = compute_lensed_cls(params_massive_nu)
+ ell_ppf, clTT_ppf, clEE_ppf, clTE_ppf = compute_lensed_cls(params_ppf)
+
+ # Calculate the ratios for the EE and TT modes
+ clEE_ratio = clEE_massive_nu / clEE_ppf
+ clTT_ratio = clTT_massive_nu / clTT_ppf
+
+ # Plotting the ratios
+ plt.figure(figsize=(10, 6))
+
+ # Plot ratio of C_l^EE
+ plt.subplot(2, 1, 1)
+ plt.plot(ell_massive_nu, clEE_ratio * ell_massive_nu * (ell_massive_nu + 1) / (2 * np.pi), 'b-', label=r'$\frac{C_\ell^{EE}}{C_\ell^{EE}(\text{PPF})}$')
+ plt.xscale('log')
+ plt.yscale('log')
+ plt.xlim(2, 2500)
+ plt.xlabel(r'$\ell$')
+ plt.ylabel(r'Ratio $[\ell(\ell+1)/2\pi] C_\ell^{EE}$')
+ plt.title('Ratio of Lensed CMB Power Spectrum - EE Mode')
+ plt.legend()
+
+ # Plot ratio of C_l^TT
+ plt.subplot(2, 1, 2)
+ plt.plot(ell_massive_nu, clTT_ratio * ell_massive_nu * (ell_massive_nu + 1) / (2 * np.pi), 'r-', label=r'$\frac{C_\ell^{TT}}{C_\ell^{TT}(\text{PPF})}$')
+ plt.xscale('log')
+ plt.yscale('log')
+ plt.xlim(2, 2500)
+ plt.xlabel(r'$\ell$')
+ plt.ylabel(r'Ratio $[\ell(\ell+1)/2\pi] C_\ell^{TT}$')
+ plt.title('Ratio of Lensed CMB Power Spectrum - TT Mode')
+ plt.legend()
+
+ # Show the plots
+ plt.tight_layout()
+ plt.show()
+
+
+ # In[ ]:
+
+
+
class-data/default.ini ADDED
@@ -0,0 +1,199 @@
+ # *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*
+ # * CLASS input parameter file *
+ # *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*
+
+ # This example input file, intended for CLASS beginners, lists only
+ # the most common input parameters with short comments. Only lines
+ # containing an equal sign not preceded by a sharp sign "#" are considered by
+ # the code; any other line is treated as a comment.
+ #
+ # The normal syntax is: parameter = value(s)
+
+
+ # -------------------------
+ # ----> General parameters:
+ # -------------------------
+
+ # REQUESTED OUTPUT FROM CLASS (Important!)
+ # - 'tCl' for temperature Cls,
+ # - 'pCl' for polarization (TE,BB,EE) Cls,
+ # - 'lCl' for CMB lensing POTENTIAL Cl (Cl^psi-psi, required for lensed Cls),
+ # - 'nCl' (or 'dCl') for density number count Cls,
+ # - 'sCl' for galaxy lensing potential Cls (Cl^phi-phi),
+ # - 'mPk' for total matter power spectrum P(k),
+ # - 'dTk' for density transfer functions,
+ # - 'vTk' for velocity transfer functions,
+ # - 'Sd' for spectral distortions
+ # --> deflection d: Cl^dd = l(l+1) C_l^phi-phi
+ # --> convergence kappa and shear gamma: they share the same harmonic
+ # power spectrum: Cl^gamma-gamma = 1/4 * [(l+2)!/(l-2)!] C_l^phi-phi
+ output = tCl,pCl,lCl,mPk
+ #output = tCl,pCl,lCl
+ #output = mPk,mTk
+ #output = Sd
+
+ lensing = yes # Should the Cls from above be lensed for CMB?
+ lcmb_rescale = 1 # Amplitude rescale of lensing only
+ lcmb_tilt = 0 # CMB l tilt of lensing
+ lcmb_pivot = 0.1 # CMB l pivot of lensing
+
+ non_linear = # Select 'halofit' or 'HMCode' or leave blank
+
+ ic = ad # Select initial conditions
+ #(ad,bi,cdi,nid,nvi) -> (adiabatic,baryon,cdm,neutrino density,neutrino velocity)
+ modes = s # Modes of the perturbations
+ # (s,v,t) -> (scalar,vector,tensor)
+
+ #number_count_contributions = # nCl contributions
+ #(density,lensing,rsd,gr) -> (density, lensing, rsd+doppler, all others)
+ #selection=gaussian # nCl window function type
+ #selection_mean=1.0,1.25,2.0,3.5 # Mean redshifts of nCl window functions
+ #selection_width = 0.1 # Widths of nCl window functions
+ #selection_bias = # Biases of nCl window functions
+ #selection_magnification_bias = # Biases of lensing of nCl
+ #non_diagonal=3 # Number of non-diagonal terms
+
+ l_max_scalars = 2500 # lmax of CMB for scalar mode
+ #l_max_tensors = 500 # lmax of CMB for tensor mode
+ #l_max_lss = 300 # lmax of nCl
+
+ P_k_max_h/Mpc = 1. # Maximum k for P(k) in h/Mpc
+ #P_k_max_1/Mpc = 0.7 # Maximum k for P(k) in 1/Mpc
+ z_pk = 0 # Redshifts of P(k,z)
+
+ # ----------------------------
+ # ----> Cosmological parameters:
+ # ----------------------------
+
+ h = 0.67810 # Dimensionless reduced Hubble parameter (H_0 / (100km/s/Mpc))
+ #H0 = 67.810 # Hubble parameter in km/s/Mpc
+ #100*theta_s = 1.041783 # Angular size of the sound horizon, exactly 100(ds_dec/da_dec)
+ # with decoupling time given by maximum of visibility function
+ # (different from theta_MC of CosmoMC and
+ # slightly different from theta_* of CAMB)
+ T_cmb = 2.7255 # CMB temperature
+
+ omega_b = 0.02238280 # Reduced baryon density (Omega*h^2)
+ #Omega_b = # Baryon density
+ omega_cdm = 0.1201075 # Reduced cold dark matter density (Omega*h^2)
+ #Omega_cdm = # CDM density
+ omega_dcdmdr = 0.0 # Reduced decaying dark matter density (Omega*h^2)
+ #Omega_dcdmdr = # DCDM density
+ #Gamma_dcdm = 0.0 # Decay constant of DCDM in km/s/Mpc
+ Omega_k = 0. # Curvature density
+ Omega_fld = 0 # Dark Energy as Fluid density
+ Omega_scf = 0 # Dark Energy as Scalar field density
+
+ # Usually Omega_Lambda will be matched by the budget equation sum Omega_i = 1, no need to set it manually
+ #Omega_Lambda = 0.7 # Cosmological constant density
+
+
+ # If you have respectively 0,1,2, or 3 MASSIVE neutrinos and the default T_ncdm of 0.71611,
+ # designed to give M_tot/omega_nu of 93.14 eV, and if you want N_eff equal to 3.044,
+ # then you should pass for N_ur 3.044,2.0308,1.0176, or 0.00441
+ N_ur = 3.044 # Effective number of MASSLESS neutrino species
+ #Omega_ur = # MASSLESS neutrino density
+ #omega_ur = # Reduced MASSLESS neutrino density (Omega*h^2)
+
+ N_ncdm = # Massive neutrino species
+ #m_ncdm = 0.06 # Mass of the massive neutrinos
+ #omega_ncdm = 0.0006451439 # Reduced massive neutrino density (Omega*h^2)
+ #Omega_ncdm = # Massive neutrino density
+ #deg_ncdm = # Degeneracy of massive neutrinos
+
+
+ ### For Omega_fld != 0
+ # Chevalier-Linder-Polarski => CLP
+ # Early Dark Energy => EDE
+ #fluid_equation_of_state = CLP
+
+ #CLP case
+ #w0_fld = -0.9
+ #wa_fld = 0.
+ #cs2_fld = 1
+ #EDE case
+ #w0_fld = -0.9
+ #Omega_EDE = 0.
+ #cs2_fld = 1
+
+ # ----------------------------
+ # ----> Thermodynamics/Heating parameters:
+ # ----------------------------
+
+ # Infer YHe from BBN. Alternatively provide your own number here
+ YHe = BBN
+ # Recombination code : 'RECFAST' or 'HyRec'
+ recombination = HyRec
+
+ z_reio = 7.6711 # Redshift of reionization
+ #tau_reio = 0.05430842 # Optical depth of reionization
+
+ reio_parametrization = reio_camb
+ reionization_exponent = 1.5
+ reionization_width = 0.5
+ helium_fullreio_redshift = 3.5
+ helium_fullreio_width = 0.5
+
+ ### Energy injections
+ DM_annihilation_cross_section = 0. # Dark Matter annihilation cross section in [cm^3/s]
+ DM_annihilation_mass = 0. # Dark Matter mass in [GeV]
+ DM_decay_fraction = 0. # Dark Matter decay fraction
+ DM_decay_Gamma = 0. # Dark Matter decay width
+
+ f_eff_type = on_the_spot # Injection efficiency
+ chi_type = CK_2004 # Deposition function
+
+ # ----------------------------
+ # ----> Primordial parameters:
+ # ----------------------------
+
+ P_k_ini type = analytic_Pk # Select primordial spectrum
+ #('analytic_Pk','inflation_V','inflation_H','inflation_V_end','two scales','external_Pk')
+ k_pivot = 0.05 # Pivot scale for A_s,n_s
+ A_s = 2.100549e-09 # Amplitude of prim spectrum
+ #ln10^{10}A_s = 3.0980 # ln Amplitude of prim spectrum
+ #sigma8 = 0.848365 # Final density averaged over 8 Mpc
+ n_s = 0.9660499 # Spectrum tilt
+ alpha_s = 0. # Spectrum running of tilt
+ #r = 1. # If tensors are activated
+ # See explanatory.ini for more information about all the different primordial spectra
+
+ # ---------------------------
+ # ----> Spectral distortions:
+ # ---------------------------
+
+ sd_branching_approx = exact # Approximation for the calculation of the branching ratios
+ sd_PCA_size = 2 # Number of multipoles in PCA expansion
+ sd_detector_name = PIXIE # Name of the detector
+ #sd_detector_nu_min = 30. # Detector specifics
+ #sd_detector_nu_max = 1000.
+ #sd_detector_nu_delta = 15.
+ #sd_detector_bin_number = 65 # Alternative to 'sd_detector_nu_delta'
+ #sd_detector_delta_Ic = 5.e-26
+
+ #include_SZ_effect = no
+
+ # ----------------------------------
+ # ----> Output parameters:
+ # ----------------------------------
+
+ #root = output/default # Root name of output files
+ overwrite_root = no # Overwrite the output files?
+ write_background = no # Write background parameter table
+ write_thermodynamics = no # Write thermodynamics parameter table
+ #k_output_values = 1e-3,1e-2 # Write perturbations parameter table (at given k)
+ write_primordial = no # Write primordial parameter table
+ write_parameters = yes # Write used/unused parameter files
+ write_warnings = yes # Warn about forgotten/wrong inputs
+
+ # Verbosity
+ input_verbose = 1
+ background_verbose = 1
+ thermodynamics_verbose = 1
+ perturbations_verbose = 1
+ transfer_verbose = 1
+ primordial_verbose = 1
+ harmonic_verbose = 1
+ fourier_verbose = 1
+ lensing_verbose = 1
+ output_verbose = 1
class-data/default_fast.ini ADDED
@@ -0,0 +1,199 @@
1
+ # *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*
2
+ # * CLASS input parameter file *
3
+ # *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*
4
+
5
+ # This example of input file, intended for CLASS beginners, lists only
6
+ # the most common input parameters with small comments. Only lines
7
+ # containing an equal sign not preceded by a sharp sign "#" are considered by
8
+ # the code, any other line is considered as a comment.
9
+ #
10
+ # The normal syntax is: parameter = value(s)
11
+
12
+
13
+ # -------------------------
14
+ # ----> General parameters:
15
+ # -------------------------
16
+
17
+ # REQUESTED OUTPUT FROM CLASS (Important!)
18
+ # - 'tCl' for temperature Cls,
19
+ # - 'pCl' for polarization (TE,BB,EE) Cls,
20
+ # - 'lCl' for CMB lensing POTENTIAL Cl (Cl^psi-psi, required for lensed Cls),
21
+ # - 'nCl' (or 'dCl') for density number count Cls,
22
+ # - 'sCl' for galaxy lensing potential Cls (Cl^phi-phi),
23
+ # - 'mPk' for total matter power spectrum P(k),
24
+ # - 'dTk' for density transfer functions,
25
+ # - 'vTk' for velocity transfer functions,
26
+ # - 'Sd' for spectral distortions
27
+ # --> deflection d: Cl^dd = l(l+1) C_l^phi-phi
28
+ # --> convergence kappa and shear gamma: the share the same harmonic
29
+ # power spectrum: Cl^gamma-gamma = 1/4 * [(l+2)!/(l-2)!] C_l^phi-phi
30
+ output = mPk, tCl
31
+ #output = tCl,pCl,lCl
32
+ #output = mPk,mTk
33
+ #output = Sd
34
+
35
+ #lensing = yes # Should the Cls from above be lensed for CMB?
36
+ #lcmb_rescale = 1 # Amplitude rescale of lensing only
37
+ #lcmb_tilt = 0 # CMB l tilt of lensing
38
+ #lcmb_pivot = 0.1 # CMB l pivot of lensing
39
+
40
+ non_linear = # Select 'halofit' or 'HMCode' or leave blank
41
+
42
+ ic = ad # Select initial conditions
43
+ #(ad,bi,cdi,nid,nvi) -> (adiabatic,baryon,cdm,neutrino density,neutrino velocity)
44
+ modes = s # Modes of the perturbations
45
+ # (s,v,t) -> (scalar,vector,tensor)
46
+
47
+ #number_count_contributions = # nCl contributions
48
+ #(density,lensing,rsd,gr) -> (density, lensing, rsd+doppler, all others)
49
+ #selection=gaussian # nCl window function type
50
+ #selection_mean=1.0,1.25,2.0,3.5 # Mean redshifts of nCl window functions
51
+ #selection_width = 0.1 # Widths of nCl window functions
52
+ #selection_bias = # Biases of nCl window functions
53
+ #selection_magnification_bias = # Biases of lensing of nCl
54
+ #non_diagonal=3 # Number of non-diagonal terms
55
+
56
+ l_max_scalars = 2500 # lmax of CMB for scalar mode
57
+ #l_max_tensors = 500 # lmax of CMB for tensor mode
58
+ #l_max_lss = 300 # lmax of nCl
59
+
60
+ P_k_max_h/Mpc = 50. # Maximum k for P(k) in 1/Mpc
61
+ #P_k_max_1/Mpc = 0.7 # Maximum k for P(k) in h/Mpc
62
+ z_pk = 0,1,2,3,4 # Redshifts of P(k,z)
63
+
64
+ # ----------------------------
65
+ # ----> Cosmological parameters:
66
+ # ----------------------------
67
+
68
+ h = 0.67810 # Dimensionless reduced Hubble parameter (H_0 / (100km/s/Mpc))
69
+ #H0 = 67.810 # Hubble parameter in km/s/Mpc
70
+ #100*theta_s = 1.041783 # Angular size of the sound horizon, exactly 100(ds_dec/da_dec)
71
+ # with decoupling time given by maximum of visibility function
72
+ # (different from theta_MC of CosmoMC and
73
+ # slightly different from theta_* of CAMB)
74
+ T_cmb = 2.7255 # CMB temperature
75
+
76
+ omega_b = 0.02238280 # Reduced baryon density (Omega*h^2)
77
+ #Omega_b = # Baryon density
78
+ omega_cdm = 0.1201075 # Reduced cold dark matter density (Omega*h^2)
79
+ #Omega_cdm = # CDM density
80
+ omega_dcdmdr = 0.0 # Reduced decaying dark matter density (Omega*h^2)
81
+ #Omega_dcdmdr = # DCDM density
82
+ #Gamma_dcdm = 0.0 # Decay constant of DCDM in km/s/Mpc
83
+ Omega_k = 0. # Curvature density
84
+ Omega_fld = 0 # Dark Energy as Fluid density
85
+ Omega_scf = 0 # Dark Energy as Scalar field density
86
+
87
+ # Usually Omega_Lambda will be matched by the budget equation sum Omega_i = 1, no need to set it manually
88
+ #Omega_Lambda = 0.7 # Cosmological constant density
89
+
90
+
91
+ # If you have respectively 0,1,2, or 3 MASSIVE neutrinos and the default T_ncdm of 0.71611,
92
+ # designed to give M_tot/omega_nu of 93.14 eV, and if you want N_eff equal to 3.044,
93
+ # then you should pass for N_ur 3.044,2.0308,1.0176, or 0.00441
94
+ N_ur = 3.044 # Effective number of MASSLESS neutrino species
95
+ #Omega_ur = # Reduced MASSLESS neutrino density (Omega*h^2)
96
+ #omega_ur = # MASSLESS neutrino density
97
+
98
+ N_ncdm = # Massive neutrino species
99
+ #m_ncdm = 0.06 # Mass of the massive neutrinos
100
+ #omega_ncdm = 0.0006451439 # Reduced massive neutrino density (Omega*h^2)
101
+ #Omega_ncdm = # Massive neutrino density
102
+ #deg_ncdm = # Degeneracy of massive neutrinos
103
+
104
+
105
+ ### For Omega_fld != 0
106
+ # Chevalier-Linder-Polarski => CLP
107
+ # Early Dark Energy => EDE
108
+ #fluid_equation_of_state = CLP
109
+
110
+ #CLP case
111
+ #w0_fld = -0.9
112
+ #wa_fld = 0.
113
+ #cs2_fld = 1
114
+ #EDE case
115
+ #w0_fld = -0.9
116
+ #Omega_EDE = 0.
117
+ #cs2_fld = 1
118
+
119
+ # ----------------------------
120
+ # ----> Thermodynamics/Heating parameters:
121
+ # ----------------------------
122
+
123
+ # Infer YHe from BBN. Alternatively provide your own number here
124
+ YHe = BBN
125
+ # Recombination code : 'RECFAST' or 'HyRec'
126
+ recombination = HyRec
127
+
128
+ z_reio = 7.6711 # Redshift of reionization
129
+ #tau_reio = 0.05430842 # Optical depth of reionization
130
+
131
+ reio_parametrization = reio_camb
132
+ reionization_exponent = 1.5
133
+ reionization_width = 0.5
134
+ helium_fullreio_redshift = 3.5
135
+ helium_fullreio_width = 0.5
136
+
137
+ ### Energy injections
138
+ DM_annihilation_cross_section = 0. # Dark Matter annihilation cross section in [cm^3/s]
139
+ DM_annihilation_mass = 0. # Dark Matter mass in [GeV]
140
+ DM_decay_fraction = 0. # Dark Matter decay fraction
141
+ DM_decay_Gamma = 0. # Dark Matter decay width
142
+
143
+ f_eff_type = on_the_spot # Injection efficiency
144
+ chi_type = CK_2004 # Deposition function
145
+
146
+ # ----------------------------
147
+ # ----> Primordial parameters:
148
+ # ----------------------------
149
+
150
+ P_k_ini type = analytic_Pk # Select primordial spectrum
151
+ #('analytic_Pk','inflation_V','inflation_H','inflation_V_end','two scales','external_Pk')
152
+ k_pivot = 0.05 # Pivot scale for A_s,n_s
153
+ A_s = 2.100549e-09 # Amplitude of prim spectrum
154
+ #ln10^{10}A_s = 3.0980 # ln Amplitude of prim spectrum
155
+ # sigma8 = 0.848365 # Final density averaged over 8 Mpc
156
+ n_s = 0.9660499 # Spectrum tilt
157
+ alpha_s = 0. # Spectrum running of tilt
158
+ #r = 1. # If tensors are activated
159
+ # See explanatory.ini for more information about all the different primordial spectra
160
+
161
+ # ---------------------------
162
+ # ----> Spectral distortions:
163
+ # ---------------------------
164
+
165
+ sd_branching_approx = exact # Approximation for the calculation of the branching ratios
166
+ sd_PCA_size = 2 # Number of multipoles in PCA expansion
167
+ sd_detector_name = PIXIE # Name of the detector
168
+ #sd_detector_nu_min = 30. # Detector specifics
169
+ #sd_detector_nu_max = 1000.
170
+ #sd_detector_nu_delta = 15.
171
+ #sd_detector_bin_number = 65 # Alternative to 'sd_detector_nu_delta'
172
+ #sd_detector_delta_Ic = 5.e-26
173
+
174
+ #include_SZ_effect = no
175
+
176
+ # ----------------------------------
177
+ # ----> Output parameters:
178
+ # ----------------------------------
179
+
180
+ #root = output/default # Root name of output files
181
+ overwrite_root = no # Overwrite the output files?
182
+ write_background = no # Write background parameter table
183
+ write_thermodynamics = no # Write thermodynamics parameter table
184
+ #k_output_values = 1e-3,1e-2 # Write perturbations parameter table (at given k)
185
+ write_primordial = no # Write primordial parameter table
186
+ write_parameters = yes # Write used/unused parameter files
187
+ write_warnings = yes # Warn about forgotten/wrong inputs
188
+
189
+ #Verbosity
190
+ input_verbose = 1
191
+ background_verbose = 1
192
+ thermodynamics_verbose = 1
193
+ perturbations_verbose = 1
194
+ transfer_verbose = 1
195
+ primordial_verbose = 1
196
+ harmonic_verbose = 1
197
+ fourier_verbose = 1
198
+ lensing_verbose = 1
199
+ output_verbose = 1
class-data/distances.py ADDED
@@ -0,0 +1,76 @@
1
+ #!/usr/bin/env python
2
+ # coding: utf-8
3
+
4
+ # In[ ]:
5
+
6
+
7
+ # import necessary modules
8
+ # uncomment to get plots displayed in notebook
9
+ get_ipython().run_line_magic('matplotlib', 'inline')
10
+ import matplotlib
11
+ import matplotlib.pyplot as plt
12
+ import numpy as np
13
+ from classy import Class
14
+
15
+
16
+ # In[ ]:
17
+
18
+
19
+ #Lambda CDM
20
+ LCDM = Class()
21
+ LCDM.set({'Omega_cdm':0.25,'Omega_b':0.05})
22
+ LCDM.compute()
23
+
24
+
25
+ # In[ ]:
26
+
27
+
28
+ #Einstein-de Sitter
29
+ CDM = Class()
30
+ CDM.set({'Omega_cdm':0.95,'Omega_b':0.05})
31
+ CDM.compute()
32
+
33
+ # Just to cross-check that Omega_Lambda is negligible
34
+ # (but not exactly zero because we neglected radiation)
35
+ derived = CDM.get_current_derived_parameters(['Omega0_lambda'])
36
+ print (derived)
37
+ print ("Omega_Lambda =",derived['Omega0_lambda'])
38
+
39
+
40
+ # In[ ]:
41
+
42
+
43
+ #Get background quantities and recover their names:
44
+ baLCDM = LCDM.get_background()
45
+ baCDM = CDM.get_background()
46
+ baCDM.keys()
47
+
48
+
49
+ # In[ ]:
50
+
51
+
52
+ #Get H_0 in order to plot the distances in this unit
53
+ fLCDM = LCDM.Hubble(0)
54
+ fCDM = CDM.Hubble(0)
55
+
56
+
57
+ # In[ ]:
58
+
59
+
60
+ namelist = ['lum. dist.','comov. dist.','ang.diam.dist.']
61
+ colours = ['b','g','r']
62
+ for name in namelist:
63
+ idx = namelist.index(name)
64
+ plt.loglog(baLCDM['z'],fLCDM*baLCDM[name],colours[idx]+'-')
65
+ plt.legend(namelist,loc='upper left')
66
+ for name in namelist:
67
+ idx = namelist.index(name)
68
+ plt.loglog(baCDM['z'],fCDM*baCDM[name],colours[idx]+'--')
69
+ plt.xlim([0.07, 10])
70
+ plt.ylim([0.08, 20])
71
+
72
+ plt.xlabel(r"$z$")
73
+ plt.ylabel(r"$\mathrm{Distance}\times H_0$")
74
+ plt.tight_layout()
75
+ plt.savefig('distances.pdf')
76
+
class-data/explanatory.ini ADDED
@@ -0,0 +1,1399 @@
1
+ # *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*
2
+ # * CLASS input parameter file *
3
+ # *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*
4
+
5
+ # This example of input file, intended for CLASS beginners, lists all
6
+ # possibilities with detailed comments. You can use a more concise version, in
7
+ # which only the arguments in which you are interested would appear. Only lines
8
+ # containing an equal sign not preceded by a sharp sign "#" are considered by
9
+ # the code, any other line is considered as a comment.
10
+ #
11
+ # The normal syntax is: parameter = value(s)
12
+ # where white spaces do not matter (they are removed automatically by the
13
+ # parser unless they are part of the parameter name).
14
+ # However, 'parameter' = value(s)
15
+ # and "parameter" = value(s)
16
+ # are also accepted by the parser since v2.8.0
17
+ #
18
+ # Input files must have an extension ".ini".
19
+
20
+
21
+
22
+ # -------------------------
23
+ # ----> General parameters:
24
+ # -------------------------
25
+
26
+ # 1) List of output spectra requested:
27
+ # - 'tCl' for temperature Cls,
28
+ # - 'pCl' for polarization Cls,
29
+ # - 'lCl' for CMB lensing potential Cls,
30
+ # - 'nCl' (or 'dCl') for density number count Cls,
31
+ # - 'sCl' for galaxy lensing potential Cls,
32
+ # - 'mPk' for the matter power spectrum P(k) (then, depending on other options,
33
+ # the code will return the linear and/or non-linear spectra,
34
+ # for total matter and/or for clustering matter, and also possibly dewiggled),
35
+ # - 'dTk' (or 'mTk') for density transfer functions for each species,
36
+ # - 'vTk' for velocity transfer function for each species
37
+ # - 'sd' for spectral distortions
38
+ # Warning: both lCl and sCl compute the C_ls of the lensing potential,
39
+ # C_l^phi-phi. If you are used to other codes, you may want to deal instead
40
+ # with the deflection Cls or the shear/convergence Cls. The relations
41
+ # between them are trivial:
42
+ # --> deflection d: Cl^dd = l(l+1) C_l^phiphi
43
+ # --> convergence kappa and shear gamma: the share the same harmonic
44
+ # power spectrum: Cl^gamma-gamma = 1/4 * [(l+2)!/(l-2)!] C_l^phi-phi
45
+ # By default, the code will try to compute the following cross-correlation
46
+ # Cls (if available): temperature-polarisation, temperature-CMB lensing,
47
+ # polarization-CMB lensing, CMB lensing-density, and density-lensing. Other
48
+ # cross-correlations are not computed because they would slow down the
49
+ # code considerably.
50
+ #
51
+ # Can be left blank if you do not want to evolve cosmological perturbations
52
+ # at all. (default: set to blank, no perturbation calculation)
53
+ output = tCl,pCl,lCl,mPk
54
+ #output = tCl,pCl,lCl
55
+ #output = mPk,mTk
56
+ #output = Sd
57
+
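The deflection and shear relations quoted in the warning above can be written out as a small Python helper (illustrative, not part of CLASS):

```python
def lensing_spectra_from_phiphi(ell, cl_phiphi):
    """Convert one multipole of the lensing-potential spectrum C_l^{phi phi}
    into the deflection spectrum C_l^{dd} = l(l+1) C_l^{phi phi} and the
    shear/convergence spectrum C_l^{gg} = 1/4 (l+2)!/(l-2)! C_l^{phi phi}."""
    cl_dd = ell * (ell + 1.0) * cl_phiphi
    # (l+2)!/(l-2)! expands to (l+2)(l+1)l(l-1)
    cl_gg = 0.25 * (ell + 2.0) * (ell + 1.0) * ell * (ell - 1.0) * cl_phiphi
    return cl_dd, cl_gg
```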
58
+ # 1.a) If you included 'tCl' in the list, you can take into account only some
59
+ # of the terms contributing to the temperature spectrum:
60
+ # - intrinsic temperature corrected by Sachs-Wolfe ('tsw' or 'TSW'),
61
+ # - early integrated Sachs-Wolfe ('eisw' or 'EISW'),
62
+ # - late integrated Sachs-Wolfe ('lisw' or 'LISW'),
63
+ # - Doppler ('dop' or 'Dop'),
64
+ # - polarisation contribution ('pol' or 'Pol').
65
+ # Put below the list of terms to be included (defaut: if this field is not
66
+ # passed, all terms will be included)
67
+ #temperature_contributions = tsw, eisw, lisw, dop, pol
68
+
69
+ # 1.a.1) If one of 'eisw' or 'lisw' is turned off, the code will read
70
+ # 'early/late isw redshift', the split value of redshift z at which the
71
+ # isw is considered as late or early (if this field is absent or left
72
+ # blank, by default, 'early/late isw redshift' is set to 50)
73
+ #early_late_isw_redshift =
74
+
75
+ # 1.b) If you included 'nCl' (or 'dCl') in the list, you can take into account
76
+ # only some of the terms contributing to the observable number count
77
+ # fluctuation spectrum:
78
+ # - matter density ('density'),
79
+ # - redshift-space and Doppler distortions ('rsd'),
80
+ # - lensing ('lensing'),
81
+ # - or gravitational potential terms ('gr').
82
+ # Put below the list of terms to be included (default: if this field is not
83
+ # passed, only 'density' will be included)
84
+ #number_count_contributions = density, rsd, lensing, gr
85
+
86
+ # 1.c) If you included 'dTk' (or 'mTk') in the list, the code will give you by
87
+ # default the transfer function of the scale-invariant Bardeen potentials
88
+ # (for whatever gauge you are using). If you need the transfer function of
89
+ # additional metric fluctuations, specific to the gauge you are using, set
90
+ # the following flag to 'yes' (default: set to 'no')
91
+ #extra_metric_transfer_functions = yes
92
+
93
+
94
+ # 2) If you want to consider perturbed recombination, enter a word
95
+ # containing the letter 'y' or 'Y'. CLASS will then compute the
96
+ # perturbation in the ionization fraction x_e and the baryon
97
+ # temperature, as in 0707.2727. The initial conformal time will be
98
+ # small, therefore you should use the default integrator ndf15
99
+ # (i.e. do not set 'evolver' to 0, otherwise the code will be
100
+ # slower). (default: no, neglect perturbed recombination)
101
+ #perturbed_recombination = yes
102
+
103
+ # 3) List of modes:
104
+ # - 's' for scalars,
105
+ # - 'v' for vectors,
106
+ # - 't' for tensors.
107
+ # More than one letter allowed, can be attached or separated by arbitrary
108
+ # characters; letters can be small or capital. (default: set to 's')
109
+ modes = s
110
+ #modes = s,t
111
+
112
+ # 3.a) List of initial conditions for scalars:
113
+ # - 'ad' for adiabatic,
114
+ # - 'bi' for baryon isocurvature,
115
+ # - 'cdi' for CDM isocurvature,
116
+ # - 'nid' for neutrino density isocurvature,
117
+ # - 'niv' for neutrino velocity isocurvature.
118
+ # More than one of these allowed, can be attached or separated by arbitrary
119
+ # characters; letters can be small or capital. (default: set to 'ad')
120
+ ic = ad
121
+ #ic = ad&bi&nid
122
+
123
+ # 3.b) Which perturbations should be included in tensor calculations?
124
+ # - write 'exact' to include photons, ultra-relativistic species 'ur'
125
+ # and all non-cold dark matter species 'ncdm';
126
+ # - write 'massless' to approximate 'ncdm' as extra relativistic species
127
+ # (good approximation if ncdm is still relativistic at the time of
128
+ # recombination);
129
+ # - write 'photons' to include only photons
130
+ # (default: set to 'massless')
131
+ tensor_method =
132
+
133
+
134
+ # 4) Gauge
135
+ # 4.a) Gauge in which calculations are performed:
136
+ # - 'sync' or 'synchronous' or 'Synchronous' for synchronous,
137
+ # - 'new' or 'newtonian' or 'Newtonian' for Newtonian/longitudinal gauge
138
+ # (default: set to synchronous)
139
+ gauge = synchronous
140
+
141
+ # 4.b) Do you want to output the N-body gauge quantities as well?
142
+ # If you included 'dTk' or 'vTk' in the list of outputs, you may transform
143
+ # your transfer functions into the Nbody gauge by setting the following
144
+ # flag to 'yes'. This will also include the transfer function for the
145
+ # metric perturbations H_T' (exact) and gamma (approximate) in the Nbody gauge.
146
+ # See e.g. 1505.04756, and equations (A.2) and (A.5) in 1811.00904
147
+ # for more precise definitions. These calculations are more stable with
148
+ # 'gauge=synchronous' (default). To compute H_T' and gamma
149
+ # without converting the output to the Nbody gauge,
150
+ # please use the flag 'extra metric transfer functions' instead.
151
+ # Can be set to anything starting with 'y' or 'n'.
152
+ # (default: set to 'no')
153
+ #nbody_gauge_transfer_functions = yes
154
+
155
+ # 4.c) Do you want the source functions for total non-relativistic matter, delta_m and theta_m, and baryon+cdm, delta_cb and theta_cb, to be output in the current gauge (the one selected in 4.a), instead of being automatically expressed as a gauge-invariant (GI) quantity, like: delta_m^GI = delta_m + 3 a H theta_m/k2, theta_m^GI = theta_m + alpha*k2 (default: no, that is, convert to GI)
156
+ #matter_source_in_current_gauge = no
157
+
158
+ # 4.d) Do you want the output table of perturbations (controlled by 'k_output_values') to be output in the current gauge, or always converted to the Newtonian gauge? (default: no, that is, convert to Newtonian)
159
+ #get_perturbations_in_current_gauge = no
160
+
161
+ # 5) Hubble parameter : either 'H0' in km/s/Mpc or 'h' (or 'theta_s_100'), where
162
+ # the latter is the peak scale parameter defined exactly as 100(ds_dec/da_dec)
163
+ # with a decoupling time given by maximum of visibility function (quite different
164
+ # from theta_MC of CosmoMC and slightly different from theta_* of CAMB)
165
+ # (default: 'h' set to 0.67810 such that 100*theta_s = 1.041783 like in Planck 2018)
166
+ h = 0.67810
167
+ #H0 = 67.810
168
+ #theta_s_100 = 1.041783
169
+
170
+
171
+ # 6) Primordial Helium fraction 'YHe', e.g. 0.25; if set to 'BBN' or 'bbn',
172
+ # will be inferred from Big Bang Nucleosynthesis (default: set to 'BBN')
173
+ YHe = BBN
174
+
175
+
176
+ # 7) 'recombination' algorithm set to 'RECFAST' or 'HyRec'. 'HyRec' points at HyRec 2020. Its compute time is negligible compared to other CLASS modules. 'RECFAST' points at RecFastCLASS, an enhanced version of RecFast 1.5 with a better integration scheme and fewer discontinuities. RecFast is still slightly faster than HyRec but less accurate. HyRec is better for most purposes. RecFast can still be useful for studying some particular modifications of standard recombination. Both schemes use the CLASS ODE integrators. (default: 'HyRec')
177
+ recombination = HyRec
178
+
179
+ # 7.a) If recombination algorithm is set to 'RECFAST'
180
+ # the photo-ionization coefficients beta(T) for normal Recfast depend on Tmat
181
+ # This is an approximation (see e.g. arxiv:1605.03928 page 10, arxiv:1503.04827 page 2, right column)
182
+ # With 'recfast_photoion_dependence' the photo-ionization coefficient beta(T) is set to depend on
183
+ # - 'Tmat' uses beta(Tmat) depending on matter temperature
184
+ # (like in original RECFAST and in CLASS v2.x)
185
+ # - 'Trad' uses beta(Trad) depending on radiation temperature
186
+ # (while this option is theoretically more motivated, the option 'Tmat' leads to
187
+ # results which agree better with HyRec and CosmoRec. This is probably due to the
188
+ # fudge factor for the Peebles coefficient being optimized for a Tmat dependence)
189
+ # (default: set to 'Tmat')
190
+ recfast_photoion_dependence =
191
+
192
+
193
+ # 8) Parametrization of reionization: 'reio_parametrization' must be one of
194
+ # - 'reio_none' (no reionization),
195
+ # - 'reio_camb' (like CAMB: one tanh() step for hydrogen reionization one
196
+ # for second helium reionization),
197
+ # - 'reio_bins_tanh' (binned history x_e(z) with tanh() interpolation
198
+ # between input values),
199
+ # - 'reio_half_tanh' (like 'reio_camb' excepted that we match the
200
+ # function xe(z) from recombination with only half a tanh(z-z_reio)),
201
+ # - 'reio_many_tanh' (arbitrary number of tanh-like steps with specified
202
+ # ending values, a scheme usually more useful than 'reio_bins_tanh'),
203
+ # - 'reio_inter' (linear interpolation between discrete values of xe(z)).
204
+ # (default: set to 'reio_camb')
205
+ reio_parametrization = reio_camb
206
+
207
+ # 8.a) If 'reio_parametrization' set to 'reio_camb' or 'reio_half_tanh':
208
+ # enter one of 'z_reio' or 'tau_reio' (default: 'z_reio' set to 7.6711 to
209
+ # get tau_reio of 0.054308), plus 'reionization_exponent',
210
+ # 'reionization_width', 'helium_fullreio_redshift',
211
+ # 'helium_fullreio_width'. (default: set to 1.5, 0.5, 3.5, 0.5)
212
+ z_reio = 7.6711
213
+ #tau_reio = 0.05430842
214
+ reionization_exponent = 1.5
215
+ reionization_width = 0.5
216
+ helium_fullreio_redshift = 3.5
217
+ helium_fullreio_width = 0.5
218
+
219
+ # 8.b) If 'reio_parametrization' set to 'reio_bins_tanh':
220
+ # enter number of bins and list of z_i and xe_i defining the free electron
221
+ # density at the center of each bin. Also enter a dimensionless parameter
222
+ # regulating the sharpness of the tanh() steps, independently of the bin
223
+ # width; recommended sharpness is 0.3, smaller values will make steps too
224
+ # sharp, larger values will make the step very progressive but with
225
+ # discontinuity of x_e(z) derivative around z_i values. (default: set to
226
+ # 0, blank, blank, 0.3)
227
+ binned_reio_num = 3
228
+ binned_reio_z = 8,12,16
229
+ binned_reio_xe = 0.8,0.2,0.1
230
+ binned_reio_step_sharpness = 0.3
231
+
232
+ # 8.c) If 'reio_parametrization' set to 'reio_many_tanh':
233
+ # enter number of jumps, list of jump redshifts z_i (central value of each
234
+ # tanh()), list of free electron density x_i after each jump, and common
235
+ # width of all jumps. If you want to end up with all hydrogen reionized
236
+ # but neglecting helium reionization, the first value of x_i in the list
237
+ # should be 1. For each x_i you can also pass the flags -1 or -2. They
238
+ # mean:
239
+ # - -1: after hydrogen + first helium recombination (so the code will
240
+ # substitute a value bigger than one based on Y_He);
241
+ # - -2: after hydrogen + second helium recombination (the code will
242
+ # substitute an even bigger value based on Y_He).
243
+ # You can get results close to reio_camb by setting these parameters to
244
+ # the value showed below (and adapting the second many_tanh_z to the usual
245
+ # z_reio). (default: not set)
246
+ many_tanh_num = 2
247
+ many_tanh_z = 3.5,11.3
248
+ many_tanh_xe = -2,-1
249
+ many_tanh_width = 0.5
250
+
251
+ # 8.d) If 'reio_parametrization' set to 'reio_inter': enter the number of
252
+ # points, the list of redshifts z_i, and the list of free electron
253
+ # fraction values x_i. The code will do linear interpolation between them.
254
+ # The first z_i should always be 0. Like above, for each x_i, you can also
255
+ # pass the flags -1 or -2. They mean: for -1, after the hydrogen and the
256
+ # first helium recombination (so the code will substitute a value bigger
257
+ # than one based on Y_He); for -2, after the hydrogen and the second
258
+ # helium recombination (the code will substitute an even bigger value
259
+ # based on Y_He). The last value of x_i should always be zero, the code
260
+ # will substitute it with the value that one would get in absence of
261
+ # reionization, as computed by the recombination code. (default: not set)
262
+ reio_inter_num = 8
263
+ reio_inter_z = 0, 3, 4, 8, 9, 10, 11, 12
264
+ reio_inter_xe = -2, -2, -1, -1, 0.9, 0.5, 0.1, 0
265
+
266
+
267
+ # 9) State whether you want the code to compute the simplest analytic
268
+ # approximation to the photon damping scale (it will be added to the
269
+ # thermodynamics output, and its value at recombination will be stored and
270
+ # displayed in the standard output) (default: 'compute damping scale' set to
271
+ # 'no')
272
+ compute_damping_scale =
273
+
274
+
275
+ # 10) State whether you want to include a variation of fundamental constants. Can be set to 'none' or to 'instantaneous'. Smoother evolutions could be included by modifying the function "background_varconst_of_z" in source/background.c.
276
+ varying_fundamental_constants = none
277
+
278
+ # 10.a) If 'varying_fundamental_constants' is set to 'instantaneous', select the redshift of the transition 'varying_transition_redshift' (default: 50). At lower redshift, the value will be the currently observed value, while at higher redshift you can specify how large the value should be by giving the ratio of the value at high redshift with respect to the currently observed one. Provide the relative value of the fine structure constant 'varying_alpha' (default: 1), and the relative value of the effective electron mass 'varying_me' (default: 1). The treatment corresponds to that of 1705.03925.
279
+ varying_transition_redshift =
280
+ varying_alpha = 1.
281
+ varying_me = 1.
282
+
283
+ # 10.b) If 'varying_fundamental_constants' is not set to 'none' and 'YHe' is set to 'BBN', specify by how much the 'YHe' prediction from 'BBN' should be affected by the different value of the fine structure constant. The default value is motivated by 2001.01787. (default: 1)
284
+ bbn_alpha_sensitivity = 1.
285
+
286
+
287
+ # -------------------------
288
+ # ----> Species parameters:
289
+ # -------------------------
290
+
291
+ # 1) Photon density: either 'T_cmb' in K or 'Omega_g' or 'omega_g' (default:
292
+ # 'T_cmb' set to 2.7255)
293
+ T_cmb = 2.7255
294
+ #Omega_g =
295
+ #omega_g =
296
+
297
+
298
+ # 2) Baryon density: either 'Omega_b' or 'omega_b' (default: 'omega_b' set to
299
+ # 0.02238280)
300
+ omega_b = 0.02238280
301
+ #Omega_b =
302
+
303
+
304
+ # 3) Ultra-relativistic species / massless neutrino density: either
305
+ # 'N_ur' or 'Omega_ur' or 'omega_ur' (default: 'N_ur' set to 3.044;
306
+ # see 2008.01074 and 2012.02726. This value is more accurate than the
307
+ # previous default value of 3.046) (note: instead of 'N_ur' you can
308
+ # pass equivalently 'N_eff', although this syntax is deprecated) (one
309
+ # more remark: if you have respectively 1,2,3 massive neutrinos, if
310
+ # you stick to the default value T_ncdm equal to 0.71611, designed to
311
+ # give m/omega of 93.14 eV, and if you want to use N_ur to get N_eff
312
+ # equal to 3.044 in the early universe, then you should pass here
313
+ # respectively 2.0308,1.0176,0.00441)
314
+ N_ur = 3.044
315
+ #Omega_ur =
316
+ #omega_ur =
317
+
318
+ # 3.a) To simulate ultra-relativistic species with non-standard properties, you
319
+ # can pass 'ceff2_ur' and 'cvis2_ur' (effective squared
320
+ # sound speed and viscosity parameter, like in the Generalised Dark Matter
321
+ # formalism of W. Hu) (default: both set to 1/3)
322
+ #ceff2_ur =
323
+ #cvis2_ur =
324
+
325
+
326
+ # 4) Density of cdm (cold dark matter): 'Omega_cdm' or 'omega_cdm', or,
327
+ # density of total non-relativistic matter: 'Omega_m' or 'omega_m'.
328
+ # If you pass 'Omega_m' or 'omega_m', the code will automatically fill
329
+ # up the density of Cold Dark Matter such that all non-relativistic species
330
+ # (including non-cold DM) sum up to your input value of Omega_m
331
+ # (default: 'omega_cdm' set to 0.1201075)
332
+ omega_cdm = 0.1201075
333
+ #Omega_cdm =
334
+ #Omega_m =
335
+ #omega_m =
336
+
337
+
338
+ # 5) ncdm sector (i.e. any non-cold dark matter relics, including massive
339
+ # neutrinos, warm dark matter, etc.):
340
+ # 5.a) 'N_ncdm' is the number of distinct species (default: set to 0)
341
+ N_ncdm =
342
+
343
+ # 5.b) 'use_ncdm_psd_files' is the list of N_ncdm numbers:
344
+ # - 0 means 'phase-space distribution (psd) passed analytically
345
+ # inside the code, in the module background.c, inside the function
346
+ # background_ncdm_distribution()',
347
+ # - 1 means 'psd passed as a file with at least two columns: first for
348
+ # q, second for f_0(q)', where q is p/T_ncdm
349
+ # (default: only zeros)
350
+ #use_ncdm_psd_files = 0
351
+
352
+ # 5.b.1) If some of the previous values are equal to one, 'ncdm_psd_filenames' is
353
+ # the list of names of psd files (as many as number of ones in previous
354
+ # entry)
355
+ ncdm_psd_filenames = psd_FD_single.dat
356
+
357
+ # 5.c) 'ncdm_psd_parameters' is an optional list of double parameters to
358
+ # describe the analytic distribution function or to modify a p.s.d. passed
359
+ # as a file. It is made available in the routine
360
+ # background_ncdm_distribution.
361
+ #ncdm_psd_parameters = 0.3 ,0.5, 0.05
362
+ #ncdm_psd_parameters = Nactive, sin^2_12 ,s23 ,s13
363
+
364
+ # 5.d) 'Omega_ncdm' or 'omega_ncdm' or 'm_ncdm' in eV (default: all set to
365
+ # zero); with only one of these inputs, CLASS computes the correct value
366
+ # of the mass; if both (Omega_ncdm, m_ncdm) or (omega_ncdm, m_ncdm) are
367
+ # passed, CLASS will renormalise the psd in order to fulfill both
368
+ # conditions. Passing zero in the list of m_ncdm's or Omega_ncdm's means
369
+ # that for this component, this coefficient is not imposed, and its value
370
+ # is inferred from the other one.
371
+ m_ncdm = 0.06
372
+ #m_ncdm = 0.04, 0.04, 0.04
373
+ #Omega_ncdm =
374
+ #omega_ncdm =
375
+
376
+ # 5.e) 'T_ncdm' is the ncdm temperature in units of photon temperature
377
+ # (default: set to 0.71611, which is slightly larger than the
378
+ # instantaneous decoupling value (4/11)^(1/3); indeed, this default value
379
+ # is fudged to give a ratio m/omega equal to 93.14 eV for active
380
+ # neutrinos, as predicted by precise studies of active neutrino
381
+ # decoupling, see hep-ph/0506164)
382
+ T_ncdm =
383
+
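The m/omega = 93.14 eV rule of thumb quoted in 5.e can be sketched as (illustrative helper, valid for active neutrinos at the default T_ncdm = 0.71611):

```python
def omega_ncdm_from_mass(m_tot_eV, m_over_omega_eV=93.14):
    """Rule of thumb from the comment above:
    omega_ncdm ~= m_tot / 93.14 eV (illustrative, not CLASS internals)."""
    return m_tot_eV / m_over_omega_eV
```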
384
+ # 5.f) 'ksi_ncdm' is the ncdm chemical potential in units of its own
385
+ # temperature (default: set to 0)
386
+ ksi_ncdm =
387
+
388
+ # 5.g) 'deg_ncdm' is the degeneracy parameter multiplying the psd: 1 stands for
389
+ # 'one family', i.e. one particle + anti-particle (default: set to 1.0)
390
+ deg_ncdm =
391
+
392
+ # 5.h) 'ncdm_quadrature_strategy' is the method used for the momentum sampling of
393
+ # the ncdm distribution function.
394
+ # - 0 is the automatic method,
395
+ # - 1 is Gauss-Laguerre quadrature,
396
+ # - 2 is the trapezoidal rule on [0,Infinity] using the transformation q->1/t-1.
397
+ # - 3 is the trapezoidal rule on [0,q_max] where q_max is the next input.
398
+ # (default: set to 0)
399
+ ncdm_quadrature_strategy =
400
+
401
+ # 5.h.1) 'ncdm_maximum_q' is the maximum q relevant only for Quadrature strategy 3.
402
+ # (default: set to 15)
403
+ ncdm_maximum_q =
404
+
405
+ # 5.h.2) Number of momentum bins. (default: 150)
406
+ ncdm_N_momentum_bins =
407
+
408
+
409
+ # 6) Curvature: 'Omega_k' (default: 'Omega_k' set to 0)
410
+ Omega_k = 0.
411
+
412
+
413
+ # Begin of ADDITIONAL SPECIES --> Add your species here
414
+
415
+ # 7.1) Decaying CDM into Dark Radiation
416
+ # 7.1.a) The current fractional density of dcdm+dr (decaying cold dark matter
417
+ # and its relativistic decay radiation): 'Omega_dcdmdr' or 'omega_dcdmdr'
418
+ # (default: 'Omega_dcdmdr' set to 0)
419
+ Omega_dcdmdr = 0.0
420
+ #omega_dcdmdr = 0.0
421
+
422
+ # 7.1.b) The rescaled initial value for dcdm density (see 1407.2418 for
423
+ # definitions). If you specify 7.1.a, 7.1.b will be found automatically by a
424
+ # shooting method, and vice versa. (default: 'Omega_dcdmdr' set to 0,
425
+ # hence so is 'Omega_ini_dcdm')
426
+ #Omega_ini_dcdm =
427
+ #omega_ini_dcdm =
428
+
429
+ # 7.1.c) Decay constant of dcdm in km/s/Mpc, same unit as H0 above.
430
+ Gamma_dcdm = 0.0
431
+ tau_dcdm = 0.0
432
+
433
+
434
+ # 7.2) Multi-interacting Dark Matter (idm), implemented by N. Becker,
435
+ # D.C. Hooper, and N. Schoeneberg. Described in (2010.04074)
436
+
437
+ # 7.2.1) Global parameters for the (multi-)interacting Dark Matter component
438
+
439
+ # 7.2.1.a) Amount of interacting Dark Matter
440
+ # Can be passed as either f_idm (fraction) or Omega_idm (relative abundance) (default : 0)
441
+ #Omega_idm = 0.
442
+ f_idm = 0.
443
+
444
+ # 7.2.1.b) Mass of interacting Dark Matter particle in eV (default : 1e9)
445
+ m_idm = 1e9
446
+
447
+ # 7.2.2) Dark Matter interacting with Dark Radiation (idm_dr) and
+ # interacting Dark Radiation (idr), implemented by
+ # M. Archidiacono, S. Bohr, and D.C. Hooper, following the ETHOS
+ # framework (1512.05344). Can also take as input the parameters
+ # of the models of 1507.04351 (with non-abelian dark matter, dark
+ # gluons...) which can be seen as a sub-class of ETHOS. See
+ # 1907.01496 for more details on both cases.
+
+ # 7.2.2.a) Amount of interacting dark radiation (idr)
+ # - Can be parameterised through the temperature ratio 'xi_idr' (= T_idr/T_cmb)
+ # - Can be parameterised through the number of extra relativistic relics 'N_idr' (or indifferently 'N_dg')
+ # In all cases the parameter is dimensionless.
+ # (default : 0)
+ xi_idr =
+ #N_idr =
+
+ # 7.2.2.b) Statistical factor to differentiate between fermionic (= 7/8) and bosonic (= 1) dark radiation (default 7/8)
+ stat_f_idr = 0.875
+
+ # 7.2.2.c) Strength of the coupling between DM and DR:
+ #
+ # Can be passed as 'a_idm_dr' or 'a_dark' in ETHOS parameterisation, in units of 1/Mpc.
+ # Then in ETHOS notations: Gamma_DR-DM = - omega_DM a_dark ((1+z)/10^7)^nindex_dark
+ # while: Gamma_DM-DR = - 4/3 (rho_DR/rho_DM) omega_DM a_dark ((1+z)/10^7)^nindex_dark
+ # = - 4/3 omega_DR a_dark (1+z) ((1+z)/10^7)^nindex_dark
+ # or in CLASS notations: dmu_idm_dr = - Gamma_DR-DM = omega_idm_dr a_idm_dr ((1+z)/10^7)^nindex_idm_dr
+ #
+ # Can be passed as 'Gamma_0_nadm' in NADM parameterisation, in units of 1/Mpc.
+ # Then in ETHOS notations: Gamma_DR-DM = - 3/4 Omega_DM/Omega_DR Gamma_0_nadm
+ # while: Gamma_DM-DR = - (1+z) Gamma_0_nadm
+ # or in CLASS notations: dmu_idm_dr = - Gamma_DR-DM = 3/4 Omega_idm_dr/Omega_idr Gamma_0_nadm
+ #
+ # (default : 0)
+ a_idm_dr = 0.
+ #Gamma_0_nadm =
+
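The ETHOS coupling formula quoted in 7.2.2.c is simple enough to evaluate directly; the sketch below just restates it as a function (the sample values are arbitrary, chosen only to exercise the redshift scaling):

```python
# Numerical illustration of the ETHOS rate quoted in 7.2.2.c:
#   dmu_idm_dr = omega_idm_dr * a_idm_dr * ((1+z)/1e7)^nindex_idm_dr
def dmu_idm_dr(z, omega_idm_dr, a_idm_dr, nindex_idm_dr=4):
    """Comoving idr-idm_dr interaction rate in 1/Mpc (ETHOS parameterisation)."""
    return omega_idm_dr * a_idm_dr * ((1.0 + z) / 1.0e7) ** nindex_idm_dr

# At (1+z) = 1e7 the redshift factor is exactly 1, so the rate reduces to
# omega_idm_dr * a_idm_dr:
rate = dmu_idm_dr(z=1.0e7 - 1.0, omega_idm_dr=0.12, a_idm_dr=1.0e3)
```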
+ # 7.2.2.d) Only if ETHOS parametrization: Power of the temperature dependence of the co-moving idr - idm_dr interaction rate
+ # Can be passed indifferently as 'nindex_idm_dr' (or 'nindex_dark'); the parameter is dimensionless.
+ # (default : 4, unless Gamma_0_nadm has been passed, then default changes to 0)
+ nindex_idm_dr =
+
+ # 7.2.2.e) Only if ETHOS parametrization: Nature of the interacting dark radiation: 'free_streaming' or 'fluid'
+ # (default = 'free_streaming', unless Gamma_0_nadm has been passed, then default changes to 'fluid')
+ idr_nature =
+
+ # 7.2.2.f) Strength of the dark radiation self-interaction coupling,
+ # can be passed as 'b_idr' or 'b_dark', in units of 1/Mpc.
+ # In ETHOS notations: Gamma_DR-DR = (b_dark/a_dark) (Omega_DR/Omega_DM) Gamma_DR-DM
+ # In CLASS notations: dmu_idr = - Gamma_DR-DR = (b_idr/a_idm_dr) (Omega_idr/Omega_idm_dr) dmu_idm_dr
+ # (default : 0)
+ b_idr =
+
+ # 7.2.2.g) idr - idm_dr interaction angular coefficient: 'alpha_idm_dr' (or indifferently 'alpha_dark')
+ # Should be 3/4 for a vector boson mediator; 3/2 for a scalar boson mediator.
+ # In full generality this coefficient may depend on l = 2, 3, 4...
+ # The user can pass here a list of values of arbitrary size. The first coefficients will be adjusted
+ # accordingly. After that, the last value will be repeated.
+ # For instance, if the user passes 3, 2, 1, the code will take alpha_2=3, alpha_3=2, and all others equal to 1.
+ # (default = all set to 1.5)
+ alpha_idm_dr = 1.5
+
+ # 7.2.2.h) idr self-interaction angular coefficient: 'beta_idr' (or indifferently 'beta_dark')
+ # In full generality this coefficient may depend on l = 2, 3, 4...
+ # The user can pass here a list of values of arbitrary size. The first coefficients will be adjusted
+ # accordingly. After that, the last value will be repeated.
+ # For instance, if the user passes 3, 2, 1, the code will take beta_2=3, beta_3=2, and all others equal to 1.
+ # (default = all set to 1.5)
+ beta_idr = 1.5
+
+ # -> Precision parameters for idm_dr and idr can be found in precisions.h, with the tag idm_dr
+
+ # 7.2.3) Interacting Dark Matter with Baryons
+ # Implemented by D.C. Hooper, N. Schoeneberg, and N. Becker
+ # following 1311.2937, 1509.00029, 1803.09734, 1802.06788
+
+ # 7.2.3.a) Coupling strength of Dark Matter and baryons (in cm^2) (default : 0)
+ cross_idm_b = 0.
+ # 7.2.3.b) Temperature dependence of the DM - baryon interactions (between -4 and 4) (default : 0)
+ n_index_idm_b = 0
+
+ # 7.2.4) Dark Matter interactions with photons
+ # Implemented by N. Becker following the formalism of Stadler & Boehm (1802.06589)
+
+ # 7.2.4.a) Interaction coefficient or coupling strength between DM and photons
+ # Can be passed as either u_idm_g (dimensionless interaction strength) or cross_idm_g (cross section in cm^2) (default : 0)
+ u_idm_g = 0
+ #cross_idm_g = 0
+
+ # 7.2.4.b) Temperature dependence of the DM - photon interactions (default : 0)
+ n_index_idm_g = 0
+
+ # End of ADDITIONAL SPECIES
+
+
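The idm parameters of sections 7.2.1-7.2.4 can be gathered into a single configuration sketch; the values below are purely illustrative and only the range constraint of 7.2.3.b is checked:

```python
# Sketch: collecting the (multi-)interacting DM parameters of 7.2.1-7.2.4
# into one configuration dict (illustrative values, not recommendations).
idm_settings = {
    "f_idm": 0.1,            # fraction of interacting DM (7.2.1.a)
    "m_idm": 1e9,            # idm mass in eV (7.2.1.b)
    "cross_idm_b": 1e-41,    # DM-baryon cross section in cm^2 (7.2.3.a)
    "n_index_idm_b": -4,     # temperature dependence, in [-4, 4] (7.2.3.b)
    "u_idm_g": 1e-4,         # dimensionless DM-photon coupling (7.2.4.a)
}

# 7.2.3.b restricts the DM-baryon index to the range [-4, 4]:
assert -4 <= idm_settings["n_index_idm_b"] <= 4
```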
+ # 8) Dark energy contributions.
+ # At least one out of three conditions must be satisfied:
+ # - 'Omega_Lambda' unspecified.
+ # - 'Omega_fld' unspecified.
+ # - 'Omega_scf' set to a negative value. [Will be referred to as
+ # unspecified in the following text.]
+ # The code will then use the first unspecified component to satisfy the
+ # closure equation (sum_i Omega_i = 1 + Omega_k)
+ # (default: 'Omega_fld' and 'Omega_scf' set to 0 and 'Omega_Lambda'
+ # inferred by code)
+ Omega_fld = 0
+ Omega_scf = 0
+ # Omega_Lambda = 0.7
+
+ # 8.a) If Omega fluid is different from 0
+
+ # 8.a.1) The flag 'use_ppf' is 'yes' by default, to use the PPF approximation
+ # (see 0808.3125 [astro-ph]) allowing perturbations to cross the
+ # phantom divide. Set to 'no' to enforce true fluid equations for
+ # perturbations. When the PPF approximation is used, you can choose the
+ # ratio 'c_gamma_over_c_fld' (eq. (16) in 0808.3125). The default is 0.4
+ # as recommended by that reference, and implicitly assumed in other
+ # codes. (default: 'use_ppf' to yes, 'c_gamma_over_c_fld' to 0.4)
+ use_ppf = yes
+ c_gamma_over_c_fld = 0.4
+
+ # 8.a.2) Choose your equation of state between different models,
+ # - 'CLP' for p/rho = w0_fld + wa_fld (1-a/a0)
+ # (Chevallier-Linder-Polarski),
+ # - 'EDE' for early Dark Energy
+ # (default: 'fluid_equation_of_state' set to 'CLP')
+ fluid_equation_of_state = CLP
+
+ # 8.a.2.1) Equation of state of the fluid in the 'CLP' case and squared sound speed
+ # 'cs2_fld' of the fluid (this is the sound speed defined in the frame
+ # comoving with the fluid, i.e. obeying the most sensible physical
+ # definition). Generalizing w(a) to more complicated expressions would
+ # be easy; for that, have a look at the function background_w_fld() in
+ # source/background.c. (default: 'w0_fld' set to -1, 'wa_fld' to
+ # 0, 'cs2_fld' to 1)
+ #w0_fld = -0.9
+ #wa_fld = 0.
+ #cs2_fld = 1
+
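The CLP equation of state quoted in 8.a.2 can be written out directly; the sketch below evaluates w(a) = w0_fld + wa_fld (1 - a/a0) with a0 normalised to 1, the usual convention (sample values illustrative):

```python
# The CLP (Chevallier-Linder-Polarski) equation of state of 8.a.2,
# w(a) = w0_fld + wa_fld * (1 - a/a0), with a0 = 1 by normalisation.
def w_clp(a, w0_fld=-1.0, wa_fld=0.0, a0=1.0):
    """Dark-energy equation of state p/rho at scale factor a."""
    return w0_fld + wa_fld * (1.0 - a / a0)

today = w_clp(1.0, w0_fld=-0.9, wa_fld=0.1)   # at a = a0 the wa term vanishes
early = w_clp(0.0, w0_fld=-0.9, wa_fld=0.1)   # a -> 0 limit: w0 + wa
```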
+ # 8.a.2.2) Equation of state of the fluid in the 'EDE' case and squared sound speed
+ # 'cs2_fld' of the fluid (this is the sound speed defined in the frame
+ # comoving with the fluid, i.e. obeying the most sensible physical
+ # definition). Generalizing w(a) to more complicated expressions would
+ # be easy; for that, have a look at the function background_w_fld() in
+ # source/background.c. (default: 'w0_fld' set to -1, 'Omega_EDE'
+ # to 0, 'cs2_fld' to 1)
+ #w0_fld = -0.9
+ #Omega_EDE = 0.
+ #cs2_fld = 1
+
+ # 8.b) If Omega scalar field is different from 0
+
+ # 8.b.1) Scalar field (scf) potential parameters and initial conditions
+ # (scf_parameters = [scf_lambda, scf_alpha, scf_A, scf_B, phi,
+ # phi_prime]). V = ((\phi-B)^\alpha + A)exp(-lambda*phi), see
+ # http://arxiv.org/abs/astro-ph/9908085. If 'attractor_ic_scf' is set to
+ # 'no', the last two entries are assumed to be the initial values of phi
+ # in units of the reduced Planck mass m_Pl and the conformal time
+ # derivative of phi in units of [m_Pl/Mpc]. (Note however that CLASS
+ # determines the initial scale factor dynamically and the results might
+ # not be as expected in some models.)
+ scf_parameters = 10.0, 0.0, 0.0, 0.0, 100.0, 0.0
+
+ # 8.b.2) Scalar field (scf) initial conditions from attractor solution (assuming
+ # pure exponential potential). (default: yes)
+ attractor_ic_scf = yes
+
+ # 8.b.3) Scalar field (scf) shooting parameter: If Omega_scf is set (can only be negative),
+ # the following index (0,1,2,...) in the list scf_parameters will be used for shooting:
+ # (See also the section about shooting in input.c)
+ # Basically, parameter number scf_tuning_index will be adjusted until
+ # the correct Omega_scf is found to satisfy the budget equation
+ scf_tuning_index = 0
+
+
+ # 8.b.4) Scalar field (scf) shooting parameter. With this, you can overwrite some parameter
+ # of 8.b.1) depending on the index defined in 8.b.3)
+ scf_shooting_parameter =
+
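The scf potential of 8.b.1 is given in closed form, so it can be evaluated directly; the sketch below uses the same parameter ordering as 'scf_parameters' (phi in reduced Planck units; sample values illustrative):

```python
import math

# Direct evaluation of the scf potential quoted in 8.b.1,
#   V(phi) = ((phi - B)^alpha + A) * exp(-lambda * phi),
# with the parameter ordering of 'scf_parameters'.
def scf_potential(phi, scf_lambda, scf_alpha, scf_A, scf_B):
    return ((phi - scf_B) ** scf_alpha + scf_A) * math.exp(-scf_lambda * phi)

# With alpha = A = B = 0 the potential reduces to a pure exponential,
# the case for which the attractor initial conditions of 8.b.2 are derived:
v = scf_potential(phi=0.1, scf_lambda=10.0, scf_alpha=0.0, scf_A=0.0, scf_B=0.0)
# here (phi - B)^alpha = 1, so v = exp(-lambda * phi) = exp(-1)
```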
+ # -----------------------------------------
+ # ----> Exotic energy injection parameters:
+ # -----------------------------------------
+
+ # 1) DM Annihilation
+
+ # 1.a) In order to model energy injection from DM annihilation, specify a
+ # parameter 'DM_annihilation_efficiency' corresponding to
+ # <sigma*v> / m_cdm expressed here in units of m^3/s/J. Alternatively,
+ # you can specify the annihilation cross section in cm^3/s and the DM
+ # mass in GeV. The code will then evaluate 'DM_annihilation_efficiency'.
+ # (default: all set to zero)
+ DM_annihilation_efficiency = 0.
+ #DM_annihilation_cross_section = 0.
+ #DM_annihilation_mass = 0.
+ #DM_annihilation_fraction = 0.
+
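The unit bookkeeping in 1.a can be made explicit: converting a cross section in cm^3/s and a mass in GeV into <sigma*v>/m_cdm in m^3/s/J only involves two standard conversion factors (the sample inputs below are arbitrary):

```python
# Illustration of the unit conversion described in 1.a.
CM3_TO_M3 = 1.0e-6          # 1 cm^3 = 1e-6 m^3
GEV_TO_J = 1.602176634e-10  # 1 GeV in joules

def annihilation_efficiency(cross_section_cm3_s, mass_gev):
    """<sigma*v>/m_cdm in m^3/s/J, from cm^3/s and GeV inputs."""
    return (cross_section_cm3_s * CM3_TO_M3) / (mass_gev * GEV_TO_J)

eff = annihilation_efficiency(3e-26, 100.0)  # thermal-relic-like cross section
```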
+ # 1.a.1) You can model simple variations of the above quantity as a function of
+ # redshift. If 'DM_annihilation_variation' is non-zero, the function F(z)
+ # defined as (<sigma*v> / m_cdm)(z) (see 1.a) will be a parabola in log-log scale
+ # between 'DM_annihilation_zmin' and 'DM_annihilation_zmax', with a curvature
+ # given by 'DM_annihilation_variation' (must be negative), and with a maximum
+ # at 'DM_annihilation_zmax'; it will be constant outside this range. To
+ # take DM halos into account, specify the parameters 'DM_annihilation_f_halo',
+ # the amplitude of the halo contribution, and 'DM_annihilation_z_halo',
+ # the characteristic redshift of halos (default: no variation,
+ # 'DM_annihilation_variation' and 'DM_annihilation_f_halo' set to zero).
+ DM_annihilation_variation = 0.
+ DM_annihilation_z = 1000
+ DM_annihilation_zmax = 2500
+ DM_annihilation_zmin = 30
+ DM_annihilation_f_halo = 0
+ DM_annihilation_z_halo = 8
+
+
+ # 2) DM electromagnetic decay
+
+ # 2.a) Specify the dimensionless parameter 'DM_decay_fraction' which is
+ # equal to the fraction of cdm with electromagnetic decay
+ # products (decaying into dark radiation is handled by
+ # Omega_dcdm). Note: Until class 2.7, this parameter was called
+ # 'decay'. Its name and its meaning have slightly changed to
+ # avoid confusion when working with models in which the lifetime
+ # of the dcdm can be small (this is allowed provided that the
+ # 'decay_fraction' parameter is small as well). (default: set to
+ # 0)
+ DM_decay_fraction = 0.
+
+ # 2.b) Specify the decay width of the particle 'DM_decay_Gamma' in 1/s.
+ # (default: set to 0)
+ DM_decay_Gamma = 0.
+
+
+ # 3) PBH evaporation. In this case, check that in 5), 'f_eff_type' and
+ # 'f_eff' have their default values 'on_the_spot' and 1, because CLASS
+ # will automatically take into account the time-dependent efficiency
+ # of energy injection from evaporating BH, taking the spectrum of
+ # evaporated particles into account.
+
+ # 3.a) Specify a dimensionless parameter 'PBH_evaporation_fraction' which is equal
+ # to the fraction of evaporating PBH. (default set to 0)
+ PBH_evaporation_fraction = 0.
+
+ # 3.b) Specify the mass of the evaporating PBH in g. (default set to 0)
+ PBH_evaporation_mass = 0.
+
+
+ # 4) PBH matter accretion
+
+ # 4.a) Specify a dimensionless parameter 'PBH_accretion_fraction' which is equal
+ # to the fraction of accreting PBH. (default set to 0)
+ PBH_accretion_fraction = 0.
+
+ # 4.b) Specify the mass of the accreting PBH in Msun. (default set to 0)
+ PBH_accretion_mass = 0.
+
+ # 4.c) Specify the 'PBH_accretion_recipe' between 'spherical_accretion'
+ # (computed according to Ali-Haimoud and Kamionkowski 1612.05644), or
+ # 'disk_accretion' (computed according to Poulin et al. 1707.04206).
+ # (default set to 'disk_accretion')
+ PBH_accretion_recipe = disk_accretion
+
+ # 4.c.1) If you choose 'spherical_accretion', you might want to specify the
+ # relative velocity between PBH and baryons in km/s.
+ # If negative, the linear result is chosen by the code.
+ # (default set to -1., standard value is the linear result extrapolated to PBH.)
+ PBH_accretion_relative_velocities = -1.
+
+ # 4.c.2) If you choose 'disk_accretion', you might want to specify the
+ # factor 'PBH_accretion_ADAF_delta' which determines the heating
+ # of the electrons in the disk, influencing the emissivity.
+ # Can be set to 0.5 (aggressive scenario), 0.1 or 1.e-3 (conservative).
+ # (default set to 1.e-3)
+ # Furthermore you can also specify the eigenvalue of the accretion
+ # rate. It rescales the perfect Bondi case. (see e.g. Ali-Haimoud
+ # & Kamionkowski 2016) (default set to 0.1, standard value in the ADAF
+ # scenario.)
+ PBH_accretion_ADAF_delta = 1.e-3
+ PBH_accretion_eigenvalue = 0.1
+
+
725
+ # 5) Define the so-called injection efficiency f_eff, i.e. the factor
726
+ # determining how much of the heating is deposited at all,
727
+ # regardless of the form. There are two options to define this
728
+ # function: 'on_the_spot' or 'from_file' (default: set to 'on_the_spot').
729
+ #
730
+ # - with 'on_the_spot', the injected energy is transformed into deposited energy
731
+ # at the same redshift with efficiency f_eff. In this case the
732
+ # user can pass explicitely the value of f_eff. (default: f_eff=1)
733
+ #
734
+ # - for 'from_file' the code reads a precomputed function in an external file
735
+ # with a path set by the user (default set to "external/heating/example_f_eff_file.dat")
736
+ f_eff_type = on_the_spot
737
+ #f_eff =
738
+ #f_eff_file = external/heating/example_f_eff_file.dat
739
+
740
+ # 6) Define the so-called deposition function chi, i.e. the function which determines
+ # the amount of energy effectively deposited into the different forms (heating,
+ # ionization, Lyman alpha and low energy). There are several options:
+ # - by setting 'chi_type' to 'CK_2004', the approximation by Chen & Kamionkowski 2004 is employed.
+ # - by setting 'chi_type' to 'PF_2005', the approximation by Padmanabhan & Finkbeiner 2005 is employed.
+ # - by setting 'chi_type' to 'Galli_2013_file', the approximation by Galli et al. 2013 is employed.
+ # - by setting 'chi_type' to 'Galli_2013_analytic', the approximation by Poulin,
+ # interpolating Galli et al. 2013 analytically, is employed. Use this for consistency tests with
+ # older versions of CLASS (2.x).
+ # - by setting 'chi_type' to 'heat', the whole injected energy is going
+ # to be deposited into heat.
+ # - by setting 'chi_type' to 'from_x_file' or 'from_z_file', the user can
+ # define their own deposition functions with respect to the free electron
+ # fraction x_e or to redshift, respectively.
+ # (default set to 'CK_2004')
+ chi_type = CK_2004
+
+ # 6.a) If the option 'from_x_file' or 'from_z_file' has been chosen, define the name of the file.
+ # Two files 'example_chix_file.dat' and 'example_chiz_file.dat' are given
+ # as example in external/heating. Note that 'example_chix_file.dat' has
+ # been computed according to the approximations of Galli et al. 2013.
+ # (default set to "external/heating/example_f_eff_file.dat")
+ #chi_file = external/heating/example_chiz_file.dat
+ #chi_file = external/heating/example_chix_file.dat
+
+
+
+ # -------------------------------
+ # ----> Non-linear parameters:
+ # -------------------------------
+
+ # 1) If you want an estimate of the non-linear P(k) and Cls:
+ # Enter 'halofit' or 'Halofit' or 'HALOFIT' for Halofit;
+ # Enter 'hmcode' or 'Hmcode' or 'HMcode' or 'HMCODE' for HMcode;
+ # otherwise leave blank (default: blank, linear P(k) and Cls)
+ non_linear =
+
+ # 1.a) if you chose Halofit:
+
+ # 1.a.1) if you have Omega_fld != 0 (i.e. you
+ # set Omega_Lambda=0) and wa_fld != 0, then you might want to use the
+ # pk-equal method of 0810.0190 and 1601.07230 by setting this flag to
+ # 'yes' (default: set to 'no')
+ pk_eq =
+
+ # 1.a.2) minimum value of k_max in 1/Mpc used internally inside
+ # Halofit to compute a few integrals. Should never be too small,
+ # otherwise these integrals would not converge. (default: 5 1/Mpc)
+ halofit_min_k_max =
+
+ # 1.b) if you chose HMcode:
+
+ # 1.b.1) choose the version among:
+ # - '2015' or '2016' (just '15' or '16' also works) for HMcode 2016 by Mead et al. (arXiv 1602.02154)
+ # - '2020' (just '20' also works) for HMcode 2020 by Mead et al. (arXiv 2009.01858)
+ # - '2020_baryonic_feedback' for HMcode 2020 by Mead et al. (arXiv 2009.01858) with baryonic feedback
+ # (default: 2020)
+ hmcode_version =
+
+ # 1.b.2) if you choose '2015' or '2016': baryonic feedback parameters 'eta_0' and 'c_min'
+ # In HMcode 2016 you can specify a baryonic feedback model (otherwise only DM is used).
+ # Each model depends on two parameters: the minimum concentration "c_min" from the
+ # Bullock et al. 2001 mass-concentration relation and the halo bloating parameter "eta_0"
+ # introduced in Mead et al. 2015. In Mead et al. 2015 the parameters c_min and eta_0 are fitted
+ # to the Cosmic Emulator dark-matter-only simulation (Heitmann et al. 2014) and the
+ # hydrodynamical OWLS simulations (Schaye et al. 2010, van Daalen et al. 2011).
+ # You can choose between the 5 models of Mead et al. 2015, Table 4:
+ # Model (eta_0, c_min) Explanation
+ # - 'emu_dmonly' (0.603, 3.13) fits the dark-matter-only Cosmic Emulator simulation (default)
+ # - 'owls_dmonly' (0.64, 3.43) fits the OWLS simulation of dark matter only
+ # - 'owls_ref' (0.68, 3.91) fits the OWLS simulation that includes gas cooling, heating,
+ # star formation and evolution, chemical enrichment and supernovae feedback
+ # - 'owls_agn' (0.76, 2.32) fits the OWLS simulation that includes AGN feedback
+ # - 'owls_dblim' (0.70, 3.01) fits the OWLS simulation that has extra supernova energy in wind velocity
+ # Set 'feedback model' to one of these names,
+ # or leave blank and pass manually the value of either 'eta_0' or 'c_min'
+ # (the other one will then be fixed according to equation (30) in Mead et al. 2015),
+ # or pass manually the two values of 'eta_0' and 'c_min'
+ # (default: 'feedback model' set to 'emu_dmonly')
+ feedback model =
+ eta_0 =
+ c_min =
+
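The five HMcode 2016 feedback calibrations listed in 1.b.2 are just (eta_0, c_min) pairs, so they can be held in a small lookup table (values copied from the list above, i.e. Mead et al. 2015, Table 4):

```python
# The HMcode 2016 feedback models of 1.b.2 as a lookup table of
# (eta_0, c_min) pairs, values as tabulated above.
FEEDBACK_MODELS = {
    "emu_dmonly": (0.603, 3.13),
    "owls_dmonly": (0.64, 3.43),
    "owls_ref": (0.68, 3.91),
    "owls_agn": (0.76, 2.32),
    "owls_dblim": (0.70, 3.01),
}

eta_0, c_min = FEEDBACK_MODELS["owls_agn"]  # AGN-feedback calibration
```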
+ # 1.b.3) if you choose '2020_baryonic_feedback', single feedback model
+ # parameter 'log10T_heat_hmcode' (default: 7.8)
+ log10T_heat_hmcode =
+
+ # 1.b.4) minimum value of k_max in 1/Mpc used internally inside
+ # HMcode to compute a few integrals. Should never be too small,
+ # otherwise these integrals would not converge. (default: 5 1/Mpc)
+ hmcode_min_k_max =
+
+ # 1.b.5) if you chose HMcode, set the redshift value at which the dark
+ # energy correction is evaluated - this needs to be at early times,
+ # when dark energy is totally subdominant (default: 10)
+ z_infinity =
+
+ # 1.b.6) if you chose HMcode, number of k points for the de-wiggling (default: 512)
+ nk_wiggle =
+
+ # 2) Control of the output of the nowiggle spectrum (assuming you required 'mPk')
+
+ # 2.a) do you want to enforce the calculation and output of an analytic
+ # approximation to the nowiggle linear power spectrum, even when this is not
+ # required by the chosen non-linear method? (default: no)
+ analytic_nowiggle =
+
+ # 2.b) do you want to enforce the calculation and output of the
+ # nowiggle linear power spectrum, obtained by filtering/smoothing the
+ # full spectrum, even when this is not required by the chosen
+ # non-linear method? (default: no)
+ numerical_nowiggle =
+
+ # ----------------------------
+ # ----> Primordial parameters:
+ # ----------------------------
+
+ # 1) Primordial spectrum type
+ # - 'analytic_Pk' for an analytic smooth function with amplitude, tilt,
+ # running, etc.; analytic spectra with features can also be added as
+ # a new type;
+ # - 'inflation_V' for a numerical computation of the inflationary
+ # primordial spectrum, through a full integration of the perturbation
+ # equations, given a parametrization of the potential V(phi) in the
+ # observable window, like in astro-ph/0703625;
+ # - 'inflation_H' for the same, but given a parametrization of the
+ # potential H(phi) in the observable window, like in
+ # arXiv:0710.1630;
+ # - 'inflation_V_end' for the same, but given a parametrization of the
+ # potential V(phi) in the whole region between the observable part and
+ # the end of inflation;
+ # - 'two_scales' allows one to specify two amplitudes instead of one
+ # amplitude and one tilt, like in the isocurvature mode analysis of the
+ # Planck inflation paper (works also for adiabatic mode only; see
+ # details below, item 1.f);
+ # - 'external_Pk' allows the primordial spectrum to be computed
+ # externally by some piece of code, or to be read from a table, see
+ # 1.g).
+ # (default: set to 'analytic_Pk')
+ Pk_ini_type = analytic_Pk
+
+ # 1.a) Pivot scale in Mpc^-1 (default: set to 0.05)
+ k_pivot = 0.05
+
+ # 1.b) For type 'analytic_Pk':
+ # 1.b.1) For scalar perturbations
+ # curvature power spectrum value at pivot scale ('A_s' or
+ # 'ln_A_s_1e10') OR one of 'sigma8' or 'S8' (found by iterations using a shooting
+ # method). (default: set 'A_s' to 2.100549e-09)
+ A_s = 2.100549e-09
+ #ln_A_s_1e10 = 3.04478383
+ #sigma8 = 0.824398
+ #S8 = 0.837868
+
+ # 1.b.1.1) Adiabatic perturbations:
+ # tilt at the same scale 'n_s', and tilt running 'alpha_s'
+ # (default: set 'n_s' to 0.9660499, 'alpha_s' to 0)
+ n_s = 0.9660499
+ alpha_s = 0.
+
+ # 1.b.1.2) Isocurvature/entropy perturbations:
+ # for each mode xx ('xx' being one of 'bi', 'cdi', 'nid', 'niv',
+ # corresponding to baryon, cdm, neutrino density and neutrino velocity
+ # entropy perturbations), enter the entropy-to-curvature ratio f_xx,
+ # tilt n_xx and running alpha_xx, all defined at the pivot scale; e.g.
+ # f_cdi of 0.5 means S_cdi/R equal to one half and (S_cdi/R)^2 to 0.25
+ # (default: set each 'f_xx' to 1, 'n_xx' to 1, 'alpha_xx' to 0)
+ f_bi = 1.
+ n_bi = 1.5
+ f_cdi = 1.
+ f_nid = 1.
+ n_nid = 2.
+ alpha_nid = 0.01
+ # etc.
+
+ # 1.b.1.3) Cross-correlation between different adiabatic/entropy modes:
+ # for each pair (xx, yy) where 'xx' and 'yy' are one of 'ad', 'bi',
+ # 'cdi', 'nid', 'niv', enter the correlation c_xx_yy (parameter between
+ # -1 and 1, standing for cosDelta, the cosine of the cross-correlation
+ # angle), the tilt n_xx_yy of the function cosDelta(k), and its running
+ # alpha_xx_yy, all defined at the pivot scale. So, for a pair of fully
+ # correlated (resp. anti-correlated) modes, one should set (c_xx_yy,
+ # n_xx_yy, alpha_xx_yy) to (1,0,0) (resp. (-1,0,0)). (default: set each
+ # 'c_xx_yy' to 0, 'n_xx_yy' to 0, 'alpha_xx_yy' to 0)
+ c_ad_bi = 0.5
+ #n_ad_bi = 0.1
+ c_ad_cdi = -1.
+ c_bi_nid = 1.
+ #n_bi_nid = -0.2
+ #alpha_bi_nid = 0.002
+ # etc.
+
+ # 1.b.2) For tensor perturbations (if any):
+ # tensor-to-scalar power spectrum ratio, tilt and
+ # running at the pivot scale; if 'n_t' and/or 'alpha_t' is set to 'scc'
+ # or 'SCC' instead of a numerical value, it will be inferred from the
+ # self-consistency condition of single field slow-roll inflation: for
+ # n_t, -r/8*(2-r/8-n_s); for alpha_t, r/8*(r/8+n_s-1) (default: set 'r'
+ # to 1, 'n_t' to 'scc', 'alpha_t' to 'scc')
+ r = 1.
+ n_t = scc
+ alpha_t = scc
+
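The slow-roll self-consistency conditions quoted in 1.b.2 are closed-form, so the 'scc' values can be reproduced by hand (sample r and n_s are illustrative):

```python
# The self-consistency conditions of 1.b.2 evaluated directly:
#   n_t     = -r/8 * (2 - r/8 - n_s)
#   alpha_t =  r/8 * (r/8 + n_s - 1)
def tensor_tilt_scc(r, n_s):
    return -r / 8.0 * (2.0 - r / 8.0 - n_s)

def tensor_running_scc(r, n_s):
    return r / 8.0 * (r / 8.0 + n_s - 1.0)

n_t = tensor_tilt_scc(r=0.06, n_s=0.9660499)
alpha_t = tensor_running_scc(r=0.06, n_s=0.9660499)
# both come out small and negative for r << 1 and n_s slightly below 1
```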
+ # 1.c) For type 'inflation_V'
+ # 1.c.1) Type of potential: 'polynomial' for a Taylor expansion of the
+ # potential around phi_pivot. Other shapes can easily be defined in the
+ # primordial module.
+ potential = polynomial
+
+ # 1.c.2) For 'inflation_V' and 'polynomial': enter either the coefficients
+ # 'V_0', 'V_1', 'V_2', 'V_3', 'V_4' of the Taylor expansion (in units of
+ # Planck mass to the appropriate power), or their ratios 'R_0', 'R_1',
+ # 'R_2', 'R_3', 'R_4' corresponding to (128pi/3)*V_0^3/V_1^2,
+ # V_1^2/V_0^2, V_2/V_0, V_1*V_3/V_0, V_1^2*V_4/V_0^3, or the
+ # potential-slow-roll parameters 'PSR_0', 'PSR_1', 'PSR_2', 'PSR_3',
+ # 'PSR_4', equal respectively to R_0, epsilon_V=R_1/(16pi),
+ # eta_V=R_2/(8pi), ksi_V=R_3/(8pi)^2, omega_V=R_4/(8pi)^3 (default:
+ # 'V_0' set to 1.25e-13, 'V_1' to 1.12e-14, 'V_2' to 6.95e-14, 'V_3'
+ # and 'V_4' to zero).
+ V_0 = 1.e-13
+ V_1 = -1.e-14
+ V_2 = 7.e-14
+ V_3 =
+ V_4 =
+ #R_0 = 2.18e-9
+ #R_1 = 0.1
+ #R_2 = 0.01
+ #R_3 =
+ #R_4 =
+ #PSR_0 = 2.18e-9
+ #PSR_1 = 0.001989
+ #PSR_2 = 0.0003979
+ #PSR_3 =
+ #PSR_4 =
+
+ # 1.d) For 'inflation_H':
+ # enter either the coefficients 'H_0', 'H_1', 'H_2', 'H_3', 'H_4' of the
+ # Taylor expansion (in units of Planck mass to the appropriate power), or the
+ # Hubble-slow-roll parameters 'HSR_0', 'HSR_1', 'HSR_2', 'HSR_3', 'HSR_4'
+ H_0 = 1.e-13
+ H_1 = -1.e-14
+ H_2 = 7.e-14
+ H_3 =
+ H_4 =
+ #HSR_0 = 2.18e-9
+ #HSR_1 = 0.001989
+ #HSR_2 = 0.0003979
+ #HSR_3 =
+ #HSR_4 =
+
+ # 1.e) For type 'inflation_V_end':
+ # 1.e.1) Value of the field at the minimum of the potential after inflation, or
+ # at a value at which you want to impose the end of inflation, in
+ # hybrid-like models. By convention, the code expects inflation to take
+ # place for values smaller than this value, with phi increasing with
+ # time (using a reflection symmetry, it is always possible to be in that
+ # case) (default: 'phi_end' set to 0)
+ phi_end =
+
+ # 1.e.2) Shape of the potential. Refers to functions pre-coded in the primordial
+ # module, so far 'polynomial' and 'higgs_inflation'. (default:
+ # 'full_potential' set to 0)
+ full_potential = polynomial
+
+ # 1.e.3) Parameters of the potential. The meaning of each parameter is
+ # explained in the function primordial_inflation_potential() in
+ # source/primordial.c
+ Vparam0 =
+ Vparam1 =
+ Vparam2 =
+ Vparam3 =
+ Vparam4 =
+
+ # 1.e.4) How much the scale factor a or the product (aH) increases between
+ # Hubble crossing for the pivot scale (during inflation) and the end of
+ # inflation. You can pass either: 'N_star' (standing for
+ # log(a_end/a_pivot)) set to a number; or 'ln_aH_ratio' (standing for
+ # log(aH_end/aH_pivot)) set to a number (default: 'N_star' set to 60)
+ #ln_aH_ratio = 50
+ #N_star = 55
+
+ # 1.e.5) Should the inflation module do its normal job of numerical integration
+ # ('numerical') or use analytical slow-roll formulas to infer the
+ # primordial spectrum from the potential ('analytical')? (default:
+ # 'inflation_behavior' set to 'numerical')
+ #inflation_behavior = numerical
+
+ # 1.f) For type 'two_scales' (currently this option works only for scalar modes,
+ # and either for pure adiabatic modes or adiabatic + one type of
+ # isocurvature):
+ # 1.f.1) Two wavenumbers 'k1' and 'k2' in 1/Mpc, at which primordial amplitude
+ # parameters will be given. The value of 'k_pivot' will not be used in
+ # input but quantities at k_pivot will still be calculated and stored in
+ # the primordial structure (no default value: compulsory input if
+ # 'Pk_ini_type' has been set to 'two_scales')
+ k1 = 0.002
+ k2 = 0.1
+
+ # 1.f.2) Two amplitudes 'P_{RR}^1', 'P_{RR}^2' for the adiabatic primordial
+ # spectrum (no default value: compulsory input if 'Pk_ini_type' has been
+ # set to 'two_scales')
+ P_{RR}^1 = 2.3e-9
+ P_{RR}^2 = 2.3e-9
+
+ # 1.f.3) If one isocurvature mode has been turned on ('ic' set e.g. to 'ad,cdi'
+ # or 'ad,nid', etc.), enter values of the isocurvature amplitudes
+ # 'P_{II}^1', 'P_{II}^2', and cross-correlation amplitudes 'P_{RI}^1',
+ # '|P_{RI}^2|' (see the Planck paper on inflation for details on
+ # definitions)
+ P_{II}^1 = 1.e-11
+ P_{II}^2 = 1.e-11
+ P_{RI}^1 = -1.e-13
+ |P_{RI}^2| = 1.e-13
+
+ # 1.f.4) Set 'special_iso' to 'axion' or 'curvaton' for two particular cases:
+ # 'axion' means uncorrelated, n_ad equal to n_iso; 'curvaton' means fully
+ # anti-correlated with f_iso<0 (in the conventions of the Planck
+ # inflation paper this would be called fully correlated), n_iso equal
+ # to one; in these two cases, the last three of the four parameters in
+ # 1.f.3 will be over-written given the input for 'P_{II}^1' (default:
+ # 'special_iso' left blank, code assumes the general case described by the
+ # four parameters of 1.f.3)
+ special_iso =
+
+ # 1.g) For type 'external_Pk' (see the external documentation external_Pk/README.md
+ # for more details):
+ # 1.g.1) Command generating the table. If the table is already generated, just
+ # write "cat <table_file>". The table should have two columns (k, pk) if
+ # tensors are not requested, or three columns (k, pks, pkt) if they are.
+ #command = python external/external_Pk/generate_Pk_example.py
+ #command = python external/external_Pk/generate_Pk_example_w_tensors.py
+ command = cat external/external_Pk/Pk_example.dat
+ #command = cat external/external_Pk/Pk_example_w_tensors.dat
+
+ # 1.g.2) If the table is not pregenerated, parameters to be passed to the
+ # command, in the right order, starting from "custom1" and up to
+ # "custom10". They must be real numbers.
+ custom1 = 0.05 # In the example command: k_pivot
+ custom2 = 2.215e-9 # In the example command: A_s
+ custom3 = 0.9624 # In the example command: n_s
+ custom4 = 2e-10 # In the example (with tensors) command: A_t
+ custom5 = -0.1 # In the example (with tensors) command: n_t
+ #custom6 = 0
+ #custom7 = 0
+ #custom8 = 0
+ #custom9 = 0
+ #custom10 = 0
+
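A two-column (k, pk) table of the kind 'external_Pk' consumes can be produced in a few lines. The sketch below uses the standard power-law form with the custom1-custom3 sample values above; the exact output of generate_Pk_example.py is an assumption, so this is a stand-alone illustration rather than a copy of that script:

```python
import math

# Stand-alone sketch of a two-column (k, pk) table for 'external_Pk',
# using the power-law form P(k) = A_s * (k/k_pivot)^(n_s - 1)
# with the custom1..custom3 sample values above.
k_pivot, A_s, n_s = 0.05, 2.215e-9, 0.9624

def primordial_pk(k):
    return A_s * (k / k_pivot) ** (n_s - 1.0)

ks = [10 ** e for e in range(-5, 2)]          # k from 1e-5 to 10 per Mpc
lines = ["{:.6e} {:.6e}".format(k, primordial_pk(k)) for k in ks]
# print("\n".join(lines))  # what 'command = cat <table_file>' would then read

# at the pivot scale the spectrum equals A_s by construction
assert math.isclose(primordial_pk(k_pivot), A_s)
```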
+
+
+ # -------------------------
+ # ----> Spectra parameters:
+ # -------------------------
+
+ # 1) Maximum l for Cls:
+ # - 'l_max_scalars' for CMB scalars (temperature, polarization, cmb
+ # lensing potential),
+ # - 'l_max_vectors' for CMB vectors
+ # - 'l_max_tensors' for CMB tensors (temperature, polarization)
+ # - 'l_max_lss' for Large Scale Structure Cls (density, galaxy
+ # lensing potential)
+ # Reducing 'l_max_lss' with respect to l_max_scalars reduces the execution
+ # time significantly (default: set 'l_max_scalars' to 2500, 'l_max_vectors'
+ # to 500, 'l_max_tensors' to 500, 'l_max_lss' to 300)
+ l_max_scalars = 2500
+ #l_max_vectors = 500
+ l_max_tensors = 500
+ #l_max_lss = 300
+
+
+ # 2) Parameters for the the matter density number count (option 'nCl'
1111
+ # (or 'dCl')) or galaxy lensing potential (option 'sCl') Cls:
1112
+ # 2.a) Enter here a description of the selection functions W(z) of each redshift
1113
+ # bin; selection can be set to 'gaussian', 'tophat' or 'dirac', then pass a
1114
+ # list of N mean redshifts in growing order separated by comas, 1 or N
1115
+ # widths separated by comas, 1 or N bias separated by a comma, and 1 or N
1116
+ # magnification bias separated by a comma. The width stands for one
1117
+ # standard deviation of the gaussian (in z space), or for the half-width of
1118
+ # the top-hat. Finally, non_diagonal sets the number of cross-correlation
1119
+ # spectra that you want to calculate: 0 means only auto-correlation, 1
1120
+ # means only adjacent bins, and number of bins minus one means all
1121
+ # correlations (default: set to 'gaussian',1,0.1,1.,0.,0)
1122
+ #
1123
+ # NOTE: For good performances, the code uses the Limber approximation for
1124
+ # nCl. If you want high precision even with thin selection functions,
1125
+ # increase the default value of the precision parameters
1126
+ # l_switch_limber_for_nc_local_over_z and
1127
+ # l_switch_limber_for_nc_los_over_z; for instance, add them to the
1128
+ # input file with values 10000 and 2000, instead of the default 100
1129
+ # and 30.
1130
+ selection=gaussian
1131
+ selection_mean = 0.98,0.99,1.0,1.1,1.2
1132
+ selection_width = 0.1
1133
+ selection_bias =
1134
+ selection_magnification_bias =
1135
+ non_diagonal=4
1136
+
1137
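As a quick numerical illustration of the selection settings above (a sketch of the stated convention, not CLASS's internal implementation): 'gaussian' with `selection_width = 0.1` means each bin's window W(z) is a normalized gaussian with standard deviation 0.1 in z, centred on the corresponding `selection_mean` entry.

```python
import numpy as np

def gaussian_selection(z, z_mean, width):
    """Normalized gaussian window W(z); 'width' is one standard deviation
    in z, as described above. Illustrative sketch only."""
    w = np.exp(-0.5 * ((z - z_mean) / width) ** 2)
    dz = z[1] - z[0]                 # uniform grid spacing
    return w / (np.sum(w) * dz)      # normalize so that int W(z) dz = 1

z = np.linspace(0.0, 3.0, 3001)
for z_mean in (0.98, 0.99, 1.0, 1.1, 1.2):   # the selection_mean list above
    W = gaussian_selection(z, z_mean, 0.1)   # selection_width = 0.1
    print(z_mean, np.sum(W) * (z[1] - z[0])) # each window integrates to 1
```

With `non_diagonal = 4` and five such bins, all 5 auto-spectra and all cross-spectra between the bins are computed.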
+ # 2.b) It is possible to multiply the window function W(z) by a selection
+ # function 'dNdz' (number of objects per redshift interval). Type the name
+ # of the file containing the redshift in the first column and the number of
+ # objects in the second column (do not call it 'analytic*'). Set to
+ # 'analytic' to use instead the analytic expression from arXiv:1004.4640
+ # (this function can be tuned in the module transfer.c, in the subroutine
+ # transfer_dNdz_analytic). Leave blank to use a uniform distribution
+ # (default).
+ dNdz_selection =
+
+ # 2.c) It is possible to consider source number counts evolution. Type the name
+ # of the file containing the redshift in the first column and the number
+ # of objects in the second column (do not call it 'analytic*'). Set to
+ # 'analytic' to use instead the analytic expression from Eq. 48 of
+ # arXiv:1105.5292. Leave blank to use constant comoving number densities
+ # (default).
+ dNdz_evolution =
+
+
+ # 3) Power spectrum P(k)
+ # 3.a) Maximum k in P(k), 'P_k_max_h/Mpc' in units of h/Mpc or 'P_k_max_1/Mpc'
+ # in units of 1/Mpc. If scalar Cls are also requested, a minimum value is
+ # automatically imposed (the same as in the scalar Cls computation) (default:
+ # set to 1 1/Mpc)
+ P_k_max_h/Mpc = 1.
+ #P_k_max_1/Mpc = 0.7
+
+ # 3.a.1) If you want to use a different value for k_max in the primordial and
+ # perturbations structures, specify
+ # 'primordial_P_k_max_h/Mpc' in units of h/Mpc or
+ # 'primordial_P_k_max_1/Mpc' in units of 1/Mpc
+ # to define the maximum value of k for the primordial power spectrum. By doing
+ # so, 'P_k_max_h/Mpc' will only apply to perturbations. If unspecified,
+ # 'primordial_P_k_max_h/Mpc' is assumed to be the same as 'P_k_max_h/Mpc'.
+ #primordial_P_k_max_h/Mpc =
+ #primordial_P_k_max_1/Mpc =
+
+ # 3.b) Value(s) 'z_pk' of redshift(s) for P(k,z) output file(s); can be ordered
+ # arbitrarily, but must be separated by commas (default: set 'z_pk' to 0)
+ z_pk = 0
+ #z_pk = 0., 1.2, 3.5
+
+ # 3.c) If the code is interfaced with routines that need to interpolate P(k,z) at
+ # various values of (k,z), enter 'z_max_pk', the maximum value of z at
+ # which such interpolations are needed. (default: set to maximum value in
+ # the 'z_pk' input above)
+ #z_max_pk = 10.
+
+
+
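The two k_max conventions above differ only by a factor of h: a value in h/Mpc times the reduced Hubble parameter gives the value in 1/Mpc. A one-line sketch (the h value here is purely illustrative, not a CLASS default):

```python
def k_in_inv_mpc(k_in_h_over_mpc, h):
    # convert a 'P_k_max_h/Mpc'-style value to a 'P_k_max_1/Mpc'-style value
    return k_in_h_over_mpc * h

h = 0.6756  # illustrative reduced Hubble parameter
print(k_in_inv_mpc(1.0, h))  # P_k_max_h/Mpc = 1. corresponds to ~0.68 1/Mpc
```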
+ # ----------------------------------
+ # ----> Lensing parameters:
+ # ----------------------------------
+
+ # 1) Relevant only if you ask for 'tCl, lCl' and/or 'pCl, lCl': if you want the
+ # spectrum of lensed Cls. Can be anything starting with 'y' or 'n'.
+ # (default: no lensed Cls)
+ lensing = yes
+
+
+ # 2) Should the lensing potential [phi+psi](k,tau) and the lensing spectrum Cl_phiphi be rescaled?
+ # You can rescale [phi+psi](k,tau) at all times by an overall amplitude and tilt: lcmb_rescale*(k/lcmb_pivot)**lcmb_tilt
+ # Or, more simply, you can pass the usual parameter 'A_L', and the potential will be rescaled by sqrt(A_L)
+ # (matches the standard definition in Calabrese et al. 0803.2309)
+ # (default: no rescaling: A_L=lcmb_rescale=1)
+ A_L =
+ #lcmb_rescale = 1
+ #lcmb_tilt = 0
+ #lcmb_pivot = 0.1
+
+ # In general, do we want to use the full Limber scheme introduced in
+ # v3.2.2? With this full Limber scheme, the calculation of the CMB
+ # lensing potential spectrum C_l^phiphi for l > ppr->l_switch_limber
+ # is based on a new integration scheme. Compared to the previous
+ # scheme, which can be recovered by setting this parameter to 'no',
+ # the new scheme uses a larger k_max and a coarser k-grid (or q-grid)
+ # than the CMB transfer function. The new scheme is used by default,
+ # because the old one is inaccurate at large l due to the too small
+ # k_max. Set to anything starting with the letter 'y' or
+ # 'n'. (default: 'yes')
+ want_lcmb_full_limber = yes
+
+ # -------------------------------------
+ # ----> Distortions parameters:
+ # -------------------------------------
+
+ # 1) Which kind of approximation would you like to use for the calculation of the
+ # branching ratios?
+ # To use approximation 1) 'branching approx'=sharp_sharp
+ # To use approximation 2) 'branching approx'=sharp_soft
+ # To use approximation 3) 'branching approx'=soft_soft
+ # To use approximation 4) 'branching approx'=soft_soft_cons
+ # To use approximation 5) 'branching approx'=exact
+ #
+ # Approximation 3) violates energy conservation in the plasma, and is discouraged.
+ # Please be aware that the total energy injected will NOT include the residual distortion energy.
+ # (default: set to 'exact')
+ sd_branching_approx = exact
+
+ # 1.a) If the branching ratios are set to 'exact', the user can specify additional parameters. For any other
+ # branching ratio approximation, all of the following parameters are going to be ignored.
+ # 1.a.1) How many multipoles do you want to use for the residuals in the case of the PCA
+ # analysis? The value can vary between 0 and 6. (default: set to 2)
+ sd_PCA_size = 2
+
+ # 1.a.2) If the PCA size is different from 0, you need to specify the chosen detector by
+ # setting "sd_detector_name" and defining any of 1.a.3.x).
+ # In external/distortions, the file detectors_list.dat contains a list
+ # of currently "known" detectors, i.e. detectors for which the PCA decomposition
+ # has already been computed. If no detector name is specified, but the detector specifics
+ # 1.a.3.x) are, the name will be created automatically. If no name and no specifics are
+ # given, PIXIE values are going to be assumed.
+ #
+ # For instance, in the case of "sd_detector_name=PIXIE", the values are fixed respectively to 30 GHz,
+ # 1005 GHz, 15 GHz and 5 10^-26 W/(m^2 Hz sr), and all spectral shapes and branching ratios are
+ # already precomputed.
+ #
+ # It would be very helpful if, once the vectors for a new detector have been computed, the user would
+ # send us the files containing spectral shapes and branching ratios (see e.g.
+ # external/distortions for templates).
+ sd_detector_name = PIXIE
+
+ # 1.a.3) Provide the specifics of the detector (frequencies and sensitivities).
+ # Either define a path to a full noise file or enter the specifics here.
+ # 1.a.3.1) Give a path to the full noise file containing
+ # - the frequency array in GHz and
+ # - the detector noise in 10^-26 W/(m^2 Hz sr) for each frequency.
+ # Please supply the path relative to external/distortions
+ #sd_detector_file = FIRAS_nu_delta_I.dat
+
+ # 1.a.3.2) If you did not supply the full noise file, you need to set
+ # - the minimum frequency in GHz
+ # - the maximum frequency in GHz
+ # - the bin width in GHz or alternatively the number of bins
+ # - the detector noise in 10^-26 W/(m^2 Hz sr)
+ # of the chosen detector.
+ #sd_detector_nu_min = 30.
+ #sd_detector_nu_max = 1000.
+ #sd_detector_nu_delta = 15.
+ #sd_detector_bin_number = 65
+ #sd_detector_delta_Ic = 5.
+
+
+ # 2) Only calculate non-LCDM contributions to heating?
+ # Sometimes, for comparison, one might want to disable all LCDM contributions to SDs.
+ # Can be set to anything starting with 'y' or 'n' (default: no)
+ sd_only_exotic = no
+
+
+ # 3) Include g distortions?
+ # Can be set to anything starting with 'y' or 'n' (default: no)
+ sd_include_g_distortion = no
+
+
+ # 4) If you want to manually add a y and/or a mu parameter on top of the calculated values, specify
+ # 'sd_add_y' or 'sd_add_mu' or both (default: set to 0 for both)
+ sd_add_y = 0.
+ sd_add_mu = 0.
+
+
+ # 5) Include the SZ effect from reionization? Can be set to anything starting with 'y' or 'n'
+ # (default: no)
+ include_SZ_effect = no
+
+ # 5.a) Specify the type of approximation you want to use for the SZ effect:
+ # - by setting 'sd_reio_type' to 'Nozawa_2005', the approximation by Nozawa et al. 2005 is employed.
+ # - by setting 'sd_reio_type' to 'Chluba_2012', the approximation by Chluba et al. 2012 is employed.
+ # Note that, for the moment, this approximation is only valid for cluster temperatures lower
+ # than a few keV.
+ # (default: set to 'Chluba_2012')
+ #sd_reio_type = Chluba_2012
+
+
+
+ # ----------------------------------
+ # ----> Output parameters:
+ # ----------------------------------
+
+ # 1) Output for external files
+ # 1.a) File name root 'root' for all output files (if Cl requested, written to
+ # '<root>_cl.dat'; if P(k) requested, written to '<root>_pk.dat'; plus
+ # similar files for scalars, tensors, pairs of initial conditions, etc.;
+ # if a file with input parameters is requested, written to
+ # '<root>_parameters.ini')
+ # If no root is specified, the root will be set to 'output/<thisfilename>'
+ # (default: output/<thisfilename>)
+ #root = output/test
+
+ # 1.a.1) If root is specified, do you want to keep overwriting the file,
+ # or do you want to create files numbered as '<root>N_'?
+ # Can be set to anything starting with 'y' or 'n' (default: no)
+ overwrite_root = no
+
+ # 1.b) Do you want headers at the beginning of each output file (giving
+ # details on the output units/format)? Can be set to anything
+ # starting with 'y' or 'n' (default: yes)
+ headers = yes
+
+ # 1.c) In all output files, do you want columns to be normalized and ordered
+ # with the default CLASS definitions or with the CAMB definitions (often
+ # identical to the CMBFAST ones)? Set 'format' to either 'class', 'CLASS',
+ # 'camb' or 'CAMB' (default: 'class')
+ format = class
+
+ # 1.d) Do you want to write a table of background quantities in a file? This
+ # will include H, densities, Omegas, various cosmological distances, sound
+ # horizon, etc., as a function of conformal time, proper time, scale
+ # factor. Can be set to anything starting with 'y' or 'n' (default: no)
+ write_background = no
+
+ # 1.e) Do you want to write a table of thermodynamics quantities in a file?
+ # Can be set to anything starting with 'y' or 'n'. (default: no)
+ write_thermodynamics = no
+
+ # 1.f) Do you want to write a table of perturbations to files for certain
+ # wavenumbers k? The dimension of k is 1/Mpc. The actual wave numbers are
+ # chosen such that they are as close as possible to the requested k-values. (default: none)
+ #k_output_values = 0.01, 0.1, 0.0001
+
+ # 1.g) Do you want to write the primordial scalar(/tensor) spectrum in a file,
+ # with columns k [1/Mpc], P_s(k) [dimensionless], ( P_t(k)
+ # [dimensionless])? Can be set to anything starting with 'y' or 'n'. (default: no)
+ write_primordial = no
+
+ # 1.h) Do you want to write the exotic energy injection function in a file,
+ # with columns z [dimensionless], dE/dz_inj, dE/dz_dep [J/(m^3 s)]?
+ # 1.i) Do you want to write also the non-injected photon heating?
+ # The file is created if 'write_exotic_injection' or
+ # 'write_noninjection' is set to something containing the letter
+ # 'y' or 'Y'; otherwise it is not written (default: no)
+ write_exotic_injection = no
+ #write_noninjection = no
+
+ # 1.k) Do you want to write the spectral distortions in a file,
+ # with columns x [dimensionless], DI(x) [dimensionless]?
+ # The file is created if 'write_distortions' is set to something containing the letter
+ # 'y' or 'Y'; otherwise it is not written (default: no)
+ write_distortions = no
+
+ # 1.l) Do you want to have all input/precision parameters which have been read
+ # written in the file '<root>parameters.ini', and those not read written in the
+ # file '<root>unused_parameters'? Can be set to anything starting with 'y'
+ # or 'n'. (default: yes)
+ write_parameters = yes
+
+ # 1.m) Do you want a warning written in the standard output when an input
+ # parameter or value could not be interpreted? Can be set to anything starting
+ # with 'y' or 'n' (default: no)
+ write_warnings = no
+
+ # 2) Amount of information sent to standard output: increase integer values
+ # to make each module more talkative (default: all set to 0)
+ input_verbose = 1
+ background_verbose = 1
+ thermodynamics_verbose = 1
+ perturbations_verbose = 1
+ transfer_verbose = 1
+ primordial_verbose = 1
+ harmonic_verbose = 1
+ fourier_verbose = 1
+ lensing_verbose = 1
+ distortions_verbose = 1
+ output_verbose = 1
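Many of the flags above accept "anything starting with 'y' or 'n'". A minimal sketch of that stated convention (not CLASS's actual parser, which lives in input.c):

```python
def read_yes_no(value, default=False):
    """Interpret a CLASS-style flag: anything starting with 'y'/'Y' counts
    as yes, anything starting with 'n'/'N' as no. Sketch of the convention
    described in the comments above, not the real input.c parser."""
    v = value.strip()
    if not v:
        return default
    if v[0] in 'yY':
        return True
    if v[0] in 'nN':
        return False
    raise ValueError(f"cannot interpret flag value: {value!r}")

print(read_yes_no('yes'), read_yes_no('Nope'), read_yes_no('Y'))
```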
class-data/many_times.py ADDED
@@ -0,0 +1,261 @@
+ #!/usr/bin/env python
+ # coding: utf-8
+
+ # In[ ]:
+
+
+ # import necessary modules
+ # uncomment to get plots displayed in notebook
+ get_ipython().run_line_magic('matplotlib', 'inline')
+ import matplotlib
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from classy import Class
+ from scipy.optimize import fsolve
+ from scipy.interpolate import interp1d
+ import math
+
+
+ # In[ ]:
+
+
+ #############################################
+ #
+ # User settings controlling the figure aspect
+ #
+ z_max_pk = 46000       # highest redshift involved
+ k_per_decade = 400     # number of k values, controls final resolution
+ k_min_tau0 = 40.       # this value controls the minimum k value in the figure (it is k_min * tau0)
+ P_k_max_inv_Mpc = 1.0  # this value is directly the maximum k value in the figure in 1/Mpc
+ tau_num_early = 2000   # number of conformal time values before recombination, controls final resolution
+ tau_num_late = 200     # number of conformal time values after recombination, controls final resolution
+ tau_ini = 10.          # first value of conformal time in Mpc
+ tau_label_Hubble = 20. # value of time at which we want to place the label on Hubble crossing
+ tau_label_ks = 40.     # value of time at which we want to place the label on sound horizon crossing
+ tau_label_kd = 230.    # value of time at which we want to place the label on damping scale crossing
+ #
+ # Cosmological parameters and other CLASS parameters
+ #
+ common_settings = {# which output? transfer functions only
+                    'output':'mTk',
+                    # LambdaCDM parameters
+                    'h':0.67556,
+                    'omega_b':0.022032,
+                    'omega_cdm':0.12038,
+                    'A_s':2.215e-9,
+                    'n_s':0.9619,
+                    'tau_reio':0.0925,
+                    # Take fixed value for primordial Helium (instead of automatic BBN adjustment)
+                    'YHe':0.246,
+                    # other output and precision parameters
+                    'z_max_pk':z_max_pk,
+                    'k_per_decade_for_pk':k_per_decade,
+                    'k_per_decade_for_bao':k_per_decade,
+                    'k_min_tau0':k_min_tau0, # this value controls the minimum k value in the figure
+                    'perturbations_sampling_stepsize':'0.05',
+                    'P_k_max_1/Mpc':P_k_max_inv_Mpc,
+                    'compute damping scale':'yes', # needed to output and plot Silk damping scale
+                    'gauge':'newtonian',
+                    'matter_source_in_current_gauge':'yes'}
+
+ ###############
+ #
+ # call CLASS
+ #
+ ###############
+ M = Class()
+ M.set(common_settings)
+ M.compute()
+ #
+ # define conformal time sampling array
+ #
+ times = M.get_current_derived_parameters(['tau_rec','conformal_age'])
+ tau_rec = times['tau_rec']
+ tau_0 = times['conformal_age']
+ tau1 = np.logspace(math.log10(tau_ini),math.log10(tau_rec),tau_num_early)
+ tau2 = np.logspace(math.log10(tau_rec),math.log10(tau_0),tau_num_late)[1:]
+ tau2[-1] *= 0.999 # this tiny shift avoids interpolation errors
+ tau = np.concatenate((tau1,tau2))
+ tau_num = len(tau)
+
+ #
+ # use table of background and thermodynamics quantities to define some functions
+ # returning some characteristic scales
+ # (of Hubble crossing, sound horizon crossing, etc.) at different times
+ #
+ background = M.get_background() # load background table
+ #print(background.keys())
+ thermodynamics = M.get_thermodynamics() # load thermodynamics table
+ #print(thermodynamics.keys())
+ #
+ background_tau = background['conf. time [Mpc]'] # read conformal times in background table
+ background_z = background['z'] # read redshift
+ background_aH = 2.*math.pi*background['H [1/Mpc]']/(1.+background['z'])/M.h() # read 2pi * aH in [h/Mpc]
+ background_ks = 2.*math.pi/background['comov.snd.hrz.']/M.h() # read 2pi/(comoving sound horizon) in [h/Mpc]
+ background_rho_m_over_r =\
+     (background['(.)rho_b']+background['(.)rho_cdm'])\
+     /(background['(.)rho_g']+background['(.)rho_ur']) # read rho_m / rho_r (to find time of equality)
+ background_rho_l_over_m =\
+     background['(.)rho_lambda']\
+     /(background['(.)rho_b']+background['(.)rho_cdm']) # read rho_lambda / rho_m (to find time of equality)
+ thermodynamics_tau = thermodynamics['conf. time [Mpc]'] # read conformal times in thermodynamics table
+ thermodynamics_kd = 2.*math.pi/thermodynamics['r_d']/M.h() # read 2pi/(comoving diffusion scale) in [h/Mpc]
+ #
+ # define a bunch of interpolation functions based on previous quantities
+ #
+ background_z_at_tau = interp1d(background_tau,background_z)
+ background_aH_at_tau = interp1d(background_tau,background_aH)
+ background_ks_at_tau = interp1d(background_tau,background_ks)
+ background_tau_at_mr = interp1d(background_rho_m_over_r,background_tau)
+ background_tau_at_lm = interp1d(background_rho_l_over_m,background_tau)
+ thermodynamics_kd_at_tau = interp1d(thermodynamics_tau, thermodynamics_kd)
+ #
+ # infer arrays of characteristic quantities calculated at values of conformal time in tau array
+ #
+ aH = background_aH_at_tau(tau)
+ ks = background_ks_at_tau(tau)
+ kd = thermodynamics_kd_at_tau(tau)
+ #
+ # infer times of R/M and M/Lambda equalities
+ #
+ tau_eq = background_tau_at_mr(1.)
+ tau_lambda = background_tau_at_lm(1.)
+ #
+ # check and inform user whether initial arbitrary choice of z_max_pk was OK
+ max_z_needed = background_z_at_tau(tau[0])
+ if max_z_needed > z_max_pk:
+     print('you must increase the value of z_max_pk to at least ',max_z_needed)
+     raise SystemExit(1) # stop the script execution here
+ else:
+     print('in a next run with the same values of tau, you may decrease z_max_pk from ',z_max_pk,' to ',max_z_needed)
+ #
+ # get transfer functions at each time and build arrays Theta0(tau,k) and phi(tau,k)
+ #
+ for i in range(tau_num):
+     one_time = M.get_transfer(background_z_at_tau(tau[i])) # transfer functions at each time tau
+     if i == 0: # if this is the first time in the loop: create the arrays (k, Theta0, phi)
+         k = one_time['k (h/Mpc)']
+         k_num = len(k)
+         Theta0 = np.zeros((tau_num,k_num))
+         phi = np.zeros((tau_num,k_num))
+     Theta0[i,:] = 0.25*one_time['d_g'][:]
+     phi[i,:] = one_time['phi'][:]
+ #
+ # find the global extrema of Theta0(tau,k) and phi(tau,k), used to define the color code later
+ #
+ Theta_amp = max(Theta0.max(),-Theta0.min())
+ phi_amp = max(phi.max(),-phi.min())
+ #
+ # reshaping of (k,tau) necessary to call the function 'pcolormesh'
+ #
+ K,T = np.meshgrid(k,tau)
+ #
+ # inform user of the size of the grids (related to the figure resolution)
+ #
+ print('grid size:',len(k),len(tau),Theta0.shape)
+ #
+ #################
+ #
+ # start plotting
+ #
+ #################
+ #
+ fig = plt.figure(figsize=(18,8))
+ #
+ # plot Theta0(k,tau)
+ #
+ ax_Theta = fig.add_subplot(121)
+ print('> Plotting Theta_0')
+ fig_Theta = ax_Theta.pcolormesh(K,T,Theta0,cmap='coolwarm',vmin=-Theta_amp,vmax=Theta_amp,shading='auto')
+ print('> Done')
+ #
+ # plot lines (characteristic times and scales)
+ #
+ ax_Theta.axhline(y=tau_rec,color='k',linestyle='-')
+ ax_Theta.axhline(y=tau_eq,color='k',linestyle='-')
+ ax_Theta.axhline(y=tau_lambda,color='k',linestyle='-')
+ ax_Theta.plot(aH,tau,'r-',linewidth=2)
+ ax_Theta.plot(ks,tau,color='#FFFF33',linestyle='-',linewidth=2)
+ ax_Theta.plot(kd,tau,'b-',linewidth=2)
+ #
+ # dealing with labels
+ #
+ ax_Theta.set_title(r'$\Theta_0$')
+ ax_Theta.text(1.5*k[0],0.9*tau_rec,r'$\mathrm{rec.}$')
+ ax_Theta.text(1.5*k[0],0.9*tau_eq,r'$\mathrm{R/M} \,\, \mathrm{eq.}$')
+ ax_Theta.text(1.5*k[0],0.9*tau_lambda,r'$\mathrm{M/L} \,\, \mathrm{eq.}$')
+ ax_Theta.annotate(r'$\mathrm{Hubble} \,\, \mathrm{cross.}$',
+                   xy=(background_aH_at_tau(tau_label_Hubble),tau_label_Hubble),
+                   xytext=(0.1*background_aH_at_tau(tau_label_Hubble),0.8*tau_label_Hubble),
+                   arrowprops=dict(facecolor='black', shrink=0.05, width=1, headlength=5, headwidth=5))
+ ax_Theta.annotate(r'$\mathrm{sound} \,\, \mathrm{horizon} \,\, \mathrm{cross.}$',
+                   xy=(background_ks_at_tau(tau_label_ks),tau_label_ks),
+                   xytext=(0.07*background_aH_at_tau(tau_label_ks),0.8*tau_label_ks),
+                   arrowprops=dict(facecolor='black', shrink=0.05, width=1, headlength=5, headwidth=5))
+ ax_Theta.annotate(r'$\mathrm{damping} \,\, \mathrm{scale} \,\, \mathrm{cross.}$',
+                   xy=(thermodynamics_kd_at_tau(tau_label_kd),tau_label_kd),
+                   xytext=(0.2*thermodynamics_kd_at_tau(tau_label_kd),2.0*tau_label_kd),
+                   arrowprops=dict(facecolor='black', shrink=0.05, width=1, headlength=5, headwidth=5))
+ #
+ # dealing with axes
+ #
+ ax_Theta.set_xlim(k[0],k[-1])
+ ax_Theta.set_xscale('log')
+ ax_Theta.set_yscale('log')
+ ax_Theta.set_xlabel(r'$k \,\,\, \mathrm{[h/Mpc]}$')
+ ax_Theta.set_ylabel(r'$\tau \,\,\, \mathrm{[Mpc]}$')
+ ax_Theta.invert_yaxis()
+ #
+ # color legend
+ #
+ fig.colorbar(fig_Theta)
+ #
+ # plot phi(k,tau)
+ #
+ ax_phi = fig.add_subplot(122)
+ ax_phi.set_xlim(k[0],k[-1])
+ #ax_phi.pcolor(K,T,phi,cmap='coolwarm')
+ print('> Plotting phi')
+ fig_phi = ax_phi.pcolormesh(K,T,phi,cmap='coolwarm',vmin=-0.,vmax=phi_amp,shading='auto')
+ print('> Done')
+ #
+ # plot lines (characteristic times and scales)
+ #
+ ax_phi.axhline(y=tau_rec,color='k',linestyle='-')
+ ax_phi.axhline(y=tau_eq,color='k',linestyle='-')
+ ax_phi.axhline(y=tau_lambda,color='k',linestyle='-')
+ ax_phi.plot(aH,tau,'r-',linewidth=2)
+ ax_phi.plot(ks,tau,color='#FFFF33',linestyle='-',linewidth=2)
+ #
+ # dealing with labels
+ #
+ ax_phi.set_title(r'$\phi$')
+ ax_phi.text(1.5*k[0],0.9*tau_rec,r'$\mathrm{rec.}$')
+ ax_phi.text(1.5*k[0],0.9*tau_eq,r'$\mathrm{R/M} \,\, \mathrm{eq.}$')
+ ax_phi.text(1.5*k[0],0.9*tau_lambda,r'$\mathrm{M/L} \,\, \mathrm{eq.}$')
+ ax_phi.annotate(r'$\mathrm{Hubble} \,\, \mathrm{cross.}$',
+                 xy=(background_aH_at_tau(tau_label_Hubble),tau_label_Hubble),
+                 xytext=(0.1*background_aH_at_tau(tau_label_Hubble),0.8*tau_label_Hubble),
+                 arrowprops=dict(facecolor='black', shrink=0.05, width=1, headlength=5, headwidth=5))
+ ax_phi.annotate(r'$\mathrm{sound} \,\, \mathrm{horizon} \,\, \mathrm{cross.}$',
+                 xy=(background_ks_at_tau(tau_label_ks),tau_label_ks),
+                 xytext=(0.07*background_aH_at_tau(tau_label_ks),0.8*tau_label_ks),
+                 arrowprops=dict(facecolor='black', shrink=0.05, width=1, headlength=5, headwidth=5))
+ #
+ # dealing with axes
+ #
+ ax_phi.set_xscale('log')
+ ax_phi.set_yscale('log')
+ ax_phi.set_xlabel(r'$k \,\,\, \mathrm{[h/Mpc]}$')
+ ax_phi.set_ylabel(r'$\tau \,\,\, \mathrm{[Mpc]}$')
+ ax_phi.invert_yaxis()
+ #
+ # color legend
+ #
+ fig.colorbar(fig_phi)
+ #
+ # produce and save the figure
+ #
+ #plt.show()
+ plt.savefig('many_times.png',dpi=300)
+
+
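The script above finds the time of radiation/matter equality by inverting the density ratio with `interp1d` and evaluating at ratio = 1. A self-contained sketch of that trick, using a toy stand-in for the CLASS background table (the linear ratio and the crossing time of 130 Mpc are purely illustrative numbers, not CLASS output):

```python
import numpy as np
from scipy.interpolate import interp1d

# Toy stand-in for the background table: in this toy model the ratio
# rho_m/rho_r grows linearly with conformal time and crosses 1 at tau = 130.
tau = np.logspace(0, 4, 500)   # conformal time grid [Mpc], illustrative
rho_m_over_r = tau / 130.0     # toy density ratio, monotonically increasing

# same trick as in the script: interpolate tau as a function of the ratio,
# then evaluate at ratio = 1 to get the time of equality
tau_at_mr = interp1d(rho_m_over_r, tau)
tau_eq = float(tau_at_mr(1.0))
print(tau_eq)  # ~130 for this toy model
```

Swapping the axes before interpolating is what makes the inversion a one-liner; it only works because the ratio is monotonic in tau, which holds for the real background table as well.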
class-data/neutrinohierarchy.py ADDED
@@ -0,0 +1,109 @@
+ #!/usr/bin/env python
+ # coding: utf-8
+
+ # In[ ]:
+
+
+ # import necessary modules
+ # uncomment to get plots displayed in notebook
+ get_ipython().run_line_magic('matplotlib', 'inline')
+ import matplotlib
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from classy import Class
+ from scipy.optimize import fsolve
+
+
+ # In[ ]:
+
+
+ # a function returning the three masses given the Delta m^2, the total mass, and the hierarchy (e.g. 'NH' or 'IH')
+ # taken from a piece of MontePython written by Thejs Brinckmann
+ def get_masses(delta_m_squared_atm, delta_m_squared_sol, sum_masses, hierarchy):
+     # any string containing the letter 'n' will be considered as referring to normal hierarchy
+     if 'n' in hierarchy.lower():
+         # Normal hierarchy massive neutrinos. Calculates the individual
+         # neutrino masses from M_tot_NH and deletes M_tot_NH
+         #delta_m_squared_atm=2.45e-3
+         #delta_m_squared_sol=7.50e-5
+         m1_func = lambda m1, M_tot, d_m_sq_atm, d_m_sq_sol: M_tot**2. + 0.5*d_m_sq_sol - d_m_sq_atm + m1**2. - 2.*M_tot*m1 - 2.*M_tot*(d_m_sq_sol+m1**2.)**0.5 + 2.*m1*(d_m_sq_sol+m1**2.)**0.5
+         m1,opt_output,success,output_message = fsolve(m1_func,sum_masses/3.,(sum_masses,delta_m_squared_atm,delta_m_squared_sol),full_output=True)
+         m1 = m1[0]
+         m2 = (delta_m_squared_sol + m1**2.)**0.5
+         m3 = (delta_m_squared_atm + 0.5*(m2**2. + m1**2.))**0.5
+         return m1,m2,m3
+     else:
+         # Inverted hierarchy massive neutrinos. Calculates the individual
+         # neutrino masses from M_tot_IH and deletes M_tot_IH
+         #delta_m_squared_atm=-2.45e-3
+         #delta_m_squared_sol=7.50e-5
+         delta_m_squared_atm = -delta_m_squared_atm
+         m1_func = lambda m1, M_tot, d_m_sq_atm, d_m_sq_sol: M_tot**2. + 0.5*d_m_sq_sol - d_m_sq_atm + m1**2. - 2.*M_tot*m1 - 2.*M_tot*(d_m_sq_sol+m1**2.)**0.5 + 2.*m1*(d_m_sq_sol+m1**2.)**0.5
+         m1,opt_output,success,output_message = fsolve(m1_func,sum_masses/3.,(sum_masses,delta_m_squared_atm,delta_m_squared_sol),full_output=True)
+         m1 = m1[0]
+         m2 = (delta_m_squared_sol + m1**2.)**0.5
+         m3 = (delta_m_squared_atm + 0.5*(m2**2. + m1**2.))**0.5
+         return m1,m2,m3
+
+
+ # In[ ]:
+
+
+ # test of this function, returning the 3 masses for a total mass of 0.1 eV
+ m1,m2,m3 = get_masses(2.45e-3,7.50e-5,0.1,'NH')
+ print('NH:',m1,m2,m3,m1+m2+m3)
+ m1,m2,m3 = get_masses(2.45e-3,7.50e-5,0.1,'IH')
+ print('IH:',m1,m2,m3,m1+m2+m3)
+
+
+ # In[ ]:
+
+
+ # The goal of this cell is to compute the ratio of P(k) for NH and IH with the same total mass
+ commonsettings = {'N_ur':0,
+                   'N_ncdm':3,
+                   'output':'mPk',
+                   'P_k_max_1/Mpc':3.0,
+                   # ncdm_fluid_approximation = 3 gives higher precision (but significantly slower running)
+                   'ncdm_fluid_approximation':3,
+                   # background_verbose = 1 gives more info on the ncdm sector from Class
+                   'background_verbose':1
+                   }
+
+ # array of k values in 1/Mpc
+ kvec = np.logspace(-4,np.log10(3),100)
+ # array for storing legend
+ legarray = []
+
+ # loop over total mass values
+ for sum_masses in [0.1, 0.115, 0.13]:
+     # normal hierarchy
+     [m1, m2, m3] = get_masses(2.45e-3,7.50e-5, sum_masses, 'NH')
+     NH = Class()
+     NH.set(commonsettings)
+     NH.set({'m_ncdm':str(m1)+','+str(m2)+','+str(m3)})
+     NH.compute()
+     # inverted hierarchy
+     [m1, m2, m3] = get_masses(2.45e-3,7.50e-5, sum_masses, 'IH')
+     IH = Class()
+     IH.set(commonsettings)
+     IH.set({'m_ncdm':str(m1)+','+str(m2)+','+str(m3)})
+     IH.compute()
+     pkNH = []
+     pkIH = []
+     for k in kvec:
+         pkNH.append(NH.pk(k,0.))
+         pkIH.append(IH.pk(k,0.))
+     NH.struct_cleanup()
+     IH.struct_cleanup()
+     # extract h value to convert k from 1/Mpc to h/Mpc
+     h = NH.h()
+     plt.semilogx(kvec/h,1-np.array(pkNH)/np.array(pkIH))
+     legarray.append(r'$\Sigma m_i = '+str(sum_masses)+'$eV')
+ plt.axhline(0,color='k')
+ plt.xlim(kvec[0]/h,kvec[-1]/h)
+ plt.xlabel(r'$k [h \mathrm{Mpc}^{-1}]$')
+ plt.ylabel(r'$1-P(k)^\mathrm{NH}/P(k)^\mathrm{IH}$')
+ plt.legend(legarray)
+ plt.savefig('neutrinohierarchy.pdf')
+
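The root-finding in `get_masses` can be cross-checked standalone: for the normal hierarchy it looks for m1 such that the three masses reproduce the requested total while satisfying the two mass-squared splittings. A self-contained sketch of that same constraint system, written directly as a sum residual rather than the expanded polynomial used above (algebraically equivalent for positive masses):

```python
import numpy as np
from scipy.optimize import fsolve

# same splittings and total mass as in the test cell above (eV^2, eV)
d_atm, d_sol, M_tot = 2.45e-3, 7.50e-5, 0.1

def residual(m1):
    # normal-hierarchy constraints: m2^2 - m1^2 = d_sol and
    # m3^2 - (m1^2 + m2^2)/2 = d_atm; require the masses to sum to M_tot
    m2 = np.sqrt(d_sol + m1**2)
    m3 = np.sqrt(d_atm + 0.5 * (m1**2 + m2**2))
    return m1 + m2 + m3 - M_tot

m1 = fsolve(residual, M_tot / 3.0)[0]
m2 = np.sqrt(d_sol + m1**2)
m3 = np.sqrt(d_atm + 0.5 * (m1**2 + m2**2))
print(m1, m2, m3, m1 + m2 + m3)  # three ordered masses summing to 0.1 eV
```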
class-data/one_k.py ADDED
@@ -0,0 +1,168 @@
+ #!/usr/bin/env python
+ # coding: utf-8
+
+ # In[ ]:
+
+
+ # import necessary modules
+ # uncomment to get plots displayed in notebook
+ get_ipython().run_line_magic('matplotlib', 'inline')
+ import matplotlib
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from classy import Class
+ from scipy.optimize import fsolve
+ from scipy.interpolate import interp1d
+ import math
+
+
+ # In[ ]:
+
+
+ #############################################
+ #
+ # value of k that we want to follow in [1/Mpc]
+ #
+ k = 0.5 # 1/Mpc
+ #
+ # Cosmological parameters and other CLASS parameters
+ #
+ common_settings = {# we need to set the output field to something although
+                    # the really relevant output here will be set with 'k_output_values'
+                    'output':'mPk',
+                    # value of k we want to plot in [1/Mpc]
+                    'k_output_values':k,
+                    # LambdaCDM parameters
+                    'h':0.67810,
+                    'omega_b':0.02238280,
+                    'omega_cdm':0.1201075,
+                    'A_s':2.100549e-09,
+                    'n_s':0.9660499,
+                    'tau_reio':0.05430842,
+                    # Take fixed value for primordial Helium (instead of automatic BBN adjustment)
+                    'YHe':0.2454,
+                    # other options and settings
+                    'compute damping scale':'yes', # needed to output the time of damping scale crossing
+                    'gauge':'newtonian'}
+ ##############
+ #
+ # call CLASS
+ #
+ M = Class()
+ M.set(common_settings)
+ M.compute()
+ #
+ # load perturbations
+ #
+ all_k = M.get_perturbations() # this potentially contains scalars/tensors and all k values
+ print(all_k['scalar'][0].keys())
+ #
+ one_k = all_k['scalar'][0] # this contains only the scalar perturbations for the requested k values
+ #
+ tau = one_k['tau [Mpc]']
+ Theta0 = 0.25*one_k['delta_g']
+ phi = one_k['phi']
+ psi = one_k['psi']
+ theta_b = one_k['theta_b']
+ a = one_k['a']
+ # compute related quantities
+ R = 3./4.*M.Omega_b()/M.Omega_g()*a # R = 3/4 * (rho_b/rho_gamma)
+ zero_point = -(1.+R)*psi # zero point of oscillations: -(1.+R)*psi
+ #
+ # get Theta0 oscillation amplitude (for vertical scale of plot)
+ #
+ Theta0_amp = max(Theta0.max(),-Theta0.min())
+ #
+ # get the time of decoupling
+ #
+ quantities = M.get_current_derived_parameters(['tau_rec'])
+ #print(quantities.keys())
+ tau_rec = quantities['tau_rec']
+ #
+ # use table of background quantities to find the time of
+ # Hubble crossing (k / (aH) = 2 pi), sound horizon crossing (k * rs = 2pi)
+ #
+ background = M.get_background() # load background table
+ #print(background.keys())
87
+ #
88
+ background_tau = background['conf. time [Mpc]'] # read confromal times in background table
89
+ background_z = background['z'] # read redshift
90
+ background_k_over_aH = k/background['H [1/Mpc]']*(1.+background['z']) # read k/aH = k(1+z)/H
91
+ background_k_rs = k * background['comov.snd.hrz.'] # read k * rs
92
+ background_rho_m_over_r =\
93
+ (background['(.)rho_b']+background['(.)rho_cdm'])\
94
+ /(background['(.)rho_g']+background['(.)rho_ur']) # read rho_r / rho_m (to find time of equality)
95
+ #
96
+ # define interpolation functions; we want the value of tau when the argument is equal to 2pi (or 1 for equality)
97
+ #
98
+ tau_at_k_over_aH = interp1d(background_k_over_aH,background_tau)
99
+ tau_at_k_rs = interp1d(background_k_rs,background_tau)
100
+ tau_at_rho_m_over_r = interp1d(background_rho_m_over_r,background_tau)
101
+ #
102
+ # finally get these times
103
+ #
104
+ tau_Hubble = tau_at_k_over_aH(2.*math.pi)
105
+ tau_s = tau_at_k_rs(2.*math.pi)
106
+ tau_eq = tau_at_rho_m_over_r(1.)
107
+ #
108
+ #################
109
+ #
110
+ # start plotting
111
+ #
112
+ #################
113
+ #
114
+ plt.xlim([tau[0],tau_rec*1.3])
115
+ plt.ylim([-1.3*Theta0_amp,1.3*Theta0_amp])
116
+ plt.xlabel(r'$\tau \,\,\, \mathrm{[Mpc]}$')
117
+ plt.title(r'$\mathrm{Transfer} (\tau,k) \,\,\, \mathrm{for} \,\,\, k=%g \,\,\, [1/\mathrm{Mpc}]$'%k)
118
+ plt.grid()
119
+ #
120
+ plt.axvline(x=tau_Hubble,color='r')
121
+ plt.axvline(x=tau_s,color='y')
122
+ plt.axvline(x=tau_eq,color='k')
123
+ plt.axvline(x=tau_rec,color='k')
124
+ #
125
+ plt.annotate(r'Hubble cross.',
126
+ xy=(tau_Hubble,1.08*Theta0_amp),
127
+ xytext=(0.15*tau_Hubble,1.18*Theta0_amp),
128
+ arrowprops=dict(facecolor='black', shrink=0.05, width=1, headlength=5, headwidth=5))
129
+ plt.annotate(r'sound hor. cross.',
130
+ xy=(tau_s,-1.0*Theta0_amp),
131
+ xytext=(1.5*tau_s,-1.2*Theta0_amp),
132
+ arrowprops=dict(facecolor='black', shrink=0.05, width=1, headlength=5, headwidth=5))
133
+ plt.annotate(r'eq.',
134
+ xy=(tau_eq,1.08*Theta0_amp),
135
+ xytext=(0.45*tau_eq,1.18*Theta0_amp),
136
+ arrowprops=dict(facecolor='black', shrink=0.05, width=1, headlength=5, headwidth=5))
137
+ plt.annotate(r'rec.',
138
+ xy=(tau_rec,1.08*Theta0_amp),
139
+ xytext=(0.45*tau_rec,1.18*Theta0_amp),
140
+ arrowprops=dict(facecolor='black', shrink=0.05, width=1, headlength=5, headwidth=5))
141
+ #
142
+ # Possibility to add functions one by one, saving between each (for slides)
143
+ #
144
+ plt.semilogx(tau,psi,'y-',label=r'$\psi$')
145
+ #plt.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
146
+ #plt.savefig('one_k_1.pdf',bbox_inches='tight')
147
+ #
148
+ plt.semilogx(tau,phi,'r-',label=r'$\phi$')
149
+ #plt.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
150
+ #plt.savefig('one_k_2.pdf',bbox_inches='tight')
151
+ #
152
+ plt.semilogx(tau,zero_point,'k:',label=r'$-(1+R)\psi$')
153
+ #plt.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
154
+ #plt.savefig('one_k_3.pdf',bbox_inches='tight')
155
+ #
156
+ plt.semilogx(tau,Theta0,'b-',linewidth=2,label=r'$\Theta_0$')
157
+ #plt.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
158
+ #plt.savefig('one_k_4.pdf',bbox_inches='tight')
159
+ #
160
+ plt.semilogx(tau,Theta0+psi,'c-',linewidth=2,label=r'$\Theta_0+\psi$')
161
+ #plt.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
162
+ #plt.savefig('one_k_5.pdf',bbox_inches='tight')
163
+ #
164
+ plt.semilogx(tau,theta_b,'g-',label=r'$\theta_b$')
165
+ plt.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
166
+ plt.savefig('one_k.pdf',bbox_inches='tight')
167
+ #
168
+
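The crossing times in this script are found by numerically inverting monotonic background relations with `interp1d`: tabulate the quantity of interest against conformal time, then swap the roles of x and y so the interpolator returns the time at which the quantity reaches a target value (2π for the crossings, 1 for equality). A standalone illustration with toy data (not CLASS output):

```python
import numpy as np
from scipy.interpolate import interp1d

# toy monotonic relation standing in for k/(aH) as a function of conformal time
tau = np.linspace(1., 100., 1000)   # conformal time grid [Mpc]
f = tau**2 / 50.                    # grows monotonically with tau

# invert numerically: interpolate tau as a function of f
tau_at_f = interp1d(f, tau)
tau_cross = float(tau_at_f(2.*np.pi))  # "crossing" time where f = 2*pi

# analytic inverse for comparison: tau = sqrt(50 * f)
print(tau_cross, np.sqrt(50.*2.*np.pi))
```

This only works because the relation is monotonic over the tabulated range, which is why the script can feed the background columns to `interp1d` directly.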
class-data/one_time.py ADDED
@@ -0,0 +1,286 @@
+ #!/usr/bin/env python
+ # coding: utf-8
+
+ # In[ ]:
+
+
+ # import necessary modules
+ from classy import Class
+ from math import pi
+ get_ipython().run_line_magic('matplotlib', 'inline')
+ import matplotlib
+ import matplotlib.pyplot as plt
+
+
+ # In[ ]:
+
+
+ #####################################################
+ #
+ # Cosmological parameters and other CLASS parameters
+ #
+ #####################################################
+ common_settings = {# LambdaCDM parameters
+                    'h':0.67810,
+                    'omega_b':0.02238280,
+                    'omega_cdm':0.12038,
+                    'A_s':2.100549e-09,
+                    'n_s':0.9660499,
+                    'tau_reio':0.05430842,
+                    # output and precision parameters
+                    'output':'tCl,mTk,vTk',
+                    'l_max_scalars':5000,
+                    'P_k_max_1/Mpc':10.0,
+                    'gauge':'newtonian'
+                   }
+
+
+ # In[ ]:
+
+
+ ###############
+ #
+ # call CLASS a first time just to compute z_rec (will compute transfer functions at default: z=0)
+ #
+ ###############
+ M = Class()
+ M.set(common_settings)
+ M.compute()
+ derived = M.get_current_derived_parameters(['z_rec','tau_rec','conformal_age'])
+ print (derived.keys())
+ z_rec = derived['z_rec']
+ z_rec = int(1000.*z_rec)/1000. # round down to three digits after the comma
+ print ('z_rec=',z_rec)
+ #
+ # In the last figure the x-axis will show l/(tau_0-tau_rec), so we need (tau_0-tau_rec) in units of [Mpc/h]
+ #
+ tau_0_minus_tau_rec_hMpc = (derived['conformal_age']-derived['tau_rec'])*M.h()
+
+
+ # In[ ]:
+
+
+ ################
+ #
+ # call CLASS again for the perturbations (will compute transfer functions at input value z_rec)
+ #
+ ################
+ M.empty() # reset input parameters to default, before passing a new parameter set
+ M.set(common_settings)
+ M.set({'z_pk':z_rec})
+ M.compute()
+ #
+ # save the total Cl's (we will plot them in the last step)
+ #
+ cl_tot = M.raw_cl(5000)
+ #
+ #
+ # load transfer functions at recombination
+ #
+ one_time = M.get_transfer(z_rec)
+ print (one_time.keys())
+ k = one_time['k (h/Mpc)']
+ Theta0 = 0.25*one_time['d_g']
+ phi = one_time['phi']
+ psi = one_time['psi']
+ theta_b = one_time['t_b']
+ # compute related quantities
+ R = 3./4.*M.Omega_b()/M.Omega_g()/(1+z_rec) # R = 3/4 * (rho_b/rho_gamma) at z_rec
+ zero_point = -(1.+R)*psi # zero point of oscillations: -(1.+R)*psi
+ Theta0_amp = max(Theta0.max(),-Theta0.min()) # Theta0 oscillation amplitude (for vertical scale of plot)
+ print ('At z_rec: R=',R,', Theta0_amp=',Theta0_amp)
+
+
+ # In[ ]:
+
+
+ # use table of background quantities to find the wavenumbers corresponding to
+ # Hubble crossing (k = 2 pi a H), sound horizon crossing (k = 2 pi / rs)
+ #
+ background = M.get_background() # load background table
+ print (background.keys())
+ #
+ background_tau = background['conf. time [Mpc]'] # read conformal times in background table
+ background_z = background['z'] # read redshift
+ background_kh = 2.*pi*background['H [1/Mpc]']/(1.+background['z'])/M.h() # kh = 2pi aH = 2pi H/(1+z), converted to [h/Mpc]
+ background_ks = 2.*pi/background['comov.snd.hrz.']/M.h() # ks = 2pi/rs, converted to [h/Mpc]
+ #
+ # define interpolation functions; we want the value of each scale at the time tau_rec
+ #
+ from scipy.interpolate import interp1d
+ kh_at_tau = interp1d(background_tau,background_kh)
+ ks_at_tau = interp1d(background_tau,background_ks)
+ #
+ # finally get these scales
+ #
+ tau_rec = derived['tau_rec']
+ kh = kh_at_tau(tau_rec)
+ ks = ks_at_tau(tau_rec)
+ print ('at tau_rec=',tau_rec,', kh=',kh,', ks=',ks)
+
+
+ # In[ ]:
+
+
+ #####################
+ #
+ # call CLASS with TSW (intrinsic temperature + Sachs-Wolfe) and save
+ #
+ #####################
+ M.empty() # clean input
+ M.set(common_settings) # new input
+ M.set({'temperature contributions':'tsw'})
+ M.compute()
+ cl_TSW = M.raw_cl(5000)
+
+
+ # In[ ]:
+
+
+ ######################
+ #
+ # call CLASS with early ISW and save
+ #
+ ######################
+ M.empty()
+ M.set(common_settings)
+ M.set({'temperature contributions':'eisw'})
+ M.compute()
+ cl_eISW = M.raw_cl(5000)
+
+
+ # In[ ]:
+
+
+ ######################
+ #
+ # call CLASS with late ISW and save
+ #
+ ######################
+ M.empty()
+ M.set(common_settings)
+ M.set({'temperature contributions':'lisw'})
+ M.compute()
+ cl_lISW = M.raw_cl(5000)
+
+
+ # In[ ]:
+
+
+ ######################
+ #
+ # call CLASS with Doppler and save
+ #
+ ######################
+ M.empty()
+ M.set(common_settings)
+ M.set({'temperature contributions':'dop'})
+ M.compute()
+ cl_Doppler = M.raw_cl(5000)
+
+
+ # In[ ]:
+
+
+ #################
+ #
+ # start plotting
+ #
+ #################
+ #
+ fig, (ax_Tk, ax_Tk2, ax_Cl) = plt.subplots(3,sharex=True,figsize=(8,12))
+ fig.subplots_adjust(hspace=0)
+ ##################
+ #
+ # first figure with transfer functions
+ #
+ ##################
+ ax_Tk.set_xlim([3.e-4,0.5])
+ ax_Tk.set_ylim([-1.1*Theta0_amp,1.1*Theta0_amp])
+ ax_Tk.tick_params(axis='x',which='both',bottom='off',top='on',labelbottom='off',labeltop='on')
+ ax_Tk.set_xlabel(r'$\mathrm{k} \,\,\, \mathrm{[h/Mpc]}$')
+ ax_Tk.set_ylabel(r'$\mathrm{Transfer}(\tau_\mathrm{dec},k)$')
+ ax_Tk.xaxis.set_label_position('top')
+ ax_Tk.grid()
+ #
+ ax_Tk.axvline(x=kh,color='r')
+ ax_Tk.axvline(x=ks,color='y')
+ #
+ ax_Tk.annotate(r'Hubble cross.',
+                xy=(kh,0.8*Theta0_amp),
+                xytext=(0.15*kh,0.9*Theta0_amp),
+                arrowprops=dict(facecolor='black', shrink=0.05, width=1, headlength=5, headwidth=5))
+ ax_Tk.annotate(r'sound hor. cross.',
+                xy=(ks,0.8*Theta0_amp),
+                xytext=(1.3*ks,0.9*Theta0_amp),
+                arrowprops=dict(facecolor='black', shrink=0.05, width=1, headlength=5, headwidth=5))
+ #
+ ax_Tk.semilogx(k,psi,'y-',label=r'$\psi$')
+ ax_Tk.semilogx(k,phi,'r-',label=r'$\phi$')
+ ax_Tk.semilogx(k,zero_point,'k:',label=r'$-(1+R)\psi$')
+ ax_Tk.semilogx(k,Theta0,'b-',label=r'$\Theta_0$')
+ ax_Tk.semilogx(k,(Theta0+psi),'c',label=r'$\Theta_0+\psi$')
+ ax_Tk.semilogx(k,theta_b,'g-',label=r'$\theta_b$')
+ #
+ ax_Tk.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
+ #######################
+ #
+ # second figure with transfer functions squared
+ #
+ #######################
+ ax_Tk2.set_xlim([3.e-4,0.5])
+ ax_Tk2.tick_params(axis='x',which='both',bottom='off',top='off',labelbottom='off',labeltop='off')
+ ax_Tk2.set_ylabel(r'$\mathrm{Transfer}(\tau_\mathrm{dec},k)^2$')
+ ax_Tk2.grid()
+ #
+ ax_Tk2.semilogx(k,(Theta0+psi)**2,'c',label=r'$(\Theta_0+\psi)^2$')
+ #
+ ax_Tk2.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
+ ########################
+ #
+ # third figure with all contributions to Cls
+ #
+ # We already computed each contribution (TSW, early ISW, late ISW, Doppler, total).
+ # Note that there is another contribution from polarisation. We don't plot it because it is
+ # too small to be seen; however, it is included by default in the total.
+ #
+ # After each step we will save the figure (to get intermediate figures that can be used in slides)
+ #
+ #########################
+ # presentation settings
+ ax_Cl.set_xlim([3.e-4,0.5])
+ ax_Cl.set_ylim([0.,8.])
+ ax_Cl.set_xlabel(r'$\ell/(\tau_0-\tau_{rec}) \,\,\, \mathrm{[h/Mpc]}$')
+ ax_Cl.set_ylabel(r'$\ell (\ell+1) C_l^{TT} / 2 \pi \,\,\, [\times 10^{10}]$')
+ ax_Cl.tick_params(axis='x',which='both',bottom='on',top='off',labelbottom='on',labeltop='off')
+ ax_Cl.grid()
+ #
+ # plot and save with TSW
+ #
+ ax_Cl.semilogx(cl_TSW['ell']/tau_0_minus_tau_rec_hMpc,1.e10*cl_TSW['ell']*(cl_TSW['ell']+1.)*cl_TSW['tt']/2./pi,'c-',label=r'$\mathrm{T+SW}$')
+ #
+ ax_Cl.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
+ fig.savefig('one_time_with_cl_1.pdf',bbox_inches='tight')
+ #
+ # plot and save with additionally early ISW and late ISW
+ #
+ ax_Cl.semilogx(cl_eISW['ell']/tau_0_minus_tau_rec_hMpc,1.e10*cl_eISW['ell']*(cl_eISW['ell']+1.)*cl_eISW['tt']/2./pi,'r-',label=r'$\mathrm{early} \,\, \mathrm{ISW}$')
+ ax_Cl.semilogx(cl_lISW['ell']/tau_0_minus_tau_rec_hMpc,1.e10*cl_lISW['ell']*(cl_lISW['ell']+1.)*cl_lISW['tt']/2./pi,'y-',label=r'$\mathrm{late} \,\, \mathrm{ISW}$')
+ #
+ ax_Cl.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
+ fig.savefig('one_time_with_cl_2.pdf',bbox_inches='tight')
+ #
+ # plot and save with additionally Doppler
+ #
+ ax_Cl.semilogx(cl_Doppler['ell']/tau_0_minus_tau_rec_hMpc,1.e10*cl_Doppler['ell']*(cl_Doppler['ell']+1.)*cl_Doppler['tt']/2./pi,'g-',label=r'$\mathrm{Doppler}$')
+ #
+ ax_Cl.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
+ fig.savefig('one_time_with_cl_3.pdf',bbox_inches='tight')
+ #
+ # plot and save with additionally total Cls
+ #
+ ax_Cl.semilogx(cl_tot['ell']/tau_0_minus_tau_rec_hMpc,1.e10*cl_tot['ell']*(cl_tot['ell']+1.)*cl_tot['tt']/2./pi,'k-',label=r'$\mathrm{Total}$')
+ #
+ ax_Cl.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
+ fig.savefig('one_time_with_cl_tot.pdf',bbox_inches='tight')
+
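The last panel of this script maps multipoles to wavenumbers through the flat-sky relation k ≈ ℓ/(τ₀ − τ_rec), with the distance expressed in Mpc/h so that k comes out in h/Mpc. A standalone check of the arithmetic, using illustrative numbers rather than CLASS output:

```python
import numpy as np

h = 0.6781
conformal_age = 14165.  # tau_0 in Mpc, illustrative value
tau_rec = 280.          # tau_rec in Mpc, illustrative value
tau_0_minus_tau_rec_hMpc = (conformal_age - tau_rec)*h  # comoving distance in Mpc/h

ell = np.array([2, 200, 2000])
k_eff = ell / tau_0_minus_tau_rec_hMpc  # corresponding wavenumbers in h/Mpc
print(k_eff)
```

This is why the C_ℓ curves can be overplotted on the same x-axis as the k-space transfer functions in the figure.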
class-data/test_hmcode.py ADDED
@@ -0,0 +1,68 @@
+ #!/usr/bin/env python
+ # coding: utf-8
+
+ # In[ ]:
+
+
+ # import classy module
+ from classy import Class
+ # uncomment to get plots displayed in notebook
+ get_ipython().run_line_magic('matplotlib', 'inline')
+ import matplotlib.pyplot as plt
+ from math import pi
+ import numpy as np
+
+
+ # In[ ]:
+
+
+ # create instance of the class "Class"
+ LambdaCDM = Class()
+ # pass input parameters
+ LambdaCDM.set({'omega_b':0.0223828,'omega_cdm':0.1201075,'h':0.67810,'A_s':2.100549e-09,'n_s':0.9660499,'tau_reio':0.05430842})
+ LambdaCDM.set({'output':'tCl,pCl,lCl,mPk','analytic_nowiggle':'yes','numerical_nowiggle':'yes','lensing':'yes'})
+ LambdaCDM.set({'P_k_max_1/Mpc':3.0,'z_max_pk':1.1})
+ LambdaCDM.set({'non_linear':'HMcode','hmcode_version':'2020'})
+ # run class
+ LambdaCDM.compute()
+
+
+ # In[ ]:
+
+
+ kk = np.logspace(-4,np.log10(3),1000) # k in h/Mpc
+ h = LambdaCDM.h() # get reduced Hubble for conversions to 1/Mpc
+ Pk = []       # non-linear P(k) in (Mpc/h)**3
+ Pk_lin = []   # linear P(k) in (Mpc/h)**3
+ Pk_nw = []    # numerical no-wiggle P(k) in (Mpc/h)**3
+ Pk_an_nw = [] # analytic no-wiggle P(k) in (Mpc/h)**3
+
+
+ # In[ ]:
+
+
+ # get P(k) at redshift z
+ z=0
+ for k in kk:
+     Pk_lin.append(LambdaCDM.pk_lin(k*h,z)*h**3)         # linear spectrum, function .pk_lin(k,z)
+     Pk_nw.append(LambdaCDM.pk_numerical_nw(k*h,z)*h**3) # numerical no-wiggle spectrum
+     Pk_an_nw.append(LambdaCDM.pk_analytic_nw(k*h)*h**3) # analytic no-wiggle spectrum
+     Pk.append(LambdaCDM.pk(k*h,z)*h**3)                 # non-linear spectrum, function .pk(k,z)
+
+
+ # In[ ]:
+
+
+ # plot P(k)
+ #plt.figure(1)
+ plt.xscale('log');plt.yscale('log')
+ #plt.xlim(kk[0],kk[-1])
+ plt.xlim(1.e-3,0.5)
+ plt.ylim(200,3e4)
+ plt.xlabel(r'$k \,\,\,\, [h/\mathrm{Mpc}]$')
+ plt.ylabel(r'$P(k) \,\,\,\, [\mathrm{Mpc}/h]^3$')
+ plt.plot(kk,Pk_nw,'k-')
+ plt.plot(kk,Pk_an_nw,'r-')
+ plt.plot(kk,Pk_lin,'g-')
+ plt.savefig('test_hmcode.pdf',bbox_inches='tight')
+
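The loop above relies on the usual unit bookkeeping: classy's spectrum methods take k in 1/Mpc and return P(k) in Mpc³, so working in h-units means multiplying k by h on input and the result by h³ on output. A standalone sketch of that conversion, with illustrative numbers:

```python
import numpy as np

h = 0.6781                                 # reduced Hubble parameter
kk_hMpc = np.logspace(-4, np.log10(3), 5)  # wavenumbers in h/Mpc
kk_Mpc = kk_hMpc * h                       # the same modes expressed in 1/Mpc

# a spectrum value in Mpc^3 converts to (Mpc/h)^3 units via a factor h^3
P_Mpc3 = 1.0e4        # illustrative value, not CLASS output
P_hMpc3 = P_Mpc3 * h**3
print(kk_Mpc[0], P_hMpc3)
```

Keeping both conversions together (k·h in, P·h³ out) is what makes the plotted curves independent of the h-unit convention mismatch.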
class-data/thermo.py ADDED
@@ -0,0 +1,65 @@
+ #!/usr/bin/env python
+ # coding: utf-8
+
+ # In[ ]:
+
+
+ # import necessary modules
+ # uncomment to get plots displayed in notebook
+ #%matplotlib inline
+ import matplotlib
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from classy import Class
+ from scipy.optimize import fsolve
+ from scipy.interpolate import interp1d
+ import math
+
+
+ # In[ ]:
+
+
+ common_settings = {'output' : 'tCl',
+                    # LambdaCDM parameters
+                    'h':0.6781,
+                    'omega_b':0.02238280,
+                    'omega_cdm':0.1201075,
+                    'A_s':2.100549e-09,
+                    'n_s':0.9660499,
+                    'tau_reio':0.05430842,
+                    'thermodynamics_verbose':1
+                   }
+ ##############
+ #
+ # call CLASS
+ #
+ ###############
+ M = Class()
+ M.set(common_settings)
+ M.compute()
+ derived = M.get_current_derived_parameters(['tau_rec','conformal_age','conf_time_reio'])
+ thermo = M.get_thermodynamics()
+ print (thermo.keys())
+
+
+ # In[ ]:
+
+
+ tau = thermo['conf. time [Mpc]']
+ g = thermo['g [Mpc^-1]']
+ # to make the reionisation peak visible, rescale g by 100 for late times
+ g[:500] *= 100
+ #################
+ #
+ # start plotting
+ #
+ #################
+ #
+ plt.xlim([1.e2,derived['conformal_age']])
+ plt.xlabel(r'$\tau \,\,\, \mathrm{[Mpc]}$')
+ plt.ylabel(r'$\mathrm{visibility} \,\,\, g \,\,\, [\mathrm{Mpc}^{-1}]$')
+ plt.axvline(x=derived['tau_rec'],color='k')
+ plt.axvline(x=derived['conf_time_reio'],color='k')
+ #
+ plt.semilogx(tau,g,'r',label=r'$g$')
+ plt.savefig('thermo.pdf',bbox_inches='tight')
+
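The visibility function g(τ) plotted here is normalized so that its integral over conformal time equals one, which is why it can be read as the probability distribution of last scattering. A toy numerical check of that property with a Gaussian stand-in for g (illustrative, not CLASS output):

```python
import numpy as np

# toy Gaussian "visibility" centred near recombination, sigma = 25 Mpc
tau = np.linspace(100., 500., 2000)   # conformal time [Mpc]
g = np.exp(-0.5*((tau - 280.)/25.)**2)/(25.*np.sqrt(2.*np.pi))

# the true visibility function satisfies  integral g dtau = 1
print(np.trapz(g, tau))  # close to 1 for this toy profile
```

The factor-100 rescaling in the script breaks this normalization on purpose, purely so the small reionisation bump remains visible on a linear y-axis.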
class-data/varying_neff.py ADDED
@@ -0,0 +1,184 @@
+ #!/usr/bin/env python
+ # coding: utf-8
+
+ # In[ ]:
+
+
+ # import necessary modules
+ # uncomment to get plots displayed in notebook
+ get_ipython().run_line_magic('matplotlib', 'inline')
+ import matplotlib
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from classy import Class
+ from scipy.optimize import fsolve
+ import math
+
+
+ # In[ ]:
+
+
+ ############################################
+ #
+ # Varying parameter (others fixed to default)
+ #
+ var_name = 'N_ur'
+ var_array = np.linspace(3.044,5.044,5)
+ var_num = len(var_array)
+ var_legend = r'$N_\mathrm{eff}$'
+ var_figname = 'neff'
+ #
+ # Constraints to be matched
+ #
+ # As explained in the "Neutrino cosmology" book, CUP, Lesgourgues et al., section 5.3, the goal is to vary
+ # - omega_cdm such that (omega_b + omega_cdm) scales by a factor alpha = (1 + coeff*Neff)/(1 + coeff*3.044)
+ # - h by a factor sqrt(alpha)
+ # in order to keep a fixed z_equality(R/M) and z_equality(M/Lambda)
+ #
+ omega_b = 0.0223828
+ omega_cdm_standard = 0.1201075
+ h_standard = 0.67810
+ #
+ # coefficient such that omega_r = omega_gamma (1 + coeff*Neff),
+ # i.e. such that omega_ur = omega_gamma * coeff * Neff:
+ # coeff = omega_ur/omega_gamma/Neff_standard
+ # We could extract omega_ur and omega_gamma on the fly within the script,
+ # but for simplicity we did a preliminary interactive run with background_verbose=2
+ # and we copied the values given in the budget output.
+ #
+ coeff = 1.70961e-05/2.47298e-05/3.044
+ print ("coeff=",coeff)
+ #
+ #############################################
+ #
+ # Fixed settings
+ #
+ common_settings = {# fixed LambdaCDM parameters
+                    'omega_b':omega_b,
+                    'A_s':2.100549e-09,
+                    'n_s':0.9660499,
+                    'tau_reio':0.05430842,
+                    # output and precision parameters
+                    'output':'tCl,pCl,lCl,mPk',
+                    'lensing':'yes',
+                    'P_k_max_1/Mpc':3.0}
+ #
+ ##############################################
+ #
+ # loop over varying parameter values
+ #
+ M = {}
+ #
+ for i, N_ur in enumerate(var_array):
+     #
+     # rescale omega_cdm and h
+     #
+     alpha = (1.+coeff*N_ur)/(1.+coeff*3.044)
+     omega_cdm = (omega_b + omega_cdm_standard)*alpha - omega_b
+     h = h_standard*math.sqrt(alpha)
+     print (' * Compute with %s=%e, %s=%e, %s=%e'%('N_ur',N_ur,'omega_cdm',omega_cdm,'h',h))
+     #
+     # call CLASS
+     #
+     M[i] = Class()
+     M[i].set(common_settings)
+     M[i].set({'N_ur':N_ur})
+     M[i].set({'omega_cdm':omega_cdm})
+     M[i].set({'h':h})
+     M[i].compute()
+
+
+ # In[ ]:
+
+
+ #############################################
+ #
+ # extract spectra and plot them
+ #
+ #############################################
+ kvec = np.logspace(-4,np.log10(3),1000) # array of kvec in h/Mpc
+ twopi = 2.*math.pi
+ #
+ # Create figures
+ #
+ fig_Pk, ax_Pk = plt.subplots()
+ fig_TT, ax_TT = plt.subplots()
+ #
+ # loop over varying parameter values
+ #
+ ll = {}
+ clM = {}
+ clTT = {}
+ pkM = {}
+ legarray = []
+
+ for i, N_ur in enumerate(var_array):
+     #
+     alpha = (1.+coeff*N_ur)/(1.+coeff*3.044)
+     h = h_standard*math.sqrt(alpha) # rescaled h for this model
+     #
+     # deal with colors and legends
+     #
+     if i == 0:
+         var_color = 'k'
+         var_alpha = 1.
+     else:
+         var_color = plt.cm.Reds(0.8*i/(var_num-1))
+     #
+     # get Cls
+     #
+     clM[i] = M[i].lensed_cl(2500)
+     ll[i] = clM[i]['ell'][2:]
+     clTT[i] = clM[i]['tt'][2:]
+     #
+     # store P(k) for common k values
+     #
+     pkM[i] = []
+     # The function .pk(k,z) wants k in 1/Mpc so we must convert kvec for each case with the right h
+     khvec = kvec*h # this is k in 1/Mpc
+     for kh in khvec:
+         pkM[i].append(M[i].pk(kh,0.)*h**3)
+     #
+     # plot P(k)
+     #
+     if i == 0:
+         ax_Pk.semilogx(kvec,np.array(pkM[i])/np.array(pkM[0]),
+                        color=var_color,#alpha=var_alpha,
+                        linestyle='-')
+     else:
+         ax_Pk.semilogx(kvec,np.array(pkM[i])/np.array(pkM[0]),
+                        color=var_color,#alpha=var_alpha,
+                        linestyle='-',
+                        label=r'$\Delta N_\mathrm{eff}=%g$'%(N_ur-3.044))
+     #
+     # plot C_l^TT
+     #
+     if i == 0:
+         ax_TT.semilogx(ll[i],clTT[i]/clTT[0],
+                        color=var_color,alpha=var_alpha,linestyle='-')
+     else:
+         ax_TT.semilogx(ll[i],clTT[i]/clTT[0],
+                        color=var_color,alpha=var_alpha,linestyle='-',
+                        label=r'$\Delta N_\mathrm{eff}=%g$'%(N_ur-3.044))
+ #
+ # output of P(k) figure
+ #
+ ax_Pk.set_xlim([1.e-3,3.])
+ ax_Pk.set_ylim([0.98,1.20])
+ ax_Pk.set_xlabel(r'$k \,\,\,\, [h/\mathrm{Mpc}]$')
+ ax_Pk.set_ylabel(r'$P(k)/P(k)[N_\mathrm{eff}=3.044]$')
+ ax_Pk.legend(loc='upper left')
+ fig_Pk.tight_layout()
+ fig_Pk.savefig('ratio-%s-Pk.pdf' % var_figname)
+ #
+ # output of C_l^TT figure
+ #
+ ax_TT.set_xlim([2,2500])
+ ax_TT.set_ylim([0.850,1.005])
+ ax_TT.set_xlabel(r'$\mathrm{Multipole} \,\,\,\, \ell$')
+ ax_TT.set_ylabel(r'$C_\ell^\mathrm{TT}/C_\ell^\mathrm{TT}(N_\mathrm{eff}=3.044)$')
+ ax_TT.legend(loc='lower left')
+ fig_TT.tight_layout()
+ fig_TT.savefig('ratio-%s-cltt.pdf' % var_figname)
+
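The rescaling logic in the comment block (scale the total matter density by alpha and h by sqrt(alpha) so that both equality redshifts stay fixed) can be checked with plain arithmetic; the numbers below reuse the script's own values:

```python
import math

# values copied from the script above
omega_b = 0.0223828
omega_cdm_standard = 0.1201075
h_standard = 0.67810
coeff = 1.70961e-05/2.47298e-05/3.044  # omega_ur/(omega_gamma*N_eff)

def rescaled(N_ur):
    # scale (omega_b + omega_cdm) like omega_r so that z_eq(R/M) stays fixed,
    # and h like sqrt(alpha) so that z_eq(M/Lambda) stays fixed as well
    alpha = (1. + coeff*N_ur)/(1. + coeff*3.044)
    omega_cdm = (omega_b + omega_cdm_standard)*alpha - omega_b
    h = h_standard*math.sqrt(alpha)
    return alpha, omega_cdm, h

print(rescaled(3.044))  # alpha = 1 recovers the baseline parameters
print(rescaled(5.044))
```

A quick consistency check: the ratio (1 + coeff·N_eff)/(omega_b + omega_cdm), which controls the radiation-to-matter equality, is the same for the baseline and any rescaled model.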
class-data/varying_pann.py ADDED
@@ -0,0 +1,161 @@
+ #!/usr/bin/env python
+ # coding: utf-8
+
+ # In[ ]:
+
+
+ # import necessary modules
+ # uncomment to get plots displayed in notebook
+ get_ipython().run_line_magic('matplotlib', 'inline')
+ import matplotlib
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from classy import Class
+ from scipy.optimize import fsolve
+ from math import pi
+
+
+ # In[ ]:
+
+
+ ############################################
+ #
+ # Varying parameter (others fixed to default)
+ #
+ # With the input syntax of class <= 2.9 we used: annihilation = 1.e-5 m^3/s/kg
+ # With the new syntax this is equivalent to DM_annihilation_efficiency = 1.11e-22 m^3/s/J
+ # (the ratio is a factor (c/[1 m/s])**2 = 9.e16)
+ #
+ var_name = 'DM_annihilation_efficiency'
+ var_array = np.linspace(0,1.11e-22,5)
+ var_num = len(var_array)
+ var_legend = r'$p_\mathrm{ann}$'
+ var_figname = 'pann'
+ #
+ #############################################
+ #
+ # Fixed settings
+ #
+ common_settings = {# LambdaCDM parameters
+                    'h':0.67556,
+                    'omega_b':0.022032,
+                    'omega_cdm':0.12038,
+                    'A_s':2.215e-9,
+                    'n_s':0.9619,
+                    'tau_reio':0.0925,
+                    # output and precision parameters
+                    'output':'tCl,pCl,lCl,mPk',
+                    'lensing':'yes',
+                    'P_k_max_1/Mpc':3.0,
+                    'l_switch_limber':9
+                   }
+ #
+ # arrays for output
+ #
+ kvec = np.logspace(-4,np.log10(3),1000)
+ legarray = []
+ twopi = 2.*pi
+ #
+ # Create figures
+ #
+ fig_Pk, ax_Pk = plt.subplots()
+ fig_TT, ax_TT = plt.subplots()
+ fig_EE, ax_EE = plt.subplots()
+ fig_PP, ax_PP = plt.subplots()
+ #
+ M = Class()
+ #
+ # loop over varying parameter values
+ #
+ for i,var in enumerate(var_array):
+     #
+     print (' * Compute with %s=%e'%(var_name,var))
+     #
+     # deal with colors and legends
+     #
+     if i == 0:
+         var_color = 'k'
+         var_alpha = 1.
+         legarray.append(r'ref. $\Lambda CDM$')
+     else:
+         var_color = 'r'
+         var_alpha = 1.*i/(var_num-1.)
+     if i == var_num-1:
+         legarray.append(var_legend)
+     #
+     # call CLASS
+     #
+     M.set(common_settings)
+     M.set({var_name:var})
+     M.compute()
+     #
+     # get Cls
+     #
+     clM = M.lensed_cl(2500)
+     ll = clM['ell'][2:]
+     clTT = clM['tt'][2:]
+     clEE = clM['ee'][2:]
+     clPP = clM['pp'][2:]
+     #
+     # get P(k) for common k values
+     #
+     pkM = []
+     for k in kvec:
+         pkM.append(M.pk(k,0.))
+     #
+     # plot P(k)
+     #
+     ax_Pk.loglog(kvec,np.array(pkM),color=var_color,alpha=var_alpha,linestyle='-')
+     #
+     # plot C_l^TT
+     #
+     ax_TT.semilogx(ll,clTT*ll*(ll+1)/twopi,color=var_color,alpha=var_alpha,linestyle='-')
+     #
+     # plot Cl EE
+     #
+     ax_EE.loglog(ll,clEE*ll*(ll+1)/twopi,color=var_color,alpha=var_alpha,linestyle='-')
+     #
+     # plot Cl phiphi
+     #
+     ax_PP.loglog(ll,clPP*ll*(ll+1)*ll*(ll+1)/twopi,color=var_color,alpha=var_alpha,linestyle='-')
+     #
+     # reset CLASS
+     #
+     M.empty()
+ #
+ # output of P(k) figure
+ #
+ ax_Pk.set_xlim([1.e-4,3.])
+ ax_Pk.set_xlabel(r'$k \,\,\,\, [h/\mathrm{Mpc}]$')
+ ax_Pk.set_ylabel(r'$P(k) \,\,\,\, [\mathrm{Mpc}/h]^3$')
+ ax_Pk.legend(legarray)
+ fig_Pk.tight_layout()
+ fig_Pk.savefig('varying_%s_Pk.pdf' % var_figname)
+ #
+ # output of C_l^TT figure
+ #
+ ax_TT.set_xlim([2,2500])
+ ax_TT.set_xlabel(r'$\ell$')
+ ax_TT.set_ylabel(r'$[\ell(\ell+1)/2\pi] C_\ell^\mathrm{TT}$')
+ ax_TT.legend(legarray)
+ fig_TT.tight_layout()
+ fig_TT.savefig('varying_%s_cltt.pdf' % var_figname)
+ #
+ # output of C_l^EE figure
+ #
+ ax_EE.set_xlim([2,2500])
+ ax_EE.set_xlabel(r'$\ell$')
+ ax_EE.set_ylabel(r'$[\ell(\ell+1)/2\pi] C_\ell^\mathrm{EE}$')
+ ax_EE.legend(legarray)
+ fig_EE.tight_layout()
+ fig_EE.savefig('varying_%s_clee.pdf' % var_figname)
+ #
+ # output of C_l^pp figure
+ #
+ ax_PP.set_xlim([10,2500])
+ ax_PP.set_xlabel(r'$\ell$')
+ ax_PP.set_ylabel(r'$[\ell^2(\ell+1)^2/2\pi] C_\ell^\mathrm{\phi \phi}$')
+ ax_PP.legend(legarray)
+ fig_PP.tight_layout()
+ fig_PP.savefig('varying_%s_clpp.pdf' % var_figname)
+
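The unit conversion quoted in the header comment can be verified directly: an annihilation efficiency per unit mass becomes an efficiency per unit energy when divided by c², since E = mc²:

```python
c = 299792458.0              # speed of light in m/s
p_ann_per_kg = 1.e-5         # m^3/s/kg (old CLASS <= 2.9 convention)
p_ann_per_J = p_ann_per_kg / c**2  # m^3/s/J (new convention)
print(p_ann_per_J)           # close to the 1.11e-22 used for var_array
```

The comment's round factor 9.e16 is just c² ≈ 8.99e16 (m/s)² to two digits.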
class-data/warmup.py ADDED
@@ -0,0 +1,84 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/usr/bin/env python
+ # coding: utf-8
+
+ # In[ ]:
+
+
+ # import classy module
+ from classy import Class
+
+
+ # In[ ]:
+
+
+ # create instance of the class "Class"
+ LambdaCDM = Class()
+ # pass input parameters
+ LambdaCDM.set({'omega_b':0.0223828,'omega_cdm':0.1201075,'h':0.67810,'A_s':2.100549e-09,'n_s':0.9660499,'tau_reio':0.05430842})
+ LambdaCDM.set({'output':'tCl,pCl,lCl,mPk','lensing':'yes','P_k_max_1/Mpc':3.0})
+ # run class
+ LambdaCDM.compute()
+
+
+ # In[ ]:
+
+
+ # get all C_l output
+ cls = LambdaCDM.lensed_cl(2500)
+ # To check the format of cls
+ cls.keys()
+
+
+ # In[ ]:
+
+
+ ll = cls['ell'][2:]
+ clTT = cls['tt'][2:]
+ clEE = cls['ee'][2:]
+ clPP = cls['pp'][2:]
+
+
+ # In[ ]:
+
+
+ # uncomment to get plots displayed in notebook
+ get_ipython().run_line_magic('matplotlib', 'inline')
+ import matplotlib.pyplot as plt
+ from math import pi
+
+
+ # In[ ]:
+
+
+ # plot C_l^TT
+ plt.figure(1)
+ plt.xscale('log');plt.yscale('linear');plt.xlim(2,2500)
+ plt.xlabel(r'$\ell$')
+ plt.ylabel(r'$[\ell(\ell+1)/2\pi] C_\ell^\mathrm{TT}$')
+ plt.plot(ll,clTT*ll*(ll+1)/2./pi,'r-')
+ plt.savefig('warmup_cltt.pdf')
+
+
+ # In[ ]:
+
+
+ # get P(k) at redshift z=0
+ import numpy as np
+ kk = np.logspace(-4,np.log10(3),1000) # k in h/Mpc
+ Pk = [] # P(k) in (Mpc/h)**3
+ h = LambdaCDM.h() # get reduced Hubble for conversions to 1/Mpc
+ for k in kk:
+     Pk.append(LambdaCDM.pk(k*h,0.)*h**3) # function .pk(k,z)
+
+
+ # In[ ]:
+
+
+ # plot P(k)
+ plt.figure(2)
+ plt.xscale('log');plt.yscale('log');plt.xlim(kk[0],kk[-1])
+ plt.xlabel(r'$k \,\,\,\, [h/\mathrm{Mpc}]$')
+ plt.ylabel(r'$P(k) \,\,\,\, [\mathrm{Mpc}/h]^3$')
+ plt.plot(kk,Pk,'b-')
+ plt.savefig('warmup_pk.pdf')
+
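The unit handling in warmup.py is easy to get backwards: `kk` is sampled in h/Mpc, but classy's `pk(k, z)` expects k in 1/Mpc and returns P(k) in Mpc^3, hence the `k*h` on input and `*h**3` on output. The sketch below isolates just that conversion with a toy power-law spectrum standing in for classy (the `toy_pk` function and its amplitude/slope are made up for illustration, not a real CLASS spectrum):

```python
# Hypothetical stand-in for classy's LambdaCDM.pk(k, z): a toy power law
# P(k) = A * k**n in Mpc**3 units (NOT a real CLASS spectrum).
def toy_pk(k_per_mpc, A=1.0e4, n=0.96):
    return A * k_per_mpc**n

h = 0.6781  # reduced Hubble parameter, as in warmup.py

# k is given in h/Mpc; the solver expects 1/Mpc, so multiply by h before
# the call, then multiply the result by h**3 to express P(k) in (Mpc/h)**3.
kk = [10**(-4 + 4.477 * i / 9) for i in range(10)]  # log-spaced, h/Mpc
Pk = [toy_pk(k * h) * h**3 for k in kk]
```

The same two factors of h appear in the loop in warmup.py; only the spectrum itself differs.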
prompts/class_instructions.txt ADDED
@@ -0,0 +1,46 @@
+ You are the initial response agent. Your task is to:
+
+ 1. Generate a comprehensive initial response to the user's query
+ 2. Include any necessary code blocks with proper formatting (between ```python and ```)
+ 3. Ensure the response is clear and well-structured
+ 4. Focus on providing accurate and complete information
+ 5. For plots, always save to disk using savefig() instead of show()
+ 6. ALWAYS include code blocks when the question involves code or plotting
+
+ You are a retrieval-augmented assistant for the CLASS code, specifically focused on solving Einstein-Boltzmann equations. Your primary task is to use information retrieved from the CLASS code and its documentation to answer user queries accurately and concisely.
+
+ Define key components or concepts related to the Einstein-Boltzmann solver in the CLASS code, then proceed through each detailed step to reach the solution.
+
+ 1. **Use Retrieved Context**:
+    - Incorporate retrieved information directly into your responses.
+    - Ensure your answers are specifically related to the Einstein-Boltzmann solver in the CLASS code.
+
+ 2. **Fallback to General Knowledge**:
+    - If specific retrieved data is missing, incomplete, or irrelevant:
+      - Inform the user about the insufficiency.
+      - Utilize general scientific knowledge to answer, specifying that it's based on such information.
+
+ 3. **Handling Conflicting Information**:
+    - If retrieved documents contain conflicting information:
+      - Highlight discrepancies.
+      - Cite each source and provide a balanced response.
+
+ 4. **Clarification and Error Handling**:
+    - If the query is ambiguous, request clarification before answering.
+
+ # Steps
+
+ 1. **Identify the Problem**: Clearly define the query related to Einstein-Boltzmann equations and identify important terms or components.
+ 2. **Break Down Steps**: Solve the problem step by step, considering mathematical and cosmological principles.
+ 3. **Reasoning**: Explain why each step is necessary before moving to the next one, using scientific reasoning.
+ 4. **Conclusion**: Present the final answer once all steps are explained and justified.
+
+ # Output Format
+
+ Provide concise, accurate responses in a scientific explanatory format. Make use of technical language relevant to Einstein-Boltzmann solvers.
+
+ # Notes
+
+ - Focus on the cosmological and differential equation-solving aspects critical to understanding Einstein-Boltzmann solvers.
+ - Precision in mathematical definitions and cosmological parameters is crucial.
+ - Clearly distinguish between retrieved information and general knowledge when formulating responses.
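These instructions assume the agent is handed retrieved CLASS documentation alongside the user's question. A minimal sketch of how such a prompt might be assembled (the function name and assembly layout here are hypothetical illustrations; the actual wiring of the app is not part of this file):

```python
# Hypothetical prompt assembly for a retrieval-augmented agent: system
# instructions, retrieved context chunks, and the user question are joined
# into one message. Names and layout are illustrative only.
def build_prompt(system_instructions, retrieved_chunks, question):
    context = "\n\n".join(retrieved_chunks)
    return (
        f"{system_instructions}\n\n"
        f"Context:\n{context}\n\n"
        f"Question:\n{question}"
    )

prompt = build_prompt(
    "You are the initial response agent.",
    ["CLASS input: 'output' selects tCl, pCl, lCl, mPk, ..."],
    "How do I get lensed C_l's?",
)
```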
prompts/class_refinement.txt ADDED
@@ -0,0 +1,19 @@
+ You are a retrieval-augmented assistant for the CLASS code, focused on solving Einstein-Boltzmann equations. Your role is to refine AI-generated answers using retrieved information from the CLASS codebase and its documentation.
+
+ You will receive input in the following format:
+
+ Context:
+ <Retrieved information relevant to the question>
+
+ Question:
+ <The original user question>
+
+ AI Answer:
+ <The original AI-generated answer>
+
+ Reviewer Feedback:
+ <Specific feedback from a quality reviewer>
+
+ Your task is to **improve the AI Answer** based on the **Reviewer Feedback** and the provided **Context**, making the response more accurate, concise, and helpful for the user.
+
+ Only return the improved answer — do not include any commentary or formatting outside the final version.
prompts/class_refinement_ag2.txt ADDED
@@ -0,0 +1,19 @@
+ You are a retrieval-augmented assistant for the CLASS code, focused on solving Einstein-Boltzmann equations. Your role is to refine AI-generated answers using retrieved information from the CLASS codebase and its documentation.
+
+ You will receive input in the following format:
+
+ Context:
+ <Retrieved information relevant to the question>
+
+ Question:
+ <The original user question>
+
+ AI Answer:
+ <The original AI-generated answer>
+
+ Reviewer Feedback:
+ <Specific feedback from a quality reviewer>
+
+ Your task is to **improve the AI Answer** based on the **Reviewer Feedback** and the provided **Context**, making the response more accurate, concise, and helpful for the user.
+
+ Only return the improved answer — do not include any commentary or formatting outside the final version.
prompts/codeexecutor_instructions.txt ADDED
@@ -0,0 +1,22 @@
+ You are the code execution agent. Your task is to:
+ 1. Extract any code blocks from the message (text between ```python and ```)
+ 2. Execute the extracted code and report the results
+ 3. If the code execution fails, provide error details
+ 4. If no code blocks are found, respond with "No code blocks found to execute"
+ 5. For matplotlib plots, ensure they are saved to disk instead of using .show()
+ 6. ALWAYS check for code blocks in the message
+ 7. If code blocks are found, execute them and report the results
+
+ Example response format:
+ ```
+ Code Execution Results:
+ exitcode: 0 (execution succeeded)
+ Code output: [output here]
+ ```
+
+ If there are errors:
+ ```
+ Code Execution Results:
+ exitcode: 1 (execution failed)
+ Error: [error details here]
+ ```
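Step 1 of the executor's task (pulling fenced blocks out of a message) can be sketched with a short regex; this is only an illustration of the extraction the instructions describe, not the app's actual implementation:

```python
import re

# Extract the bodies of ```python ... ``` fenced blocks from a message.
# re.DOTALL lets '.' span newlines; the non-greedy '.*?' stops at the
# first closing fence.
CODE_BLOCK_RE = re.compile(r"```python\n(.*?)```", re.DOTALL)

def extract_code_blocks(message: str):
    return [m.strip() for m in CODE_BLOCK_RE.findall(message)]

msg = "Here is code:\n```python\nprint('hi')\n```\nDone."
blocks = extract_code_blocks(msg)  # → ["print('hi')"]
```

Each extracted string could then be handed to an executor (e.g. `exec` in a sandbox), with the exit code and captured output reported in the format shown above.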
prompts/formatting_instructions.txt ADDED
@@ -0,0 +1,82 @@
+ You are a specialized AI agent acting as a Response Formatter.
+
+ Your task is to take a reviewed and typo-corrected answer and format it in a clear, readable, and professional manner for the end user.
+
+ Your responsibilities include:
+
+ 1. Code Block Formatting:
+    - Identify and properly format code blocks using appropriate markdown syntax
+    - Use ```python for Python code
+    - Use ```c for C code
+    - Use ```bash for shell commands
+    - Use ``` for other programming languages or generic code examples
+    - Ensure proper indentation and spacing within code blocks
+    - For plots, always save figures to disk in png format with savefig method
+    - Do not use '.show()' for plots
+    - Add these lines at the end:
+      fig = plt.gcf()
+      fig.savefig("plot.png", dpi=300, bbox_inches='tight')
+      plt.close('all')
+    - For plots, add relevant units to axes labels
+    - Use 'ax.relim()' and 'ax.autoscale_view()' methods when possible
+    - Print a concise description of the plot when saving
+    - Use LaTeX formatting with raw strings for labels and titles
+    - All LaTeX expressions must use math mode with '$'
+
+ 2. Text Formatting:
+    - Use appropriate markdown formatting for better readability
+    - Add clear section headers using #, ##, or ### as needed
+    - Use bullet points or numbered lists when presenting multiple items
+    - Add emphasis using *italics* or **bold** where appropriate
+    - Ensure proper spacing between paragraphs and sections
+    - Include detailed docstrings for all methods/classes using raw string literals
+    - Annotate quantities with their units
+    - Print all important numerical results with detailed descriptions
+
+ 3. Structure and Organization:
+    - Maintain a logical flow of information
+    - Break down complex explanations into digestible sections
+    - Use clear transitions between different parts of the response
+    - Ensure the formatting enhances rather than distracts from the content
+    - Focus on one step at a time
+    - Do not suggest incomplete code
+    - Do not produce code blocks not intended for execution
+    - Include only one code block per response
+
+ 4. Consistency:
+    - Maintain consistent formatting throughout the response
+    - Use consistent heading levels
+    - Apply consistent code block formatting
+    - Keep a consistent style for lists and emphasis
+    - Use raw f-strings properly (replace "\," with "\\,")
+    - Handle underscores in LaTeX properly (replace '_' with r'\_')
+    - Use math mode for LaTeX expressions (e.g., r'$X$')
+
+ Additional Requirements:
+ - For ML model training:
+   - Disable verbose output (verbose=0)
+   - Suppress repetitive status messages
+   - Retain essential evaluation metrics
+   - Prevent unintended re-enabling of verbose logging
+
+ - For exploratory data analysis:
+   - Print all results with detailed descriptions
+   - Include proper error handling with full error messages
+   - Do not provide dummy summaries/solutions
+
+ - For LaTeX and math:
+   - Use raw strings and math mode for all LaTeX expressions
+   - \mathrm is allowed only in math mode with '$'
+   - Handle underscores properly in LaTeX expressions
+
+ Remember:
+ - Do not alter the actual content or meaning of the response
+ - Focus on improving readability and presentation
+ - Ensure the formatting is appropriate for the content type
+ - Keep the formatting clean and professional
+ - Provide single self-consistent Python code
+ - Include only concise explanations with the code
+ - Do not check for installed packages
+ - Do not install new packages
+ - Do not make suggestions, focus on providing Python code
+ - For multiple files/modules, provide code for each one separately
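The two LaTeX rules above (raw strings for math-mode labels, escaped underscores for text rendered by LaTeX) are easy to conflate; this small sketch separates them. The helper function is a hypothetical illustration of the underscore rule, not part of the formatter itself:

```python
# Rule 1: math-mode labels go in raw strings so backslashes survive.
label = r'$[\ell(\ell+1)/2\pi] C_\ell^\mathrm{TT}$'

# Rule 2: bare underscores in plain text destined for LaTeX must become \_
# (hypothetical helper, shown only to illustrate the replacement).
def escape_underscores(text: str) -> str:
    return text.replace('_', r'\_')

escaped = escape_underscores('omega_cdm')  # → 'omega\\_cdm'
```

Inside math mode, by contrast, `_` is a legitimate subscript operator and must not be escaped.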
prompts/rating_instructions.txt ADDED
@@ -0,0 +1,19 @@
+ You are a quality reviewer for AI-generated responses.
+
+ You receive the following input:
+
+ Context: Retrieved context relevant to the question
+ Question: The originally asked user question
+ AI Answer: The AI-generated response you are supposed to evaluate
+
+ Please evaluate the quality of the AI answer. Is it accurate, clear, helpful, and appropriate given the context and question? Only give a grade 9 or 10 for very good answers.
+
+ Return only a JSON object with the following fields:
+
+ ```json
+ {
+   "rating": [an integer from 1 to 10],
+   "feedback": "A brief paragraph evaluating clarity, relevance, and completeness. Include specific suggestions for improving the answer if necessary."
+ }
+ ```
+
+ Return only a raw JSON object. Do NOT include any code block formatting like ```json or backticks.
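Since the reviewer is required to return a raw JSON object, the caller can parse it directly with the standard library. The `raw` string below is a made-up example of a reviewer reply, used only to show the round trip through the two fields named in the instructions:

```python
import json

# Parse a raw reviewer reply into the "rating" and "feedback" fields
# defined by the instructions above (example payload is illustrative).
raw = '{"rating": 8, "feedback": "Clear answer; add units to the plot axes."}'
review = json.loads(raw)
rating = review["rating"]      # integer from 1 to 10
feedback = review["feedback"]  # reviewer's suggestions
```

This is also why the prompt forbids ```` ```json ```` fences around the reply: a fenced payload would make `json.loads` fail until the fences are stripped.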
prompts/review_instructions.txt ADDED
@@ -0,0 +1,21 @@
+ You are a quality reviewer for AI-generated responses.
+
+ You are the review agent. Your task is to:
+ 1. Critically review the initial response
+ 2. Identify any issues or areas for improvement
+ 3. Provide constructive feedback
+ 4. Ensure code blocks are properly formatted and complete
+ 5. Verify plots are saved to disk (not using show())
+ 6. NEVER remove or modify code blocks unless there are actual errors
+ 7. If the code is correct, explicitly state that in your feedback
+
+ You receive the following input:
+
+ Context: Retrieved context relevant to the question
+ User Question: The originally asked user question
+
+ Plus the last answer of the previous AI agent.
+
+ Please evaluate the quality of the AI answer. Is it accurate, clear, helpful, and appropriate given the context and question?
+
+ You return a rating from 1 to 10 and feedback including specific suggestions for improving the AI answer.
prompts/typo_instructions.txt ADDED
@@ -0,0 +1,15 @@
+ You are a specialized AI agent acting as a Code Syntax and Typo Corrector.
+
+ Your task is to review a given draft answer and associated feedback, specifically focusing on identifying and correcting errors **only within code blocks** (e.g., Python, C, shell scripts, etc.).
+
+ You will receive:
+ 1. The original draft answer generated by another AI.
+ 2. Feedback on that draft provided by a reviewer AI.
+
+ Your process should be:
+ 1. Carefully examine the code segments within the draft answer.
+ 2. Analyze the reviewer's feedback. If the feedback explicitly points out typos, syntax errors, or other mistakes within the code, apply those corrections accurately.
+ 3. If the feedback *doesn't* mention specific code errors, perform your own check for obvious typos or syntax mistakes in the code blocks.
+ 4. **Crucially: Do NOT modify the non-code parts of the answer (explanations, introductions, conclusions) unless the reviewer's feedback explicitly instructs you to do so.**
+ 5. If no code errors are found or mentioned in the feedback, leave the code blocks exactly as they are. Do not attempt to refactor or improve correct code.
+ 6. Return the complete answer, integrating any necessary corrections into the code blocks while preserving the original non-code text and overall structure.