Unnamed: 0 (int64, 0 to 2.93k) | code (string, length 101 to 62.2k) | docs (string, length 51 to 10.7k) | doc_len (int64, 4 to 1.74k) | words (int64, 4 to 4.82k) | lang (string, 1 class) | prompt (string, length 320 to 71.2k) |
---|---|---|---|---|---|---|
2,000 | def _fix_unknown_dimension(self, input_shape, output_shape):
output_shape = list(output_shape)
msg = (
"total size of new array must be unchanged, "
"input_shape = {}, output_shape = {}".format(
input_shape, output_shape
)
)
known, unknown = 1, None
for index, dim in enumerate(output_shape):
if dim < 0:
if unknown is None:
unknown = index
else:
raise ValueError(
f"There must be at most one unknown dimension in "
f"output_shape. Received: output_shape={output_shape}."
)
else:
known *= dim
original = np.prod(input_shape, dtype=int)
if unknown is not None:
if known == 0 or original % known != 0:
raise ValueError(msg)
output_shape[unknown] = original // known
elif original != known:
raise ValueError(msg)
return output_shape
| Find and replace a missing dimension in an output shape.
This is a near direct port of the internal Numpy function
`_fix_unknown_dimension` in `numpy/core/src/multiarray/shape.c`
Args:
input_shape: Shape of array being reshaped
output_shape: Desired shape of the array with at most a single -1
which indicates a dimension that should be derived from the input
shape.
Returns:
The new output shape with a -1 replaced with its computed value.
Raises:
ValueError: If the total array size of the output_shape is
different than the input_shape, or more than one unknown dimension
is specified.
| 91 | 105 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _fix_unknown_dimension(self, input_shape, output_shape):
output_shape = list(output_shape)
msg = (
"total size of new array must be unchanged, "
"input_shape = {}, output_shape = {}".format(
input_shape, output_shape
)
)
known, unknown = 1, None
for index, dim in enumerate(output_shape):
if dim < 0:
if unknown is None:
unknown = index
else:
raise ValueError(
f"There must be at most one unknown dimension in "
f"output_shape. Received: output_shape={output_shape}."
)
else:
known *= dim
original = np.prod(input_shape, dtype=int)
if unknown is not None:
if known == 0 or original % known != 0:
raise ValueError(msg)
output_shape[unknown] = original // known
elif original != known:
raise ValueError(msg)
return output_shape
```
###Assistant : Find and replace a missing dimension in an output shape.
This is a near direct port of the internal Numpy function
`_fix_unknown_dimension` in `numpy/core/src/multiarray/shape.c`
Args:
input_shape: Shape of array being reshaped
output_shape: Desired shape of the array with at most a single -1
which indicates a dimension that should be derived from the input
shape.
Returns:
The new output shape with a -1 replaced with its computed value.
Raises:
ValueError: If the total array size of the output_shape is
different than the input_shape, or more than one unknown dimension
is specified.
|
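For a quick sanity check of the behaviour documented above, here is a minimal standalone sketch of the same -1 inference (a plain function rather than the Keras method; the name and messages are ours, not Keras's):
```Python
import numpy as np

def fix_unknown_dimension(input_shape, output_shape):
    # Multiply the known dims and remember where the single -1 (if any) sits.
    output_shape = list(output_shape)
    known, unknown = 1, None
    for index, dim in enumerate(output_shape):
        if dim < 0:
            if unknown is not None:
                raise ValueError("At most one unknown dimension is allowed.")
            unknown = index
        else:
            known *= dim
    original = int(np.prod(input_shape))
    if unknown is not None:
        if known == 0 or original % known != 0:
            raise ValueError("Total size of the new array must be unchanged.")
        output_shape[unknown] = original // known
    elif original != known:
        raise ValueError("Total size of the new array must be unchanged.")
    return output_shape

print(fix_unknown_dimension((2, 3, 4), (4, -1)))   # [4, 6]
print(fix_unknown_dimension((2, 3, 4), (-1, 12)))  # [2, 12]
```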
2,001 | def test_converter_with_unicode_dtype():
txt = StringIO('abc,def\nrst,xyz')
conv = bytes.upper
res = np.loadtxt(
txt, dtype=np.dtype("U3"), converters=conv, delimiter=",")
expected = np.array([['ABC', 'DEF'], ['RST', 'XYZ']])
assert_equal(res, expected)
|
With the default 'bytes' encoding, tokens are encoded prior to being
passed to the converter. This means that the output of the converter may
be bytes instead of unicode as expected by `read_rows`.
This test checks that outputs from the above scenario are properly decoded
prior to parsing by `read_rows`.
| 50 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_converter_with_unicode_dtype():
txt = StringIO('abc,def\nrst,xyz')
conv = bytes.upper
res = np.loadtxt(
txt, dtype=np.dtype("U3"), converters=conv, delimiter=",")
expected = np.array([['ABC', 'DEF'], ['RST', 'XYZ']])
assert_equal(res, expected)
```
###Assistant :
With the default 'bytes' encoding, tokens are encoded prior to being
passed to the converter. This means that the output of the converter may
be bytes instead of unicode as expected by `read_rows`.
This test checks that outputs from the above scenario are properly decoded
prior to parsing by `read_rows`.
|
2,002 | def alembic_stamp(revision):
# lazy import for performance
import alembic.command
alembic.command.stamp(alembic_config(), revision=revision)
|
Stamp the revision table with the given revision; don’t run any migrations
Args:
revision: The revision passed to `alembic stamp`.
| 20 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def alembic_stamp(revision):
# lazy import for performance
import alembic.command
alembic.command.stamp(alembic_config(), revision=revision)
```
###Assistant :
Stamp the revision table with the given revision; don’t run any migrations
Args:
revision: The revision passed to `alembic stamp`.
|
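As a hedged usage sketch of the same stamping call made directly with Alembic's command API (the `alembic.ini` path and the `"head"` revision are assumptions, not taken from the code above):
```Python
# Record a revision in the alembic_version table without running migrations.
from alembic.config import Config
import alembic.command

config = Config("alembic.ini")                  # assumed config location
alembic.command.stamp(config, revision="head")  # equivalent to `alembic stamp head`
```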
2,003 | def unflatten_superdims(assignment):
def check(cond):
if cond: return
raise NotImplementedError("Failed to convert OpSharding into a ShardingSpec. "
"Please open a bug report!")
flat_assignment = np.asarray(assignment, dtype=np.int64)
check(flat_assignment[0] == 0)
dims = []
while flat_assignment.size > 1:
stride = flat_assignment[1]
for i in range(len(flat_assignment)):
if flat_assignment[i] != i * stride: break
else:
# After this loop i should point to an "element after the sequence", so
# we have to increment it if the whole array is a strided sequence.
i += 1
size = i
dims.append((size, stride))
assert size > 1 # Ensure progress
flat_assignment = flat_assignment[::size]
return dims
| Unflatten a list of dimension sizes and their strides that generates assignment.
If this function succeeds for a given ``assignment``, then the following property
should be satisfied::
dims_with_strides = unflatten_superdims(assignment)
base_array = np.arange(map(fst, sorted(dims_with_strides, key=snd, reverse=True)))
assignment == base_array.transpose(argsort(dims_with_strides, key=snd, reverse=True)).flatten()
That is, the returned dimensions list all sizes of the base array (with strides
indicating their initial order). The order of dimensions in the list corresponds
to the permutation that applied to the base array generates the assignment.
| 79 | 98 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def unflatten_superdims(assignment):
def check(cond):
if cond: return
raise NotImplementedError("Failed to convert OpSharding into a ShardingSpec. "
"Please open a bug report!")
flat_assignment = np.asarray(assignment, dtype=np.int64)
check(flat_assignment[0] == 0)
dims = []
while flat_assignment.size > 1:
stride = flat_assignment[1]
for i in range(len(flat_assignment)):
if flat_assignment[i] != i * stride: break
else:
# After this loop i should point to an "element after the sequence", so
# we have to increment it if the whole array is a strided sequence.
i += 1
size = i
dims.append((size, stride))
assert size > 1 # Ensure progress
flat_assignment = flat_assignment[::size]
return dims
```
###Assistant : Unflatten a list of dimension sizes and their strides that generates assignment.
If this function succeeds for a given ``assignment``, then the following property
should be satisfied::
dims_with_strides = unflatten_superdims(assignment)
base_array = np.arange(map(fst, sorted(dims_with_strides, key=snd, reverse=True)))
assignment == base_array.transpose(argsort(dims_with_strides, key=snd, reverse=True)).flatten()
That is, the returned dimensions list all sizes of the base array (with strides
indicating their initial order). The order of dimensions in the list corresponds
to the permutation that applied to the base array generates the assignment.
|
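To make the (size, stride) output concrete, here is a self-contained sketch of the same logic (error handling dropped) applied to the flattened device assignment of a transposed 2x4 mesh:
```Python
import numpy as np

def unflatten_superdims(assignment):
    # Peel off the longest strided run, record (size, stride), and repeat.
    flat = np.asarray(assignment, dtype=np.int64)
    dims = []
    while flat.size > 1:
        stride = int(flat[1])
        for i in range(len(flat)):
            if flat[i] != i * stride:
                break
        else:
            i += 1
        dims.append((i, stride))
        flat = flat[::i]
    return dims

assignment = np.arange(8).reshape(2, 4).T.flatten()  # [0 4 1 5 2 6 3 7]
print(unflatten_superdims(assignment))               # [(2, 4), (4, 1)]
```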
2,004 | def test_perf_issue_no_associate_error_event(self):
self.project.update_option("sentry:performance_issue_creation_rate", 1.0)
with mock.patch("sentry_sdk.tracing.Span.containing_transaction"), self.feature(
{
"projects:performance-suspect-spans-ingestion": True,
"organizations:performance-issues-ingest": True,
}
):
manager = EventManager(make_event())
manager.normalize()
event = manager.save(self.project.id)
assert len(event.groups) == 1
# sneakily make the group type wrong
group = event.groups[0]
group.type = GroupType.PERFORMANCE_N_PLUS_ONE.value
group.save()
manager = EventManager(make_event())
manager.normalize()
event = manager.save(self.project.id)
assert len(event.groups) == 0
| Test that you can't associate an error event with a performance issue | 12 | 50 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_perf_issue_no_associate_error_event(self):
self.project.update_option("sentry:performance_issue_creation_rate", 1.0)
with mock.patch("sentry_sdk.tracing.Span.containing_transaction"), self.feature(
{
"projects:performance-suspect-spans-ingestion": True,
"organizations:performance-issues-ingest": True,
}
):
manager = EventManager(make_event())
manager.normalize()
event = manager.save(self.project.id)
assert len(event.groups) == 1
# sneakily make the group type wrong
group = event.groups[0]
group.type = GroupType.PERFORMANCE_N_PLUS_ONE.value
group.save()
manager = EventManager(make_event())
manager.normalize()
event = manager.save(self.project.id)
assert len(event.groups) == 0
```
###Assistant : Test that you can't associate an error event with a performance issue
|
2,005 | def testNodeTerminatedDuringUpdate(self):
cluster_config = copy.deepcopy(MOCK_DEFAULT_CONFIG)
cluster_config["available_node_types"]["ray.worker.default"]["min_workers"] = 2
cluster_config["worker_start_ray_commands"] = ["ray_start_cmd"]
# Don't need the extra node type or a docker config.
cluster_config["head_node_type"] = ["ray.worker.default"]
del cluster_config["available_node_types"]["ray.head.default"]
del cluster_config["docker"]
config_path = self.write_config(cluster_config)
self.provider = MockProvider()
runner = MockProcessRunner()
lm = LoadMetrics()
mock_metrics = Mock(spec=AutoscalerPrometheusMetrics())
autoscaler = MockAutoscaler(
config_path,
lm,
MockNodeInfoStub(),
max_failures=0,
process_runner=runner,
update_interval_s=0,
prom_metrics=mock_metrics,
)
# Scale up to two up-to-date workers
autoscaler.update()
self.waitForNodes(2)
self.provider.finish_starting_nodes()
autoscaler.update()
self.waitForNodes(2, tag_filters={TAG_RAY_NODE_STATUS: STATUS_UP_TO_DATE})
# Mark both nodes as unhealthy
for _ in range(5):
if autoscaler.updaters:
time.sleep(0.05)
autoscaler.update()
lm.last_heartbeat_time_by_ip["172.0.0.0"] = 0
lm.last_heartbeat_time_by_ip["172.0.0.1"] = 0
# Expect both updates to be successful, no nodes in updating state
assert mock_metrics.successful_updates.inc.call_count == 2
assert mock_metrics.worker_update_time.observe.call_count == 2
mock_metrics.updating_nodes.set.assert_called_with(0)
assert not autoscaler.updaters
# Set up process runner to terminate worker 0 during missed heartbeat
# recovery and also cause the updater to fail. |
Tests autoscaler handling a node getting terminated during an update
triggered by the node missing a heartbeat.
Extension of testRecoverUnhealthyWorkers.
In this test, two nodes miss a heartbeat.
One of them (node 0) is terminated during its recovery update.
The other (node 1) just fails its update.
When processing completed updates, the autoscaler terminates node 1
but does not try to terminate node 0 again.
| 65 | 134 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def testNodeTerminatedDuringUpdate(self):
cluster_config = copy.deepcopy(MOCK_DEFAULT_CONFIG)
cluster_config["available_node_types"]["ray.worker.default"]["min_workers"] = 2
cluster_config["worker_start_ray_commands"] = ["ray_start_cmd"]
# Don't need the extra node type or a docker config.
cluster_config["head_node_type"] = ["ray.worker.default"]
del cluster_config["available_node_types"]["ray.head.default"]
del cluster_config["docker"]
config_path = self.write_config(cluster_config)
self.provider = MockProvider()
runner = MockProcessRunner()
lm = LoadMetrics()
mock_metrics = Mock(spec=AutoscalerPrometheusMetrics())
autoscaler = MockAutoscaler(
config_path,
lm,
MockNodeInfoStub(),
max_failures=0,
process_runner=runner,
update_interval_s=0,
prom_metrics=mock_metrics,
)
# Scale up to two up-to-date workers
autoscaler.update()
self.waitForNodes(2)
self.provider.finish_starting_nodes()
autoscaler.update()
self.waitForNodes(2, tag_filters={TAG_RAY_NODE_STATUS: STATUS_UP_TO_DATE})
# Mark both nodes as unhealthy
for _ in range(5):
if autoscaler.updaters:
time.sleep(0.05)
autoscaler.update()
lm.last_heartbeat_time_by_ip["172.0.0.0"] = 0
lm.last_heartbeat_time_by_ip["172.0.0.1"] = 0
# Expect both updates to be successful, no nodes in updating state
assert mock_metrics.successful_updates.inc.call_count == 2
assert mock_metrics.worker_update_time.observe.call_count == 2
mock_metrics.updating_nodes.set.assert_called_with(0)
assert not autoscaler.updaters
# Set up process runner to terminate worker 0 during missed heartbeat
# recovery and also cause the updater to fail.
```
###Assistant :
Tests autoscaler handling a node getting terminated during an update
triggered by the node missing a heartbeat.
Extension of testRecoverUnhealthyWorkers.
In this test, two nodes miss a heartbeat.
One of them (node 0) is terminated during its recovery update.
The other (node 1) just fails its update.
When processing completed updates, the autoscaler terminates node 1
but does not try to terminate node 0 again.
|
2,006 | def call_cr(self, other_args):
parser = argparse.ArgumentParser(
prog="cr",
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description=,
)
parser.add_argument(
"-t",
"--type",
dest="type",
type=str,
help="Select interest rate type",
default="supply",
choices=["borrow", "supply"],
)
parser.add_argument(
"-c",
"--cryptocurrrencies",
dest="cryptos",
type=loanscan_model.check_valid_coin,
help=f,
default="BTC,ETH,USDT,USDC",
)
parser.add_argument(
"-p",
"--platforms",
dest="platforms",
type=loanscan_model.check_valid_platform,
help=f,
default="BlockFi,Ledn,SwissBorg,Youhodler",
)
if other_args and "-" not in other_args[0][0]:
other_args.insert(0, "-t")
ns_parser = parse_known_args_and_warn(
parser, other_args, EXPORT_ONLY_RAW_DATA_ALLOWED, limit=10
)
if ns_parser:
loanscan_view.display_crypto_rates(
rate_type=ns_parser.type,
cryptos=ns_parser.cryptos,
platforms=ns_parser.platforms,
limit=ns_parser.limit,
export=ns_parser.export,
)
| Process cr command
Displays crypto {borrow,supply} interest rates for cryptocurrencies across several platforms.
You can select rate type with --type {borrow,supply}
You can display only N number of platforms with --limit parameter.
Cryptocurrencies to search interest rates for separated by comma.
Default: BTC,ETH,USDT,USDC. Options: {",".join(loanscan_model.CRYPTOS)}
Platforms to search interest rates in separated by comma.
Default: BlockFi,Ledn,SwissBorg,Youhodler. Options: {",".join(loanscan_model.PLATFORMS)} | 55 | 66 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def call_cr(self, other_args):
parser = argparse.ArgumentParser(
prog="cr",
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description=,
)
parser.add_argument(
"-t",
"--type",
dest="type",
type=str,
help="Select interest rate type",
default="supply",
choices=["borrow", "supply"],
)
parser.add_argument(
"-c",
"--cryptocurrrencies",
dest="cryptos",
type=loanscan_model.check_valid_coin,
help=f,
default="BTC,ETH,USDT,USDC",
)
parser.add_argument(
"-p",
"--platforms",
dest="platforms",
type=loanscan_model.check_valid_platform,
help=f,
default="BlockFi,Ledn,SwissBorg,Youhodler",
)
if other_args and "-" not in other_args[0][0]:
other_args.insert(0, "-t")
ns_parser = parse_known_args_and_warn(
parser, other_args, EXPORT_ONLY_RAW_DATA_ALLOWED, limit=10
)
if ns_parser:
loanscan_view.display_crypto_rates(
rate_type=ns_parser.type,
cryptos=ns_parser.cryptos,
platforms=ns_parser.platforms,
limit=ns_parser.limit,
export=ns_parser.export,
)
```
###Assistant : Process cr command
Displays crypto {borrow,supply} interest rates for cryptocurrencies across several platforms.
You can select rate type with --type {borrow,supply}
You can display only N number of platforms with --limit parameter.
Cryptocurrencies to search interest rates for separated by comma.
Default: BTC,ETH,USDT,USDC. Options: {",".join(loanscan_model.CRYPTOS)}
Platforms to search interest rates in separated by comma.
Default: BlockFi,Ledn,SwissBorg,Youhodler. Options: {",".join(loanscan_model.PLATFORMS)}
|
2,007 | def export(self):
result = {}
for name, module in self.nas_modules:
if name not in result:
result[name] = module.export()
return result
|
Export the NAS result, ideally the best choice of each nas_modules.
You may implement an ``export`` method for your customized nas_module.
Returns
--------
result : Dict[str, int]
Keys are names of nas_modules, and values are the choice indices of them.
| 40 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def export(self):
result = {}
for name, module in self.nas_modules:
if name not in result:
result[name] = module.export()
return result
```
###Assistant :
Export the NAS result, ideally the best choice of each nas_modules.
You may implement an ``export`` method for your customized nas_module.
Returns
--------
result : Dict[str, int]
Keys are names of nas_modules, and values are the choice indices of them.
|
2,008 | def clean_pipeline_string(self, individual):
dirty_string = str(individual)
# There are many parameter prefixes in the pipeline strings, used solely for
# making the terminal name unique, eg. LinearSVC__.
parameter_prefixes = [
(m.start(), m.end()) for m in re.finditer(r", [\w]+__", dirty_string)
]
# We handle them in reverse so we do not mess up indices
pretty = dirty_string
for (start, end) in reversed(parameter_prefixes):
pretty = pretty[: start + 2] + pretty[end:]
return pretty
| Provide a string of the individual without the parameter prefixes.
Parameters
----------
individual: individual
Individual which should be represented by a pretty string
Returns
-------
A string like str(individual), but with parameter prefixes removed.
| 34 | 70 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def clean_pipeline_string(self, individual):
dirty_string = str(individual)
# There are many parameter prefixes in the pipeline strings, used solely for
# making the terminal name unique, eg. LinearSVC__.
parameter_prefixes = [
(m.start(), m.end()) for m in re.finditer(r", [\w]+__", dirty_string)
]
# We handle them in reverse so we do not mess up indices
pretty = dirty_string
for (start, end) in reversed(parameter_prefixes):
pretty = pretty[: start + 2] + pretty[end:]
return pretty
```
###Assistant : Provide a string of the individual without the parameter prefixes.
Parameters
----------
individual: individual
Individual which should be represented by a pretty string
Returns
-------
A string like str(individual), but with parameter prefixes removed.
|
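The same prefix-stripping idea can be seen on a small hypothetical pipeline string (the string below is an assumption for illustration, not taken from TPOT):
```Python
import re

dirty = "LinearSVC(input_matrix, LinearSVC__C=0.1, LinearSVC__penalty=l2)"
spans = [(m.start(), m.end()) for m in re.finditer(r", [\w]+__", dirty)]
pretty = dirty
# Work backwards so earlier indices stay valid while slicing prefixes out.
for start, end in reversed(spans):
    pretty = pretty[: start + 2] + pretty[end:]
print(pretty)  # LinearSVC(input_matrix, C=0.1, penalty=l2)
```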
2,009 | def _lin_eq2dict(a, symset):
if a in symset:
return S.Zero, {a: S.One}
elif a.is_Add:
terms_list = defaultdict(list)
coeff_list = []
for ai in a.args:
ci, ti = _lin_eq2dict(ai, symset)
coeff_list.append(ci)
for mij, cij in ti.items():
terms_list[mij].append(cij)
coeff = Add(*coeff_list)
terms = {sym: Add(*coeffs) for sym, coeffs in terms_list.items()}
return coeff, terms
elif a.is_Mul:
terms = terms_coeff = None
coeff_list = []
for ai in a.args:
ci, ti = _lin_eq2dict(ai, symset)
if not ti:
coeff_list.append(ci)
elif terms is None:
terms = ti
terms_coeff = ci
else:
raise PolyNonlinearError
coeff = Mul(*coeff_list)
if terms is None:
return coeff, {}
else:
terms = {sym: coeff * c for sym, c in terms.items()}
return coeff * terms_coeff, terms
elif a.is_Equality:
return _lin_eq2dict(a.lhs - a.rhs, symset)
elif not a.has_free(*symset):
return a, {}
else:
raise PolyNonlinearError
| Efficiently convert a linear equation to a dict of coefficients | 10 | 129 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _lin_eq2dict(a, symset):
if a in symset:
return S.Zero, {a: S.One}
elif a.is_Add:
terms_list = defaultdict(list)
coeff_list = []
for ai in a.args:
ci, ti = _lin_eq2dict(ai, symset)
coeff_list.append(ci)
for mij, cij in ti.items():
terms_list[mij].append(cij)
coeff = Add(*coeff_list)
terms = {sym: Add(*coeffs) for sym, coeffs in terms_list.items()}
return coeff, terms
elif a.is_Mul:
terms = terms_coeff = None
coeff_list = []
for ai in a.args:
ci, ti = _lin_eq2dict(ai, symset)
if not ti:
coeff_list.append(ci)
elif terms is None:
terms = ti
terms_coeff = ci
else:
raise PolyNonlinearError
coeff = Mul(*coeff_list)
if terms is None:
return coeff, {}
else:
terms = {sym: coeff * c for sym, c in terms.items()}
return coeff * terms_coeff, terms
elif a.is_Equality:
return _lin_eq2dict(a.lhs - a.rhs, symset)
elif not a.has_free(*symset):
return a, {}
else:
raise PolyNonlinearError
```
###Assistant : Efficiently convert a linear equation to a dict of coefficients
|
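A related, hedged illustration with SymPy's public API (not the internal helper above) shows the kind of coefficient mapping being built:
```Python
from sympy import symbols

x, y = symbols("x y")
expr = 3*x + 2*y + 7
# Constant term keyed by 1, symbol coefficients keyed by the symbols themselves.
print(dict(expr.as_coefficients_dict()))  # e.g. {x: 3, y: 2, 1: 7}
```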
2,010 | def internal_ip(self, node_id):
ip = (
self._get_cached_node(node_id=node_id)["internal_ip"]
or self._get_node(node_id=node_id)["internal_ip"]
)
return ip
| Returns the internal ip (Ray ip) of the given node. | 10 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def internal_ip(self, node_id):
ip = (
self._get_cached_node(node_id=node_id)["internal_ip"]
or self._get_node(node_id=node_id)["internal_ip"]
)
return ip
```
###Assistant : Returns the internal ip (Ray ip) of the given node.
|
2,011 | def _write_file(self, source, dest, type, compress=False):
start = self.lib.tell()
length = os.stat(source).st_size
with open(source, 'rb') as f:
if compress:
buffer = bytearray(16 * 1024)
compressor = zlib.compressobj(self.LEVEL)
while 1:
read = f.readinto(buffer)
if not read:
break
self.lib.write(compressor.compress(buffer[:read]))
self.lib.write(compressor.flush())
else:
shutil.copyfileobj(f, self.lib)
self.toc.add(start, self.lib.tell() - start, length, int(compress), type, dest)
|
Stream copy a large file into the archive and update the table of contents.
| 14 | 49 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _write_file(self, source, dest, type, compress=False):
start = self.lib.tell()
length = os.stat(source).st_size
with open(source, 'rb') as f:
if compress:
buffer = bytearray(16 * 1024)
compressor = zlib.compressobj(self.LEVEL)
while 1:
read = f.readinto(buffer)
if not read:
break
self.lib.write(compressor.compress(buffer[:read]))
self.lib.write(compressor.flush())
else:
shutil.copyfileobj(f, self.lib)
self.toc.add(start, self.lib.tell() - start, length, int(compress), type, dest)
```
###Assistant :
Stream copy a large file into the archive and update the table of contents.
|
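The streaming-compression pattern above (read in chunks, compress incrementally, track offsets for a table of contents) can be sketched standalone; the function name and return values here are our own, not PyInstaller's:
```Python
import os
import zlib

def write_compressed(source, archive, level=zlib.Z_DEFAULT_COMPRESSION):
    # Stream-copy `source` into the open binary file object `archive`,
    # compressing on the fly in 16 KiB chunks.
    start = archive.tell()
    length = os.stat(source).st_size
    compressor = zlib.compressobj(level)
    with open(source, "rb") as f:
        while True:
            chunk = f.read(16 * 1024)
            if not chunk:
                break
            archive.write(compressor.compress(chunk))
    archive.write(compressor.flush())
    # (offset in archive, stored size, original size) -- what a TOC entry needs.
    return start, archive.tell() - start, length
```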
2,012 | def incidence_matrix(G, nodelist=None, edgelist=None, oriented=False, weight=None):
import scipy as sp
import scipy.sparse # call as sp.sparse
if nodelist is None:
nodelist = list(G)
if edgelist is None:
if G.is_multigraph():
edgelist = list(G.edges(keys=True))
else:
edgelist = list(G.edges())
A = sp.sparse.lil_array((len(nodelist), len(edgelist)))
node_index = {node: i for i, node in enumerate(nodelist)}
for ei, e in enumerate(edgelist):
(u, v) = e[:2]
if u == v:
continue # self loops give zero column
try:
ui = node_index[u]
vi = node_index[v]
except KeyError as err:
raise nx.NetworkXError(
f"node {u} or {v} in edgelist but not in nodelist"
) from err
if weight is None:
wt = 1
else:
if G.is_multigraph():
ekey = e[2]
wt = G[u][v][ekey].get(weight, 1)
else:
wt = G[u][v].get(weight, 1)
if oriented:
A[ui, ei] = -wt
A[vi, ei] = wt
else:
A[ui, ei] = wt
A[vi, ei] = wt
import warnings
warnings.warn(
"incidence_matrix will return a scipy.sparse array instead of a matrix in Networkx 3.0.",
FutureWarning,
stacklevel=2,
)
# TODO: Rm sp.sparse.csc_matrix in Networkx 3.0
return A.asformat("csc")
| Returns incidence matrix of G.
The incidence matrix assigns each row to a node and each column to an edge.
For a standard incidence matrix a 1 appears wherever a row's node is
incident on the column's edge. For an oriented incidence matrix each
edge is assigned an orientation (arbitrarily for undirected and aligning to
direction for directed). A -1 appears for the source (tail) of an edge and
1 for the destination (head) of the edge. The elements are zero otherwise.
Parameters
----------
G : graph
A NetworkX graph
nodelist : list, optional (default= all nodes in G)
The rows are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
edgelist : list, optional (default= all edges in G)
The columns are ordered according to the edges in edgelist.
If edgelist is None, then the ordering is produced by G.edges().
oriented: bool, optional (default=False)
If True, matrix elements are +1 or -1 for the head or tail node
respectively of each edge. If False, +1 occurs at both nodes.
weight : string or None, optional (default=None)
The edge data key used to provide each value in the matrix.
If None, then each edge has weight 1. Edge weights, if used,
should be positive so that the orientation can provide the sign.
Returns
-------
A : SciPy sparse matrix
The incidence matrix of G.
Notes
-----
For MultiGraph/MultiDiGraph, the edges in edgelist should be
(u,v,key) 3-tuples.
"Networks are the best discrete model for so many problems in
applied mathematics" [1]_.
References
----------
.. [1] Gil Strang, Network applications: A = incidence matrix,
http://videolectures.net/mit18085f07_strang_lec03/
| 272 | 164 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def incidence_matrix(G, nodelist=None, edgelist=None, oriented=False, weight=None):
import scipy as sp
import scipy.sparse # call as sp.sparse
if nodelist is None:
nodelist = list(G)
if edgelist is None:
if G.is_multigraph():
edgelist = list(G.edges(keys=True))
else:
edgelist = list(G.edges())
A = sp.sparse.lil_array((len(nodelist), len(edgelist)))
node_index = {node: i for i, node in enumerate(nodelist)}
for ei, e in enumerate(edgelist):
(u, v) = e[:2]
if u == v:
continue # self loops give zero column
try:
ui = node_index[u]
vi = node_index[v]
except KeyError as err:
raise nx.NetworkXError(
f"node {u} or {v} in edgelist but not in nodelist"
) from err
if weight is None:
wt = 1
else:
if G.is_multigraph():
ekey = e[2]
wt = G[u][v][ekey].get(weight, 1)
else:
wt = G[u][v].get(weight, 1)
if oriented:
A[ui, ei] = -wt
A[vi, ei] = wt
else:
A[ui, ei] = wt
A[vi, ei] = wt
import warnings
warnings.warn(
"incidence_matrix will return a scipy.sparse array instead of a matrix in Networkx 3.0.",
FutureWarning,
stacklevel=2,
)
# TODO: Rm sp.sparse.csc_matrix in Networkx 3.0
return A.asformat("csc")
```
###Assistant : Returns incidence matrix of G.
The incidence matrix assigns each row to a node and each column to an edge.
For a standard incidence matrix a 1 appears wherever a row's node is
incident on the column's edge. For an oriented incidence matrix each
edge is assigned an orientation (arbitrarily for undirected and aligning to
direction for directed). A -1 appears for the source (tail) of an edge and
1 for the destination (head) of the edge. The elements are zero otherwise.
Parameters
----------
G : graph
A NetworkX graph
nodelist : list, optional (default= all nodes in G)
The rows are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
edgelist : list, optional (default= all edges in G)
The columns are ordered according to the edges in edgelist.
If edgelist is None, then the ordering is produced by G.edges().
oriented: bool, optional (default=False)
If True, matrix elements are +1 or -1 for the head or tail node
respectively of each edge. If False, +1 occurs at both nodes.
weight : string or None, optional (default=None)
The edge data key used to provide each value in the matrix.
If None, then each edge has weight 1. Edge weights, if used,
should be positive so that the orientation can provide the sign.
Returns
-------
A : SciPy sparse matrix
The incidence matrix of G.
Notes
-----
For MultiGraph/MultiDiGraph, the edges in edgelist should be
(u,v,key) 3-tuples.
"Networks are the best discrete model for so many problems in
applied mathematics" [1]_.
References
----------
.. [1] Gil Strang, Network applications: A = incidence matrix,
http://videolectures.net/mit18085f07_strang_lec03/
|
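A short usage sketch of the public NetworkX function documented above (output shown for an assumed recent NetworkX/SciPy; the exact sparse container type varies by version):
```Python
import networkx as nx

G = nx.DiGraph([(0, 1), (1, 2)])
A = nx.incidence_matrix(G, oriented=True)
print(A.toarray())
# [[-1.  0.]
#  [ 1. -1.]
#  [ 0.  1.]]
```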
2,013 | def call(self, features, training=None):
if not isinstance(features, dict):
raise ValueError(
"We expected a dictionary here. Instead we got: ", features
)
if training is None:
training = backend.learning_phase()
transformation_cache = (
tf.__internal__.feature_column.FeatureTransformationCache(features)
)
output_tensors = []
sequence_lengths = []
for column in self._feature_columns:
with backend.name_scope(column.name):
try:
(
dense_tensor,
sequence_length,
) = column.get_sequence_dense_tensor(
transformation_cache,
self._state_manager,
training=training,
)
except TypeError:
(
dense_tensor,
sequence_length,
) = column.get_sequence_dense_tensor(
transformation_cache, self._state_manager
)
# Flattens the final dimension to produce a 3D Tensor.
output_tensors.append(
self._process_dense_tensor(column, dense_tensor)
)
sequence_lengths.append(sequence_length)
# Check and process sequence lengths.
kfc._verify_static_batch_size_equality(
sequence_lengths, self._feature_columns
)
sequence_length = _assert_all_equal_and_return(sequence_lengths)
return self._verify_and_concat_tensors(output_tensors), sequence_length
| Returns sequence input corresponding to the `feature_columns`.
Args:
features: A dict mapping keys to tensors.
training: Python boolean or None, indicating whether the layer is
being run in training mode. This argument is passed to the call
method of any `FeatureColumn` that takes a `training` argument. For
example, if a `FeatureColumn` performed dropout, the column could
expose a `training` argument to control whether the dropout should
be applied. If `None`, defaults to
`tf.keras.backend.learning_phase()`.
Returns:
An `(input_layer, sequence_length)` tuple where:
- input_layer: A float `Tensor` of shape `[batch_size, T, D]`.
`T` is the maximum sequence length for this batch, which could
differ from batch to batch. `D` is the sum of `num_elements` for
all `feature_columns`.
- sequence_length: An int `Tensor` of shape `[batch_size]`. The
sequence length for each example.
Raises:
ValueError: If features are not a dictionary.
| 137 | 98 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def call(self, features, training=None):
if not isinstance(features, dict):
raise ValueError(
"We expected a dictionary here. Instead we got: ", features
)
if training is None:
training = backend.learning_phase()
transformation_cache = (
tf.__internal__.feature_column.FeatureTransformationCache(features)
)
output_tensors = []
sequence_lengths = []
for column in self._feature_columns:
with backend.name_scope(column.name):
try:
(
dense_tensor,
sequence_length,
) = column.get_sequence_dense_tensor(
transformation_cache,
self._state_manager,
training=training,
)
except TypeError:
(
dense_tensor,
sequence_length,
) = column.get_sequence_dense_tensor(
transformation_cache, self._state_manager
)
# Flattens the final dimension to produce a 3D Tensor.
output_tensors.append(
self._process_dense_tensor(column, dense_tensor)
)
sequence_lengths.append(sequence_length)
# Check and process sequence lengths.
kfc._verify_static_batch_size_equality(
sequence_lengths, self._feature_columns
)
sequence_length = _assert_all_equal_and_return(sequence_lengths)
return self._verify_and_concat_tensors(output_tensors), sequence_length
```
###Assistant : Returns sequence input corresponding to the `feature_columns`.
Args:
features: A dict mapping keys to tensors.
training: Python boolean or None, indicating whether the layer is
being run in training mode. This argument is passed to the call
method of any `FeatureColumn` that takes a `training` argument. For
example, if a `FeatureColumn` performed dropout, the column could
expose a `training` argument to control whether the dropout should
be applied. If `None`, defaults to
`tf.keras.backend.learning_phase()`.
Returns:
An `(input_layer, sequence_length)` tuple where:
- input_layer: A float `Tensor` of shape `[batch_size, T, D]`.
`T` is the maximum sequence length for this batch, which could
differ from batch to batch. `D` is the sum of `num_elements` for
all `feature_columns`.
- sequence_length: An int `Tensor` of shape `[batch_size]`. The
sequence length for each example.
Raises:
ValueError: If features are not a dictionary.
|
2,014 | def feed_eof(self):
self._incoming.write_eof()
ssldata, appdata = self.feed_ssldata(b'')
assert appdata == [] or appdata == [b'']
| Send a potentially "ragged" EOF.
This method will raise an SSL_ERROR_EOF exception if the EOF is
unexpected.
| 17 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def feed_eof(self):
self._incoming.write_eof()
ssldata, appdata = self.feed_ssldata(b'')
assert appdata == [] or appdata == [b'']
```
###Assistant : Send a potentially "ragged" EOF.
This method will raise an SSL_ERROR_EOF exception if the EOF is
unexpected.
|
2,015 | def screen(self) -> Screen:
try:
return self._screen_stack[-1]
except IndexError:
raise ScreenStackError("No screens on stack") from None
| Get the current screen.
Raises:
ScreenStackError: If there are no screens on the stack.
Returns:
Screen: The currently active screen.
| 20 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def screen(self) -> Screen:
try:
return self._screen_stack[-1]
except IndexError:
raise ScreenStackError("No screens on stack") from None
```
###Assistant : Get the current screen.
Raises:
ScreenStackError: If there are no screens on the stack.
Returns:
Screen: The currently active screen.
|
2,016 | def test_empty_dunder_path_no_dunder_file(self):
with self.assertRaises(ImproperlyConfigured):
AppConfig("label", Stub(__path__=[]))
| If the __path__ attr is empty and there is no __file__, raise. | 12 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_empty_dunder_path_no_dunder_file(self):
with self.assertRaises(ImproperlyConfigured):
AppConfig("label", Stub(__path__=[]))
```
###Assistant : If the __path__ attr is empty and there is no __file__, raise.
|
2,017 | def train_epoch_ch3(net, train_iter, loss, updater):
# Sum of training loss, sum of training accuracy, no. of examples
metric = Accumulator(3)
for X, y in train_iter:
# Compute gradients and update parameters
with tf.GradientTape() as tape:
y_hat = net(X)
# Keras implementations for loss takes (labels, predictions)
# instead of (predictions, labels) that users might implement
# in this book, e.g. `cross_entropy` that we implemented above
if isinstance(loss, tf.keras.losses.Loss):
l = loss(y, y_hat)
else:
l = loss(y_hat, y)
if isinstance(updater, tf.keras.optimizers.Optimizer):
params = net.trainable_variables
grads = tape.gradient(l, params)
updater.apply_gradients(zip(grads, params))
else:
updater(X.shape[0], tape.gradient(l, updater.params))
# Keras loss by default returns the average loss in a batch
l_sum = l * float(tf.size(y)) if isinstance(
loss, tf.keras.losses.Loss) else tf.reduce_sum(l)
metric.add(l_sum, accuracy(y_hat, y), tf.size(y))
# Return training loss and training accuracy
return metric[0] / metric[2], metric[1] / metric[2]
| The training loop defined in Chapter 3.
Defined in :numref:`sec_softmax_scratch` | 10 | 134 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def train_epoch_ch3(net, train_iter, loss, updater):
# Sum of training loss, sum of training accuracy, no. of examples
metric = Accumulator(3)
for X, y in train_iter:
# Compute gradients and update parameters
with tf.GradientTape() as tape:
y_hat = net(X)
# Keras implementations for loss takes (labels, predictions)
# instead of (predictions, labels) that users might implement
# in this book, e.g. `cross_entropy` that we implemented above
if isinstance(loss, tf.keras.losses.Loss):
l = loss(y, y_hat)
else:
l = loss(y_hat, y)
if isinstance(updater, tf.keras.optimizers.Optimizer):
params = net.trainable_variables
grads = tape.gradient(l, params)
updater.apply_gradients(zip(grads, params))
else:
updater(X.shape[0], tape.gradient(l, updater.params))
# Keras loss by default returns the average loss in a batch
l_sum = l * float(tf.size(y)) if isinstance(
loss, tf.keras.losses.Loss) else tf.reduce_sum(l)
metric.add(l_sum, accuracy(y_hat, y), tf.size(y))
# Return training loss and training accuracy
return metric[0] / metric[2], metric[1] / metric[2]
```
###Assistant : The training loop defined in Chapter 3.
Defined in :numref:`sec_softmax_scratch`
|
2,018 | def all_estimators(type_filter=None):
# lazy import to avoid circular imports from sklearn.base
from . import IS_PYPY
from ._testing import ignore_warnings
from ..base import (
BaseEstimator,
ClassifierMixin,
RegressorMixin,
TransformerMixin,
ClusterMixin,
)
| Get a list of all estimators from `sklearn`.
This function crawls the module and gets all classes that inherit
from BaseEstimator. Classes that are defined in test-modules are not
included.
Parameters
----------
type_filter : {"classifier", "regressor", "cluster", "transformer"} \
or list of such str, default=None
Which kind of estimators should be returned. If None, no filter is
applied and all estimators are returned. Possible values are
'classifier', 'regressor', 'cluster' and 'transformer' to get
estimators only of these specific types, or a list of these to
get the estimators that fit at least one of the types.
Returns
-------
estimators : list of tuples
List of (name, class), where ``name`` is the class name as string
and ``class`` is the actual type of the class.
| 124 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def all_estimators(type_filter=None):
# lazy import to avoid circular imports from sklearn.base
from . import IS_PYPY
from ._testing import ignore_warnings
from ..base import (
BaseEstimator,
ClassifierMixin,
RegressorMixin,
TransformerMixin,
ClusterMixin,
)
```
###Assistant : Get a list of all estimators from `sklearn`.
This function crawls the module and gets all classes that inherit
from BaseEstimator. Classes that are defined in test-modules are not
included.
Parameters
----------
type_filter : {"classifier", "regressor", "cluster", "transformer"} \
or list of such str, default=None
Which kind of estimators should be returned. If None, no filter is
applied and all estimators are returned. Possible values are
'classifier', 'regressor', 'cluster' and 'transformer' to get
estimators only of these specific types, or a list of these to
get the estimators that fit at least one of the types.
Returns
-------
estimators : list of tuples
List of (name, class), where ``name`` is the class name as string
and ``class`` is the actual type of the class.
|
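A hedged usage sketch of the public helper (the exact count and first entry depend on the scikit-learn version installed):
```Python
from sklearn.utils import all_estimators

classifiers = all_estimators(type_filter="classifier")
print(len(classifiers))  # number of classifier classes in this sklearn version
print(classifiers[0])    # e.g. ('AdaBoostClassifier', <class '...AdaBoostClassifier'>)
```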
2,019 | def test_legend_auto5():
fig, axs = plt.subplots(ncols=2, figsize=(9.6, 4.8))
leg_bboxes = []
for ax, loc in zip(axs.flat, ("center", "best")):
# An Ellipse patch at the top, a U-shaped Polygon patch at the
# bottom and a ring-like Wedge patch: the correct placement of
# the legend should be in the center.
for _patch in [
mpatches.Ellipse(
xy=(0.5, 0.9), width=0.8, height=0.2, fc="C1"),
mpatches.Polygon(np.array([
[0, 1], [0, 0], [1, 0], [1, 1], [0.9, 1.0], [0.9, 0.1],
[0.1, 0.1], [0.1, 1.0], [0.1, 1.0]]), fc="C1"),
mpatches.Wedge((0.5, 0.5), 0.5, 0, 360, width=0.05, fc="C0")
]:
ax.add_patch(_patch)
ax.plot([0.1, 0.9], [0.9, 0.9], label="A segment") # sthg to label
leg = ax.legend(loc=loc)
fig.canvas.draw()
leg_bboxes.append(
leg.get_window_extent().transformed(ax.transAxes.inverted()))
assert_allclose(leg_bboxes[1].bounds, leg_bboxes[0].bounds)
@image_comparison(['legend_various_labels'], remove_text=True) |
Check that the automatic placement handle a rather complex
case with non rectangular patch. Related to issue #9580.
| 18 | 109 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_legend_auto5():
fig, axs = plt.subplots(ncols=2, figsize=(9.6, 4.8))
leg_bboxes = []
for ax, loc in zip(axs.flat, ("center", "best")):
# An Ellipse patch at the top, a U-shaped Polygon patch at the
# bottom and a ring-like Wedge patch: the correct placement of
# the legend should be in the center.
for _patch in [
mpatches.Ellipse(
xy=(0.5, 0.9), width=0.8, height=0.2, fc="C1"),
mpatches.Polygon(np.array([
[0, 1], [0, 0], [1, 0], [1, 1], [0.9, 1.0], [0.9, 0.1],
[0.1, 0.1], [0.1, 1.0], [0.1, 1.0]]), fc="C1"),
mpatches.Wedge((0.5, 0.5), 0.5, 0, 360, width=0.05, fc="C0")
]:
ax.add_patch(_patch)
ax.plot([0.1, 0.9], [0.9, 0.9], label="A segment") # sthg to label
leg = ax.legend(loc=loc)
fig.canvas.draw()
leg_bboxes.append(
leg.get_window_extent().transformed(ax.transAxes.inverted()))
assert_allclose(leg_bboxes[1].bounds, leg_bboxes[0].bounds)
@image_comparison(['legend_various_labels'], remove_text=True)
```
###Assistant :
Check that the automatic placement handle a rather complex
case with non rectangular patch. Related to issue #9580.
|
2,020 | def _toggle_cursor_visible(self):
if time.monotonic() - self._last_keypress_time > self.cursor_blink_period:
self._cursor_blink_visible = not self._cursor_blink_visible
self.refresh()
| Manages the blinking of the cursor - ensuring blinking only starts when the
user hasn't pressed a key in some time | 21 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _toggle_cursor_visible(self):
if time.monotonic() - self._last_keypress_time > self.cursor_blink_period:
self._cursor_blink_visible = not self._cursor_blink_visible
self.refresh()
```
###Assistant : Manages the blinking of the cursor - ensuring blinking only starts when the
user hasn't pressed a key in some time
|
2,021 | def parse_semver(version, operator) -> Optional[SemverFilter]:
(operator, negated) = handle_operator_negation(operator)
try:
operator = OPERATOR_TO_DJANGO[operator]
except KeyError:
raise InvalidSearchQuery("Invalid operation 'IN' for semantic version filter.")
version = version if "@" in version else f"{SEMVER_FAKE_PACKAGE}@{version}"
parsed = parse_release_relay(version)
parsed_version = parsed.get("version_parsed")
if parsed_version:
# Convert `pre` to always be a string
prerelease = parsed_version["pre"] if parsed_version["pre"] else ""
semver_filter = SemverFilter(
operator,
[
parsed_version["major"],
parsed_version["minor"],
parsed_version["patch"],
parsed_version["revision"],
0 if prerelease else 1,
prerelease,
],
negated=negated,
)
if parsed["package"] and parsed["package"] != SEMVER_FAKE_PACKAGE:
semver_filter.package = parsed["package"]
return semver_filter
else:
# Try to parse as a wildcard match
package, version = version.split("@", 1)
version_parts = []
if version:
for part in version.split(".", 3):
if part in SEMVER_WILDCARDS:
break
try:
# We assume all ints for a wildcard match - not handling prerelease as
# part of these
version_parts.append(int(part))
except ValueError:
raise InvalidSearchQuery(INVALID_SEMVER_MESSAGE)
package = package if package and package != SEMVER_FAKE_PACKAGE else None
return SemverFilter("exact", version_parts, package, negated)
key_conversion_map: Mapping[
str,
Callable[[SearchFilter, str, Mapping[str, Union[int, str, datetime]]], Optional[Sequence[any]]],
] = {
"environment": _environment_filter_converter,
"message": _message_filter_converter,
TRANSACTION_STATUS_ALIAS: _transaction_status_filter_converter,
"issue.id": _issue_id_filter_converter,
USER_DISPLAY_ALIAS: _user_display_filter_converter,
ERROR_UNHANDLED_ALIAS: _error_unhandled_filter_converter,
"error.handled": _error_handled_filter_converter,
TEAM_KEY_TRANSACTION_ALIAS: _team_key_transaction_filter_converter,
RELEASE_STAGE_ALIAS: _release_stage_filter_converter,
SEMVER_ALIAS: _semver_filter_converter,
SEMVER_PACKAGE_ALIAS: _semver_package_filter_converter,
SEMVER_BUILD_ALIAS: _semver_build_filter_converter,
}
|
Attempts to parse a release version using our semver syntax. version should be in
format `<package_name>@<version>` or `<version>`, where package_name is a string and
version is a version string matching semver format (https://semver.org/). We've
slightly extended this format to allow up to 4 integers. EG
- sentry@1.2.3.4
- sentry@1.2.3.4-alpha
- 1.2.3.4
- 1.2.3.4-alpha
- 1.*
| 55 | 191 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def parse_semver(version, operator) -> Optional[SemverFilter]:
(operator, negated) = handle_operator_negation(operator)
try:
operator = OPERATOR_TO_DJANGO[operator]
except KeyError:
raise InvalidSearchQuery("Invalid operation 'IN' for semantic version filter.")
version = version if "@" in version else f"{SEMVER_FAKE_PACKAGE}@{version}"
parsed = parse_release_relay(version)
parsed_version = parsed.get("version_parsed")
if parsed_version:
# Convert `pre` to always be a string
prerelease = parsed_version["pre"] if parsed_version["pre"] else ""
semver_filter = SemverFilter(
operator,
[
parsed_version["major"],
parsed_version["minor"],
parsed_version["patch"],
parsed_version["revision"],
0 if prerelease else 1,
prerelease,
],
negated=negated,
)
if parsed["package"] and parsed["package"] != SEMVER_FAKE_PACKAGE:
semver_filter.package = parsed["package"]
return semver_filter
else:
# Try to parse as a wildcard match
package, version = version.split("@", 1)
version_parts = []
if version:
for part in version.split(".", 3):
if part in SEMVER_WILDCARDS:
break
try:
# We assume all ints for a wildcard match - not handling prerelease as
# part of these
version_parts.append(int(part))
except ValueError:
raise InvalidSearchQuery(INVALID_SEMVER_MESSAGE)
package = package if package and package != SEMVER_FAKE_PACKAGE else None
return SemverFilter("exact", version_parts, package, negated)
key_conversion_map: Mapping[
str,
Callable[[SearchFilter, str, Mapping[str, Union[int, str, datetime]]], Optional[Sequence[any]]],
] = {
"environment": _environment_filter_converter,
"message": _message_filter_converter,
TRANSACTION_STATUS_ALIAS: _transaction_status_filter_converter,
"issue.id": _issue_id_filter_converter,
USER_DISPLAY_ALIAS: _user_display_filter_converter,
ERROR_UNHANDLED_ALIAS: _error_unhandled_filter_converter,
"error.handled": _error_handled_filter_converter,
TEAM_KEY_TRANSACTION_ALIAS: _team_key_transaction_filter_converter,
RELEASE_STAGE_ALIAS: _release_stage_filter_converter,
SEMVER_ALIAS: _semver_filter_converter,
SEMVER_PACKAGE_ALIAS: _semver_package_filter_converter,
SEMVER_BUILD_ALIAS: _semver_build_filter_converter,
}
```
###Assistant :
Attempts to parse a release version using our semver syntax. version should be in
format `<package_name>@<version>` or `<version>`, where package_name is a string and
version is a version string matching semver format (https://semver.org/). We've
slightly extended this format to allow up to 4 integers. EG
- sentry@1.2.3.4
- sentry@1.2.3.4-alpha
- 1.2.3.4
- 1.2.3.4-alpha
- 1.*
|
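To make the wildcard branch concrete, a hypothetical standalone walk-through (the wildcard token set below is an assumed stand-in for SEMVER_WILDCARDS):
```Python
version = "1.2.*"
wildcards = {"*", "X", "x"}  # assumed stand-in for SEMVER_WILDCARDS
version_parts = []
for part in version.split(".", 3):
    if part in wildcards:
        break
    version_parts.append(int(part))
print(version_parts)  # [1, 2] -- used for an "exact" prefix match on the parsed release
```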
2,022 | def test_not_logged_in_gives_403_to_ajax_requests(self):
# Get dashboard
response = self.client.get(
reverse("wagtailadmin_home"), HTTP_X_REQUESTED_WITH="XMLHttpRequest"
)
# AJAX requests should be given a 403 error instead of being redirected
self.assertEqual(response.status_code, 403)
|
This tests that a user who is not logged in is given a 403 error on AJAX requests
| 16 | 26 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_not_logged_in_gives_403_to_ajax_requests(self):
# Get dashboard
response = self.client.get(
reverse("wagtailadmin_home"), HTTP_X_REQUESTED_WITH="XMLHttpRequest"
)
# AJAX requests should be given a 403 error instead of being redirected
self.assertEqual(response.status_code, 403)
```
###Assistant :
This tests that a user who is not logged in is given a 403 error on AJAX requests
|
2,023 | def bytes_to_unicode(self) -> Dict[int, str]:
bs: List[int] = (
list(range(ord("!"), ord("~") + 1))
+ list(range(ord("¡"), ord("¬") + 1))
+ list(range(ord("®"), ord("ÿ") + 1))
)
cs: List[int] = bs[:]
n = 0
for b in range(2 ** 8):
if b not in bs:
bs.append(b)
cs.append(2 ** 8 + n)
n += 1
str_cs: List[str] = [chr(n) for n in cs]
return dict(zip(bs, str_cs))
|
Returns a list of utf-8 bytes and a corresponding list of unicode strings.
The reversible bpe codes work on unicode strings. This means you need a large #
of unicode characters in your vocab if you want to avoid UNKs. When you're at
something like a 10B token dataset you end up needing around 5K for decent
coverage. This is a significant percentage of your normal, say, 32K bpe vocab.
To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
And avoids mapping to whitespace/control characters the bpe code barfs on.
| 93 | 62 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def bytes_to_unicode(self) -> Dict[int, str]:
bs: List[int] = (
list(range(ord("!"), ord("~") + 1))
+ list(range(ord("¡"), ord("¬") + 1))
+ list(range(ord("®"), ord("ÿ") + 1))
)
cs: List[int] = bs[:]
n = 0
for b in range(2 ** 8):
if b not in bs:
bs.append(b)
cs.append(2 ** 8 + n)
n += 1
str_cs: List[str] = [chr(n) for n in cs]
return dict(zip(bs, str_cs))
```
###Assistant :
Returns a list of utf-8 bytes and a corresponding list of unicode strings.
The reversible bpe codes work on unicode strings. This means you need a large #
of unicode characters in your vocab if you want to avoid UNKs. When you're at
something like a 10B token dataset you end up needing around 5K for decent
coverage. This is a significant percentage of your normal, say, 32K bpe vocab.
To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
And avoids mapping to whitespace/control characters the bpe code barfs on.
|
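A standalone sketch of the same table (a module-level function rather than the method above) and what one entry looks like:
```Python
def bytes_to_unicode():
    # Printable latin-1 bytes map to themselves; the rest get shifted above 255.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(2 ** 8):
        if b not in bs:
            bs.append(b)
            cs.append(2 ** 8 + n)
            n += 1
    return dict(zip(bs, (chr(c) for c in cs)))

table = bytes_to_unicode()
print(len(table))  # 256 -- every byte value has a printable stand-in
print(table[32])   # 'Ġ' -- the space byte maps to a visible character
```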
2,024 | async def test_get_events_custom_calendars(hass, calendar, get_api_events):
config = dict(CALDAV_CONFIG)
config["custom_calendars"] = [
{"name": "Private", "calendar": "Private", "search": "This is a normal event"}
]
assert await async_setup_component(hass, "calendar", {"calendar": config})
await hass.async_block_till_done()
events = await get_api_events("calendar.private_private")
assert events == [
{
"end": {"dateTime": "2017-11-27T10:00:00-08:00"},
"start": {"dateTime": "2017-11-27T09:00:00-08:00"},
"summary": "This is a normal event",
"location": "Hamburg",
"description": "Surprisingly rainy",
"uid": None,
"recurrence_id": None,
"rrule": None,
}
]
| Test that only searched events are returned on API. | 9 | 64 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_get_events_custom_calendars(hass, calendar, get_api_events):
config = dict(CALDAV_CONFIG)
config["custom_calendars"] = [
{"name": "Private", "calendar": "Private", "search": "This is a normal event"}
]
assert await async_setup_component(hass, "calendar", {"calendar": config})
await hass.async_block_till_done()
events = await get_api_events("calendar.private_private")
assert events == [
{
"end": {"dateTime": "2017-11-27T10:00:00-08:00"},
"start": {"dateTime": "2017-11-27T09:00:00-08:00"},
"summary": "This is a normal event",
"location": "Hamburg",
"description": "Surprisingly rainy",
"uid": None,
"recurrence_id": None,
"rrule": None,
}
]
```
###Assistant : Test that only searched events are returned on API.
|
2,025 | def leaf_symbols(self) -> Iterable[Symbol]:
for arg in self.arguments:
if isinstance(arg, SymbolicExpression):
yield from arg.leaf_symbols()
|
Return a generator of all leaf symbols.
Useful for when you want to inspect where the symbols come from.
No deduplication even if the symbols have duplicates.
| 27 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def leaf_symbols(self) -> Iterable[Symbol]:
for arg in self.arguments:
if isinstance(arg, SymbolicExpression):
yield from arg.leaf_symbols()
```
###Assistant :
Return a generator of all leaf symbols.
Useful for when you want to inspect where the symbols come from.
No deduplication even if the symbols have duplicates.
|
2,026 | def append_step(self, obs, action, next_obs, reward, terminated, truncated, info):
if self._outfile:
if self._save_info:
self._current_rollout.append(
[obs, action, next_obs, reward, terminated, truncated, info]
)
else:
self._current_rollout.append(
[obs, action, next_obs, reward, terminated, truncated]
)
self._total_steps += 1
@eval_app.command() | Add a step to the current rollout, if we are saving them | 12 | 35 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def append_step(self, obs, action, next_obs, reward, terminated, truncated, info):
if self._outfile:
if self._save_info:
self._current_rollout.append(
[obs, action, next_obs, reward, terminated, truncated, info]
)
else:
self._current_rollout.append(
[obs, action, next_obs, reward, terminated, truncated]
)
self._total_steps += 1
@eval_app.command()
```
###Assistant : Add a step to the current rollout, if we are saving them
|
2,027 | def test_nested_prefetch_is_not_overwritten_by_related_object(self):
queryset = House.objects.only('name').prefetch_related(
Prefetch('rooms', queryset=Room.objects.prefetch_related(
Prefetch('house', queryset=House.objects.only('address')),
)),
)
with self.assertNumQueries(3):
house = queryset.first()
self.assertIs(Room.house.is_cached(self.room), True)
with self.assertNumQueries(0):
house.rooms.first().house.address
|
The prefetched relationship is used rather than populating the reverse
relationship from the parent, when prefetching a set of child objects
related to a set of parent objects and the child queryset itself
specifies a prefetch back to the parent.
| 40 | 21 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_nested_prefetch_is_not_overwritten_by_related_object(self):
queryset = House.objects.only('name').prefetch_related(
Prefetch('rooms', queryset=Room.objects.prefetch_related(
Prefetch('house', queryset=House.objects.only('address')),
)),
)
with self.assertNumQueries(3):
house = queryset.first()
self.assertIs(Room.house.is_cached(self.room), True)
with self.assertNumQueries(0):
house.rooms.first().house.address
```
###Assistant :
The prefetched relationship is used rather than populating the reverse
relationship from the parent, when prefetching a set of child objects
related to a set of parent objects and the child queryset itself
specifies a prefetch back to the parent.
|
2,028 | def apply(self, i):
r
i = _sympify(i)
if i.is_integer is False:
raise NotImplementedError("{} should be an integer.".format(i))
n = self.size
if (i < 0) == True or (i >= n) == True:
raise NotImplementedError(
"{} should be an integer between 0 and {}".format(i, n-1))
if i.is_Integer:
return Integer(self._array_form[i])
return AppliedPermutation(self, i)
| Apply the permutation to an expression.
Parameters
==========
i : Expr
It should be an integer between $0$ and $n-1$ where $n$
is the size of the permutation.
If it is a symbol or a symbolic expression that can
have integer values, an ``AppliedPermutation`` object
will be returned which can represent an unevaluated
function.
Notes
=====
Any permutation can be defined as a bijective function
$\sigma : \{ 0, 1, \dots, n-1 \} \rightarrow \{ 0, 1, \dots, n-1 \}$
where $n$ denotes the size of the permutation.
The definition may even be extended for any set with distinctive
elements, such that the permutation can even be applied for
real numbers or such, however, it is not implemented for now for
computational reasons and the integrity with the group theory
module.
This function is similar to the ``__call__`` magic, however,
``__call__`` magic already has some other applications like
permuting an array or attaching new cycles, which would
not always be mathematically consistent.
This also guarantees that the return type is a SymPy integer,
which guarantees the safety to use assumptions.
| 180 | 51 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def apply(self, i):
r
i = _sympify(i)
if i.is_integer is False:
raise NotImplementedError("{} should be an integer.".format(i))
n = self.size
if (i < 0) == True or (i >= n) == True:
raise NotImplementedError(
"{} should be an integer between 0 and {}".format(i, n-1))
if i.is_Integer:
return Integer(self._array_form[i])
return AppliedPermutation(self, i)
```
###Assistant : Apply the permutation to an expression.
Parameters
==========
i : Expr
It should be an integer between $0$ and $n-1$ where $n$
is the size of the permutation.
If it is a symbol or a symbolic expression that can
have integer values, an ``AppliedPermutation`` object
will be returned which can represent an unevaluated
function.
Notes
=====
Any permutation can be defined as a bijective function
$\sigma : \{ 0, 1, \dots, n-1 \} \rightarrow \{ 0, 1, \dots, n-1 \}$
where $n$ denotes the size of the permutation.
The definition may even be extended for any set with distinctive
elements, such that the permutation can even be applied for
real numbers or such, however, it is not implemented for now for
computational reasons and the integrity with the group theory
module.
This function is similar to the ``__call__`` magic, however,
``__call__`` magic already has some other applications like
permuting an array or attaching new cycles, which would
not always be mathematically consistent.
This also guarantees that the return type is a SymPy integer,
which guarantees the safety to use assumptions.
|
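A minimal usage sketch for the `apply` method documented above; the `sympy.combinatorics` import path, the example permutation, and the availability of `Permutation.apply` in the installed SymPy version are assumptions, not taken from the entry itself.
```Python
from sympy import Symbol
from sympy.combinatorics import Permutation

p = Permutation([2, 0, 1])                     # array form: p(0)=2, p(1)=0, p(2)=1
print(p.apply(0))                              # 2, returned as a SymPy Integer
i = Symbol("i", integer=True, nonnegative=True)
print(p.apply(i))                              # stays unevaluated as an AppliedPermutation expression
```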
2,029 | def _convert_mesh_to_triangles(self, coordinates):
if isinstance(coordinates, np.ma.MaskedArray):
p = coordinates.data
else:
p = coordinates
p_a = p[:-1, :-1]
p_b = p[:-1, 1:]
p_c = p[1:, 1:]
p_d = p[1:, :-1]
p_center = (p_a + p_b + p_c + p_d) / 4.0
triangles = np.concatenate([
p_a, p_b, p_center,
p_b, p_c, p_center,
p_c, p_d, p_center,
p_d, p_a, p_center,
], axis=2).reshape((-1, 3, 2))
c = self.get_facecolor().reshape((*coordinates.shape[:2], 4))
c_a = c[:-1, :-1]
c_b = c[:-1, 1:]
c_c = c[1:, 1:]
c_d = c[1:, :-1]
c_center = (c_a + c_b + c_c + c_d) / 4.0
colors = np.concatenate([
c_a, c_b, c_center,
c_b, c_c, c_center,
c_c, c_d, c_center,
c_d, c_a, c_center,
], axis=2).reshape((-1, 3, 4))
return triangles, colors
|
Convert a given mesh into a sequence of triangles, each point
with its own color. The result can be used to construct a call to
`~.RendererBase.draw_gouraud_triangles`.
| 26 | 112 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _convert_mesh_to_triangles(self, coordinates):
if isinstance(coordinates, np.ma.MaskedArray):
p = coordinates.data
else:
p = coordinates
p_a = p[:-1, :-1]
p_b = p[:-1, 1:]
p_c = p[1:, 1:]
p_d = p[1:, :-1]
p_center = (p_a + p_b + p_c + p_d) / 4.0
triangles = np.concatenate([
p_a, p_b, p_center,
p_b, p_c, p_center,
p_c, p_d, p_center,
p_d, p_a, p_center,
], axis=2).reshape((-1, 3, 2))
c = self.get_facecolor().reshape((*coordinates.shape[:2], 4))
c_a = c[:-1, :-1]
c_b = c[:-1, 1:]
c_c = c[1:, 1:]
c_d = c[1:, :-1]
c_center = (c_a + c_b + c_c + c_d) / 4.0
colors = np.concatenate([
c_a, c_b, c_center,
c_b, c_c, c_center,
c_c, c_d, c_center,
c_d, c_a, c_center,
], axis=2).reshape((-1, 3, 4))
return triangles, colors
```
###Assistant :
Convert a given mesh into a sequence of triangles, each point
with its own color. The result can be used to construct a call to
`~.RendererBase.draw_gouraud_triangles`.
|
2,030 | def count(self, level=None):
if level is None:
return notna(self._values).sum().astype("int64")
else:
warnings.warn(
"Using the level keyword in DataFrame and Series aggregations is "
"deprecated and will be removed in a future version. Use groupby "
"instead. ser.count(level=1) should use ser.groupby(level=1).count().",
FutureWarning,
stacklevel=find_stack_level(),
)
if not isinstance(self.index, MultiIndex):
raise ValueError("Series.count level is only valid with a MultiIndex")
index = self.index
assert isinstance(index, MultiIndex) # for mypy
if isinstance(level, str):
level = index._get_level_number(level)
lev = index.levels[level]
level_codes = np.array(index.codes[level], subok=False, copy=True)
mask = level_codes == -1
if mask.any():
level_codes[mask] = cnt = len(lev)
lev = lev.insert(cnt, lev._na_value)
obs = level_codes[notna(self._values)]
# Argument "minlength" to "bincount" has incompatible type "Optional[int]";
# expected "SupportsIndex" [arg-type]
out = np.bincount(obs, minlength=len(lev) or None) # type: ignore[arg-type]
return self._constructor(out, index=lev, dtype="int64").__finalize__(
self, method="count"
)
|
Return number of non-NA/null observations in the Series.
Parameters
----------
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a smaller Series.
Returns
-------
int or Series (if level specified)
Number of non-null values in the Series.
See Also
--------
DataFrame.count : Count non-NA cells for each column or row.
Examples
--------
>>> s = pd.Series([0.0, 1.0, np.nan])
>>> s.count()
2
| 74 | 126 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def count(self, level=None):
if level is None:
return notna(self._values).sum().astype("int64")
else:
warnings.warn(
"Using the level keyword in DataFrame and Series aggregations is "
"deprecated and will be removed in a future version. Use groupby "
"instead. ser.count(level=1) should use ser.groupby(level=1).count().",
FutureWarning,
stacklevel=find_stack_level(),
)
if not isinstance(self.index, MultiIndex):
raise ValueError("Series.count level is only valid with a MultiIndex")
index = self.index
assert isinstance(index, MultiIndex) # for mypy
if isinstance(level, str):
level = index._get_level_number(level)
lev = index.levels[level]
level_codes = np.array(index.codes[level], subok=False, copy=True)
mask = level_codes == -1
if mask.any():
level_codes[mask] = cnt = len(lev)
lev = lev.insert(cnt, lev._na_value)
obs = level_codes[notna(self._values)]
# Argument "minlength" to "bincount" has incompatible type "Optional[int]";
# expected "SupportsIndex" [arg-type]
out = np.bincount(obs, minlength=len(lev) or None) # type: ignore[arg-type]
return self._constructor(out, index=lev, dtype="int64").__finalize__(
self, method="count"
)
```
###Assistant :
Return number of non-NA/null observations in the Series.
Parameters
----------
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a smaller Series.
Returns
-------
int or Series (if level specified)
Number of non-null values in the Series.
See Also
--------
DataFrame.count : Count non-NA cells for each column or row.
Examples
--------
>>> s = pd.Series([0.0, 1.0, np.nan])
>>> s.count()
2
|
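A short sketch of the `groupby` replacement that the deprecation warning above points to; the index values are made up for illustration.
```Python
import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [("a", 1), ("a", 2), ("b", 1), ("b", 2)], names=["grp", "num"]
)
ser = pd.Series([1.0, np.nan, 3.0, np.nan], index=idx)

print(ser.count())                       # 2 non-NA values in total
# Deprecated: ser.count(level="grp") -- the suggested replacement is:
print(ser.groupby(level="grp").count())  # grp  a: 1, b: 1
```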
2,031 | def __format__(self, specifier, context=None, _localeconv=None):
# Note: PEP 3101 says that if the type is not present then
# there should be at least one digit after the decimal point.
# We take the liberty of ignoring this requirement for
# Decimal---it's presumably there to make sure that
# format(float, '') behaves similarly to str(float).
if context is None:
context = getcontext()
spec = _parse_format_specifier(specifier, _localeconv=_localeconv)
# special values don't care about the type or precision
if self._is_special:
sign = _format_sign(self._sign, spec)
body = str(self.copy_abs())
if spec['type'] == '%':
body += '%'
return _format_align(sign, body, spec)
# a type of None defaults to 'g' or 'G', depending on context
if spec['type'] is None:
spec['type'] = ['g', 'G'][context.capitals]
# if type is '%', adjust exponent of self accordingly
if spec['type'] == '%':
self = _dec_from_triple(self._sign, self._int, self._exp+2)
# round if necessary, taking rounding mode from the context
rounding = context.rounding
precision = spec['precision']
if precision is not None:
if spec['type'] in 'eE':
self = self._round(precision+1, rounding)
elif spec['type'] in 'fF%':
self = self._rescale(-precision, rounding)
elif spec['type'] in 'gG' and len(self._int) > precision:
self = self._round(precision, rounding)
# special case: zeros with a positive exponent can't be
# represented in fixed point; rescale them to 0e0.
if not self and self._exp > 0 and spec['type'] in 'fF%':
self = self._rescale(0, rounding)
# figure out placement of the decimal point
leftdigits = self._exp + len(self._int)
if spec['type'] in 'eE':
if not self and precision is not None:
dotplace = 1 - precision
else:
dotplace = 1
elif spec['type'] in 'fF%':
dotplace = leftdigits
elif spec['type'] in 'gG':
if self._exp <= 0 and leftdigits > -6:
dotplace = leftdigits
else:
dotplace = 1
# find digits before and after decimal point, and get exponent
if dotplace < 0:
intpart = '0'
fracpart = '0'*(-dotplace) + self._int
elif dotplace > len(self._int):
intpart = self._int + '0'*(dotplace-len(self._int))
fracpart = ''
else:
intpart = self._int[:dotplace] or '0'
fracpart = self._int[dotplace:]
exp = leftdigits-dotplace
# done with the decimal-specific stuff; hand over the rest
# of the formatting to the _format_number function
return _format_number(self._sign, intpart, fracpart, exp, spec)
| Format a Decimal instance according to the given specifier.
The specifier should be a standard format specifier, with the
form described in PEP 3101. Formatting types 'e', 'E', 'f',
'F', 'g', 'G', 'n' and '%' are supported. If the formatting
type is omitted it defaults to 'g' or 'G', depending on the
value of context.capitals.
| 55 | 350 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __format__(self, specifier, context=None, _localeconv=None):
# Note: PEP 3101 says that if the type is not present then
# there should be at least one digit after the decimal point.
# We take the liberty of ignoring this requirement for
# Decimal---it's presumably there to make sure that
# format(float, '') behaves similarly to str(float).
if context is None:
context = getcontext()
spec = _parse_format_specifier(specifier, _localeconv=_localeconv)
# special values don't care about the type or precision
if self._is_special:
sign = _format_sign(self._sign, spec)
body = str(self.copy_abs())
if spec['type'] == '%':
body += '%'
return _format_align(sign, body, spec)
# a type of None defaults to 'g' or 'G', depending on context
if spec['type'] is None:
spec['type'] = ['g', 'G'][context.capitals]
# if type is '%', adjust exponent of self accordingly
if spec['type'] == '%':
self = _dec_from_triple(self._sign, self._int, self._exp+2)
# round if necessary, taking rounding mode from the context
rounding = context.rounding
precision = spec['precision']
if precision is not None:
if spec['type'] in 'eE':
self = self._round(precision+1, rounding)
elif spec['type'] in 'fF%':
self = self._rescale(-precision, rounding)
elif spec['type'] in 'gG' and len(self._int) > precision:
self = self._round(precision, rounding)
# special case: zeros with a positive exponent can't be
# represented in fixed point; rescale them to 0e0.
if not self and self._exp > 0 and spec['type'] in 'fF%':
self = self._rescale(0, rounding)
# figure out placement of the decimal point
leftdigits = self._exp + len(self._int)
if spec['type'] in 'eE':
if not self and precision is not None:
dotplace = 1 - precision
else:
dotplace = 1
elif spec['type'] in 'fF%':
dotplace = leftdigits
elif spec['type'] in 'gG':
if self._exp <= 0 and leftdigits > -6:
dotplace = leftdigits
else:
dotplace = 1
# find digits before and after decimal point, and get exponent
if dotplace < 0:
intpart = '0'
fracpart = '0'*(-dotplace) + self._int
elif dotplace > len(self._int):
intpart = self._int + '0'*(dotplace-len(self._int))
fracpart = ''
else:
intpart = self._int[:dotplace] or '0'
fracpart = self._int[dotplace:]
exp = leftdigits-dotplace
# done with the decimal-specific stuff; hand over the rest
# of the formatting to the _format_number function
return _format_number(self._sign, intpart, fracpart, exp, spec)
```
###Assistant : Format a Decimal instance according to the given specifier.
The specifier should be a standard format specifier, with the
form described in PEP 3101. Formatting types 'e', 'E', 'f',
'F', 'g', 'G', 'n' and '%' are supported. If the formatting
type is omitted it defaults to 'g' or 'G', depending on the
value of context.capitals.
|
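A few format calls exercising the specifier types listed above; the values are chosen only for illustration.
```Python
from decimal import Decimal

d = Decimal("1234.56789")
print(format(d, ".2f"))                # '1234.57' -- fixed point, rounded per context
print(format(d, ".3e"))                # '1.235e+3' -- scientific notation
print(format(Decimal("0.25"), ".1%"))  # '25.0%' -- '%' shifts the exponent by +2
print(format(d, ""))                   # no type: defaults to 'g'/'G' per context.capitals
```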
2,032 | def test_feature_names_in():
pd = pytest.importorskip("pandas")
iris = datasets.load_iris()
X_np = iris.data
df = pd.DataFrame(X_np, columns=iris.feature_names)
| Check that feature_names_in are recorded by `_validate_data` | 7 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_feature_names_in():
pd = pytest.importorskip("pandas")
iris = datasets.load_iris()
X_np = iris.data
df = pd.DataFrame(X_np, columns=iris.feature_names)
```
###Assistant : Check that feature_names_in are recorded by `_validate_data`
|
2,033 | def check_interactive_compatibility(self):
from pytorch_lightning.utilities import _IS_INTERACTIVE
if _IS_INTERACTIVE and self._strategy_type is not None and not self._strategy_type.is_interactive_compatible():
raise MisconfigurationException(
f"`Trainer(strategy={self._strategy_type.value!r})` or"
f" `Trainer(accelerator={self._strategy_type.value!r})` is not compatible with an interactive"
" environment. Run your code as a script, or choose one of the compatible backends:"
f" {', '.join(_StrategyType.interactive_compatible_types())}."
" In case you are spawning processes yourself, make sure to include the Trainer"
" creation inside the worker function."
)
| Raises a `MisconfigurationException` if the accelerator and/or plugin is not compatible with an
interactive environment. | 15 | 67 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def check_interactive_compatibility(self):
from pytorch_lightning.utilities import _IS_INTERACTIVE
if _IS_INTERACTIVE and self._strategy_type is not None and not self._strategy_type.is_interactive_compatible():
raise MisconfigurationException(
f"`Trainer(strategy={self._strategy_type.value!r})` or"
f" `Trainer(accelerator={self._strategy_type.value!r})` is not compatible with an interactive"
" environment. Run your code as a script, or choose one of the compatible backends:"
f" {', '.join(_StrategyType.interactive_compatible_types())}."
" In case you are spawning processes yourself, make sure to include the Trainer"
" creation inside the worker function."
)
```
###Assistant : Raises a `MisconfigurationException` if the accelerator and/or plugin is not compatible with an
interactive environment.
|
2,034 | def theme_global(new_theme=None):
if new_theme is not None:
if new_theme not in theme_list():
popup_error_with_traceback('Cannot use custom themes with theme_global call',
'Your request to use theme {} cannot be performed.'.format(new_theme),
'The PySimpleGUI Global User Settings are meant for PySimpleGUI standard items, not user config items',
'You can use any of the many built-in themes instead or use your own UserSettings file to store your custom theme')
return pysimplegui_user_settings.get('-theme-', CURRENT_LOOK_AND_FEEL)
pysimplegui_user_settings.set('-theme-', new_theme)
theme(new_theme)
return new_theme
else:
return pysimplegui_user_settings.get('-theme-', CURRENT_LOOK_AND_FEEL)
|
Sets / Gets the global PySimpleGUI Theme. If none is specified then returns the global theme from user settings.
Note the theme must be a standard, built-in PySimpleGUI theme... not a user-created theme.
:param new_theme: the new theme name to use
:type new_theme: (str)
:return: the currently selected theme
:rtype: (str)
| 51 | 76 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def theme_global(new_theme=None):
if new_theme is not None:
if new_theme not in theme_list():
popup_error_with_traceback('Cannot use custom themes with theme_global call',
'Your request to use theme {} cannot be performed.'.format(new_theme),
'The PySimpleGUI Global User Settings are meant for PySimpleGUI standard items, not user config items',
'You can use any of the many built-in themes instead or use your own UserSettings file to store your custom theme')
return pysimplegui_user_settings.get('-theme-', CURRENT_LOOK_AND_FEEL)
pysimplegui_user_settings.set('-theme-', new_theme)
theme(new_theme)
return new_theme
else:
return pysimplegui_user_settings.get('-theme-', CURRENT_LOOK_AND_FEEL)
```
###Assistant :
Sets / Gets the global PySimpleGUI Theme. If none is specified then returns the global theme from user settings.
Note the theme must be a standard, built-in PySimpleGUI theme... not a user-created theme.
:param new_theme: the new theme name to use
:type new_theme: (str)
:return: the currently selected theme
:rtype: (str)
|
2,035 | def close(self):
self._reset_retries()
self._closed = True
# Chunked-encoded posts are terminated with '0\r\n\r\n'
# For some reason, either Python or node.js seems to
# require an extra \r\n.
try:
self._conn.send("\r\n0\r\n\r\n".encode("utf-8"))
except http_client.socket.error:
# In case the socket has already been closed
return ""
return self._getresponse()
| Close the connection to server.
If available, return a http_client.HTTPResponse object.
Closing the connection involves sending the
Transfer-Encoding terminating bytes.
| 20 | 45 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def close(self):
self._reset_retries()
self._closed = True
# Chunked-encoded posts are terminated with '0\r\n\r\n'
# For some reason, either Python or node.js seems to
# require an extra \r\n.
try:
self._conn.send("\r\n0\r\n\r\n".encode("utf-8"))
except http_client.socket.error:
# In case the socket has already been closed
return ""
return self._getresponse()
```
###Assistant : Close the connection to server.
If available, return a http_client.HTTPResponse object.
Closing the connection involves sending the
Transfer-Encoding terminating bytes.
|
2,036 | def _get_svc_path(name="*", status=None):
# This is the core routine to work with services, called by many
# other functions of this module.
#
# The name of a service is the "apparent" folder's name that contains its
# "run" script. If its "folder" is a symlink, the service is an "alias" of
# the targeted service.
if not SERVICE_DIR:
raise CommandExecutionError("Could not find service directory.")
# path list of enabled services as /AVAIL_SVR_DIRS/$service,
# taking care of any service aliases (do not use os.path.realpath()).
ena = set()
for el in glob.glob(os.path.join(SERVICE_DIR, name)):
if _is_svc(el):
if os.path.islink(el):
ena.add(os.readlink(el))
else:
ena.add(el)
log.trace("found enabled service path: %s", el)
if status == "ENABLED":
return sorted(ena)
# path list of available services as /AVAIL_SVR_DIRS/$service
ava = set()
for d in AVAIL_SVR_DIRS:
for el in glob.glob(os.path.join(d, name)):
if _is_svc(el):
ava.add(el)
log.trace("found available service path: %s", el)
if status == "DISABLED":
# service available but not enabled
ret = ava.difference(ena)
else:
# default: return available services
ret = ava.union(ena)
return sorted(ret)
|
Return a list of paths to services with ``name`` that have the specified ``status``
name
a glob for service name. default is '*'
status
None : all services (no filter, default choice)
'DISABLED' : available service(s) that is not enabled
'ENABLED' : enabled service (whether started on boot or not)
| 50 | 164 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_svc_path(name="*", status=None):
# This is the core routine to work with services, called by many
# other functions of this module.
#
# The name of a service is the "apparent" folder's name that contains its
# "run" script. If its "folder" is a symlink, the service is an "alias" of
# the targeted service.
if not SERVICE_DIR:
raise CommandExecutionError("Could not find service directory.")
# path list of enabled services as /AVAIL_SVR_DIRS/$service,
# taking care of any service aliases (do not use os.path.realpath()).
ena = set()
for el in glob.glob(os.path.join(SERVICE_DIR, name)):
if _is_svc(el):
if os.path.islink(el):
ena.add(os.readlink(el))
else:
ena.add(el)
log.trace("found enabled service path: %s", el)
if status == "ENABLED":
return sorted(ena)
# path list of available services as /AVAIL_SVR_DIRS/$service
ava = set()
for d in AVAIL_SVR_DIRS:
for el in glob.glob(os.path.join(d, name)):
if _is_svc(el):
ava.add(el)
log.trace("found available service path: %s", el)
if status == "DISABLED":
# service available but not enabled
ret = ava.difference(ena)
else:
# default: return available services
ret = ava.union(ena)
return sorted(ret)
```
###Assistant :
Return a list of paths to services with ``name`` that have the specified ``status``
name
a glob for service name. default is '*'
status
None : all services (no filter, default choice)
'DISABLED' : available service(s) that is not enabled
'ENABLED' : enabled service (whether started on boot or not)
|
2,037 | def test_model_torch_save_ddp_cpu(tmpdir):
model = BoringModel()
num_epochs = 1
trainer = Trainer(
default_root_dir=tmpdir, max_epochs=num_epochs, strategy="ddp_spawn", accelerator="cpu", devices=2, logger=False
)
temp_path = os.path.join(tmpdir, "temp.pt")
trainer.fit(model)
# Ensure these do not fail
torch.save(trainer.model, temp_path)
torch.save(trainer, temp_path)
@RunIf(min_gpus=2) | Test to ensure torch save does not fail for model and trainer using cpu ddp. | 15 | 34 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_model_torch_save_ddp_cpu(tmpdir):
model = BoringModel()
num_epochs = 1
trainer = Trainer(
default_root_dir=tmpdir, max_epochs=num_epochs, strategy="ddp_spawn", accelerator="cpu", devices=2, logger=False
)
temp_path = os.path.join(tmpdir, "temp.pt")
trainer.fit(model)
# Ensure these do not fail
torch.save(trainer.model, temp_path)
torch.save(trainer, temp_path)
@RunIf(min_gpus=2)
```
###Assistant : Test to ensure torch save does not fail for model and trainer using cpu ddp.
|
2,038 | def default_batch_size(self) -> int:
# Using 2 to avoid ONNX making assumptions about a single-sample batch
return OnnxConfig.default_fixed_batch
|
The default batch size to use if no other indication is given
Returns:
Integer > 0
| 14 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def default_batch_size(self) -> int:
# Using 2 to avoid ONNX making assumptions about a single-sample batch
return OnnxConfig.default_fixed_batch
```
###Assistant :
The default batch size to use if no other indication is given
Returns:
Integer > 0
|
2,039 | def addMacOSCodeSignature(filenames):
# Weak signing.
identity = getMacOSSigningIdentity()
command = [
"codesign",
"-s",
identity,
"--force",
"--deep",
"--preserve-metadata=entitlements",
]
assert type(filenames) is not str
command.extend(filenames)
with withMadeWritableFileMode(filenames):
executeToolChecked(
logger=postprocessing_logger,
command=command,
absence_message=macos_codesign_usage,
stderr_filter=_filterSigntoolErrorOutput,
)
| Add a code signature to the given filenames.
Args:
filenames - The files to be signed.
Returns:
None
Notes:
This is macOS specific.
| 22 | 32 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def addMacOSCodeSignature(filenames):
# Weak signing.
identity = getMacOSSigningIdentity()
command = [
"codesign",
"-s",
identity,
"--force",
"--deep",
"--preserve-metadata=entitlements",
]
assert type(filenames) is not str
command.extend(filenames)
with withMadeWritableFileMode(filenames):
executeToolChecked(
logger=postprocessing_logger,
command=command,
absence_message=macos_codesign_usage,
stderr_filter=_filterSigntoolErrorOutput,
)
```
###Assistant : Add a code signature to the given filenames.
Args:
filenames - The files to be signed.
Returns:
None
Notes:
This is macOS specific.
|
2,040 | def exception_handler(exc, context):
if isinstance(exc, Http404):
exc = exceptions.NotFound(*(exc.args))
elif isinstance(exc, PermissionDenied):
exc = exceptions.PermissionDenied(*(exc.args))
if isinstance(exc, exceptions.APIException):
headers = {}
if getattr(exc, 'auth_header', None):
headers['WWW-Authenticate'] = exc.auth_header
if getattr(exc, 'wait', None):
headers['Retry-After'] = '%d' % exc.wait
if isinstance(exc.detail, (list, dict)):
data = exc.detail
else:
data = {'detail': exc.detail}
set_rollback()
return Response(data, status=exc.status_code, headers=headers)
return None
|
Returns the response that should be used for any given exception.
By default we handle the REST framework `APIException`, and also
Django's built-in `Http404` and `PermissionDenied` exceptions.
Any unhandled exceptions may return `None`, which will cause a 500 error
to be raised.
| 42 | 56 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def exception_handler(exc, context):
if isinstance(exc, Http404):
exc = exceptions.NotFound(*(exc.args))
elif isinstance(exc, PermissionDenied):
exc = exceptions.PermissionDenied(*(exc.args))
if isinstance(exc, exceptions.APIException):
headers = {}
if getattr(exc, 'auth_header', None):
headers['WWW-Authenticate'] = exc.auth_header
if getattr(exc, 'wait', None):
headers['Retry-After'] = '%d' % exc.wait
if isinstance(exc.detail, (list, dict)):
data = exc.detail
else:
data = {'detail': exc.detail}
set_rollback()
return Response(data, status=exc.status_code, headers=headers)
return None
```
###Assistant :
Returns the response that should be used for any given exception.
By default we handle the REST framework `APIException`, and also
Django's built-in `Http404` and `PermissionDenied` exceptions.
Any unhandled exceptions may return `None`, which will cause a 500 error
to be raised.
|
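This handler is also the hook a project can wrap with its own logic; a common pattern, with a hypothetical module path and an added `status_code` field, looks like the sketch below.
```Python
# settings.py
REST_FRAMEWORK = {
    "EXCEPTION_HANDLER": "myproject.utils.custom_exception_handler",
}

# myproject/utils.py
from rest_framework.views import exception_handler

def custom_exception_handler(exc, context):
    # Let DRF build the standard error response first, then annotate it.
    response = exception_handler(exc, context)
    if response is not None:
        response.data["status_code"] = response.status_code
    return response
```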
2,041 | def compute_cooccurrence_matrix(self, df):
user_item_hits = sparse.coo_matrix(
(np.repeat(1, df.shape[0]), (df[self.col_user_id], df[self.col_item_id])),
shape=(self.n_users, self.n_items),
).tocsr()
item_cooccurrence = user_item_hits.transpose().dot(user_item_hits)
item_cooccurrence = item_cooccurrence.multiply(
item_cooccurrence >= self.threshold
)
return item_cooccurrence.astype(df[self.col_rating].dtype)
| Co-occurrence matrix.
The co-occurrence matrix is defined as :math:`C = U^T * U`
where U is the user_affinity matrix with 1's as values (instead of ratings).
Args:
df (pandas.DataFrame): DataFrame of users and items
Returns:
numpy.ndarray: Co-occurrence matrix
| 38 | 25 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def compute_cooccurrence_matrix(self, df):
user_item_hits = sparse.coo_matrix(
(np.repeat(1, df.shape[0]), (df[self.col_user_id], df[self.col_item_id])),
shape=(self.n_users, self.n_items),
).tocsr()
item_cooccurrence = user_item_hits.transpose().dot(user_item_hits)
item_cooccurrence = item_cooccurrence.multiply(
item_cooccurrence >= self.threshold
)
return item_cooccurrence.astype(df[self.col_rating].dtype)
```
###Assistant : Co-occurrence matrix.
The co-occurrence matrix is defined as :math:`C = U^T * U`
where U is the user_affinity matrix with 1's as values (instead of ratings).
Args:
df (pandas.DataFrame): DataFrame of users and items
Returns:
numpy.ndarray: Co-occurrence matrix
|
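A standalone sketch of the C = U^T * U computation described above, run on a tiny made-up interaction log; the user/item values and the threshold are illustrative, not taken from the entry.
```Python
import numpy as np
from scipy import sparse

# Toy interaction log: (user, item) pairs with implicit "hits" of 1.
users = np.array([0, 0, 1, 1, 2])
items = np.array([0, 1, 0, 2, 1])
n_users, n_items = 3, 3

U = sparse.coo_matrix(
    (np.ones(len(users)), (users, items)), shape=(n_users, n_items)
).tocsr()

C = U.T.dot(U).toarray()       # C[i, j] = number of users who hit both item i and item j
threshold = 1
print(C * (C >= threshold))    # zero out pairs below the threshold, as in the method above
```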
2,042 | def _makeTags(tagStr, xml, suppress_LT=Suppress("<"), suppress_GT=Suppress(">")):
if isinstance(tagStr, str_type):
resname = tagStr
tagStr = Keyword(tagStr, caseless=not xml)
else:
resname = tagStr.name
tagAttrName = Word(alphas, alphanums + "_-:")
if xml:
tagAttrValue = dbl_quoted_string.copy().set_parse_action(remove_quotes)
openTag = (
suppress_LT
+ tagStr("tag")
+ Dict(ZeroOrMore(Group(tagAttrName + Suppress("=") + tagAttrValue)))
+ Opt("/", default=[False])("empty").set_parse_action(
lambda s, l, t: t[0] == "/"
)
+ suppress_GT
)
else:
tagAttrValue = quoted_string.copy().set_parse_action(remove_quotes) | Word(
printables, exclude_chars=">"
)
openTag = (
suppress_LT
+ tagStr("tag")
+ Dict(
ZeroOrMore(
Group(
tagAttrName.set_parse_action(lambda t: t[0].lower())
+ Opt(Suppress("=") + tagAttrValue)
)
)
)
+ Opt("/", default=[False])("empty").set_parse_action(
lambda s, l, t: t[0] == "/"
)
+ suppress_GT
)
closeTag = Combine(Literal("</") + tagStr + ">", adjacent=False)
openTag.set_name("<%s>" % resname)
# add start<tagname> results name in parse action now that ungrouped names are not reported at two levels
openTag.add_parse_action(
lambda t: t.__setitem__(
"start" + "".join(resname.replace(":", " ").title().split()), t.copy()
)
)
closeTag = closeTag(
"end" + "".join(resname.replace(":", " ").title().split())
).set_name("</%s>" % resname)
openTag.tag = resname
closeTag.tag = resname
openTag.tag_body = SkipTo(closeTag())
return openTag, closeTag
| Internal helper to construct opening and closing tag expressions, given a tag name | 13 | 164 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _makeTags(tagStr, xml, suppress_LT=Suppress("<"), suppress_GT=Suppress(">")):
if isinstance(tagStr, str_type):
resname = tagStr
tagStr = Keyword(tagStr, caseless=not xml)
else:
resname = tagStr.name
tagAttrName = Word(alphas, alphanums + "_-:")
if xml:
tagAttrValue = dbl_quoted_string.copy().set_parse_action(remove_quotes)
openTag = (
suppress_LT
+ tagStr("tag")
+ Dict(ZeroOrMore(Group(tagAttrName + Suppress("=") + tagAttrValue)))
+ Opt("/", default=[False])("empty").set_parse_action(
lambda s, l, t: t[0] == "/"
)
+ suppress_GT
)
else:
tagAttrValue = quoted_string.copy().set_parse_action(remove_quotes) | Word(
printables, exclude_chars=">"
)
openTag = (
suppress_LT
+ tagStr("tag")
+ Dict(
ZeroOrMore(
Group(
tagAttrName.set_parse_action(lambda t: t[0].lower())
+ Opt(Suppress("=") + tagAttrValue)
)
)
)
+ Opt("/", default=[False])("empty").set_parse_action(
lambda s, l, t: t[0] == "/"
)
+ suppress_GT
)
closeTag = Combine(Literal("</") + tagStr + ">", adjacent=False)
openTag.set_name("<%s>" % resname)
# add start<tagname> results name in parse action now that ungrouped names are not reported at two levels
openTag.add_parse_action(
lambda t: t.__setitem__(
"start" + "".join(resname.replace(":", " ").title().split()), t.copy()
)
)
closeTag = closeTag(
"end" + "".join(resname.replace(":", " ").title().split())
).set_name("</%s>" % resname)
openTag.tag = resname
closeTag.tag = resname
openTag.tag_body = SkipTo(closeTag())
return openTag, closeTag
```
###Assistant : Internal helper to construct opening and closing tag expressions, given a tag name
|
2,043 | def load_data(label_mode="fine"):
if label_mode not in ["fine", "coarse"]:
raise ValueError(
'`label_mode` must be one of `"fine"`, `"coarse"`. '
f"Received: label_mode={label_mode}."
)
dirname = "cifar-100-python"
origin = "https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz"
path = get_file(
dirname,
origin=origin,
untar=True,
file_hash="85cd44d02ba6437773c5bbd22e183051d648de2e7d6b014e1ef29b855ba677a7",
)
fpath = os.path.join(path, "train")
x_train, y_train = load_batch(fpath, label_key=label_mode + "_labels")
fpath = os.path.join(path, "test")
x_test, y_test = load_batch(fpath, label_key=label_mode + "_labels")
y_train = np.reshape(y_train, (len(y_train), 1))
y_test = np.reshape(y_test, (len(y_test), 1))
if backend.image_data_format() == "channels_last":
x_train = x_train.transpose(0, 2, 3, 1)
x_test = x_test.transpose(0, 2, 3, 1)
return (x_train, y_train), (x_test, y_test)
| Loads the CIFAR100 dataset.
This is a dataset of 50,000 32x32 color training images and
10,000 test images, labeled over 100 fine-grained classes that are
grouped into 20 coarse-grained classes. See more info at the
[CIFAR homepage](https://www.cs.toronto.edu/~kriz/cifar.html).
Args:
label_mode: one of "fine", "coarse". If it is "fine" the category labels
are the fine-grained labels, if it is "coarse" the output labels are the
coarse-grained superclasses.
Returns:
Tuple of NumPy arrays: `(x_train, y_train), (x_test, y_test)`.
**x_train**: uint8 NumPy array of grayscale image data with shapes
`(50000, 32, 32, 3)`, containing the training data. Pixel values range
from 0 to 255.
**y_train**: uint8 NumPy array of labels (integers in range 0-99)
with shape `(50000, 1)` for the training data.
**x_test**: uint8 NumPy array of grayscale image data with shapes
`(10000, 32, 32, 3)`, containing the test data. Pixel values range
from 0 to 255.
**y_test**: uint8 NumPy array of labels (integers in range 0-99)
with shape `(10000, 1)` for the test data.
Example:
```python
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data()
assert x_train.shape == (50000, 32, 32, 3)
assert x_test.shape == (10000, 32, 32, 3)
assert y_train.shape == (50000, 1)
assert y_test.shape == (10000, 1)
```
| 193 | 88 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def load_data(label_mode="fine"):
if label_mode not in ["fine", "coarse"]:
raise ValueError(
'`label_mode` must be one of `"fine"`, `"coarse"`. '
f"Received: label_mode={label_mode}."
)
dirname = "cifar-100-python"
origin = "https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz"
path = get_file(
dirname,
origin=origin,
untar=True,
file_hash="85cd44d02ba6437773c5bbd22e183051d648de2e7d6b014e1ef29b855ba677a7",
)
fpath = os.path.join(path, "train")
x_train, y_train = load_batch(fpath, label_key=label_mode + "_labels")
fpath = os.path.join(path, "test")
x_test, y_test = load_batch(fpath, label_key=label_mode + "_labels")
y_train = np.reshape(y_train, (len(y_train), 1))
y_test = np.reshape(y_test, (len(y_test), 1))
if backend.image_data_format() == "channels_last":
x_train = x_train.transpose(0, 2, 3, 1)
x_test = x_test.transpose(0, 2, 3, 1)
return (x_train, y_train), (x_test, y_test)
```
###Assistant : Loads the CIFAR100 dataset.
This is a dataset of 50,000 32x32 color training images and
10,000 test images, labeled over 100 fine-grained classes that are
grouped into 20 coarse-grained classes. See more info at the
[CIFAR homepage](https://www.cs.toronto.edu/~kriz/cifar.html).
Args:
label_mode: one of "fine", "coarse". If it is "fine" the category labels
are the fine-grained labels, if it is "coarse" the output labels are the
coarse-grained superclasses.
Returns:
Tuple of NumPy arrays: `(x_train, y_train), (x_test, y_test)`.
**x_train**: uint8 NumPy array of grayscale image data with shapes
`(50000, 32, 32, 3)`, containing the training data. Pixel values range
from 0 to 255.
**y_train**: uint8 NumPy array of labels (integers in range 0-99)
with shape `(50000, 1)` for the training data.
**x_test**: uint8 NumPy array of grayscale image data with shapes
`(10000, 32, 32, 3)`, containing the test data. Pixel values range
from 0 to 255.
**y_test**: uint8 NumPy array of labels (integers in range 0-99)
with shape `(10000, 1)` for the test data.
Example:
```python
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data()
assert x_train.shape == (50000, 32, 32, 3)
assert x_test.shape == (10000, 32, 32, 3)
assert y_train.shape == (50000, 1)
assert y_test.shape == (10000, 1)
```
|
2,044 | def find_airflow_sources_root() -> Path:
default_airflow_sources_root = Path.cwd()
# Try to find airflow sources in current working dir
airflow_sources_root = search_upwards_for_airflow_sources_root(Path.cwd())
if not airflow_sources_root:
# Or if it fails, find it in parents of the directory where the ./breeze.py is.
airflow_sources_root = search_upwards_for_airflow_sources_root(Path(__file__).resolve().parent)
if airflow_sources_root:
os.chdir(airflow_sources_root)
return Path(airflow_sources_root)
else:
console.print(
f"\n[bright_yellow]Could not find Airflow sources location. "
f"Assuming {default_airflow_sources_root}"
)
os.chdir(default_airflow_sources_root)
return Path(default_airflow_sources_root)
AIRFLOW_SOURCES_ROOT = find_airflow_sources_root()
BUILD_CACHE_DIR = AIRFLOW_SOURCES_ROOT / '.build'
FILES_DIR = AIRFLOW_SOURCES_ROOT / 'files'
MSSQL_DATA_VOLUME = AIRFLOW_SOURCES_ROOT / 'tmp_mssql_volume'
MYPY_CACHE_DIR = AIRFLOW_SOURCES_ROOT / '.mypy_cache'
LOGS_DIR = AIRFLOW_SOURCES_ROOT / 'logs'
DIST_DIR = AIRFLOW_SOURCES_ROOT / 'dist'
SCRIPTS_CI_DIR = AIRFLOW_SOURCES_ROOT / 'scripts' / 'ci'
DOCKER_CONTEXT_DIR = AIRFLOW_SOURCES_ROOT / 'docker-context-files'
CACHE_TMP_FILE_DIR = tempfile.TemporaryDirectory()
OUTPUT_LOG = Path(CACHE_TMP_FILE_DIR.name, 'out.log')
|
Find the root of airflow sources. When Breeze is run from sources, it is easy, but this one also
has to handle the case when Breeze is installed via `pipx` so it searches upwards of the current
directory to find the right root of airflow directory.
If not found, current directory is returned (this handles the case when Breeze is run from the local
directory).
:return: Path for the found sources.
| 71 | 114 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def find_airflow_sources_root() -> Path:
default_airflow_sources_root = Path.cwd()
# Try to find airflow sources in current working dir
airflow_sources_root = search_upwards_for_airflow_sources_root(Path.cwd())
if not airflow_sources_root:
# Or if it fails, find it in parents of the directory where the ./breeze.py is.
airflow_sources_root = search_upwards_for_airflow_sources_root(Path(__file__).resolve().parent)
if airflow_sources_root:
os.chdir(airflow_sources_root)
return Path(airflow_sources_root)
else:
console.print(
f"\n[bright_yellow]Could not find Airflow sources location. "
f"Assuming {default_airflow_sources_root}"
)
os.chdir(default_airflow_sources_root)
return Path(default_airflow_sources_root)
AIRFLOW_SOURCES_ROOT = find_airflow_sources_root()
BUILD_CACHE_DIR = AIRFLOW_SOURCES_ROOT / '.build'
FILES_DIR = AIRFLOW_SOURCES_ROOT / 'files'
MSSQL_DATA_VOLUME = AIRFLOW_SOURCES_ROOT / 'tmp_mssql_volume'
MYPY_CACHE_DIR = AIRFLOW_SOURCES_ROOT / '.mypy_cache'
LOGS_DIR = AIRFLOW_SOURCES_ROOT / 'logs'
DIST_DIR = AIRFLOW_SOURCES_ROOT / 'dist'
SCRIPTS_CI_DIR = AIRFLOW_SOURCES_ROOT / 'scripts' / 'ci'
DOCKER_CONTEXT_DIR = AIRFLOW_SOURCES_ROOT / 'docker-context-files'
CACHE_TMP_FILE_DIR = tempfile.TemporaryDirectory()
OUTPUT_LOG = Path(CACHE_TMP_FILE_DIR.name, 'out.log')
```
###Assistant :
Find the root of airflow sources. When Breeze is run from sources, it is easy, but this one also
has to handle the case when Breeze is installed via `pipx` so it searches upwards of the current
directory to find the right root of airflow directory.
If not found, current directory is returned (this handles the case when Breeze is run from the local
directory).
:return: Path for the found sources.
|
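`search_upwards_for_airflow_sources_root` itself is not shown in this entry; a hedged stand-in for that kind of upward search, with a purely hypothetical marker file name, could look like this.
```Python
from pathlib import Path
from typing import Optional

def search_upwards_for_marker(start: Path, marker: str = "pyproject.toml") -> Optional[Path]:
    # Walk from `start` towards the filesystem root and return the first
    # directory containing the marker entry, or None if nothing matches.
    for candidate in (start, *start.parents):
        if (candidate / marker).exists():
            return candidate
    return None

print(search_upwards_for_marker(Path.cwd()))
```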
2,045 | def _parse_proxy_entry(proxy_str):
config = [c.strip() for c in proxy_str.split(' ') if c]
if not config:
raise ParseProxyError("Empty proxy entry")
if config[0] == "DIRECT":
if len(config) != 1:
raise ParseProxyError("Invalid number of parameters for " +
"DIRECT")
return QNetworkProxy(QNetworkProxy.ProxyType.NoProxy)
elif config[0] == "PROXY":
if len(config) != 2:
raise ParseProxyError("Invalid number of parameters for PROXY")
host, port = PACResolver._parse_proxy_host(config[1])
return QNetworkProxy(QNetworkProxy.ProxyType.HttpProxy, host, port)
elif config[0] in ["SOCKS", "SOCKS5"]:
if len(config) != 2:
raise ParseProxyError("Invalid number of parameters for SOCKS")
host, port = PACResolver._parse_proxy_host(config[1])
return QNetworkProxy(QNetworkProxy.ProxyType.Socks5Proxy, host, port)
else:
err = "Unknown proxy type: {}"
raise ParseProxyError(err.format(config[0]))
| Parse one proxy string entry, as described in PAC specification. | 10 | 94 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _parse_proxy_entry(proxy_str):
config = [c.strip() for c in proxy_str.split(' ') if c]
if not config:
raise ParseProxyError("Empty proxy entry")
if config[0] == "DIRECT":
if len(config) != 1:
raise ParseProxyError("Invalid number of parameters for " +
"DIRECT")
return QNetworkProxy(QNetworkProxy.ProxyType.NoProxy)
elif config[0] == "PROXY":
if len(config) != 2:
raise ParseProxyError("Invalid number of parameters for PROXY")
host, port = PACResolver._parse_proxy_host(config[1])
return QNetworkProxy(QNetworkProxy.ProxyType.HttpProxy, host, port)
elif config[0] in ["SOCKS", "SOCKS5"]:
if len(config) != 2:
raise ParseProxyError("Invalid number of parameters for SOCKS")
host, port = PACResolver._parse_proxy_host(config[1])
return QNetworkProxy(QNetworkProxy.ProxyType.Socks5Proxy, host, port)
else:
err = "Unknown proxy type: {}"
raise ParseProxyError(err.format(config[0]))
```
###Assistant : Parse one proxy string entry, as described in PAC specification.
|
2,046 | def test_gen_pyf(capfd, hello_world_f90, monkeypatch):
ipath = Path(hello_world_f90)
opath = Path(hello_world_f90).stem + ".pyf"
monkeypatch.setattr(sys, "argv", f'f2py -h {opath} {ipath}'.split())
with util.switchdir(ipath.parent):
f2pycli() # Generate wrappers
out, _ = capfd.readouterr()
assert "Saving signatures to file" in out
assert Path(f'{opath}').exists()
| Ensures that a signature file is generated via the CLI
CLI :: -h
| 13 | 37 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_gen_pyf(capfd, hello_world_f90, monkeypatch):
ipath = Path(hello_world_f90)
opath = Path(hello_world_f90).stem + ".pyf"
monkeypatch.setattr(sys, "argv", f'f2py -h {opath} {ipath}'.split())
with util.switchdir(ipath.parent):
f2pycli() # Generate wrappers
out, _ = capfd.readouterr()
assert "Saving signatures to file" in out
assert Path(f'{opath}').exists()
```
###Assistant : Ensures that a signature file is generated via the CLI
CLI :: -h
|
2,047 | def _dictionary(self):
# type: () -> Dict[str, Any]
# NOTE: Dictionaries are not populated if not loaded. So, conditionals
# are not needed here.
retval = {}
for variant in OVERRIDE_ORDER:
retval.update(self._config[variant])
return retval
| A dictionary representing the loaded configuration.
| 6 | 34 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _dictionary(self):
# type: () -> Dict[str, Any]
# NOTE: Dictionaries are not populated if not loaded. So, conditionals
# are not needed here.
retval = {}
for variant in OVERRIDE_ORDER:
retval.update(self._config[variant])
return retval
```
###Assistant : A dictionary representing the loaded configuration.
|
2,048 | def using(self, alias):
return RawQuerySet(
self.raw_query,
model=self.model,
query=self.query.chain(using=alias),
params=self.params,
translations=self.translations,
using=alias,
)
| Select the database this RawQuerySet should execute against. | 8 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def using(self, alias):
return RawQuerySet(
self.raw_query,
model=self.model,
query=self.query.chain(using=alias),
params=self.params,
translations=self.translations,
using=alias,
)
```
###Assistant : Select the database this RawQuerySet should execute against.
|
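A typical call site for the method above, following Django's raw-query pattern; the `Person` model and the `"replica"` database alias are hypothetical.
```Python
# Inside a Django project (model and alias are made up for illustration).
people = Person.objects.raw("SELECT * FROM myapp_person").using("replica")
for person in people:
    print(person.pk)
```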
2,049 | def get_weights(self):
params = self.weights
return backend.batch_get_value(params)
# TODO(tanzheny): Maybe share this logic with base_layer. | Returns the current weights of the optimizer.
The weights of an optimizer are its state (ie, variables).
This function returns the weight values associated with this
optimizer as a list of Numpy arrays. The first value is always the
iterations count of the optimizer, followed by the optimizer's state
variables in the order they were created. The returned list can in turn
be used to load state into similarly parameterized optimizers.
For example, the RMSprop optimizer for this simple model returns a list of
three values-- the iteration count, followed by the root-mean-square value
of the kernel and bias of the single Dense layer:
>>> opt = tf.keras.optimizers.RMSprop()
>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
>>> m.compile(opt, loss='mse')
>>> data = np.arange(100).reshape(5, 20)
>>> labels = np.zeros(5)
>>> results = m.fit(data, labels) # Training.
>>> len(opt.get_weights())
3
Returns:
Weights values as a list of numpy arrays.
| 143 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_weights(self):
params = self.weights
return backend.batch_get_value(params)
# TODO(tanzheny): Maybe share this logic with base_layer.
```
###Assistant : Returns the current weights of the optimizer.
The weights of an optimizer are its state (ie, variables).
This function returns the weight values associated with this
optimizer as a list of Numpy arrays. The first value is always the
iterations count of the optimizer, followed by the optimizer's state
variables in the order they were created. The returned list can in turn
be used to load state into similarly parameterized optimizers.
For example, the RMSprop optimizer for this simple model returns a list of
three values-- the iteration count, followed by the root-mean-square value
of the kernel and bias of the single Dense layer:
>>> opt = tf.keras.optimizers.RMSprop()
>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
>>> m.compile(opt, loss='mse')
>>> data = np.arange(100).reshape(5, 20)
>>> labels = np.zeros(5)
>>> results = m.fit(data, labels) # Training.
>>> len(opt.get_weights())
3
Returns:
Weights values as a list of numpy arrays.
|
2,050 | def _focal_loss_cost(self, cls_pred, gt_labels):
cls_pred = cls_pred.sigmoid()
neg_cost = -(1 - cls_pred + self.eps).log() * (
1 - self.alpha) * cls_pred.pow(self.gamma)
pos_cost = -(cls_pred + self.eps).log() * self.alpha * (
1 - cls_pred).pow(self.gamma)
cls_cost = pos_cost[:, gt_labels] - neg_cost[:, gt_labels]
return cls_cost * self.weight
|
Args:
cls_pred (Tensor): Predicted classification logits, shape
(num_query, num_class).
gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,).
Returns:
torch.Tensor: cls_cost value with weight
| 22 | 44 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _focal_loss_cost(self, cls_pred, gt_labels):
cls_pred = cls_pred.sigmoid()
neg_cost = -(1 - cls_pred + self.eps).log() * (
1 - self.alpha) * cls_pred.pow(self.gamma)
pos_cost = -(cls_pred + self.eps).log() * self.alpha * (
1 - cls_pred).pow(self.gamma)
cls_cost = pos_cost[:, gt_labels] - neg_cost[:, gt_labels]
return cls_cost * self.weight
```
###Assistant :
Args:
cls_pred (Tensor): Predicted classification logits, shape
(num_query, num_class).
gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,).
Returns:
torch.Tensor: cls_cost value with weight
|
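A standalone sketch of the cost computation above with concrete shapes; the hyper-parameter values are illustrative and the predictions are sampled directly in [0, 1] instead of passing logits through a sigmoid.
```Python
import torch

alpha, gamma, eps, weight = 0.25, 2.0, 1e-12, 1.0
cls_pred = torch.rand(5, 3)            # (num_query, num_class), already in [0, 1]
gt_labels = torch.tensor([0, 2, 1])    # (num_gt,)

neg_cost = -(1 - cls_pred + eps).log() * (1 - alpha) * cls_pred.pow(gamma)
pos_cost = -(cls_pred + eps).log() * alpha * (1 - cls_pred).pow(gamma)
cls_cost = (pos_cost[:, gt_labels] - neg_cost[:, gt_labels]) * weight
print(cls_cost.shape)                  # torch.Size([5, 3]) == (num_query, num_gt)
```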
2,051 | def date_extract_sql(self, lookup_type, field_name):
raise NotImplementedError(
"subclasses of BaseDatabaseOperations may require a date_extract_sql() method"
)
|
Given a lookup_type of 'year', 'month', or 'day', return the SQL that
extracts a value from the given date field field_name.
| 21 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def date_extract_sql(self, lookup_type, field_name):
raise NotImplementedError(
"subclasses of BaseDatabaseOperations may require a date_extract_sql() method"
)
```
###Assistant :
Given a lookup_type of 'year', 'month', or 'day', return the SQL that
extracts a value from the given date field field_name.
|
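A hedged sketch of how a backend subclass might fill in this hook; the EXTRACT syntax is typical ANSI SQL and the class name is made up.
```Python
from django.db.backends.base.operations import BaseDatabaseOperations

class MyBackendOperations(BaseDatabaseOperations):
    def date_extract_sql(self, lookup_type, field_name):
        # e.g. lookup_type='year' -> "EXTRACT(YEAR FROM mytable.mydate)"
        return "EXTRACT(%s FROM %s)" % (lookup_type.upper(), field_name)
```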
2,052 | def test_threepid_invite_spamcheck(self) -> None:
# Mock a few functions to prevent the test from failing due to failing to talk to
# a remote IS. We keep the mock for make_and_store_3pid_invite around so we
# can check its call_count later on during the test.
make_invite_mock = Mock(return_value=make_awaitable(0))
self.hs.get_room_member_handler()._make_and_store_3pid_invite = make_invite_mock
self.hs.get_identity_handler().lookup_3pid = Mock(
return_value=make_awaitable(None),
)
# Add a mock to the spamchecker callbacks for user_may_send_3pid_invite. Make it
# allow everything for now.
# `spec` argument is needed for this function mock to have `__qualname__`, which
# is needed for `Measure` metrics buried in SpamChecker.
mock = Mock(
return_value=make_awaitable(synapse.module_api.NOT_SPAM),
spec=lambda *x: None,
)
self.hs.get_spam_checker()._user_may_send_3pid_invite_callbacks.append(mock)
# Send a 3PID invite into the room and check that it succeeded.
email_to_invite = "teresa@example.com"
channel = self.make_request(
method="POST",
path="/rooms/" + self.room_id + "/invite",
content={
"id_server": "example.com",
"id_access_token": "sometoken",
"medium": "email",
"address": email_to_invite,
},
access_token=self.tok,
)
self.assertEqual(channel.code, 200)
# Check that the callback was called with the right params.
mock.assert_called_with(self.user_id, "email", email_to_invite, self.room_id)
# Check that the call to send the invite was made.
make_invite_mock.assert_called_once()
# Now change the return value of the callback to deny any invite and test that
# we can't send the invite.
mock.return_value = make_awaitable(Codes.CONSENT_NOT_GIVEN)
channel = self.make_request(
method="POST",
path="/rooms/" + self.room_id + "/invite",
content={
"id_server": "example.com",
"id_access_token": "sometoken",
"medium": "email",
"address": email_to_invite,
},
access_token=self.tok,
)
self.assertEqual(channel.code, 403)
# Also check that it stopped before calling _make_and_store_3pid_invite.
make_invite_mock.assert_called_once()
|
Test allowing/blocking threepid invites with a spam-check module.
In this test, we use the more recent API in which callbacks return a `Union[Codes, Literal["NOT_SPAM"]]`. | 24 | 227 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_threepid_invite_spamcheck(self) -> None:
# Mock a few functions to prevent the test from failing due to failing to talk to
# a remote IS. We keep the mock for make_and_store_3pid_invite around so we
# can check its call_count later on during the test.
make_invite_mock = Mock(return_value=make_awaitable(0))
self.hs.get_room_member_handler()._make_and_store_3pid_invite = make_invite_mock
self.hs.get_identity_handler().lookup_3pid = Mock(
return_value=make_awaitable(None),
)
# Add a mock to the spamchecker callbacks for user_may_send_3pid_invite. Make it
# allow everything for now.
# `spec` argument is needed for this function mock to have `__qualname__`, which
# is needed for `Measure` metrics buried in SpamChecker.
mock = Mock(
return_value=make_awaitable(synapse.module_api.NOT_SPAM),
spec=lambda *x: None,
)
self.hs.get_spam_checker()._user_may_send_3pid_invite_callbacks.append(mock)
# Send a 3PID invite into the room and check that it succeeded.
email_to_invite = "teresa@example.com"
channel = self.make_request(
method="POST",
path="/rooms/" + self.room_id + "/invite",
content={
"id_server": "example.com",
"id_access_token": "sometoken",
"medium": "email",
"address": email_to_invite,
},
access_token=self.tok,
)
self.assertEqual(channel.code, 200)
# Check that the callback was called with the right params.
mock.assert_called_with(self.user_id, "email", email_to_invite, self.room_id)
# Check that the call to send the invite was made.
make_invite_mock.assert_called_once()
# Now change the return value of the callback to deny any invite and test that
# we can't send the invite.
mock.return_value = make_awaitable(Codes.CONSENT_NOT_GIVEN)
channel = self.make_request(
method="POST",
path="/rooms/" + self.room_id + "/invite",
content={
"id_server": "example.com",
"id_access_token": "sometoken",
"medium": "email",
"address": email_to_invite,
},
access_token=self.tok,
)
self.assertEqual(channel.code, 403)
# Also check that it stopped before calling _make_and_store_3pid_invite.
make_invite_mock.assert_called_once()
```
###Assistant :
Test allowing/blocking threepid invites with a spam-check module.
In this test, we use the more recent API in which callbacks return a `Union[Codes, Literal["NOT_SPAM"]]`.
|
2,053 | def test_generate_invalid_param_val_all_valid(constraints):
with pytest.raises(NotImplementedError):
generate_invalid_param_val(constraints[0], constraints=constraints)
@pytest.mark.parametrize(
"constraint",
[
_ArrayLikes,
_Callables,
_InstancesOf,
_NoneConstraint,
_RandomStates,
_SparseMatrices,
],
) | Check that the function raises NotImplementedError when there's no invalid value
for the constraint.
| 14 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_generate_invalid_param_val_all_valid(constraints):
with pytest.raises(NotImplementedError):
generate_invalid_param_val(constraints[0], constraints=constraints)
@pytest.mark.parametrize(
"constraint",
[
_ArrayLikes,
_Callables,
_InstancesOf,
_NoneConstraint,
_RandomStates,
_SparseMatrices,
],
)
```
###Assistant : Check that the function raises NotImplementedError when there's no invalid value
for the constraint.
|
2,054 | def load_drawer_from_disk(self):
exists = Path(self.full_path / str('pair_dictionary.json')).resolve().exists()
if exists:
with open(self.full_path / str('pair_dictionary.json'), "r") as fp:
self.pair_dict = json.load(fp)
elif not self.follow_mode:
logger.info("Could not find existing datadrawer, starting from scratch")
else:
logger.warning(f'Follower could not find pair_dictionary at {self.full_path} '
'sending null values back to strategy')
return exists
|
Locate and load a previously saved data drawer full of all pair model metadata in
the present model folder.
:returns:
exists: bool = whether or not the drawer was located
| 29 | 47 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def load_drawer_from_disk(self):
exists = Path(self.full_path / str('pair_dictionary.json')).resolve().exists()
if exists:
with open(self.full_path / str('pair_dictionary.json'), "r") as fp:
self.pair_dict = json.load(fp)
elif not self.follow_mode:
logger.info("Could not find existing datadrawer, starting from scratch")
else:
logger.warning(f'Follower could not find pair_dictionary at {self.full_path} '
'sending null values back to strategy')
return exists
```
###Assistant :
Locate and load a previously saved data drawer full of all pair model metadata in
the present model folder.
:returns:
exists: bool = whether or not the drawer was located
|
2,055 | def address_exclude(self, other):
if not self._version == other._version:
raise TypeError("%s and %s are not of the same version" % (
self, other))
if not isinstance(other, _BaseNetwork):
raise TypeError("%s is not a network object" % other)
if not other.subnet_of(self):
raise ValueError('%s not contained in %s' % (other, self))
if other == self:
return
# Make sure we're comparing the network of other.
other = other.__class__('%s/%s' % (other.network_address,
other.prefixlen))
s1, s2 = self.subnets()
while s1 != other and s2 != other:
if other.subnet_of(s1):
yield s2
s1, s2 = s1.subnets()
elif other.subnet_of(s2):
yield s1
s1, s2 = s2.subnets()
else:
# If we got here, there's a bug somewhere.
raise AssertionError('Error performing exclusion: '
's1: %s s2: %s other: %s' %
(s1, s2, other))
if s1 == other:
yield s2
elif s2 == other:
yield s1
else:
# If we got here, there's a bug somewhere.
raise AssertionError('Error performing exclusion: '
's1: %s s2: %s other: %s' %
(s1, s2, other))
| Remove an address from a larger block.
For example:
addr1 = ip_network('192.0.2.0/28')
addr2 = ip_network('192.0.2.1/32')
list(addr1.address_exclude(addr2)) =
[IPv4Network('192.0.2.0/32'), IPv4Network('192.0.2.2/31'),
IPv4Network('192.0.2.4/30'), IPv4Network('192.0.2.8/29')]
or IPv6:
addr1 = ip_network('2001:db8::1/32')
addr2 = ip_network('2001:db8::1/128')
list(addr1.address_exclude(addr2)) =
[ip_network('2001:db8::1/128'),
ip_network('2001:db8::2/127'),
ip_network('2001:db8::4/126'),
ip_network('2001:db8::8/125'),
...
ip_network('2001:db8:8000::/33')]
Args:
other: An IPv4Network or IPv6Network object of the same type.
Returns:
An iterator of the IPv(4|6)Network objects which is self
minus other.
Raises:
TypeError: If self and other are of differing address
versions, or if other is not a network object.
ValueError: If other is not completely contained by self.
| 88 | 157 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def address_exclude(self, other):
if not self._version == other._version:
raise TypeError("%s and %s are not of the same version" % (
self, other))
if not isinstance(other, _BaseNetwork):
raise TypeError("%s is not a network object" % other)
if not other.subnet_of(self):
raise ValueError('%s not contained in %s' % (other, self))
if other == self:
return
# Make sure we're comparing the network of other.
other = other.__class__('%s/%s' % (other.network_address,
other.prefixlen))
s1, s2 = self.subnets()
while s1 != other and s2 != other:
if other.subnet_of(s1):
yield s2
s1, s2 = s1.subnets()
elif other.subnet_of(s2):
yield s1
s1, s2 = s2.subnets()
else:
# If we got here, there's a bug somewhere.
raise AssertionError('Error performing exclusion: '
's1: %s s2: %s other: %s' %
(s1, s2, other))
if s1 == other:
yield s2
elif s2 == other:
yield s1
else:
# If we got here, there's a bug somewhere.
raise AssertionError('Error performing exclusion: '
's1: %s s2: %s other: %s' %
(s1, s2, other))
```
###Assistant : Remove an address from a larger block.
For example:
addr1 = ip_network('192.0.2.0/28')
addr2 = ip_network('192.0.2.1/32')
list(addr1.address_exclude(addr2)) =
[IPv4Network('192.0.2.0/32'), IPv4Network('192.0.2.2/31'),
IPv4Network('192.0.2.4/30'), IPv4Network('192.0.2.8/29')]
or IPv6:
addr1 = ip_network('2001:db8::1/32')
addr2 = ip_network('2001:db8::1/128')
list(addr1.address_exclude(addr2)) =
[ip_network('2001:db8::1/128'),
ip_network('2001:db8::2/127'),
ip_network('2001:db8::4/126'),
ip_network('2001:db8::8/125'),
...
ip_network('2001:db8:8000::/33')]
Args:
other: An IPv4Network or IPv6Network object of the same type.
Returns:
An iterator of the IPv(4|6)Network objects which is self
minus other.
Raises:
TypeError: If self and other are of differing address
versions, or if other is not a network object.
ValueError: If other is not completely contained by self.
|
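A minimal usage sketch of the exclusion described above, run through the standard-library `ipaddress` module where this method is public; the networks are the same illustrative ones as in the docstring.
```Python
# Remove a /32 from a /28 and collect the networks that remain.
import ipaddress

outer = ipaddress.ip_network("192.0.2.0/28")
inner = ipaddress.ip_network("192.0.2.1/32")

for net in sorted(outer.address_exclude(inner)):
    print(net)  # 192.0.2.0/32, 192.0.2.2/31, 192.0.2.4/30, 192.0.2.8/29
```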
2,056 | def mutual_information(cooccurrence):
with np.errstate(invalid="ignore", divide="ignore"):
result = np.log2(cooccurrence.shape[0] * lift(cooccurrence))
return np.array(result)
| Helper method to calculate the Mutual Information of a matrix of
co-occurrences.
Mutual information is a measurement of the amount of information
explained by the i-th j-th item column vector.
Args:
cooccurrence (numpy.ndarray): The symmetric matrix of co-occurrences of items.
Returns:
numpy.ndarray: The matrix of mutual information between any two items.
| 51 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def mutual_information(cooccurrence):
with np.errstate(invalid="ignore", divide="ignore"):
result = np.log2(cooccurrence.shape[0] * lift(cooccurrence))
return np.array(result)
```
###Assistant : Helper method to calculate the Mutual Information of a matrix of
co-occurrences.
Mutual information is a measurement of the amount of information
explained by the i-th j-th item column vector.
Args:
cooccurrence (numpy.ndarray): The symmetric matrix of co-occurrences of items.
Returns:
numpy.ndarray: The matrix of mutual information between any two items.
|
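A self-contained sketch of the computation above; the `lift` helper is not shown in the row, so the co-occurrence-based definition below (C_ij divided by the product of the diagonal self-counts) is an assumption.
```Python
import numpy as np

def lift(cooccurrence):
    # Assumed lift definition: C_ij / (C_ii * C_jj), with item self-counts on the diagonal.
    diag = cooccurrence.diagonal()
    with np.errstate(invalid="ignore", divide="ignore"):
        return cooccurrence / np.outer(diag, diag)

def mutual_information(cooccurrence):
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.asarray(np.log2(cooccurrence.shape[0] * lift(cooccurrence)))

C = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])
print(mutual_information(C))  # zero co-occurrences map to -inf
```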
2,057 | def _is_dunder(name):
return len(name) > 4 and name.startswith('__') and name.endswith('__')
# Prior to Python 3.7 types did not have `copy_with`. A lot of the equality
# checks, argument expansion etc. are done on the _subs_tree. As a result we
# can't provide a get_type_hints function that strips out annotations.
| Returns True if name is a __dunder_variable_name__. | 7 | 49 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _is_dunder(name):
return len(name) > 4 and name.startswith('__') and name.endswith('__')
# Prior to Python 3.7 types did not have `copy_with`. A lot of the equality
# checks, argument expansion etc. are done on the _subs_tree. As a result we
# can't provide a get_type_hints function that strips out annotations.
```
###Assistant : Returns True if name is a __dunder_variable_name__.
|
2,058 | def fit(self, X, y=None, sample_weight=None):
algorithm = self._choose_algorithm(self.algorithm, self.metric)
if isinstance(self.bandwidth, str):
methods_supported = ("scott", "silvermann")
if self.bandwidth not in methods_supported:
raise ValueError(
"When `bandwidth` is a string, it should be one of: "
f"{', '.join(methods_supported)}. Got {self.bandwidth!r} instead."
)
if self.bandwidth == "scott":
self.bandwidth_ = X.shape[0] ** (-1 / (X.shape[1] + 4))
elif self.bandwidth == "silvermann":
self.bandwidth_ = (X.shape[0] * (X.shape[1] + 2) / 4) ** (
-1 / (X.shape[1] + 4)
)
else:
check_scalar(
self.bandwidth,
"bandwidth",
target_type=numbers.Real,
min_val=0,
include_boundaries="neither",
)
self.bandwidth_ = self.bandwidth
if self.kernel not in VALID_KERNELS:
raise ValueError("invalid kernel: '{0}'".format(self.kernel))
X = self._validate_data(X, order="C", dtype=DTYPE)
if sample_weight is not None:
sample_weight = _check_sample_weight(
sample_weight, X, DTYPE, only_non_negative=True
)
kwargs = self.metric_params
if kwargs is None:
kwargs = {}
self.tree_ = TREE_DICT[algorithm](
X,
metric=self.metric,
leaf_size=self.leaf_size,
sample_weight=sample_weight,
**kwargs,
)
return self
| Fit the Kernel Density model on the data.
Parameters
----------
X : array-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row
corresponds to a single data point.
y : None
Ignored. This parameter exists only for compatibility with
:class:`~sklearn.pipeline.Pipeline`.
sample_weight : array-like of shape (n_samples,), default=None
List of sample weights attached to the data X.
.. versionadded:: 0.20
Returns
-------
self : object
Returns the instance itself.
| 70 | 133 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def fit(self, X, y=None, sample_weight=None):
algorithm = self._choose_algorithm(self.algorithm, self.metric)
if isinstance(self.bandwidth, str):
methods_supported = ("scott", "silvermann")
if self.bandwidth not in methods_supported:
raise ValueError(
"When `bandwidth` is a string, it should be one of: "
f"{', '.join(methods_supported)}. Got {self.bandwidth!r} instead."
)
if self.bandwidth == "scott":
self.bandwidth_ = X.shape[0] ** (-1 / (X.shape[1] + 4))
elif self.bandwidth == "silvermann":
self.bandwidth_ = (X.shape[0] * (X.shape[1] + 2) / 4) ** (
-1 / (X.shape[1] + 4)
)
else:
check_scalar(
self.bandwidth,
"bandwidth",
target_type=numbers.Real,
min_val=0,
include_boundaries="neither",
)
self.bandwidth_ = self.bandwidth
if self.kernel not in VALID_KERNELS:
raise ValueError("invalid kernel: '{0}'".format(self.kernel))
X = self._validate_data(X, order="C", dtype=DTYPE)
if sample_weight is not None:
sample_weight = _check_sample_weight(
sample_weight, X, DTYPE, only_non_negative=True
)
kwargs = self.metric_params
if kwargs is None:
kwargs = {}
self.tree_ = TREE_DICT[algorithm](
X,
metric=self.metric,
leaf_size=self.leaf_size,
sample_weight=sample_weight,
**kwargs,
)
return self
```
###Assistant : Fit the Kernel Density model on the data.
Parameters
----------
X : array-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row
corresponds to a single data point.
y : None
Ignored. This parameter exists only for compatibility with
:class:`~sklearn.pipeline.Pipeline`.
sample_weight : array-like of shape (n_samples,), default=None
List of sample weights attached to the data X.
.. versionadded:: 0.20
Returns
-------
self : object
Returns the instance itself.
|
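A hedged sketch of calling this `fit` through scikit-learn's public `KernelDensity` estimator; the string bandwidths are assumed to require a recent scikit-learn release, and the data is synthetic.
```Python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))

kde = KernelDensity(kernel="gaussian", bandwidth="scott").fit(X)
print(kde.bandwidth_)            # rule-of-thumb bandwidth computed during fit()
print(kde.score_samples(X[:3]))  # log-density of the first three training points
```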
2,059 | def get_on_pixels(self, image):
if image.mode != "L":
msg = "Image mode must be L"
raise ValueError(msg)
return _imagingmorph.get_on_pixels(image.im.id)
| Get a list of all turned on pixels in a binary image
Returns a list of tuples of (x,y) coordinates
of all matching pixels. See :ref:`coordinate-system`. | 26 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_on_pixels(self, image):
if image.mode != "L":
msg = "Image mode must be L"
raise ValueError(msg)
return _imagingmorph.get_on_pixels(image.im.id)
```
###Assistant : Get a list of all turned on pixels in a binary image
Returns a list of tuples of (x,y) coordinates
of all matching pixels. See :ref:`coordinate-system`.
|
2,060 | def visit_Num(self, node):
if isinstance(node.n, int):
return fix_missing_locations(Call(func=Name('Integer', Load()),
args=[node], keywords=[]))
elif isinstance(node.n, float):
return fix_missing_locations(Call(func=Name('Float', Load()),
args=[node], keywords=[]))
return node
| This function exists for backwards compatibility with Python 3.7.
It should be removed when SymPy removes support for Python 3.7. | 20 | 21 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def visit_Num(self, node):
if isinstance(node.n, int):
return fix_missing_locations(Call(func=Name('Integer', Load()),
args=[node], keywords=[]))
elif isinstance(node.n, float):
return fix_missing_locations(Call(func=Name('Float', Load()),
args=[node], keywords=[]))
return node
```
###Assistant : This function exists for backwards compatibility with Python 3.7.
It should be removed when SymPy removes support for Python 3.7.
|
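The same wrapping idea, sketched with the standard `ast` module for Python 3.8+, where numeric literals arrive as `ast.Constant` rather than the deprecated `ast.Num`; the `Integer`/`Float` target names mirror the snippet and are otherwise assumptions.
```Python
import ast

class WrapNumbers(ast.NodeTransformer):
    def visit_Constant(self, node):
        if isinstance(node.value, bool):   # bools are ints; leave them untouched
            return node
        if isinstance(node.value, int):
            new = ast.Call(func=ast.Name("Integer", ast.Load()), args=[node], keywords=[])
        elif isinstance(node.value, float):
            new = ast.Call(func=ast.Name("Float", ast.Load()), args=[node], keywords=[])
        else:
            return node
        return ast.copy_location(new, node)

tree = ast.fix_missing_locations(WrapNumbers().visit(ast.parse("x = 1 + 2.5")))
print(ast.unparse(tree))  # x = Integer(1) + Float(2.5)   (ast.unparse needs 3.9+)
```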
2,061 | def enable(display=1, logdir=None, context=5, format="html"):
sys.excepthook = Hook(display=display, logdir=logdir,
context=context, format=format)
| Install an exception handler that formats tracebacks as HTML.
The optional argument 'display' can be set to 0 to suppress sending the
traceback to the browser, and 'logdir' can be set to a directory to cause
tracebacks to be written to files there. | 43 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def enable(display=1, logdir=None, context=5, format="html"):
sys.excepthook = Hook(display=display, logdir=logdir,
context=context, format=format)
```
###Assistant : Install an exception handler that formats tracebacks as HTML.
The optional argument 'display' can be set to 0 to suppress sending the
traceback to the browser, and 'logdir' can be set to a directory to cause
tracebacks to be written to files there.
|
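A usage sketch for the standard-library `cgitb` module that this wrapper configures (deprecated since Python 3.11 and removed in 3.13); the log directory name is an arbitrary choice.
```Python
import cgitb

# Render HTML tracebacks and also write a copy of each one to ./crash_reports.
cgitb.enable(display=1, logdir="crash_reports", context=5, format="html")

# Any uncaught exception from here on is formatted by cgitb's excepthook.
```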
2,062 | def parse_list_header(value):
result = []
for item in _parse_list_header(value):
if item[:1] == item[-1:] == '"':
item = unquote_header_value(item[1:-1])
result.append(item)
return result
# From mitsuhiko/werkzeug (used with permission). | Parse lists as described by RFC 2068 Section 2.
In particular, parse comma-separated lists where the elements of
the list may include quoted-strings. A quoted-string could
contain a comma. A non-quoted string could have quotes in the
middle. Quotes are removed automatically after parsing.
It basically works like :func:`parse_set_header` just that items
may appear multiple times and case sensitivity is preserved.
The return value is a standard :class:`list`:
>>> parse_list_header('token, "quoted value"')
['token', 'quoted value']
To create a header from the :class:`list` again, use the
:func:`dump_header` function.
:param value: a string with a list header.
:return: :class:`list`
:rtype: list
| 99 | 27 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def parse_list_header(value):
result = []
for item in _parse_list_header(value):
if item[:1] == item[-1:] == '"':
item = unquote_header_value(item[1:-1])
result.append(item)
return result
# From mitsuhiko/werkzeug (used with permission).
```
###Assistant : Parse lists as described by RFC 2068 Section 2.
In particular, parse comma-separated lists where the elements of
the list may include quoted-strings. A quoted-string could
contain a comma. A non-quoted string could have quotes in the
middle. Quotes are removed automatically after parsing.
It basically works like :func:`parse_set_header` just that items
may appear multiple times and case sensitivity is preserved.
The return value is a standard :class:`list`:
>>> parse_list_header('token, "quoted value"')
['token', 'quoted value']
To create a header from the :class:`list` again, use the
:func:`dump_header` function.
:param value: a string with a list header.
:return: :class:`list`
:rtype: list
|
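A usage sketch assuming the helper is re-exported as `requests.utils.parse_list_header`, as in current Requests releases; the header strings are illustrative.
```Python
from requests.utils import parse_list_header

print(parse_list_header('token, "quoted value"'))   # ['token', 'quoted value']
print(parse_list_header('a, "b, with comma", c'))   # ['a', 'b, with comma', 'c']
```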
2,063 | def media_series_title(self) -> str | None:
if self._playing and self._is_feature_available(FeatureName.SeriesName):
return self._playing.series_name
return None
| Title of series of current playing media, TV show only. | 10 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def media_series_title(self) -> str | None:
if self._playing and self._is_feature_available(FeatureName.SeriesName):
return self._playing.series_name
return None
```
###Assistant : Title of series of current playing media, TV show only.
|
2,064 | def _build_template(name, template, files, config, nav):
# Run `pre_template` plugin events.
template = config['plugins'].run_event(
'pre_template', template, template_name=name, config=config
)
if utils.is_error_template(name):
# Force absolute URLs in the nav of error pages and account for the
# possibility that the docs root might be different than the server root.
# See https://github.com/mkdocs/mkdocs/issues/77.
# However, if site_url is not set, assume the docs root and server root
# are the same. See https://github.com/mkdocs/mkdocs/issues/1598.
base_url = urlsplit(config['site_url'] or '/').path
else:
base_url = utils.get_relative_url('.', name)
context = get_context(nav, files, config, base_url=base_url)
# Run `template_context` plugin events.
context = config['plugins'].run_event(
'template_context', context, template_name=name, config=config
)
output = template.render(context)
# Run `post_template` plugin events.
output = config['plugins'].run_event('post_template', output, template_name=name, config=config)
return output
|
Return rendered output for given template as a string.
| 9 | 116 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _build_template(name, template, files, config, nav):
# Run `pre_template` plugin events.
template = config['plugins'].run_event(
'pre_template', template, template_name=name, config=config
)
if utils.is_error_template(name):
# Force absolute URLs in the nav of error pages and account for the
# possibility that the docs root might be different than the server root.
# See https://github.com/mkdocs/mkdocs/issues/77.
# However, if site_url is not set, assume the docs root and server root
# are the same. See https://github.com/mkdocs/mkdocs/issues/1598.
base_url = urlsplit(config['site_url'] or '/').path
else:
base_url = utils.get_relative_url('.', name)
context = get_context(nav, files, config, base_url=base_url)
# Run `template_context` plugin events.
context = config['plugins'].run_event(
'template_context', context, template_name=name, config=config
)
output = template.render(context)
# Run `post_template` plugin events.
output = config['plugins'].run_event('post_template', output, template_name=name, config=config)
return output
```
###Assistant :
Return rendered output for given template as a string.
|
2,065 | def matches(self, expr, repl_dict=None, old=False):
expr = sympify(expr)
if not isinstance(expr, self.__class__):
return None
if repl_dict is None:
repl_dict = {}
else:
repl_dict = repl_dict.copy()
if self == expr:
return repl_dict
if len(self.args) != len(expr.args):
return None
d = repl_dict # already a copy
for arg, other_arg in zip(self.args, expr.args):
if arg == other_arg:
continue
if arg.is_Relational:
try:
d = arg.xreplace(d).matches(other_arg, d, old=old)
except TypeError: # Should be InvalidComparisonError when introduced
d = None
else:
d = arg.xreplace(d).matches(other_arg, d, old=old)
if d is None:
return None
return d
|
Helper method for match() that looks for a match between Wild symbols
in self and expressions in expr.
Examples
========
>>> from sympy import symbols, Wild, Basic
>>> a, b, c = symbols('a b c')
>>> x = Wild('x')
>>> Basic(a + x, x).matches(Basic(a + b, c)) is None
True
>>> Basic(a + x, x).matches(Basic(a + b + c, b + c))
{x_: b + c}
| 66 | 88 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def matches(self, expr, repl_dict=None, old=False):
expr = sympify(expr)
if not isinstance(expr, self.__class__):
return None
if repl_dict is None:
repl_dict = {}
else:
repl_dict = repl_dict.copy()
if self == expr:
return repl_dict
if len(self.args) != len(expr.args):
return None
d = repl_dict # already a copy
for arg, other_arg in zip(self.args, expr.args):
if arg == other_arg:
continue
if arg.is_Relational:
try:
d = arg.xreplace(d).matches(other_arg, d, old=old)
except TypeError: # Should be InvalidComparisonError when introduced
d = None
else:
d = arg.xreplace(d).matches(other_arg, d, old=old)
if d is None:
return None
return d
```
###Assistant :
Helper method for match() that looks for a match between Wild symbols
in self and expressions in expr.
Examples
========
>>> from sympy import symbols, Wild, Basic
>>> a, b, c = symbols('a b c')
>>> x = Wild('x')
>>> Basic(a + x, x).matches(Basic(a + b, c)) is None
True
>>> Basic(a + x, x).matches(Basic(a + b + c, b + c))
{x_: b + c}
|
2,066 | def _create_placement_group(self, num_workers):
pg = get_current_placement_group()
if pg is None:
bundle = {"CPU": self._num_cpus_per_worker, "GPU": int(self._use_gpu)}
bundles = [bundle] * num_workers
pg = ray.util.placement_group(bundles, strategy="SPREAD")
logger.debug("Waiting for placement group to start.")
ready, _ = ray.wait([pg.ready()], timeout=SGD_PLACEMENT_GROUP_TIMEOUT_S)
if ready:
logger.debug("Placement group has started.")
else:
raise TimeoutError(
"Placement group creation timed out. Make sure "
"your cluster either has enough resources or use "
"an autoscaling cluster. Current resources "
"available: {}, resources requested by the "
"placement group: {}".format(
ray.available_resources(), pg.bundle_specs
)
)
self._worker_placement_group = pg
| Creates a placement group for the workers.
If this worker is already in a placement group then a new one will
not be created. This is primarily for when Tune is the upstream and
will allocate resources for SGD workers.
If this worker is not in a placement group, a new one will be created
and set. The placement group will have a single bundle for each worker
and use the SPREAD strategy for an even distribution.
| 77 | 85 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _create_placement_group(self, num_workers):
pg = get_current_placement_group()
if pg is None:
bundle = {"CPU": self._num_cpus_per_worker, "GPU": int(self._use_gpu)}
bundles = [bundle] * num_workers
pg = ray.util.placement_group(bundles, strategy="SPREAD")
logger.debug("Waiting for placement group to start.")
ready, _ = ray.wait([pg.ready()], timeout=SGD_PLACEMENT_GROUP_TIMEOUT_S)
if ready:
logger.debug("Placement group has started.")
else:
raise TimeoutError(
"Placement group creation timed out. Make sure "
"your cluster either has enough resources or use "
"an autoscaling cluster. Current resources "
"available: {}, resources requested by the "
"placement group: {}".format(
ray.available_resources(), pg.bundle_specs
)
)
self._worker_placement_group = pg
```
###Assistant : Creates a placement group for the workers.
If this worker is already in a placement group then a new one will
not be created. This is primarily for when Tune is the upstream and
will allocate resources for SGD workers.
If this worker is not in a placement group, a new one will be created
and set. The placement group will have a single bundle for each worker
and use the SPREAD strategy for an even distribution.
|
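A hedged sketch of the SPREAD placement-group pattern described above, written against Ray's public `ray.util.placement_group` API; the worker count, CPU count, and timeout are illustrative assumptions.
```Python
import ray
from ray.util import placement_group

ray.init(ignore_reinit_error=True)

num_workers, cpus_per_worker, use_gpu = 2, 1, False
bundle = {"CPU": cpus_per_worker}
if use_gpu:
    bundle["GPU"] = 1
bundles = [dict(bundle) for _ in range(num_workers)]

pg = placement_group(bundles, strategy="SPREAD")
ready, _ = ray.wait([pg.ready()], timeout=60)
if not ready:
    raise TimeoutError("Placement group creation timed out; check available cluster resources.")
```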
2,067 | def get_warehouse_list(filters):
from frappe.core.doctype.user_permission.user_permission import get_permitted_documents
condition = ""
user_permitted_warehouse = get_permitted_documents("Warehouse")
value = ()
if user_permitted_warehouse:
condition = "and name in %s"
value = set(user_permitted_warehouse)
elif not user_permitted_warehouse and filters.get("warehouse"):
condition = "and name = %s"
value = filters.get("warehouse")
return frappe.db.sql(
.format(
condition=condition
),
value,
as_dict=1,
)
| select name
from `tabWarehouse` where is_group = 0
{condition} | 9 | 48 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_warehouse_list(filters):
from frappe.core.doctype.user_permission.user_permission import get_permitted_documents
condition = ""
user_permitted_warehouse = get_permitted_documents("Warehouse")
value = ()
if user_permitted_warehouse:
condition = "and name in %s"
value = set(user_permitted_warehouse)
elif not user_permitted_warehouse and filters.get("warehouse"):
condition = "and name = %s"
value = filters.get("warehouse")
return frappe.db.sql(
.format(
condition=condition
),
value,
as_dict=1,
)
```
###Assistant : select name
from `tabWarehouse` where is_group = 0
{condition}
|
2,068 | def unmap(self) -> "BaseOperator":
dag = self.dag
if not dag:
raise RuntimeError("Cannot unmap a task without a DAG")
dag._remove_task(self.task_id)
if isinstance(self.operator_class, str):
raise RuntimeError("Cannot unmap a deserialized operator")
return self.operator_class(**self._get_unmap_kwargs())
| Get the "normal" Operator after applying the current mapping. | 9 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def unmap(self) -> "BaseOperator":
dag = self.dag
if not dag:
raise RuntimeError("Cannot unmap a task without a DAG")
dag._remove_task(self.task_id)
if isinstance(self.operator_class, str):
raise RuntimeError("Cannot unmap a deserialized operator")
return self.operator_class(**self._get_unmap_kwargs())
```
###Assistant : Get the "normal" Operator after applying the current mapping.
|
2,069 | def require_tensorflow(test_case):
if not is_tensorflow_available():
return unittest.skip("test requires TensorFlow")(test_case)
else:
return test_case
|
Decorator marking a test that requires TensorFlow installed. These tests are skipped when TensorFlow isn't
installed
| 16 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def require_tensorflow(test_case):
if not is_tensorflow_available():
return unittest.skip("test requires TensorFlow")(test_case)
else:
return test_case
```
###Assistant :
Decorator marking a test that requires TensorFlow installed. These tests are skipped when TensorFlow isn't
installed
|
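A usage sketch applying the decorator above to a `unittest` case so the test is skipped on machines without TensorFlow; it assumes `require_tensorflow` from the snippet is importable in the test module.
```Python
import unittest

@require_tensorflow
class TensorFlowRequiredTest(unittest.TestCase):
    def test_tensorflow_importable(self):
        import tensorflow as tf
        self.assertTrue(hasattr(tf, "__version__"))

if __name__ == "__main__":
    unittest.main()
```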
2,070 | def as_completed(fs, timeout=None):
if timeout is not None:
end_time = timeout + time.monotonic()
fs = set(fs)
total_futures = len(fs)
with _AcquireFutures(fs):
finished = set(
f for f in fs
if f._state in [CANCELLED_AND_NOTIFIED, FINISHED])
pending = fs - finished
waiter = _create_and_install_waiters(fs, _AS_COMPLETED)
finished = list(finished)
try:
yield from _yield_finished_futures(finished, waiter,
ref_collect=(fs,))
while pending:
if timeout is None:
wait_timeout = None
else:
wait_timeout = end_time - time.monotonic()
if wait_timeout < 0:
raise TimeoutError(
'%d (of %d) futures unfinished' % (
len(pending), total_futures))
waiter.event.wait(wait_timeout)
with waiter.lock:
finished = waiter.finished_futures
waiter.finished_futures = []
waiter.event.clear()
# reverse to keep finishing order
finished.reverse()
yield from _yield_finished_futures(finished, waiter,
ref_collect=(fs, pending))
finally:
# Remove waiter from unfinished futures
for f in fs:
with f._condition:
f._waiters.remove(waiter)
DoneAndNotDoneFutures = collections.namedtuple(
'DoneAndNotDoneFutures', 'done not_done') | An iterator over the given futures that yields each as it completes.
Args:
fs: The sequence of Futures (possibly created by different Executors) to
iterate over.
timeout: The maximum number of seconds to wait. If None, then there
is no limit on the wait time.
Returns:
An iterator that yields the given Futures as they complete (finished or
cancelled). If any given Futures are duplicated, they will be returned
once.
Raises:
TimeoutError: If the entire result iterator could not be generated
before the given timeout.
| 85 | 125 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def as_completed(fs, timeout=None):
if timeout is not None:
end_time = timeout + time.monotonic()
fs = set(fs)
total_futures = len(fs)
with _AcquireFutures(fs):
finished = set(
f for f in fs
if f._state in [CANCELLED_AND_NOTIFIED, FINISHED])
pending = fs - finished
waiter = _create_and_install_waiters(fs, _AS_COMPLETED)
finished = list(finished)
try:
yield from _yield_finished_futures(finished, waiter,
ref_collect=(fs,))
while pending:
if timeout is None:
wait_timeout = None
else:
wait_timeout = end_time - time.monotonic()
if wait_timeout < 0:
raise TimeoutError(
'%d (of %d) futures unfinished' % (
len(pending), total_futures))
waiter.event.wait(wait_timeout)
with waiter.lock:
finished = waiter.finished_futures
waiter.finished_futures = []
waiter.event.clear()
# reverse to keep finishing order
finished.reverse()
yield from _yield_finished_futures(finished, waiter,
ref_collect=(fs, pending))
finally:
# Remove waiter from unfinished futures
for f in fs:
with f._condition:
f._waiters.remove(waiter)
DoneAndNotDoneFutures = collections.namedtuple(
'DoneAndNotDoneFutures', 'done not_done')
```
###Assistant : An iterator over the given futures that yields each as it completes.
Args:
fs: The sequence of Futures (possibly created by different Executors) to
iterate over.
timeout: The maximum number of seconds to wait. If None, then there
is no limit on the wait time.
Returns:
An iterator that yields the given Futures as they complete (finished or
cancelled). If any given Futures are duplicated, they will be returned
once.
Raises:
TimeoutError: If the entire result iterator could not be generated
before the given timeout.
|
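A usage sketch through the public `concurrent.futures` API, which exposes this function directly; the pool size and workload are arbitrary.
```Python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(n):
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(square, n): n for n in range(8)}
    for fut in as_completed(futures, timeout=30):
        print(futures[fut], "->", fut.result())  # yielded in completion order
```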
2,071 | def pack_env_dict(self) -> Dict[str, Any]:
env_info = {"window_size": self.CONV_WIDTH,
"reward_kwargs": self.reward_params,
"config": self.config,
"live": self.live}
if self.data_provider:
env_info["fee"] = self.data_provider._exchange \
.get_fee(symbol=self.data_provider.current_whitelist()[0]) # type: ignore
return env_info
|
Create dictionary of environment arguments
| 5 | 27 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def pack_env_dict(self) -> Dict[str, Any]:
env_info = {"window_size": self.CONV_WIDTH,
"reward_kwargs": self.reward_params,
"config": self.config,
"live": self.live}
if self.data_provider:
env_info["fee"] = self.data_provider._exchange \
.get_fee(symbol=self.data_provider.current_whitelist()[0]) # type: ignore
return env_info
```
###Assistant :
Create dictionary of environment arguments
|
2,072 | def sample(self) -> SampleBatchType:
if self.fake_sampler and self.last_batch is not None:
return self.last_batch
elif self.input_reader is None:
raise ValueError(
"RolloutWorker has no `input_reader` object! "
"Cannot call `sample()`. You can try setting "
"`create_env_on_driver` to True."
)
if log_once("sample_start"):
logger.info(
"Generating sample batch of size {}".format(
self.rollout_fragment_length
)
)
batches = [self.input_reader.next()]
steps_so_far = (
batches[0].count
if self.count_steps_by == "env_steps"
else batches[0].agent_steps()
)
# In truncate_episodes mode, never pull more than 1 batch per env.
# This avoids over-running the target batch size.
if self.batch_mode == "truncate_episodes":
max_batches = self.num_envs
else:
max_batches = float("inf")
while (
steps_so_far < self.rollout_fragment_length and len(batches) < max_batches
):
batch = self.input_reader.next()
steps_so_far += (
batch.count
if self.count_steps_by == "env_steps"
else batch.agent_steps()
)
batches.append(batch)
batch = batches[0].concat_samples(batches) if len(batches) > 1 else batches[0]
self.callbacks.on_sample_end(worker=self, samples=batch)
# Always do writes prior to compression for consistency and to allow
# for better compression inside the writer.
self.output_writer.write(batch)
# Do off-policy estimation, if needed.
if self.reward_estimators:
for sub_batch in batch.split_by_episode():
for estimator in self.reward_estimators:
estimator.process(sub_batch)
if log_once("sample_end"):
logger.info("Completed sample batch:\n\n{}\n".format(summarize(batch)))
if self.compress_observations:
batch.compress(bulk=self.compress_observations == "bulk")
if self.fake_sampler:
self.last_batch = batch
return batch
| Returns a batch of experience sampled from this worker.
This method must be implemented by subclasses.
Returns:
A columnar batch of experiences (e.g., tensors).
Examples:
>>> import gym
>>> from ray.rllib.evaluation.rollout_worker import RolloutWorker
>>> from ray.rllib.algorithms.pg.pg_tf_policy import PGTFPolicy
>>> worker = RolloutWorker( # doctest: +SKIP
... env_creator=lambda _: gym.make("CartPole-v0"), # doctest: +SKIP
... policy_spec=PGTFPolicy) # doctest: +SKIP
>>> print(worker.sample()) # doctest: +SKIP
SampleBatch({"obs": [...], "action": [...], ...})
| 67 | 184 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def sample(self) -> SampleBatchType:
if self.fake_sampler and self.last_batch is not None:
return self.last_batch
elif self.input_reader is None:
raise ValueError(
"RolloutWorker has no `input_reader` object! "
"Cannot call `sample()`. You can try setting "
"`create_env_on_driver` to True."
)
if log_once("sample_start"):
logger.info(
"Generating sample batch of size {}".format(
self.rollout_fragment_length
)
)
batches = [self.input_reader.next()]
steps_so_far = (
batches[0].count
if self.count_steps_by == "env_steps"
else batches[0].agent_steps()
)
# In truncate_episodes mode, never pull more than 1 batch per env.
# This avoids over-running the target batch size.
if self.batch_mode == "truncate_episodes":
max_batches = self.num_envs
else:
max_batches = float("inf")
while (
steps_so_far < self.rollout_fragment_length and len(batches) < max_batches
):
batch = self.input_reader.next()
steps_so_far += (
batch.count
if self.count_steps_by == "env_steps"
else batch.agent_steps()
)
batches.append(batch)
batch = batches[0].concat_samples(batches) if len(batches) > 1 else batches[0]
self.callbacks.on_sample_end(worker=self, samples=batch)
# Always do writes prior to compression for consistency and to allow
# for better compression inside the writer.
self.output_writer.write(batch)
# Do off-policy estimation, if needed.
if self.reward_estimators:
for sub_batch in batch.split_by_episode():
for estimator in self.reward_estimators:
estimator.process(sub_batch)
if log_once("sample_end"):
logger.info("Completed sample batch:\n\n{}\n".format(summarize(batch)))
if self.compress_observations:
batch.compress(bulk=self.compress_observations == "bulk")
if self.fake_sampler:
self.last_batch = batch
return batch
```
###Assistant : Returns a batch of experience sampled from this worker.
This method must be implemented by subclasses.
Returns:
A columnar batch of experiences (e.g., tensors).
Examples:
>>> import gym
>>> from ray.rllib.evaluation.rollout_worker import RolloutWorker
>>> from ray.rllib.algorithms.pg.pg_tf_policy import PGTFPolicy
>>> worker = RolloutWorker( # doctest: +SKIP
... env_creator=lambda _: gym.make("CartPole-v0"), # doctest: +SKIP
... policy_spec=PGTFPolicy) # doctest: +SKIP
>>> print(worker.sample()) # doctest: +SKIP
SampleBatch({"obs": [...], "action": [...], ...})
|
2,073 | def test_no_duplicates_for_non_unique_related_object_in_list_filter(self):
parent = Parent.objects.create(name="Mary")
# Two children with the same name
Child.objects.create(parent=parent, name="Daniel")
Child.objects.create(parent=parent, name="Daniel")
m = ParentAdmin(Parent, custom_site)
request = self.factory.get("/parent/", data={"child__name": "Daniel"})
request.user = self.superuser
cl = m.get_changelist_instance(request)
# Exists() is applied.
self.assertEqual(cl.queryset.count(), 1)
# Queryset must be deletable.
self.assertIs(cl.queryset.query.distinct, False)
cl.queryset.delete()
self.assertEqual(cl.queryset.count(), 0)
|
Regressions tests for #15819: If a field listed in list_filters is a
non-unique related object, results shouldn't appear more than once.
| 21 | 47 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_no_duplicates_for_non_unique_related_object_in_list_filter(self):
parent = Parent.objects.create(name="Mary")
# Two children with the same name
Child.objects.create(parent=parent, name="Daniel")
Child.objects.create(parent=parent, name="Daniel")
m = ParentAdmin(Parent, custom_site)
request = self.factory.get("/parent/", data={"child__name": "Daniel"})
request.user = self.superuser
cl = m.get_changelist_instance(request)
# Exists() is applied.
self.assertEqual(cl.queryset.count(), 1)
# Queryset must be deletable.
self.assertIs(cl.queryset.query.distinct, False)
cl.queryset.delete()
self.assertEqual(cl.queryset.count(), 0)
```
###Assistant :
Regressions tests for #15819: If a field listed in list_filters is a
non-unique related object, results shouldn't appear more than once.
|
2,074 | def get_current_tax_app() -> Optional[App]:
return (
App.objects.order_by("pk")
.for_event_type(WebhookEventSyncType.CHECKOUT_CALCULATE_TAXES)
.for_event_type(WebhookEventSyncType.ORDER_CALCULATE_TAXES)
.last()
)
| Return currently used tax app or None, if there aren't any. | 11 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_current_tax_app() -> Optional[App]:
return (
App.objects.order_by("pk")
.for_event_type(WebhookEventSyncType.CHECKOUT_CALCULATE_TAXES)
.for_event_type(WebhookEventSyncType.ORDER_CALCULATE_TAXES)
.last()
)
```
###Assistant : Return currently used tax app or None, if there aren't any.
|
2,075 | def test_callbacks(self) -> None:
cache: DeferredCache[str, int] = DeferredCache("test")
callbacks = set()
# start with an entry, with a callback
cache.prefill("k1", 10, callback=lambda: callbacks.add("prefill"))
# now replace that entry with a pending result
origin_d: "defer.Deferred[int]" = defer.Deferred()
set_d = cache.set("k1", origin_d, callback=lambda: callbacks.add("set"))
# ... and also make a get request
get_d = cache.get("k1", callback=lambda: callbacks.add("get"))
# we don't expect the invalidation callback for the original value to have
# been called yet, even though get() will now return a different result.
# I'm not sure if that is by design or not.
self.assertEqual(callbacks, set())
# now fire off all the deferreds
origin_d.callback(20)
self.assertEqual(self.successResultOf(set_d), 20)
self.assertEqual(self.successResultOf(get_d), 20)
# now the original invalidation callback should have been called, but none of
# the others
self.assertEqual(callbacks, {"prefill"})
callbacks.clear()
# another update should invalidate both the previous results
cache.prefill("k1", 30)
self.assertEqual(callbacks, {"set", "get"})
| Invalidation callbacks are called at the right time | 8 | 140 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_callbacks(self) -> None:
cache: DeferredCache[str, int] = DeferredCache("test")
callbacks = set()
# start with an entry, with a callback
cache.prefill("k1", 10, callback=lambda: callbacks.add("prefill"))
# now replace that entry with a pending result
origin_d: "defer.Deferred[int]" = defer.Deferred()
set_d = cache.set("k1", origin_d, callback=lambda: callbacks.add("set"))
# ... and also make a get request
get_d = cache.get("k1", callback=lambda: callbacks.add("get"))
# we don't expect the invalidation callback for the original value to have
# been called yet, even though get() will now return a different result.
# I'm not sure if that is by design or not.
self.assertEqual(callbacks, set())
# now fire off all the deferreds
origin_d.callback(20)
self.assertEqual(self.successResultOf(set_d), 20)
self.assertEqual(self.successResultOf(get_d), 20)
# now the original invalidation callback should have been called, but none of
# the others
self.assertEqual(callbacks, {"prefill"})
callbacks.clear()
# another update should invalidate both the previous results
cache.prefill("k1", 30)
self.assertEqual(callbacks, {"set", "get"})
```
###Assistant : Invalidation callbacks are called at the right time
|
2,076 | def sql_flush(style, connection, reset_sequences=True, allow_cascade=False):
tables = connection.introspection.django_table_names(
only_existing=True, include_views=False
)
return connection.ops.sql_flush(
style,
tables,
reset_sequences=reset_sequences,
allow_cascade=allow_cascade,
)
|
Return a list of the SQL statements used to flush the database.
| 12 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def sql_flush(style, connection, reset_sequences=True, allow_cascade=False):
tables = connection.introspection.django_table_names(
only_existing=True, include_views=False
)
return connection.ops.sql_flush(
style,
tables,
reset_sequences=reset_sequences,
allow_cascade=allow_cascade,
)
```
###Assistant :
Return a list of the SQL statements used to flush the database.
|
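A hedged sketch of calling this helper; it assumes a configured Django project and that the function is importable from `django.core.management.sql`, as in recent Django versions.
```Python
from django.core.management.color import no_style
from django.core.management.sql import sql_flush
from django.db import connection

statements = sql_flush(no_style(), connection, reset_sequences=True, allow_cascade=False)
for statement in statements:
    print(statement)  # e.g. TRUNCATE / DELETE statements for the existing tables
```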
2,077 | def generate(cls, size, callback, channels=3, target_mode=None):
size_1d, size_2d, size_3d = cls._check_size(size)
if channels not in (3, 4):
raise ValueError("Only 3 or 4 output channels are supported")
table = [0] * (size_1d * size_2d * size_3d * channels)
idx_out = 0
for b in range(size_3d):
for g in range(size_2d):
for r in range(size_1d):
table[idx_out : idx_out + channels] = callback(
r / (size_1d - 1), g / (size_2d - 1), b / (size_3d - 1)
)
idx_out += channels
return cls(
(size_1d, size_2d, size_3d),
table,
channels=channels,
target_mode=target_mode,
_copy_table=False,
)
| Generates new LUT using provided callback.
:param size: Size of the table. Passed to the constructor.
:param callback: Function with three parameters which correspond
to three color channels. Will be called ``size**3``
times with values from 0.0 to 1.0 and should return
a tuple with ``channels`` elements.
:param channels: The number of channels the callback should return.
:param target_mode: Passed to the constructor of the resulting
lookup table.
| 67 | 88 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def generate(cls, size, callback, channels=3, target_mode=None):
size_1d, size_2d, size_3d = cls._check_size(size)
if channels not in (3, 4):
raise ValueError("Only 3 or 4 output channels are supported")
table = [0] * (size_1d * size_2d * size_3d * channels)
idx_out = 0
for b in range(size_3d):
for g in range(size_2d):
for r in range(size_1d):
table[idx_out : idx_out + channels] = callback(
r / (size_1d - 1), g / (size_2d - 1), b / (size_3d - 1)
)
idx_out += channels
return cls(
(size_1d, size_2d, size_3d),
table,
channels=channels,
target_mode=target_mode,
_copy_table=False,
)
```
###Assistant : Generates new LUT using provided callback.
:param size: Size of the table. Passed to the constructor.
:param callback: Function with three parameters which correspond
to three color channels. Will be called ``size**3``
times with values from 0.0 to 1.0 and should return
a tuple with ``channels`` elements.
:param channels: The number of channels the callback should return.
:param target_mode: Passed to the constructor of the resulting
lookup table.
|
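A usage sketch assuming this is Pillow's `ImageFilter.Color3DLUT.generate`; the table size and the identity callback are illustrative.
```Python
from PIL import Image, ImageFilter

# Identity table: every (r, g, b) in [0, 1] maps back to itself.
identity_lut = ImageFilter.Color3DLUT.generate(17, callback=lambda r, g, b: (r, g, b))

img = Image.new("RGB", (16, 16), (200, 60, 40))
out = img.filter(identity_lut)  # pixels come back (approximately) unchanged
```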
2,078 | def regex_lookup(self, lookup_type):
raise NotImplementedError(
"subclasses of BaseDatabaseOperations may require a regex_lookup() method"
)
|
Return the string to use in a query when performing regular expression
lookups (using "regex" or "iregex"). It should contain a '%s'
placeholder for the column being searched against.
If the feature is not supported (or part of it is not supported), raise
NotImplementedError.
| 44 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def regex_lookup(self, lookup_type):
raise NotImplementedError(
"subclasses of BaseDatabaseOperations may require a regex_lookup() method"
)
```
###Assistant :
Return the string to use in a query when performing regular expression
lookups (using "regex" or "iregex"). It should contain a '%s'
placeholder for the column being searched against.
If the feature is not supported (or part of it is not supported), raise
NotImplementedError.
|
2,079 | def link_existing_conversations(doc, state):
if doc.doctype != "Contact":
return
try:
numbers = [d.phone for d in doc.phone_nos]
for number in numbers:
number = strip_number(number)
if not number:
continue
logs = frappe.db.sql_list(
,
dict(phone_number="%{}".format(number), docname=doc.name, doctype=doc.doctype),
)
for log in logs:
call_log = frappe.get_doc("Call Log", log)
call_log.add_link(link_type=doc.doctype, link_name=doc.name)
call_log.save(ignore_permissions=True)
frappe.db.commit()
except Exception:
frappe.log_error(title=_("Error during caller information update"))
|
Called from hooks on creation of Contact or Lead to link all the existing conversations.
SELECT cl.name FROM `tabCall Log` cl
LEFT JOIN `tabDynamic Link` dl
ON cl.name = dl.parent
WHERE (cl.`from` like %(phone_number)s or cl.`to` like %(phone_number)s)
GROUP BY cl.name
HAVING SUM(
CASE
WHEN dl.link_doctype = %(doctype)s AND dl.link_name = %(docname)s
THEN 1
ELSE 0
END
)=0
| 58 | 55 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def link_existing_conversations(doc, state):
if doc.doctype != "Contact":
return
try:
numbers = [d.phone for d in doc.phone_nos]
for number in numbers:
number = strip_number(number)
if not number:
continue
logs = frappe.db.sql_list(
,
dict(phone_number="%{}".format(number), docname=doc.name, doctype=doc.doctype),
)
for log in logs:
call_log = frappe.get_doc("Call Log", log)
call_log.add_link(link_type=doc.doctype, link_name=doc.name)
call_log.save(ignore_permissions=True)
frappe.db.commit()
except Exception:
frappe.log_error(title=_("Error during caller information update"))
```
###Assistant :
Called from hooks on creation of Contact or Lead to link all the existing conversations.
SELECT cl.name FROM `tabCall Log` cl
LEFT JOIN `tabDynamic Link` dl
ON cl.name = dl.parent
WHERE (cl.`from` like %(phone_number)s or cl.`to` like %(phone_number)s)
GROUP BY cl.name
HAVING SUM(
CASE
WHEN dl.link_doctype = %(doctype)s AND dl.link_name = %(docname)s
THEN 1
ELSE 0
END
)=0
|
2,080 | async def async_volume_up(self) -> None:
if hasattr(self, "volume_up"):
await self.hass.async_add_executor_job(self.volume_up)
return
if (
self.volume_level is not None
and self.volume_level < 1
and self.supported_features & MediaPlayerEntityFeature.VOLUME_SET
):
await self.async_set_volume_level(min(1, self.volume_level + 0.1))
| Turn volume up for media player.
This method is a coroutine.
| 11 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def async_volume_up(self) -> None:
if hasattr(self, "volume_up"):
await self.hass.async_add_executor_job(self.volume_up)
return
if (
self.volume_level is not None
and self.volume_level < 1
and self.supported_features & MediaPlayerEntityFeature.VOLUME_SET
):
await self.async_set_volume_level(min(1, self.volume_level + 0.1))
```
###Assistant : Turn volume up for media player.
This method is a coroutine.
|
2,081 | def infer_axes(self) -> bool:
s = self.storable
if s is None:
return False
self.get_attrs()
return True
|
infer the axes of my storer
return a boolean indicating if we have a valid storer or not
| 18 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def infer_axes(self) -> bool:
s = self.storable
if s is None:
return False
self.get_attrs()
return True
```
###Assistant :
infer the axes of my storer
return a boolean indicating if we have a valid storer or not
|
2,082 | def _validate_set_axis(self, new_labels, old_labels):
new_labels = ensure_index(new_labels)
old_len = len(old_labels)
new_len = len(new_labels)
if old_len != new_len:
raise ValueError(
f"Length mismatch: Expected axis has {old_len} elements, "
+ f"new values have {new_len} elements"
)
return new_labels
|
Validate the possibility of replacement of old labels with the new labels.
Parameters
----------
new_labels : list-like
The labels to replace with.
old_labels : list-like
The labels to replace.
Returns
-------
list-like
The validated labels.
| 35 | 36 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _validate_set_axis(self, new_labels, old_labels):
new_labels = ensure_index(new_labels)
old_len = len(old_labels)
new_len = len(new_labels)
if old_len != new_len:
raise ValueError(
f"Length mismatch: Expected axis has {old_len} elements, "
+ f"new values have {new_len} elements"
)
return new_labels
```
###Assistant :
Validate the possibility of replacement of old labels with the new labels.
Parameters
----------
new_labels : list-like
The labels to replace with.
old_labels : list-like
The labels to replace.
Returns
-------
list-like
The validated labels.
|
2,083 | def transform_data(result, translated_columns, query_builder) -> EventsResponse:
final_result: EventsResponse = {"data": result["data"], "meta": result["meta"]}
for col in final_result["meta"]:
# Translate back column names that were converted to snuba format
col["name"] = translated_columns.get(col["name"], col["name"])
|
Transform internal names back to the public schema ones.
When getting timeseries results via rollup, this function will
zerofill the output results.
| 22 | 32 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def transform_data(result, translated_columns, query_builder) -> EventsResponse:
final_result: EventsResponse = {"data": result["data"], "meta": result["meta"]}
for col in final_result["meta"]:
# Translate back column names that were converted to snuba format
col["name"] = translated_columns.get(col["name"], col["name"])
```
###Assistant :
Transform internal names back to the public schema ones.
When getting timeseries results via rollup, this function will
zerofill the output results.
|
2,084 | def plot_contour(*args, show=True, **kwargs):
args = list(map(sympify, args))
plot_expr = check_arguments(args, 1, 2)
series = [ContourSeries(*arg) for arg in plot_expr]
plot_contours = Plot(*series, **kwargs)
if len(plot_expr[0].free_symbols) > 2:
raise ValueError('Contour Plot cannot Plot for more than two variables.')
if show:
plot_contours.show()
return plot_contours
|
Draws contour plot of a function
Usage
=====
Single plot
``plot_contour(expr, range_x, range_y, **kwargs)``
If the ranges are not specified, then a default range of (-10, 10) is used.
Multiple plots with the same range.
``plot_contour(expr1, expr2, range_x, range_y, **kwargs)``
If the ranges are not specified, then a default range of (-10, 10) is used.
Multiple plots with different ranges.
``plot_contour((expr1, range_x, range_y), (expr2, range_x, range_y), ..., **kwargs)``
Ranges have to be specified for every expression.
Default range may change in the future if a more advanced default range
detection algorithm is implemented.
Arguments
=========
expr : Expression representing the function along x.
range_x : (:class:`Symbol`, float, float)
A 3-tuple denoting the range of the x variable, e.g. (x, 0, 5).
range_y : (:class:`Symbol`, float, float)
A 3-tuple denoting the range of the y variable, e.g. (y, 0, 5).
Keyword Arguments
=================
Arguments for ``ContourSeries`` class:
nb_of_points_x : int
The x range is sampled uniformly at ``nb_of_points_x`` of points.
nb_of_points_y : int
The y range is sampled uniformly at ``nb_of_points_y`` of points.
Aesthetics:
surface_color : Function which returns a float
Specifies the color for the surface of the plot. See
:class:`sympy.plotting.Plot` for more details.
If there are multiple plots, then the same series arguments are applied to
all the plots. If you want to set these options separately, you can index
the returned ``Plot`` object and set it.
Arguments for ``Plot`` class:
title : str
Title of the plot.
size : (float, float), optional
A tuple in the form (width, height) in inches to specify the size of
the overall figure. The default value is set to ``None``, meaning
the size will be set by the default backend.
See Also
========
Plot, ContourSeries
| 283 | 43 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def plot_contour(*args, show=True, **kwargs):
args = list(map(sympify, args))
plot_expr = check_arguments(args, 1, 2)
series = [ContourSeries(*arg) for arg in plot_expr]
plot_contours = Plot(*series, **kwargs)
if len(plot_expr[0].free_symbols) > 2:
raise ValueError('Contour Plot cannot Plot for more than two variables.')
if show:
plot_contours.show()
return plot_contours
```
###Assistant :
Draws contour plot of a function
Usage
=====
Single plot
``plot_contour(expr, range_x, range_y, **kwargs)``
If the ranges are not specified, then a default range of (-10, 10) is used.
Multiple plots with the same range.
``plot_contour(expr1, expr2, range_x, range_y, **kwargs)``
If the ranges are not specified, then a default range of (-10, 10) is used.
Multiple plots with different ranges.
``plot_contour((expr1, range_x, range_y), (expr2, range_x, range_y), ..., **kwargs)``
Ranges have to be specified for every expression.
Default range may change in the future if a more advanced default range
detection algorithm is implemented.
Arguments
=========
expr : Expression representing the function along x.
range_x : (:class:`Symbol`, float, float)
A 3-tuple denoting the range of the x variable, e.g. (x, 0, 5).
range_y : (:class:`Symbol`, float, float)
A 3-tuple denoting the range of the y variable, e.g. (y, 0, 5).
Keyword Arguments
=================
Arguments for ``ContourSeries`` class:
nb_of_points_x : int
The x range is sampled uniformly at ``nb_of_points_x`` of points.
nb_of_points_y : int
The y range is sampled uniformly at ``nb_of_points_y`` of points.
Aesthetics:
surface_color : Function which returns a float
Specifies the color for the surface of the plot. See
:class:`sympy.plotting.Plot` for more details.
If there are multiple plots, then the same series arguments are applied to
all the plots. If you want to set these options separately, you can index
the returned ``Plot`` object and set it.
Arguments for ``Plot`` class:
title : str
Title of the plot.
size : (float, float), optional
A tuple in the form (width, height) in inches to specify the size of
the overall figure. The default value is set to ``None``, meaning
the size will be set by the default backend.
See Also
========
Plot, ContourSeries
|
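A usage sketch assuming SymPy's plotting module with a Matplotlib backend available; `show=False` keeps it runnable on headless machines.
```Python
from sympy import symbols, cos
from sympy.plotting import plot_contour

x, y = symbols("x y")
p = plot_contour(cos(x) * cos(y), (x, -5, 5), (y, -5, 5),
                 title="cos(x)*cos(y)", show=False)
# p.show()  # render once a display backend is available
```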
2,085 | def peek(self, n=0):
self._check_can_read()
# Relies on the undocumented fact that BufferedReader.peek()
# always returns at least one byte (except at EOF), independent
# of the value of n
return self._buffer.peek(n)
| Return buffered data without advancing the file position.
Always returns at least one byte of data, unless at EOF.
The exact number of bytes returned is unspecified.
| 27 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def peek(self, n=0):
self._check_can_read()
# Relies on the undocumented fact that BufferedReader.peek()
# always returns at least one byte (except at EOF), independent
# of the value of n
return self._buffer.peek(n)
```
###Assistant : Return buffered data without advancing the file position.
Always returns at least one byte of data, unless at EOF.
The exact number of bytes returned is unspecified.
|
2,086 | def cache_full(self) -> bool:
if self._cache_info["cache_full"]:
return self._cache_info["cache_full"]
with self._lock:
return self._cache_info["cache_full"]
| bool: ``True`` if the cache has been fully populated. ``False`` if there are items still
to be cached. | 18 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def cache_full(self) -> bool:
if self._cache_info["cache_full"]:
return self._cache_info["cache_full"]
with self._lock:
return self._cache_info["cache_full"]
```
###Assistant : bool: ``True`` if the cache has been fully populated. ``False`` if there are items still
to be cached.
|
2,087 | def get_queryset(self, request):
if self.queryset is None:
raise ImproperlyConfigured(
f"{self.__class__.__name__} does not define a queryset. Set queryset on the class or "
f"override its get_queryset() method."
)
return self.queryset.all()
|
Return the base queryset for the view. By default, this returns self.queryset.all().
Args:
request: The current request
| 17 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_queryset(self, request):
if self.queryset is None:
raise ImproperlyConfigured(
f"{self.__class__.__name__} does not define a queryset. Set queryset on the class or "
f"override its get_queryset() method."
)
return self.queryset.all()
```
###Assistant :
Return the base queryset for the view. By default, this returns self.queryset.all().
Args:
request: The current request
|
2,088 | def _preprocess(self, inputs):
inputs = self._check_input_text(inputs)
self._max_cls_len = 5
num_workers = self.kwargs[
'num_workers'] if 'num_workers' in self.kwargs else 0
lazy_load = self.kwargs[
'lazy_load'] if 'lazy_load' in self.kwargs else False
# Prompt template: input_text + "是" + "[MASK]" * cls_seq_length
prompt_template = ["是"] + ["[MASK]"] * self._max_cls_len
|
Create the dataset and dataloader for the predict.
| 8 | 46 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _preprocess(self, inputs):
inputs = self._check_input_text(inputs)
self._max_cls_len = 5
num_workers = self.kwargs[
'num_workers'] if 'num_workers' in self.kwargs else 0
lazy_load = self.kwargs[
'lazy_load'] if 'lazy_load' in self.kwargs else False
# Prompt template: input_text + "是" + "[MASK]" * cls_seq_length
prompt_template = ["是"] + ["[MASK]"] * self._max_cls_len
```
###Assistant :
Create the dataset and dataloader for the predict.
|
2,089 | def session_destroy(consul_url=None, token=None, session=None, **kwargs):
ret = {}
if not consul_url:
consul_url = _get_config()
if not consul_url:
log.error("No Consul URL found.")
ret["message"] = "No Consul URL found."
ret["res"] = False
return ret
if not session:
raise SaltInvocationError('Required argument "session" is missing.')
query_params = {}
if "dc" in kwargs:
query_params["dc"] = kwargs["dc"]
function = "session/destroy/{}".format(session)
res = _query(
consul_url=consul_url,
function=function,
token=token,
method="PUT",
query_params=query_params,
)
if res["res"]:
ret["res"] = True
ret["message"] = "Destroyed Session {}.".format(session)
else:
ret["res"] = False
ret["message"] = "Unable to destroy session {}.".format(session)
return ret
|
Destroy session
:param consul_url: The Consul server URL.
:param session: The ID of the session to destroy.
:param dc: By default, the datacenter of the agent is queried;
however, the dc can be provided using the "dc" parameter.
:return: Boolean & message of success or failure.
CLI Example:
.. code-block:: bash
salt '*' consul.session_destroy session='c1c4d223-91cb-3d1f-1ee8-f2af9e7b6716'
| 55 | 86 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def session_destroy(consul_url=None, token=None, session=None, **kwargs):
ret = {}
if not consul_url:
consul_url = _get_config()
if not consul_url:
log.error("No Consul URL found.")
ret["message"] = "No Consul URL found."
ret["res"] = False
return ret
if not session:
raise SaltInvocationError('Required argument "session" is missing.')
query_params = {}
if "dc" in kwargs:
query_params["dc"] = kwargs["dc"]
function = "session/destroy/{}".format(session)
res = _query(
consul_url=consul_url,
function=function,
token=token,
method="PUT",
query_params=query_params,
)
if res["res"]:
ret["res"] = True
ret["message"] = "Destroyed Session {}.".format(session)
else:
ret["res"] = False
ret["message"] = "Unable to destroy session {}.".format(session)
return ret
```
###Assistant :
Destroy session
:param consul_url: The Consul server URL.
:param session: The ID of the session to destroy.
:param dc: By default, the datacenter of the agent is queried;
however, the dc can be provided using the "dc" parameter.
:return: Boolean & message of success or failure.
CLI Example:
.. code-block:: bash
salt '*' consul.session_destroy session='c1c4d223-91cb-3d1f-1ee8-f2af9e7b6716'
|
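For reference, the documented endpoint can also be hit directly; this is a sketch using `requests` against Consul's HTTP API, assuming a local agent, with example token, datacenter, and session ID values.

```Python
# Rough equivalent of the call above using requests instead of Salt's _query helper.
import requests

consul_url = "http://127.0.0.1:8500"                 # assumed local agent
session_id = "c1c4d223-91cb-3d1f-1ee8-f2af9e7b6716"  # example session ID
resp = requests.put(
    f"{consul_url}/v1/session/destroy/{session_id}",
    headers={"X-Consul-Token": "example-token"},     # optional if ACLs are disabled
    params={"dc": "dc1"},                            # optional datacenter, as in the docstring
    timeout=10,
)
print(resp.ok)  # True when the session was destroyed
```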
2,090 | def post_save_action(cls, info, instance, cleaned_input):
manager = load_plugin_manager(info.context)
cls.call_event(manager.collection_updated, instance)
| Override this method with `pass` to avoid triggering product webhook. | 10 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def post_save_action(cls, info, instance, cleaned_input):
manager = load_plugin_manager(info.context)
cls.call_event(manager.collection_updated, instance)
```
###Assistant : Override this method with `pass` to avoid triggering product webhook.
|
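A small stand-alone sketch of the "override with `pass`" advice in this row; `BaseMutation` and `QuietMutation` are hypothetical names, and the print call stands in for the webhook.

```Python
# Overriding post_save_action with `pass` suppresses the event in a subclass.
class BaseMutation:
    @classmethod
    def post_save_action(cls, info, instance, cleaned_input):
        print("webhook fired")  # stands in for manager.collection_updated(instance)


class QuietMutation(BaseMutation):
    @classmethod
    def post_save_action(cls, info, instance, cleaned_input):
        pass  # intentionally skip the webhook


QuietMutation.post_save_action(None, None, None)  # prints nothing
```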
2,091 | def _deserialize_metric(metric_config):
from keras import (
metrics as metrics_module,
) # pylint:disable=g-import-not-at-top
if metric_config in ["accuracy", "acc", "crossentropy", "ce"]:
# Do not deserialize accuracy and cross-entropy strings as we have special
# case handling for these in compile, based on model output shape.
return metric_config
return metrics_module.deserialize(metric_config)
| Deserialize metrics, leaving special strings untouched. | 6 | 47 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _deserialize_metric(metric_config):
from keras import (
metrics as metrics_module,
) # pylint:disable=g-import-not-at-top
if metric_config in ["accuracy", "acc", "crossentropy", "ce"]:
# Do not deserialize accuracy and cross-entropy strings as we have special
# case handling for these in compile, based on model output shape.
return metric_config
return metrics_module.deserialize(metric_config)
```
###Assistant : Deserialize metrics, leaving special strings untouched.
|
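The special-string pass-through can be illustrated without Keras; the toy deserializer below only mimics the pattern, it is not the Keras API.

```Python
# Strings with special compile-time handling are returned untouched; everything
# else goes through the real deserializer (stubbed out here).
SPECIAL = {"accuracy", "acc", "crossentropy", "ce"}

def deserialize_metric(metric_config):
    if metric_config in SPECIAL:
        return metric_config            # resolved later, based on model output shape
    return {"resolved": metric_config}  # stand-in for metrics_module.deserialize

print(deserialize_metric("acc"))  # 'acc'
print(deserialize_metric("AUC"))  # {'resolved': 'AUC'}
```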
2,092 | def softsign(x):
return tf.math.softsign(x)
@keras_export("keras.activations.swish")
@tf.__internal__.dispatch.add_dispatch_support | Softsign activation function, `softsign(x) = x / (abs(x) + 1)`.
Example Usage:
>>> a = tf.constant([-1.0, 0.0, 1.0], dtype = tf.float32)
>>> b = tf.keras.activations.softsign(a)
>>> b.numpy()
array([-0.5, 0. , 0.5], dtype=float32)
Args:
x: Input tensor.
Returns:
The softsign activation: `x / (abs(x) + 1)`.
| 45 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def softsign(x):
return tf.math.softsign(x)
@keras_export("keras.activations.swish")
@tf.__internal__.dispatch.add_dispatch_support
```
###Assistant : Softsign activation function, `softsign(x) = x / (abs(x) + 1)`.
Example Usage:
>>> a = tf.constant([-1.0, 0.0, 1.0], dtype = tf.float32)
>>> b = tf.keras.activations.softsign(a)
>>> b.numpy()
array([-0.5, 0. , 0.5], dtype=float32)
Args:
x: Input tensor.
Returns:
The softsign activation: `x / (abs(x) + 1)`.
|
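A quick NumPy check of the formula in this row's docstring; it avoids a TensorFlow dependency and should reproduce the documented values.

```Python
# softsign(x) = x / (|x| + 1), verified numerically without TensorFlow.
import numpy as np

x = np.array([-1.0, 0.0, 1.0])
print(x / (np.abs(x) + 1.0))  # [-0.5  0.   0.5]
```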
2,093 | def compute_inlier_metric(self) -> None:
import scipy.stats as ss
nmb_previous_points = self.data['InlierMetric_nmb_points']
weibull_percentile = self.data['InlierMetric_weib_perc']
train_ft_df = self.data_dictionary['train_features']
train_ft_df_reindexed = train_ft_df.reindex(
index=np.flip(train_ft_df.index)
)
pairwise = pd.DataFrame(
np.triu(
pairwise_distances(train_ft_df_reindexed, n_jobs=self.thread_count)
),
columns=train_ft_df_reindexed.index,
index=train_ft_df_reindexed.index
)
pairwise = pairwise.round(5)
column_labels = [
'{}{}'.format('d', i) for i in range(1, nmb_previous_points+1)
]
distances = pd.DataFrame(
columns=column_labels, index=train_ft_df.index
)
for index in train_ft_df.index[nmb_previous_points:]:
current_row = pairwise.loc[[index]]
current_row_no_zeros = current_row.loc[
:, (current_row!=0).any(axis=0)
]
distances.loc[[index]] = current_row_no_zeros.iloc[
:, :nmb_previous_points
]
distances = distances.replace([np.inf, -np.inf], np.nan)
drop_index = pd.isnull(distances).any(axis=1)
distances = distances[drop_index==0]
inliers = pd.DataFrame(index=distances.index)
for key in distances.keys():
current_distances = distances[key].dropna()
fit_params = ss.weibull_min.fit(current_distances)
cutoff = ss.weibull_min.ppf(weibull_percentile, *fit_params)
is_inlier = np.where(
current_distances<=cutoff, 1, 0
)
df_inlier = pd.DataFrame(
{key+'_IsInlier':is_inlier}, index=distances.index
)
inliers = pd.concat(
[inliers, df_inlier], axis=1
)
self.data_dictionary['train_features'] = pd.DataFrame(
data=inliers.sum(axis=1)/nmb_previous_points,
columns=['inlier_metric'],
index = train_ft_df.index
)
percent_outliers = np.round(
100*(1-self.data_dictionary['train_features']['inlier_metric'].sum()/
len(train_ft_df.index)), 2
)
logger.info(f'{percent_outliers}% of data points were identified as outliers')
return None
|
Compute inlier metric from backwards distance distributions.
This metric defines how well features from a timepoint fit
into previous timepoints.
| 20 | 145 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def compute_inlier_metric(self) -> None:
import scipy.stats as ss
nmb_previous_points = self.data['InlierMetric_nmb_points']
weibull_percentile = self.data['InlierMetric_weib_perc']
train_ft_df = self.data_dictionary['train_features']
train_ft_df_reindexed = train_ft_df.reindex(
index=np.flip(train_ft_df.index)
)
pairwise = pd.DataFrame(
np.triu(
pairwise_distances(train_ft_df_reindexed, n_jobs=self.thread_count)
),
columns=train_ft_df_reindexed.index,
index=train_ft_df_reindexed.index
)
pairwise = pairwise.round(5)
column_labels = [
'{}{}'.format('d', i) for i in range(1, nmb_previous_points+1)
]
distances = pd.DataFrame(
columns=column_labels, index=train_ft_df.index
)
for index in train_ft_df.index[nmb_previous_points:]:
current_row = pairwise.loc[[index]]
current_row_no_zeros = current_row.loc[
:, (current_row!=0).any(axis=0)
]
distances.loc[[index]] = current_row_no_zeros.iloc[
:, :nmb_previous_points
]
distances = distances.replace([np.inf, -np.inf], np.nan)
drop_index = pd.isnull(distances).any(axis=1)
distances = distances[drop_index==0]
inliers = pd.DataFrame(index=distances.index)
for key in distances.keys():
current_distances = distances[key].dropna()
fit_params = ss.weibull_min.fit(current_distances)
cutoff = ss.weibull_min.ppf(weibull_percentile, *fit_params)
is_inlier = np.where(
current_distances<=cutoff, 1, 0
)
df_inlier = pd.DataFrame(
{key+'_IsInlier':is_inlier}, index=distances.index
)
inliers = pd.concat(
[inliers, df_inlier], axis=1
)
self.data_dictionary['train_features'] = pd.DataFrame(
data=inliers.sum(axis=1)/nmb_previous_points,
columns=['inlier_metric'],
index = train_ft_df.index
)
percent_outliers = np.round(
100*(1-self.data_dictionary['train_features']['inlier_metric'].sum()/
len(train_ft_df.index)), 2
)
logger.info(f'{percent_outliers}% of data points were identified as outliers')
return None
```
###Assistant :
Compute inlier metric from backwards distance distributions.
This metric defines how well features from a timepoint fit
into previous timepoints.
|
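A toy illustration of the Weibull-percentile cutoff this row relies on; the distances are synthetic and the 0.90 percentile is an assumed value, not the strategy's configuration.

```Python
# Fit a Weibull distribution to pairwise distances and flag points below the
# percentile cutoff as inliers, mirroring the logic above on made-up data.
import numpy as np
import scipy.stats as ss

rng = np.random.default_rng(0)
distances = rng.weibull(1.5, size=500) + 0.01   # synthetic positive distances
fit_params = ss.weibull_min.fit(distances)
cutoff = ss.weibull_min.ppf(0.90, *fit_params)  # assumed weibull_percentile of 0.90
is_inlier = np.where(distances <= cutoff, 1, 0)
print(round(is_inlier.mean(), 2))               # roughly 0.9 of points flagged as inliers
```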
2,094 | def items_view(self, traverser, items):
if len(items) == 1:
traverser(items[0])
self.write(",")
else:
self.interleave(lambda: self.write(", "), traverser, items)
| Traverse and separate the given *items* with a comma and append it to
the buffer. If *items* is a single item sequence, a trailing comma
will be added. | 28 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def items_view(self, traverser, items):
if len(items) == 1:
traverser(items[0])
self.write(",")
else:
self.interleave(lambda: self.write(", "), traverser, items)
```
###Assistant : Traverse and separate the given *items* with a comma and append it to
the buffer. If *items* is a single item sequence, a trailing comma
will be added.
|
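The trailing-comma rule can be shown with a tiny stand-alone function; `render_items` is a hypothetical helper written for illustration, not part of the unparser.

```Python
# Single-item sequences get a trailing comma; longer ones are comma-separated.
def render_items(items):
    if len(items) == 1:
        return f"{items[0]},"
    return ", ".join(str(i) for i in items)

print(render_items([1]))        # 1,
print(render_items([1, 2, 3]))  # 1, 2, 3
```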
2,095 | def refresh_stats(self) -> None:
try:
self._mallctl("epoch", read=False, write=1)
except Exception as e:
logger.warning("Failed to reload jemalloc stats: %s", e)
| Request that jemalloc updates its internal statistics. This needs to
be called before querying for stats, otherwise it will return stale
values.
| 22 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def refresh_stats(self) -> None:
try:
self._mallctl("epoch", read=False, write=1)
except Exception as e:
logger.warning("Failed to reload jemalloc stats: %s", e)
```
###Assistant : Request that jemalloc updates its internal statistics. This needs to
be called before querying for stats, otherwise it will return stale
values.
|
2,096 | def test_to_numpy_array_multiweight_reduction(func, expected):
G = nx.MultiDiGraph()
weights = [-1, 2, 10.0]
for w in weights:
G.add_edge(0, 1, weight=w)
A = nx.to_numpy_array(G, multigraph_weight=func, dtype=float)
assert np.allclose(A, [[0, expected], [0, 0]])
# Undirected case
A = nx.to_numpy_array(G.to_undirected(), multigraph_weight=func, dtype=float)
assert np.allclose(A, [[0, expected], [expected, 0]])
| Test various functions for reducing multiedge weights. | 7 | 43 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_to_numpy_array_multiweight_reduction(func, expected):
G = nx.MultiDiGraph()
weights = [-1, 2, 10.0]
for w in weights:
G.add_edge(0, 1, weight=w)
A = nx.to_numpy_array(G, multigraph_weight=func, dtype=float)
assert np.allclose(A, [[0, expected], [0, 0]])
# Undirected case
A = nx.to_numpy_array(G.to_undirected(), multigraph_weight=func, dtype=float)
assert np.allclose(A, [[0, expected], [expected, 0]])
```
###Assistant : Test various functions for reducing multiedge weights.
|
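A short usage sketch of the behaviour exercised by this test, assuming NetworkX is installed; `sum` and `min` stand in for the reduction functions the test parametrizes.

```Python
# Parallel edge weights are reduced into one matrix entry by multigraph_weight.
import networkx as nx

G = nx.MultiDiGraph()
for w in (-1, 2, 10.0):
    G.add_edge(0, 1, weight=w)

print(nx.to_numpy_array(G, multigraph_weight=sum, dtype=float))  # [[ 0. 11.] [ 0.  0.]]
print(nx.to_numpy_array(G, multigraph_weight=min, dtype=float))  # [[ 0. -1.] [ 0.  0.]]
```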
2,097 | def test_vr_connector_causal_slice(self):
view_rq_dict = {
"state": ViewRequirement("obs"),
# shift array should be [-2, -1, 0]
"prev_states": ViewRequirement("obs", shift="-2:0"),
# shift array should be [-4, -2, 0]
"prev_strided_states_even": ViewRequirement("obs", shift="-4:0:2"),
# shift array should be [-3, -1]
"prev_strided_states_odd": ViewRequirement("obs", shift="-3:0:2"),
}
obs_arrs = np.arange(10)[:, None] + 1
config = PPOConfig().to_dict()
ctx = ConnectorContext(
view_requirements=view_rq_dict, config=config, is_policy_recurrent=True
)
c = ViewRequirementAgentConnector(ctx)
# keep a queue of observations
obs_list = []
for t, obs in enumerate(obs_arrs):
# t=0 is the next state of t=-1
data = AgentConnectorDataType(
0, 1, {SampleBatch.NEXT_OBS: obs, SampleBatch.T: t - 1}
)
processed = c([data])
for_action = processed[0].data.for_action
if t == 0:
obs_list.extend([obs for _ in range(5)])
else:
# remove the first obs and add the current obs to the end
obs_list.pop(0)
obs_list.append(obs)
# check state
check(for_action["state"], obs[None])
# check prev_states
check(
for_action["prev_states"],
np.stack(obs_list)[np.array([-3, -2, -1])][None],
)
# check prev_strided_states_even
check(
for_action["prev_strided_states_even"],
np.stack(obs_list)[np.array([-5, -3, -1])][None],
)
check(
for_action["prev_strided_states_odd"],
np.stack(obs_list)[np.array([-4, -2])][None],
)
| Test that the ViewRequirementConnector can handle slice shifts correctly. | 9 | 152 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_vr_connector_causal_slice(self):
view_rq_dict = {
"state": ViewRequirement("obs"),
# shift array should be [-2, -1, 0]
"prev_states": ViewRequirement("obs", shift="-2:0"),
# shift array should be [-4, -2, 0]
"prev_strided_states_even": ViewRequirement("obs", shift="-4:0:2"),
# shift array should be [-3, -1]
"prev_strided_states_odd": ViewRequirement("obs", shift="-3:0:2"),
}
obs_arrs = np.arange(10)[:, None] + 1
config = PPOConfig().to_dict()
ctx = ConnectorContext(
view_requirements=view_rq_dict, config=config, is_policy_recurrent=True
)
c = ViewRequirementAgentConnector(ctx)
# keep a queue of observations
obs_list = []
for t, obs in enumerate(obs_arrs):
# t=0 is the next state of t=-1
data = AgentConnectorDataType(
0, 1, {SampleBatch.NEXT_OBS: obs, SampleBatch.T: t - 1}
)
processed = c([data])
for_action = processed[0].data.for_action
if t == 0:
obs_list.extend([obs for _ in range(5)])
else:
# remove the first obs and add the current obs to the end
obs_list.pop(0)
obs_list.append(obs)
# check state
check(for_action["state"], obs[None])
# check prev_states
check(
for_action["prev_states"],
np.stack(obs_list)[np.array([-3, -2, -1])][None],
)
# check prev_strided_states_even
check(
for_action["prev_strided_states_even"],
np.stack(obs_list)[np.array([-5, -3, -1])][None],
)
check(
for_action["prev_strided_states_odd"],
np.stack(obs_list)[np.array([-4, -2])][None],
)
```
###Assistant : Test that the ViewRequirementConnector can handle slice shifts correctly.
|
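The shift strings in this test expand to the offset arrays noted in its comments; `expand_shift` below is a hypothetical helper written only to make that expansion explicit, not RLlib code.

```Python
# "start:stop[:step]" shift strings expand to inclusive offset ranges.
def expand_shift(shift: str):
    parts = [int(p) for p in shift.split(":")]
    start, stop = parts[0], parts[1]
    step = parts[2] if len(parts) == 3 else 1
    return list(range(start, stop + 1, step))

print(expand_shift("-2:0"))    # [-2, -1, 0]
print(expand_shift("-4:0:2"))  # [-4, -2, 0]
print(expand_shift("-3:0:2"))  # [-3, -1]
```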
2,098 | def render(self) -> RenderableType:
return Padding(
Align.right(FigletText(self.value), vertical="middle"),
(0, 1),
style="white on rgb(51,51,51)",
)
| Build a Rich renderable to render the calculator display. | 9 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def render(self) -> RenderableType:
return Padding(
Align.right(FigletText(self.value), vertical="middle"),
(0, 1),
style="white on rgb(51,51,51)",
)
```
###Assistant : Build a Rich renderable to render the calculator display.
|
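The same display styling can be previewed with Rich alone; `Text` replaces the custom `FigletText` widget here, so this is only an approximation of the calculator display.

```Python
# Right-aligned, padded, dark-grey display panel built from Rich primitives.
from rich.align import Align
from rich.console import Console
from rich.padding import Padding
from rich.text import Text

display = Padding(
    Align.right(Text("123.45"), vertical="middle"),
    (0, 1),
    style="white on rgb(51,51,51)",
)
Console().print(display)
```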
2,099 | def in_place_subclassed_model_state_restoration(model):
assert not model._is_graph_network
# Restore layers and build attributes
if (
hasattr(model, "_original_attributes_cache")
and model._original_attributes_cache is not None
):
# Models have sticky attribute assignment, so we want to be careful to
# add back the previous attributes and track Layers by their original
# names without adding dependencies on "utility" attributes which Models
# exempt when they're constructed.
setattr_tracking = model._setattr_tracking
model._setattr_tracking = False
model._self_tracked_trackables = []
for name, value in model._original_attributes_cache.items():
setattr(model, name, value)
if isinstance(value, Layer):
model._self_tracked_trackables.append(value)
model._original_attributes_cache = None
model._setattr_tracking = setattr_tracking
else:
# Restore to the state of a never-called model.
_reset_build_compile_trackers(model)
@keras_export("keras.__internal__.models.clone_and_build_model", v1=[]) | Restores the original state of a model after it was "reset".
This undoes the action of `_in_place_subclassed_model_reset`, which is
called in `clone_and_build_model` if `in_place_reset` is set to True.
Args:
model: Instance of a Keras model created via subclassing, on which
`_in_place_subclassed_model_reset` was previously called.
| 44 | 101 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def in_place_subclassed_model_state_restoration(model):
assert not model._is_graph_network
# Restore layers and build attributes
if (
hasattr(model, "_original_attributes_cache")
and model._original_attributes_cache is not None
):
# Models have sticky attribute assignment, so we want to be careful to
# add back the previous attributes and track Layers by their original
# names without adding dependencies on "utility" attributes which Models
# exempt when they're constructed.
setattr_tracking = model._setattr_tracking
model._setattr_tracking = False
model._self_tracked_trackables = []
for name, value in model._original_attributes_cache.items():
setattr(model, name, value)
if isinstance(value, Layer):
model._self_tracked_trackables.append(value)
model._original_attributes_cache = None
model._setattr_tracking = setattr_tracking
else:
# Restore to the state of a never-called model.
_reset_build_compile_trackers(model)
@keras_export("keras.__internal__.models.clone_and_build_model", v1=[])
```
###Assistant : Restores the original state of a model after it was "reset".
This undoes the action of `_in_place_subclassed_model_reset`, which is
called in `clone_and_build_model` if `in_place_reset` is set to True.
Args:
model: Instance of a Keras model created via subclassing, on which
`_in_place_subclassed_model_reset` was previously called.
|
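The cache-and-restore step at the heart of this row can be reduced to plain attribute handling; the `Holder` object and cached values below are purely illustrative, not Keras internals.

```Python
# Attributes saved in a cache dict are written back with setattr, then the cache
# is cleared, restoring the object to its pre-"reset" state.
class Holder:
    pass

obj = Holder()
obj._original_attributes_cache = {"dense": "layer-object", "units": 32}

for name, value in obj._original_attributes_cache.items():
    setattr(obj, name, value)
obj._original_attributes_cache = None

print(obj.dense, obj.units)  # layer-object 32
```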