Unnamed: 0 (int64, 0 to 2.93k) | code (string, lengths 101 to 62.2k) | docs (string, lengths 51 to 10.7k) | doc_len (int64, 4 to 1.74k) | words (int64, 4 to 4.82k) | lang (string, 1 class) | prompt (string, lengths 320 to 71.2k) |
---|---|---|---|---|---|---|
700 | def _clean_configuration_value(cls, item_type, new_value):
if (
item_type == ConfigurationTypeField.BOOLEAN
and new_value
and not isinstance(new_value, bool)
):
new_value = new_value.lower() == "true"
if item_type == ConfigurationTypeField.OUTPUT:
# OUTPUT field is read only. No need to update it
return
return new_value
| Clean the value that is saved in plugin configuration.
Change a string provided for a boolean field into the bool value.
Return None for the OUTPUT type, as it's a read-only field.
| 29 | 39 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _clean_configuration_value(cls, item_type, new_value):
if (
item_type == ConfigurationTypeField.BOOLEAN
and new_value
and not isinstance(new_value, bool)
):
new_value = new_value.lower() == "true"
if item_type == ConfigurationTypeField.OUTPUT:
# OUTPUT field is read only. No need to update it
return
return new_value
```
###Assistant : Clean the value that is saved in plugin configuration.
Change a string provided for a boolean field into the bool value.
Return None for the OUTPUT type, as it's a read-only field.
|
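A minimal, self-contained sketch of the coercion the docstring describes; the `ConfigurationTypeField` enum below is a hypothetical stand-in for the real one, and `clean_value` is an illustrative name rather than the original classmethod.
```Python
from enum import Enum

class ConfigurationTypeField(Enum):
    # Hypothetical stand-in for the real enum referenced by the method above.
    BOOLEAN = "Boolean"
    OUTPUT = "Output"

def clean_value(item_type, new_value):
    # Coerce "true"/"false" strings into a bool; leave real bools untouched.
    if (
        item_type == ConfigurationTypeField.BOOLEAN
        and new_value
        and not isinstance(new_value, bool)
    ):
        new_value = new_value.lower() == "true"
    if item_type == ConfigurationTypeField.OUTPUT:
        # Read-only field, nothing to save.
        return None
    return new_value

print(clean_value(ConfigurationTypeField.BOOLEAN, "True"))    # True
print(clean_value(ConfigurationTypeField.BOOLEAN, "false"))   # False
print(clean_value(ConfigurationTypeField.OUTPUT, "ignored"))  # None
```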
701 | def coord_map_from_to(top_from, top_to):
# We need to find a common ancestor of top_from and top_to.
# We'll assume that all ancestors are equivalent here (otherwise the graph
# is an inconsistent state (which we could improve this to check for)).
# For now use a brute-force algorithm.
|
Determine the coordinate mapping between a top (from) and a top (to).
Walk the graph to find a common ancestor while composing the coord maps for
from and to until they meet. As a last step the from map is inverted.
| 41 | 47 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def coord_map_from_to(top_from, top_to):
# We need to find a common ancestor of top_from and top_to.
# We'll assume that all ancestors are equivalent here (otherwise the graph
# is an inconsistent state (which we could improve this to check for)).
# For now use a brute-force algorithm.
```
###Assistant :
Determine the coordinate mapping between a top (from) and a top (to).
Walk the graph to find a common ancestor while composing the coord maps for
from and to until they meet. As a last step the from map is inverted.
|
702 | def get_conditions_to_validate_future_sle(sl_entries):
warehouse_items_map = {}
for entry in sl_entries:
if entry.warehouse not in warehouse_items_map:
warehouse_items_map[entry.warehouse] = set()
warehouse_items_map[entry.warehouse].add(entry.item_code)
or_conditions = []
for warehouse, items in warehouse_items_map.items():
or_conditions.append(
f
)
return or_conditions
| warehouse = {frappe.db.escape(warehouse)}
and item_code in ({', '.join(frappe.db.escape(item) for item in items)}) | 12 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_conditions_to_validate_future_sle(sl_entries):
warehouse_items_map = {}
for entry in sl_entries:
if entry.warehouse not in warehouse_items_map:
warehouse_items_map[entry.warehouse] = set()
warehouse_items_map[entry.warehouse].add(entry.item_code)
or_conditions = []
for warehouse, items in warehouse_items_map.items():
or_conditions.append(
f
)
return or_conditions
```
###Assistant : warehouse = {frappe.db.escape(warehouse)}
and item_code in ({', '.join(frappe.db.escape(item) for item in items)})
|
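A small self-contained sketch of the grouping-and-condition-building pattern above; plain single-quote wrapping stands in for `frappe.db.escape`, and the entries are dicts rather than Frappe documents (both assumptions made for illustration).
```Python
def build_or_conditions(sl_entries):
    # Group item codes by warehouse, then render one SQL fragment per warehouse.
    warehouse_items_map = {}
    for entry in sl_entries:
        warehouse_items_map.setdefault(entry["warehouse"], set()).add(entry["item_code"])

    def quote(value):
        # Stand-in for frappe.db.escape; do not use this for real SQL escaping.
        return f"'{value}'"

    return [
        f"warehouse = {quote(warehouse)} and "
        f"item_code in ({', '.join(quote(item) for item in sorted(items))})"
        for warehouse, items in warehouse_items_map.items()
    ]

entries = [
    {"warehouse": "Stores - A", "item_code": "ITEM-001"},
    {"warehouse": "Stores - A", "item_code": "ITEM-002"},
    {"warehouse": "Stores - B", "item_code": "ITEM-001"},
]
for condition in build_or_conditions(entries):
    print(condition)
```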
703 | def split_dataset(items, eval_split_max_size=None, eval_split_size=0.01):
speakers = [item["speaker_name"] for item in items]
is_multi_speaker = len(set(speakers)) > 1
if eval_split_size > 1:
eval_split_size = int(eval_split_size)
else:
if eval_split_max_size:
eval_split_size = min(eval_split_max_size, int(len(items) * eval_split_size))
else:
eval_split_size = int(len(items) * eval_split_size)
assert (
eval_split_size > 0
), " [!] You do not have enough samples for the evaluation set. You can work around this setting the 'eval_split_size' parameter to a minimum of {}".format(
1 / len(items)
)
np.random.seed(0)
np.random.shuffle(items)
if is_multi_speaker:
items_eval = []
speakers = [item["speaker_name"] for item in items]
speaker_counter = Counter(speakers)
while len(items_eval) < eval_split_size:
item_idx = np.random.randint(0, len(items))
speaker_to_be_removed = items[item_idx]["speaker_name"]
if speaker_counter[speaker_to_be_removed] > 1:
items_eval.append(items[item_idx])
speaker_counter[speaker_to_be_removed] -= 1
del items[item_idx]
return items_eval, items
return items[:eval_split_size], items[eval_split_size:]
| Split a dataset into train and eval. Consider speaker distribution in multi-speaker training.
Args:
items (List[List]):
A list of samples. Each sample is a list of `[audio_path, text, speaker_id]`.
eval_split_max_size (int):
Maximum number of samples to be used for evaluation in a proportional split. Defaults to None (disabled).
eval_split_size (float):
If between 0.0 and 1.0, represents the proportion of the dataset to include in the evaluation set.
If > 1, represents the absolute number of evaluation samples. Defaults to 0.01 (1%).
| 101 | 118 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def split_dataset(items, eval_split_max_size=None, eval_split_size=0.01):
speakers = [item["speaker_name"] for item in items]
is_multi_speaker = len(set(speakers)) > 1
if eval_split_size > 1:
eval_split_size = int(eval_split_size)
else:
if eval_split_max_size:
eval_split_size = min(eval_split_max_size, int(len(items) * eval_split_size))
else:
eval_split_size = int(len(items) * eval_split_size)
assert (
eval_split_size > 0
), " [!] You do not have enough samples for the evaluation set. You can work around this setting the 'eval_split_size' parameter to a minimum of {}".format(
1 / len(items)
)
np.random.seed(0)
np.random.shuffle(items)
if is_multi_speaker:
items_eval = []
speakers = [item["speaker_name"] for item in items]
speaker_counter = Counter(speakers)
while len(items_eval) < eval_split_size:
item_idx = np.random.randint(0, len(items))
speaker_to_be_removed = items[item_idx]["speaker_name"]
if speaker_counter[speaker_to_be_removed] > 1:
items_eval.append(items[item_idx])
speaker_counter[speaker_to_be_removed] -= 1
del items[item_idx]
return items_eval, items
return items[:eval_split_size], items[eval_split_size:]
```
###Assistant : Split a dataset into train and eval. Consider speaker distribution in multi-speaker training.
Args:
items (List[List]):
A list of samples. Each sample is a list of `[audio_path, text, speaker_id]`.
eval_split_max_size (int):
Maximum number of samples to be used for evaluation in a proportional split. Defaults to None (disabled).
eval_split_size (float):
If between 0.0 and 1.0, represents the proportion of the dataset to include in the evaluation set.
If > 1, represents the absolute number of evaluation samples. Defaults to 0.01 (1%).
|
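A minimal sketch of how `eval_split_size` resolves to an absolute evaluation-set size under the rules described above (a proportion when between 0 and 1, an absolute count when greater than 1, optionally capped by `eval_split_max_size`); the helper name is illustrative and not part of the library.
```Python
def resolve_eval_split_size(num_items, eval_split_max_size=None, eval_split_size=0.01):
    # > 1 means an absolute number of eval samples; otherwise a proportion of the dataset.
    if eval_split_size > 1:
        return int(eval_split_size)
    size = int(num_items * eval_split_size)
    if eval_split_max_size:
        size = min(eval_split_max_size, size)
    return size

print(resolve_eval_split_size(1000))                          # 10 (1% of 1000)
print(resolve_eval_split_size(1000, eval_split_size=50))      # 50 (absolute count)
print(resolve_eval_split_size(1000, eval_split_max_size=5))   # 5 (proportional split, capped)
```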
704 | def test_quarantine_media(self) -> None:
media_info = self.get_success(self.store.get_local_media(self.media_id))
assert media_info is not None
self.assertFalse(media_info["quarantined_by"])
# quarantining
channel = self.make_request(
"POST",
self.url % ("quarantine", self.server_name, self.media_id),
access_token=self.admin_user_tok,
)
self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body)
self.assertFalse(channel.json_body)
media_info = self.get_success(self.store.get_local_media(self.media_id))
assert media_info is not None
self.assertTrue(media_info["quarantined_by"])
# remove from quarantine
channel = self.make_request(
"POST",
self.url % ("unquarantine", self.server_name, self.media_id),
access_token=self.admin_user_tok,
)
self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body)
self.assertFalse(channel.json_body)
media_info = self.get_success(self.store.get_local_media(self.media_id))
assert media_info is not None
self.assertFalse(media_info["quarantined_by"])
|
Tests that quarantining a media item and removing it from quarantine are successful
| 11 | 67 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_quarantine_media(self) -> None:
media_info = self.get_success(self.store.get_local_media(self.media_id))
assert media_info is not None
self.assertFalse(media_info["quarantined_by"])
# quarantining
channel = self.make_request(
"POST",
self.url % ("quarantine", self.server_name, self.media_id),
access_token=self.admin_user_tok,
)
self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body)
self.assertFalse(channel.json_body)
media_info = self.get_success(self.store.get_local_media(self.media_id))
assert media_info is not None
self.assertTrue(media_info["quarantined_by"])
# remove from quarantine
channel = self.make_request(
"POST",
self.url % ("unquarantine", self.server_name, self.media_id),
access_token=self.admin_user_tok,
)
self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body)
self.assertFalse(channel.json_body)
media_info = self.get_success(self.store.get_local_media(self.media_id))
assert media_info is not None
self.assertFalse(media_info["quarantined_by"])
```
###Assistant :
Tests that quarantining a media item and removing it from quarantine are successful
|
705 | def naive_greedy_modularity_communities(G, resolution=1, weight=None):
r
# First create one community for each node
communities = list(frozenset([u]) for u in G.nodes())
# Track merges
merges = []
# Greedily merge communities until no improvement is possible
old_modularity = None
new_modularity = modularity(G, communities, resolution=resolution, weight=weight)
while old_modularity is None or new_modularity > old_modularity:
# Save modularity for comparison
old_modularity = new_modularity
# Find best pair to merge
trial_communities = list(communities)
to_merge = None
for i, u in enumerate(communities):
for j, v in enumerate(communities):
# Skip i==j and empty communities
if j <= i or len(u) == 0 or len(v) == 0:
continue
# Merge communities u and v
trial_communities[j] = u | v
trial_communities[i] = frozenset([])
trial_modularity = modularity(
G, trial_communities, resolution=resolution, weight=weight
)
if trial_modularity >= new_modularity:
# Check if strictly better or tie
if trial_modularity > new_modularity:
# Found new best, save modularity and group indexes
new_modularity = trial_modularity
to_merge = (i, j, new_modularity - old_modularity)
elif to_merge and min(i, j) < min(to_merge[0], to_merge[1]):
# Break ties by choosing pair with lowest min id
new_modularity = trial_modularity
to_merge = (i, j, new_modularity - old_modularity)
# Un-merge
trial_communities[i] = u
trial_communities[j] = v
if to_merge is not None:
# If the best merge improves modularity, use it
merges.append(to_merge)
i, j, dq = to_merge
u, v = communities[i], communities[j]
communities[j] = u | v
communities[i] = frozenset([])
# Remove empty communities and sort
return sorted((c for c in communities if len(c) > 0), key=len, reverse=True)
# old name
_naive_greedy_modularity_communities = naive_greedy_modularity_communities
| Find communities in G using greedy modularity maximization.
This implementation is O(n^4), much slower than alternatives, but it is
provided as an easy-to-understand reference implementation.
Greedy modularity maximization begins with each node in its own community
and joins the pair of communities that most increases modularity until no
such pair exists.
This function maximizes the generalized modularity, where `resolution`
is the resolution parameter, often expressed as $\gamma$.
See :func:`~networkx.algorithms.community.quality.modularity`.
Parameters
----------
G : NetworkX graph
resolution : float (default=1)
If resolution is less than 1, modularity favors larger communities.
Greater than 1 favors smaller communities.
weight : string or None, optional (default=None)
The name of an edge attribute that holds the numerical value used
as a weight. If None, then each edge has weight 1.
The degree is the sum of the edge weights adjacent to the node.
Returns
-------
list
A list of sets of nodes, one for each community.
Sorted by length with largest communities first.
Examples
--------
>>> from networkx.algorithms.community import \
... naive_greedy_modularity_communities
>>> G = nx.karate_club_graph()
>>> c = naive_greedy_modularity_communities(G)
>>> sorted(c[0])
[8, 14, 15, 18, 20, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33]
See Also
--------
greedy_modularity_communities
modularity
| 199 | 250 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def naive_greedy_modularity_communities(G, resolution=1, weight=None):
r
# First create one community for each node
communities = list(frozenset([u]) for u in G.nodes())
# Track merges
merges = []
# Greedily merge communities until no improvement is possible
old_modularity = None
new_modularity = modularity(G, communities, resolution=resolution, weight=weight)
while old_modularity is None or new_modularity > old_modularity:
# Save modularity for comparison
old_modularity = new_modularity
# Find best pair to merge
trial_communities = list(communities)
to_merge = None
for i, u in enumerate(communities):
for j, v in enumerate(communities):
# Skip i==j and empty communities
if j <= i or len(u) == 0 or len(v) == 0:
continue
# Merge communities u and v
trial_communities[j] = u | v
trial_communities[i] = frozenset([])
trial_modularity = modularity(
G, trial_communities, resolution=resolution, weight=weight
)
if trial_modularity >= new_modularity:
# Check if strictly better or tie
if trial_modularity > new_modularity:
# Found new best, save modularity and group indexes
new_modularity = trial_modularity
to_merge = (i, j, new_modularity - old_modularity)
elif to_merge and min(i, j) < min(to_merge[0], to_merge[1]):
# Break ties by choosing pair with lowest min id
new_modularity = trial_modularity
to_merge = (i, j, new_modularity - old_modularity)
# Un-merge
trial_communities[i] = u
trial_communities[j] = v
if to_merge is not None:
# If the best merge improves modularity, use it
merges.append(to_merge)
i, j, dq = to_merge
u, v = communities[i], communities[j]
communities[j] = u | v
communities[i] = frozenset([])
# Remove empty communities and sort
return sorted((c for c in communities if len(c) > 0), key=len, reverse=True)
# old name
_naive_greedy_modularity_communities = naive_greedy_modularity_communities
```
###Assistant : Find communities in G using greedy modularity maximization.
This implementation is O(n^4), much slower than alternatives, but it is
provided as an easy-to-understand reference implementation.
Greedy modularity maximization begins with each node in its own community
and joins the pair of communities that most increases modularity until no
such pair exists.
This function maximizes the generalized modularity, where `resolution`
is the resolution parameter, often expressed as $\gamma$.
See :func:`~networkx.algorithms.community.quality.modularity`.
Parameters
----------
G : NetworkX graph
resolution : float (default=1)
If resolution is less than 1, modularity favors larger communities.
Greater than 1 favors smaller communities.
weight : string or None, optional (default=None)
The name of an edge attribute that holds the numerical value used
as a weight. If None, then each edge has weight 1.
The degree is the sum of the edge weights adjacent to the node.
Returns
-------
list
A list of sets of nodes, one for each community.
Sorted by length with largest communities first.
Examples
--------
>>> from networkx.algorithms.community import \
... naive_greedy_modularity_communities
>>> G = nx.karate_club_graph()
>>> c = naive_greedy_modularity_communities(G)
>>> sorted(c[0])
[8, 14, 15, 18, 20, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33]
See Also
--------
greedy_modularity_communities
modularity
|
706 | def invoke(self) -> Generator[PowerShell, None, None]:
logger = copy(self.log)
logger.setLevel(self._logging_level)
local_context = self._conn is None
if local_context:
self.__enter__()
try:
assert self._conn is not None
ps = PowerShell(self._conn)
yield ps
ps.begin_invoke()
streams = [
ps.output,
ps.streams.debug,
ps.streams.error,
ps.streams.information,
ps.streams.progress,
ps.streams.verbose,
ps.streams.warning,
]
offsets = [0 for _ in streams]
# We're using polling to make sure output and streams are
# handled while the process is running.
while ps.state == PSInvocationState.RUNNING:
ps.poll_invoke(timeout=self._operation_timeout)
for i, stream in enumerate(streams):
offset = offsets[i]
while len(stream) > offset:
record = stream[offset]
# Records received on the output stream during job
# status polling are handled via an optional callback,
# while the other streams are simply logged.
if stream is ps.output:
if self._on_output_callback is not None:
self._on_output_callback(record)
else:
self._log_record(logger.log, record)
offset += 1
offsets[i] = offset
# For good measure, we'll make sure the process has
# stopped running in any case.
ps.end_invoke()
self.log.info("Invocation state: %s", str(PSInvocationState(ps.state)))
if ps.streams.error:
raise AirflowException("Process had one or more errors")
finally:
if local_context:
self.__exit__(None, None, None)
|
Context manager that yields a PowerShell object to which commands can be
added. Upon exit, the commands will be invoked.
| 20 | 167 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def invoke(self) -> Generator[PowerShell, None, None]:
logger = copy(self.log)
logger.setLevel(self._logging_level)
local_context = self._conn is None
if local_context:
self.__enter__()
try:
assert self._conn is not None
ps = PowerShell(self._conn)
yield ps
ps.begin_invoke()
streams = [
ps.output,
ps.streams.debug,
ps.streams.error,
ps.streams.information,
ps.streams.progress,
ps.streams.verbose,
ps.streams.warning,
]
offsets = [0 for _ in streams]
# We're using polling to make sure output and streams are
# handled while the process is running.
while ps.state == PSInvocationState.RUNNING:
ps.poll_invoke(timeout=self._operation_timeout)
for i, stream in enumerate(streams):
offset = offsets[i]
while len(stream) > offset:
record = stream[offset]
# Records received on the output stream during job
# status polling are handled via an optional callback,
# while the other streams are simply logged.
if stream is ps.output:
if self._on_output_callback is not None:
self._on_output_callback(record)
else:
self._log_record(logger.log, record)
offset += 1
offsets[i] = offset
# For good measure, we'll make sure the process has
# stopped running in any case.
ps.end_invoke()
self.log.info("Invocation state: %s", str(PSInvocationState(ps.state)))
if ps.streams.error:
raise AirflowException("Process had one or more errors")
finally:
if local_context:
self.__exit__(None, None, None)
```
###Assistant :
Context manager that yields a PowerShell object to which commands can be
added. Upon exit, the commands will be invoked.
|
707 | def util_call_before_task_publish_handler(self, headers_to_use, body_to_use):
self.assertEqual(PaperlessTask.objects.all().count(), 0)
before_task_publish_handler(headers=headers_to_use, body=body_to_use)
self.assertEqual(PaperlessTask.objects.all().count(), 1)
|
Simple utility to call the pre-run handler and ensure it created a single task
instance
| 15 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def util_call_before_task_publish_handler(self, headers_to_use, body_to_use):
self.assertEqual(PaperlessTask.objects.all().count(), 0)
before_task_publish_handler(headers=headers_to_use, body=body_to_use)
self.assertEqual(PaperlessTask.objects.all().count(), 1)
```
###Assistant :
Simple utility to call the pre-run handler and ensure it created a single task
instance
|
708 | def test_form(self):
form = self.EventPageForm(instance=self.event_page)
self.assertIn("comments", form.formsets)
comments_formset = form.formsets["comments"]
self.assertEqual(len(comments_formset.forms), 1)
self.assertEqual(comments_formset.forms[0].user, self.commenting_user)
replies_formset = comments_formset.forms[0].formsets["replies"]
self.assertEqual(len(replies_formset.forms), 2)
self.assertEqual(replies_formset.forms[0].user, self.commenting_user)
|
Check that the form has the comments/replies formsets, and that the
user has been set on each CommentForm/CommentReplyForm subclass
| 19 | 21 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_form(self):
form = self.EventPageForm(instance=self.event_page)
self.assertIn("comments", form.formsets)
comments_formset = form.formsets["comments"]
self.assertEqual(len(comments_formset.forms), 1)
self.assertEqual(comments_formset.forms[0].user, self.commenting_user)
replies_formset = comments_formset.forms[0].formsets["replies"]
self.assertEqual(len(replies_formset.forms), 2)
self.assertEqual(replies_formset.forms[0].user, self.commenting_user)
```
###Assistant :
Check that the form has the comments/replies formsets, and that the
user has been set on each CommentForm/CommentReplyForm subclass
|
709 | def _doc(self, doc_type, default, lang="eng"):
corpus = self._wordnet_corpus_reader
if lang not in corpus.langs():
return None
elif lang == "eng":
return default
else:
corpus._load_lang_data(lang)
of = corpus.ss2of(self)
i = corpus.lg_attrs.index(doc_type)
if of in corpus._lang_data[lang][i]:
return corpus._lang_data[lang][i][of]
else:
return None
| Helper method for Synset.definition and Synset.examples | 6 | 38 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _doc(self, doc_type, default, lang="eng"):
corpus = self._wordnet_corpus_reader
if lang not in corpus.langs():
return None
elif lang == "eng":
return default
else:
corpus._load_lang_data(lang)
of = corpus.ss2of(self)
i = corpus.lg_attrs.index(doc_type)
if of in corpus._lang_data[lang][i]:
return corpus._lang_data[lang][i][of]
else:
return None
```
###Assistant : Helper method for Synset.definition and Synset.examples
|
710 | def map(self, mapper):
new_categories = self.categories.map(mapper)
try:
return self.from_codes(
self._codes.copy(), categories=new_categories, ordered=self.ordered
)
except ValueError:
# NA values are represented in self._codes with -1
# np.take causes NA values to take final element in new_categories
if np.any(self._codes == -1):
new_categories = new_categories.insert(len(new_categories), np.nan)
return np.take(new_categories, self._codes)
__eq__ = _cat_compare_op(operator.eq)
__ne__ = _cat_compare_op(operator.ne)
__lt__ = _cat_compare_op(operator.lt)
__gt__ = _cat_compare_op(operator.gt)
__le__ = _cat_compare_op(operator.le)
__ge__ = _cat_compare_op(operator.ge)
# -------------------------------------------------------------
# Validators; ideally these can be de-duplicated
|
Map categories using an input mapping or function.
Maps the categories to new categories. If the mapping correspondence is
one-to-one the result is a :class:`~pandas.Categorical` which has the
same order property as the original, otherwise a :class:`~pandas.Index`
is returned. NaN values are unaffected.
If a `dict` or :class:`~pandas.Series` is used any unmapped category is
mapped to `NaN`. Note that if this happens an :class:`~pandas.Index`
will be returned.
Parameters
----------
mapper : function, dict, or Series
Mapping correspondence.
Returns
-------
pandas.Categorical or pandas.Index
Mapped categorical.
See Also
--------
CategoricalIndex.map : Apply a mapping correspondence on a
:class:`~pandas.CategoricalIndex`.
Index.map : Apply a mapping correspondence on an
:class:`~pandas.Index`.
Series.map : Apply a mapping correspondence on a
:class:`~pandas.Series`.
Series.apply : Apply more complex functions on a
:class:`~pandas.Series`.
Examples
--------
>>> cat = pd.Categorical(['a', 'b', 'c'])
>>> cat
['a', 'b', 'c']
Categories (3, object): ['a', 'b', 'c']
>>> cat.map(lambda x: x.upper())
['A', 'B', 'C']
Categories (3, object): ['A', 'B', 'C']
>>> cat.map({'a': 'first', 'b': 'second', 'c': 'third'})
['first', 'second', 'third']
Categories (3, object): ['first', 'second', 'third']
If the mapping is one-to-one the ordering of the categories is
preserved:
>>> cat = pd.Categorical(['a', 'b', 'c'], ordered=True)
>>> cat
['a', 'b', 'c']
Categories (3, object): ['a' < 'b' < 'c']
>>> cat.map({'a': 3, 'b': 2, 'c': 1})
[3, 2, 1]
Categories (3, int64): [3 < 2 < 1]
If the mapping is not one-to-one an :class:`~pandas.Index` is returned:
>>> cat.map({'a': 'first', 'b': 'second', 'c': 'first'})
Index(['first', 'second', 'first'], dtype='object')
If a `dict` is used, all unmapped categories are mapped to `NaN` and
the result is an :class:`~pandas.Index`:
>>> cat.map({'a': 'first', 'b': 'second'})
Index(['first', 'second', nan], dtype='object')
| 269 | 73 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def map(self, mapper):
new_categories = self.categories.map(mapper)
try:
return self.from_codes(
self._codes.copy(), categories=new_categories, ordered=self.ordered
)
except ValueError:
# NA values are represented in self._codes with -1
# np.take causes NA values to take final element in new_categories
if np.any(self._codes == -1):
new_categories = new_categories.insert(len(new_categories), np.nan)
return np.take(new_categories, self._codes)
__eq__ = _cat_compare_op(operator.eq)
__ne__ = _cat_compare_op(operator.ne)
__lt__ = _cat_compare_op(operator.lt)
__gt__ = _cat_compare_op(operator.gt)
__le__ = _cat_compare_op(operator.le)
__ge__ = _cat_compare_op(operator.ge)
# -------------------------------------------------------------
# Validators; ideally these can be de-duplicated
```
###Assistant :
Map categories using an input mapping or function.
Maps the categories to new categories. If the mapping correspondence is
one-to-one the result is a :class:`~pandas.Categorical` which has the
same order property as the original, otherwise a :class:`~pandas.Index`
is returned. NaN values are unaffected.
If a `dict` or :class:`~pandas.Series` is used any unmapped category is
mapped to `NaN`. Note that if this happens an :class:`~pandas.Index`
will be returned.
Parameters
----------
mapper : function, dict, or Series
Mapping correspondence.
Returns
-------
pandas.Categorical or pandas.Index
Mapped categorical.
See Also
--------
CategoricalIndex.map : Apply a mapping correspondence on a
:class:`~pandas.CategoricalIndex`.
Index.map : Apply a mapping correspondence on an
:class:`~pandas.Index`.
Series.map : Apply a mapping correspondence on a
:class:`~pandas.Series`.
Series.apply : Apply more complex functions on a
:class:`~pandas.Series`.
Examples
--------
>>> cat = pd.Categorical(['a', 'b', 'c'])
>>> cat
['a', 'b', 'c']
Categories (3, object): ['a', 'b', 'c']
>>> cat.map(lambda x: x.upper())
['A', 'B', 'C']
Categories (3, object): ['A', 'B', 'C']
>>> cat.map({'a': 'first', 'b': 'second', 'c': 'third'})
['first', 'second', 'third']
Categories (3, object): ['first', 'second', 'third']
If the mapping is one-to-one the ordering of the categories is
preserved:
>>> cat = pd.Categorical(['a', 'b', 'c'], ordered=True)
>>> cat
['a', 'b', 'c']
Categories (3, object): ['a' < 'b' < 'c']
>>> cat.map({'a': 3, 'b': 2, 'c': 1})
[3, 2, 1]
Categories (3, int64): [3 < 2 < 1]
If the mapping is not one-to-one an :class:`~pandas.Index` is returned:
>>> cat.map({'a': 'first', 'b': 'second', 'c': 'first'})
Index(['first', 'second', 'first'], dtype='object')
If a `dict` is used, all unmapped categories are mapped to `NaN` and
the result is an :class:`~pandas.Index`:
>>> cat.map({'a': 'first', 'b': 'second'})
Index(['first', 'second', nan], dtype='object')
|
711 | def to_pickle_distributed(cls, qc, **kwargs):
if not (
isinstance(kwargs["filepath_or_buffer"], str)
and "*" in kwargs["filepath_or_buffer"]
) or not isinstance(qc, PandasQueryCompiler):
warnings.warn("Defaulting to Modin core implementation")
return PandasOnUnidistIO.to_pickle(qc, **kwargs)
|
When `*` is in the filename, all partitions are written to their own separate files.
The filenames are determined as follows:
- if `*` is in the filename, it is replaced by the increasing sequence 0, 1, 2, …
- if `*` is not in the filename, the default implementation is used.
Example #1: with 4 partitions and input filename="partition*.pkl.gz", the filenames will be:
`partition0.pkl.gz`, `partition1.pkl.gz`, `partition2.pkl.gz`, `partition3.pkl.gz`.
Parameters
----------
qc : BaseQueryCompiler
The query compiler of the Modin dataframe that we want
to run ``to_pickle_distributed`` on.
**kwargs : dict
Parameters for ``pandas.to_pickle(**kwargs)``.
| 92 | 26 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def to_pickle_distributed(cls, qc, **kwargs):
if not (
isinstance(kwargs["filepath_or_buffer"], str)
and "*" in kwargs["filepath_or_buffer"]
) or not isinstance(qc, PandasQueryCompiler):
warnings.warn("Defaulting to Modin core implementation")
return PandasOnUnidistIO.to_pickle(qc, **kwargs)
```
###Assistant :
When `*` is in the filename, all partitions are written to their own separate files.
The filenames are determined as follows:
- if `*` is in the filename, it is replaced by the increasing sequence 0, 1, 2, …
- if `*` is not in the filename, the default implementation is used.
Example #1: with 4 partitions and input filename="partition*.pkl.gz", the filenames will be:
`partition0.pkl.gz`, `partition1.pkl.gz`, `partition2.pkl.gz`, `partition3.pkl.gz`.
Parameters
----------
qc : BaseQueryCompiler
The query compiler of the Modin dataframe that we want
to run ``to_pickle_distributed`` on.
**kwargs : dict
Parameters for ``pandas.to_pickle(**kwargs)``.
|
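A short sketch of the `*`-expansion rule described above; `expand_partition_filenames` is an illustrative helper, not part of Modin's API.
```Python
def expand_partition_filenames(filepath, num_partitions):
    # Replace "*" with an increasing partition index; otherwise keep the name as-is.
    if "*" not in filepath:
        return [filepath]
    return [filepath.replace("*", str(i)) for i in range(num_partitions)]

print(expand_partition_filenames("partition*.pkl.gz", 4))
# ['partition0.pkl.gz', 'partition1.pkl.gz', 'partition2.pkl.gz', 'partition3.pkl.gz']
```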
712 | def project_state(self, nodes=None, at_end=True):
return self.graph.make_state(
nodes=nodes, at_end=at_end, real_apps=self.unmigrated_apps
)
|
Return a ProjectState object representing the most recent state
that the loaded migrations represent.
See graph.make_state() for the meaning of "nodes" and "at_end".
| 23 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def project_state(self, nodes=None, at_end=True):
return self.graph.make_state(
nodes=nodes, at_end=at_end, real_apps=self.unmigrated_apps
)
```
###Assistant :
Return a ProjectState object representing the most recent state
that the loaded migrations represent.
See graph.make_state() for the meaning of "nodes" and "at_end".
|
713 | def center(self, frequency=1000):
equal_energy_fr = self.__class__(name='equal_energy', frequency=self.frequency.copy(), raw=self.raw.copy())
equal_energy_fr.interpolate()
interpolator = InterpolatedUnivariateSpline(np.log10(equal_energy_fr.frequency), equal_energy_fr.raw, k=1)
if type(frequency) in [list, np.ndarray] and len(frequency) > 1:
# Use the average of the gain values between the given frequencies as the difference to be subtracted
diff = np.mean(equal_energy_fr.raw[np.logical_and(
equal_energy_fr.frequency >= frequency[0],
equal_energy_fr.frequency <= frequency[1]
)])
else:
if type(frequency) in [list, np.ndarray]:
# List or array with only one element
frequency = frequency[0]
# Use the gain value at the given frequency as the difference to be subtracted
diff = interpolator(np.log10(frequency))
self.raw -= diff
if len(self.smoothed):
self.smoothed -= diff
if len(self.error):
self.error += diff
if len(self.error_smoothed):
self.error_smoothed += diff
# Everything but raw, smoothed, errors and target is affected by centering, reset them
self.reset(raw=False, smoothed=False, error=False, error_smoothed=False, target=False)
return -diff
| Remove bias from the frequency response.
Args:
frequency: Frequency which is set to 0 dB. If this is a list with two values then an average between the two
frequencies is set to 0 dB.
Returns:
Gain shifted
| 37 | 125 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def center(self, frequency=1000):
equal_energy_fr = self.__class__(name='equal_energy', frequency=self.frequency.copy(), raw=self.raw.copy())
equal_energy_fr.interpolate()
interpolator = InterpolatedUnivariateSpline(np.log10(equal_energy_fr.frequency), equal_energy_fr.raw, k=1)
if type(frequency) in [list, np.ndarray] and len(frequency) > 1:
# Use the average of the gain values between the given frequencies as the difference to be subtracted
diff = np.mean(equal_energy_fr.raw[np.logical_and(
equal_energy_fr.frequency >= frequency[0],
equal_energy_fr.frequency <= frequency[1]
)])
else:
if type(frequency) in [list, np.ndarray]:
# List or array with only one element
frequency = frequency[0]
# Use the gain value at the given frequency as the difference to be subtracted
diff = interpolator(np.log10(frequency))
self.raw -= diff
if len(self.smoothed):
self.smoothed -= diff
if len(self.error):
self.error += diff
if len(self.error_smoothed):
self.error_smoothed += diff
# Everything but raw, smoothed, errors and target is affected by centering, reset them
self.reset(raw=False, smoothed=False, error=False, error_smoothed=False, target=False)
return -diff
```
###Assistant : Remove bias from the frequency response.
Args:
frequency: Frequency which is set to 0 dB. If this is a list with two values then an average between the two
frequencies is set to 0 dB.
Returns:
Gain shifted
|
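A numpy-only sketch of the centering idea (subtract the gain interpolated at a reference frequency on a log-frequency axis so the curve reads 0 dB there); it deliberately skips the smoothed/error bookkeeping and the two-frequency averaging of the real method.
```Python
import numpy as np

def center_response(frequency, raw, ref_frequency=1000.0):
    # Interpolate the gain at the reference frequency on a log10 frequency axis
    # and subtract it, so the response passes through 0 dB at that frequency.
    diff = np.interp(np.log10(ref_frequency), np.log10(frequency), raw)
    return raw - diff

frequency = np.array([20.0, 100.0, 1000.0, 10000.0, 20000.0])
raw = np.array([3.0, 4.0, 5.0, 2.0, 1.0])
print(center_response(frequency, raw))  # [-2. -1.  0. -3. -4.] -> 0 dB at 1 kHz
```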
714 | def _i18n_cache_key_suffix(request, cache_key):
if settings.USE_I18N:
# first check if LocaleMiddleware or another middleware added
# LANGUAGE_CODE to request, then fall back to the active language
# which in turn can also fall back to settings.LANGUAGE_CODE
cache_key += ".%s" % getattr(request, "LANGUAGE_CODE", get_language())
if settings.USE_TZ:
cache_key += ".%s" % get_current_timezone_name()
return cache_key
| If necessary, add the current locale or time zone to the cache key. | 13 | 51 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _i18n_cache_key_suffix(request, cache_key):
if settings.USE_I18N:
# first check if LocaleMiddleware or another middleware added
# LANGUAGE_CODE to request, then fall back to the active language
# which in turn can also fall back to settings.LANGUAGE_CODE
cache_key += ".%s" % getattr(request, "LANGUAGE_CODE", get_language())
if settings.USE_TZ:
cache_key += ".%s" % get_current_timezone_name()
return cache_key
```
###Assistant : If necessary, add the current locale or time zone to the cache key.
|
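A self-contained sketch of the cache-key suffixing behaviour, with plain arguments standing in for Django's `settings`, `get_language()` and `get_current_timezone_name()` (all assumptions made for illustration).
```Python
def i18n_cache_key_suffix(cache_key, language_code="en-us", timezone_name="UTC",
                          use_i18n=True, use_tz=True):
    # Append locale and time zone so cached responses are not shared across them.
    if use_i18n:
        cache_key += ".%s" % language_code
    if use_tz:
        cache_key += ".%s" % timezone_name
    return cache_key

print(i18n_cache_key_suffix("views.decorators.cache.cache_page.example",
                            language_code="de", timezone_name="Europe/Berlin"))
# views.decorators.cache.cache_page.example.de.Europe/Berlin
```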
715 | def test_pandas_arff_parser_strip_double_quotes(parser_func):
pd = pytest.importorskip("pandas")
arff_file = BytesIO(
textwrap.dedent(
| Check that we properly strip double quotes from the data. | 10 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_pandas_arff_parser_strip_double_quotes(parser_func):
pd = pytest.importorskip("pandas")
arff_file = BytesIO(
textwrap.dedent(
```
###Assistant : Check that we properly strip double quotes from the data.
|
716 | def get_tokens_unprocessed(self, text=None, context=None):
tokendefs = self._tokens
if not context:
ctx = LexerContext(text, 0)
statetokens = tokendefs['root']
else:
ctx = context
statetokens = tokendefs[ctx.stack[-1]]
text = ctx.text
while 1:
for rexmatch, action, new_state in statetokens:
m = rexmatch(text, ctx.pos, ctx.end)
if m:
if action is not None:
if type(action) is _TokenType:
yield ctx.pos, action, m.group()
ctx.pos = m.end()
else:
yield from action(self, m, ctx)
if not new_state:
# altered the state stack?
statetokens = tokendefs[ctx.stack[-1]]
# CAUTION: callback must set ctx.pos!
if new_state is not None:
# state transition
if isinstance(new_state, tuple):
for state in new_state:
if state == '#pop':
if len(ctx.stack) > 1:
ctx.stack.pop()
elif state == '#push':
ctx.stack.append(ctx.stack[-1])
else:
ctx.stack.append(state)
elif isinstance(new_state, int):
# see RegexLexer for why this check is made
if abs(new_state) >= len(ctx.stack):
del ctx.state[1:]
else:
del ctx.stack[new_state:]
elif new_state == '#push':
ctx.stack.append(ctx.stack[-1])
else:
assert False, "wrong state def: %r" % new_state
statetokens = tokendefs[ctx.stack[-1]]
break
else:
try:
if ctx.pos >= ctx.end:
break
if text[ctx.pos] == '\n':
# at EOL, reset state to "root"
ctx.stack = ['root']
statetokens = tokendefs['root']
yield ctx.pos, Text, '\n'
ctx.pos += 1
continue
yield ctx.pos, Error, text[ctx.pos]
ctx.pos += 1
except IndexError:
break
|
Split ``text`` into (tokentype, text) pairs.
If ``context`` is given, use this lexer context instead.
| 15 | 193 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_tokens_unprocessed(self, text=None, context=None):
tokendefs = self._tokens
if not context:
ctx = LexerContext(text, 0)
statetokens = tokendefs['root']
else:
ctx = context
statetokens = tokendefs[ctx.stack[-1]]
text = ctx.text
while 1:
for rexmatch, action, new_state in statetokens:
m = rexmatch(text, ctx.pos, ctx.end)
if m:
if action is not None:
if type(action) is _TokenType:
yield ctx.pos, action, m.group()
ctx.pos = m.end()
else:
yield from action(self, m, ctx)
if not new_state:
# altered the state stack?
statetokens = tokendefs[ctx.stack[-1]]
# CAUTION: callback must set ctx.pos!
if new_state is not None:
# state transition
if isinstance(new_state, tuple):
for state in new_state:
if state == '#pop':
if len(ctx.stack) > 1:
ctx.stack.pop()
elif state == '#push':
ctx.stack.append(ctx.stack[-1])
else:
ctx.stack.append(state)
elif isinstance(new_state, int):
# see RegexLexer for why this check is made
if abs(new_state) >= len(ctx.stack):
del ctx.state[1:]
else:
del ctx.stack[new_state:]
elif new_state == '#push':
ctx.stack.append(ctx.stack[-1])
else:
assert False, "wrong state def: %r" % new_state
statetokens = tokendefs[ctx.stack[-1]]
break
else:
try:
if ctx.pos >= ctx.end:
break
if text[ctx.pos] == '\n':
# at EOL, reset state to "root"
ctx.stack = ['root']
statetokens = tokendefs['root']
yield ctx.pos, Text, '\n'
ctx.pos += 1
continue
yield ctx.pos, Error, text[ctx.pos]
ctx.pos += 1
except IndexError:
break
```
###Assistant :
Split ``text`` into (tokentype, text) pairs.
If ``context`` is given, use this lexer context instead.
|
717 | def get_policy_data_from_agent_data(agent_data, policy_map_fn):
policy_data = {}
for agent_id, data in agent_data.items():
policy_id = policy_map_fn(agent_id)
policy_data.setdefault(policy_id, {})
policy_data[policy_id].setdefault("agent_id", [])
if data["obs"].ndim == 1:
policy_data[policy_id]["agent_id"].append(agent_id)
else:
policy_data[policy_id]["agent_id"] += [agent_id] * len(data["obs"])
for k, v in data.items():
policy_data[policy_id].setdefault(k, [])
if v.ndim == 1:
v = v[None]
policy_data[policy_id][k].append(v)
for policy_id in policy_data:
policy_data[policy_id] = {
k: np.concatenate(v) if k != "agent_id" else v
for k, v in policy_data[policy_id].items()
}
return policy_data
| Utility function to get policy data from agent data and policy map function.
It also keeps track of agent_id for each row so that we can retrieve the agent
level information after the forward pass.
Returns:
dict of module_id to module data
| 42 | 67 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_policy_data_from_agent_data(agent_data, policy_map_fn):
policy_data = {}
for agent_id, data in agent_data.items():
policy_id = policy_map_fn(agent_id)
policy_data.setdefault(policy_id, {})
policy_data[policy_id].setdefault("agent_id", [])
if data["obs"].ndim == 1:
policy_data[policy_id]["agent_id"].append(agent_id)
else:
policy_data[policy_id]["agent_id"] += [agent_id] * len(data["obs"])
for k, v in data.items():
policy_data[policy_id].setdefault(k, [])
if v.ndim == 1:
v = v[None]
policy_data[policy_id][k].append(v)
for policy_id in policy_data:
policy_data[policy_id] = {
k: np.concatenate(v) if k != "agent_id" else v
for k, v in policy_data[policy_id].items()
}
return policy_data
```
###Assistant : Utility function to get policy data from agent data and policy map function.
It also keeps track of agent_id for each row so that we can retrieve the agent
level information after the forward pass.
Returns:
dict of module_id to module data
|
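A toy invocation of the grouping described above, assuming `get_policy_data_from_agent_data` is in scope exactly as defined; the agent and policy names are made up for illustration.
```Python
import numpy as np

# Two agents mapped onto one shared policy; each observation is a 1-D vector.
agent_data = {
    "agent_0": {"obs": np.array([0.1, 0.2, 0.3])},
    "agent_1": {"obs": np.array([0.4, 0.5, 0.6])},
}
policy_data = get_policy_data_from_agent_data(agent_data, lambda agent_id: "shared_policy")

print(policy_data["shared_policy"]["agent_id"])   # ['agent_0', 'agent_1']
print(policy_data["shared_policy"]["obs"].shape)  # (2, 3): 1-D observations get batched
```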
718 | def _update_dimensions(self) -> None:
total_width = sum(column.width for column in self.columns)
self.virtual_size = Size(
total_width,
len(self._y_offsets) + (self.header_height if self.show_header else 0),
)
| Called to recalculate the virtual (scrollable) size. | 7 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _update_dimensions(self) -> None:
total_width = sum(column.width for column in self.columns)
self.virtual_size = Size(
total_width,
len(self._y_offsets) + (self.header_height if self.show_header else 0),
)
```
###Assistant : Called to recalculate the virtual (scrollable) size.
|
719 | def _store(self, messages, response, *args, **kwargs):
raise NotImplementedError(
"subclasses of BaseStorage must provide a _store() method"
)
|
Store a list of messages and return a list of any messages which could
not be stored.
One type of object must be able to be stored, ``Message``.
**This method must be implemented by a subclass.**
| 36 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _store(self, messages, response, *args, **kwargs):
raise NotImplementedError(
"subclasses of BaseStorage must provide a _store() method"
)
```
###Assistant :
Store a list of messages and return a list of any messages which could
not be stored.
One type of object must be able to be stored, ``Message``.
**This method must be implemented by a subclass.**
|
720 | def test_from_is_negative(self) -> None:
channel = self.make_request(
"GET",
self.url + "?from=-5",
access_token=self.admin_user_tok,
)
self.assertEqual(400, channel.code, msg=channel.json_body)
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
|
Testing that a negative from parameter returns a 400
| 9 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_from_is_negative(self) -> None:
channel = self.make_request(
"GET",
self.url + "?from=-5",
access_token=self.admin_user_tok,
)
self.assertEqual(400, channel.code, msg=channel.json_body)
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
```
###Assistant :
Testing that a negative from parameter returns a 400
|
721 | async def test_focused_child_widget_no_inherit_empty_bindings_with_movement_bindings_on_screen() -> None:
| A focused child widget, that doesn't inherit bindings and sets BINDINGS empty, with movement bindings in the screen, should trigger screen actions. | 22 | 5 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_focused_child_widget_no_inherit_empty_bindings_with_movement_bindings_on_screen() -> None:
```
###Assistant : A focused child widget, that doesn't inherit bindings and sets BINDINGS empty, with movement bindings in the screen, should trigger screen actions.
|
722 | def addtoken(self, type, value, context):
# Map from token to label
ilabel = self.classify(type, value, context)
# Loop until the token is shifted; may raise exceptions
while True:
dfa, state, node = self.stack[-1]
states, first = dfa
arcs = states[state]
# Look for a state with this label
for i, newstate in arcs:
t, v = self.grammar.labels[i]
if ilabel == i:
# Look it up in the list of labels
assert t < 256
# Shift a token; we're done with it
self.shift(type, value, newstate, context)
# Pop while we are in an accept-only state
state = newstate
while states[state] == [(0, state)]:
self.pop()
if not self.stack:
# Done parsing!
return True
dfa, state, node = self.stack[-1]
states, first = dfa
# Done with this token
return False
elif t >= 256:
# See if it's a symbol and if we're in its first set
itsdfa = self.grammar.dfas[t]
itsstates, itsfirst = itsdfa
if ilabel in itsfirst:
# Push a symbol
self.push(t, self.grammar.dfas[t], newstate, context)
break # To continue the outer while loop
else:
if (0, state) in arcs:
# An accepting state, pop it and try something else
self.pop()
if not self.stack:
# Done parsing, but another token is input
raise ParseError("too much input",
type, value, context)
else:
# No success finding a transition
raise ParseError("bad input", type, value, context)
| Add a token; return True iff this is the end of the program. | 13 | 220 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def addtoken(self, type, value, context):
# Map from token to label
ilabel = self.classify(type, value, context)
# Loop until the token is shifted; may raise exceptions
while True:
dfa, state, node = self.stack[-1]
states, first = dfa
arcs = states[state]
# Look for a state with this label
for i, newstate in arcs:
t, v = self.grammar.labels[i]
if ilabel == i:
# Look it up in the list of labels
assert t < 256
# Shift a token; we're done with it
self.shift(type, value, newstate, context)
# Pop while we are in an accept-only state
state = newstate
while states[state] == [(0, state)]:
self.pop()
if not self.stack:
# Done parsing!
return True
dfa, state, node = self.stack[-1]
states, first = dfa
# Done with this token
return False
elif t >= 256:
# See if it's a symbol and if we're in its first set
itsdfa = self.grammar.dfas[t]
itsstates, itsfirst = itsdfa
if ilabel in itsfirst:
# Push a symbol
self.push(t, self.grammar.dfas[t], newstate, context)
break # To continue the outer while loop
else:
if (0, state) in arcs:
# An accepting state, pop it and try something else
self.pop()
if not self.stack:
# Done parsing, but another token is input
raise ParseError("too much input",
type, value, context)
else:
# No success finding a transition
raise ParseError("bad input", type, value, context)
```
###Assistant : Add a token; return True iff this is the end of the program.
|
723 | def import_local_settings():
try:
import airflow_local_settings
if hasattr(airflow_local_settings, "__all__"):
for i in airflow_local_settings.__all__:
globals()[i] = getattr(airflow_local_settings, i)
else:
for k, v in airflow_local_settings.__dict__.items():
if not k.startswith("__"):
globals()[k] = v
# TODO: Remove once deprecated
if "policy" in globals() and "task_policy" not in globals():
warnings.warn(
"Using `policy` in airflow_local_settings.py is deprecated. "
"Please rename your `policy` to `task_policy`.",
DeprecationWarning,
stacklevel=2,
)
globals()["task_policy"] = globals()["policy"]
del globals()["policy"]
if not hasattr(task_instance_mutation_hook, 'is_noop'):
task_instance_mutation_hook.is_noop = False
log.info("Loaded airflow_local_settings from %s .", airflow_local_settings.__file__)
except ModuleNotFoundError as e:
if e.name == "airflow_local_settings":
log.debug("No airflow_local_settings to import.", exc_info=True)
else:
log.critical(
"Failed to import airflow_local_settings due to a transitive module not found error.",
exc_info=True,
)
raise
except ImportError:
log.critical("Failed to import airflow_local_settings.", exc_info=True)
raise
| Import airflow_local_settings.py files to allow overriding any configs in the settings.py file | 11 | 115 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def import_local_settings():
try:
import airflow_local_settings
if hasattr(airflow_local_settings, "__all__"):
for i in airflow_local_settings.__all__:
globals()[i] = getattr(airflow_local_settings, i)
else:
for k, v in airflow_local_settings.__dict__.items():
if not k.startswith("__"):
globals()[k] = v
# TODO: Remove once deprecated
if "policy" in globals() and "task_policy" not in globals():
warnings.warn(
"Using `policy` in airflow_local_settings.py is deprecated. "
"Please rename your `policy` to `task_policy`.",
DeprecationWarning,
stacklevel=2,
)
globals()["task_policy"] = globals()["policy"]
del globals()["policy"]
if not hasattr(task_instance_mutation_hook, 'is_noop'):
task_instance_mutation_hook.is_noop = False
log.info("Loaded airflow_local_settings from %s .", airflow_local_settings.__file__)
except ModuleNotFoundError as e:
if e.name == "airflow_local_settings":
log.debug("No airflow_local_settings to import.", exc_info=True)
else:
log.critical(
"Failed to import airflow_local_settings due to a transitive module not found error.",
exc_info=True,
)
raise
except ImportError:
log.critical("Failed to import airflow_local_settings.", exc_info=True)
raise
```
###Assistant : Import airflow_local_settings.py files to allow overriding any configs in the settings.py file
|
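A small sketch of the override-hoisting pattern the function uses (copy a module's declared `__all__`, or all public attributes, into the caller's globals); the module and attribute here are fabricated purely for illustration.
```Python
import types

def load_overrides(module, target):
    # Prefer the module's declared __all__; otherwise take every public attribute.
    names = getattr(module, "__all__", None)
    if names is None:
        names = [name for name in vars(module) if not name.startswith("__")]
    for name in names:
        target[name] = getattr(module, name)

fake_local_settings = types.ModuleType("airflow_local_settings")
fake_local_settings.MY_OVERRIDE = 42  # fabricated attribute for the example

load_overrides(fake_local_settings, globals())
print(MY_OVERRIDE)  # 42
```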
724 | def unpolarify(eq, subs=None, exponents_only=False):
if isinstance(eq, bool):
return eq
eq = sympify(eq)
if subs is not None:
return unpolarify(eq.subs(subs))
changed = True
pause = False
if exponents_only:
pause = True
while changed:
changed = False
res = _unpolarify(eq, exponents_only, pause)
if res != eq:
changed = True
eq = res
if isinstance(res, bool):
return res
# Finally, replacing Exp(0) by 1 is always correct.
# So is polar_lift(0) -> 0.
return res.subs({exp_polar(0): 1, polar_lift(0): 0})
|
If `p` denotes the projection from the Riemann surface of the logarithm to
the complex line, return a simplified version `eq'` of `eq` such that
`p(eq') = p(eq)`.
Also apply the substitution subs in the end. (This is a convenience, since
``unpolarify``, in a certain sense, undoes :func:`polarify`.)
Examples
========
>>> from sympy import unpolarify, polar_lift, sin, I
>>> unpolarify(polar_lift(I + 2))
2 + I
>>> unpolarify(sin(polar_lift(I + 7)))
sin(7 + I)
| 72 | 75 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def unpolarify(eq, subs=None, exponents_only=False):
if isinstance(eq, bool):
return eq
eq = sympify(eq)
if subs is not None:
return unpolarify(eq.subs(subs))
changed = True
pause = False
if exponents_only:
pause = True
while changed:
changed = False
res = _unpolarify(eq, exponents_only, pause)
if res != eq:
changed = True
eq = res
if isinstance(res, bool):
return res
# Finally, replacing Exp(0) by 1 is always correct.
# So is polar_lift(0) -> 0.
return res.subs({exp_polar(0): 1, polar_lift(0): 0})
```
###Assistant :
If `p` denotes the projection from the Riemann surface of the logarithm to
the complex line, return a simplified version `eq'` of `eq` such that
`p(eq') = p(eq)`.
Also apply the substitution subs in the end. (This is a convenience, since
``unpolarify``, in a certain sense, undoes :func:`polarify`.)
Examples
========
>>> from sympy import unpolarify, polar_lift, sin, I
>>> unpolarify(polar_lift(I + 2))
2 + I
>>> unpolarify(sin(polar_lift(I + 7)))
sin(7 + I)
|
725 | def check_and_raise_error(self) -> None:
for thread in self._threads:
thread.check_and_raise_error()
| Check all threads for errors
Exposed for :mod:`~plugins.extract.pipeline` to check plugin's threads for errors
| 14 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def check_and_raise_error(self) -> None:
for thread in self._threads:
thread.check_and_raise_error()
```
###Assistant : Check all threads for errors
Exposed for :mod:`~plugins.extract.pipeline` to check plugin's threads for errors
|
726 | def recorder_or_dbworker(self) -> bool:
thread_name = threading.current_thread().name
return bool(
thread_name == "Recorder" or thread_name.startswith(DB_WORKER_PREFIX)
)
| Check if the thread is a recorder or dbworker thread. | 10 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def recorder_or_dbworker(self) -> bool:
thread_name = threading.current_thread().name
return bool(
thread_name == "Recorder" or thread_name.startswith(DB_WORKER_PREFIX)
)
```
###Assistant : Check if the thread is a recorder or dbworker thread.
|
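A tiny standalone illustration of the thread-name check, with `DB_WORKER_PREFIX` stubbed in locally because the real constant lives elsewhere in the codebase (an assumption).
```Python
import threading

DB_WORKER_PREFIX = "DbWorker"  # stand-in for the real constant

def recorder_or_dbworker() -> bool:
    thread_name = threading.current_thread().name
    return thread_name == "Recorder" or thread_name.startswith(DB_WORKER_PREFIX)

print(recorder_or_dbworker())  # False on the main thread

worker = threading.Thread(target=lambda: print(recorder_or_dbworker()), name="DbWorker-0")
worker.start()   # prints True inside the worker thread
worker.join()
```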
727 | def renew_resnet_paths(old_list, n_shave_prefix_segments=0):
mapping = []
for old_item in old_list:
new_item = old_item.replace('in_layers.0', 'norm1')
new_item = new_item.replace('in_layers.2', 'conv1')
new_item = new_item.replace('out_layers.0', 'norm2')
new_item = new_item.replace('out_layers.3', 'conv2')
new_item = new_item.replace('emb_layers.1', 'time_emb_proj')
new_item = new_item.replace('skip_connection', 'conv_shortcut')
new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments)
mapping.append({'old': old_item, 'new': new_item})
return mapping
|
Updates paths inside resnets to the new naming scheme (local renaming)
| 11 | 44 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def renew_resnet_paths(old_list, n_shave_prefix_segments=0):
mapping = []
for old_item in old_list:
new_item = old_item.replace('in_layers.0', 'norm1')
new_item = new_item.replace('in_layers.2', 'conv1')
new_item = new_item.replace('out_layers.0', 'norm2')
new_item = new_item.replace('out_layers.3', 'conv2')
new_item = new_item.replace('emb_layers.1', 'time_emb_proj')
new_item = new_item.replace('skip_connection', 'conv_shortcut')
new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments)
mapping.append({'old': old_item, 'new': new_item})
return mapping
```
###Assistant :
Updates paths inside resnets to the new naming scheme (local renaming)
|
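A trimmed-down illustration of the substring renames applied above; it omits `shave_segments`, and the example parameter paths are made up.
```Python
RENAMES = [
    ("in_layers.0", "norm1"),
    ("in_layers.2", "conv1"),
    ("out_layers.0", "norm2"),
    ("out_layers.3", "conv2"),
    ("emb_layers.1", "time_emb_proj"),
    ("skip_connection", "conv_shortcut"),
]

def rename_resnet_path(old_item):
    # Apply each old -> new substring substitution in turn.
    new_item = old_item
    for old, new in RENAMES:
        new_item = new_item.replace(old, new)
    return new_item

print(rename_resnet_path("input_blocks.1.0.in_layers.0.weight"))
# input_blocks.1.0.norm1.weight
print(rename_resnet_path("input_blocks.1.0.emb_layers.1.bias"))
# input_blocks.1.0.time_emb_proj.bias
```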
728 | def _assert_splits_match(nested_splits_lists):
error_msg = (
"Inputs must have identical ragged splits. "
f"Input received: {nested_splits_lists}"
)
for splits_list in nested_splits_lists:
if len(splits_list) != len(nested_splits_lists[0]):
raise ValueError(error_msg)
return [
tf.debugging.assert_equal(s1, s2, message=error_msg)
for splits_list in nested_splits_lists[1:]
for (s1, s2) in zip(nested_splits_lists[0], splits_list)
]
| Checks that the given splits lists are identical.
Performs static tests to ensure that the given splits lists are identical,
and returns a list of control dependency op tensors that check that they are
fully identical.
Args:
nested_splits_lists: A list of nested_splits_lists, where each split_list
is a list of `splits` tensors from a `RaggedTensor`, ordered from
outermost ragged dimension to innermost ragged dimension.
Returns:
A list of control dependency op tensors.
Raises:
ValueError: If the splits are not identical.
| 79 | 42 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _assert_splits_match(nested_splits_lists):
error_msg = (
"Inputs must have identical ragged splits. "
f"Input received: {nested_splits_lists}"
)
for splits_list in nested_splits_lists:
if len(splits_list) != len(nested_splits_lists[0]):
raise ValueError(error_msg)
return [
tf.debugging.assert_equal(s1, s2, message=error_msg)
for splits_list in nested_splits_lists[1:]
for (s1, s2) in zip(nested_splits_lists[0], splits_list)
]
```
###Assistant : Checks that the given splits lists are identical.
Performs static tests to ensure that the given splits lists are identical,
and returns a list of control dependency op tensors that check that they are
fully identical.
Args:
nested_splits_lists: A list of nested_splits_lists, where each split_list
is a list of `splits` tensors from a `RaggedTensor`, ordered from
outermost ragged dimension to innermost ragged dimension.
Returns:
A list of control dependency op tensors.
Raises:
ValueError: If the splits are not identical.
|
729 | def _iteration_limit_callback(self, *args) -> None:
try:
limit = self.vars["display_iterations"].get()
except tk.TclError:
# Don't update when there is no value in the variable
return
logger.debug("Updating graph iteration limit: (new_value: %s, args: %s)",
limit, args)
for graph in self.subnotebook.children.values():
graph.calcs.set_iterations_limit(limit)
| Limit the amount of data displayed in the live graph on an iteration slider
variable change. | 16 | 38 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _iteration_limit_callback(self, *args) -> None:
try:
limit = self.vars["display_iterations"].get()
except tk.TclError:
# Don't update when there is no value in the variable
return
logger.debug("Updating graph iteration limit: (new_value: %s, args: %s)",
limit, args)
for graph in self.subnotebook.children.values():
graph.calcs.set_iterations_limit(limit)
```
###Assistant : Limit the amount of data displayed in the live graph on an iteration slider
variable change.
|
730 | def test_queued_dagruns_stops_creating_when_max_active_is_reached(self, dag_maker):
with dag_maker(max_active_runs=10) as dag:
EmptyOperator(task_id='mytask')
session = settings.Session()
self.scheduler_job = SchedulerJob(subdir=os.devnull)
self.scheduler_job.executor = MockExecutor()
self.scheduler_job.processor_agent = mock.MagicMock()
self.scheduler_job.dagbag = dag_maker.dagbag
session = settings.Session()
orm_dag = session.query(DagModel).get(dag.dag_id)
assert orm_dag is not None
for _ in range(20):
self.scheduler_job._create_dag_runs([orm_dag], session)
drs = session.query(DagRun).all()
assert len(drs) == 10
for dr in drs:
dr.state = State.RUNNING
session.merge(dr)
session.commit()
assert session.query(DagRun.state).filter(DagRun.state == State.RUNNING).count() == 10
for _ in range(20):
self.scheduler_job._create_dag_runs([orm_dag], session)
assert session.query(DagRun).count() == 10
assert session.query(DagRun.state).filter(DagRun.state == State.RUNNING).count() == 10
assert session.query(DagRun.state).filter(DagRun.state == State.QUEUED).count() == 0
assert orm_dag.next_dagrun_create_after is None
| This tests that creating queued dagruns stops once max_active_runs is reached | 11 | 88 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_queued_dagruns_stops_creating_when_max_active_is_reached(self, dag_maker):
with dag_maker(max_active_runs=10) as dag:
EmptyOperator(task_id='mytask')
session = settings.Session()
self.scheduler_job = SchedulerJob(subdir=os.devnull)
self.scheduler_job.executor = MockExecutor()
self.scheduler_job.processor_agent = mock.MagicMock()
self.scheduler_job.dagbag = dag_maker.dagbag
session = settings.Session()
orm_dag = session.query(DagModel).get(dag.dag_id)
assert orm_dag is not None
for _ in range(20):
self.scheduler_job._create_dag_runs([orm_dag], session)
drs = session.query(DagRun).all()
assert len(drs) == 10
for dr in drs:
dr.state = State.RUNNING
session.merge(dr)
session.commit()
assert session.query(DagRun.state).filter(DagRun.state == State.RUNNING).count() == 10
for _ in range(20):
self.scheduler_job._create_dag_runs([orm_dag], session)
assert session.query(DagRun).count() == 10
assert session.query(DagRun.state).filter(DagRun.state == State.RUNNING).count() == 10
assert session.query(DagRun.state).filter(DagRun.state == State.QUEUED).count() == 0
assert orm_dag.next_dagrun_create_after is None
```
###Assistant : This tests that creating queued dagruns stops once max_active_runs is reached
|
731 | def test_escape_sequence_resulting_in_multiple_keypresses(parser):
events = list(parser.feed("\x1b[2;4~"))
assert len(events) == 2
assert events[0].key == "escape"
assert events[1].key == "shift+insert"
| Some sequences are interpreted as more than 1 keypress | 9 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_escape_sequence_resulting_in_multiple_keypresses(parser):
events = list(parser.feed("\x1b[2;4~"))
assert len(events) == 2
assert events[0].key == "escape"
assert events[1].key == "shift+insert"
```
###Assistant : Some sequences are interpreted as more than 1 keypress
|
732 | def generate_altered_options(self):
models_to_check = self.kept_model_keys.union(
self.kept_proxy_keys,
self.kept_unmanaged_keys,
# unmanaged converted to managed
self.old_unmanaged_keys & self.new_model_keys,
# managed converted to unmanaged
self.old_model_keys & self.new_unmanaged_keys,
)
for app_label, model_name in sorted(models_to_check):
old_model_name = self.renamed_models.get(
(app_label, model_name), model_name
)
old_model_state = self.from_state.models[app_label, old_model_name]
new_model_state = self.to_state.models[app_label, model_name]
old_options = {
key: value
for key, value in old_model_state.options.items()
if key in AlterModelOptions.ALTER_OPTION_KEYS
}
new_options = {
key: value
for key, value in new_model_state.options.items()
if key in AlterModelOptions.ALTER_OPTION_KEYS
}
if old_options != new_options:
self.add_operation(
app_label,
operations.AlterModelOptions(
name=model_name,
options=new_options,
),
)
|
Work out if any non-schema-affecting options have changed and make an
operation to represent them in state changes (in case Python code in
migrations needs them).
| 26 | 85 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def generate_altered_options(self):
models_to_check = self.kept_model_keys.union(
self.kept_proxy_keys,
self.kept_unmanaged_keys,
# unmanaged converted to managed
self.old_unmanaged_keys & self.new_model_keys,
# managed converted to unmanaged
self.old_model_keys & self.new_unmanaged_keys,
)
for app_label, model_name in sorted(models_to_check):
old_model_name = self.renamed_models.get(
(app_label, model_name), model_name
)
old_model_state = self.from_state.models[app_label, old_model_name]
new_model_state = self.to_state.models[app_label, model_name]
old_options = {
key: value
for key, value in old_model_state.options.items()
if key in AlterModelOptions.ALTER_OPTION_KEYS
}
new_options = {
key: value
for key, value in new_model_state.options.items()
if key in AlterModelOptions.ALTER_OPTION_KEYS
}
if old_options != new_options:
self.add_operation(
app_label,
operations.AlterModelOptions(
name=model_name,
options=new_options,
),
)
```
###Assistant :
Work out if any non-schema-affecting options have changed and make an
operation to represent them in state changes (in case Python code in
migrations needs them).
|
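The core comparison can be shown in isolation: restrict both option dicts to a whitelist of keys and emit a change only when the filtered views differ. The key names and option values below are invented for illustration.
```Python
ALTER_OPTION_KEYS = {"verbose_name", "ordering", "permissions"}  # illustrative subset

def altered_options(old_options, new_options):
    # Compare only the keys an AlterModelOptions-style operation would track.
    old = {k: v for k, v in old_options.items() if k in ALTER_OPTION_KEYS}
    new = {k: v for k, v in new_options.items() if k in ALTER_OPTION_KEYS}
    return new if old != new else None

before = {"ordering": ["name"], "db_table": "app_model"}        # db_table is ignored
after = {"ordering": ["-created"], "db_table": "app_model_v2"}
print(altered_options(before, after))  # {'ordering': ['-created']}
```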
733 | def _check_readonly_fields(self, obj):
if obj.readonly_fields == ():
return []
elif not isinstance(obj.readonly_fields, (list, tuple)):
return must_be(
"a list or tuple", option="readonly_fields", obj=obj, id="admin.E034"
)
else:
return list(
chain.from_iterable(
self._check_readonly_fields_item(
obj, field_name, "readonly_fields[%d]" % index
)
for index, field_name in enumerate(obj.readonly_fields)
)
)
| Check that readonly_fields refers to proper attribute or field. | 9 | 42 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _check_readonly_fields(self, obj):
if obj.readonly_fields == ():
return []
elif not isinstance(obj.readonly_fields, (list, tuple)):
return must_be(
"a list or tuple", option="readonly_fields", obj=obj, id="admin.E034"
)
else:
return list(
chain.from_iterable(
self._check_readonly_fields_item(
obj, field_name, "readonly_fields[%d]" % index
)
for index, field_name in enumerate(obj.readonly_fields)
)
)
```
###Assistant : Check that readonly_fields refers to proper attribute or field.
|
734 | def querystring(context, **kwargs):
request = context["request"]
querydict = request.GET.copy()
# Can't do querydict.update(kwargs), because QueryDict.update() appends to
# the list of values, instead of replacing the values.
for key, value in kwargs.items():
if value is None:
# Remove the key if the value is None
querydict.pop(key, None)
else:
# Set the key otherwise
querydict[key] = str(value)
return "?" + querydict.urlencode()
@register.simple_tag(takes_context=True) |
Print out the current querystring. Any keyword arguments to this template
tag will be added to the querystring before it is printed out.
<a href="/page/{% querystring key='value' %}">
Will result in something like:
<a href="/page/?foo=bar&key=value">
| 35 | 61 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def querystring(context, **kwargs):
request = context["request"]
querydict = request.GET.copy()
# Can't do querydict.update(kwargs), because QueryDict.update() appends to
# the list of values, instead of replacing the values.
for key, value in kwargs.items():
if value is None:
# Remove the key if the value is None
querydict.pop(key, None)
else:
# Set the key otherwise
querydict[key] = str(value)
return "?" + querydict.urlencode()
@register.simple_tag(takes_context=True)
```
###Assistant :
Print out the current querystring. Any keyword arguments to this template
tag will be added to the querystring before it is printed out.
<a href="/page/{% querystring key='value' %}">
Will result in something like:
<a href="/page/?foo=bar&key=value">
|
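The same behaviour sketched without Django, using only `urllib.parse`: copy the current parameters, drop any key whose new value is None, and overwrite the rest rather than appending. The query string below is an example.
```Python
from urllib.parse import parse_qs, urlencode

def build_querystring(current_query, **kwargs):
    # parse_qs returns lists of values; keep the last value per key for simplicity.
    params = {k: v[-1] for k, v in parse_qs(current_query).items()}
    for key, value in kwargs.items():
        if value is None:
            params.pop(key, None)      # remove the key entirely
        else:
            params[key] = str(value)   # replace, do not append
    return "?" + urlencode(params)

print(build_querystring("foo=bar&page=2", key="value", page=None))  # ?foo=bar&key=value
```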
735 | def _get_fingerprint_of_schema_without_irrelevant_keys(self) -> Text:
graph_schema = self._execution_context.graph_schema
schema_as_dict = graph_schema.as_dict()
for node_name, node_dict in schema_as_dict["nodes"].items():
config_copy = copy.deepcopy(node_dict["config"])
config_copy.pop(EPOCHS, None)
# ignore default values since they're filled in anyway later and can
# end up in configs (or not) in mysterious ways
defaults = graph_schema.nodes[node_name].uses.get_default_config()
for key, default_value in defaults.items():
if key in config_copy and config_copy[key] == default_value:
config_copy.pop(key)
node_dict["config"] = config_copy
node_dict.pop("eager")
node_dict.pop("constructor_name")
return rasa.shared.utils.io.deep_container_fingerprint(schema_as_dict)
| Returns a fingerprint of the given schema with certain items removed.
These items include specifications that do not influence actual training
results such as "eager" mode. The only configuration (in your config) that is
allowed to change is the number of `epochs`.
Returns:
fingerprint
| 44 | 66 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_fingerprint_of_schema_without_irrelevant_keys(self) -> Text:
graph_schema = self._execution_context.graph_schema
schema_as_dict = graph_schema.as_dict()
for node_name, node_dict in schema_as_dict["nodes"].items():
config_copy = copy.deepcopy(node_dict["config"])
config_copy.pop(EPOCHS, None)
# ignore default values since they're filled in anyway later and can
# end up in configs (or not) in mysterious ways
defaults = graph_schema.nodes[node_name].uses.get_default_config()
for key, default_value in defaults.items():
if key in config_copy and config_copy[key] == default_value:
config_copy.pop(key)
node_dict["config"] = config_copy
node_dict.pop("eager")
node_dict.pop("constructor_name")
return rasa.shared.utils.io.deep_container_fingerprint(schema_as_dict)
```
###Assistant : Returns a fingerprint of the given schema with certain items removed.
These items include specifications that do not influence actual training
results such as "eager" mode. The only configuration (in your config) that is
allowed to change is the number of `epochs`.
Returns:
fingerprint
|
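A hedged sketch of the fingerprinting idea: drop keys that still hold their default value, drop keys known to be irrelevant, and hash what remains. Here `hashlib` and `json` stand in for `rasa.shared.utils.io.deep_container_fingerprint`, and the config contents are invented.
```Python
import hashlib
import json

IRRELEVANT_KEYS = {"epochs", "eager", "constructor_name"}  # illustrative

def schema_fingerprint(config, defaults):
    relevant = {
        key: value
        for key, value in config.items()
        if key not in IRRELEVANT_KEYS and defaults.get(key) != value
    }
    payload = json.dumps(relevant, sort_keys=True)
    return hashlib.sha1(payload.encode("utf-8")).hexdigest()

defaults = {"epochs": 100, "hidden_layers": [256, 128], "dropout": 0.2}
config = {"epochs": 300, "hidden_layers": [256, 128], "dropout": 0.1}
# Only the non-default, relevant key (dropout) influences the fingerprint.
print(schema_fingerprint(config, defaults))
```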
736 | def get_gi_typelibs(module, version):
module_info = GiModuleInfo(module, version)
return module_info.collect_typelib_data()
|
Return a tuple of (binaries, datas, hiddenimports) to be used by PyGObject related hooks. Searches for and adds
dependencies recursively.
:param module: GI module name, as passed to 'gi.require_version()'
:param version: GI module version, as passed to 'gi.require_version()'
| 38 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_gi_typelibs(module, version):
module_info = GiModuleInfo(module, version)
return module_info.collect_typelib_data()
```
###Assistant :
Return a tuple of (binaries, datas, hiddenimports) to be used by PyGObject related hooks. Searches for and adds
dependencies recursively.
:param module: GI module name, as passed to 'gi.require_version()'
:param version: GI module version, as passed to 'gi.require_version()'
|
737 | def _sync_dag_view_permissions(self, dag_id, access_control):
dag_resource_name = permissions.resource_name_for_dag(dag_id)
|
Set the access policy on the given DAG's ViewModel.
:param dag_id: the ID of the DAG whose permissions should be updated
:param access_control: a dict where each key is a rolename and
each value is a set() of action names (e.g. {'can_read'})
| 42 | 7 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _sync_dag_view_permissions(self, dag_id, access_control):
dag_resource_name = permissions.resource_name_for_dag(dag_id)
```
###Assistant :
Set the access policy on the given DAG's ViewModel.
:param dag_id: the ID of the DAG whose permissions should be updated
:param access_control: a dict where each key is a rolename and
each value is a set() of action names (e.g. {'can_read'})
|
738 | def test_series_equal_datetime_values_mismatch(rtol):
msg =
s1 = Series(pd.date_range("2018-01-01", periods=3, freq="D"))
s2 = Series(pd.date_range("2019-02-02", periods=3, freq="D"))
with pytest.raises(AssertionError, match=msg):
tm.assert_series_equal(s1, s2, rtol=rtol)
| Series are different
Series values are different \\(100.0 %\\)
\\[index\\]: \\[0, 1, 2\\]
\\[left\\]: \\[1514764800000000000, 1514851200000000000, 1514937600000000000\\]
\\[right\\]: \\[1549065600000000000, 1549152000000000000, 1549238400000000000\\] | 21 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_series_equal_datetime_values_mismatch(rtol):
msg =
s1 = Series(pd.date_range("2018-01-01", periods=3, freq="D"))
s2 = Series(pd.date_range("2019-02-02", periods=3, freq="D"))
with pytest.raises(AssertionError, match=msg):
tm.assert_series_equal(s1, s2, rtol=rtol)
```
###Assistant : Series are different
Series values are different \\(100.0 %\\)
\\[index\\]: \\[0, 1, 2\\]
\\[left\\]: \\[1514764800000000000, 1514851200000000000, 1514937600000000000\\]
\\[right\\]: \\[1549065600000000000, 1549152000000000000, 1549238400000000000\\]
|
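A runnable illustration of the behaviour under test, using the public `pandas.testing` API instead of the internal `tm` helper; the exact wording of the printed message can vary slightly between pandas versions, but it is the message the regex above is written against.
```Python
import pandas as pd
from pandas.testing import assert_series_equal

s1 = pd.Series(pd.date_range("2018-01-01", periods=3, freq="D"))
s2 = pd.Series(pd.date_range("2019-02-02", periods=3, freq="D"))

try:
    assert_series_equal(s1, s2)
except AssertionError as exc:
    # Reports that 100% of the values differ, as asserted in the test above.
    print(exc)
```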
739 | def test_decrypt_pillar_invalid_renderer(salt_master, grains, pillar_homedir):
opts = salt_master.config.copy()
opts["decrypt_pillar"] = [{"secrets:vault": "gpg"}]
opts["decrypt_pillar_default"] = "foo"
opts["decrypt_pillar_renderers"] = ["foo", "bar"]
pillar_obj = salt.pillar.Pillar(opts, grains, "test", "base")
ret = pillar_obj.compile_pillar()
expected = copy.deepcopy(GPG_PILLAR_ENCRYPTED)
expected["_errors"] = [
"Failed to decrypt pillar key 'secrets:vault': 'gpg' is not a valid decryption"
" renderer. Valid choices are: foo, bar"
]
assert ret["_errors"] == expected["_errors"]
assert ret["secrets"]["vault"]["foo"] == expected["secrets"]["vault"]["foo"]
assert ret["secrets"]["vault"]["bar"] == expected["secrets"]["vault"]["bar"]
assert ret["secrets"]["vault"]["baz"] == expected["secrets"]["vault"]["baz"]
assert ret["secrets"]["vault"]["qux"] == expected["secrets"]["vault"]["qux"]
|
Test decryption using a renderer which is not permitted. It should
fail, leaving the encrypted keys intact, and add an error to the pillar
dictionary.
decrypt_pillar_default: foo
decrypt_pillar_renderers:
- foo
- bar
decrypt_pillar:
- 'secrets:vault': gpg
| 36 | 73 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_decrypt_pillar_invalid_renderer(salt_master, grains, pillar_homedir):
opts = salt_master.config.copy()
opts["decrypt_pillar"] = [{"secrets:vault": "gpg"}]
opts["decrypt_pillar_default"] = "foo"
opts["decrypt_pillar_renderers"] = ["foo", "bar"]
pillar_obj = salt.pillar.Pillar(opts, grains, "test", "base")
ret = pillar_obj.compile_pillar()
expected = copy.deepcopy(GPG_PILLAR_ENCRYPTED)
expected["_errors"] = [
"Failed to decrypt pillar key 'secrets:vault': 'gpg' is not a valid decryption"
" renderer. Valid choices are: foo, bar"
]
assert ret["_errors"] == expected["_errors"]
assert ret["secrets"]["vault"]["foo"] == expected["secrets"]["vault"]["foo"]
assert ret["secrets"]["vault"]["bar"] == expected["secrets"]["vault"]["bar"]
assert ret["secrets"]["vault"]["baz"] == expected["secrets"]["vault"]["baz"]
assert ret["secrets"]["vault"]["qux"] == expected["secrets"]["vault"]["qux"]
```
###Assistant :
Test decryption using a renderer which is not permitted. It should
fail, leaving the encrypted keys intact, and add an error to the pillar
dictionary.
decrypt_pillar_default: foo
decrypt_pillar_renderers:
- foo
- bar
decrypt_pillar:
- 'secrets:vault': gpg
|
740 | def get_trial_name():
warnings.warn(
_deprecation_msg,
DeprecationWarning,
)
_session = get_session()
if _session:
return _session.trial_name
@DeveloperAPI | Trial name for the corresponding trial.
For function API use only.
| 11 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_trial_name():
warnings.warn(
_deprecation_msg,
DeprecationWarning,
)
_session = get_session()
if _session:
return _session.trial_name
@DeveloperAPI
```
###Assistant : Trial name for the corresponding trial.
For function API use only.
|
741 | def get_scheduler_lock(collection=None, scheduler=None):
from dask import multiprocessing
from dask.base import get_scheduler
actual_get = get_scheduler(collections=[collection], scheduler=scheduler)
if actual_get == multiprocessing.get:
return multiprocessing.get_context().Manager().Lock()
return SerializableLock()
| Get an instance of the appropriate lock for a certain situation based on
scheduler used. | 15 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_scheduler_lock(collection=None, scheduler=None):
from dask import multiprocessing
from dask.base import get_scheduler
actual_get = get_scheduler(collections=[collection], scheduler=scheduler)
if actual_get == multiprocessing.get:
return multiprocessing.get_context().Manager().Lock()
return SerializableLock()
```
###Assistant : Get an instance of the appropriate lock for a certain situation based on
scheduler used.
|
742 | def verify_dataset_shuffled(x):
assert isinstance(x, tf.data.Dataset)
graph_def = get_dataset_graph_def(x)
for node in graph_def.node:
if node.op.startswith("ShuffleDataset"):
return True
# Also check graph_def.library.function for ds.interleave or ds.flat_map
for function in graph_def.library.function:
for node in function.node_def:
if node.op.startswith("ShuffleDataset"):
return True
logging.warning(
"Expected a shuffled dataset but input dataset `x` is "
"not shuffled. Please invoke `shuffle()` on input dataset."
)
return False
| Verifies that the dataset is shuffled.
Args:
x: Dataset passed as an input to the model.
Returns:
boolean, whether the input dataset is shuffled or not.
| 26 | 58 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def verify_dataset_shuffled(x):
assert isinstance(x, tf.data.Dataset)
graph_def = get_dataset_graph_def(x)
for node in graph_def.node:
if node.op.startswith("ShuffleDataset"):
return True
# Also check graph_def.library.function for ds.interleave or ds.flat_map
for function in graph_def.library.function:
for node in function.node_def:
if node.op.startswith("ShuffleDataset"):
return True
logging.warning(
"Expected a shuffled dataset but input dataset `x` is "
"not shuffled. Please invoke `shuffle()` on input dataset."
)
return False
```
###Assistant : Verifies that the dataset is shuffled.
Args:
x: Dataset passed as an input to the model.
Returns:
boolean, whether the input dataset is shuffled or not.
|
743 | def get_binance_available_quotes_for_each_coin() -> dict:
trading_pairs = _get_trading_pairs()
results = defaultdict(list)
for pair in trading_pairs:
results[pair["baseAsset"]].append(pair["quoteAsset"])
return results
@log_start_end(log=logger) | Helper methods that for every coin available on Binance add all quote assets. [Source: Binance]
Returns
-------
dict:
All quote assets for given coin
{'ETH' : ['BTC', 'USDT' ...], 'UNI' : ['ETH', 'BTC', 'BUSD', ...]}
| 34 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_binance_available_quotes_for_each_coin() -> dict:
trading_pairs = _get_trading_pairs()
results = defaultdict(list)
for pair in trading_pairs:
results[pair["baseAsset"]].append(pair["quoteAsset"])
return results
@log_start_end(log=logger)
```
###Assistant : Helper method that, for every coin available on Binance, adds all quote assets. [Source: Binance]
Returns
-------
dict:
All quote assets for given coin
{'ETH' : ['BTC', 'USDT' ...], 'UNI' : ['ETH', 'BTC', 'BUSD', ...]}
|
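The grouping step on its own, with a hand-written list of pairs standing in for the exchange response returned by `_get_trading_pairs()`:
```Python
from collections import defaultdict

# Illustrative subset of what the exchange would return.
trading_pairs = [
    {"baseAsset": "ETH", "quoteAsset": "BTC"},
    {"baseAsset": "ETH", "quoteAsset": "USDT"},
    {"baseAsset": "UNI", "quoteAsset": "ETH"},
]

results = defaultdict(list)
for pair in trading_pairs:
    results[pair["baseAsset"]].append(pair["quoteAsset"])

print(dict(results))  # {'ETH': ['BTC', 'USDT'], 'UNI': ['ETH']}
```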
744 | def update_qty_in_future_sle(args, allow_negative_stock=False):
datetime_limit_condition = ""
qty_shift = args.actual_qty
# find difference/shift in qty caused by stock reconciliation
if args.voucher_type == "Stock Reconciliation":
qty_shift = get_stock_reco_qty_shift(args)
# find the next nearest stock reco so that we only recalculate SLEs till that point
next_stock_reco_detail = get_next_stock_reco(args)
if next_stock_reco_detail:
detail = next_stock_reco_detail[0]
# add condition to update SLEs before this date & time
datetime_limit_condition = get_datetime_limit_condition(detail)
frappe.db.sql(
.format(
qty_shift=qty_shift, datetime_limit_condition=datetime_limit_condition
),
args,
)
validate_negative_qty_in_future_sle(args, allow_negative_stock)
| Recalculate Qty after Transaction in future SLEs based on current SLE.
update `tabStock Ledger Entry`
set qty_after_transaction = qty_after_transaction + {qty_shift}
where
item_code = %(item_code)s
and warehouse = %(warehouse)s
and voucher_no != %(voucher_no)s
and is_cancelled = 0
and (timestamp(posting_date, posting_time) > timestamp(%(posting_date)s, %(posting_time)s)
or (
timestamp(posting_date, posting_time) = timestamp(%(posting_date)s, %(posting_time)s)
and creation > %(creation)s
)
)
{datetime_limit_condition}
| 57 | 73 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def update_qty_in_future_sle(args, allow_negative_stock=False):
datetime_limit_condition = ""
qty_shift = args.actual_qty
# find difference/shift in qty caused by stock reconciliation
if args.voucher_type == "Stock Reconciliation":
qty_shift = get_stock_reco_qty_shift(args)
# find the next nearest stock reco so that we only recalculate SLEs till that point
next_stock_reco_detail = get_next_stock_reco(args)
if next_stock_reco_detail:
detail = next_stock_reco_detail[0]
# add condition to update SLEs before this date & time
datetime_limit_condition = get_datetime_limit_condition(detail)
frappe.db.sql(
.format(
qty_shift=qty_shift, datetime_limit_condition=datetime_limit_condition
),
args,
)
validate_negative_qty_in_future_sle(args, allow_negative_stock)
```
###Assistant : Recalculate Qty after Transaction in future SLEs based on current SLE.
update `tabStock Ledger Entry`
set qty_after_transaction = qty_after_transaction + {qty_shift}
where
item_code = %(item_code)s
and warehouse = %(warehouse)s
and voucher_no != %(voucher_no)s
and is_cancelled = 0
and (timestamp(posting_date, posting_time) > timestamp(%(posting_date)s, %(posting_time)s)
or (
timestamp(posting_date, posting_time) = timestamp(%(posting_date)s, %(posting_time)s)
and creation > %(creation)s
)
)
{datetime_limit_condition}
|
745 | def get_current_timezone_tag(parser, token):
# token.split_contents() isn't useful here because this tag doesn't accept variable as arguments
args = token.contents.split()
if len(args) != 3 or args[1] != 'as':
raise TemplateSyntaxError(
"'get_current_timezone' requires 'as variable' (got %r)" % args
)
return GetCurrentTimezoneNode(args[2])
|
Store the name of the current time zone in the context.
Usage::
{% get_current_timezone as TIME_ZONE %}
This will fetch the currently active time zone and put its name
into the ``TIME_ZONE`` context variable.
| 34 | 40 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_current_timezone_tag(parser, token):
# token.split_contents() isn't useful here because this tag doesn't accept variable as arguments
args = token.contents.split()
if len(args) != 3 or args[1] != 'as':
raise TemplateSyntaxError(
"'get_current_timezone' requires 'as variable' (got %r)" % args
)
return GetCurrentTimezoneNode(args[2])
```
###Assistant :
Store the name of the current time zone in the context.
Usage::
{% get_current_timezone as TIME_ZONE %}
This will fetch the currently active time zone and put its name
into the ``TIME_ZONE`` context variable.
|
746 | def test_save_multiple_world_logs_mutator(self):
with testing_utils.tempdir() as tmpdir:
log_report = os.path.join(tmpdir, 'world_logs.jsonl')
multitask = 'integration_tests:mutators=flatten,integration_tests:ReverseTeacher:mutator=reverse'
valid, test = testing_utils.train_model(
{
'task': multitask,
'validation_max_exs': 10,
'model': 'repeat_label',
'short_final_eval': True,
'num_epochs': 1.0,
'world_logs': log_report,
}
)
for task in multitask.split(','):
task_log_report = get_task_world_logs(
task, log_report, is_multitask=True
)
with PathManager.open(task_log_report) as f:
json_lines = f.readlines()
assert len(json_lines) == 5
@register_agent("fake_report") |
Test that we can save multiple world_logs from train model on multiple tasks
with mutators present.
| 16 | 55 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_save_multiple_world_logs_mutator(self):
with testing_utils.tempdir() as tmpdir:
log_report = os.path.join(tmpdir, 'world_logs.jsonl')
multitask = 'integration_tests:mutators=flatten,integration_tests:ReverseTeacher:mutator=reverse'
valid, test = testing_utils.train_model(
{
'task': multitask,
'validation_max_exs': 10,
'model': 'repeat_label',
'short_final_eval': True,
'num_epochs': 1.0,
'world_logs': log_report,
}
)
for task in multitask.split(','):
task_log_report = get_task_world_logs(
task, log_report, is_multitask=True
)
with PathManager.open(task_log_report) as f:
json_lines = f.readlines()
assert len(json_lines) == 5
@register_agent("fake_report")
```
###Assistant :
Test that we can save multiple world_logs from train model on multiple tasks
with mutators present.
|
747 | def testDotsInLogdir(self):
local_dir_path = Path("/tmp/test_rel_dots")
local_dir = str(local_dir_path)
if local_dir_path.exists():
local_dir = tempfile.mkdtemp(prefix=str(local_dir_path) + "_")
trial = Trial(trainable_name="rel_logdir", local_dir=local_dir)
with self.assertRaises(ValueError):
trial.logdir = "/tmp/test_rel/../dots"
with self.assertRaises(ValueError):
trial.logdir = local_dir + "/../"
if shutil.rmtree.avoids_symlink_attacks:
if local_dir_path.exists():
shutil.rmtree(local_dir)
| This should result in errors as dots in paths are not allowed. | 12 | 36 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def testDotsInLogdir(self):
local_dir_path = Path("/tmp/test_rel_dots")
local_dir = str(local_dir_path)
if local_dir_path.exists():
local_dir = tempfile.mkdtemp(prefix=str(local_dir_path) + "_")
trial = Trial(trainable_name="rel_logdir", local_dir=local_dir)
with self.assertRaises(ValueError):
trial.logdir = "/tmp/test_rel/../dots"
with self.assertRaises(ValueError):
trial.logdir = local_dir + "/../"
if shutil.rmtree.avoids_symlink_attacks:
if local_dir_path.exists():
shutil.rmtree(local_dir)
```
###Assistant : This should result in errors as dots in paths are not allowed.
|
748 | def test_delete_post(self):
# Send request
response = self.client.post(
reverse("wagtailimages:delete_multiple", args=(self.image.id,))
)
# Check response
self.assertEqual(response.status_code, 200)
self.assertEqual(response["Content-Type"], "application/json")
# Make sure the image is deleted
self.assertFalse(Image.objects.filter(id=self.image.id).exists())
# Check JSON
response_json = json.loads(response.content.decode())
self.assertIn("image_id", response_json)
self.assertIn("success", response_json)
self.assertEqual(response_json["image_id"], self.image.id)
self.assertTrue(response_json["success"])
@override_settings(WAGTAILIMAGES_IMAGE_MODEL="tests.CustomImage") |
This tests that a POST request to the delete view deletes the image
| 13 | 40 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_delete_post(self):
# Send request
response = self.client.post(
reverse("wagtailimages:delete_multiple", args=(self.image.id,))
)
# Check response
self.assertEqual(response.status_code, 200)
self.assertEqual(response["Content-Type"], "application/json")
# Make sure the image is deleted
self.assertFalse(Image.objects.filter(id=self.image.id).exists())
# Check JSON
response_json = json.loads(response.content.decode())
self.assertIn("image_id", response_json)
self.assertIn("success", response_json)
self.assertEqual(response_json["image_id"], self.image.id)
self.assertTrue(response_json["success"])
@override_settings(WAGTAILIMAGES_IMAGE_MODEL="tests.CustomImage")
```
###Assistant :
This tests that a POST request to the delete view deletes the image
|
749 | def _show_diff_helper(self, frame_data, expected_frame_data):
import matplotlib.gridspec as gridspec # type: ignore
import matplotlib.pyplot as plt
gs = gridspec.GridSpec(2, 2)
fig = plt.figure()
fig.suptitle(f"Test for {str(self.scene).replace('Test', '')}", fontsize=16)
ax = fig.add_subplot(gs[0, 0])
ax.imshow(frame_data)
ax.set_title("Generated :")
ax = fig.add_subplot(gs[0, 1])
ax.imshow(expected_frame_data)
ax.set_title("Expected :")
ax = fig.add_subplot(gs[1, :])
diff_im = expected_frame_data.copy()
diff_im = np.where(
frame_data != np.array([0, 0, 0, 255]),
np.array([0, 255, 0, 255], dtype="uint8"),
np.array([0, 0, 0, 255], dtype="uint8"),
) # Set any non-black pixels to green
np.putmask(
diff_im,
expected_frame_data != frame_data,
np.array([255, 0, 0, 255], dtype="uint8"),
) # Set any different pixels to red
ax.imshow(diff_im, interpolation="nearest")
ax.set_title("Differences summary : (green = same, red = different)")
plt.show()
plt.savefig(f"{self.scene}.png")
| Will visually display with matplotlib the differences between the generated frame and the expected one. | 13 | 106 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _show_diff_helper(self, frame_data, expected_frame_data):
import matplotlib.gridspec as gridspec # type: ignore
import matplotlib.pyplot as plt
gs = gridspec.GridSpec(2, 2)
fig = plt.figure()
fig.suptitle(f"Test for {str(self.scene).replace('Test', '')}", fontsize=16)
ax = fig.add_subplot(gs[0, 0])
ax.imshow(frame_data)
ax.set_title("Generated :")
ax = fig.add_subplot(gs[0, 1])
ax.imshow(expected_frame_data)
ax.set_title("Expected :")
ax = fig.add_subplot(gs[1, :])
diff_im = expected_frame_data.copy()
diff_im = np.where(
frame_data != np.array([0, 0, 0, 255]),
np.array([0, 255, 0, 255], dtype="uint8"),
np.array([0, 0, 0, 255], dtype="uint8"),
) # Set any non-black pixels to green
np.putmask(
diff_im,
expected_frame_data != frame_data,
np.array([255, 0, 0, 255], dtype="uint8"),
) # Set any different pixels to red
ax.imshow(diff_im, interpolation="nearest")
ax.set_title("Differences summary : (green = same, red = different)")
plt.show()
plt.savefig(f"{self.scene}.png")
```
###Assistant : Will visually display with matplotlib the differences between the generated frame and the expected one.
|
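The pixel-difference logic in isolation, with tiny synthetic RGBA frames and no matplotlib so it runs headless; the frame contents are made up.
```Python
import numpy as np

# Two 2x2 RGBA frames; the bottom-right pixel differs between them.
frame = np.array([[[0, 0, 0, 255], [255, 255, 255, 255]],
                  [[0, 0, 0, 255], [0, 0, 0, 255]]], dtype="uint8")
expected = np.array([[[0, 0, 0, 255], [255, 255, 255, 255]],
                     [[0, 0, 0, 255], [10, 10, 10, 255]]], dtype="uint8")

# Non-black pixels of the generated frame become green...
diff = np.where(frame != np.array([0, 0, 0, 255]),
                np.array([0, 255, 0, 255], dtype="uint8"),
                np.array([0, 0, 0, 255], dtype="uint8"))
# ...and any pixel that differs from the expected frame is overwritten with red.
np.putmask(diff, expected != frame, np.array([255, 0, 0, 255], dtype="uint8"))
print(diff[1, 1])  # the mismatching pixel is flagged red: [255, 0, 0, 255]
```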
750 | def consume_capacity(self, task):
if self.is_container_group:
self.container_group_jobs += 1
self.container_group_consumed_forks += task.task_impact
else:
raise RuntimeError("We only track capacity for container groups at the instance group level. Otherwise, consume capacity on instances.")
| We only consume capacity on an instance group level if it is a container group. Otherwise we consume capacity on an instance level. | 23 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def consume_capacity(self, task):
if self.is_container_group:
self.container_group_jobs += 1
self.container_group_consumed_forks += task.task_impact
else:
raise RuntimeError("We only track capacity for container groups at the instance group level. Otherwise, consume capacity on instances.")
```
###Assistant : We only consume capacity on an instance group level if it is a container group. Otherwise we consume capacity on an instance level.
|
751 | def get_avail_mem_per_ray_worker_node(spark, object_store_memory_per_node):
num_cpus_per_spark_task = int(
spark.sparkContext.getConf().get("spark.task.cpus", "1")
)
|
Return the available heap memory and object store memory for each ray worker.
NB: We have one ray node per spark task.
| 22 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_avail_mem_per_ray_worker_node(spark, object_store_memory_per_node):
num_cpus_per_spark_task = int(
spark.sparkContext.getConf().get("spark.task.cpus", "1")
)
```
###Assistant :
Return the available heap memory and object store memory for each ray worker.
NB: We have one ray node per spark task.
|
752 | def _iter_egg_info_dependencies(self) -> Iterable[str]:
for entry in self._iter_requires_txt_entries():
if entry.extra and entry.marker:
marker = f'({entry.marker}) and extra == "{safe_extra(entry.extra)}"'
elif entry.extra:
marker = f'extra == "{safe_extra(entry.extra)}"'
elif entry.marker:
marker = entry.marker
else:
marker = ""
if marker:
yield f"{entry.requirement} ; {marker}"
else:
yield entry.requirement
| Get distribution dependencies from the egg-info directory.
To ease parsing, this converts a legacy dependency entry into a PEP 508
requirement string. Like ``_iter_requires_txt_entries()``, there is code
in ``importlib.metadata`` that does mostly the same, but does not do exactly
what we need.
Namely, ``importlib.metadata`` does not normalize the extra name before
putting it into the requirement string, which causes marker comparison
to fail because the dist-info format does normalize. This is consistent in
all currently available PEP 517 backends, although not standardized.
| 81 | 44 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _iter_egg_info_dependencies(self) -> Iterable[str]:
for entry in self._iter_requires_txt_entries():
if entry.extra and entry.marker:
marker = f'({entry.marker}) and extra == "{safe_extra(entry.extra)}"'
elif entry.extra:
marker = f'extra == "{safe_extra(entry.extra)}"'
elif entry.marker:
marker = entry.marker
else:
marker = ""
if marker:
yield f"{entry.requirement} ; {marker}"
else:
yield entry.requirement
```
###Assistant : Get distribution dependencies from the egg-info directory.
To ease parsing, this converts a legacy dependency entry into a PEP 508
requirement string. Like ``_iter_requires_txt_entries()``, there is code
in ``importlib.metadata`` that does mostly the same, but does not do exactly
what we need.
Namely, ``importlib.metadata`` does not normalize the extra name before
putting it into the requirement string, which causes marker comparison
to fail because the dist-info format do normalize. This is consistent in
all currently available PEP 517 backends, although not standardized.
|
753 | def cache_from_source(path, debug_override=None):
with warnings.catch_warnings():
warnings.simplefilter('ignore')
return util.cache_from_source(path, debug_override)
| **DEPRECATED**
Given the path to a .py file, return the path to its .pyc file.
The .py file does not need to exist; this simply returns the path to the
.pyc file calculated as if the .py file were imported.
If debug_override is not None, then it must be a boolean and is used in
place of sys.flags.optimize.
If sys.implementation.cache_tag is None then NotImplementedError is raised.
| 66 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def cache_from_source(path, debug_override=None):
with warnings.catch_warnings():
warnings.simplefilter('ignore')
return util.cache_from_source(path, debug_override)
```
###Assistant : **DEPRECATED**
Given the path to a .py file, return the path to its .pyc file.
The .py file does not need to exist; this simply returns the path to the
.pyc file calculated as if the .py file were imported.
If debug_override is not None, then it must be a boolean and is used in
place of sys.flags.optimize.
If sys.implementation.cache_tag is None then NotImplementedError is raised.
|
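For reference, the non-deprecated call it forwards to can be used directly from `importlib.util`; the source path below is only an example.
```Python
import importlib.util

# Maps a source path to its would-be .pyc path under __pycache__, e.g.
# '/tmp/pkg/__pycache__/mod.cpython-311.pyc' (the tag depends on the interpreter).
print(importlib.util.cache_from_source("/tmp/pkg/mod.py"))
```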
754 | def _url_collapse_path(path):
# Query component should not be involved.
path, _, query = path.partition('?')
path = urllib.parse.unquote(path)
# Similar to os.path.split(os.path.normpath(path)) but specific to URL
# path semantics rather than local operating system semantics.
path_parts = path.split('/')
head_parts = []
for part in path_parts[:-1]:
if part == '..':
head_parts.pop() # IndexError if more '..' than prior parts
elif part and part != '.':
head_parts.append(part)
if path_parts:
tail_part = path_parts.pop()
if tail_part:
if tail_part == '..':
head_parts.pop()
tail_part = ''
elif tail_part == '.':
tail_part = ''
else:
tail_part = ''
if query:
tail_part = '?'.join((tail_part, query))
splitpath = ('/' + '/'.join(head_parts), tail_part)
collapsed_path = "/".join(splitpath)
return collapsed_path
nobody = None
|
Given a URL path, remove extra '/'s and '.' path elements, collapse
any '..' references, and return the collapsed path.
Implements something akin to RFC-2396 5.2 step 6 to parse relative paths.
The utility of this function is limited to the is_cgi method and helps
prevent some security attacks.
Returns: The reconstituted URL, which will always start with a '/'.
Raises: IndexError if too many '..' occur within the path.
| 70 | 112 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _url_collapse_path(path):
# Query component should not be involved.
path, _, query = path.partition('?')
path = urllib.parse.unquote(path)
# Similar to os.path.split(os.path.normpath(path)) but specific to URL
# path semantics rather than local operating system semantics.
path_parts = path.split('/')
head_parts = []
for part in path_parts[:-1]:
if part == '..':
head_parts.pop() # IndexError if more '..' than prior parts
elif part and part != '.':
head_parts.append(part)
if path_parts:
tail_part = path_parts.pop()
if tail_part:
if tail_part == '..':
head_parts.pop()
tail_part = ''
elif tail_part == '.':
tail_part = ''
else:
tail_part = ''
if query:
tail_part = '?'.join((tail_part, query))
splitpath = ('/' + '/'.join(head_parts), tail_part)
collapsed_path = "/".join(splitpath)
return collapsed_path
nobody = None
```
###Assistant :
Given a URL path, remove extra '/'s and '.' path elements, collapse
any '..' references, and return the collapsed path.
Implements something akin to RFC-2396 5.2 step 6 to parse relative paths.
The utility of this function is limited to the is_cgi method and helps
prevent some security attacks.
Returns: The reconstituted URL, which will always start with a '/'.
Raises: IndexError if too many '..' occur within the path.
|
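The core '..' collapsing can be illustrated with a short stack-based sketch; this simplification omits the query-string and tail handling of the full function above.
```Python
def collapse(path):
    # Collapse '.', '..' and empty segments with a simple stack.
    parts = []
    for segment in path.split("/"):
        if segment == "..":
            parts.pop()  # IndexError if there are more '..' than real segments
        elif segment and segment != ".":
            parts.append(segment)
    return "/" + "/".join(parts)

print(collapse("/a/b/../c/./d"))                 # /a/c/d
print(collapse("/static//../images/logo.png"))   # /images/logo.png
```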
755 | def to_coo(self, row_levels=(0,), column_levels=(1,), sort_labels: bool = False):
from pandas.core.arrays.sparse.scipy_sparse import sparse_series_to_coo
A, rows, columns = sparse_series_to_coo(
self._parent, row_levels, column_levels, sort_labels=sort_labels
)
return A, rows, columns
|
Create a scipy.sparse.coo_matrix from a Series with MultiIndex.
Use row_levels and column_levels to determine the row and column
coordinates respectively. row_levels and column_levels are the names
(labels) or numbers of the levels. {row_levels, column_levels} must be
a partition of the MultiIndex level names (or numbers).
Parameters
----------
row_levels : tuple/list
column_levels : tuple/list
sort_labels : bool, default False
Sort the row and column labels before forming the sparse matrix.
When `row_levels` and/or `column_levels` refer to a single level,
set to `True` for a faster execution.
Returns
-------
y : scipy.sparse.coo_matrix
rows : list (row labels)
columns : list (column labels)
Examples
--------
>>> s = pd.Series([3.0, np.nan, 1.0, 3.0, np.nan, np.nan])
>>> s.index = pd.MultiIndex.from_tuples(
... [
... (1, 2, "a", 0),
... (1, 2, "a", 1),
... (1, 1, "b", 0),
... (1, 1, "b", 1),
... (2, 1, "b", 0),
... (2, 1, "b", 1)
... ],
... names=["A", "B", "C", "D"],
... )
>>> s
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: float64
>>> ss = s.astype("Sparse")
>>> ss
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: Sparse[float64, nan]
>>> A, rows, columns = ss.sparse.to_coo(
... row_levels=["A", "B"], column_levels=["C", "D"], sort_labels=True
... )
>>> A
<3x4 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
>>> A.todense()
matrix([[0., 0., 1., 3.],
[3., 0., 0., 0.],
[0., 0., 0., 0.]])
>>> rows
[(1, 1), (1, 2), (2, 1)]
>>> columns
[('a', 0), ('a', 1), ('b', 0), ('b', 1)]
| 279 | 26 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def to_coo(self, row_levels=(0,), column_levels=(1,), sort_labels: bool = False):
from pandas.core.arrays.sparse.scipy_sparse import sparse_series_to_coo
A, rows, columns = sparse_series_to_coo(
self._parent, row_levels, column_levels, sort_labels=sort_labels
)
return A, rows, columns
```
###Assistant :
Create a scipy.sparse.coo_matrix from a Series with MultiIndex.
Use row_levels and column_levels to determine the row and column
coordinates respectively. row_levels and column_levels are the names
(labels) or numbers of the levels. {row_levels, column_levels} must be
a partition of the MultiIndex level names (or numbers).
Parameters
----------
row_levels : tuple/list
column_levels : tuple/list
sort_labels : bool, default False
Sort the row and column labels before forming the sparse matrix.
When `row_levels` and/or `column_levels` refer to a single level,
set to `True` for a faster execution.
Returns
-------
y : scipy.sparse.coo_matrix
rows : list (row labels)
columns : list (column labels)
Examples
--------
>>> s = pd.Series([3.0, np.nan, 1.0, 3.0, np.nan, np.nan])
>>> s.index = pd.MultiIndex.from_tuples(
... [
... (1, 2, "a", 0),
... (1, 2, "a", 1),
... (1, 1, "b", 0),
... (1, 1, "b", 1),
... (2, 1, "b", 0),
... (2, 1, "b", 1)
... ],
... names=["A", "B", "C", "D"],
... )
>>> s
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: float64
>>> ss = s.astype("Sparse")
>>> ss
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: Sparse[float64, nan]
>>> A, rows, columns = ss.sparse.to_coo(
... row_levels=["A", "B"], column_levels=["C", "D"], sort_labels=True
... )
>>> A
<3x4 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
>>> A.todense()
matrix([[0., 0., 1., 3.],
[3., 0., 0., 0.],
[0., 0., 0., 0.]])
>>> rows
[(1, 1), (1, 2), (2, 1)]
>>> columns
[('a', 0), ('a', 1), ('b', 0), ('b', 1)]
|
756 | def _get_memory_heuristic_values(self) -> Dict[str, Union[str, float, bool]]:
return {
'ignore_in_session_memories': self.opt.get(
'ignore_in_session_memories_mkm', False
),
'memory_overlap_threshold': self.opt.get('memory_overlap_threshold', 0.0),
'memory_hard_block_for_n_turns': self.opt.get(
'memory_hard_block_for_n_turns', 0
),
'memory_soft_block_decay_factor': self.opt.get(
'memory_soft_block_decay_factor', 0.0
),
}
|
Extract heuristics from self.opt.
| 4 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_memory_heuristic_values(self) -> Dict[str, Union[str, float, bool]]:
return {
'ignore_in_session_memories': self.opt.get(
'ignore_in_session_memories_mkm', False
),
'memory_overlap_threshold': self.opt.get('memory_overlap_threshold', 0.0),
'memory_hard_block_for_n_turns': self.opt.get(
'memory_hard_block_for_n_turns', 0
),
'memory_soft_block_decay_factor': self.opt.get(
'memory_soft_block_decay_factor', 0.0
),
}
```
###Assistant :
Extract heuristics from self.opt.
|
757 | def plugin_list_buttons(context, model):
return _get_registered_content(model, 'list_buttons', context)
|
Render all list buttons registered by plugins
| 7 | 7 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def plugin_list_buttons(context, model):
return _get_registered_content(model, 'list_buttons', context)
```
###Assistant :
Render all list buttons registered by plugins
|
758 | def drop_path(self, inputs):
# if prob is 0 or eval mode, return original input
if self.drop_prob == 0. or not self.training:
return inputs
keep_prob = 1 - self.drop_prob
keep_prob = paddle.to_tensor(keep_prob, dtype='float32')
shape = (inputs.shape[0], ) + (1, ) * (inputs.ndim - 1) # shape=(N, 1, 1, 1)
random_tensor = keep_prob + paddle.rand(shape, dtype=inputs.dtype)
random_tensor = random_tensor.floor() # mask
output = inputs.divide(keep_prob) * random_tensor  # divide by keep_prob to preserve the output expectation
return output
| drop path op
Args:
input: tensor with arbitrary shape
drop_prob: float number of drop path probability, default: 0.0
training: bool, if current mode is training, default: False
Returns:
output: output tensor after drop path
| 34 | 73 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def drop_path(self, inputs):
# if prob is 0 or eval mode, return original input
if self.drop_prob == 0. or not self.training:
return inputs
keep_prob = 1 - self.drop_prob
keep_prob = paddle.to_tensor(keep_prob, dtype='float32')
shape = (inputs.shape[0], ) + (1, ) * (inputs.ndim - 1) # shape=(N, 1, 1, 1)
random_tensor = keep_prob + paddle.rand(shape, dtype=inputs.dtype)
random_tensor = random_tensor.floor() # mask
output = inputs.divide(keep_prob) * random_tensor  # divide by keep_prob to preserve the output expectation
return output
```
###Assistant : drop path op
Args:
input: tensor with arbitrary shape
drop_prob: float number of drop path probability, default: 0.0
training: bool, if current mode is training, default: False
Returns:
output: output tensor after drop path
|
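The same stochastic-depth idea in NumPy, showing why dividing by `keep_prob` keeps the expected output unchanged; the shapes and drop probability are illustrative.
```Python
import numpy as np

def drop_path(x, drop_prob=0.2, training=True):
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)          # one draw per sample
    mask = np.floor(keep_prob + np.random.rand(*shape))  # 1 with prob keep_prob, else 0
    return x / keep_prob * mask                          # rescale to preserve the expectation

x = np.ones((10000, 4))
print(drop_path(x).mean())  # close to 1.0 on average
```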
759 | def set_style(style=None, rc=None):
style_object = axes_style(style, rc)
mpl.rcParams.update(style_object)
|
Set the parameters that control the general style of the plots.
The style parameters control properties like the color of the background and
whether a grid is enabled by default. This is accomplished using the
matplotlib rcParams system.
The options are illustrated in the
:doc:`aesthetics tutorial <../tutorial/aesthetics>`.
See :func:`axes_style` to get the parameter values.
Parameters
----------
style : dict, or one of {darkgrid, whitegrid, dark, white, ticks}
A dictionary of parameters or the name of a preconfigured style.
rc : dict, optional
Parameter mappings to override the values in the preset seaborn
style dictionaries. This only updates parameters that are
considered part of the style definition.
Examples
--------
.. include:: ../docstrings/set_style.rst
| 111 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def set_style(style=None, rc=None):
style_object = axes_style(style, rc)
mpl.rcParams.update(style_object)
```
###Assistant :
Set the parameters that control the general style of the plots.
The style parameters control properties like the color of the background and
whether a grid is enabled by default. This is accomplished using the
matplotlib rcParams system.
The options are illustrated in the
:doc:`aesthetics tutorial <../tutorial/aesthetics>`.
See :func:`axes_style` to get the parameter values.
Parameters
----------
style : dict, or one of {darkgrid, whitegrid, dark, white, ticks}
A dictionary of parameters or the name of a preconfigured style.
rc : dict, optional
Parameter mappings to override the values in the preset seaborn
style dictionaries. This only updates parameters that are
considered part of the style definition.
Examples
--------
.. include:: ../docstrings/set_style.rst
|
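A minimal usage sketch, combining a preset with a small rc override; the resulting values land in matplotlib's rcParams.
```Python
import matplotlib as mpl
import seaborn as sns

# Apply the "whitegrid" preset, softening the grid colour via an rc override.
sns.set_style("whitegrid", rc={"grid.color": "0.9"})
print(mpl.rcParams["axes.grid"], mpl.rcParams["grid.color"])  # True 0.9
```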
760 | def _resize(self, image, shorter=800, longer=1333, size_divisor=32, resample=Image.BICUBIC):
if not isinstance(image, Image.Image):
image = self.to_pil_image(image)
w, h = image.size
min_size = shorter
max_size = longer
scale = min_size / min(w, h)
if h < w:
newh, neww = min_size, scale * w
else:
newh, neww = scale * h, min_size
if max(newh, neww) > max_size:
scale = max_size / max(newh, neww)
newh = newh * scale
neww = neww * scale
newh, neww = int(newh + 0.5), int(neww + 0.5)
newh, neww = newh // size_divisor * size_divisor, neww // size_divisor * size_divisor
return self.resize(image, size=(neww, newh), resample=resample)
|
Resizes the shorter edge of `image` to `shorter` and limits the longer edge to under `longer`, while preserving
the aspect ratio. Also makes sure that both the height and width can be divided by `size_divisor`.
Based on original implementation:
https://github.com/dandelin/ViLT/blob/3db8b5035464afee84d951bf6322e1b27f1d072d/vilt/transforms/utils.py#L5
Args:
image (`PIL.Image`):
The image to resize.
shorter (`int`, *optional*, defaults to `800`):
The size to which to resize the shorter side of the image.
longer (`int`, *optional*, defaults to `1333`):
The size by which to limit the longer side of the image, while preserving the aspect ratio.
size_divisor (`int`, *optional*, defaults to `32`):
The size by which both the height and the width must be divisible.
resample (`int`, *optional*, defaults to `PIL.Image.BICUBIC`):
An optional resampling filter.
| 117 | 97 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _resize(self, image, shorter=800, longer=1333, size_divisor=32, resample=Image.BICUBIC):
if not isinstance(image, Image.Image):
image = self.to_pil_image(image)
w, h = image.size
min_size = shorter
max_size = longer
scale = min_size / min(w, h)
if h < w:
newh, neww = min_size, scale * w
else:
newh, neww = scale * h, min_size
if max(newh, neww) > max_size:
scale = max_size / max(newh, neww)
newh = newh * scale
neww = neww * scale
newh, neww = int(newh + 0.5), int(neww + 0.5)
newh, neww = newh // size_divisor * size_divisor, neww // size_divisor * size_divisor
return self.resize(image, size=(neww, newh), resample=resample)
```
###Assistant :
Resizes the shorter edge of `image` to `shorter` and limits the longer edge to under `longer`, while preserving
the aspect ratio. Also makes sure that both the height and width can be divided by `size_divisor`.
Based on original implementation:
https://github.com/dandelin/ViLT/blob/3db8b5035464afee84d951bf6322e1b27f1d072d/vilt/transforms/utils.py#L5
Args:
image (`PIL.Image`):
The image to resize.
shorter (`int`, *optional*, defaults to `800`):
The size to which to resize the shorter side of the image.
longer (`int`, *optional*, defaults to `1333`):
The size by which to limit the longer side of the image, while preserving the aspect ratio.
size_divisor (`int`, *optional*, defaults to `32`):
The size by which both the height and the width must be divisible.
resample (`int`, *optional*, defaults to `PIL.Image.BICUBIC`):
An optional resampling filter.
|
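The size arithmetic on its own, as a quick sanity check with made-up image dimensions (no PIL involved):
```Python
def target_size(w, h, shorter=800, longer=1333, size_divisor=32):
    scale = shorter / min(w, h)
    if h < w:
        newh, neww = shorter, scale * w
    else:
        newh, neww = scale * h, shorter
    if max(newh, neww) > longer:                 # cap the longer side
        cap = longer / max(newh, neww)
        newh, neww = newh * cap, neww * cap
    newh, neww = int(newh + 0.5), int(neww + 0.5)
    # snap both sides down to a multiple of size_divisor
    return neww // size_divisor * size_divisor, newh // size_divisor * size_divisor

print(target_size(640, 480))    # (1056, 800)
print(target_size(4000, 1000))  # (1312, 320): the long side is capped near 1333
```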
761 | def extra_action_out_fn(self) -> Dict[str, TensorType]:
extra_action_fetches = super().extra_action_out_fn()
extra_action_fetches.update(self._policy_extra_action_fetches)
return extra_action_fetches
| Extra values to fetch and return from compute_actions().
Returns:
Dict[str, TensorType]: An extra fetch-dict to be passed to and
returned from the compute_actions() call.
| 24 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def extra_action_out_fn(self) -> Dict[str, TensorType]:
extra_action_fetches = super().extra_action_out_fn()
extra_action_fetches.update(self._policy_extra_action_fetches)
return extra_action_fetches
```
###Assistant : Extra values to fetch and return from compute_actions().
Returns:
Dict[str, TensorType]: An extra fetch-dict to be passed to and
returned from the compute_actions() call.
|
762 | def get_all_exported_dataset_infos(cls) -> dict:
dset_infos_file_path = os.path.join(cls.get_imported_module_dir(), config.DATASETDICT_INFOS_FILENAME)
if os.path.exists(dset_infos_file_path):
return DatasetInfosDict.from_directory(cls.get_imported_module_dir())
return {}
| Empty dict if it doesn't exist
Example:
```py
>>> from datasets import load_dataset_builder
>>> ds_builder = load_dataset_builder('rotten_tomatoes')
>>> ds_builder.get_all_exported_dataset_infos()
{'default': DatasetInfo(description="Movie Review Dataset.\nThis is a dataset of containing 5,331 positive and 5,331 negative processed\nsentences from Rotten Tomatoes movie reviews. This data was first used in Bo\nPang and Lillian Lee, ``Seeing stars: Exploiting class relationships for\nsentiment categorization with respect to rating scales.'', Proceedings of the\nACL, 2005.\n", citation='@InProceedings{Pang+Lee:05a,\n author = {Bo Pang and Lillian Lee},\n title = {Seeing stars: Exploiting class relationships for sentiment\n categorization with respect to rating scales},\n booktitle = {Proceedings of the ACL},\n year = 2005\n}\n', homepage='http://www.cs.cornell.edu/people/pabo/movie-review-data/', license='', features={'text': Value(dtype='string', id=None), 'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None)}, post_processed=None, supervised_keys=SupervisedKeysData(input='', output=''), task_templates=[TextClassification(task='text-classification', text_column='text', label_column='label')], builder_name='rotten_tomatoes_movie_review', config_name='default', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=1074810, num_examples=8530, dataset_name='rotten_tomatoes_movie_review'), 'validation': SplitInfo(name='validation', num_bytes=134679, num_examples=1066, dataset_name='rotten_tomatoes_movie_review'), 'test': SplitInfo(name='test', num_bytes=135972, num_examples=1066, dataset_name='rotten_tomatoes_movie_review')}, download_checksums={'https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz': {'num_bytes': 487770, 'checksum': 'a05befe52aafda71d458d188a1c54506a998b1308613ba76bbda2e5029409ce9'}}, download_size=487770, post_processing_size=None, dataset_size=1345461, size_in_bytes=1833231)}
```
| 140 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_all_exported_dataset_infos(cls) -> dict:
dset_infos_file_path = os.path.join(cls.get_imported_module_dir(), config.DATASETDICT_INFOS_FILENAME)
if os.path.exists(dset_infos_file_path):
return DatasetInfosDict.from_directory(cls.get_imported_module_dir())
return {}
```
###Assistant : Empty dict if it doesn't exist
Example:
```py
>>> from datasets import load_dataset_builder
>>> ds_builder = load_dataset_builder('rotten_tomatoes')
>>> ds_builder.get_all_exported_dataset_infos()
{'default': DatasetInfo(description="Movie Review Dataset.\nThis is a dataset of containing 5,331 positive and 5,331 negative processed\nsentences from Rotten Tomatoes movie reviews. This data was first used in Bo\nPang and Lillian Lee, ``Seeing stars: Exploiting class relationships for\nsentiment categorization with respect to rating scales.'', Proceedings of the\nACL, 2005.\n", citation='@InProceedings{Pang+Lee:05a,\n author = {Bo Pang and Lillian Lee},\n title = {Seeing stars: Exploiting class relationships for sentiment\n categorization with respect to rating scales},\n booktitle = {Proceedings of the ACL},\n year = 2005\n}\n', homepage='http://www.cs.cornell.edu/people/pabo/movie-review-data/', license='', features={'text': Value(dtype='string', id=None), 'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None)}, post_processed=None, supervised_keys=SupervisedKeysData(input='', output=''), task_templates=[TextClassification(task='text-classification', text_column='text', label_column='label')], builder_name='rotten_tomatoes_movie_review', config_name='default', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=1074810, num_examples=8530, dataset_name='rotten_tomatoes_movie_review'), 'validation': SplitInfo(name='validation', num_bytes=134679, num_examples=1066, dataset_name='rotten_tomatoes_movie_review'), 'test': SplitInfo(name='test', num_bytes=135972, num_examples=1066, dataset_name='rotten_tomatoes_movie_review')}, download_checksums={'https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz': {'num_bytes': 487770, 'checksum': 'a05befe52aafda71d458d188a1c54506a998b1308613ba76bbda2e5029409ce9'}}, download_size=487770, post_processing_size=None, dataset_size=1345461, size_in_bytes=1833231)}
```
|
763 | def to_sanitized_dict(self) -> Dict[str, Any]:
d = self.to_dict()
d = {
** d, ** {
"train_batch_size": self.train_batch_size,
"eval_batch_size": self.eval_batch_size
}
}
valid_types = [bool, int, float, str]
valid_types.append(paddle.Tensor)
return {
k: v if type(v) in valid_types else str(v)
for k, v in d.items()
}
|
Sanitized serialization to use with TensorBoard’s hparams
| 7 | 44 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def to_sanitized_dict(self) -> Dict[str, Any]:
d = self.to_dict()
d = {
** d, ** {
"train_batch_size": self.train_batch_size,
"eval_batch_size": self.eval_batch_size
}
}
valid_types = [bool, int, float, str]
valid_types.append(paddle.Tensor)
return {
k: v if type(v) in valid_types else str(v)
for k, v in d.items()
}
```
###Assistant :
Sanitized serialization to use with TensorBoard’s hparams
|
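A minimal, framework-agnostic sketch of the sanitisation idea in the row above: keep the types TensorBoard's hparams plugin accepts and stringify everything else. The helper name `sanitize_hparams` is hypothetical, and tensor handling (e.g. `paddle.Tensor`) is deliberately omitted.

```python
from typing import Any, Dict

def sanitize_hparams(d: Dict[str, Any]) -> Dict[str, Any]:
    # TensorBoard hparams only accept bool/int/float/str; stringify the rest.
    valid_types = (bool, int, float, str)
    return {k: v if isinstance(v, valid_types) else str(v) for k, v in d.items()}

print(sanitize_hparams({"lr": 3e-4, "layers": [256, 128], "optimizer": "adam"}))
# {'lr': 0.0003, 'layers': '[256, 128]', 'optimizer': 'adam'}
```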
764 | def readinto(self, b):
self._check_can_read()
return self._buffer.readinto(b)
| Read bytes into b.
Returns the number of bytes read (0 for EOF).
| 13 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def readinto(self, b):
self._check_can_read()
return self._buffer.readinto(b)
```
###Assistant : Read bytes into b.
Returns the number of bytes read (0 for EOF).
|
765 | def _output_type_handler(cursor, name, defaultType, length, precision, scale):
if defaultType == Database.NUMBER:
if scale == -127:
if precision == 0:
# NUMBER column: decimal-precision floating point.
# This will normally be an integer from a sequence,
# but it could be a decimal value.
outconverter = FormatStylePlaceholderCursor._output_number_converter
else:
# FLOAT column: binary-precision floating point.
# This comes from FloatField columns.
outconverter = float
elif precision > 0:
# NUMBER(p,s) column: decimal-precision fixed point.
# This comes from IntegerField and DecimalField columns.
outconverter = FormatStylePlaceholderCursor._get_decimal_converter(
precision, scale
)
else:
# No type information. This normally comes from a
# mathematical expression in the SELECT list. Guess int
# or Decimal based on whether it has a decimal point.
outconverter = FormatStylePlaceholderCursor._output_number_converter
return cursor.var(
Database.STRING,
size=255,
arraysize=cursor.arraysize,
outconverter=outconverter,
)
|
Called for each db column fetched from cursors. Return numbers as the
appropriate Python type.
| 15 | 126 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _output_type_handler(cursor, name, defaultType, length, precision, scale):
if defaultType == Database.NUMBER:
if scale == -127:
if precision == 0:
# NUMBER column: decimal-precision floating point.
# This will normally be an integer from a sequence,
# but it could be a decimal value.
outconverter = FormatStylePlaceholderCursor._output_number_converter
else:
# FLOAT column: binary-precision floating point.
# This comes from FloatField columns.
outconverter = float
elif precision > 0:
# NUMBER(p,s) column: decimal-precision fixed point.
# This comes from IntegerField and DecimalField columns.
outconverter = FormatStylePlaceholderCursor._get_decimal_converter(
precision, scale
)
else:
# No type information. This normally comes from a
# mathematical expression in the SELECT list. Guess int
# or Decimal based on whether it has a decimal point.
outconverter = FormatStylePlaceholderCursor._output_number_converter
return cursor.var(
Database.STRING,
size=255,
arraysize=cursor.arraysize,
outconverter=outconverter,
)
```
###Assistant :
Called for each db column fetched from cursors. Return numbers as the
appropriate Python type.
|
766 | def decode_nested_example(schema, obj):
# Nested structures: we allow dict, list/tuples, sequences
if isinstance(schema, dict):
return {
k: decode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
}
elif isinstance(schema, (list, tuple)):
sub_schema = schema[0]
if obj is None:
return None
else:
if len(obj) > 0:
for first_elmt in obj:
if _check_non_null_non_empty_recursive(first_elmt, sub_schema):
break
if decode_nested_example(sub_schema, first_elmt) != first_elmt:
return [decode_nested_example(sub_schema, o) for o in obj]
return list(obj)
elif isinstance(schema, Sequence):
        # We allow to reverse list of dict => dict of list for compatibility with tfds
if isinstance(schema.feature, dict):
return {k: decode_nested_example([schema.feature[k]], obj[k]) for k in schema.feature}
else:
return decode_nested_example([schema.feature], obj)
# Object with special decoding:
elif isinstance(schema, (Audio, Image)):
return schema.decode_example(obj) if obj is not None else None
return obj
| Decode a nested example.
This is used since some features (in particular Audio and Image) have some logic during decoding.
To avoid iterating over possibly long lists, it first checks (recursively) if the first element that is not None or empty (if it is a sequence) has to be decoded.
If the first element needs to be decoded, then all the elements of the list will be decoded, otherwise they'll stay the same.
| 73 | 121 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def decode_nested_example(schema, obj):
# Nested structures: we allow dict, list/tuples, sequences
if isinstance(schema, dict):
return {
k: decode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
}
elif isinstance(schema, (list, tuple)):
sub_schema = schema[0]
if obj is None:
return None
else:
if len(obj) > 0:
for first_elmt in obj:
if _check_non_null_non_empty_recursive(first_elmt, sub_schema):
break
if decode_nested_example(sub_schema, first_elmt) != first_elmt:
return [decode_nested_example(sub_schema, o) for o in obj]
return list(obj)
elif isinstance(schema, Sequence):
        # We allow to reverse list of dict => dict of list for compatibility with tfds
if isinstance(schema.feature, dict):
return {k: decode_nested_example([schema.feature[k]], obj[k]) for k in schema.feature}
else:
return decode_nested_example([schema.feature], obj)
# Object with special decoding:
elif isinstance(schema, (Audio, Image)):
return schema.decode_example(obj) if obj is not None else None
return obj
```
###Assistant : Decode a nested example.
This is used since some features (in particular Audio and Image) have some logic during decoding.
To avoid iterating over possibly long lists, it first checks (recursively) if the first element that is not None or empty (if it is a sequence) has to be decoded.
If the first element needs to be decoded, then all the elements of the list will be decoded, otherwise they'll stay the same.
|
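A toy sketch of the "peek at the first decodable element" optimisation described in the docstring above, using a plain callable in place of the library's feature types (the names here are illustrative, not part of `datasets`).

```python
def decode_list(decode, values):
    # Find the first non-None element and check whether decoding changes it.
    first = next((v for v in values if v is not None), None)
    if first is None or decode(first) == first:
        return list(values)              # decoding is a no-op: leave items untouched
    return [decode(v) for v in values]   # otherwise decode every element

print(decode_list(str.upper, ["a", "b"]))  # ['A', 'B'] -> decoding applied
print(decode_list(str.upper, ["A", "B"]))  # ['A', 'B'] -> returned unchanged
```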
767 | def running_under_virtualenv() -> bool:
return _running_under_venv() or _running_under_legacy_virtualenv()
| True if we're running inside a virtual environment, False otherwise. | 10 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def running_under_virtualenv() -> bool:
return _running_under_venv() or _running_under_legacy_virtualenv()
```
###Assistant : True if we're running inside a virtual environment, False otherwise.
|
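The two private helpers above are pip internals; a minimal sketch of the checks they typically wrap (a PEP 405 venv makes `sys.prefix` differ from `sys.base_prefix`, while legacy virtualenv injected `sys.real_prefix`):

```python
import sys

def in_virtualenv() -> bool:
    # PEP 405 venv: base_prefix differs from prefix; legacy virtualenv: real_prefix exists.
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix) or hasattr(sys, "real_prefix")

print(in_virtualenv())
```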
768 | def forward_train(self, x, data_samples, proposal_cfg=None, **kwargs):
img_metas = [data_sample['meta'] for data_sample in data_samples]
outs = self(x)
gt_bboxes = [
data_sample.gt_instances.bboxes for data_sample in data_samples
]
if hasattr(data_samples[0].gt_instances, 'labels'):
gt_labels = [
data_sample.gt_instances.labels for data_sample in data_samples
]
else:
# RPN
gt_labels = None
if hasattr(data_samples[0], 'instances_ignore'):
gt_bboxes_ignore = [
data_sample.ignored_instances.bboxes
for data_sample in data_samples
]
else:
gt_bboxes_ignore = None
if gt_labels is None:
loss_inputs = outs + (gt_bboxes, img_metas)
else:
loss_inputs = outs + (gt_bboxes, gt_labels, img_metas)
losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
if proposal_cfg is None:
return losses
else:
results_list = self.get_results(
*outs, img_metas=img_metas, cfg=proposal_cfg)
return losses, results_list
|
Args:
x (list[Tensor]): Features from FPN.
data_samples (list[:obj:`GeneralData`]): Each item contains
the meta information of each image and corresponding
annotations.
proposal_cfg (mmcv.Config): Test / postprocessing configuration,
if None, test_cfg would be used
Returns:
tuple or Tensor: When `proposal_cfg` is None, the detector is a \
normal one-stage detector, The return value is the losses.
- losses: (dict[str, Tensor]): A dictionary of loss components.
When the `proposal_cfg` is not None, the head is used as a
`rpn_head`, the return value is a tuple contains:
- losses: (dict[str, Tensor]): A dictionary of loss components.
- results_list (list[:obj:`InstanceData`]): Detection
results of each image after the post process.
Each item usually contains following keys.
- scores (Tensor): Classification scores, has a shape
(num_instance,)
- labels (Tensor): Labels of bboxes, has a shape
(num_instances,).
- bboxes (Tensor): Has a shape (num_instances, 4),
the last dimension 4 arrange as (x1, y1, x2, y2).
| 147 | 97 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def forward_train(self, x, data_samples, proposal_cfg=None, **kwargs):
img_metas = [data_sample['meta'] for data_sample in data_samples]
outs = self(x)
gt_bboxes = [
data_sample.gt_instances.bboxes for data_sample in data_samples
]
if hasattr(data_samples[0].gt_instances, 'labels'):
gt_labels = [
data_sample.gt_instances.labels for data_sample in data_samples
]
else:
# RPN
gt_labels = None
if hasattr(data_samples[0], 'instances_ignore'):
gt_bboxes_ignore = [
data_sample.ignored_instances.bboxes
for data_sample in data_samples
]
else:
gt_bboxes_ignore = None
if gt_labels is None:
loss_inputs = outs + (gt_bboxes, img_metas)
else:
loss_inputs = outs + (gt_bboxes, gt_labels, img_metas)
losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
if proposal_cfg is None:
return losses
else:
results_list = self.get_results(
*outs, img_metas=img_metas, cfg=proposal_cfg)
return losses, results_list
```
###Assistant :
Args:
x (list[Tensor]): Features from FPN.
data_samples (list[:obj:`GeneralData`]): Each item contains
the meta information of each image and corresponding
annotations.
proposal_cfg (mmcv.Config): Test / postprocessing configuration,
if None, test_cfg would be used
Returns:
tuple or Tensor: When `proposal_cfg` is None, the detector is a \
normal one-stage detector, The return value is the losses.
- losses: (dict[str, Tensor]): A dictionary of loss components.
When the `proposal_cfg` is not None, the head is used as a
`rpn_head`, the return value is a tuple contains:
- losses: (dict[str, Tensor]): A dictionary of loss components.
- results_list (list[:obj:`InstanceData`]): Detection
results of each image after the post process.
Each item usually contains following keys.
- scores (Tensor): Classification scores, has a shape
(num_instance,)
- labels (Tensor): Labels of bboxes, has a shape
(num_instances,).
- bboxes (Tensor): Has a shape (num_instances, 4),
the last dimension 4 arrange as (x1, y1, x2, y2).
|
769 | def cf(self):
return {
cf.name: cf.deserialize(self.custom_field_data.get(cf.name))
for cf in self.custom_fields
}
|
Return a dictionary mapping each custom field for this instance to its deserialized value.
```python
>>> tenant = Tenant.objects.first()
>>> tenant.cf
{'primary_site': <Site: DM-NYC>, 'cust_id': 'DMI01', 'is_active': True}
```
| 29 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def cf(self):
return {
cf.name: cf.deserialize(self.custom_field_data.get(cf.name))
for cf in self.custom_fields
}
```
###Assistant :
Return a dictionary mapping each custom field for this instance to its deserialized value.
```python
>>> tenant = Tenant.objects.first()
>>> tenant.cf
{'primary_site': <Site: DM-NYC>, 'cust_id': 'DMI01', 'is_active': True}
```
|
770 | def test_arf_layout_negative_a_check(self):
G = self.Gs
pytest.raises(ValueError, nx.arf_layout, G=G, a=-1)
|
Checks input parameters correctly raises errors. For example, `a` should be larger than 1
| 14 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_arf_layout_negative_a_check(self):
G = self.Gs
pytest.raises(ValueError, nx.arf_layout, G=G, a=-1)
```
###Assistant :
Checks input parameters correctly raises errors. For example, `a` should be larger than 1
|
771 | async def test_connected_device_registered(hass):
registry = mock_registry(hass)
dispatches = []
| Test dispatch on connected device being registered. | 7 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_connected_device_registered(hass):
registry = mock_registry(hass)
dispatches = []
```
###Assistant : Test dispatch on connected device being registered.
|
772 | def test_ddp_sharded_strategy_checkpoint_multi_gpu(tmpdir):
model = BoringModel()
trainer = Trainer(gpus=2, strategy="ddp_sharded_spawn", fast_dev_run=True)
trainer.fit(model)
checkpoint_path = os.path.join(tmpdir, "model.pt")
trainer.save_checkpoint(checkpoint_path)
saved_model = BoringModel.load_from_checkpoint(checkpoint_path)
# Assert model parameters are identical after loading
for ddp_param, shard_param in zip(model.parameters(), saved_model.parameters()):
assert torch.equal(ddp_param.to("cpu"), shard_param)
@RunIf(min_gpus=2, skip_windows=True, fairscale=True) | Test to ensure that checkpoint is saved correctly when using multiple GPUs. | 12 | 39 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_ddp_sharded_strategy_checkpoint_multi_gpu(tmpdir):
model = BoringModel()
trainer = Trainer(gpus=2, strategy="ddp_sharded_spawn", fast_dev_run=True)
trainer.fit(model)
checkpoint_path = os.path.join(tmpdir, "model.pt")
trainer.save_checkpoint(checkpoint_path)
saved_model = BoringModel.load_from_checkpoint(checkpoint_path)
# Assert model parameters are identical after loading
for ddp_param, shard_param in zip(model.parameters(), saved_model.parameters()):
assert torch.equal(ddp_param.to("cpu"), shard_param)
@RunIf(min_gpus=2, skip_windows=True, fairscale=True)
```
###Assistant : Test to ensure that checkpoint is saved correctly when using multiple GPUs.
|
773 | def autoscale(self) -> None:
for deployment_name, (
deployment_info,
route_prefix,
) in self.list_deployments().items():
deployment_config = deployment_info.deployment_config
autoscaling_policy = deployment_info.autoscaling_policy
if autoscaling_policy is None:
continue
replicas = self.deployment_state_manager._deployment_states[
deployment_name
]._replicas
running_replicas = replicas.get([ReplicaState.RUNNING])
current_num_ongoing_requests = []
for replica in running_replicas:
replica_tag = replica.replica_tag
num_ongoing_requests = self.autoscaling_metrics_store.window_average(
replica_tag,
time.time() - autoscaling_policy.config.look_back_period_s,
)
if num_ongoing_requests is not None:
current_num_ongoing_requests.append(num_ongoing_requests)
if len(current_num_ongoing_requests) == 0:
continue
new_deployment_config = deployment_config.copy()
decision_num_replicas = autoscaling_policy.get_decision_num_replicas(
current_num_ongoing_requests=current_num_ongoing_requests,
curr_target_num_replicas=deployment_config.num_replicas,
)
new_deployment_config.num_replicas = decision_num_replicas
new_deployment_info = copy(deployment_info)
new_deployment_info.deployment_config = new_deployment_config
goal_id, updating = self.deployment_state_manager.deploy(
deployment_name, new_deployment_info
)
| Updates autoscaling deployments with calculated num_replicas. | 6 | 85 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def autoscale(self) -> None:
for deployment_name, (
deployment_info,
route_prefix,
) in self.list_deployments().items():
deployment_config = deployment_info.deployment_config
autoscaling_policy = deployment_info.autoscaling_policy
if autoscaling_policy is None:
continue
replicas = self.deployment_state_manager._deployment_states[
deployment_name
]._replicas
running_replicas = replicas.get([ReplicaState.RUNNING])
current_num_ongoing_requests = []
for replica in running_replicas:
replica_tag = replica.replica_tag
num_ongoing_requests = self.autoscaling_metrics_store.window_average(
replica_tag,
time.time() - autoscaling_policy.config.look_back_period_s,
)
if num_ongoing_requests is not None:
current_num_ongoing_requests.append(num_ongoing_requests)
if len(current_num_ongoing_requests) == 0:
continue
new_deployment_config = deployment_config.copy()
decision_num_replicas = autoscaling_policy.get_decision_num_replicas(
current_num_ongoing_requests=current_num_ongoing_requests,
curr_target_num_replicas=deployment_config.num_replicas,
)
new_deployment_config.num_replicas = decision_num_replicas
new_deployment_info = copy(deployment_info)
new_deployment_info.deployment_config = new_deployment_config
goal_id, updating = self.deployment_state_manager.deploy(
deployment_name, new_deployment_info
)
```
###Assistant : Updates autoscaling deployments with calculated num_replicas.
|
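A hedged, policy-agnostic sketch of the sizing decision the loop above delegates to `autoscaling_policy.get_decision_num_replicas`; the target-per-replica heuristic and the function name are assumptions for illustration, not Ray Serve's actual policy.

```python
import math

def decide_num_replicas(current_num_ongoing_requests, target_per_replica,
                        min_replicas=1, max_replicas=100):
    # Scale so the average number of ongoing requests per replica approaches the target.
    total = sum(current_num_ongoing_requests)
    desired = math.ceil(total / target_per_replica) if target_per_replica > 0 else min_replicas
    return max(min_replicas, min(max_replicas, desired))

print(decide_num_replicas([3, 5, 4], target_per_replica=2))  # 6
```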
774 | def match_state_dict(model_state_dict, weight_state_dict):
model_keys = sorted(model_state_dict.keys())
weight_keys = sorted(weight_state_dict.keys())
|
Match between the model state dict and pretrained weight state dict.
Return the matched state dict.
The method supposes that all the names in pretrained weight state dict are
subclass of the names in the model, if the prefix 'backbone.' in pretrained weight
keys is stripped. And we could get the candidates for each model key. Then we
select the name with the longest matched size as the final match result. For
example, the model state dict has the name of
'backbone.res2.res2a.branch2a.conv.weight' and the pretrained weight as
name of 'res2.res2a.branch2a.conv.weight' and 'branch2a.conv.weight'. We
match the 'res2.res2a.branch2a.conv.weight' to the model key.
| 99 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def match_state_dict(model_state_dict, weight_state_dict):
model_keys = sorted(model_state_dict.keys())
weight_keys = sorted(weight_state_dict.keys())
```
###Assistant :
Match between the model state dict and pretrained weight state dict.
Return the matched state dict.
The method supposes that all the names in pretrained weight state dict are
subclass of the names in the model, if the prefix 'backbone.' in pretrained weight
keys is stripped. And we could get the candidates for each model key. Then we
select the name with the longest matched size as the final match result. For
example, the model state dict has the name of
'backbone.res2.res2a.branch2a.conv.weight' and the pretrained weight as
name of 'res2.res2a.branch2a.conv.weight' and 'branch2a.conv.weight'. We
match the 'res2.res2a.branch2a.conv.weight' to the model key.
|
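A hedged sketch of the longest-suffix matching described above. It is simplified: shape checks and any extra prefix handling the real implementation may perform are ignored, and `match_keys` is an illustrative name.

```python
def match_keys(model_keys, weight_keys):
    # For every model key, pick the pretrained key that is its longest dot-aligned suffix.
    matched = {}
    for mk in model_keys:
        candidates = [wk for wk in weight_keys if mk == wk or mk.endswith("." + wk)]
        if candidates:
            matched[mk] = max(candidates, key=len)
    return matched

print(match_keys(["backbone.res2.res2a.branch2a.conv.weight"],
                 ["res2.res2a.branch2a.conv.weight", "branch2a.conv.weight"]))
# {'backbone.res2.res2a.branch2a.conv.weight': 'res2.res2a.branch2a.conv.weight'}
```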
775 | def local_node_connectivity(G, source, target, cutoff=None):
if target == source:
raise nx.NetworkXError("source and target have to be different nodes.")
# Maximum possible node independent paths
if G.is_directed():
possible = min(G.out_degree(source), G.in_degree(target))
else:
possible = min(G.degree(source), G.degree(target))
K = 0
if not possible:
return K
if cutoff is None:
cutoff = float("inf")
exclude = set()
for i in range(min(possible, cutoff)):
try:
path = _bidirectional_shortest_path(G, source, target, exclude)
exclude.update(set(path))
K += 1
except nx.NetworkXNoPath:
break
return K
| Compute node connectivity between source and target.
Pairwise or local node connectivity between two distinct and nonadjacent
nodes is the minimum number of nodes that must be removed (minimum
separating cutset) to disconnect them. By Menger's theorem, this is equal
to the number of node independent paths (paths that share no nodes other
than source and target). Which is what we compute in this function.
This algorithm is a fast approximation that gives a strict lower
bound on the actual number of node independent paths between two nodes [1]_.
It works for both directed and undirected graphs.
Parameters
----------
G : NetworkX graph
source : node
Starting node for node connectivity
target : node
Ending node for node connectivity
cutoff : integer
Maximum node connectivity to consider. If None, the minimum degree
of source or target is used as a cutoff. Default value None.
Returns
-------
k: integer
pairwise node connectivity
Examples
--------
>>> # Platonic octahedral graph has node connectivity 4
>>> # for each non adjacent node pair
>>> from networkx.algorithms import approximation as approx
>>> G = nx.octahedral_graph()
>>> approx.local_node_connectivity(G, 0, 5)
4
Notes
-----
This algorithm [1]_ finds node independent paths between two nodes by
computing their shortest path using BFS, marking the nodes of the path
found as 'used' and then searching other shortest paths excluding the
nodes marked as used until no more paths exist. It is not exact because
a shortest path could use nodes that, if the path were longer, may belong
to two different node independent paths. Thus it only guarantees a
strict lower bound on node connectivity.
Note that the authors propose a further refinement, losing accuracy and
gaining speed, which is not implemented yet.
See also
--------
all_pairs_node_connectivity
node_connectivity
References
----------
.. [1] White, Douglas R., and Mark Newman. 2001 A Fast Algorithm for
Node-Independent Paths. Santa Fe Institute Working Paper #01-07-035
http://eclectic.ss.uci.edu/~drwhite/working.pdf
| 314 | 74 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def local_node_connectivity(G, source, target, cutoff=None):
if target == source:
raise nx.NetworkXError("source and target have to be different nodes.")
# Maximum possible node independent paths
if G.is_directed():
possible = min(G.out_degree(source), G.in_degree(target))
else:
possible = min(G.degree(source), G.degree(target))
K = 0
if not possible:
return K
if cutoff is None:
cutoff = float("inf")
exclude = set()
for i in range(min(possible, cutoff)):
try:
path = _bidirectional_shortest_path(G, source, target, exclude)
exclude.update(set(path))
K += 1
except nx.NetworkXNoPath:
break
return K
```
###Assistant : Compute node connectivity between source and target.
Pairwise or local node connectivity between two distinct and nonadjacent
nodes is the minimum number of nodes that must be removed (minimum
separating cutset) to disconnect them. By Menger's theorem, this is equal
to the number of node independent paths (paths that share no nodes other
than source and target). Which is what we compute in this function.
This algorithm is a fast approximation that gives a strict lower
bound on the actual number of node independent paths between two nodes [1]_.
It works for both directed and undirected graphs.
Parameters
----------
G : NetworkX graph
source : node
Starting node for node connectivity
target : node
Ending node for node connectivity
cutoff : integer
Maximum node connectivity to consider. If None, the minimum degree
of source or target is used as a cutoff. Default value None.
Returns
-------
k: integer
pairwise node connectivity
Examples
--------
>>> # Platonic octahedral graph has node connectivity 4
>>> # for each non adjacent node pair
>>> from networkx.algorithms import approximation as approx
>>> G = nx.octahedral_graph()
>>> approx.local_node_connectivity(G, 0, 5)
4
Notes
-----
This algorithm [1]_ finds node independent paths between two nodes by
computing their shortest path using BFS, marking the nodes of the path
found as 'used' and then searching other shortest paths excluding the
nodes marked as used until no more paths exist. It is not exact because
a shortest path could use nodes that, if the path were longer, may belong
to two different node independent paths. Thus it only guarantees a
strict lower bound on node connectivity.
Note that the authors propose a further refinement, losing accuracy and
gaining speed, which is not implemented yet.
See also
--------
all_pairs_node_connectivity
node_connectivity
References
----------
.. [1] White, Douglas R., and Mark Newman. 2001 A Fast Algorithm for
Node-Independent Paths. Santa Fe Institute Working Paper #01-07-035
http://eclectic.ss.uci.edu/~drwhite/working.pdf
|
776 | def _copartition(self, axis, other, how, sort, force_repartition=False):
if isinstance(other, type(self)):
other = [other]
self_index = self.axes[axis]
others_index = [o.axes[axis] for o in other]
joined_index, make_reindexer = self._join_index_objects(
axis, [self_index] + others_index, how, sort
)
frames = [self] + other
non_empty_frames_idx = [
i for i, o in enumerate(frames) if o._partitions.size != 0
]
# If all frames are empty
if len(non_empty_frames_idx) == 0:
return (
self._partitions,
[o._partitions for o in other],
joined_index,
# There are no partition sizes because the resulting dataframe
# has no partitions.
[],
)
base_frame_idx = non_empty_frames_idx[0]
other_frames = frames[base_frame_idx + 1 :]
# Picking first non-empty frame
base_frame = frames[non_empty_frames_idx[0]]
base_index = base_frame.axes[axis]
# define conditions for reindexing and repartitioning `self` frame
do_reindex_base = not base_index.equals(joined_index)
do_repartition_base = force_repartition or do_reindex_base
# Perform repartitioning and reindexing for `base_frame` if needed.
# Also define length of base and frames. We will need to know the
# lengths for alignment.
if do_repartition_base:
reindexed_base = base_frame._partition_mgr_cls.map_axis_partitions(
axis,
base_frame._partitions,
make_reindexer(do_reindex_base, base_frame_idx),
)
if axis:
base_lengths = [obj.width() for obj in reindexed_base[0]]
else:
base_lengths = [obj.length() for obj in reindexed_base.T[0]]
else:
reindexed_base = base_frame._partitions
base_lengths = self._column_widths if axis else self._row_lengths
others_lengths = [o._axes_lengths[axis] for o in other_frames]
# define conditions for reindexing and repartitioning `other` frames
do_reindex_others = [
not o.axes[axis].equals(joined_index) for o in other_frames
]
do_repartition_others = [None] * len(other_frames)
for i in range(len(other_frames)):
do_repartition_others[i] = (
force_repartition
or do_reindex_others[i]
or others_lengths[i] != base_lengths
)
# perform repartitioning and reindexing for `other_frames` if needed
reindexed_other_list = [None] * len(other_frames)
for i in range(len(other_frames)):
if do_repartition_others[i]:
# indices of others frame start from `base_frame_idx` + 1
reindexed_other_list[i] = other_frames[
i
]._partition_mgr_cls.map_axis_partitions(
axis,
other_frames[i]._partitions,
make_reindexer(do_repartition_others[i], base_frame_idx + 1 + i),
lengths=base_lengths,
)
else:
reindexed_other_list[i] = other_frames[i]._partitions
reindexed_frames = (
[frames[i]._partitions for i in range(base_frame_idx)]
+ [reindexed_base]
+ reindexed_other_list
)
return (reindexed_frames[0], reindexed_frames[1:], joined_index, base_lengths)
|
Copartition two Modin DataFrames.
Perform aligning of partitions, index and partition blocks.
Parameters
----------
axis : {0, 1}
Axis to copartition along (0 - rows, 1 - columns).
other : PandasDataframe
Other Modin DataFrame(s) to copartition against.
how : str
How to manage joining the index object ("left", "right", etc.).
sort : bool
        Whether to sort the joined index or not.
force_repartition : bool, default: False
        Whether to force the repartitioning or not. By default,
this method will skip repartitioning if it is possible. This is because
reindexing is extremely inefficient. Because this method is used to
`join` or `append`, it is vital that the internal indices match.
Returns
-------
tuple
Tuple containing:
1) 2-d NumPy array of aligned left partitions
2) list of 2-d NumPy arrays of aligned right partitions
3) joined index along ``axis``
4) List with sizes of partitions along axis that partitioning
was done on. This list will be empty if and only if all
the frames are empty.
| 161 | 304 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _copartition(self, axis, other, how, sort, force_repartition=False):
if isinstance(other, type(self)):
other = [other]
self_index = self.axes[axis]
others_index = [o.axes[axis] for o in other]
joined_index, make_reindexer = self._join_index_objects(
axis, [self_index] + others_index, how, sort
)
frames = [self] + other
non_empty_frames_idx = [
i for i, o in enumerate(frames) if o._partitions.size != 0
]
# If all frames are empty
if len(non_empty_frames_idx) == 0:
return (
self._partitions,
[o._partitions for o in other],
joined_index,
# There are no partition sizes because the resulting dataframe
# has no partitions.
[],
)
base_frame_idx = non_empty_frames_idx[0]
other_frames = frames[base_frame_idx + 1 :]
# Picking first non-empty frame
base_frame = frames[non_empty_frames_idx[0]]
base_index = base_frame.axes[axis]
# define conditions for reindexing and repartitioning `self` frame
do_reindex_base = not base_index.equals(joined_index)
do_repartition_base = force_repartition or do_reindex_base
# Perform repartitioning and reindexing for `base_frame` if needed.
# Also define length of base and frames. We will need to know the
# lengths for alignment.
if do_repartition_base:
reindexed_base = base_frame._partition_mgr_cls.map_axis_partitions(
axis,
base_frame._partitions,
make_reindexer(do_reindex_base, base_frame_idx),
)
if axis:
base_lengths = [obj.width() for obj in reindexed_base[0]]
else:
base_lengths = [obj.length() for obj in reindexed_base.T[0]]
else:
reindexed_base = base_frame._partitions
base_lengths = self._column_widths if axis else self._row_lengths
others_lengths = [o._axes_lengths[axis] for o in other_frames]
# define conditions for reindexing and repartitioning `other` frames
do_reindex_others = [
not o.axes[axis].equals(joined_index) for o in other_frames
]
do_repartition_others = [None] * len(other_frames)
for i in range(len(other_frames)):
do_repartition_others[i] = (
force_repartition
or do_reindex_others[i]
or others_lengths[i] != base_lengths
)
# perform repartitioning and reindexing for `other_frames` if needed
reindexed_other_list = [None] * len(other_frames)
for i in range(len(other_frames)):
if do_repartition_others[i]:
# indices of others frame start from `base_frame_idx` + 1
reindexed_other_list[i] = other_frames[
i
]._partition_mgr_cls.map_axis_partitions(
axis,
other_frames[i]._partitions,
make_reindexer(do_repartition_others[i], base_frame_idx + 1 + i),
lengths=base_lengths,
)
else:
reindexed_other_list[i] = other_frames[i]._partitions
reindexed_frames = (
[frames[i]._partitions for i in range(base_frame_idx)]
+ [reindexed_base]
+ reindexed_other_list
)
return (reindexed_frames[0], reindexed_frames[1:], joined_index, base_lengths)
```
###Assistant :
Copartition two Modin DataFrames.
Perform aligning of partitions, index and partition blocks.
Parameters
----------
axis : {0, 1}
Axis to copartition along (0 - rows, 1 - columns).
other : PandasDataframe
Other Modin DataFrame(s) to copartition against.
how : str
How to manage joining the index object ("left", "right", etc.).
sort : bool
Whether sort the joined index or not.
force_repartition : bool, default: False
Whether force the repartitioning or not. By default,
this method will skip repartitioning if it is possible. This is because
reindexing is extremely inefficient. Because this method is used to
`join` or `append`, it is vital that the internal indices match.
Returns
-------
tuple
Tuple containing:
1) 2-d NumPy array of aligned left partitions
2) list of 2-d NumPy arrays of aligned right partitions
3) joined index along ``axis``
4) List with sizes of partitions along axis that partitioning
was done on. This list will be empty if and only if all
the frames are empty.
|
777 | def size(self) -> int:
# override Index.size to avoid materializing _values
return len(self)
# --------------------------------------------------------------------
# Levels Methods
|
Return the number of elements in the underlying data.
| 9 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def size(self) -> int:
# override Index.size to avoid materializing _values
return len(self)
# --------------------------------------------------------------------
# Levels Methods
```
###Assistant :
Return the number of elements in the underlying data.
|
778 | def groupby_agg(self, by, axis, agg, groupby_args, **kwargs):
# Currently we only expect 'by' to be a projection of the same frame.
# If 'by' holds a list of columns/series, then we create such projection
# to re-use code.
if not isinstance(by, DFAlgQueryCompiler):
if is_list_like(by):
by_cols = []
by_frames = []
for obj in by:
if isinstance(obj, str):
by_cols.append(obj)
elif hasattr(obj, "_modin_frame"):
by_frames.append(obj._modin_frame)
else:
raise NotImplementedError("unsupported groupby args")
by_cols = Index.__new__(Index, data=by_cols, dtype=self.columns.dtype)
by_frame = self.mask(col_labels=by_cols)
if by_frames:
by_frame = by_frame.concat(
axis=1, other_modin_frames=by_frames, ignore_index=True
)
else:
raise NotImplementedError("unsupported groupby args")
else:
by_frame = by._modin_frame
if axis != 0:
raise NotImplementedError("groupby is supported for axis = 0 only")
base = by_frame._find_common_projections_base(self)
if base is None:
raise NotImplementedError("unsupported groupby args")
if groupby_args["level"] is not None:
raise NotImplementedError("levels are not supported for groupby")
drop = kwargs.get("drop", True)
as_index = groupby_args.get("as_index", True)
groupby_cols = by_frame.columns
if isinstance(agg, dict):
agg_cols = agg.keys()
elif not drop:
# If 'by' data came from a different frame then 'self-aggregation'
# columns are more prioritized.
agg_cols = self.columns
else:
agg_cols = [col for col in self.columns if col not in groupby_cols]
# Mimic pandas behaviour: pandas does not allow for aggregation to be empty
# in case of multi-column 'by'.
if not as_index and len(agg_cols) == 0 and len(groupby_cols) > 1:
agg_cols = self.columns
# Create new base where all required columns are computed. We don't allow
        # complex expressions to be a group key or an aggregate operand.
allowed_nodes = (FrameNode, TransformNode)
if not isinstance(by_frame._op, allowed_nodes):
raise NotImplementedError(
"OmniSci doesn't allow complex expression to be a group key. "
f"The only allowed frame nodes are: {tuple(o.__name__ for o in allowed_nodes)}, "
f"met '{type(by_frame._op).__name__}'."
)
col_to_delete_template = "__delete_me_{name}"
|
Groupby with aggregation operation.
Parameters
----------
by : DFAlgQueryCompiler or list-like of str
Grouping keys.
axis : {0, 1}
Only rows groupby is supported, so should be 0.
agg : str or dict
Aggregates to compute.
groupby_args : dict
Additional groupby args.
**kwargs : dict
Keyword args. Currently ignored.
Returns
-------
OmnisciOnNativeDataframe
The new frame.
| 55 | 278 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def groupby_agg(self, by, axis, agg, groupby_args, **kwargs):
# Currently we only expect 'by' to be a projection of the same frame.
# If 'by' holds a list of columns/series, then we create such projection
# to re-use code.
if not isinstance(by, DFAlgQueryCompiler):
if is_list_like(by):
by_cols = []
by_frames = []
for obj in by:
if isinstance(obj, str):
by_cols.append(obj)
elif hasattr(obj, "_modin_frame"):
by_frames.append(obj._modin_frame)
else:
raise NotImplementedError("unsupported groupby args")
by_cols = Index.__new__(Index, data=by_cols, dtype=self.columns.dtype)
by_frame = self.mask(col_labels=by_cols)
if by_frames:
by_frame = by_frame.concat(
axis=1, other_modin_frames=by_frames, ignore_index=True
)
else:
raise NotImplementedError("unsupported groupby args")
else:
by_frame = by._modin_frame
if axis != 0:
raise NotImplementedError("groupby is supported for axis = 0 only")
base = by_frame._find_common_projections_base(self)
if base is None:
raise NotImplementedError("unsupported groupby args")
if groupby_args["level"] is not None:
raise NotImplementedError("levels are not supported for groupby")
drop = kwargs.get("drop", True)
as_index = groupby_args.get("as_index", True)
groupby_cols = by_frame.columns
if isinstance(agg, dict):
agg_cols = agg.keys()
elif not drop:
# If 'by' data came from a different frame then 'self-aggregation'
# columns are more prioritized.
agg_cols = self.columns
else:
agg_cols = [col for col in self.columns if col not in groupby_cols]
# Mimic pandas behaviour: pandas does not allow for aggregation to be empty
# in case of multi-column 'by'.
if not as_index and len(agg_cols) == 0 and len(groupby_cols) > 1:
agg_cols = self.columns
# Create new base where all required columns are computed. We don't allow
        # complex expressions to be a group key or an aggregate operand.
allowed_nodes = (FrameNode, TransformNode)
if not isinstance(by_frame._op, allowed_nodes):
raise NotImplementedError(
"OmniSci doesn't allow complex expression to be a group key. "
f"The only allowed frame nodes are: {tuple(o.__name__ for o in allowed_nodes)}, "
f"met '{type(by_frame._op).__name__}'."
)
col_to_delete_template = "__delete_me_{name}"
```
###Assistant :
Groupby with aggregation operation.
Parameters
----------
by : DFAlgQueryCompiler or list-like of str
Grouping keys.
axis : {0, 1}
Only rows groupby is supported, so should be 0.
agg : str or dict
Aggregates to compute.
groupby_args : dict
Additional groupby args.
**kwargs : dict
Keyword args. Currently ignored.
Returns
-------
OmnisciOnNativeDataframe
The new frame.
|
779 | def delete_subscription_from_snuba(query_subscription_id, **kwargs):
try:
subscription = QuerySubscription.objects.get(id=query_subscription_id)
except QuerySubscription.DoesNotExist:
metrics.incr("snuba.subscriptions.delete.subscription_does_not_exist")
return
if subscription.status not in [
QuerySubscription.Status.DELETING.value,
QuerySubscription.Status.DISABLED.value,
]:
metrics.incr("snuba.subscriptions.delete.incorrect_status")
return
if subscription.subscription_id is not None:
query_dataset = QueryDatasets(subscription.snuba_query.dataset)
entity_key = get_entity_key_from_snuba_query(
subscription.snuba_query, subscription.project.organization_id, subscription.project_id
)
_delete_from_snuba(
query_dataset,
subscription.subscription_id,
entity_key,
)
if subscription.status == QuerySubscription.Status.DELETING.value:
subscription.delete()
else:
subscription.update(subscription_id=None)
|
Task to delete a corresponding subscription in Snuba from a `QuerySubscription` in
Sentry.
If the local subscription is marked for deletion (as opposed to disabled),
then we delete the local subscription once we've successfully removed from Snuba.
| 37 | 48 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def delete_subscription_from_snuba(query_subscription_id, **kwargs):
try:
subscription = QuerySubscription.objects.get(id=query_subscription_id)
except QuerySubscription.DoesNotExist:
metrics.incr("snuba.subscriptions.delete.subscription_does_not_exist")
return
if subscription.status not in [
QuerySubscription.Status.DELETING.value,
QuerySubscription.Status.DISABLED.value,
]:
metrics.incr("snuba.subscriptions.delete.incorrect_status")
return
if subscription.subscription_id is not None:
query_dataset = QueryDatasets(subscription.snuba_query.dataset)
entity_key = get_entity_key_from_snuba_query(
subscription.snuba_query, subscription.project.organization_id, subscription.project_id
)
_delete_from_snuba(
query_dataset,
subscription.subscription_id,
entity_key,
)
if subscription.status == QuerySubscription.Status.DELETING.value:
subscription.delete()
else:
subscription.update(subscription_id=None)
```
###Assistant :
Task to delete a corresponding subscription in Snuba from a `QuerySubscription` in
Sentry.
If the local subscription is marked for deletion (as opposed to disabled),
then we delete the local subscription once we've successfully removed from Snuba.
|
780 | def test_changing_timer_with_messages_shown(qtbot, view, config_stub):
config_stub.val.messages.timeout = 900000 # 15s
view.show_message(message.MessageInfo(usertypes.MessageLevel.info, 'test'))
with qtbot.wait_signal(view._clear_timer.timeout):
config_stub.val.messages.timeout = 100
@pytest.mark.parametrize('count, expected', [(1, 100), (3, 300),
(5, 500), (7, 500)]) | When we change messages.timeout, the timer should be restarted. | 9 | 26 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_changing_timer_with_messages_shown(qtbot, view, config_stub):
config_stub.val.messages.timeout = 900000 # 15s
view.show_message(message.MessageInfo(usertypes.MessageLevel.info, 'test'))
with qtbot.wait_signal(view._clear_timer.timeout):
config_stub.val.messages.timeout = 100
@pytest.mark.parametrize('count, expected', [(1, 100), (3, 300),
(5, 500), (7, 500)])
```
###Assistant : When we change messages.timeout, the timer should be restarted.
|
781 | def get_events(start, end, filters=None):
from frappe.desk.calendar import get_event_conditions
events = []
event_color = {
"Pending": "#fff4f0",
"Under Review": "#d3e8fc",
"Cleared": "#eaf5ed",
"Rejected": "#fce7e7",
}
conditions = get_event_conditions("Interview", filters)
interviews = frappe.db.sql(
.format(
conditions=conditions
),
{"start": start, "end": end},
as_dict=True,
update={"allDay": 0},
)
for d in interviews:
subject_data = []
for field in ["name", "job_applicant", "interview_round"]:
if not d.get(field):
continue
subject_data.append(d.get(field))
color = event_color.get(d.status)
interview_data = {
"from": get_datetime("%s %s" % (d.scheduled_on, d.from_time or "00:00:00")),
"to": get_datetime("%s %s" % (d.scheduled_on, d.to_time or "00:00:00")),
"name": d.name,
"subject": "\n".join(subject_data),
"color": color if color else "#89bcde",
}
events.append(interview_data)
return events
| Returns events for Gantt / Calendar view rendering.
:param start: Start date-time.
:param end: End date-time.
:param filters: Filters (JSON).
SELECT DISTINCT
`tabInterview`.name, `tabInterview`.job_applicant, `tabInterview`.interview_round,
`tabInterview`.scheduled_on, `tabInterview`.status, `tabInterview`.from_time as from_time,
`tabInterview`.to_time as to_time
from
`tabInterview`
where
(`tabInterview`.scheduled_on between %(start)s and %(end)s)
and docstatus != 2
{conditions}
| 46 | 96 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_events(start, end, filters=None):
from frappe.desk.calendar import get_event_conditions
events = []
event_color = {
"Pending": "#fff4f0",
"Under Review": "#d3e8fc",
"Cleared": "#eaf5ed",
"Rejected": "#fce7e7",
}
conditions = get_event_conditions("Interview", filters)
interviews = frappe.db.sql(
.format(
conditions=conditions
),
{"start": start, "end": end},
as_dict=True,
update={"allDay": 0},
)
for d in interviews:
subject_data = []
for field in ["name", "job_applicant", "interview_round"]:
if not d.get(field):
continue
subject_data.append(d.get(field))
color = event_color.get(d.status)
interview_data = {
"from": get_datetime("%s %s" % (d.scheduled_on, d.from_time or "00:00:00")),
"to": get_datetime("%s %s" % (d.scheduled_on, d.to_time or "00:00:00")),
"name": d.name,
"subject": "\n".join(subject_data),
"color": color if color else "#89bcde",
}
events.append(interview_data)
return events
```
###Assistant : Returns events for Gantt / Calendar view rendering.
:param start: Start date-time.
:param end: End date-time.
:param filters: Filters (JSON).
SELECT DISTINCT
`tabInterview`.name, `tabInterview`.job_applicant, `tabInterview`.interview_round,
`tabInterview`.scheduled_on, `tabInterview`.status, `tabInterview`.from_time as from_time,
`tabInterview`.to_time as to_time
from
`tabInterview`
where
(`tabInterview`.scheduled_on between %(start)s and %(end)s)
and docstatus != 2
{conditions}
|
782 | def testFuncTrainableCheckpointConfigValidation(self):
with self.assertRaises(ValueError):
Experiment(
name="foo",
run="f1", # Will point to a wrapped function trainable
checkpoint_config=CheckpointConfig(checkpoint_at_end=True),
)
with self.assertRaises(ValueError):
Experiment(
name="foo",
run="f1",
checkpoint_config=CheckpointConfig(checkpoint_frequency=1),
)
with self.assertRaises(ValueError):
Experiment(
name="foo",
run=lambda config: 1,
checkpoint_config=CheckpointConfig(checkpoint_at_end=True),
)
| Raise an error when trying to specify checkpoint_at_end/checkpoint_frequency
with a function trainable. | 12 | 33 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def testFuncTrainableCheckpointConfigValidation(self):
with self.assertRaises(ValueError):
Experiment(
name="foo",
run="f1", # Will point to a wrapped function trainable
checkpoint_config=CheckpointConfig(checkpoint_at_end=True),
)
with self.assertRaises(ValueError):
Experiment(
name="foo",
run="f1",
checkpoint_config=CheckpointConfig(checkpoint_frequency=1),
)
with self.assertRaises(ValueError):
Experiment(
name="foo",
run=lambda config: 1,
checkpoint_config=CheckpointConfig(checkpoint_at_end=True),
)
```
###Assistant : Raise an error when trying to specify checkpoint_at_end/checkpoint_frequency
with a function trainable.
|
783 | def create_perspective_transform(src, dst, round=False, splat_args=False):
try:
transform_matrix = create_perspective_transform_matrix(src, dst)
error = None
except np.linalg.LinAlgError as e:
transform_matrix = np.identity(3, dtype=np.float)
error = "invalid input quads (%s and %s): %s" %(src, dst, e)
error = error.replace("\n", "")
to_eval = "def perspective_transform(%s):\n" %(
splat_args and "*pt" or "pt",
)
to_eval += " res = np.dot(transform_matrix, ((pt[0], ), (pt[1], ), (1, )))\n"
to_eval += " res = res / res[2]\n"
if round:
to_eval += " return (int(round(res[0][0])), int(round(res[1][0])))\n"
else:
to_eval += " return (res[0][0], res[1][0])\n"
locals = {
"transform_matrix": transform_matrix,
}
locals.update(globals())
exec(to_eval,locals,locals)
res = locals["perspective_transform"]
res.matrix = transform_matrix
res.error = error
return res
| Returns a function which will transform points in quadrilateral
``src`` to the corresponding points on quadrilateral ``dst``::
>>> transform = create_perspective_transform(
... [(0, 0), (10, 0), (10, 10), (0, 10)],
... [(50, 50), (100, 50), (100, 100), (50, 100)],
... )
>>> transform((5, 5))
(74.99999999999639, 74.999999999999957)
If ``round`` is ``True`` then points will be rounded to the nearest
integer and integer values will be returned.
>>> transform = create_perspective_transform(
... [(0, 0), (10, 0), (10, 10), (0, 10)],
... [(50, 50), (100, 50), (100, 100), (50, 100)],
... round=True,
... )
>>> transform((5, 5))
(75, 75)
If ``splat_args`` is ``True`` the function will accept two arguments
instead of a tuple.
>>> transform = create_perspective_transform(
... [(0, 0), (10, 0), (10, 10), (0, 10)],
... [(50, 50), (100, 50), (100, 100), (50, 100)],
... splat_args=True,
... )
>>> transform(5, 5)
(74.99999999999639, 74.999999999999957)
If the input values yield an invalid transformation matrix an identity
function will be returned and the ``error`` attribute will be set to a
description of the error::
>>> tranform = create_perspective_transform(
... np.zeros((4, 2)),
... np.zeros((4, 2)),
... )
>>> transform((5, 5))
(5.0, 5.0)
>>> transform.error
'invalid input quads (...): Singular matrix
| 194 | 102 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def create_perspective_transform(src, dst, round=False, splat_args=False):
try:
transform_matrix = create_perspective_transform_matrix(src, dst)
error = None
except np.linalg.LinAlgError as e:
transform_matrix = np.identity(3, dtype=np.float)
error = "invalid input quads (%s and %s): %s" %(src, dst, e)
error = error.replace("\n", "")
to_eval = "def perspective_transform(%s):\n" %(
splat_args and "*pt" or "pt",
)
to_eval += " res = np.dot(transform_matrix, ((pt[0], ), (pt[1], ), (1, )))\n"
to_eval += " res = res / res[2]\n"
if round:
to_eval += " return (int(round(res[0][0])), int(round(res[1][0])))\n"
else:
to_eval += " return (res[0][0], res[1][0])\n"
locals = {
"transform_matrix": transform_matrix,
}
locals.update(globals())
exec(to_eval,locals,locals)
res = locals["perspective_transform"]
res.matrix = transform_matrix
res.error = error
return res
```
###Assistant : Returns a function which will transform points in quadrilateral
``src`` to the corresponding points on quadrilateral ``dst``::
>>> transform = create_perspective_transform(
... [(0, 0), (10, 0), (10, 10), (0, 10)],
... [(50, 50), (100, 50), (100, 100), (50, 100)],
... )
>>> transform((5, 5))
(74.99999999999639, 74.999999999999957)
If ``round`` is ``True`` then points will be rounded to the nearest
integer and integer values will be returned.
>>> transform = create_perspective_transform(
... [(0, 0), (10, 0), (10, 10), (0, 10)],
... [(50, 50), (100, 50), (100, 100), (50, 100)],
... round=True,
... )
>>> transform((5, 5))
(75, 75)
If ``splat_args`` is ``True`` the function will accept two arguments
instead of a tuple.
>>> transform = create_perspective_transform(
... [(0, 0), (10, 0), (10, 10), (0, 10)],
... [(50, 50), (100, 50), (100, 100), (50, 100)],
... splat_args=True,
... )
>>> transform(5, 5)
(74.99999999999639, 74.999999999999957)
If the input values yield an invalid transformation matrix an identity
function will be returned and the ``error`` attribute will be set to a
description of the error::
>>> tranform = create_perspective_transform(
... np.zeros((4, 2)),
... np.zeros((4, 2)),
... )
>>> transform((5, 5))
(5.0, 5.0)
>>> transform.error
'invalid input quads (...): Singular matrix
|
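The snippet above relies on `create_perspective_transform_matrix`, which is not shown. Below is a minimal sketch of one standard way to compute it, solving the 8x8 homography system with NumPy; this is an illustrative reimplementation, not necessarily the original helper.

```python
import numpy as np

def perspective_transform_matrix(src, dst):
    # Build the standard 8x8 system for a homography mapping four src points to dst.
    A, b = [], []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y])
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y])
        b.extend([X, Y])
    coeffs = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(coeffs, 1.0).reshape(3, 3)  # raises LinAlgError for degenerate quads

M = perspective_transform_matrix([(0, 0), (10, 0), (10, 10), (0, 10)],
                                 [(50, 50), (100, 50), (100, 100), (50, 100)])
pt = M @ np.array([5.0, 5.0, 1.0])
print(pt[:2] / pt[2])  # ~ [75. 75.]
```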
784 | def get(self, url, cache=True, **kwargs):
if not url.isValid():
urlutils.invalid_url_error(url, "start download")
return None
req = QNetworkRequest(url)
user_agent = websettings.user_agent(url)
req.setHeader(QNetworkRequest.KnownHeaders.UserAgentHeader, user_agent)
if not cache:
req.setAttribute(QNetworkRequest.Attribute.CacheSaveControlAttribute, False)
return self.get_request(req, **kwargs)
| Start a download with a link URL.
Args:
url: The URL to get, as QUrl
cache: If set to False, don't cache the response.
**kwargs: passed to get_request().
Return:
The created DownloadItem.
| 32 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get(self, url, cache=True, **kwargs):
if not url.isValid():
urlutils.invalid_url_error(url, "start download")
return None
req = QNetworkRequest(url)
user_agent = websettings.user_agent(url)
req.setHeader(QNetworkRequest.KnownHeaders.UserAgentHeader, user_agent)
if not cache:
req.setAttribute(QNetworkRequest.Attribute.CacheSaveControlAttribute, False)
return self.get_request(req, **kwargs)
```
###Assistant : Start a download with a link URL.
Args:
url: The URL to get, as QUrl
cache: If set to False, don't cache the response.
**kwargs: passed to get_request().
Return:
The created DownloadItem.
|
785 | def test_send_join_partial_state(self):
joining_user = "@misspiggy:" + self.OTHER_SERVER_NAME
join_result = self._make_join(joining_user)
join_event_dict = join_result["event"]
add_hashes_and_signatures(
KNOWN_ROOM_VERSIONS[DEFAULT_ROOM_VERSION],
join_event_dict,
signature_name=self.OTHER_SERVER_NAME,
signing_key=self.OTHER_SERVER_SIGNATURE_KEY,
)
channel = self.make_signed_federation_request(
"PUT",
f"/_matrix/federation/v2/send_join/{self._room_id}/x?org.matrix.msc3706.partial_state=true",
content=join_event_dict,
)
self.assertEquals(channel.code, 200, channel.json_body)
# expect a reduced room state
returned_state = [
(ev["type"], ev["state_key"]) for ev in channel.json_body["state"]
]
self.assertCountEqual(
returned_state,
[
("m.room.create", ""),
("m.room.power_levels", ""),
("m.room.join_rules", ""),
("m.room.history_visibility", ""),
],
)
# the auth chain should not include anything already in "state"
returned_auth_chain_events = [
(ev["type"], ev["state_key"]) for ev in channel.json_body["auth_chain"]
]
self.assertCountEqual(
returned_auth_chain_events,
[
("m.room.member", "@kermit:test"),
],
)
# the room should show that the new user is a member
r = self.get_success(
self.hs.get_state_handler().get_current_state(self._room_id)
)
self.assertEqual(r[("m.room.member", joining_user)].membership, "join")
| When MSC3706 support is enabled, /send_join should return partial state | 10 | 106 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_send_join_partial_state(self):
joining_user = "@misspiggy:" + self.OTHER_SERVER_NAME
join_result = self._make_join(joining_user)
join_event_dict = join_result["event"]
add_hashes_and_signatures(
KNOWN_ROOM_VERSIONS[DEFAULT_ROOM_VERSION],
join_event_dict,
signature_name=self.OTHER_SERVER_NAME,
signing_key=self.OTHER_SERVER_SIGNATURE_KEY,
)
channel = self.make_signed_federation_request(
"PUT",
f"/_matrix/federation/v2/send_join/{self._room_id}/x?org.matrix.msc3706.partial_state=true",
content=join_event_dict,
)
self.assertEquals(channel.code, 200, channel.json_body)
# expect a reduced room state
returned_state = [
(ev["type"], ev["state_key"]) for ev in channel.json_body["state"]
]
self.assertCountEqual(
returned_state,
[
("m.room.create", ""),
("m.room.power_levels", ""),
("m.room.join_rules", ""),
("m.room.history_visibility", ""),
],
)
# the auth chain should not include anything already in "state"
returned_auth_chain_events = [
(ev["type"], ev["state_key"]) for ev in channel.json_body["auth_chain"]
]
self.assertCountEqual(
returned_auth_chain_events,
[
("m.room.member", "@kermit:test"),
],
)
# the room should show that the new user is a member
r = self.get_success(
self.hs.get_state_handler().get_current_state(self._room_id)
)
self.assertEqual(r[("m.room.member", joining_user)].membership, "join")
```
###Assistant : When MSC3706 support is enabled, /send_join should return partial state
|
786 | async def drain(self):
if self._reader is not None:
exc = self._reader.exception()
if exc is not None:
raise exc
if self._transport.is_closing():
# Wait for protocol.connection_lost() call
# Raise connection closing error if any,
# ConnectionResetError otherwise
# Yield to the event loop so connection_lost() may be
# called. Without this, _drain_helper() would return
# immediately, and code that calls
# write(...); await drain()
# in a loop would never call connection_lost(), so it
# would not see an error when the socket is closed.
await sleep(0)
await self._protocol._drain_helper()
| Flush the write buffer.
The intended use is to write
w.write(data)
await w.drain()
| 13 | 87 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def drain(self):
if self._reader is not None:
exc = self._reader.exception()
if exc is not None:
raise exc
if self._transport.is_closing():
# Wait for protocol.connection_lost() call
# Raise connection closing error if any,
# ConnectionResetError otherwise
# Yield to the event loop so connection_lost() may be
# called. Without this, _drain_helper() would return
# immediately, and code that calls
# write(...); await drain()
# in a loop would never call connection_lost(), so it
# would not see an error when the socket is closed.
await sleep(0)
await self._protocol._drain_helper()
```
###Assistant : Flush the write buffer.
The intended use is to write
w.write(data)
await w.drain()
|
787 | def on_chord_header_start(self, chord, **header) -> dict:
if not isinstance(chord.tasks, group):
chord.tasks = group(chord.tasks)
return self.on_group_start(chord.tasks, **header)
| Method that is called on chord header stamping start.
Arguments:
chord (chord): chord that is stamped.
headers (Dict): Partial headers that could be merged with existing headers.
Returns:
Dict: headers to update.
| 32 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def on_chord_header_start(self, chord, **header) -> dict:
if not isinstance(chord.tasks, group):
chord.tasks = group(chord.tasks)
return self.on_group_start(chord.tasks, **header)
```
###Assistant : Method that is called on chord header stamping start.
Arguments:
chord (chord): chord that is stamped.
headers (Dict): Partial headers that could be merged with existing headers.
Returns:
Dict: headers to update.
|
788 | def load_historic_predictions_from_disk(self):
exists = self.historic_predictions_path.is_file()
if exists:
try:
with open(self.historic_predictions_path, "rb") as fp:
self.historic_predictions = cloudpickle.load(fp)
logger.info(
f"Found existing historic predictions at {self.full_path}, but beware "
"that statistics may be inaccurate if the bot has been offline for "
"an extended period of time."
)
except EOFError:
logger.warning(
'Historical prediction file was corrupted. Trying to load backup file.')
with open(self.historic_predictions_bkp_path, "rb") as fp:
self.historic_predictions = cloudpickle.load(fp)
logger.warning('FreqAI successfully loaded the backup historical predictions file.')
elif not self.follow_mode:
logger.info("Could not find existing historic_predictions, starting from scratch")
else:
logger.warning(
f"Follower could not find historic predictions at {self.full_path} "
"sending null values back to strategy"
)
return exists
|
Locate and load a previously saved historic predictions.
:return: bool - whether or not the drawer was located
| 18 | 105 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def load_historic_predictions_from_disk(self):
exists = self.historic_predictions_path.is_file()
if exists:
try:
with open(self.historic_predictions_path, "rb") as fp:
self.historic_predictions = cloudpickle.load(fp)
logger.info(
f"Found existing historic predictions at {self.full_path}, but beware "
"that statistics may be inaccurate if the bot has been offline for "
"an extended period of time."
)
except EOFError:
logger.warning(
'Historical prediction file was corrupted. Trying to load backup file.')
with open(self.historic_predictions_bkp_path, "rb") as fp:
self.historic_predictions = cloudpickle.load(fp)
logger.warning('FreqAI successfully loaded the backup historical predictions file.')
elif not self.follow_mode:
logger.info("Could not find existing historic_predictions, starting from scratch")
else:
logger.warning(
f"Follower could not find historic predictions at {self.full_path} "
"sending null values back to strategy"
)
return exists
```
###Assistant :
Locate and load a previously saved historic predictions.
:return: bool - whether or not the drawer was located
|
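The try-the-backup-on-corruption behavior above can be shown in isolation; this is a generic sketch with invented file paths, not the FreqAI implementation itself.
```Python
import cloudpickle
from pathlib import Path

def load_pickle_with_backup(primary: Path, backup: Path):
    """Load a cloudpickle file, falling back to a backup copy if the
    primary file is truncated or corrupted (signalled by EOFError)."""
    try:
        with open(primary, "rb") as fp:
            return cloudpickle.load(fp)
    except EOFError:
        with open(backup, "rb") as fp:
            return cloudpickle.load(fp)

# predictions = load_pickle_with_backup(Path("historic_predictions.pkl"),
#                                       Path("historic_predictions.backup.pkl"))
```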
789 | def test_background(self):
css =
stylesheet = Stylesheet()
stylesheet.parse(css)
styles = stylesheet.rules[0].styles
assert styles.text_background == Color("red", type=ColorType.STANDARD, number=1)
| #some-widget {
text: on red;
}
| 6 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_background(self):
css =
stylesheet = Stylesheet()
stylesheet.parse(css)
styles = stylesheet.rules[0].styles
assert styles.text_background == Color("red", type=ColorType.STANDARD, number=1)
```
###Assistant : #some-widget {
text: on red;
}
|
790 | def update(self) -> None:
try:
response = requests.get(self._url, timeout=5)
except (requests.exceptions.RequestException, ValueError):
_LOGGER.warning(
"Could not update status for DTE Energy Bridge (%s)", self._attr_name
)
return
if response.status_code != HTTPStatus.OK:
_LOGGER.warning(
"Invalid status_code from DTE Energy Bridge: %s (%s)",
response.status_code,
self._attr_name,
)
return
response_split = response.text.split()
if len(response_split) != 2:
_LOGGER.warning(
'Invalid response from DTE Energy Bridge: "%s" (%s)',
response.text,
self._attr_name,
)
return
val = float(response_split[0])
# A workaround for a bug in the DTE energy bridge.
# The returned value can randomly be in W or kW. Checking for a
# decimal seems to be a reliable way to determine the units.
# Limiting to version 1 because version 2 apparently always returns
# values in the format 000000.000 kW, but the scaling is Watts
# NOT kWatts
if self._version == 1 and "." in response_split[0]:
self._attr_native_value = val
else:
self._attr_native_value = val / 1000
| Get the energy usage data from the DTE energy bridge. | 10 | 146 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def update(self) -> None:
try:
response = requests.get(self._url, timeout=5)
except (requests.exceptions.RequestException, ValueError):
_LOGGER.warning(
"Could not update status for DTE Energy Bridge (%s)", self._attr_name
)
return
if response.status_code != HTTPStatus.OK:
_LOGGER.warning(
"Invalid status_code from DTE Energy Bridge: %s (%s)",
response.status_code,
self._attr_name,
)
return
response_split = response.text.split()
if len(response_split) != 2:
_LOGGER.warning(
'Invalid response from DTE Energy Bridge: "%s" (%s)',
response.text,
self._attr_name,
)
return
val = float(response_split[0])
# A workaround for a bug in the DTE energy bridge.
# The returned value can randomly be in W or kW. Checking for a
# decimal seems to be a reliable way to determine the units.
# Limiting to version 1 because version 2 apparently always returns
# values in the format 000000.000 kW, but the scaling is Watts
# NOT kWatts
if self._version == 1 and "." in response_split[0]:
self._attr_native_value = val
else:
self._attr_native_value = val / 1000
```
###Assistant : Get the energy usage data from the DTE energy bridge.
|
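The W-vs-kW workaround can be exercised without the sensor class; a small sketch with made-up sample responses, assuming the same "value unit" response format.
```Python
def parse_bridge_reading(text: str, version: int = 1) -> float:
    """Return the reading in kW, applying the version-1 decimal heuristic."""
    value_str, _unit = text.split()
    value = float(value_str)
    # Version 1 randomly reports W or kW; a decimal point indicates kW already.
    if version == 1 and "." in value_str:
        return value
    return value / 1000

print(parse_bridge_reading("0.450 kW"))  # 0.45 (already kW)
print(parse_bridge_reading("450 kW"))    # 0.45 (actually watts, scaled down)
```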
791 | def ndependencies(dependencies, dependents):
num_needed = {}
result = {}
for k, v in dependencies.items():
num_needed[k] = len(v)
if not v:
result[k] = 1
num_dependencies = num_needed.copy()
current = []
current_pop = current.pop
current_append = current.append
for key in result:
for parent in dependents[key]:
num_needed[parent] -= 1
if not num_needed[parent]:
current_append(parent)
while current:
key = current_pop()
result[key] = 1 + sum(result[child] for child in dependencies[key])
for parent in dependents[key]:
num_needed[parent] -= 1
if not num_needed[parent]:
current_append(parent)
return num_dependencies, result
| Number of total data elements on which this key depends
For each key we return the number of tasks that must be run for us to run
this task.
Examples
--------
>>> inc = lambda x: x + 1
>>> dsk = {'a': 1, 'b': (inc, 'a'), 'c': (inc, 'b')}
>>> dependencies, dependents = get_deps(dsk)
>>> num_dependencies, total_dependencies = ndependencies(dependencies, dependents)
>>> sorted(total_dependencies.items())
[('a', 1), ('b', 2), ('c', 3)]
Returns
-------
num_dependencies: Dict[key, int]
total_dependencies: Dict[key, int]
| 77 | 78 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def ndependencies(dependencies, dependents):
num_needed = {}
result = {}
for k, v in dependencies.items():
num_needed[k] = len(v)
if not v:
result[k] = 1
num_dependencies = num_needed.copy()
current = []
current_pop = current.pop
current_append = current.append
for key in result:
for parent in dependents[key]:
num_needed[parent] -= 1
if not num_needed[parent]:
current_append(parent)
while current:
key = current_pop()
result[key] = 1 + sum(result[child] for child in dependencies[key])
for parent in dependents[key]:
num_needed[parent] -= 1
if not num_needed[parent]:
current_append(parent)
return num_dependencies, result
```
###Assistant : Number of total data elements on which this key depends
For each key we return the number of tasks that must be run for us to run
this task.
Examples
--------
>>> inc = lambda x: x + 1
>>> dsk = {'a': 1, 'b': (inc, 'a'), 'c': (inc, 'b')}
>>> dependencies, dependents = get_deps(dsk)
>>> num_dependencies, total_dependencies = ndependencies(dependencies, dependents)
>>> sorted(total_dependencies.items())
[('a', 1), ('b', 2), ('c', 3)]
Returns
-------
num_dependencies: Dict[key, int]
total_dependencies: Dict[key, int]
|
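A standalone illustration of the counting logic, with the dependency and dependent maps written out by hand instead of derived via dask's get_deps; it assumes the ndependencies function shown above (in dask it lives in dask.order) is in scope.
```Python
# Hand-built maps for the chain a -> b -> c (b depends on a, c depends on b).
dependencies = {"a": set(), "b": {"a"}, "c": {"b"}}
dependents   = {"a": {"b"}, "b": {"c"}, "c": set()}

num_dependencies, total_dependencies = ndependencies(dependencies, dependents)
print(sorted(num_dependencies.items()))    # [('a', 0), ('b', 1), ('c', 1)]
print(sorted(total_dependencies.items()))  # [('a', 1), ('b', 2), ('c', 3)]
```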
792 | def validate_attr(self, append) -> None:
if append:
existing_fields = getattr(self.attrs, self.kind_attr, None)
if existing_fields is not None and existing_fields != list(self.values):
raise ValueError("appended items do not match existing items in table!")
existing_dtype = getattr(self.attrs, self.dtype_attr, None)
if existing_dtype is not None and existing_dtype != self.dtype:
raise ValueError(
"appended items dtype do not match existing items dtype in table!"
)
| validate that we have the same order as the existing & same dtype | 13 | 59 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def validate_attr(self, append) -> None:
if append:
existing_fields = getattr(self.attrs, self.kind_attr, None)
if existing_fields is not None and existing_fields != list(self.values):
raise ValueError("appended items do not match existing items in table!")
existing_dtype = getattr(self.attrs, self.dtype_attr, None)
if existing_dtype is not None and existing_dtype != self.dtype:
raise ValueError(
"appended items dtype do not match existing items dtype in table!"
)
```
###Assistant : validate that we have the same order as the existing & same dtype
|
793 | def __getitem__(self, parameters):
item = typing._type_check(parameters,
f'{self._name} accepts only single type')
return typing._GenericAlias(self, (item,))
Final = _FinalForm('Final',
doc= | A special typing construct to indicate that a name
cannot be re-assigned or overridden in a subclass.
For example:
MAX_SIZE: Final = 9000
MAX_SIZE += 1 # Error reported by type checker | 32 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __getitem__(self, parameters):
item = typing._type_check(parameters,
f'{self._name} accepts only single type')
return typing._GenericAlias(self, (item,))
Final = _FinalForm('Final',
doc=
```
###Assistant : A special typing construct to indicate that a name
cannot be re-assigned or overridden in a subclass.
For example:
MAX_SIZE: Final = 9000
MAX_SIZE += 1 # Error reported by type checker
|
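A tiny usage example of the construct documented above; note that the violation is reported by a static type checker such as mypy, not by the Python runtime.
```Python
from typing import Final  # typing_extensions exports the same name on older Pythons

MAX_SIZE: Final = 9000
# MAX_SIZE += 1   # flagged by a type checker; the interpreter itself will not complain
```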
794 | def type_spec_from_value(value):
if is_extension_type(value):
return value._type_spec # pylint: disable=protected-access
# Get a TensorSpec for array-like data without
# converting the data to a Tensor
if hasattr(value, "shape") and hasattr(value, "dtype"):
return tf.TensorSpec(value.shape, value.dtype)
else:
return tf.type_spec_from_value(value)
| Grab type_spec without converting array-likes to tensors. | 7 | 36 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def type_spec_from_value(value):
if is_extension_type(value):
return value._type_spec # pylint: disable=protected-access
# Get a TensorSpec for array-like data without
# converting the data to a Tensor
if hasattr(value, "shape") and hasattr(value, "dtype"):
return tf.TensorSpec(value.shape, value.dtype)
else:
return tf.type_spec_from_value(value)
```
###Assistant : Grab type_spec without converting array-likes to tensors.
|
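A sketch of the array-like fast path, assuming TensorFlow and NumPy are installed; the array shape and dtype are arbitrary.
```Python
import numpy as np
import tensorflow as tf

arr = np.zeros((2, 3), dtype=np.float32)
# Build a TensorSpec directly from .shape/.dtype, without converting to a Tensor.
spec = tf.TensorSpec(arr.shape, arr.dtype)
print(spec)  # TensorSpec(shape=(2, 3), dtype=tf.float32, name=None)
```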
795 | def make_window():
sg.theme(settings.get('-theme-', 'DarkBlue2')) # set the theme
layout = [[sg.Text('Settings Window')],
[sg.Input(settings.get('-input-', ''), k='-IN-')],
[sg.Listbox(sg.theme_list(), default_values=[settings['-theme-'],], size=(15, 10), k='-LISTBOX-')],
[sg.CB('Option 1', settings.get('-option1-', True), k='-CB1-')],
[sg.CB('Option 2', settings.get('-option2-', False), k='-CB2-')],
[sg.T('Settings file = ' + settings.get_filename())],
[sg.Button('Save'), sg.Button('Settings Dictionary'), sg.Button('Exit without saving', k='Exit')]]
window = sg.Window('A Settings Window', layout)
|
Creates a new window. The default values for some elements are pulled directly from the
"User Settings" without the use of temp variables.
Some get_entry calls don't have a default value, such as theme, because there was an initial call
that would have set the default value if the setting wasn't present. Could still put the default
value if you wanted but it would be 2 places to change if you wanted a different default value.
Use of a lookup table to map between element keys and user settings could be aded. This demo
is intentionally done without one to show how to use the settings APIs in the most basic,
straightforward way.
If your application allows changing the theme, then a make_window function is good to have
so that you can close and re-create a window easily.
:return: (sg.Window) The window that was created
| 145 | 49 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def make_window():
sg.theme(settings.get('-theme-', 'DarkBlue2')) # set the theme
layout = [[sg.Text('Settings Window')],
[sg.Input(settings.get('-input-', ''), k='-IN-')],
[sg.Listbox(sg.theme_list(), default_values=[settings['-theme-'],], size=(15, 10), k='-LISTBOX-')],
[sg.CB('Option 1', settings.get('-option1-', True), k='-CB1-')],
[sg.CB('Option 2', settings.get('-option2-', False), k='-CB2-')],
[sg.T('Settings file = ' + settings.get_filename())],
[sg.Button('Save'), sg.Button('Settings Dictionary'), sg.Button('Exit without saving', k='Exit')]]
window = sg.Window('A Settings Window', layout)
```
###Assistant :
Creates a new window. The default values for some elements are pulled directly from the
"User Settings" without the use of temp variables.
Some get_entry calls don't have a default value, such as theme, because there was an initial call
that would have set the default value if the setting wasn't present. Could still put the default
value if you wanted but it would be 2 places to change if you wanted a different default value.
Use of a lookup table to map between element keys and user settings could be added. This demo
is intentionally done without one to show how to use the settings APIs in the most basic,
straightforward way.
If your application allows changing the theme, then a make_window function is good to have
so that you can close and re-create a window easily.
:return: (sg.Window) The window that was created
|
796 | async def _collect(self) -> CommonUsageMetrics:
dau_count = await self._store.count_daily_users()
return CommonUsageMetrics(
daily_active_users=dau_count,
)
| Collect the common metrics and either create the CommonUsageMetrics object to
use if it doesn't exist yet, or update it.
| 20 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def _collect(self) -> CommonUsageMetrics:
dau_count = await self._store.count_daily_users()
return CommonUsageMetrics(
daily_active_users=dau_count,
)
```
###Assistant : Collect the common metrics and either create the CommonUsageMetrics object to
use if it doesn't exist yet, or update it.
|
797 | def getPreprocessorSymbols(cls):
if cls.preprocessor_symbols is None:
cls.preprocessor_symbols = OrderedDict()
for plugin in getActivePlugins():
value = plugin.getPreprocessorSymbols()
if value is not None:
assert type(value) is dict, value
# We order per plugin, but from the plugins, lets just take a dict
# and achieve determinism by ordering the defines by name.
for key, value in sorted(value.items()):
# False alarm, pylint: disable=I0021,unsupported-assignment-operation
cls.preprocessor_symbols[key] = value
return cls.preprocessor_symbols
| Let plugins provide C defines to be used in compilation.
Notes:
The plugins can each contribute, but are hopefully using
a namespace for their defines.
Returns:
OrderedDict(), where None value indicates no define value,
i.e. "-Dkey=value" vs. "-Dkey"
| 38 | 65 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def getPreprocessorSymbols(cls):
if cls.preprocessor_symbols is None:
cls.preprocessor_symbols = OrderedDict()
for plugin in getActivePlugins():
value = plugin.getPreprocessorSymbols()
if value is not None:
assert type(value) is dict, value
# We order per plugin, but from the plugins, lets just take a dict
# and achieve determinism by ordering the defines by name.
for key, value in sorted(value.items()):
# False alarm, pylint: disable=I0021,unsupported-assignment-operation
cls.preprocessor_symbols[key] = value
return cls.preprocessor_symbols
```
###Assistant : Let plugins provide C defines to be used in compilation.
Notes:
The plugins can each contribute, but are hopefully using
a namespace for their defines.
Returns:
OrderedDict(), where None value indicates no define value,
i.e. "-Dkey=value" vs. "-Dkey"
|
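The "None means no define value" convention in the return description can be made concrete by turning such a dict into compiler flags; the symbol names below are invented for illustration.
```Python
from collections import OrderedDict

symbols = OrderedDict([("MY_PLUGIN_TRACE", None), ("MY_PLUGIN_LEVEL", "2")])

# None -> bare "-Dkey", otherwise "-Dkey=value", mirroring the described convention.
flags = ["-D%s" % key if value is None else "-D%s=%s" % (key, value)
         for key, value in symbols.items()]
print(flags)  # ['-DMY_PLUGIN_TRACE', '-DMY_PLUGIN_LEVEL=2']
```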
798 | def _check_xy(self, renderer=None):
if renderer is None:
renderer = self.figure._get_renderer()
b = self.get_annotation_clip()
if b or (b is None and self.xycoords == "data"):
# check if self.xy is inside the axes.
xy_pixel = self._get_position_xy(renderer)
return self.axes.contains_point(xy_pixel)
return True
| Check whether the annotation at *xy_pixel* should be drawn. | 9 | 38 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _check_xy(self, renderer=None):
if renderer is None:
renderer = self.figure._get_renderer()
b = self.get_annotation_clip()
if b or (b is None and self.xycoords == "data"):
# check if self.xy is inside the axes.
xy_pixel = self._get_position_xy(renderer)
return self.axes.contains_point(xy_pixel)
return True
```
###Assistant : Check whether the annotation at *xy_pixel* should be drawn.
|
799 | def _split_ssh_args(argstring):
# In Python3, shlex.split doesn't work on a byte string.
return [to_text(x.strip()) for x in shlex.split(argstring) if x.strip()]
|
Takes a string like '-o Foo=1 -o Bar="foo bar"' and returns a
list ['-o', 'Foo=1', '-o', 'Bar=foo bar'] that can be added to
the argument list. The list will not contain any empty elements.
| 34 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _split_ssh_args(argstring):
# In Python3, shlex.split doesn't work on a byte string.
return [to_text(x.strip()) for x in shlex.split(argstring) if x.strip()]
```
###Assistant :
Takes a string like '-o Foo=1 -o Bar="foo bar"' and returns a
list ['-o', 'Foo=1', '-o', 'Bar=foo bar'] that can be added to
the argument list. The list will not contain any empty elements.
|
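The docstring's example can be checked directly with the standard library; this is plain shlex behavior, independent of Ansible.
```Python
import shlex

argstring = '-o Foo=1 -o Bar="foo bar"'
print([x.strip() for x in shlex.split(argstring) if x.strip()])
# ['-o', 'Foo=1', '-o', 'Bar=foo bar']
```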