Unnamed: 0 (int64, 0 to 2.93k) | code (string, lengths 101 to 62.2k) | docs (string, lengths 51 to 10.7k) | doc_len (int64, 4 to 1.74k) | words (int64, 4 to 4.82k) | lang (string, 1 distinct value) | prompt (string, lengths 320 to 71.2k)
---|---|---|---|---|---|---
2,400 | def xatom(self, name, *args):
name = name.upper()
#if not name in self.capabilities: # Let the server decide!
# raise self.error('unknown extension command: %s' % name)
if not name in Commands:
Commands[name] = (self.state,)
return self._simple_command(name, *args)
# Private methods
| Allow simple extension commands
notified by server in CAPABILITY response.
Assumes command is legal in current state.
(typ, [data]) = <instance>.xatom(name, arg, ...)
Returns response appropriate to extension command `name'.
| 30 | 39 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def xatom(self, name, *args):
name = name.upper()
#if not name in self.capabilities: # Let the server decide!
# raise self.error('unknown extension command: %s' % name)
if not name in Commands:
Commands[name] = (self.state,)
return self._simple_command(name, *args)
# Private methods
```
###Assistant : Allow simple extension commands
notified by server in CAPABILITY response.
Assumes command is legal in current state.
(typ, [data]) = <instance>.xatom(name, arg, ...)
Returns response appropriate to extension command `name'.
|
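The method in the row above matches the `xatom` extension hook of the standard-library `imaplib.IMAP4` class. A minimal usage sketch follows; the host, credentials, and the "XEXAMPLE" command name are illustrative assumptions, not values taken from the row.
```Python
# Minimal sketch (assumed host, credentials and extension name).
import imaplib

M = imaplib.IMAP4_SSL("imap.example.com")   # hypothetical server
M.login("user", "password")                 # hypothetical credentials
# xatom() upper-cases the name, registers it as legal in the current
# state, and then dispatches it like any other simple command.
typ, data = M.xatom("XEXAMPLE", "arg1")
print(typ, data)
M.logout()
```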
2,401 | def forward(self, outputs, targets):
outputs = outputs.clip(self.epsilon, 1 - self.epsilon)
log_loss = targets * dp_log(outputs) + ((targets * -1) + 1) * dp_log((outputs * -1) + 1)
log_loss = log_loss.sum(axis=1) * -1
return log_loss.mean()
| Forward pass.
.. math:: L = -t \\log(p) - (1 - t) \\log(1 - p)
Parameters
----------
outputs : numpy.array
Predictions in (0, 1), such as sigmoidal output of a neural network.
targets : numpy.array
Targets in [0, 1], such as ground truth labels.
| 44 | 34 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def forward(self, outputs, targets):
outputs = outputs.clip(self.epsilon, 1 - self.epsilon)
log_loss = targets * dp_log(outputs) + ((targets * -1) + 1) * dp_log((outputs * -1) + 1)
log_loss = log_loss.sum(axis=1) * -1
return log_loss.mean()
```
###Assistant : Forward pass.
.. math:: L = -t \\log(p) - (1 - t) \\log(1 - p)
Parameters
----------
outputs : numpy.array
Predictions in (0, 1), such as sigmoidal output of a neural network.
targets : numpy.array
Targets in [0, 1], such as ground truth labels.
|
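The loss documented above is the standard binary cross-entropy. A small NumPy sketch of the same computation, assuming an `epsilon` of 1e-15 and using `np.log` in place of the library's `dp_log` helper:
```Python
# NumPy sketch of the documented loss L = -t*log(p) - (1 - t)*log(1 - p).
import numpy as np

epsilon = 1e-15  # assumed clipping constant
outputs = np.array([[0.9, 0.2], [0.4, 0.8]])   # predictions in (0, 1)
targets = np.array([[1.0, 0.0], [0.0, 1.0]])   # ground-truth labels

outputs = outputs.clip(epsilon, 1 - epsilon)
log_loss = targets * np.log(outputs) + (1 - targets) * np.log(1 - outputs)
log_loss = -log_loss.sum(axis=1)   # per-sample loss
print(log_loss.mean())             # batch mean
```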
2,402 | def numeric_assortativity_coefficient(G, attribute, nodes=None):
if nodes is None:
nodes = G.nodes
vals = {G.nodes[n][attribute] for n in nodes}
mapping = {d: i for i, d, in enumerate(vals)}
M = attribute_mixing_matrix(G, attribute, nodes, mapping)
return _numeric_ac(M, mapping)
| Compute assortativity for numerical node attributes.
Assortativity measures the similarity of connections
in the graph with respect to the given numeric attribute.
Parameters
----------
G : NetworkX graph
attribute : string
Node attribute key.
nodes: list or iterable (optional)
Compute numeric assortativity only for attributes of nodes in
container. The default is all nodes.
Returns
-------
r: float
Assortativity of graph for given attribute
Examples
--------
>>> G = nx.Graph()
>>> G.add_nodes_from([0, 1], size=2)
>>> G.add_nodes_from([2, 3], size=3)
>>> G.add_edges_from([(0, 1), (2, 3)])
>>> print(nx.numeric_assortativity_coefficient(G, "size"))
1.0
Notes
-----
This computes Eq. (21) in Ref. [1]_ , which is the Pearson correlation
coefficient of the specified (scalar valued) attribute across edges.
References
----------
.. [1] M. E. J. Newman, Mixing patterns in networks
Physical Review E, 67 026126, 2003
| 129 | 36 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def numeric_assortativity_coefficient(G, attribute, nodes=None):
if nodes is None:
nodes = G.nodes
vals = {G.nodes[n][attribute] for n in nodes}
mapping = {d: i for i, d, in enumerate(vals)}
M = attribute_mixing_matrix(G, attribute, nodes, mapping)
return _numeric_ac(M, mapping)
```
###Assistant : Compute assortativity for numerical node attributes.
Assortativity measures the similarity of connections
in the graph with respect to the given numeric attribute.
Parameters
----------
G : NetworkX graph
attribute : string
Node attribute key.
nodes: list or iterable (optional)
Compute numeric assortativity only for attributes of nodes in
container. The default is all nodes.
Returns
-------
r: float
Assortativity of graph for given attribute
Examples
--------
>>> G = nx.Graph()
>>> G.add_nodes_from([0, 1], size=2)
>>> G.add_nodes_from([2, 3], size=3)
>>> G.add_edges_from([(0, 1), (2, 3)])
>>> print(nx.numeric_assortativity_coefficient(G, "size"))
1.0
Notes
-----
This computes Eq. (21) in Ref. [1]_ , which is the Pearson correlation
coefficient of the specified (scalar valued) attribute across edges.
References
----------
.. [1] M. E. J. Newman, Mixing patterns in networks
Physical Review E, 67 026126, 2003
|
2,403 | def write_filepath(filepath, strategy):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
return os.path.join(write_dirpath(dirpath, strategy), base)
| Returns the writing file path to be used to save file distributedly.
Directory to contain `filepath` would be created if it doesn't exist.
Args:
filepath: Original filepath that would be used without distribution.
strategy: The tf.distribute strategy object currently used.
Returns:
The writing filepath that should be used to save file with distribution.
| 53 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def write_filepath(filepath, strategy):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
return os.path.join(write_dirpath(dirpath, strategy), base)
```
###Assistant : Returns the writing file path to be used to save file distributedly.
Directory to contain `filepath` would be created if it doesn't exist.
Args:
filepath: Original filepath that would be used without distribution.
strategy: The tf.distribute strategy object currently used.
Returns:
The writing filepath that should be used to save file with distribution.
|
2,404 | def serving(self, inputs):
output = self.call(inputs)
return self.serving_output(output)
CONVNEXT_START_DOCSTRING = r
CONVNEXT_INPUTS_DOCSTRING = r
@add_start_docstrings(
"The bare ConvNext model outputting raw features without any specific head on top.",
CONVNEXT_START_DOCSTRING,
) |
Method used for serving the model.
Args:
inputs (`Dict[str, tf.Tensor]`):
The input of the saved model as a dictionary of tensors.
This model inherits from [`TFPreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
<Tip>
TF 2.0 models accepts two formats as inputs:
- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional arguments.
This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
tensors in the first argument of the model call function: `model(inputs)`.
</Tip>
Parameters:
config ([`ConvNextConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~TFPreTrainedModel.from_pretrained`] method to load the model weights.
Args:
pixel_values (`np.ndarray`, `tf.Tensor`, `List[tf.Tensor]` ``Dict[str, tf.Tensor]` or `Dict[str, np.ndarray]` and each example must have the shape `(batch_size, num_channels, height, width)`):
Pixel values. Pixel values can be obtained using [`ConvNextFeatureExtractor`]. See
[`ConvNextFeatureExtractor.__call__`] for details.
output_hidden_states (`bool`, *optional*):
Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (`bool`, *optional*):
Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
| 298 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def serving(self, inputs):
output = self.call(inputs)
return self.serving_output(output)
CONVNEXT_START_DOCSTRING = r
CONVNEXT_INPUTS_DOCSTRING = r
@add_start_docstrings(
"The bare ConvNext model outputting raw features without any specific head on top.",
CONVNEXT_START_DOCSTRING,
)
```
###Assistant :
Method used for serving the model.
Args:
inputs (`Dict[str, tf.Tensor]`):
The input of the saved model as a dictionary of tensors.
This model inherits from [`TFPreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
<Tip>
TF 2.0 models accepts two formats as inputs:
- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional arguments.
This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
tensors in the first argument of the model call function: `model(inputs)`.
</Tip>
Parameters:
config ([`ConvNextConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~TFPreTrainedModel.from_pretrained`] method to load the model weights.
Args:
pixel_values (`np.ndarray`, `tf.Tensor`, `List[tf.Tensor]` ``Dict[str, tf.Tensor]` or `Dict[str, np.ndarray]` and each example must have the shape `(batch_size, num_channels, height, width)`):
Pixel values. Pixel values can be obtained using [`ConvNextFeatureExtractor`]. See
[`ConvNextFeatureExtractor.__call__`] for details.
output_hidden_states (`bool`, *optional*):
Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (`bool`, *optional*):
Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
|
2,405 | def getclasstree(classes, unique=False):
children = {}
roots = []
for c in classes:
if c.__bases__:
for parent in c.__bases__:
if parent not in children:
children[parent] = []
if c not in children[parent]:
children[parent].append(c)
if unique and parent in classes: break
elif c not in roots:
roots.append(c)
for parent in children:
if parent not in classes:
roots.append(parent)
return walktree(roots, children, None)
# ------------------------------------------------ argument list extraction
Arguments = namedtuple('Arguments', 'args, varargs, varkw')
| Arrange the given list of classes into a hierarchy of nested lists.
Where a nested list appears, it contains classes derived from the class
whose entry immediately precedes the list. Each entry is a 2-tuple
containing a class and a tuple of its base classes. If the 'unique'
argument is true, exactly one entry appears in the returned structure
for each class in the given list. Otherwise, classes using multiple
inheritance and their descendants will appear multiple times. | 78 | 71 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def getclasstree(classes, unique=False):
children = {}
roots = []
for c in classes:
if c.__bases__:
for parent in c.__bases__:
if parent not in children:
children[parent] = []
if c not in children[parent]:
children[parent].append(c)
if unique and parent in classes: break
elif c not in roots:
roots.append(c)
for parent in children:
if parent not in classes:
roots.append(parent)
return walktree(roots, children, None)
# ------------------------------------------------ argument list extraction
Arguments = namedtuple('Arguments', 'args, varargs, varkw')
```
###Assistant : Arrange the given list of classes into a hierarchy of nested lists.
Where a nested list appears, it contains classes derived from the class
whose entry immediately precedes the list. Each entry is a 2-tuple
containing a class and a tuple of its base classes. If the 'unique'
argument is true, exactly one entry appears in the returned structure
for each class in the given list. Otherwise, classes using multiple
inheritance and their descendants will appear multiple times.
|
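The function above appears to be the standard-library `inspect.getclasstree`. A small sketch of the nested-list shape it produces for a three-class hierarchy:
```Python
# Sketch of the nested-list structure: each entry is (class, bases) and a
# nested list holds the classes derived from the entry just before it.
import inspect

class A: ...
class B(A): ...
class C(A): ...

tree = inspect.getclasstree([A, B, C])
print(tree)
# Roughly: [(object, ()), [(A, (object,)), [(B, (A,)), (C, (A,))]]]
```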
2,406 | def get_revision(cls, location):
# type: (str) -> str
raise NotImplementedError
|
Return the current commit id of the files at the given location.
| 12 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_revision(cls, location):
# type: (str) -> str
raise NotImplementedError
```
###Assistant :
Return the current commit id of the files at the given location.
|
2,407 | def get_current_settings() -> Settings:
from prefect.context import ProfileContext
profile = ProfileContext.get()
if profile is not None:
return profile.settings
return get_settings_from_env()
|
Returns a settings object populated with values from the current profile or, if no
profile is active, the environment.
| 19 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_current_settings() -> Settings:
from prefect.context import ProfileContext
profile = ProfileContext.get()
if profile is not None:
return profile.settings
return get_settings_from_env()
```
###Assistant :
Returns a settings object populated with values from the current profile or, if no
profile is active, the environment.
|
2,408 | def test_compare_key_greater(self):
self.not_at_dest_sync_strategy.determine_should_sync.return_value = False
# Try when the sync strategy says to sync the file.
self.not_at_src_sync_strategy.determine_should_sync.return_value = True
src_files = []
dest_files = []
ref_list = []
result_list = []
time = datetime.datetime.now()
src_file = FileStat(src='', dest='',
compare_key='domparator_test.py', size=10,
last_update=time, src_type='local',
dest_type='s3', operation_name='upload')
dest_file = FileStat(src='', dest='',
compare_key='comparator_test.py', size=10,
last_update=time, src_type='s3',
dest_type='local', operation_name='')
src_files.append(src_file)
dest_files.append(dest_file)
ref_list.append(dest_file)
files = self.comparator.call(iter(src_files), iter(dest_files))
for filename in files:
result_list.append(filename)
self.assertEqual(result_list, ref_list)
# Now try when the sync strategy says not to sync the file.
self.not_at_src_sync_strategy.determine_should_sync.return_value = False
result_list = []
ref_list = []
files = self.comparator.call(iter(src_files), iter(dest_files))
for filename in files:
result_list.append(filename)
self.assertEqual(result_list, ref_list)
|
Confirm the appropriate action is taken when the source compare key
is greater than the destination compare key.
| 18 | 101 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_compare_key_greater(self):
self.not_at_dest_sync_strategy.determine_should_sync.return_value = False
# Try when the sync strategy says to sync the file.
self.not_at_src_sync_strategy.determine_should_sync.return_value = True
src_files = []
dest_files = []
ref_list = []
result_list = []
time = datetime.datetime.now()
src_file = FileStat(src='', dest='',
compare_key='domparator_test.py', size=10,
last_update=time, src_type='local',
dest_type='s3', operation_name='upload')
dest_file = FileStat(src='', dest='',
compare_key='comparator_test.py', size=10,
last_update=time, src_type='s3',
dest_type='local', operation_name='')
src_files.append(src_file)
dest_files.append(dest_file)
ref_list.append(dest_file)
files = self.comparator.call(iter(src_files), iter(dest_files))
for filename in files:
result_list.append(filename)
self.assertEqual(result_list, ref_list)
# Now try when the sync strategy says not to sync the file.
self.not_at_src_sync_strategy.determine_should_sync.return_value = False
result_list = []
ref_list = []
files = self.comparator.call(iter(src_files), iter(dest_files))
for filename in files:
result_list.append(filename)
self.assertEqual(result_list, ref_list)
```
###Assistant :
Confirm the appropriate action is taken when the source compare key
is greater than the destination compare key.
|
2,409 | def test_custom_exact_lookup_none_rhs(self):
field = Author._meta.get_field("birthdate")
OldExactLookup = field.get_lookup("exact")
author = Author.objects.create(name="author", birthdate=None)
try:
field.register_lookup(Exactly, "exact")
self.assertEqual(Author.objects.get(birthdate__exact=None), author)
finally:
field.register_lookup(OldExactLookup, "exact")
|
__exact=None is transformed to __isnull=True if a custom lookup class
with lookup_name != 'exact' is registered as the `exact` lookup.
| 20 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_custom_exact_lookup_none_rhs(self):
field = Author._meta.get_field("birthdate")
OldExactLookup = field.get_lookup("exact")
author = Author.objects.create(name="author", birthdate=None)
try:
field.register_lookup(Exactly, "exact")
self.assertEqual(Author.objects.get(birthdate__exact=None), author)
finally:
field.register_lookup(OldExactLookup, "exact")
```
###Assistant :
__exact=None is transformed to __isnull=True if a custom lookup class
with lookup_name != 'exact' is registered as the `exact` lookup.
|
2,410 | def batch_pairwise_distances(U, V):
with tf.variable_scope('pairwise_dist_block'):
# Squared norms of each row in U and V.
norm_u = tf.reduce_sum(tf.square(U), 1)
norm_v = tf.reduce_sum(tf.square(V), 1)
# norm_u as a row and norm_v as a column vectors.
norm_u = tf.reshape(norm_u, [-1, 1])
norm_v = tf.reshape(norm_v, [1, -1])
# Pairwise squared Euclidean distances.
D = tf.maximum(norm_u - 2*tf.matmul(U, V, False, True) + norm_v, 0.0)
return D
#----------------------------------------------------------------------------
| Compute pairwise distances between two batches of feature vectors. | 9 | 63 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def batch_pairwise_distances(U, V):
with tf.variable_scope('pairwise_dist_block'):
# Squared norms of each row in U and V.
norm_u = tf.reduce_sum(tf.square(U), 1)
norm_v = tf.reduce_sum(tf.square(V), 1)
# norm_u as a row and norm_v as a column vectors.
norm_u = tf.reshape(norm_u, [-1, 1])
norm_v = tf.reshape(norm_v, [1, -1])
# Pairwise squared Euclidean distances.
D = tf.maximum(norm_u - 2*tf.matmul(U, V, False, True) + norm_v, 0.0)
return D
#----------------------------------------------------------------------------
```
###Assistant : Compute pairwise distances between two batches of feature vectors.
|
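The TensorFlow code above relies on the identity ||u - v||^2 = ||u||^2 - 2 u.v + ||v||^2. A NumPy sketch of the same computation, checked against a brute-force version (shapes and seed are arbitrary):
```Python
# Vectorised squared Euclidean distances via the norm expansion identity.
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(size=(4, 3))
V = rng.normal(size=(5, 3))

norm_u = (U ** 2).sum(axis=1)[:, None]   # column vector of squared norms
norm_v = (V ** 2).sum(axis=1)[None, :]   # row vector of squared norms
D = np.maximum(norm_u - 2 * U @ V.T + norm_v, 0.0)

# Brute-force reference: explicit differences, then squared sums.
brute = ((U[:, None, :] - V[None, :, :]) ** 2).sum(axis=-1)
assert np.allclose(D, brute)
```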
2,411 | def expand_dims(self, image):
self._ensure_format_supported(image)
# Do nothing if PIL image
if isinstance(image, PIL.Image.Image):
return image
if is_torch_tensor(image):
image = image.unsqueeze(0)
else:
image = np.expand_dims(image, axis=0)
return image
|
Expands 2-dimensional `image` to 3 dimensions.
Args:
image (`PIL.Image.Image` or `np.ndarray` or `torch.Tensor`):
The image to expand.
| 17 | 27 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def expand_dims(self, image):
self._ensure_format_supported(image)
# Do nothing if PIL image
if isinstance(image, PIL.Image.Image):
return image
if is_torch_tensor(image):
image = image.unsqueeze(0)
else:
image = np.expand_dims(image, axis=0)
return image
```
###Assistant :
Expands 2-dimensional `image` to 3 dimensions.
Args:
image (`PIL.Image.Image` or `np.ndarray` or `torch.Tensor`):
The image to expand.
|
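A dependency-free sketch of the NumPy branch above: a leading batch axis is added so an (H, W) image becomes (1, H, W); the torch branch would use `image.unsqueeze(0)` instead.
```Python
# Adding a leading batch axis to a 2-D array.
import numpy as np

image = np.zeros((224, 224))
batched = np.expand_dims(image, axis=0)
print(image.shape, "->", batched.shape)   # (224, 224) -> (1, 224, 224)
```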
2,412 | def test_find_executable_task_instances_order_execution_date(self, dag_maker):
dag_id_1 = 'SchedulerJobTest.test_find_executable_task_instances_order_execution_date-a'
dag_id_2 = 'SchedulerJobTest.test_find_executable_task_instances_order_execution_date-b'
task_id = 'task-a'
session = settings.Session()
with dag_maker(dag_id=dag_id_1, max_active_tasks=16, session=session):
EmptyOperator(task_id=task_id)
dr1 = dag_maker.create_dagrun(execution_date=DEFAULT_DATE + timedelta(hours=1))
with dag_maker(dag_id=dag_id_2, max_active_tasks=16, session=session):
EmptyOperator(task_id=task_id)
dr2 = dag_maker.create_dagrun()
dr1 = session.merge(dr1, load=False)
self.scheduler_job = SchedulerJob(subdir=os.devnull)
tis = dr1.task_instances + dr2.task_instances
for ti in tis:
ti.state = State.SCHEDULED
session.merge(ti)
session.flush()
res = self.scheduler_job._executable_task_instances_to_queued(max_tis=1, session=session)
session.flush()
assert [ti.key for ti in res] == [tis[1].key]
session.rollback()
|
Test that task instances follow execution_date order priority. If two dagruns with
different execution dates are scheduled, tasks with earliest dagrun execution date will first
be executed
| 27 | 68 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_find_executable_task_instances_order_execution_date(self, dag_maker):
dag_id_1 = 'SchedulerJobTest.test_find_executable_task_instances_order_execution_date-a'
dag_id_2 = 'SchedulerJobTest.test_find_executable_task_instances_order_execution_date-b'
task_id = 'task-a'
session = settings.Session()
with dag_maker(dag_id=dag_id_1, max_active_tasks=16, session=session):
EmptyOperator(task_id=task_id)
dr1 = dag_maker.create_dagrun(execution_date=DEFAULT_DATE + timedelta(hours=1))
with dag_maker(dag_id=dag_id_2, max_active_tasks=16, session=session):
EmptyOperator(task_id=task_id)
dr2 = dag_maker.create_dagrun()
dr1 = session.merge(dr1, load=False)
self.scheduler_job = SchedulerJob(subdir=os.devnull)
tis = dr1.task_instances + dr2.task_instances
for ti in tis:
ti.state = State.SCHEDULED
session.merge(ti)
session.flush()
res = self.scheduler_job._executable_task_instances_to_queued(max_tis=1, session=session)
session.flush()
assert [ti.key for ti in res] == [tis[1].key]
session.rollback()
```
###Assistant :
Test that task instances follow execution_date order priority. If two dagruns with
different execution dates are scheduled, tasks with earliest dagrun execution date will first
be executed
|
2,413 | def dispatch(self, request, *args, **kwargs):
self.determine_active_organization(request, kwargs.get("organization_slug", None))
if self.csrf_protect:
if hasattr(self.dispatch.__func__, "csrf_exempt"):
delattr(self.dispatch.__func__, "csrf_exempt")
response = self.test_csrf(request)
if response:
return response
if self.is_auth_required(request, *args, **kwargs):
return self.handle_auth_required(request, *args, **kwargs)
if self.is_sudo_required(request, *args, **kwargs):
return self.handle_sudo_required(request, *args, **kwargs)
args, kwargs = self.convert_args(request, *args, **kwargs)
request.access = self.get_access(request, *args, **kwargs)
if not self.has_permission(request, *args, **kwargs):
return self.handle_permission_required(request, *args, **kwargs)
if "organization" in kwargs:
org = kwargs["organization"]
if self.is_member_disabled_from_limit(request, org):
return self.handle_disabled_member(org)
if self.is_not_2fa_compliant(request, org):
return self.handle_not_2fa_compliant(request, *args, **kwargs)
self.request = request
self.default_context = self.get_context_data(request, *args, **kwargs)
return self.handle(request, *args, **kwargs)
|
A note on the CSRF protection process.
Because the CSRF decorators don't work well with view subclasses, we
allow them to control whether a CSRF check is done by setting
self.csrf_protect. This has a couple of implications:
1. We need to mark this method as @csrf_exempt so that when the CSRF
middleware checks it as part of the regular middleware sequence, it
always passes.
2. If self.csrf_protect is set, we will re-run the CSRF check ourselves
using CsrfViewMiddleware().process_view()
3. But first we must remove the csrf_exempt attribute that was set by
the decorator so that the middleware doesn't shortcut and pass the
check unconditionally again.
| 105 | 89 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def dispatch(self, request, *args, **kwargs):
self.determine_active_organization(request, kwargs.get("organization_slug", None))
if self.csrf_protect:
if hasattr(self.dispatch.__func__, "csrf_exempt"):
delattr(self.dispatch.__func__, "csrf_exempt")
response = self.test_csrf(request)
if response:
return response
if self.is_auth_required(request, *args, **kwargs):
return self.handle_auth_required(request, *args, **kwargs)
if self.is_sudo_required(request, *args, **kwargs):
return self.handle_sudo_required(request, *args, **kwargs)
args, kwargs = self.convert_args(request, *args, **kwargs)
request.access = self.get_access(request, *args, **kwargs)
if not self.has_permission(request, *args, **kwargs):
return self.handle_permission_required(request, *args, **kwargs)
if "organization" in kwargs:
org = kwargs["organization"]
if self.is_member_disabled_from_limit(request, org):
return self.handle_disabled_member(org)
if self.is_not_2fa_compliant(request, org):
return self.handle_not_2fa_compliant(request, *args, **kwargs)
self.request = request
self.default_context = self.get_context_data(request, *args, **kwargs)
return self.handle(request, *args, **kwargs)
```
###Assistant :
A note on the CSRF protection process.
Because the CSRF decorators don't work well with view subclasses, we
allow them to control whether a CSRF check is done by setting
self.csrf_protect. This has a couple of implications:
1. We need to mark this method as @csrf_exempt so that when the CSRF
middleware checks it as part of the regular middleware sequence, it
always passes.
2. If self.csrf_protect is set, we will re-run the CSRF check ourselves
using CsrfViewMiddleware().process_view()
3. But first we must remove the csrf_exempt attribute that was set by
the decorator so that the middleware doesn't shortcut and pass the
check unconditionally again.
|
2,414 | def interpolator(self) -> int:
assert self._interpolator is not None
return self._interpolator
| int: The cv2 interpolator required to transpose the mask to a full frame. | 13 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def interpolator(self) -> int:
assert self._interpolator is not None
return self._interpolator
```
###Assistant : int: The cv2 interpolator required to transpose the mask to a full frame.
|
2,415 | def test_cluster_interrupt(start_connected_cluster, tmpdir):
cluster = start_connected_cluster
dirpath = str(tmpdir)
# Needs to be in scope for pytest | Tests run_experiment on cluster shutdown with actual interrupt.
This is an end-to-end test.
| 13 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_cluster_interrupt(start_connected_cluster, tmpdir):
cluster = start_connected_cluster
dirpath = str(tmpdir)
# Needs to be in scope for pytest
```
###Assistant : Tests run_experiment on cluster shutdown with actual interrupt.
This is an end-to-end test.
|
2,416 | def test_version(self) -> None:
for version in ("1.0", 1.0, 1):
result = self.parse_response({"version": version, "type": "link"})
# An empty Open Graph response is an error, ensure the URL is included.
self.assertIn("og:url", result.open_graph_result)
# A missing version should be treated as 1.0.
result = self.parse_response({"type": "link"})
self.assertIn("og:url", result.open_graph_result)
# Invalid versions should be rejected.
for version in ("2.0", "1", 1.1, 0, None, {}, []):
result = self.parse_response({"version": version, "type": "link"})
# An empty Open Graph response is an error, ensure the URL is included.
self.assertEqual({}, result.open_graph_result)
| Accept versions that are similar to 1.0 as a string or int (or missing). | 14 | 85 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_version(self) -> None:
for version in ("1.0", 1.0, 1):
result = self.parse_response({"version": version, "type": "link"})
# An empty Open Graph response is an error, ensure the URL is included.
self.assertIn("og:url", result.open_graph_result)
# A missing version should be treated as 1.0.
result = self.parse_response({"type": "link"})
self.assertIn("og:url", result.open_graph_result)
# Invalid versions should be rejected.
for version in ("2.0", "1", 1.1, 0, None, {}, []):
result = self.parse_response({"version": version, "type": "link"})
# An empty Open Graph response is an error, ensure the URL is included.
self.assertEqual({}, result.open_graph_result)
```
###Assistant : Accept versions that are similar to 1.0 as a string or int (or missing).
|
2,417 | def test_page_allowing_subpages(self):
response = self.client.get(
reverse("wagtailadmin_userbar_frontend", args=(self.event_index.id,))
)
# page allows subpages, so the 'add page' button should show
expected_url = reverse(
"wagtailadmin_pages:add_subpage", args=(self.event_index.id,)
)
needle = f
self.assertTagInHTML(needle, str(response.content))
|
<a href="{expected_url}" target="_parent" role="menuitem">
<svg class="icon icon-plus wagtail-action-icon" aria-hidden="true" focusable="false">
<use href="#icon-plus"></use>
</svg>
Add a child page
</a>
| 18 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_page_allowing_subpages(self):
response = self.client.get(
reverse("wagtailadmin_userbar_frontend", args=(self.event_index.id,))
)
# page allows subpages, so the 'add page' button should show
expected_url = reverse(
"wagtailadmin_pages:add_subpage", args=(self.event_index.id,)
)
needle = f
self.assertTagInHTML(needle, str(response.content))
```
###Assistant :
<a href="{expected_url}" target="_parent" role="menuitem">
<svg class="icon icon-plus wagtail-action-icon" aria-hidden="true" focusable="false">
<use href="#icon-plus"></use>
</svg>
Add a child page
</a>
|
2,418 | def render_output_ui(self, streamlit_app, input) -> None: # type: ignore
src, result = self.__root__
streamlit_app.subheader("Synthesized Audio")
streamlit_app.audio(result.content, format="audio/wav")
fig, ax = plt.subplots()
ax.imshow(src.mel, aspect="equal", interpolation="none")
ax.set_title("mel spectrogram(Source Audio)")
streamlit_app.pyplot(fig)
fig, ax = plt.subplots()
ax.imshow(result.mel, aspect="equal", interpolation="none")
ax.set_title("mel spectrogram(Result Audio)")
streamlit_app.pyplot(fig)
| Custom output UI.
If this method is implemented, it will be used instead of the default Output UI renderer.
| 19 | 39 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def render_output_ui(self, streamlit_app, input) -> None: # type: ignore
src, result = self.__root__
streamlit_app.subheader("Synthesized Audio")
streamlit_app.audio(result.content, format="audio/wav")
fig, ax = plt.subplots()
ax.imshow(src.mel, aspect="equal", interpolation="none")
ax.set_title("mel spectrogram(Source Audio)")
streamlit_app.pyplot(fig)
fig, ax = plt.subplots()
ax.imshow(result.mel, aspect="equal", interpolation="none")
ax.set_title("mel spectrogram(Result Audio)")
streamlit_app.pyplot(fig)
```
###Assistant : Custom output UI.
If this method is implemented, it will be used instead of the default Output UI renderer.
|
2,419 | def get_default_mesh(self):
return self._default_mesh
LayoutMap.get.__doc__ = LayoutMap.__getitem__.__doc__
@keras_export("keras.dtensor.experimental.layout_map_scope", v1=[])
@contextlib.contextmanager | Return the default `Mesh` set at instance creation.
The `Mesh` can be used to create default replicated `Layout` when there
isn't a match of the input string query.
| 28 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_default_mesh(self):
return self._default_mesh
LayoutMap.get.__doc__ = LayoutMap.__getitem__.__doc__
@keras_export("keras.dtensor.experimental.layout_map_scope", v1=[])
@contextlib.contextmanager
```
###Assistant : Return the default `Mesh` set at instance creation.
The `Mesh` can be used to create default replicated `Layout` when there
isn't a match of the input string query.
|
2,420 | def __setitem__(self, key, item): # pragma: no cover
raise NotImplementedError("Implemented by subclasses")
|
Assign `item` value to dataset located by `key`.
Parameters
----------
key : callable or tuple
The global row numbers to assign data to.
item : modin.pandas.DataFrame, modin.pandas.Series or scalar
Value that should be assigned to located dataset.
See Also
--------
pandas.DataFrame.iloc
| 41 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __setitem__(self, key, item): # pragma: no cover
raise NotImplementedError("Implemented by subclasses")
```
###Assistant :
Assign `item` value to dataset located by `key`.
Parameters
----------
key : callable or tuple
The global row numbers to assign data to.
item : modin.pandas.DataFrame, modin.pandas.Series or scalar
Value that should be assigned to located dataset.
See Also
--------
pandas.DataFrame.iloc
|
2,421 | def test_bad_origin_cannot_be_parsed(self):
req = self._get_POST_request_with_token()
req.META["HTTP_HOST"] = "www.example.com"
req.META["HTTP_ORIGIN"] = "https://["
mw = CsrfViewMiddleware(post_form_view)
self._check_referer_rejects(mw, req)
self.assertIs(mw._origin_verified(req), False)
with self.assertLogs("django.security.csrf", "WARNING") as cm:
response = mw.process_view(req, post_form_view, (), {})
self.assertEqual(response.status_code, 403)
msg = REASON_BAD_ORIGIN % req.META["HTTP_ORIGIN"]
self.assertEqual(cm.records[0].getMessage(), "Forbidden (%s): " % msg)
|
A POST request with an origin that can't be parsed by urlparse() is
rejected.
| 14 | 42 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_bad_origin_cannot_be_parsed(self):
req = self._get_POST_request_with_token()
req.META["HTTP_HOST"] = "www.example.com"
req.META["HTTP_ORIGIN"] = "https://["
mw = CsrfViewMiddleware(post_form_view)
self._check_referer_rejects(mw, req)
self.assertIs(mw._origin_verified(req), False)
with self.assertLogs("django.security.csrf", "WARNING") as cm:
response = mw.process_view(req, post_form_view, (), {})
self.assertEqual(response.status_code, 403)
msg = REASON_BAD_ORIGIN % req.META["HTTP_ORIGIN"]
self.assertEqual(cm.records[0].getMessage(), "Forbidden (%s): " % msg)
```
###Assistant :
A POST request with an origin that can't be parsed by urlparse() is
rejected.
|
2,422 | def move_from_center(coord, centers, deltas, axmask=(True, True, True)):
return _move_from_center(coord, centers, deltas, axmask=axmask)
|
For each coordinate where *axmask* is True, move *coord* away from
*centers* by *deltas*.
| 14 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def move_from_center(coord, centers, deltas, axmask=(True, True, True)):
return _move_from_center(coord, centers, deltas, axmask=axmask)
```
###Assistant :
For each coordinate where *axmask* is True, move *coord* away from
*centers* by *deltas*.
|
2,423 | def get_execution_info(self, job_id, function_descriptor):
function_id = function_descriptor.function_id
# If the function has already been loaded,
# There's no need to load again
if function_id in self._function_execution_info:
return self._function_execution_info[function_id]
if self._worker.load_code_from_local:
# Load function from local code.
if not function_descriptor.is_actor_method():
# If the function is not able to be loaded,
# try to load it from GCS,
# even if load_code_from_local is set True
if self._load_function_from_local(function_descriptor) is True:
return self._function_execution_info[function_id]
# Load function from GCS.
# Wait until the function to be executed has actually been
# registered on this worker. We will push warnings to the user if
# we spend too long in this loop.
# The driver function may not be found in sys.path. Try to load
# the function from GCS.
with profiling.profile("wait_for_function"):
self._wait_for_function(function_descriptor, job_id)
try:
function_id = function_descriptor.function_id
info = self._function_execution_info[function_id]
except KeyError as e:
message = (
"Error occurs in get_execution_info: "
"job_id: %s, function_descriptor: %s. Message: %s"
% (job_id, function_descriptor, e)
)
raise KeyError(message)
return info
| Get the FunctionExecutionInfo of a remote function.
Args:
job_id: ID of the job that the function belongs to.
function_descriptor: The FunctionDescriptor of the function to get.
Returns:
A FunctionExecutionInfo object.
| 30 | 162 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_execution_info(self, job_id, function_descriptor):
function_id = function_descriptor.function_id
# If the function has already been loaded,
# There's no need to load again
if function_id in self._function_execution_info:
return self._function_execution_info[function_id]
if self._worker.load_code_from_local:
# Load function from local code.
if not function_descriptor.is_actor_method():
# If the function is not able to be loaded,
# try to load it from GCS,
# even if load_code_from_local is set True
if self._load_function_from_local(function_descriptor) is True:
return self._function_execution_info[function_id]
# Load function from GCS.
# Wait until the function to be executed has actually been
# registered on this worker. We will push warnings to the user if
# we spend too long in this loop.
# The driver function may not be found in sys.path. Try to load
# the function from GCS.
with profiling.profile("wait_for_function"):
self._wait_for_function(function_descriptor, job_id)
try:
function_id = function_descriptor.function_id
info = self._function_execution_info[function_id]
except KeyError as e:
message = (
"Error occurs in get_execution_info: "
"job_id: %s, function_descriptor: %s. Message: %s"
% (job_id, function_descriptor, e)
)
raise KeyError(message)
return info
```
###Assistant : Get the FunctionExecutionInfo of a remote function.
Args:
job_id: ID of the job that the function belongs to.
function_descriptor: The FunctionDescriptor of the function to get.
Returns:
A FunctionExecutionInfo object.
|
2,424 | def exclude_all_devices(self) -> bool:
return all(idx in _EXCLUDE_DEVICES for idx in range(self._device_count))
| bool: ``True`` if all GPU devices have been explicitly disabled otherwise ``False`` | 12 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def exclude_all_devices(self) -> bool:
return all(idx in _EXCLUDE_DEVICES for idx in range(self._device_count))
```
###Assistant : bool: ``True`` if all GPU devices have been explicitly disabled otherwise ``False``
|
2,425 | def serialize_labels(self, resources):
labels = []
for label in resources:
if label in AlexaGlobalCatalog.__dict__.values():
label = {"@type": "asset", "value": {"assetId": label}}
else:
label = {"@type": "text", "value": {"text": label, "locale": "en-US"}}
labels.append(label)
return {"friendlyNames": labels}
| Return resource label objects for friendlyNames serialized for an API response. | 11 | 35 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def serialize_labels(self, resources):
labels = []
for label in resources:
if label in AlexaGlobalCatalog.__dict__.values():
label = {"@type": "asset", "value": {"assetId": label}}
else:
label = {"@type": "text", "value": {"text": label, "locale": "en-US"}}
labels.append(label)
return {"friendlyNames": labels}
```
###Assistant : Return resource label objects for friendlyNames serialized for an API response.
|
2,426 | def _decode_bitonal(self):
data = bytearray()
total_bytes = self.state.xsize * self.state.ysize
comment_spans = False
while len(data) != total_bytes:
block = self._read_block() # read next block
if not block:
# eof
break
while block and comment_spans:
comment_end = self._find_comment_end(block)
if comment_end != -1: # comment ends in this block
block = block[comment_end + 1 :] # delete tail of previous comment
break
else: # comment spans whole block
block = self._read_block()
block, comment_spans = self._ignore_comments(block)
tokens = b"".join(block.split())
for token in tokens:
if token not in (48, 49):
raise ValueError(f"Invalid token for this mode: {bytes([token])}")
data = (data + tokens)[:total_bytes]
invert = bytes.maketrans(b"01", b"\xFF\x00")
return data.translate(invert)
|
This is a separate method because in the plain PBM format, all data tokens are
exactly one byte, so the inter-token whitespace is optional.
| 24 | 104 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _decode_bitonal(self):
data = bytearray()
total_bytes = self.state.xsize * self.state.ysize
comment_spans = False
while len(data) != total_bytes:
block = self._read_block() # read next block
if not block:
# eof
break
while block and comment_spans:
comment_end = self._find_comment_end(block)
if comment_end != -1: # comment ends in this block
block = block[comment_end + 1 :] # delete tail of previous comment
break
else: # comment spans whole block
block = self._read_block()
block, comment_spans = self._ignore_comments(block)
tokens = b"".join(block.split())
for token in tokens:
if token not in (48, 49):
raise ValueError(f"Invalid token for this mode: {bytes([token])}")
data = (data + tokens)[:total_bytes]
invert = bytes.maketrans(b"01", b"\xFF\x00")
return data.translate(invert)
```
###Assistant :
This is a separate method because in the plain PBM format, all data tokens are
exactly one byte, so the inter-token whitespace is optional.
|
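The inversion step above can be shown in isolation: plain-PBM tokens are ASCII "0"/"1", and `bytes.maketrans`/`translate` maps them to 0xFF/0x00 pixel bytes. A standalone sketch with a hand-written token stream (an invented input, not taken from the row):
```Python
# "1" means a black pixel in PBM, so "0"/"1" tokens map to 0xFF/0x00 bytes.
raw = b"0 1 1 0\n1 0 0 1"                  # hand-written plain-PBM tokens
tokens = b"".join(raw.split())             # whitespace between tokens is optional
invert = bytes.maketrans(b"01", b"\xFF\x00")
pixels = tokens.translate(invert)
print(pixels)   # b'\xff\x00\x00\xff\x00\xff\xff\x00'
```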
2,427 | def recast_to_symbols(eqs, symbols):
if not iterable(eqs) and iterable(symbols):
raise ValueError('Both eqs and symbols must be iterable')
orig = list(symbols)
symbols = list(ordered(symbols))
swap_sym = {}
i = 0
for j, s in enumerate(symbols):
if not isinstance(s, Symbol) and s not in swap_sym:
swap_sym[s] = Dummy('X%d' % i)
i += 1
new_f = []
for i in eqs:
isubs = getattr(i, 'subs', None)
if isubs is not None:
new_f.append(isubs(swap_sym))
else:
new_f.append(i)
restore = {v: k for k, v in swap_sym.items()}
return new_f, [swap_sym.get(i, i) for i in orig], restore
|
Return (e, s, d) where e and s are versions of *eqs* and
*symbols* in which any non-Symbol objects in *symbols* have
been replaced with generic Dummy symbols and d is a dictionary
that can be used to restore the original expressions.
Examples
========
>>> from sympy.solvers.solvers import recast_to_symbols
>>> from sympy import symbols, Function
>>> x, y = symbols('x y')
>>> fx = Function('f')(x)
>>> eqs, syms = [fx + 1, x, y], [fx, y]
>>> e, s, d = recast_to_symbols(eqs, syms); (e, s, d)
([_X0 + 1, x, y], [_X0, y], {_X0: f(x)})
The original equations and symbols can be restored using d:
>>> assert [i.xreplace(d) for i in eqs] == eqs
>>> assert [d.get(i, i) for i in s] == syms
| 124 | 88 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def recast_to_symbols(eqs, symbols):
if not iterable(eqs) and iterable(symbols):
raise ValueError('Both eqs and symbols must be iterable')
orig = list(symbols)
symbols = list(ordered(symbols))
swap_sym = {}
i = 0
for j, s in enumerate(symbols):
if not isinstance(s, Symbol) and s not in swap_sym:
swap_sym[s] = Dummy('X%d' % i)
i += 1
new_f = []
for i in eqs:
isubs = getattr(i, 'subs', None)
if isubs is not None:
new_f.append(isubs(swap_sym))
else:
new_f.append(i)
restore = {v: k for k, v in swap_sym.items()}
return new_f, [swap_sym.get(i, i) for i in orig], restore
```
###Assistant :
Return (e, s, d) where e and s are versions of *eqs* and
*symbols* in which any non-Symbol objects in *symbols* have
been replaced with generic Dummy symbols and d is a dictionary
that can be used to restore the original expressions.
Examples
========
>>> from sympy.solvers.solvers import recast_to_symbols
>>> from sympy import symbols, Function
>>> x, y = symbols('x y')
>>> fx = Function('f')(x)
>>> eqs, syms = [fx + 1, x, y], [fx, y]
>>> e, s, d = recast_to_symbols(eqs, syms); (e, s, d)
([_X0 + 1, x, y], [_X0, y], {_X0: f(x)})
The original equations and symbols can be restored using d:
>>> assert [i.xreplace(d) for i in eqs] == eqs
>>> assert [d.get(i, i) for i in s] == syms
|
2,428 | def model_is_indexable(cls, model, allow_child_models=False):
if getattr(model, "wagtail_reference_index_ignore", False):
return False
# Don't check any models that have a parental key, references from these will be collected from the parent
if not allow_child_models and any(
[isinstance(field, ParentalKey) for field in model._meta.get_fields()]
):
return False
for field in model._meta.get_fields():
if field.is_relation and field.many_to_one:
if getattr(field, "wagtail_reference_index_ignore", False):
continue
if getattr(
field.related_model, "wagtail_reference_index_ignore", False
):
continue
if isinstance(field, (ParentalKey, GenericRel)):
continue
return True
if hasattr(field, "extract_references"):
return True
if issubclass(model, ClusterableModel):
for child_relation in get_all_child_relations(model):
if cls.model_is_indexable(
child_relation.related_model,
allow_child_models=True,
):
return True
return False
|
Returns True if the given model may have outbound references that we would be interested in recording in the index.
Args:
model (type): a Django model class
allow_child_models (boolean): Child models are not indexable on their own. If you are looking at
a child model from the perspective of indexing it through its parent,
set this to True to disable checking for this. Default False.
| 65 | 91 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def model_is_indexable(cls, model, allow_child_models=False):
if getattr(model, "wagtail_reference_index_ignore", False):
return False
# Don't check any models that have a parental key, references from these will be collected from the parent
if not allow_child_models and any(
[isinstance(field, ParentalKey) for field in model._meta.get_fields()]
):
return False
for field in model._meta.get_fields():
if field.is_relation and field.many_to_one:
if getattr(field, "wagtail_reference_index_ignore", False):
continue
if getattr(
field.related_model, "wagtail_reference_index_ignore", False
):
continue
if isinstance(field, (ParentalKey, GenericRel)):
continue
return True
if hasattr(field, "extract_references"):
return True
if issubclass(model, ClusterableModel):
for child_relation in get_all_child_relations(model):
if cls.model_is_indexable(
child_relation.related_model,
allow_child_models=True,
):
return True
return False
```
###Assistant :
Returns True if the given model may have outbound references that we would be interested in recording in the index.
Args:
model (type): a Django model class
allow_child_models (boolean): Child models are not indexable on their own. If you are looking at
a child model from the perspective of indexing it through its parent,
set this to True to disable checking for this. Default False.
|
2,429 | def get_encodings_from_content(content):
warnings.warn(
(
"In requests 3.0, get_encodings_from_content will be removed. For "
"more information, please see the discussion on issue #2266. (This"
" warning should only appear once.)"
),
DeprecationWarning,
)
charset_re = re.compile(r'<meta.*?charset=["\']*(.+?)["\'>]', flags=re.I)
pragma_re = re.compile(r'<meta.*?content=["\']*;?charset=(.+?)["\'>]', flags=re.I)
xml_re = re.compile(r'^<\?xml.*?encoding=["\']*(.+?)["\'>]')
return (
charset_re.findall(content)
+ pragma_re.findall(content)
+ xml_re.findall(content)
)
| Returns encodings from given content string.
:param content: bytestring to extract encodings from.
| 13 | 51 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_encodings_from_content(content):
warnings.warn(
(
"In requests 3.0, get_encodings_from_content will be removed. For "
"more information, please see the discussion on issue #2266. (This"
" warning should only appear once.)"
),
DeprecationWarning,
)
charset_re = re.compile(r'<meta.*?charset=["\']*(.+?)["\'>]', flags=re.I)
pragma_re = re.compile(r'<meta.*?content=["\']*;?charset=(.+?)["\'>]', flags=re.I)
xml_re = re.compile(r'^<\?xml.*?encoding=["\']*(.+?)["\'>]')
return (
charset_re.findall(content)
+ pragma_re.findall(content)
+ xml_re.findall(content)
)
```
###Assistant : Returns encodings from given content string.
:param content: bytestring to extract encodings from.
|
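A quick sketch of the `<meta charset=...>` regex above applied to an invented HTML snippet:
```Python
# The charset regex extracts the declared encoding from a meta tag.
import re

content = '<html><head><meta charset="utf-8"></head><body>hi</body></html>'
charset_re = re.compile(r'<meta.*?charset=["\']*(.+?)["\'>]', flags=re.I)
print(charset_re.findall(content))   # ['utf-8']
```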
2,430 | def _scale_axis_limits(self, scale_x, scale_y, scale_z):
# Get the axis limits and centers
minx, maxx, miny, maxy, minz, maxz = self.get_w_lims()
cx = (maxx + minx)/2
cy = (maxy + miny)/2
cz = (maxz + minz)/2
# Scale the data range
dx = (maxx - minx)*scale_x
dy = (maxy - miny)*scale_y
dz = (maxz - minz)*scale_z
# Set the scaled axis limits
self.set_xlim3d(cx - dx/2, cx + dx/2)
self.set_ylim3d(cy - dy/2, cy + dy/2)
self.set_zlim3d(cz - dz/2, cz + dz/2)
|
Keeping the center of the x, y, and z data axes fixed, scale their
limits by scale factors. A scale factor > 1 zooms out and a scale
factor < 1 zooms in.
Parameters
----------
scale_x : float
Scale factor for the x data axis.
scale_y : float
Scale factor for the y data axis.
scale_z : float
Scale factor for the z data axis.
| 65 | 79 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _scale_axis_limits(self, scale_x, scale_y, scale_z):
# Get the axis limits and centers
minx, maxx, miny, maxy, minz, maxz = self.get_w_lims()
cx = (maxx + minx)/2
cy = (maxy + miny)/2
cz = (maxz + minz)/2
# Scale the data range
dx = (maxx - minx)*scale_x
dy = (maxy - miny)*scale_y
dz = (maxz - minz)*scale_z
# Set the scaled axis limits
self.set_xlim3d(cx - dx/2, cx + dx/2)
self.set_ylim3d(cy - dy/2, cy + dy/2)
self.set_zlim3d(cz - dz/2, cz + dz/2)
```
###Assistant :
Keeping the center of the x, y, and z data axes fixed, scale their
limits by scale factors. A scale factor > 1 zooms out and a scale
factor < 1 zooms in.
Parameters
----------
scale_x : float
Scale factor for the x data axis.
scale_y : float
Scale factor for the y data axis.
scale_z : float
Scale factor for the z data axis.
|
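The center-preserving scaling above can be sketched without matplotlib: the midpoint stays fixed while the range is multiplied by the scale factor, so a factor above 1 zooms out and one below 1 zooms in.
```Python
# Same arithmetic as the method above, for a single axis.
def scale_limits(lo, hi, scale):
    center = (lo + hi) / 2
    half_range = (hi - lo) * scale / 2
    return center - half_range, center + half_range

print(scale_limits(0.0, 10.0, 2.0))   # (-5.0, 15.0)  zoom out
print(scale_limits(0.0, 10.0, 0.5))   # (2.5, 7.5)    zoom in
```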
2,431 | def layers(self) -> tuple[str, ...]:
for node in self.ancestors:
if not isinstance(node, Widget):
break
if node.styles.has_rule("layers"):
return node.styles.layers
return ("default",)
| Layers from parent.
Returns:
tuple[str, ...]: Tuple of layer names.
| 11 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def layers(self) -> tuple[str, ...]:
for node in self.ancestors:
if not isinstance(node, Widget):
break
if node.styles.has_rule("layers"):
return node.styles.layers
return ("default",)
```
###Assistant : Layers from parent.
Returns:
tuple[str, ...]: Tuple of layer names.
|
2,432 | def get_roi_head_cfg(fname):
config = _get_config_module(fname)
model = copy.deepcopy(config.model)
roi_head = model.roi_head
train_cfg = None if model.train_cfg is None else model.train_cfg.rcnn
test_cfg = None if model.test_cfg is None else model.test_cfg.rcnn
roi_head.update(dict(train_cfg=train_cfg, test_cfg=test_cfg))
return roi_head
| Grab configs necessary to create a roi_head.
These are deep copied to allow for safe modification of parameters without
influencing other tests.
| 22 | 33 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_roi_head_cfg(fname):
config = _get_config_module(fname)
model = copy.deepcopy(config.model)
roi_head = model.roi_head
train_cfg = None if model.train_cfg is None else model.train_cfg.rcnn
test_cfg = None if model.test_cfg is None else model.test_cfg.rcnn
roi_head.update(dict(train_cfg=train_cfg, test_cfg=test_cfg))
return roi_head
```
###Assistant : Grab configs necessary to create a roi_head.
These are deep copied to allow for safe modification of parameters without
influencing other tests.
|
2,433 | def print_as_log(*args, **kwargs):
from prefect.context import FlowRunContext, TaskRunContext
context = TaskRunContext.get() or FlowRunContext.get()
if not context or not context.log_prints:
return print(*args, **kwargs)
logger = get_run_logger()
# Print to an in-memory buffer; so we do not need to implement `print`
buffer = io.StringIO()
kwargs["file"] = buffer
print(*args, **kwargs)
# Remove trailing whitespace to prevent duplicates
logger.info(buffer.getvalue().rstrip())
@contextmanager |
A patch for `print` to send printed messages to the Prefect run logger.
If no run is active, `print` will behave as if it were not patched.
| 27 | 56 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def print_as_log(*args, **kwargs):
from prefect.context import FlowRunContext, TaskRunContext
context = TaskRunContext.get() or FlowRunContext.get()
if not context or not context.log_prints:
return print(*args, **kwargs)
logger = get_run_logger()
# Print to an in-memory buffer; so we do not need to implement `print`
buffer = io.StringIO()
kwargs["file"] = buffer
print(*args, **kwargs)
# Remove trailing whitespace to prevent duplicates
logger.info(buffer.getvalue().rstrip())
@contextmanager
```
###Assistant :
A patch for `print` to send printed messages to the Prefect run logger.
If no run is active, `print` will behave as if it were not patched.
|
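The buffering trick above, sketched without Prefect: print into an in-memory `StringIO`, then strip the trailing newline before handing the text to a logger (here the stdlib `logging` module stands in for Prefect's run logger).
```Python
# Capture print() output in a buffer, then log it without the trailing newline.
import io
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("sketch")

buffer = io.StringIO()
print("hello", "world", file=buffer)      # print adds a trailing "\n"
logger.info(buffer.getvalue().rstrip())   # logged as "hello world"
```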
2,434 | def test_image():
# Test fails for matplotlib 1.5+ because the size of the image
# generated by matplotlib has changed.
if Version(matplotlib.__version__) == Version("3.4.1"):
image_size = 432
else:
pytest.skip("Test fails for older matplotlib")
np.random.seed(0) # image size depends on the seed
fig, ax = plt.subplots(figsize=(2, 2))
ax.imshow(np.random.random((10, 10)), cmap=plt.cm.jet, interpolation="nearest")
_assert_output_equal(
fake_renderer_output(fig, FakeRenderer),
f,
)
|
opening figure
opening axes
draw image of size {image_size}
closing axes
closing figure
| 13 | 55 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_image():
# Test fails for matplotlib 1.5+ because the size of the image
# generated by matplotlib has changed.
if Version(matplotlib.__version__) == Version("3.4.1"):
image_size = 432
else:
pytest.skip("Test fails for older matplotlib")
np.random.seed(0) # image size depends on the seed
fig, ax = plt.subplots(figsize=(2, 2))
ax.imshow(np.random.random((10, 10)), cmap=plt.cm.jet, interpolation="nearest")
_assert_output_equal(
fake_renderer_output(fig, FakeRenderer),
f,
)
```
###Assistant :
opening figure
opening axes
draw image of size {image_size}
closing axes
closing figure
|
2,435 | def ignore_cidr(vm_, ip):
from ipaddress import ip_address, ip_network
cidrs = config.get_cloud_config_value(
"ignore_cidr", vm_, __opts__, default=[], search_global=False
)
if cidrs and isinstance(cidrs, str):
cidrs = [cidrs]
for cidr in cidrs or []:
if ip_address(ip) in ip_network(cidr):
log.warning("IP %r found within %r; ignoring it.", ip, cidr)
return True
return False
|
Return True if we are to ignore the specified IP.
| 10 | 48 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def ignore_cidr(vm_, ip):
from ipaddress import ip_address, ip_network
cidrs = config.get_cloud_config_value(
"ignore_cidr", vm_, __opts__, default=[], search_global=False
)
if cidrs and isinstance(cidrs, str):
cidrs = [cidrs]
for cidr in cidrs or []:
if ip_address(ip) in ip_network(cidr):
log.warning("IP %r found within %r; ignoring it.", ip, cidr)
return True
return False
```
###Assistant :
Return True if we are to ignore the specified IP.
|
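The membership test at the heart of `ignore_cidr` relies only on the standard-library `ipaddress` module; a self-contained sketch with made-up CIDR values:

```python
from ipaddress import ip_address, ip_network

ignored_cidrs = ["10.0.0.0/8", "192.168.1.0/24"]  # example values only

def should_ignore(ip: str) -> bool:
    # True if the address falls inside any ignored network.
    return any(ip_address(ip) in ip_network(cidr) for cidr in ignored_cidrs)

print(should_ignore("10.1.2.3"))  # True
print(should_ignore("8.8.8.8"))   # False
```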
2,436 | def encode_nested_example(schema, obj):
# Nested structures: we allow dict, list/tuples, sequences
if isinstance(schema, dict):
return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)}
elif isinstance(schema, (list, tuple)):
sub_schema = schema[0]
if obj is None:
return None
else:
if len(obj) > 0:
for first_elmt in obj:
if _check_non_null_non_empty_recursive(first_elmt, sub_schema):
break
if encode_nested_example(sub_schema, first_elmt) != first_elmt:
return [encode_nested_example(sub_schema, o) for o in obj]
return list(obj)
elif isinstance(schema, Sequence):
# We allow to reverse list of dict => dict of list for compatibility with tfds
if isinstance(schema.feature, dict):
# dict of list to fill
list_dict = {}
if isinstance(obj, (list, tuple)):
# obj is a list of dict
for k, dict_tuples in zip_dict(schema.feature, *obj):
list_dict[k] = [encode_nested_example(dict_tuples[0], o) for o in dict_tuples[1:]]
return list_dict
else:
# obj is a single dict
for k, (sub_schema, sub_objs) in zip_dict(schema.feature, obj):
list_dict[k] = [encode_nested_example(sub_schema, o) for o in sub_objs]
return list_dict
# schema.feature is not a dict
if isinstance(obj, str): # don't interpret a string as a list
raise ValueError(f"Got a string but expected a list instead: '{obj}'")
if obj is None:
return None
else:
if len(obj) > 0:
for first_elmt in obj:
if _check_non_null_non_empty_recursive(first_elmt, schema.feature):
break
# be careful when comparing tensors here
if not isinstance(first_elmt, list) or encode_nested_example(schema.feature, first_elmt) != first_elmt:
return [encode_nested_example(schema.feature, o) for o in obj]
return list(obj)
# Object with special encoding:
# ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
elif isinstance(schema, (Audio, Image, ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)):
return schema.encode_example(obj) if obj is not None else None
# Other object should be directly convertible to a native Arrow type (like Translation and Translation)
return obj
| Encode a nested example.
This is used since some features (in particular ClassLabel) have some logic during encoding.
To avoid iterating over possibly long lists, it first checks (recursively) if the first element that is not None or empty (if it is a sequence) has to be encoded.
If the first element needs to be encoded, then all the elements of the list will be encoded, otherwise they'll stay the same.
| 71 | 270 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def encode_nested_example(schema, obj):
# Nested structures: we allow dict, list/tuples, sequences
if isinstance(schema, dict):
return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)}
elif isinstance(schema, (list, tuple)):
sub_schema = schema[0]
if obj is None:
return None
else:
if len(obj) > 0:
for first_elmt in obj:
if _check_non_null_non_empty_recursive(first_elmt, sub_schema):
break
if encode_nested_example(sub_schema, first_elmt) != first_elmt:
return [encode_nested_example(sub_schema, o) for o in obj]
return list(obj)
elif isinstance(schema, Sequence):
# We allow to reverse list of dict => dict of list for compatibility with tfds
if isinstance(schema.feature, dict):
# dict of list to fill
list_dict = {}
if isinstance(obj, (list, tuple)):
# obj is a list of dict
for k, dict_tuples in zip_dict(schema.feature, *obj):
list_dict[k] = [encode_nested_example(dict_tuples[0], o) for o in dict_tuples[1:]]
return list_dict
else:
# obj is a single dict
for k, (sub_schema, sub_objs) in zip_dict(schema.feature, obj):
list_dict[k] = [encode_nested_example(sub_schema, o) for o in sub_objs]
return list_dict
# schema.feature is not a dict
if isinstance(obj, str): # don't interpret a string as a list
raise ValueError(f"Got a string but expected a list instead: '{obj}'")
if obj is None:
return None
else:
if len(obj) > 0:
for first_elmt in obj:
if _check_non_null_non_empty_recursive(first_elmt, schema.feature):
break
# be careful when comparing tensors here
if not isinstance(first_elmt, list) or encode_nested_example(schema.feature, first_elmt) != first_elmt:
return [encode_nested_example(schema.feature, o) for o in obj]
return list(obj)
# Object with special encoding:
# ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
elif isinstance(schema, (Audio, Image, ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)):
return schema.encode_example(obj) if obj is not None else None
# Other object should be directly convertible to a native Arrow type (like Translation and Translation)
return obj
```
###Assistant : Encode a nested example.
This is used since some features (in particular ClassLabel) have some logic during encoding.
To avoid iterating over possibly long lists, it first checks (recursively) if the first element that is not None or empty (if it is a sequence) has to be encoded.
If the first element needs to be encoded, then all the elements of the list will be encoded, otherwise they'll stay the same.
|
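A heavily simplified sketch of the idea behind `encode_nested_example` — walk a schema recursively and only re-encode a list when its first element actually changes — using plain callables in place of the `datasets` feature classes. Everything below is illustrative only.

```python
def encode_nested(schema, obj):
    # schema is a dict of sub-schemas, a one-element list, or a callable encoder.
    if isinstance(schema, dict):
        return {k: encode_nested(schema[k], obj[k]) for k in schema}
    if isinstance(schema, list):
        if obj is None:
            return None
        sub = schema[0]
        # Only pay the cost of encoding every element if the first one changes.
        if obj and encode_nested(sub, obj[0]) != obj[0]:
            return [encode_nested(sub, o) for o in obj]
        return list(obj)
    return schema(obj) if obj is not None else None

schema = {"label": str.upper, "scores": [float]}
print(encode_nested(schema, {"label": "cat", "scores": ["1", "2.5"]}))
# {'label': 'CAT', 'scores': [1.0, 2.5]}
```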
2,437 | def _configure_matplotlib(cls):
rcParams["keymap.fullscreen"] = [k for k in rcParams["keymap.fullscreen"] if k != "f"]
rcParams["keymap.save"] = [k for k in rcParams["keymap.save"] if k != "s"]
rcParams["keymap.home"] = [k for k in rcParams["keymap.home"] if k != "r"]
rcParams["figure.raise_window"] = False
| Remove `F`, 'S' and 'R' from their default bindings and stop Matplotlib from stealing
focus | 15 | 38 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _configure_matplotlib(cls):
rcParams["keymap.fullscreen"] = [k for k in rcParams["keymap.fullscreen"] if k != "f"]
rcParams["keymap.save"] = [k for k in rcParams["keymap.save"] if k != "s"]
rcParams["keymap.home"] = [k for k in rcParams["keymap.home"] if k != "r"]
rcParams["figure.raise_window"] = False
```
###Assistant : Remove `F`, 'S' and 'R' from their default bindings and stop Matplotlib from stealing
focus
|
2,438 | def set_permission_cache(user, key, value):
from django.core.cache import cache
# store this key, so we can clean it when required
cache_key = get_cache_key(user, key)
cache.set(cache_key, value,
get_cms_setting('CACHE_DURATIONS')['permissions'],
version=get_cache_permission_version())
|
Helper method for storing values in cache. Stores used keys so
all of them can be cleaned when clean_permission_cache gets called.
| 21 | 27 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def set_permission_cache(user, key, value):
from django.core.cache import cache
# store this key, so we can clean it when required
cache_key = get_cache_key(user, key)
cache.set(cache_key, value,
get_cms_setting('CACHE_DURATIONS')['permissions'],
version=get_cache_permission_version())
```
###Assistant :
Helper method for storing values in cache. Stores used keys so
all of them can be cleaned when clean_permission_cache gets called.
|
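The docstring's key idea — remember every cache key you write so they can all be invalidated later — can be shown without Django; the names below are illustrative stand-ins.

```python
_cache = {}         # stand-in for the real cache backend
_used_keys = set()  # keys we have written, so they can be cleared in one pass

def set_permission_value(user_id: int, key: str, value) -> None:
    cache_key = f"permissions:{user_id}:{key}"
    _used_keys.add(cache_key)
    _cache[cache_key] = value

def clear_permission_values() -> None:
    for cache_key in _used_keys:
        _cache.pop(cache_key, None)
    _used_keys.clear()

set_permission_value(1, "can_edit", True)
clear_permission_values()
print(_cache)  # {}
```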
2,439 | def cookies(self) -> multidict.MultiDictView[str, tuple[str, multidict.MultiDict[str, Optional[str]]]]:
return multidict.MultiDictView(
self._get_cookies,
self._set_cookies
)
|
The response cookies. A possibly empty `MultiDictView`, where the keys are cookie
name strings, and values are `(cookie value, attributes)` tuples. Within
attributes, unary attributes (e.g. `HTTPOnly`) are indicated by a `None` value.
Modifications to the MultiDictView update `Response.headers`, and vice versa.
*Warning:* Changes to `attributes` will not be picked up unless you also reassign
the `(cookie value, attributes)` tuple directly in the `MultiDictView`.
| 64 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def cookies(self) -> multidict.MultiDictView[str, tuple[str, multidict.MultiDict[str, Optional[str]]]]:
return multidict.MultiDictView(
self._get_cookies,
self._set_cookies
)
```
###Assistant :
The response cookies. A possibly empty `MultiDictView`, where the keys are cookie
name strings, and values are `(cookie value, attributes)` tuples. Within
attributes, unary attributes (e.g. `HTTPOnly`) are indicated by a `None` value.
Modifications to the MultiDictView update `Response.headers`, and vice versa.
*Warning:* Changes to `attributes` will not be picked up unless you also reassign
the `(cookie value, attributes)` tuple directly in the `MultiDictView`.
|
2,440 | def _normalize_entries(entries, separators=None):
norm_files = {}
for entry in entries:
norm_files[normalize_file(entry.path, separators=separators)] = entry
return norm_files
|
Normalizes the entry paths to use the POSIX path separator.
*entries* (:class:`~collections.abc.Iterable` of :class:`.TreeEntry`)
contains the entries to be normalized.
*separators* (:class:`~collections.abc.Collection` of :class:`str`; or
:data:`None`) optionally contains the path separators to normalize.
See :func:`normalize_file` for more information.
Returns a :class:`dict` mapping each normalized file path (:class:`str`)
to the entry (:class:`.TreeEntry`)
| 52 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _normalize_entries(entries, separators=None):
norm_files = {}
for entry in entries:
norm_files[normalize_file(entry.path, separators=separators)] = entry
return norm_files
```
###Assistant :
Normalizes the entry paths to use the POSIX path separator.
*entries* (:class:`~collections.abc.Iterable` of :class:`.TreeEntry`)
contains the entries to be normalized.
*separators* (:class:`~collections.abc.Collection` of :class:`str`; or
:data:`None`) optionally contains the path separators to normalize.
See :func:`normalize_file` for more information.
Returns a :class:`dict` mapping each normalized file path (:class:`str`)
to the entry (:class:`.TreeEntry`)
|
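The normalization referred to by `normalize_file` boils down to replacing alternative separators with the POSIX `/`; a tiny standalone sketch (`normalize_path` is an illustrative stand-in, not the real pathspec helper):

```python
def normalize_path(path: str, separators=("\\",)) -> str:
    # Replace each alternative separator with the POSIX one and strip a leading "./".
    for sep in separators:
        path = path.replace(sep, "/")
    return path[2:] if path.startswith("./") else path

entries = ["src\\main.py", "./README.md", "docs/index.rst"]
print({normalize_path(p): p for p in entries})
# {'src/main.py': 'src\\main.py', 'README.md': './README.md', 'docs/index.rst': 'docs/index.rst'}
```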
2,441 | def assertXMLNotEqual(self, xml1, xml2, msg=None):
try:
result = compare_xml(xml1, xml2)
except Exception as e:
standardMsg = "First or second argument is not valid XML\n%s" % e
self.fail(self._formatMessage(msg, standardMsg))
else:
if result:
standardMsg = "%s == %s" % (
safe_repr(xml1, True),
safe_repr(xml2, True),
)
self.fail(self._formatMessage(msg, standardMsg))
|
Assert that two XML snippets are not semantically equivalent.
Whitespace in most cases is ignored and attribute ordering is not
significant. The arguments must be valid XML.
| 27 | 45 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def assertXMLNotEqual(self, xml1, xml2, msg=None):
try:
result = compare_xml(xml1, xml2)
except Exception as e:
standardMsg = "First or second argument is not valid XML\n%s" % e
self.fail(self._formatMessage(msg, standardMsg))
else:
if result:
standardMsg = "%s == %s" % (
safe_repr(xml1, True),
safe_repr(xml2, True),
)
self.fail(self._formatMessage(msg, standardMsg))
```
###Assistant :
Assert that two XML snippets are not semantically equivalent.
Whitespace in most cases is ignored and attribute ordering is not
significant. The arguments must be valid XML.
|
2,442 | def token_kwargs(bits, parser, support_legacy=False):
if not bits:
return {}
match = kwarg_re.match(bits[0])
kwarg_format = match and match[1]
if not kwarg_format:
if not support_legacy:
return {}
if len(bits) < 3 or bits[1] != "as":
return {}
kwargs = {}
while bits:
if kwarg_format:
match = kwarg_re.match(bits[0])
if not match or not match[1]:
return kwargs
key, value = match.groups()
del bits[:1]
else:
if len(bits) < 3 or bits[1] != "as":
return kwargs
key, value = bits[2], bits[0]
del bits[:3]
kwargs[key] = parser.compile_filter(value)
if bits and not kwarg_format:
if bits[0] != "and":
return kwargs
del bits[:1]
return kwargs
|
Parse token keyword arguments and return a dictionary of the arguments
retrieved from the ``bits`` token list.
`bits` is a list containing the remainder of the token (split by spaces)
that is to be checked for arguments. Valid arguments are removed from this
list.
`support_legacy` - if True, the legacy format ``1 as foo`` is accepted.
Otherwise, only the standard ``foo=1`` format is allowed.
There is no requirement for all remaining token ``bits`` to be keyword
arguments, so return the dictionary as soon as an invalid argument format
is reached.
| 90 | 95 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def token_kwargs(bits, parser, support_legacy=False):
if not bits:
return {}
match = kwarg_re.match(bits[0])
kwarg_format = match and match[1]
if not kwarg_format:
if not support_legacy:
return {}
if len(bits) < 3 or bits[1] != "as":
return {}
kwargs = {}
while bits:
if kwarg_format:
match = kwarg_re.match(bits[0])
if not match or not match[1]:
return kwargs
key, value = match.groups()
del bits[:1]
else:
if len(bits) < 3 or bits[1] != "as":
return kwargs
key, value = bits[2], bits[0]
del bits[:3]
kwargs[key] = parser.compile_filter(value)
if bits and not kwarg_format:
if bits[0] != "and":
return kwargs
del bits[:1]
return kwargs
```
###Assistant :
Parse token keyword arguments and return a dictionary of the arguments
retrieved from the ``bits`` token list.
`bits` is a list containing the remainder of the token (split by spaces)
that is to be checked for arguments. Valid arguments are removed from this
list.
`support_legacy` - if True, the legacy format ``1 as foo`` is accepted.
Otherwise, only the standard ``foo=1`` format is allowed.
There is no requirement for all remaining token ``bits`` to be keyword
arguments, so return the dictionary as soon as an invalid argument format
is reached.
|
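`kwarg_re` is not shown in the excerpt above; assuming a pattern along the lines of `(?:(\w+)=)?(.+)`, the leading `key=value` parsing can be sketched in isolation (values are kept as plain strings instead of compiled template filters):

```python
import re

# Assumed shape of kwarg_re: an optional "name=" prefix followed by the value.
kwarg_re = re.compile(r"(?:(\w+)=)?(.+)")

def parse_leading_kwargs(bits):
    # Consume leading "key=value" tokens; stop at the first token that is not one.
    kwargs = {}
    while bits:
        match = kwarg_re.match(bits[0])
        if not match or not match.group(1):
            break
        key, value = match.groups()
        kwargs[key] = value
        del bits[:1]
    return kwargs

print(parse_leading_kwargs(["foo=1", "bar=hello", "positional"]))
# {'foo': '1', 'bar': 'hello'}
```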
2,443 | def get_cost_of_delayed_shipments(scorecard):
return get_total_cost_of_shipments(scorecard) - get_cost_of_on_time_shipments(scorecard)
| Gets the total cost of all delayed shipments in the period (based on Purchase Receipts - POs) | 17 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_cost_of_delayed_shipments(scorecard):
return get_total_cost_of_shipments(scorecard) - get_cost_of_on_time_shipments(scorecard)
```
###Assistant : Gets the total cost of all delayed shipments in the period (based on Purchase Receipts - POs)
|
2,444 | def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None):
# if weight is specified, apply element-wise weight
if weight is not None:
loss = loss * weight
# if avg_factor is not specified, just reduce the loss
if avg_factor is None:
loss = reduce_loss(loss, reduction)
else:
# if reduction is mean, then average the loss by avg_factor
if reduction == 'mean':
# Avoid causing ZeroDivisionError when avg_factor is 0.0,
# i.e., all labels of an image belong to ignore index.
eps = torch.finfo(torch.float32).eps
loss = loss.sum() / (avg_factor + eps)
# if reduction is 'none', then do nothing, otherwise raise an error
elif reduction != 'none':
raise ValueError('avg_factor can not be used with reduction="sum"')
return loss
| Apply element-wise weight and reduce loss.
Args:
loss (Tensor): Element-wise loss.
weight (Tensor): Element-wise weights.
reduction (str): Same as built-in losses of PyTorch.
avg_factor (float): Average factor when computing the mean of losses.
Returns:
Tensor: Processed loss values.
| 38 | 112 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None):
# if weight is specified, apply element-wise weight
if weight is not None:
loss = loss * weight
# if avg_factor is not specified, just reduce the loss
if avg_factor is None:
loss = reduce_loss(loss, reduction)
else:
# if reduction is mean, then average the loss by avg_factor
if reduction == 'mean':
# Avoid causing ZeroDivisionError when avg_factor is 0.0,
# i.e., all labels of an image belong to ignore index.
eps = torch.finfo(torch.float32).eps
loss = loss.sum() / (avg_factor + eps)
# if reduction is 'none', then do nothing, otherwise raise an error
elif reduction != 'none':
raise ValueError('avg_factor can not be used with reduction="sum"')
return loss
```
###Assistant : Apply element-wise weight and reduce loss.
Args:
loss (Tensor): Element-wise loss.
weight (Tensor): Element-wise weights.
reduction (str): Same as built-in losses of PyTorch.
avg_factor (float): Average factor when computing the mean of losses.
Returns:
Tensor: Processed loss values.
|
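A short usage sketch, assuming PyTorch is installed; `reduce_loss` below is a minimal stand-in for the dependency used by `weight_reduce_loss`, written here only so the example runs on its own.

```python
import torch

def reduce_loss(loss, reduction):
    # Minimal stand-in: dispatch on the reduction string.
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss  # 'none'

loss = torch.tensor([1.0, 2.0, 3.0, 4.0])
weight = torch.tensor([1.0, 0.0, 1.0, 0.0])

weighted = loss * weight
eps = torch.finfo(torch.float32).eps
# Average the weighted loss over an explicit factor (here, 2 valid elements).
print(weighted.sum() / (2.0 + eps))  # tensor(2.) roughly: (1 + 3) / 2
print(reduce_loss(loss, "mean"))     # tensor(2.5000)
```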
2,445 | def test_failure_to_run_iterations():
rnd = np.random.RandomState(0)
X = rnd.standard_normal((100, 10))
A = X @ X.T
Q = rnd.standard_normal((X.shape[0], 4))
with pytest.warns(UserWarning, match="Exited at iteration"):
eigenvalues, _ = lobpcg(A, Q, maxiter=20)
assert(np.max(eigenvalues) > 0)
@pytest.mark.filterwarnings("ignore:The problem size") | Check that the code exits gracefully without breaking. Issue #10974.
| 10 | 35 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_failure_to_run_iterations():
rnd = np.random.RandomState(0)
X = rnd.standard_normal((100, 10))
A = X @ X.T
Q = rnd.standard_normal((X.shape[0], 4))
with pytest.warns(UserWarning, match="Exited at iteration"):
eigenvalues, _ = lobpcg(A, Q, maxiter=20)
assert(np.max(eigenvalues) > 0)
@pytest.mark.filterwarnings("ignore:The problem size")
```
###Assistant : Check that the code exits gracefully without breaking. Issue #10974.
|
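For contrast with the failure case above, a small well-conditioned `lobpcg` run (assuming SciPy and NumPy are installed) that should converge without warnings:

```python
import numpy as np
from scipy.sparse.linalg import lobpcg

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
A = X @ X.T + 100 * np.eye(100)   # shift keeps the matrix well conditioned
Q = rng.standard_normal((100, 4))  # initial guess for 4 eigenpairs

eigenvalues, eigenvectors = lobpcg(A, Q, maxiter=200, tol=1e-8)
print(eigenvalues.shape, eigenvectors.shape)  # (4,) (100, 4)
```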
2,446 | def test_predictor_tableau_header(self, mock_handler):
df = pd.DataFrame([
{'a': 1, 'b': 'one'},
{'a': 2, 'b': 'two'},
{'a': 1, 'b': 'three'},
])
self.set_handler(mock_handler, name='pg', tables={'tasks': df})
# --- use predictor ---
predicted_value = 5
predictor = {
'name': 'task_model',
'predict': 'p',
'dtypes': {
'p': dtype.float,
'a': dtype.integer,
'b': dtype.categorical
},
'predicted_value': predicted_value
}
self.set_predictor(predictor)
ret = self.command_executor.execute_command(parse_sql(f, dialect='mindsdb'))
# second column is having last value of 'b'
# 3: count rows, 4: sum of 'a', 5 max of prediction
assert ret.data[0] == [3, 4, 5]
|
SELECT
SUM(1) AS `cnt__0B4A4E8BD11C48FFB4730D4D2C32191A_ok`,
sum(`Custom SQL Query`.`a`) AS `sum_height_ok`,
max(`Custom SQL Query`.`p`) AS `sum_length1_ok`
FROM (
SELECT res.a, res.p
FROM pg.tasks as source
JOIN mindsdb.task_model as res
) `Custom SQL Query`
HAVING (COUNT(1) > 0)
| 35 | 82 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_predictor_tableau_header(self, mock_handler):
df = pd.DataFrame([
{'a': 1, 'b': 'one'},
{'a': 2, 'b': 'two'},
{'a': 1, 'b': 'three'},
])
self.set_handler(mock_handler, name='pg', tables={'tasks': df})
# --- use predictor ---
predicted_value = 5
predictor = {
'name': 'task_model',
'predict': 'p',
'dtypes': {
'p': dtype.float,
'a': dtype.integer,
'b': dtype.categorical
},
'predicted_value': predicted_value
}
self.set_predictor(predictor)
ret = self.command_executor.execute_command(parse_sql(f, dialect='mindsdb'))
# second column is having last value of 'b'
# 3: count rows, 4: sum of 'a', 5 max of prediction
assert ret.data[0] == [3, 4, 5]
```
###Assistant :
SELECT
SUM(1) AS `cnt__0B4A4E8BD11C48FFB4730D4D2C32191A_ok`,
sum(`Custom SQL Query`.`a`) AS `sum_height_ok`,
max(`Custom SQL Query`.`p`) AS `sum_length1_ok`
FROM (
SELECT res.a, res.p
FROM pg.tasks as source
JOIN mindsdb.task_model as res
) `Custom SQL Query`
HAVING (COUNT(1) > 0)
|
2,447 | def test_conflicting_specified_basename_different_models(self):
self.router.register(r'notes', NoteViewSet)
with pytest.raises(ImproperlyConfigured):
self.router.register(r'notes_basename', BasenameViewSet, basename='routertestmodel')
|
Ensure 2 routers with different models, and a conflicting basename specified
throws an exception
| 14 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_conflicting_specified_basename_different_models(self):
self.router.register(r'notes', NoteViewSet)
with pytest.raises(ImproperlyConfigured):
self.router.register(r'notes_basename', BasenameViewSet, basename='routertestmodel')
```
###Assistant :
Ensure 2 routers with different models, and a conflicting basename specified
throws an exception
|
2,448 | def _get_boosted_releases(self) -> BoostedReleases:
boosted_releases = BoostedReleases()
for boosted_release_cache_key, timestamp in self.redis_client.hgetall(
self._generate_cache_key_for_boosted_releases_hash()
).items():
extracted_data = self._extract_data_from_cache_key(boosted_release_cache_key)
if extracted_data:
release_id, environment = extracted_data
boosted_releases.add_release(
cache_key=boosted_release_cache_key,
id=release_id,
timestamp=float(timestamp),
environment=environment,
)
return boosted_releases
|
Returns all the boosted releases and parses them based on key and value data.
This method should not be called directly as the boosted releases are not extended, thus they contain only a
subset of information.
| 36 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_boosted_releases(self) -> BoostedReleases:
boosted_releases = BoostedReleases()
for boosted_release_cache_key, timestamp in self.redis_client.hgetall(
self._generate_cache_key_for_boosted_releases_hash()
).items():
extracted_data = self._extract_data_from_cache_key(boosted_release_cache_key)
if extracted_data:
release_id, environment = extracted_data
boosted_releases.add_release(
cache_key=boosted_release_cache_key,
id=release_id,
timestamp=float(timestamp),
environment=environment,
)
return boosted_releases
```
###Assistant :
Returns all the boosted releases and parses them based on key and value data.
This method should not be called directly as the boosted releases are not extended, thus they contain only a
subset of information.
|
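The parsing step can be illustrated without Redis: iterate over a hash of cache-key → timestamp pairs, pull the release id and optional environment out of each key, and collect the results. The key format used below is an assumption for illustration, not the real Sentry format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BoostedRelease:
    cache_key: str
    id: int
    timestamp: float
    environment: Optional[str]

# Stand-in for the Redis hash: cache key -> boost timestamp (format assumed).
fake_hash = {
    "boost:r:1234": "1672531200.0",
    "boost:r:5678:e:production": "1672534800.0",
}

def extract(cache_key: str):
    parts = cache_key.split(":")
    if len(parts) >= 3 and parts[1] == "r":
        environment = parts[4] if len(parts) >= 5 else None
        return int(parts[2]), environment
    return None

boosted = []
for key, ts in fake_hash.items():
    extracted = extract(key)
    if extracted:
        release_id, environment = extracted
        boosted.append(BoostedRelease(key, release_id, float(ts), environment))

print([(b.id, b.environment) for b in boosted])  # [(1234, None), (5678, 'production')]
```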
2,449 | def store_stats_summary(reply):
store_summary = "--- Aggregate object store stats across all nodes ---\n"
# TODO(ekl) it would be nice if we could provide a full memory usage
# breakdown by type (e.g., pinned by worker, primary, etc.)
store_summary += (
"Plasma memory usage {} MiB, {} objects, {}% full, {}% "
"needed\n".format(
int(reply.store_stats.object_store_bytes_used / (1024 * 1024)),
reply.store_stats.num_local_objects,
round(
100
* reply.store_stats.object_store_bytes_used
/ reply.store_stats.object_store_bytes_avail,
2,
),
round(
100
* reply.store_stats.object_store_bytes_primary_copy
/ reply.store_stats.object_store_bytes_avail,
2,
),
)
)
if reply.store_stats.object_store_bytes_fallback > 0:
store_summary += "Plasma filesystem mmap usage: {} MiB\n".format(
int(reply.store_stats.object_store_bytes_fallback / (1024 * 1024))
)
if reply.store_stats.spill_time_total_s > 0:
store_summary += (
"Spilled {} MiB, {} objects, avg write throughput {} MiB/s\n".format(
int(reply.store_stats.spilled_bytes_total / (1024 * 1024)),
reply.store_stats.spilled_objects_total,
int(
reply.store_stats.spilled_bytes_total
/ (1024 * 1024)
/ reply.store_stats.spill_time_total_s
),
)
)
if reply.store_stats.restore_time_total_s > 0:
store_summary += (
"Restored {} MiB, {} objects, avg read throughput {} MiB/s\n".format(
int(reply.store_stats.restored_bytes_total / (1024 * 1024)),
reply.store_stats.restored_objects_total,
int(
reply.store_stats.restored_bytes_total
/ (1024 * 1024)
/ reply.store_stats.restore_time_total_s
),
)
)
if reply.store_stats.consumed_bytes > 0:
store_summary += "Objects consumed by Ray tasks: {} MiB.\n".format(
int(reply.store_stats.consumed_bytes / (1024 * 1024))
)
if reply.store_stats.object_pulls_queued:
store_summary += "Object fetches queued, waiting for available memory."
return store_summary
| Returns formatted string describing object store stats in all nodes. | 10 | 194 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def store_stats_summary(reply):
store_summary = "--- Aggregate object store stats across all nodes ---\n"
# TODO(ekl) it would be nice if we could provide a full memory usage
# breakdown by type (e.g., pinned by worker, primary, etc.)
store_summary += (
"Plasma memory usage {} MiB, {} objects, {}% full, {}% "
"needed\n".format(
int(reply.store_stats.object_store_bytes_used / (1024 * 1024)),
reply.store_stats.num_local_objects,
round(
100
* reply.store_stats.object_store_bytes_used
/ reply.store_stats.object_store_bytes_avail,
2,
),
round(
100
* reply.store_stats.object_store_bytes_primary_copy
/ reply.store_stats.object_store_bytes_avail,
2,
),
)
)
if reply.store_stats.object_store_bytes_fallback > 0:
store_summary += "Plasma filesystem mmap usage: {} MiB\n".format(
int(reply.store_stats.object_store_bytes_fallback / (1024 * 1024))
)
if reply.store_stats.spill_time_total_s > 0:
store_summary += (
"Spilled {} MiB, {} objects, avg write throughput {} MiB/s\n".format(
int(reply.store_stats.spilled_bytes_total / (1024 * 1024)),
reply.store_stats.spilled_objects_total,
int(
reply.store_stats.spilled_bytes_total
/ (1024 * 1024)
/ reply.store_stats.spill_time_total_s
),
)
)
if reply.store_stats.restore_time_total_s > 0:
store_summary += (
"Restored {} MiB, {} objects, avg read throughput {} MiB/s\n".format(
int(reply.store_stats.restored_bytes_total / (1024 * 1024)),
reply.store_stats.restored_objects_total,
int(
reply.store_stats.restored_bytes_total
/ (1024 * 1024)
/ reply.store_stats.restore_time_total_s
),
)
)
if reply.store_stats.consumed_bytes > 0:
store_summary += "Objects consumed by Ray tasks: {} MiB.\n".format(
int(reply.store_stats.consumed_bytes / (1024 * 1024))
)
if reply.store_stats.object_pulls_queued:
store_summary += "Object fetches queued, waiting for available memory."
return store_summary
```
###Assistant : Returns formatted string describing object store stats in all nodes.
|
2,450 | def test_spam_checker_deny(self) -> None:
self.get_failure(self.handler.register_user(localpart="user"), SynapseError)
| A spam checker can deny registration, which results in an error. | 11 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_spam_checker_deny(self) -> None:
self.get_failure(self.handler.register_user(localpart="user"), SynapseError)
```
###Assistant : A spam checker can deny registration, which results in an error.
|
2,451 | def _get_pitch_yaw_roll(self) -> None:
proj_matrix = np.zeros((3, 4), dtype="float32")
proj_matrix[:3, :3] = cv2.Rodrigues(self._rotation)[0]
euler = cv2.decomposeProjectionMatrix(proj_matrix)[-1]
self._pitch_yaw_roll = cast(Tuple[float, float, float], tuple(euler.squeeze()))
logger.trace("yaw_pitch: %s", self._pitch_yaw_roll) # type: ignore
| Obtain the yaw, roll and pitch from the :attr:`_rotation` in Euler angles. | 12 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_pitch_yaw_roll(self) -> None:
proj_matrix = np.zeros((3, 4), dtype="float32")
proj_matrix[:3, :3] = cv2.Rodrigues(self._rotation)[0]
euler = cv2.decomposeProjectionMatrix(proj_matrix)[-1]
self._pitch_yaw_roll = cast(Tuple[float, float, float], tuple(euler.squeeze()))
logger.trace("yaw_pitch: %s", self._pitch_yaw_roll) # type: ignore
```
###Assistant : Obtain the yaw, roll and pitch from the :attr:`_rotation` in Euler angles.
|
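A standalone version of the same conversion, assuming OpenCV and NumPy are available: build a Rodrigues rotation vector, embed it in a 3x4 projection matrix, and read the Euler angles back out of `cv2.decomposeProjectionMatrix` (which returns them, in degrees, as its last output).

```python
import cv2
import numpy as np

# Rotation vector in Rodrigues form: 30 degrees about the y axis.
rotation = np.array([[0.0], [np.radians(30.0)], [0.0]], dtype="float32")

proj_matrix = np.zeros((3, 4), dtype="float32")
proj_matrix[:3, :3] = cv2.Rodrigues(rotation)[0]

euler = cv2.decomposeProjectionMatrix(proj_matrix)[-1]
pitch, yaw, roll = euler.squeeze()
print(pitch, yaw, roll)  # the yaw component should be close to +/-30 degrees
```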
2,452 | def aiohttp_notify_servers_mock() -> Iterable[Mock]:
with patch(
"homeassistant.components.dlna_dmr.data.AiohttpNotifyServer"
) as mock_constructor:
servers = []
| Construct mock AiohttpNotifyServer on demand, eliminating network use.
This fixture provides a list of the constructed servers.
| 17 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def aiohttp_notify_servers_mock() -> Iterable[Mock]:
with patch(
"homeassistant.components.dlna_dmr.data.AiohttpNotifyServer"
) as mock_constructor:
servers = []
```
###Assistant : Construct mock AiohttpNotifyServer on demand, eliminating network use.
This fixture provides a list of the constructed servers.
|
2,453 | def line_collection_2d_to_3d(col, zs=0, zdir='z'):
segments3d = _paths_to_3d_segments(col.get_paths(), zs, zdir)
col.__class__ = Line3DCollection
col.set_segments(segments3d)
| Convert a `.LineCollection` to a `.Line3DCollection` object. | 7 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def line_collection_2d_to_3d(col, zs=0, zdir='z'):
segments3d = _paths_to_3d_segments(col.get_paths(), zs, zdir)
col.__class__ = Line3DCollection
col.set_segments(segments3d)
```
###Assistant : Convert a `.LineCollection` to a `.Line3DCollection` object.
|
2,454 | def execute():
frappe.reload_doc("e_commerce", "web_template", "item_card_group")
blocks = frappe.db.get_all(
"Web Page Block",
filters={"web_template": "Item Card Group"},
fields=["parent", "web_template_values", "name"]
)
fields = generate_fields_to_edit()
for block in blocks:
web_template_value = json.loads(block.get('web_template_values'))
for field in fields:
item = web_template_value.get(field)
if not item:
continue
if frappe.db.exists("Website Item", {"item_code": item}):
website_item = frappe.db.get_value("Website Item", {"item_code": item})
else:
website_item = make_new_website_item(item, web_template_value, field)
continue
if website_item:
web_template_value[field] = website_item
frappe.db.set_value("Web Page Block", block.name, "web_template_values", json.dumps(web_template_value))
|
Convert all Item links to Website Item link values in
existing 'Item Card Group' Web Page Block data.
| 18 | 69 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def execute():
frappe.reload_doc("e_commerce", "web_template", "item_card_group")
blocks = frappe.db.get_all(
"Web Page Block",
filters={"web_template": "Item Card Group"},
fields=["parent", "web_template_values", "name"]
)
fields = generate_fields_to_edit()
for block in blocks:
web_template_value = json.loads(block.get('web_template_values'))
for field in fields:
item = web_template_value.get(field)
if not item:
continue
if frappe.db.exists("Website Item", {"item_code": item}):
website_item = frappe.db.get_value("Website Item", {"item_code": item})
else:
website_item = make_new_website_item(item, web_template_value, field)
continue
if website_item:
web_template_value[field] = website_item
frappe.db.set_value("Web Page Block", block.name, "web_template_values", json.dumps(web_template_value))
```
###Assistant :
Convert all Item links to Website Item link values in
existing 'Item Card Group' Web Page Block data.
|
2,455 | def set_pickradius(self, pickradius):
if not isinstance(pickradius, Number) or pickradius < 0:
raise ValueError("pick radius should be a distance")
self._pickradius = pickradius
pickradius = property(get_pickradius, set_pickradius)
|
Set the pick radius used for containment tests.
See `.contains` for more details.
Parameters
----------
pickradius : float
Pick radius, in points.
| 22 | 25 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def set_pickradius(self, pickradius):
if not isinstance(pickradius, Number) or pickradius < 0:
raise ValueError("pick radius should be a distance")
self._pickradius = pickradius
pickradius = property(get_pickradius, set_pickradius)
```
###Assistant :
Set the pick radius used for containment tests.
See `.contains` for more details.
Parameters
----------
pickradius : float
Pick radius, in points.
|
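The validate-then-store pattern (a getter/setter pair exposed through `property`) in isolation; `PickTarget` is an illustrative class, not part of Matplotlib.

```python
from numbers import Number

class PickTarget:
    def __init__(self, pickradius=5):
        self.pickradius = pickradius  # routed through the property setter below

    def get_pickradius(self):
        return self._pickradius

    def set_pickradius(self, pickradius):
        if not isinstance(pickradius, Number) or pickradius < 0:
            raise ValueError("pick radius should be a distance")
        self._pickradius = pickradius

    # Expose the getter/setter pair as a validated attribute.
    pickradius = property(get_pickradius, set_pickradius)

t = PickTarget()
t.pickradius = 10
print(t.pickradius)  # 10
```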
2,456 | def _predict(self):
with self._lock:
self._predicted_images = []
for frame in self._input_images:
self._predictor.in_queue.put(frame)
idx = 0
while idx < self._sample_size:
logger.debug("Predicting face %s of %s", idx + 1, self._sample_size)
items = self._predictor.out_queue.get()
if items == "EOF":
logger.debug("Received EOF")
break
for item in items:
self._predicted_images.append(item)
logger.debug("Predicted face %s of %s", idx + 1, self._sample_size)
idx += 1
logger.debug("Predicted faces")
| Predict from the loaded frames.
With a threading lock (to prevent stacking), run the selected faces through the Faceswap
model predict function and add the output to :attr:`predicted`
| 28 | 57 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _predict(self):
with self._lock:
self._predicted_images = []
for frame in self._input_images:
self._predictor.in_queue.put(frame)
idx = 0
while idx < self._sample_size:
logger.debug("Predicting face %s of %s", idx + 1, self._sample_size)
items = self._predictor.out_queue.get()
if items == "EOF":
logger.debug("Received EOF")
break
for item in items:
self._predicted_images.append(item)
logger.debug("Predicted face %s of %s", idx + 1, self._sample_size)
idx += 1
logger.debug("Predicted faces")
```
###Assistant : Predict from the loaded frames.
With a threading lock (to prevent stacking), run the selected faces through the Faceswap
model predict function and add the output to :attr:`predicted`
|
2,457 | def exclude(f):
J, new = f.rep.exclude()
gens = [gen for j, gen in enumerate(f.gens) if j not in J]
return f.per(new, gens=gens)
|
Remove unnecessary generators from ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import a, b, c, d, x
>>> Poly(a + x, a, b, c, d, x).exclude()
Poly(a + x, a, x, domain='ZZ')
| 36 | 22 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def exclude(f):
J, new = f.rep.exclude()
gens = [gen for j, gen in enumerate(f.gens) if j not in J]
return f.per(new, gens=gens)
```
###Assistant :
Remove unnecessary generators from ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import a, b, c, d, x
>>> Poly(a + x, a, b, c, d, x).exclude()
Poly(a + x, a, x, domain='ZZ')
|
2,458 | def set_ticks(self, ticks, labels=None, *, minor=False, **kwargs):
result = self._set_tick_locations(ticks, minor=minor)
if labels is not None:
self.set_ticklabels(labels, minor=minor, **kwargs)
return result
|
Set this Axis' tick locations and optionally labels.
If necessary, the view limits of the Axis are expanded so that all
given ticks are visible.
Parameters
----------
ticks : list of floats
List of tick locations. The axis `.Locator` is replaced by a
`~.ticker.FixedLocator`.
Some tick formatters will not label arbitrary tick positions;
e.g. log formatters only label decade ticks by default. In
such a case you can set a formatter explicitly on the axis
using `.Axis.set_major_formatter` or provide formatted
*labels* yourself.
labels : list of str, optional
List of tick labels. If not set, the labels are generated with
the axis tick `.Formatter`.
minor : bool, default: False
If ``False``, set the major ticks; if ``True``, the minor ticks.
**kwargs
`.Text` properties for the labels. These take effect only if you
pass *labels*. In other cases, please use `~.Axes.tick_params`.
Notes
-----
The mandatory expansion of the view limits is an intentional design
choice to prevent the surprise of a non-visible tick. If you need
other limits, you should set the limits explicitly after setting the
ticks.
| 177 | 21 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def set_ticks(self, ticks, labels=None, *, minor=False, **kwargs):
result = self._set_tick_locations(ticks, minor=minor)
if labels is not None:
self.set_ticklabels(labels, minor=minor, **kwargs)
return result
```
###Assistant :
Set this Axis' tick locations and optionally labels.
If necessary, the view limits of the Axis are expanded so that all
given ticks are visible.
Parameters
----------
ticks : list of floats
List of tick locations. The axis `.Locator` is replaced by a
`~.ticker.FixedLocator`.
Some tick formatters will not label arbitrary tick positions;
e.g. log formatters only label decade ticks by default. In
such a case you can set a formatter explicitly on the axis
using `.Axis.set_major_formatter` or provide formatted
*labels* yourself.
labels : list of str, optional
List of tick labels. If not set, the labels are generated with
the axis tick `.Formatter`.
minor : bool, default: False
If ``False``, set the major ticks; if ``True``, the minor ticks.
**kwargs
`.Text` properties for the labels. These take effect only if you
pass *labels*. In other cases, please use `~.Axes.tick_params`.
Notes
-----
The mandatory expansion of the view limits is an intentional design
choice to prevent the surprise of a non-visible tick. If you need
other limits, you should set the limits explicitly after setting the
ticks.
|
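A short usage sketch, assuming Matplotlib 3.5 or newer (where `Axis.set_ticks` accepts the `labels` argument); the Agg backend is selected so the example runs headless.

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [1, 4, 9, 16])

# Fix the x tick positions and label them explicitly; view limits expand if needed.
ax.xaxis.set_ticks([0, 1, 2, 3], labels=["zero", "one", "two", "three"])
fig.savefig("ticks_demo.png")
```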
2,459 | def get_all_customers(date_range, company, field, limit=None):
if field == "outstanding_amount":
filters = [["docstatus", "=", "1"], ["company", "=", company]]
if date_range:
date_range = frappe.parse_json(date_range)
filters.append(["posting_date", ">=", "between", [date_range[0], date_range[1]]])
return frappe.db.get_all(
"Sales Invoice",
fields=["customer as name", "sum(outstanding_amount) as value"],
filters=filters,
group_by="customer",
order_by="value desc",
limit=limit,
)
else:
if field == "total_sales_amount":
select_field = "sum(so_item.base_net_amount)"
elif field == "total_qty_sold":
select_field = "sum(so_item.stock_qty)"
date_condition = get_date_condition(date_range, "so.transaction_date")
return frappe.db.sql(
.format(
select_field, date_condition
),
(company, cint(limit)),
as_dict=1,
)
@frappe.whitelist() |
select so.customer as name, {0} as value
FROM `tabSales Order` as so JOIN `tabSales Order Item` as so_item
ON so.name = so_item.parent
where so.docstatus = 1 {1} and so.company = %s
group by so.customer
order by value DESC
limit %s
| 40 | 73 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_all_customers(date_range, company, field, limit=None):
if field == "outstanding_amount":
filters = [["docstatus", "=", "1"], ["company", "=", company]]
if date_range:
date_range = frappe.parse_json(date_range)
filters.append(["posting_date", ">=", "between", [date_range[0], date_range[1]]])
return frappe.db.get_all(
"Sales Invoice",
fields=["customer as name", "sum(outstanding_amount) as value"],
filters=filters,
group_by="customer",
order_by="value desc",
limit=limit,
)
else:
if field == "total_sales_amount":
select_field = "sum(so_item.base_net_amount)"
elif field == "total_qty_sold":
select_field = "sum(so_item.stock_qty)"
date_condition = get_date_condition(date_range, "so.transaction_date")
return frappe.db.sql(
.format(
select_field, date_condition
),
(company, cint(limit)),
as_dict=1,
)
@frappe.whitelist()
```
###Assistant :
select so.customer as name, {0} as value
FROM `tabSales Order` as so JOIN `tabSales Order Item` as so_item
ON so.name = so_item.parent
where so.docstatus = 1 {1} and so.company = %s
group by so.customer
order by value DESC
limit %s
|
2,460 | async def _async_create_radio_entity(self) -> FlowResult:
assert self._title is not None
assert self._radio_type is not None
assert self._device_path is not None
assert self._device_settings is not None
device_settings = self._device_settings.copy()
device_settings[CONF_DEVICE_PATH] = await self.hass.async_add_executor_job(
usb.get_serial_by_id, self._device_path
)
return self.async_create_entry(
title=self._title,
data={
CONF_DEVICE: device_settings,
CONF_RADIO_TYPE: self._radio_type.name,
},
)
| Create a config entity with the current flow state. | 9 | 45 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def _async_create_radio_entity(self) -> FlowResult:
assert self._title is not None
assert self._radio_type is not None
assert self._device_path is not None
assert self._device_settings is not None
device_settings = self._device_settings.copy()
device_settings[CONF_DEVICE_PATH] = await self.hass.async_add_executor_job(
usb.get_serial_by_id, self._device_path
)
return self.async_create_entry(
title=self._title,
data={
CONF_DEVICE: device_settings,
CONF_RADIO_TYPE: self._radio_type.name,
},
)
```
###Assistant : Create a config entity with the current flow state.
|
2,461 | def publish_daemon(self, publish_payload, *args, **kwargs):
context = zmq.Context(1)
ioloop = salt.ext.tornado.ioloop.IOLoop()
ioloop.make_current()
# Set up the context |
Bind to the interface specified in the configuration file
| 9 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def publish_daemon(self, publish_payload, *args, **kwargs):
context = zmq.Context(1)
ioloop = salt.ext.tornado.ioloop.IOLoop()
ioloop.make_current()
# Set up the context
```
###Assistant :
Bind to the interface specified in the configuration file
|
2,462 | def _print_Pow(self, expr, rational=False):
PREC = precedence(expr)
if expr.exp is S.Half and not rational:
return "sqrt(%s)" % self._print(expr.base)
if expr.is_commutative:
if -expr.exp is S.Half and not rational:
# Note: Don't test "expr.exp == -S.Half" here, because that will
# match -0.5, which we don't want.
return "%s/sqrt(%s)" % tuple(map(lambda arg: self._print(arg), (S.One, expr.base)))
if expr.exp is -S.One:
# Similarly to the S.Half case, don't test with "==" here.
return '%s/%s' % (self._print(S.One),
self.parenthesize(expr.base, PREC, strict=False))
e = self.parenthesize(expr.exp, PREC, strict=False)
if self.printmethod == '_sympyrepr' and expr.exp.is_Rational and expr.exp.q != 1:
# the parenthesized exp should be '(Rational(a, b))' so strip parens,
# but just check to be sure.
if e.startswith('(Rational'):
return '%s**%s' % (self.parenthesize(expr.base, PREC, strict=False), e[1:-1])
return '%s**%s' % (self.parenthesize(expr.base, PREC, strict=False), e)
| Printing helper function for ``Pow``
Parameters
==========
rational : bool, optional
If ``True``, it will not attempt printing ``sqrt(x)`` or
``x**S.Half`` as ``sqrt``, and will use ``x**(1/2)``
instead.
See examples for additional details
Examples
========
>>> from sympy import sqrt, StrPrinter
>>> from sympy.abc import x
How ``rational`` keyword works with ``sqrt``:
>>> printer = StrPrinter()
>>> printer._print_Pow(sqrt(x), rational=True)
'x**(1/2)'
>>> printer._print_Pow(sqrt(x), rational=False)
'sqrt(x)'
>>> printer._print_Pow(1/sqrt(x), rational=True)
'x**(-1/2)'
>>> printer._print_Pow(1/sqrt(x), rational=False)
'1/sqrt(x)'
Notes
=====
``sqrt(x)`` is canonicalized as ``Pow(x, S.Half)`` in SymPy,
so there is no need of defining a separate printer for ``sqrt``.
Instead, it should be handled here as well.
| 102 | 124 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _print_Pow(self, expr, rational=False):
PREC = precedence(expr)
if expr.exp is S.Half and not rational:
return "sqrt(%s)" % self._print(expr.base)
if expr.is_commutative:
if -expr.exp is S.Half and not rational:
# Note: Don't test "expr.exp == -S.Half" here, because that will
# match -0.5, which we don't want.
return "%s/sqrt(%s)" % tuple(map(lambda arg: self._print(arg), (S.One, expr.base)))
if expr.exp is -S.One:
# Similarly to the S.Half case, don't test with "==" here.
return '%s/%s' % (self._print(S.One),
self.parenthesize(expr.base, PREC, strict=False))
e = self.parenthesize(expr.exp, PREC, strict=False)
if self.printmethod == '_sympyrepr' and expr.exp.is_Rational and expr.exp.q != 1:
# the parenthesized exp should be '(Rational(a, b))' so strip parens,
# but just check to be sure.
if e.startswith('(Rational'):
return '%s**%s' % (self.parenthesize(expr.base, PREC, strict=False), e[1:-1])
return '%s**%s' % (self.parenthesize(expr.base, PREC, strict=False), e)
```
###Assistant : Printing helper function for ``Pow``
Parameters
==========
rational : bool, optional
If ``True``, it will not attempt printing ``sqrt(x)`` or
``x**S.Half`` as ``sqrt``, and will use ``x**(1/2)``
instead.
See examples for additional details
Examples
========
>>> from sympy import sqrt, StrPrinter
>>> from sympy.abc import x
How ``rational`` keyword works with ``sqrt``:
>>> printer = StrPrinter()
>>> printer._print_Pow(sqrt(x), rational=True)
'x**(1/2)'
>>> printer._print_Pow(sqrt(x), rational=False)
'sqrt(x)'
>>> printer._print_Pow(1/sqrt(x), rational=True)
'x**(-1/2)'
>>> printer._print_Pow(1/sqrt(x), rational=False)
'1/sqrt(x)'
Notes
=====
``sqrt(x)`` is canonicalized as ``Pow(x, S.Half)`` in SymPy,
so there is no need of defining a separate printer for ``sqrt``.
Instead, it should be handled here as well.
|
2,463 | def _save_tab(self, tab, active, minimal=False):
data: _JsonType = {'history': []}
if active:
data['active'] = True
if minimal:
history = [tab.history.current_item()]
else:
history = tab.history
for idx, item in enumerate(history):
qtutils.ensure_valid(item)
item_data = self._save_tab_item(tab, idx, item)
if item.url().scheme() == 'qute' and item.url().host() == 'back':
# don't add qute://back to the session file
if item_data.get('active', False) and data['history']:
# mark entry before qute://back as active
data['history'][-1]['active'] = True
else:
data['history'].append(item_data)
return data
| Get a dict with data for a single tab.
Args:
tab: The WebView to save.
active: Whether the tab is currently active.
| 22 | 70 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _save_tab(self, tab, active, minimal=False):
data: _JsonType = {'history': []}
if active:
data['active'] = True
if minimal:
history = [tab.history.current_item()]
else:
history = tab.history
for idx, item in enumerate(history):
qtutils.ensure_valid(item)
item_data = self._save_tab_item(tab, idx, item)
if item.url().scheme() == 'qute' and item.url().host() == 'back':
# don't add qute://back to the session file
if item_data.get('active', False) and data['history']:
# mark entry before qute://back as active
data['history'][-1]['active'] = True
else:
data['history'].append(item_data)
return data
```
###Assistant : Get a dict with data for a single tab.
Args:
tab: The WebView to save.
active: Whether the tab is currently active.
|
2,464 | def testBestCheckpoints(self):
keep_checkpoints_num = 4
checkpoint_manager = self.checkpoint_manager(keep_checkpoints_num)
checkpoints = [
Checkpoint(Checkpoint.PERSISTENT, i, self.mock_result(i)) for i in range(16)
]
random.shuffle(checkpoints)
for checkpoint in checkpoints:
checkpoint_manager.on_checkpoint(checkpoint)
best_checkpoints = checkpoint_manager.best_checkpoints()
self.assertEqual(len(best_checkpoints), keep_checkpoints_num)
for i in range(len(best_checkpoints)):
self.assertEqual(best_checkpoints[i].value, i + 12)
|
Tests that the best checkpoints are tracked and ordered correctly.
| 10 | 38 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def testBestCheckpoints(self):
keep_checkpoints_num = 4
checkpoint_manager = self.checkpoint_manager(keep_checkpoints_num)
checkpoints = [
Checkpoint(Checkpoint.PERSISTENT, i, self.mock_result(i)) for i in range(16)
]
random.shuffle(checkpoints)
for checkpoint in checkpoints:
checkpoint_manager.on_checkpoint(checkpoint)
best_checkpoints = checkpoint_manager.best_checkpoints()
self.assertEqual(len(best_checkpoints), keep_checkpoints_num)
for i in range(len(best_checkpoints)):
self.assertEqual(best_checkpoints[i].value, i + 12)
```
###Assistant :
Tests that the best checkpoints are tracked and ordered correctly.
|
2,465 | def save(self):
s = self._read_from_storage() # type: _Settings
for k, v in self.__dict__.items():
if k[0] == '_':
continue
if hasattr(s, k):
setattr(s, k, v)
log.debug("_ConfigSQL updating storage")
self._session.merge(s)
try:
self._session.commit()
except OperationalError as e:
log.error('Database error: %s', e)
self._session.rollback()
self.load()
| Apply all configuration values to the underlying storage. | 8 | 40 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def save(self):
s = self._read_from_storage() # type: _Settings
for k, v in self.__dict__.items():
if k[0] == '_':
continue
if hasattr(s, k):
setattr(s, k, v)
log.debug("_ConfigSQL updating storage")
self._session.merge(s)
try:
self._session.commit()
except OperationalError as e:
log.error('Database error: %s', e)
self._session.rollback()
self.load()
```
###Assistant : Apply all configuration values to the underlying storage.
|
2,466 | def test_transactions(self):
prev_hour = timezone.now() - timedelta(hours=1)
event = self.transaction_data.copy()
event.update(
{
"start_timestamp": iso_format(prev_hour - timedelta(minutes=1)),
"timestamp": iso_format(prev_hour),
"tags": {"foo": "bar"},
"transaction": "this is where a transaction's 'message' is stored",
}
)
transaction = self.store_event(project_id=self.project.id, data=event)
perf_issue = transaction.groups[0]
perf_issue.update(first_seen=prev_hour)
Activity.objects.create(
project=self.project,
group=perf_issue,
type=ActivityType.SET_REGRESSION.value,
datetime=prev_hour,
data={"event_id": transaction.event_id},
)
conditions = [{"id": "sentry.rules.conditions.regression_event.RegressionEventCondition"}]
filters = [
{
"id": "sentry.rules.filters.tagged_event.TaggedEventFilter",
"key": "foo",
"match": "eq",
"value": "bar",
}
]
result = preview(self.project, conditions, filters, "all", "all", 0)
assert perf_issue.id in result
filters[0]["value"] = "baz"
result = preview(self.project, conditions, filters, "all", "all", 0)
assert perf_issue.id not in result
filters = [
{
"id": "sentry.rules.filters.event_attribute.EventAttributeFilter",
"attribute": "message",
"match": "eq",
"value": "this is where a transaction's 'message' is stored",
}
]
result = preview(self.project, conditions, filters, "all", "all", 0)
assert perf_issue.id in result
filters[0]["value"] = "wrong message"
result = preview(self.project, conditions, filters, "all", "all", 0)
assert perf_issue.id not in result
# this can be tested when SNS-1891 is fixed
|
conditions = [{"id": "sentry.rules.conditions.first_seen_event.FirstSeenEventCondition"}]
filters = [{
"id": "sentry.rules.filters.tagged_event.TaggedEventFilter",
"key": "foo",
"match": "eq",
"value": "bar",
}]
result = preview(self.project, conditions, filters, "all", "all", 0)
assert perf_issue.id in result
| 28 | 153 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_transactions(self):
prev_hour = timezone.now() - timedelta(hours=1)
event = self.transaction_data.copy()
event.update(
{
"start_timestamp": iso_format(prev_hour - timedelta(minutes=1)),
"timestamp": iso_format(prev_hour),
"tags": {"foo": "bar"},
"transaction": "this is where a transaction's 'message' is stored",
}
)
transaction = self.store_event(project_id=self.project.id, data=event)
perf_issue = transaction.groups[0]
perf_issue.update(first_seen=prev_hour)
Activity.objects.create(
project=self.project,
group=perf_issue,
type=ActivityType.SET_REGRESSION.value,
datetime=prev_hour,
data={"event_id": transaction.event_id},
)
conditions = [{"id": "sentry.rules.conditions.regression_event.RegressionEventCondition"}]
filters = [
{
"id": "sentry.rules.filters.tagged_event.TaggedEventFilter",
"key": "foo",
"match": "eq",
"value": "bar",
}
]
result = preview(self.project, conditions, filters, "all", "all", 0)
assert perf_issue.id in result
filters[0]["value"] = "baz"
result = preview(self.project, conditions, filters, "all", "all", 0)
assert perf_issue.id not in result
filters = [
{
"id": "sentry.rules.filters.event_attribute.EventAttributeFilter",
"attribute": "message",
"match": "eq",
"value": "this is where a transaction's 'message' is stored",
}
]
result = preview(self.project, conditions, filters, "all", "all", 0)
assert perf_issue.id in result
filters[0]["value"] = "wrong message"
result = preview(self.project, conditions, filters, "all", "all", 0)
assert perf_issue.id not in result
# this can be tested when SNS-1891 is fixed
```
###Assistant :
conditions = [{"id": "sentry.rules.conditions.first_seen_event.FirstSeenEventCondition"}]
filters = [{
"id": "sentry.rules.filters.tagged_event.TaggedEventFilter",
"key": "foo",
"match": "eq",
"value": "bar",
}]
result = preview(self.project, conditions, filters, "all", "all", 0)
assert perf_issue.id in result
|
2,467 | def test_follows_semver_all_releases_semver_and_missing_package_semver_release_version(self):
assert (
follows_semver_versioning_scheme(
org_id=self.org.id, project_id=self.proj_1.id, release_version="2.0.0"
)
is False
)
|
Test that ensures that even if a project is following semver, then if the release_version
supplied lacks a package, then for that specific release we opt the project out of being
considered a semver project
| 35 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_follows_semver_all_releases_semver_and_missing_package_semver_release_version(self):
assert (
follows_semver_versioning_scheme(
org_id=self.org.id, project_id=self.proj_1.id, release_version="2.0.0"
)
is False
)
```
###Assistant :
Test that ensures that even if a project is following semver, then if the release_version
supplied lacks a package, then for that specific release we opt the project out of being
considered a semver project
|
2,468 | def MultivariateT(syms, mu, sigma, v):
return multivariate_rv(MultivariateTDistribution, syms, mu, sigma, v)
#-------------------------------------------------------------------------------
# Multivariate Normal Gamma distribution ---------------------------------------
|
Creates a joint random variable with multivariate T-distribution.
Parameters
==========
syms : A symbol/str
For identifying the random variable.
mu : A list/matrix
Representing the location vector
sigma : The shape matrix for the distribution
Examples
========
>>> from sympy.stats import density, MultivariateT
>>> from sympy import Symbol
>>> x = Symbol("x")
>>> X = MultivariateT("x", [1, 1], [[1, 0], [0, 1]], 2)
>>> density(X)(1, 2)
2/(9*pi)
Returns
=======
RandomSymbol
| 70 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def MultivariateT(syms, mu, sigma, v):
return multivariate_rv(MultivariateTDistribution, syms, mu, sigma, v)
#-------------------------------------------------------------------------------
# Multivariate Normal Gamma distribution ---------------------------------------
```
###Assistant :
Creates a joint random variable with multivariate T-distribution.
Parameters
==========
syms : A symbol/str
For identifying the random variable.
mu : A list/matrix
Representing the location vector
sigma : The shape matrix for the distribution
Examples
========
>>> from sympy.stats import density, MultivariateT
>>> from sympy import Symbol
>>> x = Symbol("x")
>>> X = MultivariateT("x", [1, 1], [[1, 0], [0, 1]], 2)
>>> density(X)(1, 2)
2/(9*pi)
Returns
=======
RandomSymbol
|
2,469 | def _can_hold_identifiers_and_holds_name(self, name) -> bool:
if self.is_object() or is_string_dtype(self.dtype) or self.is_categorical():
return name in self
return False
|
Faster check for ``name in self`` when we know `name` is a Python
identifier (e.g. in NDFrame.__getattr__, which hits this to support
. key lookup). For indexes that can't hold identifiers (everything
but object & categorical) we just return False.
https://github.com/pandas-dev/pandas/issues/19764
| 41 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _can_hold_identifiers_and_holds_name(self, name) -> bool:
if self.is_object() or is_string_dtype(self.dtype) or self.is_categorical():
return name in self
return False
```
###Assistant :
Faster check for ``name in self`` when we know `name` is a Python
identifier (e.g. in NDFrame.__getattr__, which hits this to support
. key lookup). For indexes that can't hold identifiers (everything
but object & categorical) we just return False.
https://github.com/pandas-dev/pandas/issues/19764
|
2,470 | def test_remove_other_alias(self) -> None:
# Create a second alias.
other_test_alias = "#test2:test"
other_room_alias = self._add_alias(other_test_alias)
# Set the alias as the canonical alias for this room.
self._set_canonical_alias(
{
"alias": self.test_alias,
"alt_aliases": [self.test_alias, other_test_alias],
}
)
data = self._get_canonical_alias()
self.assertEqual(data["content"]["alias"], self.test_alias)
self.assertEqual(
data["content"]["alt_aliases"], [self.test_alias, other_test_alias]
)
# Delete the second alias.
self.get_success(
self.handler.delete_association(
create_requester(self.admin_user), other_room_alias
)
)
data = self._get_canonical_alias()
self.assertEqual(data["content"]["alias"], self.test_alias)
self.assertEqual(data["content"]["alt_aliases"], [self.test_alias])
| Removing an alias listed as in alt_aliases should remove it there too. | 12 | 63 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_remove_other_alias(self) -> None:
# Create a second alias.
other_test_alias = "#test2:test"
other_room_alias = self._add_alias(other_test_alias)
# Set the alias as the canonical alias for this room.
self._set_canonical_alias(
{
"alias": self.test_alias,
"alt_aliases": [self.test_alias, other_test_alias],
}
)
data = self._get_canonical_alias()
self.assertEqual(data["content"]["alias"], self.test_alias)
self.assertEqual(
data["content"]["alt_aliases"], [self.test_alias, other_test_alias]
)
# Delete the second alias.
self.get_success(
self.handler.delete_association(
create_requester(self.admin_user), other_room_alias
)
)
data = self._get_canonical_alias()
self.assertEqual(data["content"]["alias"], self.test_alias)
self.assertEqual(data["content"]["alt_aliases"], [self.test_alias])
```
###Assistant : Removing an alias listed as in alt_aliases should remove it there too.
|
2,471 | def test_get_comments_no_doc(self):
response = self.client.get(
"/api/documents/500/comments/",
format="json",
)
self.assertEqual(response.status_code, 404)
|
GIVEN:
- A request to get comments from a non-existent document
WHEN:
- API request for document comments is made
THEN:
- HTTP 404 is returned
| 26 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_get_comments_no_doc(self):
response = self.client.get(
"/api/documents/500/comments/",
format="json",
)
self.assertEqual(response.status_code, 404)
```
###Assistant :
GIVEN:
- A request to get comments from a non-existent document
WHEN:
- API request for document comments is made
THEN:
- HTTP 404 is returned
|
2,472 | def test_mapping_keypad(self, config_stub, keyparser):
config_stub.val.bindings.commands = {'normal': {'a': 'nop'}}
config_stub.val.bindings.key_mappings = {'1': 'a'}
info = keyutils.KeyInfo(Qt.Key.Key_1, Qt.KeyboardModifier.KeypadModifier)
keyparser.handle(info.to_event())
keyparser.execute.assert_called_once_with('nop', None)
| Make sure falling back to non-numpad keys works with mappings. | 10 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_mapping_keypad(self, config_stub, keyparser):
config_stub.val.bindings.commands = {'normal': {'a': 'nop'}}
config_stub.val.bindings.key_mappings = {'1': 'a'}
info = keyutils.KeyInfo(Qt.Key.Key_1, Qt.KeyboardModifier.KeypadModifier)
keyparser.handle(info.to_event())
keyparser.execute.assert_called_once_with('nop', None)
```
###Assistant : Make sure falling back to non-numpad keys works with mappings.
|
2,473 | def verify_liked_image(browser, logger):
browser.refresh()
unlike_xpath = read_xpath(like_image.__name__, "unlike")
like_elem = browser.find_elements(By.XPATH, unlike_xpath)
if len(like_elem) == 1:
return True
else:
logger.warning("--> Image was NOT liked! You have a BLOCK on likes!")
return False
| Check for a ban on likes using the last liked image | 11 | 32 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def verify_liked_image(browser, logger):
browser.refresh()
unlike_xpath = read_xpath(like_image.__name__, "unlike")
like_elem = browser.find_elements(By.XPATH, unlike_xpath)
if len(like_elem) == 1:
return True
else:
logger.warning("--> Image was NOT liked! You have a BLOCK on likes!")
return False
```
###Assistant : Check for a ban on likes using the last liked image
|
2,474 | def get_bboxes(self, dst_type='hbb'):
from ..bbox import get_box_type
_, box_type_cls = get_box_type(dst_type)
return box_type_cls.from_instance_masks(self)
| Get the certain type boxes from masks.
Please refer to ``mmdet.structures.bbox.box_type`` for more details of
the box type.
Args:
dst_type: Destination box type.
Returns:
:obj:`BaseBoxes`: Certain type boxes.
| 28 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_bboxes(self, dst_type='hbb'):
from ..bbox import get_box_type
_, box_type_cls = get_box_type(dst_type)
return box_type_cls.from_instance_masks(self)
```
###Assistant : Get the certain type boxes from masks.
Please refer to ``mmdet.structures.bbox.box_type`` for more details of
the box type.
Args:
dst_type: Destination box type.
Returns:
:obj:`BaseBoxes`: Certain type boxes.
|
2,475 | def deprecate_call():
sympy_deprecation_warning(
,
deprecated_since_version="1.5",
active_deprecations_target="deprecated-tensor-fun-eval",
stacklevel=4,
)
|
Calling a tensor like Tensor(*indices) is deprecated. Use
Tensor.substitute_indices() instead.
| 10 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def deprecate_call():
sympy_deprecation_warning(
,
deprecated_since_version="1.5",
active_deprecations_target="deprecated-tensor-fun-eval",
stacklevel=4,
)
```
###Assistant :
Calling a tensor like Tensor(*indices) is deprecated. Use
Tensor.substitute_indices() instead.
|
2,476 | def assign_proto(proto, name, val):
is_repeated_field = hasattr(getattr(proto, name), 'extend')
if is_repeated_field and not isinstance(val, list):
val = [val]
if isinstance(val, list):
if isinstance(val[0], dict):
for item in val:
proto_item = getattr(proto, name).add()
for k, v in six.iteritems(item):
assign_proto(proto_item, k, v)
else:
getattr(proto, name).extend(val)
elif isinstance(val, dict):
for k, v in six.iteritems(val):
assign_proto(getattr(proto, name), k, v)
else:
setattr(proto, name, val)
| Assign a Python object to a protobuf message, based on the Python
type (in recursive fashion). Lists become repeated fields/messages, dicts
become messages, and other types are assigned directly. For convenience,
repeated fields whose values are not lists are converted to single-element
lists; e.g., `my_repeated_int_field=3` is converted to
`my_repeated_int_field=[3]`. | 49 | 59 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def assign_proto(proto, name, val):
is_repeated_field = hasattr(getattr(proto, name), 'extend')
if is_repeated_field and not isinstance(val, list):
val = [val]
if isinstance(val, list):
if isinstance(val[0], dict):
for item in val:
proto_item = getattr(proto, name).add()
for k, v in six.iteritems(item):
assign_proto(proto_item, k, v)
else:
getattr(proto, name).extend(val)
elif isinstance(val, dict):
for k, v in six.iteritems(val):
assign_proto(getattr(proto, name), k, v)
else:
setattr(proto, name, val)
```
###Assistant : Assign a Python object to a protobuf message, based on the Python
type (in recursive fashion). Lists become repeated fields/messages, dicts
become messages, and other types are assigned directly. For convenience,
repeated fields whose values are not lists are converted to single-element
lists; e.g., `my_repeated_int_field=3` is converted to
`my_repeated_int_field=[3]`.
|
2,477 | def __new__(cls, *args, **kwargs):
sympy_deprecation_warning(
,
deprecated_since_version="1.8",
active_deprecations_target='deprecated-askhandler',
)
return super().__new__(cls, *args, **kwargs)
|
The AskHandler system is deprecated. The AskHandler class should
be replaced with the multipledispatch handler of Predicate
| 17 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __new__(cls, *args, **kwargs):
sympy_deprecation_warning(
,
deprecated_since_version="1.8",
active_deprecations_target='deprecated-askhandler',
)
return super().__new__(cls, *args, **kwargs)
```
###Assistant :
The AskHandler system is deprecated. The AskHandler class should
be replaced with the multipledispatch handler of Predicate
|
2,478 | def incr_mean_variance_axis(X, *, axis, last_mean, last_var, last_n, weights=None):
_raise_error_wrong_axis(axis)
if not isinstance(X, (sp.csr_matrix, sp.csc_matrix)):
_raise_typeerror(X)
if np.size(last_n) == 1:
last_n = np.full(last_mean.shape, last_n, dtype=last_mean.dtype)
if not (np.size(last_mean) == np.size(last_var) == np.size(last_n)):
raise ValueError("last_mean, last_var, last_n do not have the same shapes.")
if axis == 1:
if np.size(last_mean) != X.shape[0]:
raise ValueError(
"If axis=1, then last_mean, last_n, last_var should be of "
f"size n_samples {X.shape[0]} (Got {np.size(last_mean)})."
)
else: # axis == 0
if np.size(last_mean) != X.shape[1]:
raise ValueError(
"If axis=0, then last_mean, last_n, last_var should be of "
f"size n_features {X.shape[1]} (Got {np.size(last_mean)})."
)
X = X.T if axis == 1 else X
if weights is not None:
weights = _check_sample_weight(weights, X, dtype=X.dtype)
return _incr_mean_var_axis0(
X, last_mean=last_mean, last_var=last_var, last_n=last_n, weights=weights
)
| Compute incremental mean and variance along an axis on a CSR or CSC matrix.
last_mean, last_var are the statistics computed at the last step by this
function. Both must be initialized to 0-arrays of the proper size, i.e.
the number of features in X. last_n is the number of samples encountered
until now.
Parameters
----------
X : CSR or CSC sparse matrix of shape (n_samples, n_features)
Input data.
axis : {0, 1}
Axis along which the axis should be computed.
last_mean : ndarray of shape (n_features,) or (n_samples,), dtype=floating
Array of means to update with the new data X.
Should be of shape (n_features,) if axis=0 or (n_samples,) if axis=1.
last_var : ndarray of shape (n_features,) or (n_samples,), dtype=floating
Array of variances to update with the new data X.
Should be of shape (n_features,) if axis=0 or (n_samples,) if axis=1.
last_n : float or ndarray of shape (n_features,) or (n_samples,), \
dtype=floating
Sum of the weights seen so far, excluding the current weights
If not float, it should be of shape (n_features,) if
axis=0 or (n_samples,) if axis=1. If float it corresponds to
having same weights for all samples (or features).
weights : ndarray of shape (n_samples,) or (n_features,), default=None
If axis is set to 0 shape is (n_samples,) or
if axis is set to 1 shape is (n_features,).
If it is set to None, then samples are equally weighted.
.. versionadded:: 0.24
Returns
-------
means : ndarray of shape (n_features,) or (n_samples,), dtype=floating
Updated feature-wise means if axis = 0 or
sample-wise means if axis = 1.
variances : ndarray of shape (n_features,) or (n_samples,), dtype=floating
Updated feature-wise variances if axis = 0 or
sample-wise variances if axis = 1.
n : ndarray of shape (n_features,) or (n_samples,), dtype=integral
Updated number of seen samples per feature if axis=0
or number of seen features per sample if axis=1.
If weights is not None, n is a sum of the weights of the seen
samples or features instead of the actual number of seen
samples or features.
Notes
-----
NaNs are ignored in the algorithm.
| 344 | 121 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def incr_mean_variance_axis(X, *, axis, last_mean, last_var, last_n, weights=None):
_raise_error_wrong_axis(axis)
if not isinstance(X, (sp.csr_matrix, sp.csc_matrix)):
_raise_typeerror(X)
if np.size(last_n) == 1:
last_n = np.full(last_mean.shape, last_n, dtype=last_mean.dtype)
if not (np.size(last_mean) == np.size(last_var) == np.size(last_n)):
raise ValueError("last_mean, last_var, last_n do not have the same shapes.")
if axis == 1:
if np.size(last_mean) != X.shape[0]:
raise ValueError(
"If axis=1, then last_mean, last_n, last_var should be of "
f"size n_samples {X.shape[0]} (Got {np.size(last_mean)})."
)
else: # axis == 0
if np.size(last_mean) != X.shape[1]:
raise ValueError(
"If axis=0, then last_mean, last_n, last_var should be of "
f"size n_features {X.shape[1]} (Got {np.size(last_mean)})."
)
X = X.T if axis == 1 else X
if weights is not None:
weights = _check_sample_weight(weights, X, dtype=X.dtype)
return _incr_mean_var_axis0(
X, last_mean=last_mean, last_var=last_var, last_n=last_n, weights=weights
)
```
###Assistant : Compute incremental mean and variance along an axis on a CSR or CSC matrix.
last_mean, last_var are the statistics computed at the last step by this
function. Both must be initialized to 0-arrays of the proper size, i.e.
the number of features in X. last_n is the number of samples encountered
until now.
Parameters
----------
X : CSR or CSC sparse matrix of shape (n_samples, n_features)
Input data.
axis : {0, 1}
Axis along which the axis should be computed.
last_mean : ndarray of shape (n_features,) or (n_samples,), dtype=floating
Array of means to update with the new data X.
Should be of shape (n_features,) if axis=0 or (n_samples,) if axis=1.
last_var : ndarray of shape (n_features,) or (n_samples,), dtype=floating
Array of variances to update with the new data X.
Should be of shape (n_features,) if axis=0 or (n_samples,) if axis=1.
last_n : float or ndarray of shape (n_features,) or (n_samples,), \
dtype=floating
Sum of the weights seen so far, excluding the current weights
If not float, it should be of shape (n_features,) if
axis=0 or (n_samples,) if axis=1. If float it corresponds to
having same weights for all samples (or features).
weights : ndarray of shape (n_samples,) or (n_features,), default=None
If axis is set to 0 shape is (n_samples,) or
if axis is set to 1 shape is (n_features,).
If it is set to None, then samples are equally weighted.
.. versionadded:: 0.24
Returns
-------
means : ndarray of shape (n_features,) or (n_samples,), dtype=floating
Updated feature-wise means if axis = 0 or
sample-wise means if axis = 1.
variances : ndarray of shape (n_features,) or (n_samples,), dtype=floating
Updated feature-wise variances if axis = 0 or
sample-wise variances if axis = 1.
n : ndarray of shape (n_features,) or (n_samples,), dtype=integral
Updated number of seen samples per feature if axis=0
or number of seen features per sample if axis=1.
If weights is not None, n is a sum of the weights of the seen
samples or features instead of the actual number of seen
samples or features.
Notes
-----
NaNs are ignored in the algorithm.
|
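The `incr_mean_variance_axis` entry above is the scikit-learn helper from `sklearn.utils.sparsefuncs`. A minimal usage sketch, assuming that import path: it streams two CSR batches through the function and checks that the running statistics match the one-pass values computed on the stacked data.
```Python
import numpy as np
import scipy.sparse as sp
from sklearn.utils.sparsefuncs import incr_mean_variance_axis

rng = np.random.RandomState(0)
batch1 = sp.csr_matrix(rng.rand(5, 3))
batch2 = sp.csr_matrix(rng.rand(7, 3))

# Statistics must start as 0-arrays of size n_features, as the docstring notes.
mean, var, n = np.zeros(3), np.zeros(3), np.zeros(3)
for batch in (batch1, batch2):
    mean, var, n = incr_mean_variance_axis(
        batch, axis=0, last_mean=mean, last_var=var, last_n=n
    )

full = sp.vstack([batch1, batch2]).toarray()
assert np.allclose(mean, full.mean(axis=0))
assert np.allclose(var, full.var(axis=0))
```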
2,479 | async def test_thermostat_with_no_off_after_recheck(hass, hk_driver, events):
entity_id = "climate.test"
# support_auto = True
hass.states.async_set(
entity_id,
HVACMode.COOL,
{
ATTR_SUPPORTED_FEATURES: SUPPORT_TARGET_TEMPERATURE
| SUPPORT_TARGET_TEMPERATURE_RANGE,
ATTR_HVAC_MODES: [],
},
)
await hass.async_block_till_done()
acc = Thermostat(hass, hk_driver, "Climate", entity_id, 1, None)
hk_driver.add_accessory(acc)
await acc.run()
await hass.async_block_till_done()
assert acc.char_cooling_thresh_temp.value == 23.0
assert acc.char_heating_thresh_temp.value == 19.0
assert acc.char_cooling_thresh_temp.properties[PROP_MAX_VALUE] == DEFAULT_MAX_TEMP
assert acc.char_cooling_thresh_temp.properties[PROP_MIN_VALUE] == 7.0
assert acc.char_cooling_thresh_temp.properties[PROP_MIN_STEP] == 0.1
assert acc.char_heating_thresh_temp.properties[PROP_MAX_VALUE] == DEFAULT_MAX_TEMP
assert acc.char_heating_thresh_temp.properties[PROP_MIN_VALUE] == 7.0
assert acc.char_heating_thresh_temp.properties[PROP_MIN_STEP] == 0.1
assert acc.char_target_heat_cool.value == 2
hass.states.async_set(
entity_id,
HVACMode.HEAT_COOL,
{
ATTR_TARGET_TEMP_HIGH: 22.0,
ATTR_TARGET_TEMP_LOW: 20.0,
ATTR_CURRENT_TEMPERATURE: 18.0,
ATTR_HVAC_ACTION: HVACAction.HEATING,
ATTR_HVAC_MODES: [HVACMode.HEAT_COOL, HVACMode.AUTO],
},
)
await hass.async_block_till_done()
assert acc.char_heating_thresh_temp.value == 20.0
assert acc.char_cooling_thresh_temp.value == 22.0
assert acc.char_current_heat_cool.value == 1
assert acc.char_target_heat_cool.value == 3
assert acc.char_current_temp.value == 18.0
assert acc.char_display_units.value == 0
| Test if a thermostat that is not ready when we first see it that actually does not have off. | 19 | 118 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_thermostat_with_no_off_after_recheck(hass, hk_driver, events):
entity_id = "climate.test"
# support_auto = True
hass.states.async_set(
entity_id,
HVACMode.COOL,
{
ATTR_SUPPORTED_FEATURES: SUPPORT_TARGET_TEMPERATURE
| SUPPORT_TARGET_TEMPERATURE_RANGE,
ATTR_HVAC_MODES: [],
},
)
await hass.async_block_till_done()
acc = Thermostat(hass, hk_driver, "Climate", entity_id, 1, None)
hk_driver.add_accessory(acc)
await acc.run()
await hass.async_block_till_done()
assert acc.char_cooling_thresh_temp.value == 23.0
assert acc.char_heating_thresh_temp.value == 19.0
assert acc.char_cooling_thresh_temp.properties[PROP_MAX_VALUE] == DEFAULT_MAX_TEMP
assert acc.char_cooling_thresh_temp.properties[PROP_MIN_VALUE] == 7.0
assert acc.char_cooling_thresh_temp.properties[PROP_MIN_STEP] == 0.1
assert acc.char_heating_thresh_temp.properties[PROP_MAX_VALUE] == DEFAULT_MAX_TEMP
assert acc.char_heating_thresh_temp.properties[PROP_MIN_VALUE] == 7.0
assert acc.char_heating_thresh_temp.properties[PROP_MIN_STEP] == 0.1
assert acc.char_target_heat_cool.value == 2
hass.states.async_set(
entity_id,
HVACMode.HEAT_COOL,
{
ATTR_TARGET_TEMP_HIGH: 22.0,
ATTR_TARGET_TEMP_LOW: 20.0,
ATTR_CURRENT_TEMPERATURE: 18.0,
ATTR_HVAC_ACTION: HVACAction.HEATING,
ATTR_HVAC_MODES: [HVACMode.HEAT_COOL, HVACMode.AUTO],
},
)
await hass.async_block_till_done()
assert acc.char_heating_thresh_temp.value == 20.0
assert acc.char_cooling_thresh_temp.value == 22.0
assert acc.char_current_heat_cool.value == 1
assert acc.char_target_heat_cool.value == 3
assert acc.char_current_temp.value == 18.0
assert acc.char_display_units.value == 0
```
###Assistant : Test if a thermostat that is not ready when we first see it that actually does not have off.
|
2,480 | def save(self, *args, **kwargs):
is_new = self.pk is None
if is_new:
clean_name = get_field_clean_name(self.label)
self.clean_name = clean_name
super().save(*args, **kwargs)
|
When new fields are created, generate a template safe ascii name to use as the
JSON storage reference for this field. Previously created fields will be updated
to use the legacy unidecode method via checks & _migrate_legacy_clean_name.
We do not want to update the clean name on any subsequent changes to the label
as this would invalidate any previously submitted data.
| 61 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def save(self, *args, **kwargs):
is_new = self.pk is None
if is_new:
clean_name = get_field_clean_name(self.label)
self.clean_name = clean_name
super().save(*args, **kwargs)
```
###Assistant :
When new fields are created, generate a template safe ascii name to use as the
JSON storage reference for this field. Previously created fields will be updated
to use the legacy unidecode method via checks & _migrate_legacy_clean_name.
We do not want to update the clean name on any subsequent changes to the label
as this would invalidate any previously submitted data.
|
2,481 | def dask_task_wrapper(func, repack, key, ray_pretask_cbs, ray_posttask_cbs, *args):
if ray_pretask_cbs is not None:
pre_states = [
cb(key, args) if cb is not None else None for cb in ray_pretask_cbs
]
repacked_args, repacked_deps = repack(args)
# Recursively execute Dask-inlined tasks.
actual_args = [_execute_task(a, repacked_deps) for a in repacked_args]
# Execute the actual underlying Dask task.
result = func(*actual_args)
if ray_posttask_cbs is not None:
for cb, pre_state in zip(ray_posttask_cbs, pre_states):
if cb is not None:
cb(key, result, pre_state)
return result
|
A Ray remote function acting as a Dask task wrapper. This function will
repackage the given flat `args` into its original data structures using
`repack`, execute any Dask subtasks within the repackaged arguments
(inlined by Dask's optimization pass), and then pass the concrete task
arguments to the provided Dask task function, `func`.
Args:
func (callable): The Dask task function to execute.
repack (callable): A function that repackages the provided args into
the original (possibly nested) Python objects.
key (str): The Dask key for this task.
ray_pretask_cbs (callable): Pre-task execution callbacks.
ray_posttask_cbs (callable): Post-task execution callback.
*args (ObjectRef): Ray object references representing the Dask task's
arguments.
Returns:
The output of the Dask task. In the context of Ray, a
dask_task_wrapper.remote() invocation will return a Ray object
reference representing the Ray task's result.
| 131 | 77 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def dask_task_wrapper(func, repack, key, ray_pretask_cbs, ray_posttask_cbs, *args):
if ray_pretask_cbs is not None:
pre_states = [
cb(key, args) if cb is not None else None for cb in ray_pretask_cbs
]
repacked_args, repacked_deps = repack(args)
# Recursively execute Dask-inlined tasks.
actual_args = [_execute_task(a, repacked_deps) for a in repacked_args]
# Execute the actual underlying Dask task.
result = func(*actual_args)
if ray_posttask_cbs is not None:
for cb, pre_state in zip(ray_posttask_cbs, pre_states):
if cb is not None:
cb(key, result, pre_state)
return result
```
###Assistant :
A Ray remote function acting as a Dask task wrapper. This function will
repackage the given flat `args` into its original data structures using
`repack`, execute any Dask subtasks within the repackaged arguments
(inlined by Dask's optimization pass), and then pass the concrete task
arguments to the provided Dask task function, `func`.
Args:
func (callable): The Dask task function to execute.
repack (callable): A function that repackages the provided args into
the original (possibly nested) Python objects.
key (str): The Dask key for this task.
ray_pretask_cbs (callable): Pre-task execution callbacks.
ray_posttask_cbs (callable): Post-task execution callback.
*args (ObjectRef): Ray object references representing the Dask task's
arguments.
Returns:
The output of the Dask task. In the context of Ray, a
dask_task_wrapper.remote() invocation will return a Ray object
reference representing the Ray task's result.
|
2,482 | def words(count, common=True):
word_list = list(COMMON_WORDS) if common else []
c = len(word_list)
if count > c:
count -= c
while count > 0:
c = min(count, len(WORDS))
count -= c
word_list += random.sample(WORDS, c)
else:
word_list = word_list[:count]
return " ".join(word_list)
|
Return a string of `count` lorem ipsum words separated by a single space.
If `common` is True, then the first 19 words will be the standard
'lorem ipsum' words. Otherwise, all words will be selected randomly.
| 36 | 42 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def words(count, common=True):
word_list = list(COMMON_WORDS) if common else []
c = len(word_list)
if count > c:
count -= c
while count > 0:
c = min(count, len(WORDS))
count -= c
word_list += random.sample(WORDS, c)
else:
word_list = word_list[:count]
return " ".join(word_list)
```
###Assistant :
Return a string of `count` lorem ipsum words separated by a single space.
If `common` is True, then the first 19 words will be the standard
'lorem ipsum' words. Otherwise, all words will be selected randomly.
|
2,483 | def jumpTo(self, bytes):
try:
self._position = self.index(bytes, self.position) + len(bytes) - 1
except ValueError:
raise StopIteration
return True
| Look for the next sequence of bytes matching a given sequence. If
a match is found advance the position to the last byte of the match | 26 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def jumpTo(self, bytes):
try:
self._position = self.index(bytes, self.position) + len(bytes) - 1
except ValueError:
raise StopIteration
return True
```
###Assistant : Look for the next sequence of bytes matching a given sequence. If
a match is found advance the position to the last byte of the match
|
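The `jumpTo` entry above belongs to a byte-scanning class (html5lib's EncodingBytes, not shown here), so here is a hedged standalone sketch of the same idea, with plain `bytes.index` standing in for the class machinery: find a byte sequence and advance the position to its last byte.
```Python
# Hypothetical data/needle values chosen only for illustration.
data = b"<meta charset=utf-8>"
needle = b"charset"
position = 0

position = data.index(needle, position) + len(needle) - 1
assert data[position:position + 1] == b"t"  # position now sits on the last byte of the match
```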
2,484 | def available(self) -> bool:
expire_after: int | None = self._config.get(CONF_EXPIRE_AFTER)
# mypy doesn't know about fget: https://github.com/python/mypy/issues/6185
return MqttAvailability.available.fget(self) and ( # type: ignore[attr-defined]
expire_after is None or not self._expired
)
| Return true if the device is available and value has not expired. | 12 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def available(self) -> bool:
expire_after: int | None = self._config.get(CONF_EXPIRE_AFTER)
# mypy doesn't know about fget: https://github.com/python/mypy/issues/6185
return MqttAvailability.available.fget(self) and ( # type: ignore[attr-defined]
expire_after is None or not self._expired
)
```
###Assistant : Return true if the device is available and value has not expired.
|
2,485 | def Logistic(name, mu, s):
r
return rv(name, LogisticDistribution, (mu, s))
#-------------------------------------------------------------------------------
# Log-logistic distribution --------------------------------------------------------
|
Create a continuous random variable with a logistic distribution.
Explanation
===========
The density of the logistic distribution is given by
.. math::
f(x) := \frac{e^{-(x-\mu)/s}} {s\left(1+e^{-(x-\mu)/s}\right)^2}
Parameters
==========
mu : Real number, the location (mean)
s : Real number, `s > 0`, a scale
Returns
=======
RandomSymbol
Examples
========
>>> from sympy.stats import Logistic, density, cdf
>>> from sympy import Symbol
>>> mu = Symbol("mu", real=True)
>>> s = Symbol("s", positive=True)
>>> z = Symbol("z")
>>> X = Logistic("x", mu, s)
>>> density(X)(z)
exp((mu - z)/s)/(s*(exp((mu - z)/s) + 1)**2)
>>> cdf(X)(z)
1/(exp((mu - z)/s) + 1)
References
==========
.. [1] https://en.wikipedia.org/wiki/Logistic_distribution
.. [2] http://mathworld.wolfram.com/LogisticDistribution.html
| 105 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def Logistic(name, mu, s):
r
return rv(name, LogisticDistribution, (mu, s))
#-------------------------------------------------------------------------------
# Log-logistic distribution --------------------------------------------------------
```
###Assistant :
Create a continuous random variable with a logistic distribution.
Explanation
===========
The density of the logistic distribution is given by
.. math::
f(x) := \frac{e^{-(x-\mu)/s}} {s\left(1+e^{-(x-\mu)/s}\right)^2}
Parameters
==========
mu : Real number, the location (mean)
s : Real number, `s > 0`, a scale
Returns
=======
RandomSymbol
Examples
========
>>> from sympy.stats import Logistic, density, cdf
>>> from sympy import Symbol
>>> mu = Symbol("mu", real=True)
>>> s = Symbol("s", positive=True)
>>> z = Symbol("z")
>>> X = Logistic("x", mu, s)
>>> density(X)(z)
exp((mu - z)/s)/(s*(exp((mu - z)/s) + 1)**2)
>>> cdf(X)(z)
1/(exp((mu - z)/s) + 1)
References
==========
.. [1] https://en.wikipedia.org/wiki/Logistic_distribution
.. [2] http://mathworld.wolfram.com/LogisticDistribution.html
|
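As a small follow-up to the `Logistic` entry above: the closed-form CDF shown in its docstring implies the distribution is symmetric about the location parameter, so evaluating the CDF at `mu` should give exactly one half. A quick check sketch (the values `mu = 2`, `s = 3` are arbitrary illustrative choices):
```Python
from sympy import Rational
from sympy.stats import Logistic, cdf

X = Logistic("x", 2, 3)               # mu = 2, s = 3 (illustrative values)
assert cdf(X)(2) == Rational(1, 2)    # the median of a logistic distribution is mu
```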
2,486 | def __call__(self, feat_maps, comp_attribs):
assert isinstance(feat_maps, paddle.Tensor)
assert comp_attribs.ndim == 3
assert comp_attribs.shape[2] == 8
sorted_dist_inds_batch = []
local_graph_batch = []
knn_batch = []
node_feat_batch = []
node_label_batch = []
for batch_ind in range(comp_attribs.shape[0]):
num_comps = int(comp_attribs[batch_ind, 0, 0])
comp_geo_attribs = comp_attribs[batch_ind, :num_comps, 1:7]
node_labels = comp_attribs[batch_ind, :num_comps, 7].astype(
np.int32)
comp_centers = comp_geo_attribs[:, 0:2]
distance_matrix = euclidean_distance_matrix(comp_centers,
comp_centers)
batch_id = np.zeros(
(comp_geo_attribs.shape[0], 1), dtype=np.float32) * batch_ind
comp_geo_attribs[:, -2] = np.clip(comp_geo_attribs[:, -2], -1, 1)
angle = np.arccos(comp_geo_attribs[:, -2]) * np.sign(
comp_geo_attribs[:, -1])
angle = angle.reshape((-1, 1))
rotated_rois = np.hstack(
[batch_id, comp_geo_attribs[:, :-2], angle])
rois = paddle.to_tensor(rotated_rois)
content_feats = self.pooling(feat_maps[batch_ind].unsqueeze(0),
rois)
content_feats = content_feats.reshape([content_feats.shape[0], -1])
geo_feats = feature_embedding(comp_geo_attribs,
self.node_geo_feat_dim)
geo_feats = paddle.to_tensor(geo_feats)
node_feats = paddle.concat([content_feats, geo_feats], axis=-1)
sorted_dist_inds = np.argsort(distance_matrix, axis=1)
pivot_local_graphs, pivot_knns = self.generate_local_graphs(
sorted_dist_inds, node_labels)
node_feat_batch.append(node_feats)
node_label_batch.append(node_labels)
local_graph_batch.append(pivot_local_graphs)
knn_batch.append(pivot_knns)
sorted_dist_inds_batch.append(sorted_dist_inds)
(node_feats, adjacent_matrices, knn_inds, gt_linkage) = \
self.generate_gcn_input(node_feat_batch,
node_label_batch,
local_graph_batch,
knn_batch,
sorted_dist_inds_batch)
return node_feats, adjacent_matrices, knn_inds, gt_linkage
| Generate local graphs as GCN input.
Args:
feat_maps (Tensor): The feature maps to extract the content
features of text components.
comp_attribs (ndarray): The text component attributes.
Returns:
local_graphs_node_feat (Tensor): The node features of graph.
adjacent_matrices (Tensor): The adjacent matrices of local graphs.
pivots_knn_inds (Tensor): The k-nearest neighbor indices in local
graph.
gt_linkage (Tensor): The supervision signal of GCN for linkage
prediction.
| 61 | 146 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __call__(self, feat_maps, comp_attribs):
assert isinstance(feat_maps, paddle.Tensor)
assert comp_attribs.ndim == 3
assert comp_attribs.shape[2] == 8
sorted_dist_inds_batch = []
local_graph_batch = []
knn_batch = []
node_feat_batch = []
node_label_batch = []
for batch_ind in range(comp_attribs.shape[0]):
num_comps = int(comp_attribs[batch_ind, 0, 0])
comp_geo_attribs = comp_attribs[batch_ind, :num_comps, 1:7]
node_labels = comp_attribs[batch_ind, :num_comps, 7].astype(
np.int32)
comp_centers = comp_geo_attribs[:, 0:2]
distance_matrix = euclidean_distance_matrix(comp_centers,
comp_centers)
batch_id = np.zeros(
(comp_geo_attribs.shape[0], 1), dtype=np.float32) * batch_ind
comp_geo_attribs[:, -2] = np.clip(comp_geo_attribs[:, -2], -1, 1)
angle = np.arccos(comp_geo_attribs[:, -2]) * np.sign(
comp_geo_attribs[:, -1])
angle = angle.reshape((-1, 1))
rotated_rois = np.hstack(
[batch_id, comp_geo_attribs[:, :-2], angle])
rois = paddle.to_tensor(rotated_rois)
content_feats = self.pooling(feat_maps[batch_ind].unsqueeze(0),
rois)
content_feats = content_feats.reshape([content_feats.shape[0], -1])
geo_feats = feature_embedding(comp_geo_attribs,
self.node_geo_feat_dim)
geo_feats = paddle.to_tensor(geo_feats)
node_feats = paddle.concat([content_feats, geo_feats], axis=-1)
sorted_dist_inds = np.argsort(distance_matrix, axis=1)
pivot_local_graphs, pivot_knns = self.generate_local_graphs(
sorted_dist_inds, node_labels)
node_feat_batch.append(node_feats)
node_label_batch.append(node_labels)
local_graph_batch.append(pivot_local_graphs)
knn_batch.append(pivot_knns)
sorted_dist_inds_batch.append(sorted_dist_inds)
(node_feats, adjacent_matrices, knn_inds, gt_linkage) = \
self.generate_gcn_input(node_feat_batch,
node_label_batch,
local_graph_batch,
knn_batch,
sorted_dist_inds_batch)
return node_feats, adjacent_matrices, knn_inds, gt_linkage
```
###Assistant : Generate local graphs as GCN input.
Args:
feat_maps (Tensor): The feature maps to extract the content
features of text components.
comp_attribs (ndarray): The text component attributes.
Returns:
local_graphs_node_feat (Tensor): The node features of graph.
adjacent_matrices (Tensor): The adjacent matrices of local graphs.
pivots_knn_inds (Tensor): The k-nearest neighbor indices in local
graph.
gt_linkage (Tensor): The supervision signal of GCN for linkage
prediction.
|
2,487 | def _is_zero_copy_arrow_op(cls, op) -> bool:
is_zero_copy_op = False
if isinstance(op, (FrameNode, TransformNode, UnionNode)):
# - FrameNode: already materialized PyArrow table
# - TransformNode: select certain columns of the table, implemented zero-copy (``df._arrow_select``)
# - UnionNode: concatenate PyArrow tables, implemented zero-copy (``df._arrow_concat``)
is_zero_copy_op = True
elif isinstance(op, MaskNode) and (
isinstance(op.row_positions, slice) or is_range_like(op.row_positions)
):
# Can select rows zero-copy if indexer is a slice-like (``df._arrow_row_slice``)
is_zero_copy_op = True
return is_zero_copy_op and all(
# Walk the computation tree
cls._is_zero_copy_arrow_op(_op)
for _op in getattr(op, "inputs", [])
)
|
Check whether the passed node of the delayed computation tree could be executed zero-copy via PyArrow execution.
Parameters
----------
op : DFAlgNode
Returns
-------
bool
| 25 | 85 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _is_zero_copy_arrow_op(cls, op) -> bool:
is_zero_copy_op = False
if isinstance(op, (FrameNode, TransformNode, UnionNode)):
# - FrameNode: already materialized PyArrow table
# - TransformNode: select certain columns of the table, implemented zero-copy (``df._arrow_select``)
# - UnionNode: concatenate PyArrow tables, implemented zero-copy (``df._arrow_concat``)
is_zero_copy_op = True
elif isinstance(op, MaskNode) and (
isinstance(op.row_positions, slice) or is_range_like(op.row_positions)
):
# Can select rows zero-copy if indexer is a slice-like (``df._arrow_row_slice``)
is_zero_copy_op = True
return is_zero_copy_op and all(
# Walk the computation tree
cls._is_zero_copy_arrow_op(_op)
for _op in getattr(op, "inputs", [])
)
```
###Assistant :
Check whether the passed node of the delayed computation tree could be executed zero-copy via PyArrow execution.
Parameters
----------
op : DFAlgNode
Returns
-------
bool
|
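The `_is_zero_copy_arrow_op` entry above walks a computation tree and requires every node to qualify. A generic sketch of that recursive pattern, using a hypothetical `Node` class with an `inputs` list in place of `DFAlgNode` (not the Modin implementation itself):
```Python
from dataclasses import dataclass, field

@dataclass
class Node:
    zero_copy: bool
    inputs: list = field(default_factory=list)

def is_zero_copy_tree(node) -> bool:
    # A tree qualifies only if this node and, recursively, all its inputs qualify.
    return node.zero_copy and all(is_zero_copy_tree(child) for child in node.inputs)

tree = Node(True, [Node(True), Node(True, [Node(False)])])
assert is_zero_copy_tree(tree) is False   # one leaf breaks the zero-copy chain
```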
2,488 | def batch_p_dist(x, y, p=2):
x = x.unsqueeze(1)
diff = x - y
return paddle.norm(diff, p=p, axis=list(range(2, diff.dim())))
@register |
calculate pairwise p_dist, the first index of x and y are batch
return [x.shape[0], y.shape[0]]
| 15 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def batch_p_dist(x, y, p=2):
x = x.unsqueeze(1)
diff = x - y
return paddle.norm(diff, p=p, axis=list(range(2, diff.dim())))
@register
```
###Assistant :
calculate pairwise p_dist, the first index of x and y are batch
return [x.shape[0], y.shape[0]]
|
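The `batch_p_dist` entry above relies on an unsqueeze-and-broadcast trick. A NumPy sketch of the same idea (not the Paddle code itself, and limited to 2-D point batches): expanding `x` to shape `[Nx, 1, D]` against `y` of shape `[Ny, D]` yields an `[Nx, Ny, D]` difference tensor, and the p-norm over the last axis gives the `[Nx, Ny]` pairwise distance matrix.
```Python
import numpy as np

x = np.random.rand(4, 3)                  # batch of 4 points in R^3
y = np.random.rand(5, 3)                  # batch of 5 points in R^3

diff = x[:, None, :] - y[None, :, :]      # broadcasts to shape (4, 5, 3)
dist = np.linalg.norm(diff, ord=2, axis=-1)   # p = 2

assert dist.shape == (4, 5)               # first indices of x and y are the batch dims
```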
2,489 | def get_employee_shift(employee, for_timestamp=None, consider_default_shift=False, next_shift_direction=None):
if for_timestamp is None:
for_timestamp = now_datetime()
shift_details = get_shift_for_timestamp(employee, for_timestamp)
# if shift assignment is not found, consider default shift
default_shift = frappe.db.get_value('Employee', employee, 'default_shift')
if not shift_details and consider_default_shift:
shift_details = get_shift_details(default_shift, for_timestamp.date())
# if its a holiday, reset
if shift_details and is_holiday_date(employee, shift_details):
shift_details = None
# if no shift is found, find next or prev shift based on direction
if not shift_details and next_shift_direction:
shift_details = get_prev_or_next_shift(employee, for_timestamp, consider_default_shift, default_shift, next_shift_direction)
return shift_details
| Returns a Shift Type for the given employee on the given date. (excluding the holidays)
:param employee: Employee for which shift is required.
:param for_timestamp: DateTime on which shift is required
:param consider_default_shift: If set to true, default shift is taken when no shift assignment is found.
:param next_shift_direction: One of: None, 'forward', 'reverse'. Direction to look for next shift if shift not found on given date.
| 67 | 82 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_employee_shift(employee, for_timestamp=None, consider_default_shift=False, next_shift_direction=None):
if for_timestamp is None:
for_timestamp = now_datetime()
shift_details = get_shift_for_timestamp(employee, for_timestamp)
# if shift assignment is not found, consider default shift
default_shift = frappe.db.get_value('Employee', employee, 'default_shift')
if not shift_details and consider_default_shift:
shift_details = get_shift_details(default_shift, for_timestamp.date())
# if its a holiday, reset
if shift_details and is_holiday_date(employee, shift_details):
shift_details = None
# if no shift is found, find next or prev shift based on direction
if not shift_details and next_shift_direction:
shift_details = get_prev_or_next_shift(employee, for_timestamp, consider_default_shift, default_shift, next_shift_direction)
return shift_details
```
###Assistant : Returns a Shift Type for the given employee on the given date. (excluding the holidays)
:param employee: Employee for which shift is required.
:param for_timestamp: DateTime on which shift is required
:param consider_default_shift: If set to true, default shift is taken when no shift assignment is found.
:param next_shift_direction: One of: None, 'forward', 'reverse'. Direction to look for next shift if shift not found on given date.
|
2,490 | def get_create_form_class(self):
self.create_model = self.get_create_model()
if self.create_model:
return get_task_form_class(self.create_model)
else:
return None
|
To be called after dispatch(); returns the form class for creating a new task
| 14 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_create_form_class(self):
self.create_model = self.get_create_model()
if self.create_model:
return get_task_form_class(self.create_model)
else:
return None
```
###Assistant :
To be called after dispatch(); returns the form class for creating a new task
|
2,491 | def retrieve_image(self):
image = self.storage.open(self.image_path, "rb")
image_format = self.get_image_metadata_from_file(image)
return (Image.open(image), image_format)
| Return a PIL Image instance stored at `image_path`. | 8 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def retrieve_image(self):
image = self.storage.open(self.image_path, "rb")
image_format = self.get_image_metadata_from_file(image)
return (Image.open(image), image_format)
```
###Assistant : Return a PIL Image instance stored at `image_path`.
|
2,492 | def chebval(x, c, tensor=True):
c = np.array(c, ndmin=1, copy=True)
if c.dtype.char in '?bBhHiIlLqQpP':
c = c.astype(np.double)
if isinstance(x, (tuple, list)):
x = np.asarray(x)
if isinstance(x, np.ndarray) and tensor:
c = c.reshape(c.shape + (1,)*x.ndim)
if len(c) == 1:
c0 = c[0]
c1 = 0
elif len(c) == 2:
c0 = c[0]
c1 = c[1]
else:
x2 = 2*x
c0 = c[-2]
c1 = c[-1]
for i in range(3, len(c) + 1):
tmp = c0
c0 = c[-i] - c1
c1 = tmp + c1*x2
return c0 + c1*x
|
Evaluate a Chebyshev series at points x.
If `c` is of length `n + 1`, this function returns the value:
.. math:: p(x) = c_0 * T_0(x) + c_1 * T_1(x) + ... + c_n * T_n(x)
The parameter `x` is converted to an array only if it is a tuple or a
list, otherwise it is treated as a scalar. In either case, either `x`
or its elements must support multiplication and addition both with
themselves and with the elements of `c`.
If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. If
`c` is multidimensional, then the shape of the result depends on the
value of `tensor`. If `tensor` is true the shape will be c.shape[1:] +
x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that
scalars have shape (,).
Trailing zeros in the coefficients will be used in the evaluation, so
they should be avoided if efficiency is a concern.
Parameters
----------
x : array_like, compatible object
If `x` is a list or tuple, it is converted to an ndarray, otherwise
it is left unchanged and treated as a scalar. In either case, `x`
or its elements must support addition and multiplication with
themselves and with the elements of `c`.
c : array_like
Array of coefficients ordered so that the coefficients for terms of
degree n are contained in c[n]. If `c` is multidimensional the
remaining indices enumerate multiple polynomials. In the two
dimensional case the coefficients may be thought of as stored in
the columns of `c`.
tensor : boolean, optional
If True, the shape of the coefficient array is extended with ones
on the right, one for each dimension of `x`. Scalars have dimension 0
for this action. The result is that every column of coefficients in
`c` is evaluated for every element of `x`. If False, `x` is broadcast
over the columns of `c` for the evaluation. This keyword is useful
when `c` is multidimensional. The default value is True.
.. versionadded:: 1.7.0
Returns
-------
values : ndarray, algebra_like
The shape of the return value is described above.
See Also
--------
chebval2d, chebgrid2d, chebval3d, chebgrid3d
Notes
-----
The evaluation uses Clenshaw recursion, aka synthetic division.
| 369 | 87 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def chebval(x, c, tensor=True):
c = np.array(c, ndmin=1, copy=True)
if c.dtype.char in '?bBhHiIlLqQpP':
c = c.astype(np.double)
if isinstance(x, (tuple, list)):
x = np.asarray(x)
if isinstance(x, np.ndarray) and tensor:
c = c.reshape(c.shape + (1,)*x.ndim)
if len(c) == 1:
c0 = c[0]
c1 = 0
elif len(c) == 2:
c0 = c[0]
c1 = c[1]
else:
x2 = 2*x
c0 = c[-2]
c1 = c[-1]
for i in range(3, len(c) + 1):
tmp = c0
c0 = c[-i] - c1
c1 = tmp + c1*x2
return c0 + c1*x
```
###Assistant :
Evaluate a Chebyshev series at points x.
If `c` is of length `n + 1`, this function returns the value:
.. math:: p(x) = c_0 * T_0(x) + c_1 * T_1(x) + ... + c_n * T_n(x)
The parameter `x` is converted to an array only if it is a tuple or a
list, otherwise it is treated as a scalar. In either case, either `x`
or its elements must support multiplication and addition both with
themselves and with the elements of `c`.
If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. If
`c` is multidimensional, then the shape of the result depends on the
value of `tensor`. If `tensor` is true the shape will be c.shape[1:] +
x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that
scalars have shape (,).
Trailing zeros in the coefficients will be used in the evaluation, so
they should be avoided if efficiency is a concern.
Parameters
----------
x : array_like, compatible object
If `x` is a list or tuple, it is converted to an ndarray, otherwise
it is left unchanged and treated as a scalar. In either case, `x`
or its elements must support addition and multiplication with
themselves and with the elements of `c`.
c : array_like
Array of coefficients ordered so that the coefficients for terms of
degree n are contained in c[n]. If `c` is multidimensional the
remaining indices enumerate multiple polynomials. In the two
dimensional case the coefficients may be thought of as stored in
the columns of `c`.
tensor : boolean, optional
If True, the shape of the coefficient array is extended with ones
on the right, one for each dimension of `x`. Scalars have dimension 0
for this action. The result is that every column of coefficients in
`c` is evaluated for every element of `x`. If False, `x` is broadcast
over the columns of `c` for the evaluation. This keyword is useful
when `c` is multidimensional. The default value is True.
.. versionadded:: 1.7.0
Returns
-------
values : ndarray, algebra_like
The shape of the return value is described above.
See Also
--------
chebval2d, chebgrid2d, chebval3d, chebgrid3d
Notes
-----
The evaluation uses Clenshaw recursion, aka synthetic division.
|
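The `chebval` entry above matches NumPy's `numpy.polynomial.chebyshev.chebval`, and its docstring states the series it evaluates. A short check sketch, assuming that NumPy function: compare the Clenshaw-based result against a direct evaluation of c0*T0 + c1*T1 + c2*T2 with T0(x) = 1, T1(x) = x, T2(x) = 2x**2 - 1.
```Python
import numpy as np
from numpy.polynomial.chebyshev import chebval

c = [1.0, 2.0, 3.0]                     # coefficients for T0, T1, T2
x = np.linspace(-1.0, 1.0, 5)

direct = c[0] + c[1] * x + c[2] * (2 * x**2 - 1)
assert np.allclose(chebval(x, c), direct)
```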
2,493 | def dagrun_queued(self):
dag_id = request.form.get('dag_id')
dag_run_id = request.form.get('dag_run_id')
confirmed = request.form.get('confirmed') == 'true'
origin = get_safe_url(request.form.get('origin'))
return self._mark_dagrun_state_as_queued(dag_id, dag_run_id, confirmed, origin)
| Queue DagRun so tasks that haven't run yet can be started. | 11 | 21 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def dagrun_queued(self):
dag_id = request.form.get('dag_id')
dag_run_id = request.form.get('dag_run_id')
confirmed = request.form.get('confirmed') == 'true'
origin = get_safe_url(request.form.get('origin'))
return self._mark_dagrun_state_as_queued(dag_id, dag_run_id, confirmed, origin)
```
###Assistant : Queue DagRun so tasks that haven't run yet can be started.
|
2,494 | def data_to_internal(data):
internal = data.copy()
if 'extra_vars' in data:
internal['extra_vars'] = json.loads(data['extra_vars'])
if 'credentials' in data:
internal['credentials'] = set(Credential.objects.get(pk=_id) for _id in data['credentials'])
if 'inventory' in data:
internal['inventory'] = Inventory.objects.get(pk=data['inventory'])
if 'execution_environment' in data:
internal['execution_environment'] = ExecutionEnvironment.objects.get(pk=data['execution_environment'])
if 'labels' in data:
internal['labels'] = [Label.objects.get(pk=_id) for _id in data['labels']]
if 'instance_groups' in data:
internal['instance_groups'] = [InstanceGroup.objects.get(pk=_id) for _id in data['instance_groups']]
return internal
# End of setup, tests start here
@pytest.mark.django_db
@pytest.mark.job_runtime_vars |
returns internal representation, model objects, dictionaries, etc
as opposed to integer primary keys and JSON strings
| 16 | 70 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def data_to_internal(data):
internal = data.copy()
if 'extra_vars' in data:
internal['extra_vars'] = json.loads(data['extra_vars'])
if 'credentials' in data:
internal['credentials'] = set(Credential.objects.get(pk=_id) for _id in data['credentials'])
if 'inventory' in data:
internal['inventory'] = Inventory.objects.get(pk=data['inventory'])
if 'execution_environment' in data:
internal['execution_environment'] = ExecutionEnvironment.objects.get(pk=data['execution_environment'])
if 'labels' in data:
internal['labels'] = [Label.objects.get(pk=_id) for _id in data['labels']]
if 'instance_groups' in data:
internal['instance_groups'] = [InstanceGroup.objects.get(pk=_id) for _id in data['instance_groups']]
return internal
# End of setup, tests start here
@pytest.mark.django_db
@pytest.mark.job_runtime_vars
```
###Assistant :
returns internal representation, model objects, dictionaries, etc
as opposed to integer primary keys and JSON strings
|
2,495 | def _generate_sys_info_str(*sys_info) -> str:
if sys_info:
return "<br>".join(sys_info).replace("\n", "<br>")
return ""
| Format system info into a string.
*sys_info: System info strings to be included.
Returns:
Formatted string containing system information.
| 19 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _generate_sys_info_str(*sys_info) -> str:
if sys_info:
return "<br>".join(sys_info).replace("\n", "<br>")
return ""
```
###Assistant : Format system info into a string.
*sys_info: System info strings to be included.
Returns:
Formatted string containing system information.
|
2,496 | def default(self, obj):
if isinstance(obj, tf.TensorShape):
items = obj.as_list() if obj.rank is not None else None
return {"class_name": "TensorShape", "items": items}
return get_json_type(obj)
| Encodes objects for types that aren't handled by the default
encoder. | 11 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def default(self, obj):
if isinstance(obj, tf.TensorShape):
items = obj.as_list() if obj.rank is not None else None
return {"class_name": "TensorShape", "items": items}
return get_json_type(obj)
```
###Assistant : Encodes objects for types that aren't handled by the default
encoder.
|
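The `default` entry above is the standard way to extend `json.JSONEncoder` for types the stock encoder rejects. A generic sketch of that pattern, using a hypothetical `Shape` dataclass instead of `tf.TensorShape` so it runs without TensorFlow (the Keras fallback `get_json_type` is replaced by the usual `super().default` here):
```Python
import json
from dataclasses import dataclass

@dataclass
class Shape:
    dims: list

class ShapeEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, Shape):
            return {"class_name": "Shape", "items": obj.dims}
        return super().default(obj)   # raises TypeError for anything else

print(json.dumps({"input_shape": Shape([None, 32])}, cls=ShapeEncoder))
# {"input_shape": {"class_name": "Shape", "items": [null, 32]}}
```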
2,497 | async def _pause_and_wait_for_callback(self):
self._pause_requested = True
await self.async_media_pause()
try: | Send pause and wait for the pause callback to be received. | 11 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def _pause_and_wait_for_callback(self):
self._pause_requested = True
await self.async_media_pause()
try:
```
###Assistant : Send pause and wait for the pause callback to be received.
|
2,498 | def delete_checkpoint(self, checkpoint_path):
# Ensure TrialCheckpoints are converted
if isinstance(checkpoint_path, TrialCheckpoint):
checkpoint_path = checkpoint_path.local_path
try:
checkpoint_dir = TrainableUtil.find_checkpoint_dir(checkpoint_path)
except FileNotFoundError:
# The checkpoint won't exist locally if the
# trial was rescheduled to another worker.
logger.debug(
f"Local checkpoint not found during garbage collection: "
f"{self.trial_id} - {checkpoint_path}"
)
return
else:
if self.uses_cloud_checkpointing:
self.storage_client.delete(self._storage_path(checkpoint_dir))
if os.path.exists(checkpoint_dir):
shutil.rmtree(checkpoint_dir)
| Deletes local copy of checkpoint.
Args:
checkpoint_path (str): Path to checkpoint.
| 11 | 56 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def delete_checkpoint(self, checkpoint_path):
# Ensure TrialCheckpoints are converted
if isinstance(checkpoint_path, TrialCheckpoint):
checkpoint_path = checkpoint_path.local_path
try:
checkpoint_dir = TrainableUtil.find_checkpoint_dir(checkpoint_path)
except FileNotFoundError:
# The checkpoint won't exist locally if the
# trial was rescheduled to another worker.
logger.debug(
f"Local checkpoint not found during garbage collection: "
f"{self.trial_id} - {checkpoint_path}"
)
return
else:
if self.uses_cloud_checkpointing:
self.storage_client.delete(self._storage_path(checkpoint_dir))
if os.path.exists(checkpoint_dir):
shutil.rmtree(checkpoint_dir)
```
###Assistant : Deletes local copy of checkpoint.
Args:
checkpoint_path (str): Path to checkpoint.
|
2,499 | def _resize_worker_group(self, state_dict, max_retries=10):
old_workers = self.worker_group.num_workers
self.worker_group.reset()
time.sleep(1)
for i in range(max_retries):
new_workers = self.worker_group.new_workers_size()
if new_workers:
self._last_resize = time.time()
startup_success = self._start_workers(int(new_workers))
if not startup_success:
logger.info(
f"Worker startup failed. Retrying "
f"{max_retries-i-1} more times."
)
self.worker_group.reset()
continue
self.load_state_dict(state_dict, blocking=True)
if self.use_local and new_workers == 1 and old_workers > 1:
# Major hack. If we go from LocalDistributedRunner to a
# standard TorchRunner we have to manually reset the
# dummy actor handle global vars.
# TODO(amog): Refactor LocalDistributedTorchRunner to
# not use global variables for resource reservation.
ray.util.sgd.torch.distributed_torch_runner._dummy_cuda_actor = None
ray.util.sgd.torch.distributed_torch_runner._dummy_cpu_actor = None
return
else:
delay = 2 ** i
logger.warning("No new workers found. Retrying in %d sec." % delay)
time.sleep(delay)
raise RuntimeError("Exceeded max_retries for relaunching workers.")
| Resizes the number of remote workers based on available resources.
Total number of workers will never exceed `num_workers` amount.
Args:
state_dict (dict): The state dict to load to all workers.
max_retries (int): How many times to attempt to resize workers
before failing.
| 42 | 119 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _resize_worker_group(self, state_dict, max_retries=10):
old_workers = self.worker_group.num_workers
self.worker_group.reset()
time.sleep(1)
for i in range(max_retries):
new_workers = self.worker_group.new_workers_size()
if new_workers:
self._last_resize = time.time()
startup_success = self._start_workers(int(new_workers))
if not startup_success:
logger.info(
f"Worker startup failed. Retrying "
f"{max_retries-i-1} more times."
)
self.worker_group.reset()
continue
self.load_state_dict(state_dict, blocking=True)
if self.use_local and new_workers == 1 and old_workers > 1:
# Major hack. If we go from LocalDistributedRunner to a
# standard TorchRunner we have to manually reset the
# dummy actor handle global vars.
# TODO(amog): Refactor LocalDistributedTorchRunner to
# not use global variables for resource reservation.
ray.util.sgd.torch.distributed_torch_runner._dummy_cuda_actor = None
ray.util.sgd.torch.distributed_torch_runner._dummy_cpu_actor = None
return
else:
delay = 2 ** i
logger.warning("No new workers found. Retrying in %d sec." % delay)
time.sleep(delay)
raise RuntimeError("Exceeded max_retries for relaunching workers.")
```
###Assistant : Resizes the number of remote workers based on available resources.
Total number of workers will never exceed `num_workers` amount.
Args:
state_dict (dict): The state dict to load to all workers.
max_retries (int): How many times to attempt to resize workers
before failing.
|
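The `_resize_worker_group` entry above retries worker startup with exponential backoff (`delay = 2 ** i`). A stripped-down sketch of just that retry loop, with a hypothetical `try_get_workers` callable standing in for `worker_group.new_workers_size()` and no Ray involved:
```Python
import logging
import time

logger = logging.getLogger(__name__)

def wait_for_workers(try_get_workers, max_retries=10):
    for i in range(max_retries):
        new_workers = try_get_workers()
        if new_workers:
            return new_workers
        delay = 2 ** i                      # exponential backoff: 1, 2, 4, ... seconds
        logger.warning("No new workers found. Retrying in %d sec.", delay)
        time.sleep(delay)
    raise RuntimeError("Exceeded max_retries for relaunching workers.")

assert wait_for_workers(lambda: 2) == 2     # succeeds on the first attempt, no sleep
```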