Unnamed: 0 (int64, 0-2.93k) | code (string, lengths 101-62.2k) | docs (string, lengths 51-10.7k) | doc_len (int64, 4-1.74k) | words (int64, 4-4.82k) | lang (string, 1 class) | prompt (string, lengths 320-71.2k)
---|---|---|---|---|---|---
2,500 | def check_result_same(results, pipeline_results, check_keys):
for key in check_keys:
if results.get(key, None) is None:
continue
if isinstance(results[key], (BitmapMasks, PolygonMasks)):
assert_allclose(pipeline_results[key].to_ndarray(),
results[key].to_ndarray())
else:
assert_allclose(pipeline_results[key], results[key])
| Check whether the ``pipeline_results`` is the same as the predefined
``results``.
Args:
results (dict): Predefined results which should be the standard
output of the transform pipeline.
pipeline_results (dict): Results processed by the transform
pipeline.
check_keys (tuple): Keys that need to be checked between
results and pipeline_results.
| 46 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def check_result_same(results, pipeline_results, check_keys):
for key in check_keys:
if results.get(key, None) is None:
continue
if isinstance(results[key], (BitmapMasks, PolygonMasks)):
assert_allclose(pipeline_results[key].to_ndarray(),
results[key].to_ndarray())
else:
assert_allclose(pipeline_results[key], results[key])
```
###Assistant : Check whether the ``pipeline_results`` is the same as the predefined
``results``.
Args:
results (dict): Predefined results which should be the standard
output of the transform pipeline.
pipeline_results (dict): Results processed by the transform
pipeline.
check_keys (tuple): Keys that need to be checked between
results and pipeline_results.
|
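A minimal usage sketch for this helper, assuming the mmdetection imports below (in mmdet 2.x the mask classes live in `mmdet.core.mask.structures`); plain numpy arrays exercise the `assert_allclose` branch:
```Python
import numpy as np
from numpy.testing import assert_allclose
# assumed import path for the mask classes referenced above (mmdet 2.x)
from mmdet.core.mask.structures import BitmapMasks, PolygonMasks

# hypothetical expected ("standard") output vs. actual pipeline output
results = {"gt_bboxes": np.array([[0.0, 0.0, 10.0, 10.0]]), "gt_labels": np.array([1])}
pipeline_results = {"gt_bboxes": np.array([[0.0, 0.0, 10.0, 10.0]]), "gt_labels": np.array([1])}

check_result_same(results, pipeline_results, check_keys=("gt_bboxes", "gt_labels"))
```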
2,501 | def get_rasa_sdk_version() -> Text:
dependencies_filename = "pyproject.toml"
toml_data = toml.load(project_root() / dependencies_filename)
try:
sdk_version = toml_data["tool"]["poetry"]["dependencies"]["rasa-sdk"]
return sdk_version[1:].strip()
except AttributeError:
raise Exception(f"Failed to find Rasa SDK version in {dependencies_filename}")
| Find out what the referenced version of the Rasa SDK is. | 11 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_rasa_sdk_version() -> Text:
dependencies_filename = "pyproject.toml"
toml_data = toml.load(project_root() / dependencies_filename)
try:
sdk_version = toml_data["tool"]["poetry"]["dependencies"]["rasa-sdk"]
return sdk_version[1:].strip()
except AttributeError:
raise Exception(f"Failed to find Rasa SDK version in {dependencies_filename}")
```
###Assistant : Find out what the referenced version of the Rasa SDK is.
|
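Note that `sdk_version[1:].strip()` drops exactly one leading character, which works because Poetry constraints such as `^2.8.2` or `~2.8.2` carry a one-character operator. A tiny sketch with a hypothetical constraint string:
```Python
sdk_version = "~2.8.2"          # e.g. the tool.poetry.dependencies.rasa-sdk value
print(sdk_version[1:].strip())  # -> "2.8.2"
```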
2,502 | def test_update_omitted_version(self) -> None:
version = self.get_success(
self.handler.create_version(
self.local_user,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "first_version_auth_data",
},
)
)
self.assertEqual(version, "1")
self.get_success(
self.handler.update_version(
self.local_user,
version,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "revised_first_version_auth_data",
},
)
)
# check we can retrieve it as the current version
res = self.get_success(self.handler.get_version_info(self.local_user))
del res["etag"] # etag is opaque, so don't test its contents
self.assertDictEqual(
res,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "revised_first_version_auth_data",
"version": version,
"count": 0,
},
)
| Check that the update succeeds if the version is missing from the body | 13 | 68 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_update_omitted_version(self) -> None:
version = self.get_success(
self.handler.create_version(
self.local_user,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "first_version_auth_data",
},
)
)
self.assertEqual(version, "1")
self.get_success(
self.handler.update_version(
self.local_user,
version,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "revised_first_version_auth_data",
},
)
)
# check we can retrieve it as the current version
res = self.get_success(self.handler.get_version_info(self.local_user))
del res["etag"] # etag is opaque, so don't test its contents
self.assertDictEqual(
res,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "revised_first_version_auth_data",
"version": version,
"count": 0,
},
)
```
###Assistant : Check that the update succeeds if the version is missing from the body
|
2,503 | def parameter_value(self, other, u, v=None):
from sympy.geometry.point import Point
if not isinstance(other, GeometryEntity):
other = Point(other, dim=self.ambient_dimension)
if not isinstance(other, Point):
raise ValueError("other must be a point")
if other == self.p1:
return other
if isinstance(u, Symbol) and v is None:
delta = self.arbitrary_point(u) - self.p1
eq = delta - (other - self.p1).unit
sol = solve(eq, u, dict=True)
elif isinstance(u, Symbol) and isinstance(v, Symbol):
pt = self.arbitrary_point(u, v)
sol = solve(pt - other, (u, v), dict=True)
else:
raise ValueError('expecting 1 or 2 symbols')
if not sol:
raise ValueError("Given point is not on %s" % func_name(self))
return sol[0] # {t: tval} or {u: uval, v: vval}
| Return the parameter(s) corresponding to the given point.
Examples
========
>>> from sympy import pi, Plane
>>> from sympy.abc import t, u, v
>>> p = Plane((2, 0, 0), (0, 0, 1), (0, 1, 0))
By default, the parameter value returned defines a point
that is a distance of 1 from the Plane's p1 value and
in line with the given point:
>>> on_circle = p.arbitrary_point(t).subs(t, pi/4)
>>> on_circle.distance(p.p1)
1
>>> p.parameter_value(on_circle, t)
{t: pi/4}
Moving the point twice as far from p1 does not change
the parameter value:
>>> off_circle = p.p1 + (on_circle - p.p1)*2
>>> off_circle.distance(p.p1)
2
>>> p.parameter_value(off_circle, t)
{t: pi/4}
If the 2-value parameter is desired, supply the two
parameter symbols and a replacement dictionary will
be returned:
>>> p.parameter_value(on_circle, u, v)
{u: sqrt(10)/10, v: sqrt(10)/30}
>>> p.parameter_value(off_circle, u, v)
{u: sqrt(10)/5, v: sqrt(10)/15}
| 139 | 104 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def parameter_value(self, other, u, v=None):
from sympy.geometry.point import Point
if not isinstance(other, GeometryEntity):
other = Point(other, dim=self.ambient_dimension)
if not isinstance(other, Point):
raise ValueError("other must be a point")
if other == self.p1:
return other
if isinstance(u, Symbol) and v is None:
delta = self.arbitrary_point(u) - self.p1
eq = delta - (other - self.p1).unit
sol = solve(eq, u, dict=True)
elif isinstance(u, Symbol) and isinstance(v, Symbol):
pt = self.arbitrary_point(u, v)
sol = solve(pt - other, (u, v), dict=True)
else:
raise ValueError('expecting 1 or 2 symbols')
if not sol:
raise ValueError("Given point is not on %s" % func_name(self))
return sol[0] # {t: tval} or {u: uval, v: vval}
```
###Assistant : Return the parameter(s) corresponding to the given point.
Examples
========
>>> from sympy import pi, Plane
>>> from sympy.abc import t, u, v
>>> p = Plane((2, 0, 0), (0, 0, 1), (0, 1, 0))
By default, the parameter value returned defines a point
that is a distance of 1 from the Plane's p1 value and
in line with the given point:
>>> on_circle = p.arbitrary_point(t).subs(t, pi/4)
>>> on_circle.distance(p.p1)
1
>>> p.parameter_value(on_circle, t)
{t: pi/4}
Moving the point twice as far from p1 does not change
the parameter value:
>>> off_circle = p.p1 + (on_circle - p.p1)*2
>>> off_circle.distance(p.p1)
2
>>> p.parameter_value(off_circle, t)
{t: pi/4}
If the 2-value parameter is desired, supply the two
parameter symbols and a replacement dictionary will
be returned:
>>> p.parameter_value(on_circle, u, v)
{u: sqrt(10)/10, v: sqrt(10)/30}
>>> p.parameter_value(off_circle, u, v)
{u: sqrt(10)/5, v: sqrt(10)/15}
|
2,504 | def orthographic_projection(X, camera):
camera = camera.reshape((-1, 1, 3))
X_trans = X[:, :, :2] + camera[:, :, 1:]
shape = paddle.shape(X_trans)
X_2d = (camera[:, :, 0] * X_trans.reshape((shape[0], -1))).reshape(shape)
return X_2d
@register | Perform orthographic projection of 3D points X using the camera parameters
Args:
X: size = [B, N, 3]
camera: size = [B, 3]
Returns:
Projected 2D points -- size = [B, N, 2]
| 33 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def orthographic_projection(X, camera):
camera = camera.reshape((-1, 1, 3))
X_trans = X[:, :, :2] + camera[:, :, 1:]
shape = paddle.shape(X_trans)
X_2d = (camera[:, :, 0] * X_trans.reshape((shape[0], -1))).reshape(shape)
return X_2d
@register
```
###Assistant : Perform orthographic projection of 3D points X using the camera parameters
Args:
X: size = [B, N, 3]
camera: size = [B, 3]
Returns:
Projected 2D points -- size = [B, N, 2]
|
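A numpy sketch of the same weak-perspective math, assuming `camera = [scale, tx, ty]` per batch element (the paddle version reshapes to perform the identical broadcast):
```Python
import numpy as np

def orthographic_projection_np(X, camera):
    camera = camera.reshape(-1, 1, 3)          # [B, 1, 3]
    X_trans = X[:, :, :2] + camera[:, :, 1:]   # translate the x/y coordinates
    return camera[:, :, 0:1] * X_trans         # scale down to 2D, shape [B, N, 2]

X = np.random.rand(2, 5, 3)   # B=2 batches of N=5 3D points
cam = np.random.rand(2, 3)    # [scale, tx, ty] per batch
assert orthographic_projection_np(X, cam).shape == (2, 5, 2)
```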
2,505 | def _load_state_id(self, state_id):
remote_calls = [
worker.load_state_stream.remote(state_id) for worker in self.remote_workers
]
return remote_calls
| Loads the object with id `state_id` to all workers. | 9 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _load_state_id(self, state_id):
remote_calls = [
worker.load_state_stream.remote(state_id) for worker in self.remote_workers
]
return remote_calls
```
###Assistant : Loads the object with id `state_id` to all workers.
|
2,506 | def __new__(cls, p1, pt=None, angle=None, **kwargs):
p1 = Point(p1, dim=2)
if pt is not None and angle is None:
try:
p2 = Point(pt, dim=2)
except (NotImplementedError, TypeError, ValueError):
raise ValueError(filldedent('''
The 2nd argument was not a valid Point; if
it was meant to be an angle it should be
given with keyword "angle".'''))
if p1 == p2:
raise ValueError('A Ray requires two distinct points.')
elif angle is not None and pt is None:
# we need to know if the angle is an odd multiple of pi/2
angle = sympify(angle)
c = _pi_coeff(angle)
p2 = None
if c is not None:
if c.is_Rational:
if c.q == 2:
if c.p == 1:
p2 = p1 + Point(0, 1)
elif c.p == 3:
p2 = p1 + Point(0, -1)
elif c.q == 1:
if c.p == 0:
p2 = p1 + Point(1, 0)
elif c.p == 1:
p2 = p1 + Point(-1, 0)
if p2 is None:
c *= S.Pi
else:
c = angle % (2*S.Pi)
if not p2:
m = 2*c/S.Pi
left = And(1 < m, m < 3) # is it in quadrant 2 or 3?
x = Piecewise((-1, left), (Piecewise((0, Eq(m % 1, 0)), (1, True)), True))
y = Piecewise((-tan(c), left), (Piecewise((1, Eq(m, 1)), (-1, Eq(m, 3)), (tan(c), True)), True))
p2 = p1 + Point(x, y)
else:
raise ValueError('A 2nd point or keyword "angle" must be used.')
return LinearEntity2D.__new__(cls, p1, p2, **kwargs)
|
The 2nd argument was not a valid Point; if
it was meant to be an angle it should be
given with keyword "angle". | 23 | 210 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __new__(cls, p1, pt=None, angle=None, **kwargs):
p1 = Point(p1, dim=2)
if pt is not None and angle is None:
try:
p2 = Point(pt, dim=2)
except (NotImplementedError, TypeError, ValueError):
raise ValueError(filldedent('''
The 2nd argument was not a valid Point; if
it was meant to be an angle it should be
given with keyword "angle".'''))
if p1 == p2:
raise ValueError('A Ray requires two distinct points.')
elif angle is not None and pt is None:
# we need to know if the angle is an odd multiple of pi/2
angle = sympify(angle)
c = _pi_coeff(angle)
p2 = None
if c is not None:
if c.is_Rational:
if c.q == 2:
if c.p == 1:
p2 = p1 + Point(0, 1)
elif c.p == 3:
p2 = p1 + Point(0, -1)
elif c.q == 1:
if c.p == 0:
p2 = p1 + Point(1, 0)
elif c.p == 1:
p2 = p1 + Point(-1, 0)
if p2 is None:
c *= S.Pi
else:
c = angle % (2*S.Pi)
if not p2:
m = 2*c/S.Pi
left = And(1 < m, m < 3) # is it in quadrant 2 or 3?
x = Piecewise((-1, left), (Piecewise((0, Eq(m % 1, 0)), (1, True)), True))
y = Piecewise((-tan(c), left), (Piecewise((1, Eq(m, 1)), (-1, Eq(m, 3)), (tan(c), True)), True))
p2 = p1 + Point(x, y)
else:
raise ValueError('A 2nd point or keyword "angle" must be used.')
return LinearEntity2D.__new__(cls, p1, p2, **kwargs)
```
###Assistant :
The 2nd argument was not a valid Point; if
it was meant to be an angle it should be
given with keyword "angle".
|
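A short usage sketch of the resulting constructor (standard sympy `Ray` behavior):
```Python
from sympy import Ray, Point, pi

r1 = Ray(Point(0, 0), Point(1, 1))  # two distinct points
r2 = Ray(Point(0, 0), angle=pi/2)   # pi/2 takes the odd-multiple-of-pi/2 branch
print(r2.p2)                        # Point2D(0, 1)
```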
2,507 | def delegate_command(args, host_state, exclude, require): # type: (EnvironmentConfig, HostState, t.List[str], t.List[str]) -> None
con = host_state.controller_profile.get_origin_controller_connection()
working_directory = host_state.controller_profile.get_working_directory()
host_delegation = not isinstance(args.controller, OriginConfig)
if host_delegation:
if data_context().content.collection:
content_root = os.path.join(working_directory, data_context().content.collection.directory)
else:
content_root = os.path.join(working_directory, 'ansible')
ansible_bin_path = os.path.join(working_directory, 'ansible', 'bin')
with tempfile.NamedTemporaryFile(prefix='ansible-source-', suffix='.tgz') as payload_file:
create_payload(args, payload_file.name)
con.extract_archive(chdir=working_directory, src=payload_file)
else:
content_root = working_directory
ansible_bin_path = ANSIBLE_BIN_PATH
command = generate_command(args, host_state.controller_profile.python, ansible_bin_path, content_root, exclude, require)
if isinstance(con, SshConnection):
ssh = con.settings
else:
ssh = None
options = []
if isinstance(args, IntegrationConfig) and args.controller.is_managed and all(target.is_managed for target in args.targets):
if not args.allow_destructive:
options.append('--allow-destructive')
with support_container_context(args, ssh) as containers: # type: t.Optional[ContainerDatabase]
if containers:
options.extend(['--containers', json.dumps(containers.to_dict())])
# Run unit tests unprivileged to prevent stray writes to the source tree.
# Also disconnect from the network once requirements have been installed.
if isinstance(args, UnitsConfig) and isinstance(con, DockerConnection):
pytest_user = 'pytest'
writable_dirs = [
os.path.join(content_root, ResultType.JUNIT.relative_path),
os.path.join(content_root, ResultType.COVERAGE.relative_path),
]
con.run(['mkdir', '-p'] + writable_dirs)
con.run(['chmod', '777'] + writable_dirs)
con.run(['chmod', '755', working_directory])
con.run(['chmod', '644', os.path.join(content_root, args.metadata_path)])
con.run(['useradd', pytest_user, '--create-home'])
con.run(insert_options(command, options + ['--requirements-mode', 'only']))
container = con.inspect()
networks = container.get_network_names()
if networks is not None:
for network in networks:
con.disconnect_network(network)
else:
display.warning('Network disconnection is not supported (this is normal under podman). '
'Tests will not be isolated from the network. Network-related tests may misbehave.')
options.extend(['--requirements-mode', 'skip'])
con.user = pytest_user
success = False
try:
con.run(insert_options(command, options))
success = True
finally:
if host_delegation:
download_results(args, con, content_root, success)
| Delegate execution based on the provided host state. | 8 | 231 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def delegate_command(args, host_state, exclude, require): # type: (EnvironmentConfig, HostState, t.List[str], t.List[str]) -> None
con = host_state.controller_profile.get_origin_controller_connection()
working_directory = host_state.controller_profile.get_working_directory()
host_delegation = not isinstance(args.controller, OriginConfig)
if host_delegation:
if data_context().content.collection:
content_root = os.path.join(working_directory, data_context().content.collection.directory)
else:
content_root = os.path.join(working_directory, 'ansible')
ansible_bin_path = os.path.join(working_directory, 'ansible', 'bin')
with tempfile.NamedTemporaryFile(prefix='ansible-source-', suffix='.tgz') as payload_file:
create_payload(args, payload_file.name)
con.extract_archive(chdir=working_directory, src=payload_file)
else:
content_root = working_directory
ansible_bin_path = ANSIBLE_BIN_PATH
command = generate_command(args, host_state.controller_profile.python, ansible_bin_path, content_root, exclude, require)
if isinstance(con, SshConnection):
ssh = con.settings
else:
ssh = None
options = []
if isinstance(args, IntegrationConfig) and args.controller.is_managed and all(target.is_managed for target in args.targets):
if not args.allow_destructive:
options.append('--allow-destructive')
with support_container_context(args, ssh) as containers: # type: t.Optional[ContainerDatabase]
if containers:
options.extend(['--containers', json.dumps(containers.to_dict())])
# Run unit tests unprivileged to prevent stray writes to the source tree.
# Also disconnect from the network once requirements have been installed.
if isinstance(args, UnitsConfig) and isinstance(con, DockerConnection):
pytest_user = 'pytest'
writable_dirs = [
os.path.join(content_root, ResultType.JUNIT.relative_path),
os.path.join(content_root, ResultType.COVERAGE.relative_path),
]
con.run(['mkdir', '-p'] + writable_dirs)
con.run(['chmod', '777'] + writable_dirs)
con.run(['chmod', '755', working_directory])
con.run(['chmod', '644', os.path.join(content_root, args.metadata_path)])
con.run(['useradd', pytest_user, '--create-home'])
con.run(insert_options(command, options + ['--requirements-mode', 'only']))
container = con.inspect()
networks = container.get_network_names()
if networks is not None:
for network in networks:
con.disconnect_network(network)
else:
display.warning('Network disconnection is not supported (this is normal under podman). '
'Tests will not be isolated from the network. Network-related tests may misbehave.')
options.extend(['--requirements-mode', 'skip'])
con.user = pytest_user
success = False
try:
con.run(insert_options(command, options))
success = True
finally:
if host_delegation:
download_results(args, con, content_root, success)
```
###Assistant : Delegate execution based on the provided host state.
|
2,508 | def test_in_predicate_requires_an_iterable(tmp_path, engine, filter_value):
path = tmp_path / "gh_8720_pandas.parquet"
df = pd.DataFrame(
{"A": [1, 2, 3, 4], "B": [1, 1, 2, 2]},
)
df.to_parquet(path, engine=engine)
with pytest.raises(TypeError, match="Value of 'in' filter"):
dd.read_parquet(path, engine=engine, filters=filter_value)
# pandas to_parquet outputs a single file, dask outputs a folder with global
# metadata that changes the filtering code path
ddf = dd.from_pandas(df, npartitions=2)
path = tmp_path / "gh_8720_dask.parquet"
ddf.to_parquet(path, engine=engine)
with pytest.raises(TypeError, match="Value of 'in' filter"):
dd.read_parquet(path, engine=engine, filters=filter_value)
| Regression test for https://github.com/dask/dask/issues/8720 | 4 | 75 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_in_predicate_requires_an_iterable(tmp_path, engine, filter_value):
path = tmp_path / "gh_8720_pandas.parquet"
df = pd.DataFrame(
{"A": [1, 2, 3, 4], "B": [1, 1, 2, 2]},
)
df.to_parquet(path, engine=engine)
with pytest.raises(TypeError, match="Value of 'in' filter"):
dd.read_parquet(path, engine=engine, filters=filter_value)
# pandas to_parquet outputs a single file, dask outputs a folder with global
# metadata that changes the filtering code path
ddf = dd.from_pandas(df, npartitions=2)
path = tmp_path / "gh_8720_dask.parquet"
ddf.to_parquet(path, engine=engine)
with pytest.raises(TypeError, match="Value of 'in' filter"):
dd.read_parquet(path, engine=engine, filters=filter_value)
```
###Assistant : Regression test for https://github.com/dask/dask/issues/8720
|
2,509 | def only_targets(self, target_type): # type: (t.Type[THostConfig]) -> t.List[THostConfig]
if not self.targets:
raise Exception('There must be one or more targets.')
assert type_guard(self.targets, target_type)
return t.cast(t.List[THostConfig], self.targets)
|
Return a list of target host configurations.
Requires that there are one or more targets, all of the specified type.
| 19 | 25 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def only_targets(self, target_type): # type: (t.Type[THostConfig]) -> t.List[THostConfig]
if not self.targets:
raise Exception('There must be one or more targets.')
assert type_guard(self.targets, target_type)
return t.cast(t.List[THostConfig], self.targets)
```
###Assistant :
Return a list of target host configurations.
Requires that there are one or more targets, all of the specified type.
|
2,510 | def _get_animated_artists(self):
return tuple([a for ax_ in self.ax.get_figure().get_axes()
for a in ax_.get_children()
if a.get_animated() and a not in self.artists])
|
Convenience method to get all animated artists of a figure, except
those already present in self.artists. 'z_order' is ignored.
| 19 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_animated_artists(self):
return tuple([a for ax_ in self.ax.get_figure().get_axes()
for a in ax_.get_children()
if a.get_animated() and a not in self.artists])
```
###Assistant :
Convenience method to get all animated artists of a figure, except
those already present in self.artists. 'z_order' is ignored.
|
2,511 | def new_gridlines(self, ax):
gridlines = GridlinesCollection(
None, transform=ax.transData, colors=mpl.rcParams['grid.color'],
linestyles=mpl.rcParams['grid.linestyle'],
linewidths=mpl.rcParams['grid.linewidth'])
ax._set_artist_props(gridlines)
gridlines.set_grid_helper(self)
ax.axes._set_artist_props(gridlines)
# gridlines.set_clip_path(self.axes.patch)
# set_clip_path needs to be deferred until after Axes.cla is completed.
# It is done inside the cla.
return gridlines
|
Create and return a new GridlineCollection instance.
*which* : "major" or "minor"
*axis* : "both", "x" or "y"
| 18 | 35 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def new_gridlines(self, ax):
gridlines = GridlinesCollection(
None, transform=ax.transData, colors=mpl.rcParams['grid.color'],
linestyles=mpl.rcParams['grid.linestyle'],
linewidths=mpl.rcParams['grid.linewidth'])
ax._set_artist_props(gridlines)
gridlines.set_grid_helper(self)
ax.axes._set_artist_props(gridlines)
# gridlines.set_clip_path(self.axes.patch)
# set_clip_path needs to be deferred until after Axes.cla is completed.
# It is done inside the cla.
return gridlines
```
###Assistant :
Create and return a new GridlineCollection instance.
*which* : "major" or "minor"
*axis* : "both", "x" or "y"
|
2,512 | def duplicates_removed(it, already_seen=()):
lst = []
seen = set()
for i in it:
if i in seen or i in already_seen:
continue
lst.append(i)
seen.add(i)
return lst
|
Returns a list with duplicates removed from the iterable `it`.
Order is preserved.
| 13 | 26 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def duplicates_removed(it, already_seen=()):
lst = []
seen = set()
for i in it:
if i in seen or i in already_seen:
continue
lst.append(i)
seen.add(i)
return lst
```
###Assistant :
Returns a list with duplicates removed from the iterable `it`.
Order is preserved.
|
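A quick usage sketch:
```Python
print(duplicates_removed([3, 1, 3, 2, 1]))                     # [3, 1, 2]
print(duplicates_removed([3, 1, 3, 2, 1], already_seen=(1,)))  # [3, 2]
```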
2,513 | def masked_all(shape, dtype=float):
a = masked_array(np.empty(shape, dtype),
mask=np.ones(shape, make_mask_descr(dtype)))
return a
|
Empty masked array with all elements masked.
Return an empty masked array of the given shape and dtype, where all the
data are masked.
Parameters
----------
shape : int or tuple of ints
Shape of the required MaskedArray, e.g., ``(2, 3)`` or ``2``.
dtype : dtype, optional
Data type of the output.
Returns
-------
a : MaskedArray
A masked array with all data masked.
See Also
--------
masked_all_like : Empty masked array modelled on an existing array.
Examples
--------
>>> import numpy.ma as ma
>>> ma.masked_all((3, 3))
masked_array(
data=[[--, --, --],
[--, --, --],
[--, --, --]],
mask=[[ True, True, True],
[ True, True, True],
[ True, True, True]],
fill_value=1e+20,
dtype=float64)
The `dtype` parameter defines the underlying data type.
>>> a = ma.masked_all((3, 3))
>>> a.dtype
dtype('float64')
>>> a = ma.masked_all((3, 3), dtype=np.int32)
>>> a.dtype
dtype('int32')
| 136 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def masked_all(shape, dtype=float):
a = masked_array(np.empty(shape, dtype),
mask=np.ones(shape, make_mask_descr(dtype)))
return a
```
###Assistant :
Empty masked array with all elements masked.
Return an empty masked array of the given shape and dtype, where all the
data are masked.
Parameters
----------
shape : int or tuple of ints
Shape of the required MaskedArray, e.g., ``(2, 3)`` or ``2``.
dtype : dtype, optional
Data type of the output.
Returns
-------
a : MaskedArray
A masked array with all data masked.
See Also
--------
masked_all_like : Empty masked array modelled on an existing array.
Examples
--------
>>> import numpy.ma as ma
>>> ma.masked_all((3, 3))
masked_array(
data=[[--, --, --],
[--, --, --],
[--, --, --]],
mask=[[ True, True, True],
[ True, True, True],
[ True, True, True]],
fill_value=1e+20,
dtype=float64)
The `dtype` parameter defines the underlying data type.
>>> a = ma.masked_all((3, 3))
>>> a.dtype
dtype('float64')
>>> a = ma.masked_all((3, 3), dtype=np.int32)
>>> a.dtype
dtype('int32')
|
2,514 | def closeness_centrality(G, u=None, distance=None, wf_improved=True):
if G.is_directed():
G = G.reverse() # create a reversed graph view
if distance is not None:
# use Dijkstra's algorithm with specified attribute as edge weight
path_length = functools.partial(
nx.single_source_dijkstra_path_length, weight=distance
)
else:
path_length = nx.single_source_shortest_path_length
if u is None:
nodes = G.nodes
else:
nodes = [u]
closeness_centrality = {}
for n in nodes:
sp = path_length(G, n)
totsp = sum(sp.values())
len_G = len(G)
_closeness_centrality = 0.0
if totsp > 0.0 and len_G > 1:
_closeness_centrality = (len(sp) - 1.0) / totsp
# normalize to number of nodes-1 in connected part
if wf_improved:
s = (len(sp) - 1.0) / (len_G - 1)
_closeness_centrality *= s
closeness_centrality[n] = _closeness_centrality
if u is not None:
return closeness_centrality[u]
else:
return closeness_centrality
@not_implemented_for("directed") | Compute closeness centrality for nodes.
Closeness centrality [1]_ of a node `u` is the reciprocal of the
average shortest path distance to `u` over all `n-1` reachable nodes.
.. math::
C(u) = \frac{n - 1}{\sum_{v=1}^{n-1} d(v, u)},
where `d(v, u)` is the shortest-path distance between `v` and `u`,
and `n-1` is the number of nodes reachable from `u`. Notice that the
closeness distance function computes the incoming distance to `u`
for directed graphs. To use outward distance, act on `G.reverse()`.
Notice that higher values of closeness indicate higher centrality.
Wasserman and Faust propose an improved formula for graphs with
more than one connected component. The result is "a ratio of the
fraction of actors in the group who are reachable, to the average
distance" from the reachable actors [2]_. You might think this
scale factor is inverted but it is not. As is, nodes from small
components receive a smaller closeness value. Letting `N` denote
the number of nodes in the graph,
.. math::
C_{WF}(u) = \frac{n-1}{N-1} \frac{n - 1}{\sum_{v=1}^{n-1} d(v, u)},
Parameters
----------
G : graph
A NetworkX graph
u : node, optional
Return only the value for node u
distance : edge attribute key, optional (default=None)
Use the specified edge attribute as the edge distance in shortest
path calculations
wf_improved : bool, optional (default=True)
If True, scale by the fraction of nodes reachable. This gives the
Wasserman and Faust improved formula. For single component graphs
it is the same as the original formula.
Returns
-------
nodes : dictionary
Dictionary of nodes with closeness centrality as the value.
Examples
--------
>>> G = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)])
>>> nx.closeness_centrality(G)
{0: 1.0, 1: 1.0, 2: 0.75, 3: 0.75}
See Also
--------
betweenness_centrality, load_centrality, eigenvector_centrality,
degree_centrality, incremental_closeness_centrality
Notes
-----
The closeness centrality is normalized to `(n-1)/(|G|-1)` where
`n` is the number of nodes in the connected part of graph
containing the node. If the graph is not completely connected,
this algorithm computes the closeness centrality for each
connected part separately, scaled by that part's size.
If the 'distance' keyword is set to an edge attribute key then the
shortest-path length will be computed using Dijkstra's algorithm with
that edge attribute as the edge weight.
The closeness centrality uses *inward* distance to a node, not outward.
If you want to use outward distances, apply the function to `G.reverse()`.
In NetworkX 2.2 and earlier a bug caused Dijkstra's algorithm to use the
outward distance rather than the inward distance. If you use a 'distance'
keyword and a DiGraph, your results will change between v2.2 and v2.3.
References
----------
.. [1] Linton C. Freeman: Centrality in networks: I.
Conceptual clarification. Social Networks 1:215-239, 1979.
https://doi.org/10.1016/0378-8733(78)90021-7
.. [2] pg. 201 of Wasserman, S. and Faust, K.,
Social Network Analysis: Methods and Applications, 1994,
Cambridge University Press.
| 467 | 125 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def closeness_centrality(G, u=None, distance=None, wf_improved=True):
if G.is_directed():
G = G.reverse() # create a reversed graph view
if distance is not None:
# use Dijkstra's algorithm with specified attribute as edge weight
path_length = functools.partial(
nx.single_source_dijkstra_path_length, weight=distance
)
else:
path_length = nx.single_source_shortest_path_length
if u is None:
nodes = G.nodes
else:
nodes = [u]
closeness_centrality = {}
for n in nodes:
sp = path_length(G, n)
totsp = sum(sp.values())
len_G = len(G)
_closeness_centrality = 0.0
if totsp > 0.0 and len_G > 1:
_closeness_centrality = (len(sp) - 1.0) / totsp
# normalize to number of nodes-1 in connected part
if wf_improved:
s = (len(sp) - 1.0) / (len_G - 1)
_closeness_centrality *= s
closeness_centrality[n] = _closeness_centrality
if u is not None:
return closeness_centrality[u]
else:
return closeness_centrality
@not_implemented_for("directed")
```
###Assistant : Compute closeness centrality for nodes.
Closeness centrality [1]_ of a node `u` is the reciprocal of the
average shortest path distance to `u` over all `n-1` reachable nodes.
.. math::
C(u) = \frac{n - 1}{\sum_{v=1}^{n-1} d(v, u)},
where `d(v, u)` is the shortest-path distance between `v` and `u`,
and `n-1` is the number of nodes reachable from `u`. Notice that the
closeness distance function computes the incoming distance to `u`
for directed graphs. To use outward distance, act on `G.reverse()`.
Notice that higher values of closeness indicate higher centrality.
Wasserman and Faust propose an improved formula for graphs with
more than one connected component. The result is "a ratio of the
fraction of actors in the group who are reachable, to the average
distance" from the reachable actors [2]_. You might think this
scale factor is inverted but it is not. As is, nodes from small
components receive a smaller closeness value. Letting `N` denote
the number of nodes in the graph,
.. math::
C_{WF}(u) = \frac{n-1}{N-1} \frac{n - 1}{\sum_{v=1}^{n-1} d(v, u)},
Parameters
----------
G : graph
A NetworkX graph
u : node, optional
Return only the value for node u
distance : edge attribute key, optional (default=None)
Use the specified edge attribute as the edge distance in shortest
path calculations
wf_improved : bool, optional (default=True)
If True, scale by the fraction of nodes reachable. This gives the
Wasserman and Faust improved formula. For single component graphs
it is the same as the original formula.
Returns
-------
nodes : dictionary
Dictionary of nodes with closeness centrality as the value.
Examples
--------
>>> G = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)])
>>> nx.closeness_centrality(G)
{0: 1.0, 1: 1.0, 2: 0.75, 3: 0.75}
See Also
--------
betweenness_centrality, load_centrality, eigenvector_centrality,
degree_centrality, incremental_closeness_centrality
Notes
-----
The closeness centrality is normalized to `(n-1)/(|G|-1)` where
`n` is the number of nodes in the connected part of graph
containing the node. If the graph is not completely connected,
this algorithm computes the closeness centrality for each
connected part separately, scaled by that part's size.
If the 'distance' keyword is set to an edge attribute key then the
shortest-path length will be computed using Dijkstra's algorithm with
that edge attribute as the edge weight.
The closeness centrality uses *inward* distance to a node, not outward.
If you want to use outward distances, apply the function to `G.reverse()`.
In NetworkX 2.2 and earlier a bug caused Dijkstra's algorithm to use the
outward distance rather than the inward distance. If you use a 'distance'
keyword and a DiGraph, your results will change between v2.2 and v2.3.
References
----------
.. [1] Linton C. Freeman: Centrality in networks: I.
Conceptual clarification. Social Networks 1:215-239, 1979.
https://doi.org/10.1016/0378-8733(78)90021-7
.. [2] pg. 201 of Wasserman, S. and Faust, K.,
Social Network Analysis: Methods and Applications, 1994,
Cambridge University Press.
|
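As a worked check of the doctest above: from node 2 the shortest-path distances are d(0, 2) = 1, d(1, 2) = 1 and d(3, 2) = 2, so C(2) = (4 - 1)/(1 + 1 + 2) = 0.75, and on this connected graph the Wasserman-Faust factor (4 - 1)/(4 - 1) = 1 leaves it unchanged, matching the output shown.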
2,515 | async def _remove_old_push_actions_that_have_rotated(self) -> None:
# We want to clear out anything that is older than a day that *has* already
# been rotated.
rotated_upto_stream_ordering = await self.db_pool.simple_select_one_onecol(
table="event_push_summary_stream_ordering",
keyvalues={},
retcol="stream_ordering",
)
max_stream_ordering_to_delete = min(
rotated_upto_stream_ordering, self.stream_ordering_day_ago
)
| Clear out old push actions that have been summarised. | 9 | 38 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def _remove_old_push_actions_that_have_rotated(self) -> None:
# We want to clear out anything that is older than a day that *has* already
# been rotated.
rotated_upto_stream_ordering = await self.db_pool.simple_select_one_onecol(
table="event_push_summary_stream_ordering",
keyvalues={},
retcol="stream_ordering",
)
max_stream_ordering_to_delete = min(
rotated_upto_stream_ordering, self.stream_ordering_day_ago
)
```
###Assistant : Clear out old push actions that have been summarised.
|
2,516 | def test_indent():
multiline_string = """test
test1
test2
test3"""
indented_multiline_string = """    test
    test1
    test2
    test3"""
assert indented_multiline_string == _indent(multiline_string, 4)
| Assert that indenting a multiline string by 4 spaces prepends 4 spaces before each new line. | 23 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_indent():
multiline_string = """test
test1
test2
test3"""
indented_multiline_string = """    test
    test1
    test2
    test3"""
assert indented_multiline_string == _indent(multiline_string, 4)
```
###Assistant : Assert that indenting a multiline string by 4 spaces prepends 4 spaces before each new line.
|
2,517 | def update_inputs_outputs_dims(model, input_dims, output_dims): # type: (ModelProto, Dict[Text, List[Any]], Dict[Text, List[Any]]) -> ModelProto
dim_param_set = set() # type: Set[Text]
|
This function updates the dimension sizes of the model's inputs and outputs to the values
provided in input_dims and output_dims. If the dim value provided is negative, a unique dim_param
will be set for that dimension.
Example: if we have the following shape for inputs and outputs:
shape(input_1) = ('b', 3, 'w', 'h')
shape(input_2) = ('b', 4)
and shape(output) = ('b', 'd', 5)
The parameters can be provided as:
input_dims = {
"input_1": ['b', 3, 'w', 'h'],
"input_2": ['b', 4],
}
output_dims = {
"output": ['b', -1, 5]
}
Putting it together:
model = onnx.load('model.onnx')
updated_model = update_inputs_outputs_dims(model, input_dims, output_dims)
onnx.save(updated_model, 'model.onnx')
| 102 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def update_inputs_outputs_dims(model, input_dims, output_dims): # type: (ModelProto, Dict[Text, List[Any]], Dict[Text, List[Any]]) -> ModelProto
dim_param_set = set() # type: Set[Text]
```
###Assistant :
This function updates the dimension sizes of the model's inputs and outputs to the values
provided in input_dims and output_dims. If the dim value provided is negative, a unique dim_param
will be set for that dimension.
Example: if we have the following shape for inputs and outputs:
shape(input_1) = ('b', 3, 'w', 'h')
shape(input_2) = ('b', 4)
and shape(output) = ('b', 'd', 5)
The parameters can be provided as:
input_dims = {
"input_1": ['b', 3, 'w', 'h'],
"input_2": ['b', 4],
}
output_dims = {
"output": ['b', -1, 5]
}
Putting it together:
model = onnx.load('model.onnx')
updated_model = update_inputs_outputs_dims(model, input_dims, output_dims)
onnx.save(updated_model, 'model.onnx')
|
2,518 | def get_aliased_columns(aliased_columns, model_alias, targets, mode=None):
for col in targets:
if mode == 'input':
if str(col.parts[0]) != model_alias and col.alias is not None:
aliased_columns[aliased_columns.index(col.parts[-1])] = str(col.alias)
if mode == 'output':
if str(col.parts[0]) == model_alias and col.alias is not None:
aliased_columns[aliased_columns.index('prediction')] = str(col.alias)
return aliased_columns | This method assumes mdb_sql will alert if there are two columns with the same alias | 15 | 43 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_aliased_columns(aliased_columns, model_alias, targets, mode=None):
for col in targets:
if mode == 'input':
if str(col.parts[0]) != model_alias and col.alias is not None:
aliased_columns[aliased_columns.index(col.parts[-1])] = str(col.alias)
if mode == 'output':
if str(col.parts[0]) == model_alias and col.alias is not None:
aliased_columns[aliased_columns.index('prediction')] = str(col.alias)
return aliased_columns
```
###Assistant : This method assumes mdb_sql will alert if there are two columns with the same alias
|
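A minimal sketch of the expected call shape, using a namedtuple as a hypothetical stand-in for the parsed SQL column objects (in mindsdb these are mindsdb_sql Identifier-like objects with `parts` and `alias`):
```Python
from collections import namedtuple

Col = namedtuple("Col", ["parts", "alias"])  # hypothetical parsed-column stand-in

cols = [
    Col(parts=["t", "sqft"], alias="area"),             # input column, aliased
    Col(parts=["model", "prediction"], alias="price"),  # model output, aliased
]

print(get_aliased_columns(["sqft"], "model", cols, mode="input"))         # ['area']
print(get_aliased_columns(["prediction"], "model", cols, mode="output"))  # ['price']
```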
2,519 | def get_model_urls(app_label, model_name):
paths = []
# Retrieve registered views for this model
try:
views = registry['views'][app_label][model_name]
except KeyError:
# No views have been registered for this model
views = []
for view in views:
# Import the view class or function
callable = import_string(view['path'])
if issubclass(callable, View):
callable = callable.as_view()
# Create a path to the view
paths.append(
path(f"{view['name']}/", callable, name=f"{model_name}_{view['name']}", kwargs=view['kwargs'])
)
return paths
|
Return a list of URL paths for detail views registered to the given model.
Args:
app_label: App/plugin name
model_name: Model name
| 21 | 66 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_model_urls(app_label, model_name):
paths = []
# Retrieve registered views for this model
try:
views = registry['views'][app_label][model_name]
except KeyError:
# No views have been registered for this model
views = []
for view in views:
# Import the view class or function
callable = import_string(view['path'])
if issubclass(callable, View):
callable = callable.as_view()
# Create a path to the view
paths.append(
path(f"{view['name']}/", callable, name=f"{model_name}_{view['name']}", kwargs=view['kwargs'])
)
return paths
```
###Assistant :
Return a list of URL paths for detail views registered to the given model.
Args:
app_label: App/plugin name
model_name: Model name
|
2,520 | def head(self, url, **kwargs):
kwargs.setdefault("allow_redirects", False)
return self.request("HEAD", url, **kwargs)
| Sends a HEAD request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:rtype: requests.Response
| 24 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def head(self, url, **kwargs):
kwargs.setdefault("allow_redirects", False)
return self.request("HEAD", url, **kwargs)
```
###Assistant : Sends a HEAD request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:rtype: requests.Response
|
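A typical call through the public requests API, which routes to this method:
```Python
import requests

with requests.Session() as s:
    resp = s.head("https://example.com/")  # hypothetical URL
    print(resp.status_code, resp.headers.get("Content-Type"))
    print(resp.history)  # [] -- redirects are not followed by default for HEAD
```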
2,521 | async def get_and_submit_flow_runs(self) -> List[FlowRun]:
if not self.started:
raise RuntimeError("Agent is not started. Use `async with OrionAgent()...`")
self.logger.debug("Checking for flow runs...")
submittable_runs = await self.client.read_flow_runs(
sort=FlowRunSort.NEXT_SCHEDULED_START_TIME_ASC,
flow_run_filter=self.flow_run_query_filter(),
)
for flow_run in submittable_runs:
self.logger.info(f"Submitting flow run '{flow_run.id}'")
self.submitting_flow_run_ids.add(flow_run.id)
self.task_group.start_soon(
self.submit_run,
flow_run,
)
return submittable_runs
|
Queries for scheduled flow runs and submits them for execution in parallel
| 12 | 43 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def get_and_submit_flow_runs(self) -> List[FlowRun]:
if not self.started:
raise RuntimeError("Agent is not started. Use `async with OrionAgent()...`")
self.logger.debug("Checking for flow runs...")
submittable_runs = await self.client.read_flow_runs(
sort=FlowRunSort.NEXT_SCHEDULED_START_TIME_ASC,
flow_run_filter=self.flow_run_query_filter(),
)
for flow_run in submittable_runs:
self.logger.info(f"Submitting flow run '{flow_run.id}'")
self.submitting_flow_run_ids.add(flow_run.id)
self.task_group.start_soon(
self.submit_run,
flow_run,
)
return submittable_runs
```
###Assistant :
Queries for scheduled flow runs and submits them for execution in parallel
|
2,522 | def get_quoted_string(value):
quoted_string = QuotedString()
if value and value[0] in CFWS_LEADER:
token, value = get_cfws(value)
quoted_string.append(token)
token, value = get_bare_quoted_string(value)
quoted_string.append(token)
if value and value[0] in CFWS_LEADER:
token, value = get_cfws(value)
quoted_string.append(token)
return quoted_string, value
| quoted-string = [CFWS] <bare-quoted-string> [CFWS]
'bare-quoted-string' is an intermediate class defined by this
parser and not by the RFC grammar. It is the quoted string
without any attached CFWS.
| 29 | 35 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_quoted_string(value):
quoted_string = QuotedString()
if value and value[0] in CFWS_LEADER:
token, value = get_cfws(value)
quoted_string.append(token)
token, value = get_bare_quoted_string(value)
quoted_string.append(token)
if value and value[0] in CFWS_LEADER:
token, value = get_cfws(value)
quoted_string.append(token)
return quoted_string, value
```
###Assistant : quoted-string = [CFWS] <bare-quoted-string> [CFWS]
'bare-quoted-string' is an intermediate class defined by this
parser and not by the RFC grammar. It is the quoted string
without any attached CFWS.
|
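This parser lives in CPython's private `email._header_value_parser` module, so the sketch below leans on private API and the exact token attributes are an implementation detail:
```Python
from email._header_value_parser import get_quoted_string

token, rest = get_quoted_string(' "hello world" ; charset=utf-8')
print(token.token_type)  # 'quoted-string'
print(rest)              # '; charset=utf-8' (leading/trailing CFWS consumed)
```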
2,523 | def test_defined_keyword_exist_in_schema(self, keyword, discovered_catalog):
schemas_errors = []
for stream_name, stream in discovered_catalog.items():
check_result = find_keyword_schema(stream.json_schema, key=keyword)
if check_result:
schemas_errors.append(stream_name)
assert not schemas_errors, f"Found not allowed `{keyword}` keyword for selected streams: {schemas_errors}."
| Checking for the presence of disallowed keywords within each JSON schema
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_defined_keyword_exist_in_schema(self, keyword, discovered_catalog):
schemas_errors = []
for stream_name, stream in discovered_catalog.items():
check_result = find_keyword_schema(stream.json_schema, key=keyword)
if check_result:
schemas_errors.append(stream_name)
assert not schemas_errors, f"Found not allowed `{keyword}` keyword for selected streams: {schemas_errors}."
```
###Assistant : Checking for the presence of disallowed keywords within each JSON schema
|
2,524 | def normalize_path_patterns(patterns):
patterns = [os.path.normcase(p) for p in patterns]
dir_suffixes = {"%s*" % path_sep for path_sep in {"/", os.sep}}
norm_patterns = []
for pattern in patterns:
for dir_suffix in dir_suffixes:
if pattern.endswith(dir_suffix):
norm_patterns.append(pattern[: -len(dir_suffix)])
break
else:
norm_patterns.append(pattern)
return norm_patterns
| Normalize an iterable of glob style patterns based on OS. | 10 | 39 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def normalize_path_patterns(patterns):
patterns = [os.path.normcase(p) for p in patterns]
dir_suffixes = {"%s*" % path_sep for path_sep in {"/", os.sep}}
norm_patterns = []
for pattern in patterns:
for dir_suffix in dir_suffixes:
if pattern.endswith(dir_suffix):
norm_patterns.append(pattern[: -len(dir_suffix)])
break
else:
norm_patterns.append(pattern)
return norm_patterns
```
###Assistant : Normalize an iterable of glob style patterns based on OS.
|
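A quick sketch of the behavior on a POSIX system, where `os.path.normcase` is a no-op and `os.sep` is `/`:
```Python
print(normalize_path_patterns(["migrations/*", "apps", "data/*"]))
# -> ['migrations', 'apps', 'data']
```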
2,525 | def get_protobuf_schema() -> GeneratedProtocolMessageType:
return ErrorResponseMessage_PB
| Return the type of protobuf object which stores a class of this type
As a part of serialization and deserialization, we need the ability to
look up the protobuf object type directly from the object type. This
static method allows us to do this.
Importantly, this method is also used to create the reverse lookup ability within
the metaclass of Serializable. In the metaclass, it calls this method and then
it takes whatever type is returned from this method and adds an attribute to it
with the type of this class attached to it. See the MetaSerializable class for
details.
:return: the type of protobuf object which corresponds to this class.
:rtype: GeneratedProtocolMessageType
| 112 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_protobuf_schema() -> GeneratedProtocolMessageType:
return ErrorResponseMessage_PB
```
###Assistant : Return the type of protobuf object which stores a class of this type
As a part of serialization and deserialization, we need the ability to
lookup the protobuf object type directly from the object type. This
static method allows us to do this.
Importantly, this method is also used to create the reverse lookup ability within
the metaclass of Serializable. In the metaclass, it calls this method and then
it takes whatever type is returned from this method and adds an attribute to it
with the type of this class attached to it. See the MetaSerializable class for
details.
:return: the type of protobuf object which corresponds to this class.
:rtype: GeneratedProtocolMessageType
|
2,526 | def test_remote_media_cache_retention(self) -> None:
# Advance 31 days (in seconds)
self.reactor.advance(31 * 24 * 60 * 60)
# Check that media has been correctly purged.
# Local media should be unaffected.
# Remote media accessed <30 days ago should still exist.
self._assert_if_mxc_uris_purged(
purged=[
(self.remote_server_name, self.remote_not_recently_accessed_media),
],
not_purged=[
(self.remote_server_name, self.remote_recently_accessed_media),
(self.hs.config.server.server_name, self.local_recently_accessed_media),
(
self.hs.config.server.server_name,
self.local_not_recently_accessed_media,
),
(self.hs.config.server.server_name, self.local_never_accessed_media),
],
)
|
Tests that entries from the remote media cache that have not been accessed
recently are purged, while local media is unaffected.
| 21 | 59 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_remote_media_cache_retention(self) -> None:
# Advance 31 days (in seconds)
self.reactor.advance(31 * 24 * 60 * 60)
# Check that media has been correctly purged.
# Local media should be unaffected.
# Remote media accessed <30 days ago should still exist.
self._assert_if_mxc_uris_purged(
purged=[
(self.remote_server_name, self.remote_not_recently_accessed_media),
],
not_purged=[
(self.remote_server_name, self.remote_recently_accessed_media),
(self.hs.config.server.server_name, self.local_recently_accessed_media),
(
self.hs.config.server.server_name,
self.local_not_recently_accessed_media,
),
(self.hs.config.server.server_name, self.local_never_accessed_media),
],
)
```
###Assistant :
Tests that entries from the remote media cache that have not been accessed
recently are purged, while local media is unaffected.
|
2,527 | def setmodulation(self, modu):
# type: (int) -> bool
# According to https://nmap.org/npcap/guide/npcap-devguide.html#npcap-feature-dot11 # noqa: E501
self._check_npcap_requirement()
_modus = {
0: "dsss",
1: "fhss",
2: "irbaseband",
3: "ofdm",
4: "hrdss",
5: "erp",
6: "ht",
7: "vht",
8: "ihv",
9: "mimo-ofdm",
10: "mimo-ofdm",
}
m = _modus.get(modu, "unknown") if isinstance(modu, int) else modu
return self._npcap_set("modu", str(m))
| Set the interface modulation. It can be:
- 0: dsss
- 1: fhss
- 2: irbaseband
- 3: ofdm
- 4: hrdss
- 5: erp
- 6: ht
- 7: vht
- 8: ihv
- 9: mimo-ofdm
- 10: mimo-ofdm
- the value directly
Only available with Npcap. | 48 | 54 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def setmodulation(self, modu):
# type: (int) -> bool
# According to https://nmap.org/npcap/guide/npcap-devguide.html#npcap-feature-dot11 # noqa: E501
self._check_npcap_requirement()
_modus = {
0: "dsss",
1: "fhss",
2: "irbaseband",
3: "ofdm",
4: "hrdss",
5: "erp",
6: "ht",
7: "vht",
8: "ihv",
9: "mimo-ofdm",
10: "mimo-ofdm",
}
m = _modus.get(modu, "unknown") if isinstance(modu, int) else modu
return self._npcap_set("modu", str(m))
```
###Assistant : Set the interface modulation. It can be:
- 0: dsss
- 1: fhss
- 2: irbaseband
- 3: ofdm
- 4: hrdss
- 5: erp
- 6: ht
- 7: vht
- 8: ihv
- 9: mimo-ofdm
- 10: mimo-ofdm
- the value directly
Only available with Npcap.
|
2,528 | def _on_connection_error(self, connection, exception):
log.error("Failed to connect", exc_info=True)
|
Invoked by pika on a connection error
:param connection:
:param exception:
:return:
| 13 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _on_connection_error(self, connection, exception):
log.error("Failed to connect", exc_info=True)
```
###Assistant :
Invoked by pika on a connection error
:param connection:
:param exception:
:return:
|
2,529 | def is_monotonic_decreasing(self) -> bool:
# monotonic decreasing if and only if reverse is monotonic increasing
return self[::-1].is_monotonic_increasing
|
Return a boolean if the values are equal or decreasing.
| 10 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def is_monotonic_decreasing(self) -> bool:
# monotonic decreasing if and only if reverse is monotonic increasing
return self[::-1].is_monotonic_increasing
```
###Assistant :
Return a boolean if the values are equal or decreasing.
|
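The same property is exposed on pandas' public `Index`; a tiny check of the reverse-and-test trick:
```Python
import pandas as pd

idx = pd.Index([3, 2, 2, 1])
print(idx.is_monotonic_decreasing)        # True (non-strict)
print(idx[::-1].is_monotonic_increasing)  # True: the reversed index is increasing
```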
2,530 | def test_sends_assignment_notification(self):
url = f"/api/0/issues/{self.group.id}/"
with self.tasks():
response = self.client.put(url, format="json", data={"assignedTo": self.user.username})
assert response.status_code == 200, response.content
msg = mail.outbox[0]
# check the txt version
assert f"assigned {self.short_id} to themselves" in msg.body
# check the html version
assert f"{self.short_id}</a> to themselves</p>" in msg.alternatives[0][0]
attachment, text = get_attachment()
assert text == f"Issue assigned to {self.name} by themselves"
assert attachment["title"] == self.group.title
assert (
attachment["footer"]
== f"{self.project.slug} | <http://testserver/settings/account/notifications/workflow/?referrer=assigned_activity-slack-user|Notification Settings>"
)
|
Test that an email AND Slack notification are sent with
the expected values when an issue is assigned.
| 18 | 70 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_sends_assignment_notification(self):
url = f"/api/0/issues/{self.group.id}/"
with self.tasks():
response = self.client.put(url, format="json", data={"assignedTo": self.user.username})
assert response.status_code == 200, response.content
msg = mail.outbox[0]
# check the txt version
assert f"assigned {self.short_id} to themselves" in msg.body
# check the html version
assert f"{self.short_id}</a> to themselves</p>" in msg.alternatives[0][0]
attachment, text = get_attachment()
assert text == f"Issue assigned to {self.name} by themselves"
assert attachment["title"] == self.group.title
assert (
attachment["footer"]
== f"{self.project.slug} | <http://testserver/settings/account/notifications/workflow/?referrer=assigned_activity-slack-user|Notification Settings>"
)
```
###Assistant :
Test that an email AND Slack notification are sent with
the expected values when an issue is assigned.
|
2,531 | def _ask_default(self, default=''):
self.prompt_output.write('Please enter the default value as valid Python.')
if default:
self.prompt_output.write(
f"Accept the default '{default}' by pressing 'Enter' or "
f"provide another value."
)
self.prompt_output.write(
'The datetime and django.utils.timezone modules are available, so '
'it is possible to provide e.g. timezone.now as a value.'
)
self.prompt_output.write("Type 'exit' to exit this prompt")
while True:
if default:
prompt = "[default: {}] >>> ".format(default)
else:
prompt = ">>> "
self.prompt_output.write(prompt, ending='')
code = input()
if not code and default:
code = default
if not code:
self.prompt_output.write("Please enter some code, or 'exit' (without quotes) to exit.")
elif code == "exit":
sys.exit(1)
else:
try:
return eval(code, {}, {'datetime': datetime, 'timezone': timezone})
except (SyntaxError, NameError) as e:
self.prompt_output.write('Invalid input: %s' % e)
|
Prompt for a default value.
The ``default`` argument allows providing a custom default value (as a
string) which will be shown to the user and used as the return value
if the user doesn't provide any other input.
| 38 | 119 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _ask_default(self, default=''):
self.prompt_output.write('Please enter the default value as valid Python.')
if default:
self.prompt_output.write(
f"Accept the default '{default}' by pressing 'Enter' or "
f"provide another value."
)
self.prompt_output.write(
'The datetime and django.utils.timezone modules are available, so '
'it is possible to provide e.g. timezone.now as a value.'
)
self.prompt_output.write("Type 'exit' to exit this prompt")
while True:
if default:
prompt = "[default: {}] >>> ".format(default)
else:
prompt = ">>> "
self.prompt_output.write(prompt, ending='')
code = input()
if not code and default:
code = default
if not code:
self.prompt_output.write("Please enter some code, or 'exit' (without quotes) to exit.")
elif code == "exit":
sys.exit(1)
else:
try:
return eval(code, {}, {'datetime': datetime, 'timezone': timezone})
except (SyntaxError, NameError) as e:
self.prompt_output.write('Invalid input: %s' % e)
```
###Assistant :
Prompt for a default value.
The ``default`` argument allows providing a custom default value (as a
string) which will be shown to the user and used as the return value
if the user doesn't provide any other input.
|
2,532 | def reset_modules(self) -> None:
self.modules = {}
self.update_modules()
self.parse_modules()
| Reset the loaded modules list. This is called from cleanup to clear
temporarily loaded modules. | 15 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def reset_modules(self) -> None:
self.modules = {}
self.update_modules()
self.parse_modules()
```
###Assistant : Reset the loaded modules list. This is called from cleanup to clear
temporarily loaded modules.
|
2,533 | def test_ddppo_compilation(self):
config = ppo.ddppo.DEFAULT_CONFIG.copy()
config["num_gpus_per_worker"] = 0
num_iterations = 2
for _ in framework_iterator(config, frameworks="torch"):
trainer = ppo.ddppo.DDPPOTrainer(config=config, env="CartPole-v0")
for i in range(num_iterations):
results = trainer.train()
check_train_results(results)
print(results)
# Make sure weights on all workers are the same (including
# local one).
weights = trainer.workers.foreach_worker(lambda w: w.get_weights())
for w in weights[1:]:
check(w, weights[0])
check_compute_single_action(trainer)
trainer.stop()
| Test whether a DDPPOTrainer can be built with both frameworks. | 10 | 56 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_ddppo_compilation(self):
config = ppo.ddppo.DEFAULT_CONFIG.copy()
config["num_gpus_per_worker"] = 0
num_iterations = 2
for _ in framework_iterator(config, frameworks="torch"):
trainer = ppo.ddppo.DDPPOTrainer(config=config, env="CartPole-v0")
for i in range(num_iterations):
results = trainer.train()
check_train_results(results)
print(results)
# Make sure, weights on all workers are the same (including
# local one).
weights = trainer.workers.foreach_worker(lambda w: w.get_weights())
for w in weights[1:]:
check(w, weights[0])
check_compute_single_action(trainer)
trainer.stop()
```
###Assistant : Test whether a DDPPOTrainer can be built with both frameworks.
|
2,534 | def log_cosh(y_true, y_pred):
y_pred = tf.convert_to_tensor(y_pred)
y_true = tf.cast(y_true, y_pred.dtype)
def _logcosh(x):
return x + tf.math.softplus(-2. * x) - tf.cast(
tf.math.log(2.), x.dtype)
return backend.mean(_logcosh(y_pred - y_true), axis=-1)
@keras_export('keras.metrics.categorical_crossentropy',
'keras.losses.categorical_crossentropy')
@tf.__internal__.dispatch.add_dispatch_support | Logarithm of the hyperbolic cosine of the prediction error.
`log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and
to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly
like the mean squared error, but will not be so strongly affected by the
occasional wildly incorrect prediction.
Standalone usage:
>>> y_true = np.random.random(size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.logcosh(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> x = y_pred - y_true
>>> assert np.allclose(
... loss.numpy(),
... np.mean(x + np.log(np.exp(-2. * x) + 1.) - tf.math.log(2.), axis=-1),
... atol=1e-5)
Args:
y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.
y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.
Returns:
Logcosh error values. shape = `[batch_size, d0, .. dN-1]`.
| 131 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def log_cosh(y_true, y_pred):
y_pred = tf.convert_to_tensor(y_pred)
y_true = tf.cast(y_true, y_pred.dtype)
def _logcosh(x):
return x + tf.math.softplus(-2. * x) - tf.cast(
tf.math.log(2.), x.dtype)
return backend.mean(_logcosh(y_pred - y_true), axis=-1)
@keras_export('keras.metrics.categorical_crossentropy',
'keras.losses.categorical_crossentropy')
@tf.__internal__.dispatch.add_dispatch_support
```
###Assistant : Logarithm of the hyperbolic cosine of the prediction error.
`log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and
to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly
like the mean squared error, but will not be so strongly affected by the
occasional wildly incorrect prediction.
Standalone usage:
>>> y_true = np.random.random(size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.logcosh(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> x = y_pred - y_true
>>> assert np.allclose(
... loss.numpy(),
... np.mean(x + np.log(np.exp(-2. * x) + 1.) - tf.math.log(2.), axis=-1),
... atol=1e-5)
Args:
y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.
y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.
Returns:
Logcosh error values. shape = `[batch_size, d0, .. dN-1]`.
|
2,535 | def make_predict_function(self, force=False):
if self.predict_function is not None and not force:
return self.predict_function
| Creates a function that executes one step of inference.
This method can be overridden to support custom inference logic.
This method is called by `Model.predict` and `Model.predict_on_batch`.
Typically, this method directly controls `tf.function` and
`tf.distribute.Strategy` settings, and delegates the actual evaluation
logic to `Model.predict_step`.
This function is cached the first time `Model.predict` or
`Model.predict_on_batch` is called. The cache is cleared whenever
`Model.compile` is called. You can skip the cache and generate again the
function with `force=True`.
Args:
force: Whether to regenerate the predict function and skip the cached
function if available.
Returns:
Function. The function created by this method should accept a
`tf.data.Iterator`, and return the outputs of the `Model`.
| 110 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def make_predict_function(self, force=False):
if self.predict_function is not None and not force:
return self.predict_function
```
###Assistant : Creates a function that executes one step of inference.
This method can be overridden to support custom inference logic.
This method is called by `Model.predict` and `Model.predict_on_batch`.
Typically, this method directly controls `tf.function` and
`tf.distribute.Strategy` settings, and delegates the actual evaluation
logic to `Model.predict_step`.
This function is cached the first time `Model.predict` or
`Model.predict_on_batch` is called. The cache is cleared whenever
`Model.compile` is called. You can skip the cache and generate again the
function with `force=True`.
Args:
force: Whether to regenerate the predict function and skip the cached
function if available.
Returns:
Function. The function created by this method should accept a
`tf.data.Iterator`, and return the outputs of the `Model`.
|
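A hedged sketch of the caching behavior described above; the `force` argument is assumed to be available (TensorFlow 2.8+), and the tiny model is illustrative only.
```Python
# Hedged sketch: the predict function is cached until force=True (TF >= 2.8 assumed).
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")
model.predict(np.zeros((2, 4)))  # builds and caches the predict function

fn_cached = model.make_predict_function()            # returned from the cache
fn_forced = model.make_predict_function(force=True)  # cache skipped, rebuilt
assert fn_cached is not fn_forced
```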
2,536 | def fit(self, X, y, **fit_params):
self._validate_params()
return self._fit(X, y, **fit_params)
| Fit the RFE model and then the underlying estimator on the selected features.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples.
y : array-like of shape (n_samples,)
The target values.
**fit_params : dict
Additional parameters passed to the `fit` method of the underlying
estimator.
Returns
-------
self : object
Fitted estimator.
| 58 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def fit(self, X, y, **fit_params):
self._validate_params()
return self._fit(X, y, **fit_params)
```
###Assistant : Fit the RFE model and then the underlying estimator on the selected features.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples.
y : array-like of shape (n_samples,)
The target values.
**fit_params : dict
Additional parameters passed to the `fit` method of the underlying
estimator.
Returns
-------
self : object
Fitted estimator.
|
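A hedged usage sketch for the documented `fit`, assuming scikit-learn's RFE; the synthetic dataset and estimator choice are illustrative.
```Python
# Hedged usage sketch for RFE.fit (scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, n_features=10, random_state=0)
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)
selector.fit(X, y)        # fits RFE, then the estimator on the selected features
print(selector.support_)  # boolean mask marking the 5 selected features
```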
2,537 | def test_invalid_number_selection_fails():
number_string = "99999999"
result = get_first_menu_and_fail(number_string)
lines = result.stdout.splitlines()
# Strange string additions are due to coloring, I believe
assert lines[-1] == f"\x1b[31mInvalid selection {number_string}\x1b[0m"
assert result.exit_code == 1
|
We need to make sure that if we give an invalid number that the CLI
will exit.
| 17 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_invalid_number_selection_fails():
number_string = "99999999"
result = get_first_menu_and_fail(number_string)
lines = result.stdout.splitlines()
# Strange string additions are due to coloring, I believe
assert lines[-1] == f"\x1b[31mInvalid selection {number_string}\x1b[0m"
assert result.exit_code == 1
```
###Assistant :
We need to make sure that if we give an invalid number that the CLI
will exit.
|
2,538 | def filldedent(s, w=70, **kwargs):
return '\n' + fill(dedent(str(s)).strip('\n'), width=w, **kwargs)
|
Strips leading and trailing empty lines from a copy of ``s``, then dedents,
fills and returns it.
Empty line stripping serves to deal with docstrings like this one that
start with a newline after the initial triple quote, inserting an empty
line at the beginning of the string.
Additional keyword arguments will be passed to ``textwrap.fill()``.
See Also
========
strlines, rawlines
| 61 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def filldedent(s, w=70, **kwargs):
return '\n' + fill(dedent(str(s)).strip('\n'), width=w, **kwargs)
```
###Assistant :
Strips leading and trailing empty lines from a copy of ``s``, then dedents,
fills and returns it.
Empty line stripping serves to deal with docstrings like this one that
start with a newline after the initial triple quote, inserting an empty
line at the beginning of the string.
Additional keyword arguments will be passed to ``textwrap.fill()``.
See Also
========
strlines, rawlines
|
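Because `filldedent` only composes stdlib calls, a self-contained sketch reproduces the documented behavior without SymPy:
```Python
# Self-contained sketch of filldedent using only the stdlib (mirrors the code above).
from textwrap import dedent, fill

def filldedent(s, w=70, **kwargs):
    return '\n' + fill(dedent(str(s)).strip('\n'), width=w, **kwargs)

msg = """
    This indented, multi-line message is
    dedented, refilled to the given width,
    and prefixed with a single newline.
"""
print(filldedent(msg, w=40))
```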
2,539 | def in1d(ar1, ar2, assume_unique=False, invert=False, kind=None):
# Ravel both arrays, behavior for the first array could be different
ar1 = np.asarray(ar1).ravel()
ar2 = np.asarray(ar2).ravel()
# Ensure that iteration through object arrays yields size-1 arrays
if ar2.dtype == object:
ar2 = ar2.reshape(-1, 1)
# Convert booleans to uint8 so we can use the fast integer algorithm
if ar1.dtype == bool:
ar1 = ar1.view(np.uint8)
if ar2.dtype == bool:
ar2 = ar2.view(np.uint8)
# Check if we can use a fast integer algorithm:
integer_arrays = (np.issubdtype(ar1.dtype, np.integer) and
np.issubdtype(ar2.dtype, np.integer))
if kind not in {None, 'sort', 'table'}:
raise ValueError(
"Invalid kind: {0}. ".format(kind)
+ "Please use None, 'sort' or 'table'.")
if integer_arrays and kind in {None, 'table'}:
ar2_min = np.min(ar2)
ar2_max = np.max(ar2)
ar2_range = int(ar2_max) - int(ar2_min)
# Constraints on whether we can actually use the table method:
range_safe_from_overflow = ar2_range < np.iinfo(ar2.dtype).max
below_memory_constraint = ar2_range <= 6 * (ar1.size + ar2.size)
# Optimal performance is for approximately
# log10(size) > (log10(range) - 2.27) / 0.927.
# However, here we set the requirement that by default
# the intermediate array can only be 6x
# the combined memory allocation of the original
# arrays. See discussion on
# https://github.com/numpy/numpy/pull/12065.
if (
range_safe_from_overflow and
(below_memory_constraint or kind == 'table')
):
if invert:
outgoing_array = np.ones_like(ar1, dtype=bool)
else:
outgoing_array = np.zeros_like(ar1, dtype=bool)
# Make elements 1 where the integer exists in ar2
if invert:
isin_helper_ar = np.ones(ar2_range + 1, dtype=bool)
isin_helper_ar[ar2 - ar2_min] = 0
else:
isin_helper_ar = np.zeros(ar2_range + 1, dtype=bool)
isin_helper_ar[ar2 - ar2_min] = 1
# Mask out elements we know won't work
basic_mask = (ar1 <= ar2_max) & (ar1 >= ar2_min)
outgoing_array[basic_mask] = isin_helper_ar[ar1[basic_mask] -
ar2_min]
return outgoing_array
elif kind == 'table': # not range_safe_from_overflow
raise RuntimeError(
"You have specified kind='table', "
"but the range of values in `ar2` exceeds the "
"maximum integer of the datatype. "
"Please set `kind` to None or 'sort'."
)
elif kind == 'table':
raise ValueError(
"The 'table' method is only "
"supported for boolean or integer arrays. "
"Please select 'sort' or None for kind."
)
# Check if one of the arrays may contain arbitrary objects
contains_object = ar1.dtype.hasobject or ar2.dtype.hasobject
# This code is run when
# a) the first condition is true, making the code significantly faster
# b) the second condition is true (i.e. `ar1` or `ar2` may contain
# arbitrary objects), since then sorting is not guaranteed to work
if len(ar2) < 10 * len(ar1) ** 0.145 or contains_object:
if invert:
mask = np.ones(len(ar1), dtype=bool)
for a in ar2:
mask &= (ar1 != a)
else:
mask = np.zeros(len(ar1), dtype=bool)
for a in ar2:
mask |= (ar1 == a)
return mask
# Otherwise use sorting
if not assume_unique:
ar1, rev_idx = np.unique(ar1, return_inverse=True)
ar2 = np.unique(ar2)
ar = np.concatenate((ar1, ar2))
# We need this to be a stable sort, so always use 'mergesort'
# here. The values from the first array should always come before
# the values from the second array.
order = ar.argsort(kind='mergesort')
sar = ar[order]
if invert:
bool_ar = (sar[1:] != sar[:-1])
else:
bool_ar = (sar[1:] == sar[:-1])
flag = np.concatenate((bool_ar, [invert]))
ret = np.empty(ar.shape, dtype=bool)
ret[order] = flag
if assume_unique:
return ret[:len(ar1)]
else:
return ret[rev_idx]
|
Test whether each element of a 1-D array is also present in a second array.
Returns a boolean array the same length as `ar1` that is True
where an element of `ar1` is in `ar2` and False otherwise.
We recommend using :func:`isin` instead of `in1d` for new code.
Parameters
----------
ar1 : (M,) array_like
Input array.
ar2 : array_like
The values against which to test each value of `ar1`.
assume_unique : bool, optional
If True, the input arrays are both assumed to be unique, which
can speed up the calculation. Default is False.
invert : bool, optional
If True, the values in the returned array are inverted (that is,
False where an element of `ar1` is in `ar2` and True otherwise).
Default is False. ``np.in1d(a, b, invert=True)`` is equivalent
to (but is faster than) ``np.invert(in1d(a, b))``.
kind : {None, 'sort', 'table'}, optional
The algorithm to use. This will not affect the final result,
but will affect the speed. Default will select automatically
based on memory considerations.
* If 'sort', will use a mergesort-based approach. This will have
a memory usage of roughly 6 times the sum of the sizes of
`ar1` and `ar2`, not accounting for size of dtypes.
* If 'table', will use a key-dictionary approach similar
to a counting sort. This is only available for boolean and
integer arrays. This will have a memory usage of the
size of `ar1` plus the max-min value of `ar2`. This tends
to be the faster method if the following formula is true:
``log10(len(ar2)) > (log10(max(ar2)-min(ar2)) - 2.27) / 0.927``,
but may use greater memory.
* If `None`, will automatically choose 'table' if
the required memory allocation is less than or equal to
6 times the sum of the sizes of `ar1` and `ar2`,
otherwise will use 'sort'. This is done to not use
a large amount of memory by default, even though
'table' may be faster in most cases.
.. versionadded:: 1.8.0
Returns
-------
in1d : (M,) ndarray, bool
The values `ar1[in1d]` are in `ar2`.
See Also
--------
isin : Version of this function that preserves the
shape of ar1.
numpy.lib.arraysetops : Module with a number of other functions for
performing set operations on arrays.
Notes
-----
`in1d` can be considered as an element-wise function version of the
python keyword `in`, for 1-D sequences. ``in1d(a, b)`` is roughly
equivalent to ``np.array([item in b for item in a])``.
However, this idea fails if `ar2` is a set, or similar (non-sequence)
container: As ``ar2`` is converted to an array, in those cases
``asarray(ar2)`` is an object array rather than the expected array of
contained values.
.. versionadded:: 1.4.0
Examples
--------
>>> test = np.array([0, 1, 2, 5, 0])
>>> states = [0, 2]
>>> mask = np.in1d(test, states)
>>> mask
array([ True, False, True, False, True])
>>> test[mask]
array([0, 2, 0])
>>> mask = np.in1d(test, states, invert=True)
>>> mask
array([False, True, False, True, False])
>>> test[mask]
array([1, 5])
| 485 | 528 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def in1d(ar1, ar2, assume_unique=False, invert=False, kind=None):
# Ravel both arrays, behavior for the first array could be different
ar1 = np.asarray(ar1).ravel()
ar2 = np.asarray(ar2).ravel()
# Ensure that iteration through object arrays yields size-1 arrays
if ar2.dtype == object:
ar2 = ar2.reshape(-1, 1)
# Convert booleans to uint8 so we can use the fast integer algorithm
if ar1.dtype == bool:
ar1 = ar1.view(np.uint8)
if ar2.dtype == bool:
ar2 = ar2.view(np.uint8)
# Check if we can use a fast integer algorithm:
integer_arrays = (np.issubdtype(ar1.dtype, np.integer) and
np.issubdtype(ar2.dtype, np.integer))
if kind not in {None, 'sort', 'table'}:
raise ValueError(
"Invalid kind: {0}. ".format(kind)
+ "Please use None, 'sort' or 'table'.")
if integer_arrays and kind in {None, 'table'}:
ar2_min = np.min(ar2)
ar2_max = np.max(ar2)
ar2_range = int(ar2_max) - int(ar2_min)
# Constraints on whether we can actually use the table method:
range_safe_from_overflow = ar2_range < np.iinfo(ar2.dtype).max
below_memory_constraint = ar2_range <= 6 * (ar1.size + ar2.size)
# Optimal performance is for approximately
# log10(size) > (log10(range) - 2.27) / 0.927.
# However, here we set the requirement that by default
# the intermediate array can only be 6x
# the combined memory allocation of the original
# arrays. See discussion on
# https://github.com/numpy/numpy/pull/12065.
if (
range_safe_from_overflow and
(below_memory_constraint or kind == 'table')
):
if invert:
outgoing_array = np.ones_like(ar1, dtype=bool)
else:
outgoing_array = np.zeros_like(ar1, dtype=bool)
# Make elements 1 where the integer exists in ar2
if invert:
isin_helper_ar = np.ones(ar2_range + 1, dtype=bool)
isin_helper_ar[ar2 - ar2_min] = 0
else:
isin_helper_ar = np.zeros(ar2_range + 1, dtype=bool)
isin_helper_ar[ar2 - ar2_min] = 1
# Mask out elements we know won't work
basic_mask = (ar1 <= ar2_max) & (ar1 >= ar2_min)
outgoing_array[basic_mask] = isin_helper_ar[ar1[basic_mask] -
ar2_min]
return outgoing_array
elif kind == 'table': # not range_safe_from_overflow
raise RuntimeError(
"You have specified kind='table', "
"but the range of values in `ar2` exceeds the "
"maximum integer of the datatype. "
"Please set `kind` to None or 'sort'."
)
elif kind == 'table':
raise ValueError(
"The 'table' method is only "
"supported for boolean or integer arrays. "
"Please select 'sort' or None for kind."
)
# Check if one of the arrays may contain arbitrary objects
contains_object = ar1.dtype.hasobject or ar2.dtype.hasobject
# This code is run when
# a) the first condition is true, making the code significantly faster
# b) the second condition is true (i.e. `ar1` or `ar2` may contain
# arbitrary objects), since then sorting is not guaranteed to work
if len(ar2) < 10 * len(ar1) ** 0.145 or contains_object:
if invert:
mask = np.ones(len(ar1), dtype=bool)
for a in ar2:
mask &= (ar1 != a)
else:
mask = np.zeros(len(ar1), dtype=bool)
for a in ar2:
mask |= (ar1 == a)
return mask
# Otherwise use sorting
if not assume_unique:
ar1, rev_idx = np.unique(ar1, return_inverse=True)
ar2 = np.unique(ar2)
ar = np.concatenate((ar1, ar2))
# We need this to be a stable sort, so always use 'mergesort'
# here. The values from the first array should always come before
# the values from the second array.
order = ar.argsort(kind='mergesort')
sar = ar[order]
if invert:
bool_ar = (sar[1:] != sar[:-1])
else:
bool_ar = (sar[1:] == sar[:-1])
flag = np.concatenate((bool_ar, [invert]))
ret = np.empty(ar.shape, dtype=bool)
ret[order] = flag
if assume_unique:
return ret[:len(ar1)]
else:
return ret[rev_idx]
```
###Assistant :
Test whether each element of a 1-D array is also present in a second array.
Returns a boolean array the same length as `ar1` that is True
where an element of `ar1` is in `ar2` and False otherwise.
We recommend using :func:`isin` instead of `in1d` for new code.
Parameters
----------
ar1 : (M,) array_like
Input array.
ar2 : array_like
The values against which to test each value of `ar1`.
assume_unique : bool, optional
If True, the input arrays are both assumed to be unique, which
can speed up the calculation. Default is False.
invert : bool, optional
If True, the values in the returned array are inverted (that is,
False where an element of `ar1` is in `ar2` and True otherwise).
Default is False. ``np.in1d(a, b, invert=True)`` is equivalent
to (but is faster than) ``np.invert(in1d(a, b))``.
kind : {None, 'sort', 'table'}, optional
The algorithm to use. This will not affect the final result,
but will affect the speed. Default will select automatically
based on memory considerations.
* If 'sort', will use a mergesort-based approach. This will have
a memory usage of roughly 6 times the sum of the sizes of
`ar1` and `ar2`, not accounting for size of dtypes.
* If 'table', will use a key-dictionary approach similar
to a counting sort. This is only available for boolean and
integer arrays. This will have a memory usage of the
size of `ar1` plus the max-min value of `ar2`. This tends
to be the faster method if the following formula is true:
``log10(len(ar2)) > (log10(max(ar2)-min(ar2)) - 2.27) / 0.927``,
but may use greater memory.
* If `None`, will automatically choose 'table' if
the required memory allocation is less than or equal to
6 times the sum of the sizes of `ar1` and `ar2`,
otherwise will use 'sort'. This is done to not use
a large amount of memory by default, even though
'table' may be faster in most cases.
.. versionadded:: 1.8.0
Returns
-------
in1d : (M,) ndarray, bool
The values `ar1[in1d]` are in `ar2`.
See Also
--------
isin : Version of this function that preserves the
shape of ar1.
numpy.lib.arraysetops : Module with a number of other functions for
performing set operations on arrays.
Notes
-----
`in1d` can be considered as an element-wise function version of the
python keyword `in`, for 1-D sequences. ``in1d(a, b)`` is roughly
equivalent to ``np.array([item in b for item in a])``.
However, this idea fails if `ar2` is a set, or similar (non-sequence)
container: As ``ar2`` is converted to an array, in those cases
``asarray(ar2)`` is an object array rather than the expected array of
contained values.
.. versionadded:: 1.4.0
Examples
--------
>>> test = np.array([0, 1, 2, 5, 0])
>>> states = [0, 2]
>>> mask = np.in1d(test, states)
>>> mask
array([ True, False, True, False, True])
>>> test[mask]
array([0, 2, 0])
>>> mask = np.in1d(test, states, invert=True)
>>> mask
array([False, True, False, True, False])
>>> test[mask]
array([1, 5])
|
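A short sketch of the `kind` parameter the docstring describes, assuming a NumPy version that accepts `kind`; both algorithms return the same mask, only the speed differs.
```Python
# Hedged sketch: 'sort' and 'table' agree (NumPy with `kind` support assumed).
import numpy as np

ar1 = np.array([0, 3, 7, 7, 2])
ar2 = np.array([2, 3, 5])
by_sort = np.in1d(ar1, ar2, kind='sort')
by_table = np.in1d(ar1, ar2, kind='table')  # counting-sort style lookup table
assert np.array_equal(by_sort, by_table)
print(by_sort)  # [False  True False False  True]
```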
2,540 | def test_devices(self) -> None:
# Login in as the user
self._get_token()
# Check that we don't see a new device in our devices list
channel = self.make_request(
"GET", "devices", b"{}", access_token=self.other_user_tok
)
self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body)
# We should only see the one device (from the login in `prepare`)
self.assertEqual(len(channel.json_body["devices"]), 1)
| Tests that logging in as a user doesn't create a new device for them. | 14 | 50 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_devices(self) -> None:
# Login in as the user
self._get_token()
# Check that we don't see a new device in our devices list
channel = self.make_request(
"GET", "devices", b"{}", access_token=self.other_user_tok
)
self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body)
# We should only see the one device (from the login in `prepare`)
self.assertEqual(len(channel.json_body["devices"]), 1)
```
###Assistant : Tests that logging in as a user doesn't create a new device for them.
|
2,541 | def in_ipython() -> bool:
try:
eval('__IPYTHON__')
except NameError:
return False
else: # pragma: no cover
return True
|
Check whether we're in an ipython environment, including jupyter notebooks.
| 10 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def in_ipython() -> bool:
try:
eval('__IPYTHON__')
except NameError:
return False
else: # pragma: no cover
return True
```
###Assistant :
Check whether we're in an ipython environment, including jupyter notebooks.
|
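Since the function is pure stdlib, a self-contained copy demonstrates the documented check:
```Python
# Self-contained sketch: __IPYTHON__ exists only inside IPython/Jupyter sessions.
def in_ipython() -> bool:
    try:
        eval('__IPYTHON__')
    except NameError:
        return False
    else:
        return True

print(in_ipython())  # False in a plain interpreter, True in a notebook
```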
2,542 | def test_stroptions_deprecated_internal_subset():
with pytest.raises(ValueError, match="deprecated options must be a subset"):
StrOptions({"a", "b", "c"}, deprecated={"a", "d"})
with pytest.raises(ValueError, match="internal options must be a subset"):
StrOptions({"a", "b", "c"}, internal={"a", "d"})
| Check that the deprecated and internal parameters must be subsets of options. | 12 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_stroptions_deprecated_internal_subset():
with pytest.raises(ValueError, match="deprecated options must be a subset"):
StrOptions({"a", "b", "c"}, deprecated={"a", "d"})
with pytest.raises(ValueError, match="internal options must be a subset"):
StrOptions({"a", "b", "c"}, internal={"a", "d"})
```
###Assistant : Check that the deprecated and internal parameters must be subsets of options.
|
2,543 | def test_change_view_without_object_change_permission(self):
change_url = reverse("admin9:admin_views_article_change", args=(self.a1.pk,))
self.client.force_login(self.viewuser)
response = self.client.get(change_url)
self.assertEqual(response.context["title"], "View article")
self.assertContains(response, "<title>View article | Django site admin</title>")
self.assertContains(response, "<h1>View article</h1>")
self.assertContains(
response,
'<a href="/test_admin/admin9/admin_views/article/" class="closelink">Close</a>',
)
|
The object should be read-only if the user has permission to view it
and change objects of that type but not to change the current object.
| 26 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_change_view_without_object_change_permission(self):
change_url = reverse("admin9:admin_views_article_change", args=(self.a1.pk,))
self.client.force_login(self.viewuser)
response = self.client.get(change_url)
self.assertEqual(response.context["title"], "View article")
self.assertContains(response, "<title>View article | Django site admin</title>")
self.assertContains(response, "<h1>View article</h1>")
self.assertContains(
response,
'<a href="/test_admin/admin9/admin_views/article/" class="closelink">Close</a>',
)
```
###Assistant :
The object should be read-only if the user has permission to view it
and change objects of that type but not to change the current object.
|
2,544 | def __getattr__(name):
import warnings
if name in __deprecated_num_index_names:
warnings.warn(
f"pandas.{name} is deprecated "
"and will be removed from pandas in a future version. "
"Use pandas.Index with the appropriate dtype instead.",
FutureWarning,
stacklevel=2,
)
from pandas.core.api import Float64Index, Int64Index, UInt64Index
return {
"Float64Index": Float64Index,
"Int64Index": Int64Index,
"UInt64Index": UInt64Index,
}[name]
elif name == "datetime":
warnings.warn(
"The pandas.datetime class is deprecated "
"and will be removed from pandas in a future version. "
"Import from datetime module instead.",
FutureWarning,
stacklevel=2,
)
from datetime import datetime as dt
return dt
elif name == "np":
warnings.warn(
"The pandas.np module is deprecated "
"and will be removed from pandas in a future version. "
"Import numpy directly instead.",
FutureWarning,
stacklevel=2,
)
import numpy as np
return np
elif name in {"SparseSeries", "SparseDataFrame"}:
warnings.warn(
f"The {name} class is removed from pandas. Accessing it from "
"the top-level namespace will also be removed in the next version.",
FutureWarning,
stacklevel=2,
)
return type(name, (), {})
elif name == "SparseArray":
warnings.warn(
"The pandas.SparseArray class is deprecated "
"and will be removed from pandas in a future version. "
"Use pandas.arrays.SparseArray instead.",
FutureWarning,
stacklevel=2,
)
from pandas.core.arrays.sparse import SparseArray as _SparseArray
return _SparseArray
raise AttributeError(f"module 'pandas' has no attribute '{name}'")
# module level doc-string
__doc__ =
# Use __all__ to let type checkers know what is part of the public API.
# Pandas is not (yet) a py.typed library: the public API is determined
# based on the documentation.
__all__ = [
"BooleanDtype",
"Categorical",
"CategoricalDtype",
"CategoricalIndex",
"DataFrame",
"DateOffset",
"DatetimeIndex",
"DatetimeTZDtype",
"ExcelFile",
"ExcelWriter",
"Flags",
"Float32Dtype",
"Float64Dtype",
"Grouper",
"HDFStore",
"Index",
"IndexSlice",
"Int16Dtype",
"Int32Dtype",
"Int64Dtype",
"Int8Dtype",
"Interval",
"IntervalDtype",
"IntervalIndex",
"MultiIndex",
"NA",
"NaT",
"NamedAgg",
"Period",
"PeriodDtype",
"PeriodIndex",
"RangeIndex",
"Series",
"SparseDtype",
"StringDtype",
"Timedelta",
"TimedeltaIndex",
"Timestamp",
"UInt16Dtype",
"UInt32Dtype",
"UInt64Dtype",
"UInt8Dtype",
"api",
"array",
"arrays",
"bdate_range",
"concat",
"crosstab",
"cut",
"date_range",
"describe_option",
"errors",
"eval",
"factorize",
"get_dummies",
"get_option",
"infer_freq",
"interval_range",
"io",
"isna",
"isnull",
"json_normalize",
"lreshape",
"melt",
"merge",
"merge_asof",
"merge_ordered",
"notna",
"notnull",
"offsets",
"option_context",
"options",
"period_range",
"pivot",
"pivot_table",
"plotting",
"qcut",
"read_clipboard",
"read_csv",
"read_excel",
"read_feather",
"read_fwf",
"read_gbq",
"read_hdf",
"read_html",
"read_json",
"read_orc",
"read_parquet",
"read_pickle",
"read_sas",
"read_spss",
"read_sql",
"read_sql_query",
"read_sql_table",
"read_stata",
"read_table",
"read_xml",
"reset_option",
"set_eng_float_format",
"set_option",
"show_versions",
"test",
"testing",
"timedelta_range",
"to_datetime",
"to_numeric",
"to_pickle",
"to_timedelta",
"tseries",
"unique",
"value_counts",
"wide_to_long",
]
|
pandas - a powerful data analysis and manipulation library for Python
=====================================================================
**pandas** is a Python package providing fast, flexible, and expressive data
structures designed to make working with "relational" or "labeled" data both
easy and intuitive. It aims to be the fundamental high-level building block for
doing practical, **real world** data analysis in Python. Additionally, it has
the broader goal of becoming **the most powerful and flexible open source data
analysis / manipulation tool available in any language**. It is already well on
its way toward this goal.
Main Features
-------------
Here are just a few of the things that pandas does well:
- Easy handling of missing data in floating point as well as non-floating
point data.
- Size mutability: columns can be inserted and deleted from DataFrame and
higher dimensional objects
- Automatic and explicit data alignment: objects can be explicitly aligned
to a set of labels, or the user can simply ignore the labels and let
`Series`, `DataFrame`, etc. automatically align the data for you in
computations.
- Powerful, flexible group by functionality to perform split-apply-combine
operations on data sets, for both aggregating and transforming data.
- Make it easy to convert ragged, differently-indexed data in other Python
and NumPy data structures into DataFrame objects.
- Intelligent label-based slicing, fancy indexing, and subsetting of large
data sets.
- Intuitive merging and joining data sets.
- Flexible reshaping and pivoting of data sets.
- Hierarchical labeling of axes (possible to have multiple labels per tick).
- Robust IO tools for loading data from flat files (CSV and delimited),
Excel files, databases, and saving/loading data from the ultrafast HDF5
format.
- Time series-specific functionality: date range generation and frequency
conversion, moving window statistics, date shifting and lagging.
| 289 | 355 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __getattr__(name):
import warnings
if name in __deprecated_num_index_names:
warnings.warn(
f"pandas.{name} is deprecated "
"and will be removed from pandas in a future version. "
"Use pandas.Index with the appropriate dtype instead.",
FutureWarning,
stacklevel=2,
)
from pandas.core.api import Float64Index, Int64Index, UInt64Index
return {
"Float64Index": Float64Index,
"Int64Index": Int64Index,
"UInt64Index": UInt64Index,
}[name]
elif name == "datetime":
warnings.warn(
"The pandas.datetime class is deprecated "
"and will be removed from pandas in a future version. "
"Import from datetime module instead.",
FutureWarning,
stacklevel=2,
)
from datetime import datetime as dt
return dt
elif name == "np":
warnings.warn(
"The pandas.np module is deprecated "
"and will be removed from pandas in a future version. "
"Import numpy directly instead.",
FutureWarning,
stacklevel=2,
)
import numpy as np
return np
elif name in {"SparseSeries", "SparseDataFrame"}:
warnings.warn(
f"The {name} class is removed from pandas. Accessing it from "
"the top-level namespace will also be removed in the next version.",
FutureWarning,
stacklevel=2,
)
return type(name, (), {})
elif name == "SparseArray":
warnings.warn(
"The pandas.SparseArray class is deprecated "
"and will be removed from pandas in a future version. "
"Use pandas.arrays.SparseArray instead.",
FutureWarning,
stacklevel=2,
)
from pandas.core.arrays.sparse import SparseArray as _SparseArray
return _SparseArray
raise AttributeError(f"module 'pandas' has no attribute '{name}'")
# module level doc-string
__doc__ =
# Use __all__ to let type checkers know what is part of the public API.
# Pandas is not (yet) a py.typed library: the public API is determined
# based on the documentation.
__all__ = [
"BooleanDtype",
"Categorical",
"CategoricalDtype",
"CategoricalIndex",
"DataFrame",
"DateOffset",
"DatetimeIndex",
"DatetimeTZDtype",
"ExcelFile",
"ExcelWriter",
"Flags",
"Float32Dtype",
"Float64Dtype",
"Grouper",
"HDFStore",
"Index",
"IndexSlice",
"Int16Dtype",
"Int32Dtype",
"Int64Dtype",
"Int8Dtype",
"Interval",
"IntervalDtype",
"IntervalIndex",
"MultiIndex",
"NA",
"NaT",
"NamedAgg",
"Period",
"PeriodDtype",
"PeriodIndex",
"RangeIndex",
"Series",
"SparseDtype",
"StringDtype",
"Timedelta",
"TimedeltaIndex",
"Timestamp",
"UInt16Dtype",
"UInt32Dtype",
"UInt64Dtype",
"UInt8Dtype",
"api",
"array",
"arrays",
"bdate_range",
"concat",
"crosstab",
"cut",
"date_range",
"describe_option",
"errors",
"eval",
"factorize",
"get_dummies",
"get_option",
"infer_freq",
"interval_range",
"io",
"isna",
"isnull",
"json_normalize",
"lreshape",
"melt",
"merge",
"merge_asof",
"merge_ordered",
"notna",
"notnull",
"offsets",
"option_context",
"options",
"period_range",
"pivot",
"pivot_table",
"plotting",
"qcut",
"read_clipboard",
"read_csv",
"read_excel",
"read_feather",
"read_fwf",
"read_gbq",
"read_hdf",
"read_html",
"read_json",
"read_orc",
"read_parquet",
"read_pickle",
"read_sas",
"read_spss",
"read_sql",
"read_sql_query",
"read_sql_table",
"read_stata",
"read_table",
"read_xml",
"reset_option",
"set_eng_float_format",
"set_option",
"show_versions",
"test",
"testing",
"timedelta_range",
"to_datetime",
"to_numeric",
"to_pickle",
"to_timedelta",
"tseries",
"unique",
"value_counts",
"wide_to_long",
]
```
###Assistant :
pandas - a powerful data analysis and manipulation library for Python
=====================================================================
**pandas** is a Python package providing fast, flexible, and expressive data
structures designed to make working with "relational" or "labeled" data both
easy and intuitive. It aims to be the fundamental high-level building block for
doing practical, **real world** data analysis in Python. Additionally, it has
the broader goal of becoming **the most powerful and flexible open source data
analysis / manipulation tool available in any language**. It is already well on
its way toward this goal.
Main Features
-------------
Here are just a few of the things that pandas does well:
- Easy handling of missing data in floating point as well as non-floating
point data.
- Size mutability: columns can be inserted and deleted from DataFrame and
higher dimensional objects
- Automatic and explicit data alignment: objects can be explicitly aligned
to a set of labels, or the user can simply ignore the labels and let
`Series`, `DataFrame`, etc. automatically align the data for you in
computations.
- Powerful, flexible group by functionality to perform split-apply-combine
operations on data sets, for both aggregating and transforming data.
- Make it easy to convert ragged, differently-indexed data in other Python
and NumPy data structures into DataFrame objects.
- Intelligent label-based slicing, fancy indexing, and subsetting of large
data sets.
- Intuitive merging and joining data sets.
- Flexible reshaping and pivoting of data sets.
- Hierarchical labeling of axes (possible to have multiple labels per tick).
- Robust IO tools for loading data from flat files (CSV and delimited),
Excel files, databases, and saving/loading data from the ultrafast HDF5
format.
- Time series-specific functionality: date range generation and frequency
conversion, moving window statistics, date shifting and lagging.
|
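The shim above is an instance of PEP 562 module-level `__getattr__`; the following toy module sketches the same pattern with hypothetical names (`old_helper` and `new_helper` are not pandas APIs):
```Python
# Hedged sketch of the PEP 562 module-__getattr__ deprecation pattern (toy module).
import warnings

_RENAMED = {"old_helper": "new_helper"}  # hypothetical deprecated -> replacement

def new_helper():
    return "ok"

def __getattr__(name):
    if name in _RENAMED:
        warnings.warn(
            f"{name} is deprecated; use {_RENAMED[name]} instead.",
            FutureWarning,
            stacklevel=2,
        )
        return globals()[_RENAMED[name]]
    raise AttributeError(f"module has no attribute {name!r}")
# In another module: `import toy; toy.old_helper()` warns, then runs new_helper.
```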
2,545 | def readlink(path, canonicalize=False):
path = os.path.expanduser(path)
if not os.path.isabs(path):
raise SaltInvocationError("Path to link must be absolute.")
if not os.path.islink(path):
raise SaltInvocationError("A valid link was not specified.")
if canonicalize:
return os.path.realpath(path)
else:
return salt.utils.path.readlink(path)
|
.. versionadded:: 2014.1.0
Return the path that a symlink points to
If canonicalize is set to True, then it returns the final target
CLI Example:
.. code-block:: bash
salt '*' file.readlink /path/to/link
| 32 | 33 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def readlink(path, canonicalize=False):
path = os.path.expanduser(path)
if not os.path.isabs(path):
raise SaltInvocationError("Path to link must be absolute.")
if not os.path.islink(path):
raise SaltInvocationError("A valid link was not specified.")
if canonicalize:
return os.path.realpath(path)
else:
return salt.utils.path.readlink(path)
```
###Assistant :
.. versionadded:: 2014.1.0
Return the path that a symlink points to
If canonicalize is set to True, then it returns the final target
CLI Example:
.. code-block:: bash
salt '*' file.readlink /path/to/link
|
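The Salt function wraps stdlib calls, so a hedged stdlib-only sketch can show the two modes end to end (POSIX assumed; the temp paths are created on the fly):
```Python
# Hedged stdlib sketch of readlink's two modes (no Salt required; POSIX assumed).
import os
import tempfile

tmp = tempfile.mkdtemp()
target = os.path.join(tmp, "target")
link = os.path.join(tmp, "link")
open(target, "w").close()
os.symlink(target, link)

print(os.readlink(link))       # immediate target, like canonicalize=False
print(os.path.realpath(link))  # fully resolved path, like canonicalize=True
```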
2,546 | def warns_deprecated_sympy():
with warns(SymPyDeprecationWarning):
yield
@contextlib.contextmanager |
Shorthand for ``warns(SymPyDeprecationWarning)``
This is the recommended way to test that ``SymPyDeprecationWarning`` is
emitted for deprecated features in SymPy. To test for other warnings use
``warns``. To suppress warnings without asserting that they are emitted
use ``ignore_warnings``.
.. note::
``warns_deprecated_sympy()`` is only intended for internal use in the
SymPy test suite to test that a deprecation warning triggers properly.
All other code in the SymPy codebase, including documentation examples,
should not use deprecated behavior.
If you are a user of SymPy and you want to disable
SymPyDeprecationWarnings, use ``warnings`` filters (see
:ref:`silencing-sympy-deprecation-warnings`).
>>> from sympy.testing.pytest import warns_deprecated_sympy
>>> from sympy.utilities.exceptions import SymPyDeprecationWarning
>>> with warns_deprecated_sympy():
... SymPyDeprecationWarning("Don't use", feature="old thing",
... deprecated_since_version="1.0", issue=123).warn()
>>> with warns_deprecated_sympy():
... pass
Traceback (most recent call last):
...
Failed: DID NOT WARN. No warnings of type \
SymPyDeprecationWarning was emitted. The list of emitted warnings is: [].
| 143 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def warns_deprecated_sympy():
with warns(SymPyDeprecationWarning):
yield
@contextlib.contextmanager
```
###Assistant :
Shorthand for ``warns(SymPyDeprecationWarning)``
This is the recommended way to test that ``SymPyDeprecationWarning`` is
emitted for deprecated features in SymPy. To test for other warnings use
``warns``. To suppress warnings without asserting that they are emitted
use ``ignore_warnings``.
.. note::
``warns_deprecated_sympy()`` is only intended for internal use in the
SymPy test suite to test that a deprecation warning triggers properly.
All other code in the SymPy codebase, including documentation examples,
should not use deprecated behavior.
If you are a user of SymPy and you want to disable
SymPyDeprecationWarnings, use ``warnings`` filters (see
:ref:`silencing-sympy-deprecation-warnings`).
>>> from sympy.testing.pytest import warns_deprecated_sympy
>>> from sympy.utilities.exceptions import SymPyDeprecationWarning
>>> with warns_deprecated_sympy():
... SymPyDeprecationWarning("Don't use", feature="old thing",
... deprecated_since_version="1.0", issue=123).warn()
>>> with warns_deprecated_sympy():
... pass
Traceback (most recent call last):
...
Failed: DID NOT WARN. No warnings of type \
SymPyDeprecationWarning was emitted. The list of emitted warnings is: [].
|
2,547 | def assert_array_equal(x, y, err_msg='', verbose=True, *, strict=False):
__tracebackhide__ = True # Hide traceback for py.test
assert_array_compare(operator.__eq__, x, y, err_msg=err_msg,
verbose=verbose, header='Arrays are not equal',
strict=strict)
|
Raises an AssertionError if two array_like objects are not equal.
Given two array_like objects, check that the shape is equal and all
elements of these objects are equal (but see the Notes for the special
handling of a scalar). An exception is raised at shape mismatch or
conflicting values. In contrast to the standard usage in numpy, NaNs
are compared like numbers, no assertion is raised if both objects have
NaNs in the same positions.
The usual caution for verifying equality with floating point numbers is
advised.
Parameters
----------
x : array_like
The actual object to check.
y : array_like
The desired, expected object.
err_msg : str, optional
The error message to be printed in case of failure.
verbose : bool, optional
If True, the conflicting values are appended to the error message.
strict : bool, optional
If True, raise an AssertionError when either the shape or the data
type of the array_like objects does not match. The special
handling for scalars mentioned in the Notes section is disabled.
Raises
------
AssertionError
If actual and desired objects are not equal.
See Also
--------
assert_allclose: Compare two array_like objects for equality with desired
relative and/or absolute precision.
assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal
Notes
-----
When one of `x` and `y` is a scalar and the other is array_like, the
function checks that each element of the array_like object is equal to
the scalar. This behaviour can be disabled with the `strict` parameter.
Examples
--------
The first assert does not raise an exception:
>>> np.testing.assert_array_equal([1.0,2.33333,np.nan],
... [np.exp(0),2.33333, np.nan])
Assert fails with numerical imprecision with floats:
>>> np.testing.assert_array_equal([1.0,np.pi,np.nan],
... [1, np.sqrt(np.pi)**2, np.nan])
Traceback (most recent call last):
...
AssertionError:
Arrays are not equal
<BLANKLINE>
Mismatched elements: 1 / 3 (33.3%)
Max absolute difference: 4.4408921e-16
Max relative difference: 1.41357986e-16
x: array([1. , 3.141593, nan])
y: array([1. , 3.141593, nan])
Use `assert_allclose` or one of the nulp (number of floating point values)
functions for these cases instead:
>>> np.testing.assert_allclose([1.0,np.pi,np.nan],
... [1, np.sqrt(np.pi)**2, np.nan],
... rtol=1e-10, atol=0)
As mentioned in the Notes section, `assert_array_equal` has special
handling for scalars. Here the test checks that each value in `x` is 3:
>>> x = np.full((2, 5), fill_value=3)
>>> np.testing.assert_array_equal(x, 3)
Use `strict` to raise an AssertionError when comparing a scalar with an
array:
>>> np.testing.assert_array_equal(x, 3, strict=True)
Traceback (most recent call last):
...
AssertionError:
Arrays are not equal
<BLANKLINE>
(shapes (2, 5), () mismatch)
x: array([[3, 3, 3, 3, 3],
[3, 3, 3, 3, 3]])
y: array(3)
The `strict` parameter also ensures that the array data types match:
>>> x = np.array([2, 2, 2])
>>> y = np.array([2., 2., 2.], dtype=np.float32)
>>> np.testing.assert_array_equal(x, y, strict=True)
Traceback (most recent call last):
...
AssertionError:
Arrays are not equal
<BLANKLINE>
(dtypes int64, float32 mismatch)
x: array([2, 2, 2])
y: array([2., 2., 2.], dtype=float32)
| 461 | 25 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def assert_array_equal(x, y, err_msg='', verbose=True, *, strict=False):
__tracebackhide__ = True # Hide traceback for py.test
assert_array_compare(operator.__eq__, x, y, err_msg=err_msg,
verbose=verbose, header='Arrays are not equal',
strict=strict)
```
###Assistant :
Raises an AssertionError if two array_like objects are not equal.
Given two array_like objects, check that the shape is equal and all
elements of these objects are equal (but see the Notes for the special
handling of a scalar). An exception is raised at shape mismatch or
conflicting values. In contrast to the standard usage in numpy, NaNs
are compared like numbers, no assertion is raised if both objects have
NaNs in the same positions.
The usual caution for verifying equality with floating point numbers is
advised.
Parameters
----------
x : array_like
The actual object to check.
y : array_like
The desired, expected object.
err_msg : str, optional
The error message to be printed in case of failure.
verbose : bool, optional
If True, the conflicting values are appended to the error message.
strict : bool, optional
If True, raise an AssertionError when either the shape or the data
type of the array_like objects does not match. The special
handling for scalars mentioned in the Notes section is disabled.
Raises
------
AssertionError
If actual and desired objects are not equal.
See Also
--------
assert_allclose: Compare two array_like objects for equality with desired
relative and/or absolute precision.
assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal
Notes
-----
When one of `x` and `y` is a scalar and the other is array_like, the
function checks that each element of the array_like object is equal to
the scalar. This behaviour can be disabled with the `strict` parameter.
Examples
--------
The first assert does not raise an exception:
>>> np.testing.assert_array_equal([1.0,2.33333,np.nan],
... [np.exp(0),2.33333, np.nan])
Assert fails with numerical imprecision with floats:
>>> np.testing.assert_array_equal([1.0,np.pi,np.nan],
... [1, np.sqrt(np.pi)**2, np.nan])
Traceback (most recent call last):
...
AssertionError:
Arrays are not equal
<BLANKLINE>
Mismatched elements: 1 / 3 (33.3%)
Max absolute difference: 4.4408921e-16
Max relative difference: 1.41357986e-16
x: array([1. , 3.141593, nan])
y: array([1. , 3.141593, nan])
Use `assert_allclose` or one of the nulp (number of floating point values)
functions for these cases instead:
>>> np.testing.assert_allclose([1.0,np.pi,np.nan],
... [1, np.sqrt(np.pi)**2, np.nan],
... rtol=1e-10, atol=0)
As mentioned in the Notes section, `assert_array_equal` has special
handling for scalars. Here the test checks that each value in `x` is 3:
>>> x = np.full((2, 5), fill_value=3)
>>> np.testing.assert_array_equal(x, 3)
Use `strict` to raise an AssertionError when comparing a scalar with an
array:
>>> np.testing.assert_array_equal(x, 3, strict=True)
Traceback (most recent call last):
...
AssertionError:
Arrays are not equal
<BLANKLINE>
(shapes (2, 5), () mismatch)
x: array([[3, 3, 3, 3, 3],
[3, 3, 3, 3, 3]])
y: array(3)
The `strict` parameter also ensures that the array data types match:
>>> x = np.array([2, 2, 2])
>>> y = np.array([2., 2., 2.], dtype=np.float32)
>>> np.testing.assert_array_equal(x, y, strict=True)
Traceback (most recent call last):
...
AssertionError:
Arrays are not equal
<BLANKLINE>
(dtypes int64, float32 mismatch)
x: array([2, 2, 2])
y: array([2., 2., 2.], dtype=float32)
|
2,548 | def update_from_data_x(self, x, ignore=None):
x = np.ravel(x)
self.update_from_data_xy(np.column_stack([x, np.ones(x.size)]),
ignore=ignore, updatey=False)
|
Update the x-bounds of the `Bbox` based on the passed in data. After
updating, the bounds will have positive *width*, and *x0* will be the
minimal value.
Parameters
----------
x : `~numpy.ndarray`
Array of x-values.
ignore : bool, optional
- When ``True``, ignore the existing bounds of the `Bbox`.
- When ``False``, include the existing bounds of the `Bbox`.
- When ``None``, use the last value passed to :meth:`ignore`.
| 69 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def update_from_data_x(self, x, ignore=None):
x = np.ravel(x)
self.update_from_data_xy(np.column_stack([x, np.ones(x.size)]),
ignore=ignore, updatey=False)
```
###Assistant :
Update the x-bounds of the `Bbox` based on the passed in data. After
updating, the bounds will have positive *width*, and *x0* will be the
minimal value.
Parameters
----------
x : `~numpy.ndarray`
Array of x-values.
ignore : bool, optional
- When ``True``, ignore the existing bounds of the `Bbox`.
- When ``False``, include the existing bounds of the `Bbox`.
- When ``None``, use the last value passed to :meth:`ignore`.
|
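A hedged usage sketch, assuming Matplotlib; `Bbox.null()` yields an empty box, so `ignore=True` seeds the bounds from the data alone:
```Python
# Hedged usage sketch for Bbox.update_from_data_x (Matplotlib assumed).
import numpy as np
from matplotlib.transforms import Bbox

bbox = Bbox.null()  # start from an empty bbox
bbox.update_from_data_x(np.array([2.0, -1.0, 5.0]), ignore=True)
print(bbox.x0, bbox.x1)  # -1.0 5.0 ; the y-bounds are left untouched
```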
2,549 | def test_https_malformed_referer(self):
malformed_referer_msg = "Referer checking failed - Referer is malformed."
req = self._get_POST_request_with_token()
req._is_secure_override = True
req.META["HTTP_REFERER"] = "http://http://www.example.com/"
mw = CsrfViewMiddleware(post_form_view)
self._check_referer_rejects(mw, req)
response = mw.process_view(req, post_form_view, (), {})
self.assertContains(
response,
"Referer checking failed - Referer is insecure while host is secure.",
status_code=403,
)
# Empty
req.META["HTTP_REFERER"] = ""
self._check_referer_rejects(mw, req)
response = mw.process_view(req, post_form_view, (), {})
self.assertContains(response, malformed_referer_msg, status_code=403)
# Non-ASCII
req.META["HTTP_REFERER"] = "ØBöIß"
self._check_referer_rejects(mw, req)
response = mw.process_view(req, post_form_view, (), {})
self.assertContains(response, malformed_referer_msg, status_code=403)
# missing scheme
# >>> urlparse('//example.com/')
# ParseResult(scheme='', netloc='example.com', path='/', params='', query='', fragment='')
req.META["HTTP_REFERER"] = "//example.com/"
self._check_referer_rejects(mw, req)
response = mw.process_view(req, post_form_view, (), {})
self.assertContains(response, malformed_referer_msg, status_code=403)
# missing netloc
# >>> urlparse('https://')
# ParseResult(scheme='https', netloc='', path='', params='', query='', fragment='')
req.META["HTTP_REFERER"] = "https://"
self._check_referer_rejects(mw, req)
response = mw.process_view(req, post_form_view, (), {})
self.assertContains(response, malformed_referer_msg, status_code=403)
# Invalid URL
# >>> urlparse('https://[')
# ValueError: Invalid IPv6 URL
req.META["HTTP_REFERER"] = "https://["
self._check_referer_rejects(mw, req)
response = mw.process_view(req, post_form_view, (), {})
self.assertContains(response, malformed_referer_msg, status_code=403)
|
A POST HTTPS request with a bad referer is rejected.
| 10 | 157 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_https_malformed_referer(self):
malformed_referer_msg = "Referer checking failed - Referer is malformed."
req = self._get_POST_request_with_token()
req._is_secure_override = True
req.META["HTTP_REFERER"] = "http://http://www.example.com/"
mw = CsrfViewMiddleware(post_form_view)
self._check_referer_rejects(mw, req)
response = mw.process_view(req, post_form_view, (), {})
self.assertContains(
response,
"Referer checking failed - Referer is insecure while host is secure.",
status_code=403,
)
# Empty
req.META["HTTP_REFERER"] = ""
self._check_referer_rejects(mw, req)
response = mw.process_view(req, post_form_view, (), {})
self.assertContains(response, malformed_referer_msg, status_code=403)
# Non-ASCII
req.META["HTTP_REFERER"] = "ØBöIß"
self._check_referer_rejects(mw, req)
response = mw.process_view(req, post_form_view, (), {})
self.assertContains(response, malformed_referer_msg, status_code=403)
# missing scheme
# >>> urlparse('//example.com/')
# ParseResult(scheme='', netloc='example.com', path='/', params='', query='', fragment='')
req.META["HTTP_REFERER"] = "//example.com/"
self._check_referer_rejects(mw, req)
response = mw.process_view(req, post_form_view, (), {})
self.assertContains(response, malformed_referer_msg, status_code=403)
# missing netloc
# >>> urlparse('https://')
# ParseResult(scheme='https', netloc='', path='', params='', query='', fragment='')
req.META["HTTP_REFERER"] = "https://"
self._check_referer_rejects(mw, req)
response = mw.process_view(req, post_form_view, (), {})
self.assertContains(response, malformed_referer_msg, status_code=403)
# Invalid URL
# >>> urlparse('https://[')
# ValueError: Invalid IPv6 URL
req.META["HTTP_REFERER"] = "https://["
self._check_referer_rejects(mw, req)
response = mw.process_view(req, post_form_view, (), {})
self.assertContains(response, malformed_referer_msg, status_code=403)
```
###Assistant :
A POST HTTPS request with a bad referer is rejected.
|
2,550 | def test_missing_cpp_namespace(self) -> None:
yaml_str =
output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
self.assertExpectedInline(output_error, )
| \
backend: XLA
supported:
- abs
You must provide a value for "cpp_namespace" | 12 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_missing_cpp_namespace(self) -> None:
yaml_str =
output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
self.assertExpectedInline(output_error, )
```
###Assistant : \
backend: XLA
supported:
- abs
You must provide a value for "cpp_namespace"
|
2,551 | def single_source_dijkstra_path_length(G, source, cutoff=None, weight="weight"):
return multi_source_dijkstra_path_length(G, {source}, cutoff=cutoff, weight=weight)
| Find shortest weighted path lengths in G from a source node.
Compute the shortest path length between source and all other
reachable nodes for a weighted graph.
Parameters
----------
G : NetworkX graph
source : node label
Starting node for path
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
weight : string or function
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number or None to indicate a hidden edge.
Returns
-------
length : dict
Dict keyed by node to shortest path length from source.
Raises
------
NodeNotFound
If `source` is not in `G`.
Examples
--------
>>> G = nx.path_graph(5)
>>> length = nx.single_source_dijkstra_path_length(G, 0)
>>> length[4]
4
>>> for node in [0, 1, 2, 3, 4]:
... print(f"{node}: {length[node]}")
0: 0
1: 1
2: 2
3: 3
4: 4
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
The weight function can be used to hide edges by returning None.
So ``weight = lambda u, v, d: 1 if d['color']=="red" else None``
will find the shortest red path.
See Also
--------
single_source_dijkstra, single_source_bellman_ford_path_length
| 289 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def single_source_dijkstra_path_length(G, source, cutoff=None, weight="weight"):
return multi_source_dijkstra_path_length(G, {source}, cutoff=cutoff, weight=weight)
```
###Assistant : Find shortest weighted path lengths in G from a source node.
Compute the shortest path length between source and all other
reachable nodes for a weighted graph.
Parameters
----------
G : NetworkX graph
source : node label
Starting node for path
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
weight : string or function
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number or None to indicate a hidden edge.
Returns
-------
length : dict
Dict keyed by node to shortest path length from source.
Raises
------
NodeNotFound
If `source` is not in `G`.
Examples
--------
>>> G = nx.path_graph(5)
>>> length = nx.single_source_dijkstra_path_length(G, 0)
>>> length[4]
4
>>> for node in [0, 1, 2, 3, 4]:
... print(f"{node}: {length[node]}")
0: 0
1: 1
2: 2
3: 3
4: 4
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
The weight function can be used to hide edges by returning None.
So ``weight = lambda u, v, d: 1 if d['color']=="red" else None``
will find the shortest red path.
See Also
--------
single_source_dijkstra, single_source_bellman_ford_path_length
|
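A minimal usage sketch of the two `weight` forms described above; the graph, the weights, and the `color` attribute are illustrative values, not taken from the source.

```Python
import networkx as nx

# Small weighted graph; weights and the "color" attribute are
# illustrative values chosen for this sketch.
G = nx.Graph()
G.add_edge("a", "b", weight=2, color="red")
G.add_edge("b", "c", weight=5, color="red")
G.add_edge("a", "c", weight=1, color="blue")

# String form: weights come from the "weight" edge attribute.
dist = nx.single_source_dijkstra_path_length(G, "a")
print(dist["c"])  # 1 -- the direct blue edge wins

# Function form: returning None hides an edge, so this restricts the
# search to red edges only (the "shortest red path" idea from the Notes).
red_only = lambda u, v, d: d["weight"] if d["color"] == "red" else None
print(nx.single_source_dijkstra_path_length(G, "a", weight=red_only)["c"])  # 7
```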
2,552 | def print_help(self):
has_ticker_start = "" if self.ticker else "[unvl]"
has_ticker_end = "" if self.ticker else "[/unvl]"
help_text = f
console.print(text=help_text, menu="Stocks - Behavioural Analysis")
| [cmds]
load load a specific stock ticker for analysis
[param]Ticker: [/param]{self.ticker.upper() or None}
{has_ticker_start}
[src][Finbrain][/src]
headlines sentiment from 15+ major news headlines
[src][Finnhub][/src]
stats sentiment stats including comparison with sector{has_ticker_end}
[src][Reddit][/src]
wsb show what WSB gang is up to in subreddit wallstreetbets
watchlist show other users watchlist
popular show popular tickers
spac_c show other users spacs announcements from subreddit SPACs community
spac show other users spacs announcements from other subs{has_ticker_start}
getdd gets due diligence from another user's post{has_ticker_end}
[src][Stocktwits][/src]
trending trending stocks
stalker stalk stocktwits user's last messages{has_ticker_start}
bullbear estimate quick sentiment from last 30 messages on board
messages output up to the last 30 messages on the board
[src][Twitter][/src]
infer infer about stock's sentiment from latest tweets
sentiment in-depth sentiment prediction from tweets over time
[src][Google][/src]
mentions interest over time based on stock's mentions
regions regions that show highest interest in stock
queries top related queries with this stock
rise top rising related queries with stock{has_ticker_end}
[src][SentimentInvestor][/src]
popularsi show most popular stocks on social media right now
emerging show stocks that are being talked about more than usual{has_ticker_start}
metrics core social sentiment metrics for this stock
social social media figures for stock popularity
historical plot the past week of data for a selected metric{has_ticker_end}[/cmds]
| 205 | 24 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def print_help(self):
has_ticker_start = "" if self.ticker else "[unvl]"
has_ticker_end = "" if self.ticker else "[/unvl]"
help_text = f
console.print(text=help_text, menu="Stocks - Behavioural Analysis")
```
###Assistant : [cmds]
load load a specific stock ticker for analysis
[param]Ticker: [/param]{self.ticker.upper() or None}
{has_ticker_start}
[src][Finbrain][/src]
headlines sentiment from 15+ major news headlines
[src][Finnhub][/src]
stats sentiment stats including comparison with sector{has_ticker_end}
[src][Reddit][/src]
wsb show what WSB gang is up to in subreddit wallstreetbets
watchlist show other users watchlist
popular show popular tickers
spac_c show other users spacs announcements from subreddit SPACs community
spac show other users spacs announcements from other subs{has_ticker_start}
getdd gets due diligence from another user's post{has_ticker_end}
[src][Stocktwits][/src]
trending trending stocks
stalker stalk stocktwits user's last messages{has_ticker_start}
bullbear estimate quick sentiment from last 30 messages on board
messages output up to the last 30 messages on the board
[src][Twitter][/src]
infer infer about stock's sentiment from latest tweets
sentiment in-depth sentiment prediction from tweets over time
[src][Google][/src]
mentions interest over time based on stock's mentions
regions regions that show highest interest in stock
queries top related queries with this stock
rise top rising related queries with stock{has_ticker_end}
[src][SentimentInvestor][/src]
popularsi show most popular stocks on social media right now
emerging show stocks that are being talked about more than usual{has_ticker_start}
metrics core social sentiment metrics for this stock
social social media figures for stock popularity
historical plot the past week of data for a selected metric{has_ticker_end}[/cmds]
|
2,553 | def iterate_instructions(code_object):
# The arg extension the EXTENDED_ARG opcode represents is automatically handled by get_instructions() but the
# instruction is left in. Get rid of it to make subsequent parsing easier/safer.
yield from (i for i in get_instructions(code_object) if i.opname != "EXTENDED_ARG")
yield None
# For each constant in this code object that is itself a code object,
# parse this constant in the same manner.
for constant in code_object.co_consts:
if inspect.iscode(constant):
yield from iterate_instructions(constant)
| Delivers the byte-code instructions as a continuous stream.
Yields `dis.Instruction`. After each code-block (`co_code`), `None` is
yielded to mark the end of the block and to interrupt the stream.
| 29 | 75 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def iterate_instructions(code_object):
# The arg extension the EXTENDED_ARG opcode represents is automatically handled by get_instructions() but the
# instruction is left in. Get rid of it to make subsequent parsing easier/safer.
yield from (i for i in get_instructions(code_object) if i.opname != "EXTENDED_ARG")
yield None
# For each constant in this code object that is itself a code object,
# parse this constant in the same manner.
for constant in code_object.co_consts:
if inspect.iscode(constant):
yield from iterate_instructions(constant)
```
###Assistant : Delivers the byte-code instructions as a continuous stream.
Yields `dis.Instruction`. After each code-block (`co_code`), `None` is
yielded to mark the end of the block and to interrupt the stream.
|
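The generator above is easy to exercise on a freshly compiled snippet; this self-contained sketch repeats the helper body so it runs standalone, and shows the `None` markers separating the module's block from the nested function's block. The exact opcodes printed vary by Python version.

```Python
import inspect
from dis import get_instructions

def iterate_instructions(code_object):
    # Body repeated from the entry above so the sketch runs standalone.
    yield from (i for i in get_instructions(code_object) if i.opname != "EXTENDED_ARG")
    yield None
    for constant in code_object.co_consts:
        if inspect.iscode(constant):
            yield from iterate_instructions(constant)

code = compile("def f():\n    return 1\n", "<example>", "exec")
for instr in iterate_instructions(code):
    if instr is None:
        print("--- end of code block ---")
    else:
        print(instr.opname, instr.argval)
```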
2,554 | def fit(self) -> ResultGrid:
if not self._is_ray_client:
try:
return self._local_tuner.fit()
except Exception as e:
raise TuneError(
f"Tune run failed. "
f'Please use tuner = Tuner.restore("'
f'{self._local_tuner.get_experiment_checkpoint_dir()}") to resume.'
) from e
else:
experiment_checkpoint_dir = ray.get(
self._remote_tuner.get_experiment_checkpoint_dir.remote()
)
try:
return ray.get(self._remote_tuner.fit.remote())
except Exception as e:
raise TuneError(
f"Tune run failed. "
f'Please use tuner = Tuner.restore("'
f'{experiment_checkpoint_dir}") to resume.'
) from e
| Executes hyperparameter tuning job as configured and returns result.
Failure handling:
For the kind of exception that happens during the execution of a trial,
one may inspect it together with stacktrace through the returned result grid.
See ``ResultGrid`` for reference. Each trial may fail up to a certain number.
This is configured by ``RunConfig.FailureConfig.max_failures``.
Exception that happens beyond trials will be thrown by this method as well.
In such cases, there will be instruction like the following printed out
at the end of console output to inform users on how to resume.
Please use tuner = Tuner.restore("~/ray_results/tuner_resume")
to resume.
Raises:
RayTaskError: If user-provided trainable raises an exception
TuneError: General Ray Tune error.
| 112 | 61 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def fit(self) -> ResultGrid:
if not self._is_ray_client:
try:
return self._local_tuner.fit()
except Exception as e:
raise TuneError(
f"Tune run failed. "
f'Please use tuner = Tuner.restore("'
f'{self._local_tuner.get_experiment_checkpoint_dir()}") to resume.'
) from e
else:
experiment_checkpoint_dir = ray.get(
self._remote_tuner.get_experiment_checkpoint_dir.remote()
)
try:
return ray.get(self._remote_tuner.fit.remote())
except Exception as e:
raise TuneError(
f"Tune run failed. "
f'Please use tuner = Tuner.restore("'
f'{experiment_checkpoint_dir}") to resume.'
) from e
```
###Assistant : Executes hyperparameter tuning job as configured and returns result.
Failure handling:
For the kind of exception that happens during the execution of a trial,
one may inspect it together with stacktrace through the returned result grid.
See ``ResultGrid`` for reference. Each trial may fail up to a certain number.
This is configured by ``RunConfig.FailureConfig.max_failures``.
Exception that happens beyond trials will be thrown by this method as well.
In such cases, there will be instruction like the following printed out
at the end of console output to inform users on how to resume.
Please use tuner = Tuner.restore("~/ray_results/tuner_resume")
to resume.
Raises:
RayTaskError: If user-provided trainable raises an exception
TuneError: General Ray Tune error.
|
2,555 | def _handle_coordinator_update(self) -> None:
self._refresh()
super()._handle_coordinator_update()
|
Handle updated data from the coordinator.
Tests fail without this method.
| 11 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _handle_coordinator_update(self) -> None:
self._refresh()
super()._handle_coordinator_update()
```
###Assistant :
Handle updated data from the coordinator.
Tests fail without this method.
|
2,556 | def _skew_1d(self, column, bias=True, nan_policy="propagate"):
# import depends on scipy, not installed by default
from dask.array import stats as da_stats
if pd.Int64Dtype.is_dtype(column._meta_nonempty):
column = column.astype("f8")
if not np.issubdtype(column.dtype, np.number):
column = column.astype("f8")
name = self._token_prefix + "skew-1d-" + tokenize(column)
array_skew = da_stats.skew(
column.values, axis=0, bias=bias, nan_policy=nan_policy
)
layer = {(name, 0): (methods.wrap_skew_reduction, (array_skew._name,), None)}
graph = HighLevelGraph.from_collections(name, layer, dependencies=[array_skew])
return new_dd_object(
graph, name, column._meta_nonempty.skew(), divisions=[None, None]
)
| 1D version of the skew calculation.
Uses the array version from da.stats in case we are passing in a single series
| 21 | 67 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _skew_1d(self, column, bias=True, nan_policy="propagate"):
# import depends on scipy, not installed by default
from dask.array import stats as da_stats
if pd.Int64Dtype.is_dtype(column._meta_nonempty):
column = column.astype("f8")
if not np.issubdtype(column.dtype, np.number):
column = column.astype("f8")
name = self._token_prefix + "skew-1d-" + tokenize(column)
array_skew = da_stats.skew(
column.values, axis=0, bias=bias, nan_policy=nan_policy
)
layer = {(name, 0): (methods.wrap_skew_reduction, (array_skew._name,), None)}
graph = HighLevelGraph.from_collections(name, layer, dependencies=[array_skew])
return new_dd_object(
graph, name, column._meta_nonempty.skew(), divisions=[None, None]
)
```
###Assistant : 1D version of the skew calculation.
Uses the array version from da.stats in case we are passing in a single series
|
2,557 | def test_device_classes_aligned():
for device_class in NumberDeviceClass:
assert hasattr(SensorDeviceClass, device_class.name)
assert getattr(SensorDeviceClass, device_class.name).value == device_class.value
| Make sure all number device classes are also available in SensorDeviceClass. | 11 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_device_classes_aligned():
for device_class in NumberDeviceClass:
assert hasattr(SensorDeviceClass, device_class.name)
assert getattr(SensorDeviceClass, device_class.name).value == device_class.value
```
###Assistant : Make sure all number device classes are also available in SensorDeviceClass.
|
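The same alignment check works for any pair of enums where one must mirror the other; the toy enums below stand in for the real Home Assistant classes.

```Python
from enum import Enum

# Stand-ins for NumberDeviceClass / SensorDeviceClass; the real Home
# Assistant enums are much larger, these members are illustrative.
class NumberDeviceClass(Enum):
    TEMPERATURE = "temperature"
    HUMIDITY = "humidity"

class SensorDeviceClass(Enum):
    TEMPERATURE = "temperature"
    HUMIDITY = "humidity"
    PRESSURE = "pressure"  # extra members on the superset side are fine

for device_class in NumberDeviceClass:
    assert hasattr(SensorDeviceClass, device_class.name)
    assert getattr(SensorDeviceClass, device_class.name).value == device_class.value
print("all number device classes are mirrored in SensorDeviceClass")
```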
2,558 | def intersection_all(graphs):
R = None
for i, G in enumerate(graphs):
G_nodes_set = set(G.nodes)
G_edges_set = set(G.edges(keys=True) if G.is_multigraph() else G.edges())
if i == 0:
# create new graph
R = G.__class__()
node_intersection = G_nodes_set
edge_intersection = G_edges_set
elif G.is_multigraph() != R.is_multigraph():
raise nx.NetworkXError("All graphs must be graphs or multigraphs.")
else:
node_intersection &= G_nodes_set
edge_intersection &= G_edges_set
R.graph.update(G.graph)
if R is None:
raise ValueError("cannot apply intersection_all to an empty list")
R.add_nodes_from(node_intersection)
R.add_edges_from(edge_intersection)
return R
| Returns a new graph that contains only the nodes and the edges that exist in
all graphs.
Parameters
----------
graphs : iterable
Iterable of NetworkX graphs
Returns
-------
R : A new graph with the same type as the first graph in list
Raises
------
ValueError
If `graphs` is an empty list.
Notes
-----
Attributes from the graph, nodes, and edges are not copied to the new
graph.
| 68 | 73 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def intersection_all(graphs):
R = None
for i, G in enumerate(graphs):
G_nodes_set = set(G.nodes)
G_edges_set = set(G.edges(keys=True) if G.is_multigraph() else G.edges())
if i == 0:
# create new graph
R = G.__class__()
node_intersection = G_nodes_set
edge_intersection = G_edges_set
elif G.is_multigraph() != R.is_multigraph():
raise nx.NetworkXError("All graphs must be graphs or multigraphs.")
else:
node_intersection &= G_nodes_set
edge_intersection &= G_edges_set
R.graph.update(G.graph)
if R is None:
raise ValueError("cannot apply intersection_all to an empty list")
R.add_nodes_from(node_intersection)
R.add_edges_from(edge_intersection)
return R
```
###Assistant : Returns a new graph that contains only the nodes and the edges that exist in
all graphs.
Parameters
----------
graphs : iterable
Iterable of NetworkX graphs
Returns
-------
R : A new graph with the same type as the first graph in list
Raises
------
ValueError
If `graphs` is an empty list.
Notes
-----
Attributes from the graph, nodes, and edges are not copied to the new
graph.
|
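A short usage sketch for the function above; the two graphs are illustrative.

```Python
import networkx as nx

G = nx.Graph([(1, 2), (2, 3), (3, 4)])
H = nx.Graph([(2, 3), (3, 4), (4, 5)])

R = nx.intersection_all([G, H])
print(sorted(R.nodes))  # [2, 3, 4]  -- nodes present in every graph
print(sorted(R.edges))  # [(2, 3), (3, 4)]  -- edges present in every graph
```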
2,559 | def _looks_like_red_hat_scheme() -> bool:
from distutils.command.install import install
from distutils.dist import Distribution
cmd: Any = install(Distribution())
cmd.finalize_options()
return (
cmd.exec_prefix == f"{os.path.normpath(sys.exec_prefix)}/local"
and cmd.prefix == f"{os.path.normpath(sys.prefix)}/local"
)
@functools.lru_cache(maxsize=None) | Red Hat patches ``sys.prefix`` and ``sys.exec_prefix``.
Red Hat's ``00251-change-user-install-location.patch`` changes the install
command's ``prefix`` and ``exec_prefix`` to append ``"/local"``. This is
(fortunately?) done quite unconditionally, so we create a default command
object without any configuration to detect this.
| 38 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _looks_like_red_hat_scheme() -> bool:
from distutils.command.install import install
from distutils.dist import Distribution
cmd: Any = install(Distribution())
cmd.finalize_options()
return (
cmd.exec_prefix == f"{os.path.normpath(sys.exec_prefix)}/local"
and cmd.prefix == f"{os.path.normpath(sys.prefix)}/local"
)
@functools.lru_cache(maxsize=None)
```
###Assistant : Red Hat patches ``sys.prefix`` and ``sys.exec_prefix``.
Red Hat's ``00251-change-user-install-location.patch`` changes the install
command's ``prefix`` and ``exec_prefix`` to append ``"/local"``. This is
(fortunately?) done quite unconditionally, so we create a default command
object without any configuration to detect this.
|
2,560 | def _create_sql_query(self) -> str:
escaper = ParamEscaper()
maybe_with = ""
if self._encryption is not None or self._credential is not None:
maybe_encryption = ""
if self._encryption is not None:
maybe_encryption = self._generate_options("ENCRYPTION", escaper, self._encryption, False)
maybe_credential = ""
if self._credential is not None:
maybe_credential = self._generate_options("CREDENTIAL", escaper, self._credential, False)
maybe_with = f" WITH ({maybe_credential} {maybe_encryption})"
location = escaper.escape_item(self._file_location) + maybe_with
if self._expression_list is not None:
location = f"(SELECT {self._expression_list} FROM {location})"
files_or_pattern = ""
if self._pattern is not None:
files_or_pattern = f"PATTERN = {escaper.escape_item(self._pattern)}\n"
elif self._files is not None:
files_or_pattern = f"FILES = {escaper.escape_item(self._files)}\n"
format_options = self._generate_options("FORMAT_OPTIONS", escaper, self._format_options) + "\n"
copy_options = self._generate_options("COPY_OPTIONS", escaper, self._copy_options) + "\n"
validation = ""
if self._validate is not None:
if isinstance(self._validate, bool):
if self._validate:
validation = "VALIDATE ALL\n"
elif isinstance(self._validate, int):
if self._validate < 0:
raise AirflowException(
"Number of rows for validation should be positive, got: " + str(self._validate)
)
validation = f"VALIDATE {self._validate} ROWS\n"
else:
raise AirflowException("Incorrect data type for validate parameter: " + type(self._validate))
# TODO: think on how to make sure that table_name and expression_list aren't used for SQL injection
sql = f
return sql.strip()
| COPY INTO {self._table_name}
FROM {location}
FILEFORMAT = {self._file_format}
{validation}{files_or_pattern}{format_options}{copy_options}
| 9 | 184 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _create_sql_query(self) -> str:
escaper = ParamEscaper()
maybe_with = ""
if self._encryption is not None or self._credential is not None:
maybe_encryption = ""
if self._encryption is not None:
maybe_encryption = self._generate_options("ENCRYPTION", escaper, self._encryption, False)
maybe_credential = ""
if self._credential is not None:
maybe_credential = self._generate_options("CREDENTIAL", escaper, self._credential, False)
maybe_with = f" WITH ({maybe_credential} {maybe_encryption})"
location = escaper.escape_item(self._file_location) + maybe_with
if self._expression_list is not None:
location = f"(SELECT {self._expression_list} FROM {location})"
files_or_pattern = ""
if self._pattern is not None:
files_or_pattern = f"PATTERN = {escaper.escape_item(self._pattern)}\n"
elif self._files is not None:
files_or_pattern = f"FILES = {escaper.escape_item(self._files)}\n"
format_options = self._generate_options("FORMAT_OPTIONS", escaper, self._format_options) + "\n"
copy_options = self._generate_options("COPY_OPTIONS", escaper, self._copy_options) + "\n"
validation = ""
if self._validate is not None:
if isinstance(self._validate, bool):
if self._validate:
validation = "VALIDATE ALL\n"
elif isinstance(self._validate, int):
if self._validate < 0:
raise AirflowException(
"Number of rows for validation should be positive, got: " + str(self._validate)
)
validation = f"VALIDATE {self._validate} ROWS\n"
else:
raise AirflowException("Incorrect data type for validate parameter: " + type(self._validate))
# TODO: think on how to make sure that table_name and expression_list aren't used for SQL injection
sql = f
return sql.strip()
```
###Assistant : COPY INTO {self._table_name}
FROM {location}
FILEFORMAT = {self._file_format}
{validation}{files_or_pattern}{format_options}{copy_options}
|
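For orientation, this is roughly the shape of statement the method assembles. The table, location, credential, and option values below are hypothetical, and the real output depends on which optional arguments are set.

```Python
# Hypothetical rendering for illustration only -- not produced by
# running the operator; all names and option values are made up.
example_sql = """COPY INTO my_schema.my_table
FROM 's3://my-bucket/incoming/' WITH (CREDENTIAL (AWS_ROLE_ARN = '...'))
FILEFORMAT = JSON
VALIDATE 10 ROWS
PATTERN = '*.json'
FORMAT_OPTIONS ('inferSchema' = 'true')
COPY_OPTIONS ('force' = 'true')
"""
print(example_sql)
```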
2,561 | def predict(self, x, **kwargs):
proba = self.model.predict(x, **kwargs)
if proba.shape[-1] > 1:
classes = proba.argmax(axis=-1)
else:
classes = (proba > 0.5).astype("int32")
return self.classes_[classes]
| Returns the class predictions for the given test data.
Args:
x: array-like, shape `(n_samples, n_features)`
Test samples where `n_samples` is the number of samples
and `n_features` is the number of features.
**kwargs: dictionary arguments
Legal arguments are the arguments
of `Sequential.predict`.
Returns:
preds: array-like, shape `(n_samples,)`
Class predictions.
| 48 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def predict(self, x, **kwargs):
proba = self.model.predict(x, **kwargs)
if proba.shape[-1] > 1:
classes = proba.argmax(axis=-1)
else:
classes = (proba > 0.5).astype("int32")
return self.classes_[classes]
```
###Assistant : Returns the class predictions for the given test data.
Args:
x: array-like, shape `(n_samples, n_features)`
Test samples where `n_samples` is the number of samples
and `n_features` is the number of features.
**kwargs: dictionary arguments
Legal arguments are the arguments
of `Sequential.predict`.
Returns:
preds: array-like, shape `(n_samples,)`
Class predictions.
|
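The decision rule in the wrapper above, isolated with plain NumPy: multi-column (softmax-style) outputs use argmax, while single-column (sigmoid-style) outputs use a 0.5 threshold. The probabilities are made-up values.

```Python
import numpy as np

multi = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1]])   # last dim > 1 -> argmax
binary = np.array([[0.3], [0.9]])     # single column -> 0.5 threshold

print(multi.argmax(axis=-1))                    # [1 0]
print((binary > 0.5).astype("int32").ravel())   # [0 1]
```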
2,562 | def resize_image_type0(self, img):
limit_side_len = self.max_side_len
h, w, _ = img.shape
# limit the max side
if max(h, w) > limit_side_len:
if h > w:
ratio = float(limit_side_len) / h
else:
ratio = float(limit_side_len) / w
else:
ratio = 1.
resize_h = int(h * ratio)
resize_w = int(w * ratio)
resize_h = int(round(resize_h / 32) * 32)
resize_w = int(round(resize_w / 32) * 32)
try:
if int(resize_w) <= 0 or int(resize_h) <= 0:
return None, (None, None)
img = cv2.resize(img, (int(resize_w), int(resize_h)))
except:
print(img.shape, resize_w, resize_h)
sys.exit(0)
ratio_h = resize_h / float(h)
ratio_w = resize_w / float(w)
# return img, np.array([h, w])
return img, [ratio_h, ratio_w]
|
resize image to a size multiple of 32 which is required by the network
args:
img(array): array with shape [h, w, c]
return(tuple):
img, (ratio_h, ratio_w)
| 26 | 106 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def resize_image_type0(self, img):
limit_side_len = self.max_side_len
h, w, _ = img.shape
# limit the max side
if max(h, w) > limit_side_len:
if h > w:
ratio = float(limit_side_len) / h
else:
ratio = float(limit_side_len) / w
else:
ratio = 1.
resize_h = int(h * ratio)
resize_w = int(w * ratio)
resize_h = int(round(resize_h / 32) * 32)
resize_w = int(round(resize_w / 32) * 32)
try:
if int(resize_w) <= 0 or int(resize_h) <= 0:
return None, (None, None)
img = cv2.resize(img, (int(resize_w), int(resize_h)))
except:
print(img.shape, resize_w, resize_h)
sys.exit(0)
ratio_h = resize_h / float(h)
ratio_w = resize_w / float(w)
# return img, np.array([h, w])
return img, [ratio_h, ratio_w]
```
###Assistant :
resize image to a size multiple of 32 which is required by the network
args:
img(array): array with shape [h, w, c]
return(tuple):
img, (ratio_h, ratio_w)
|
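The sizing arithmetic above, extracted into a standalone helper: cap the longer side, then round both sides to the nearest multiple of 32 (the network's stride requirement). `max_side_len=960` is an assumed default for this sketch.

```Python
def resize_dims(h, w, max_side_len=960):
    # Cap the longer side at max_side_len, then round both sides to the
    # nearest multiple of 32, mirroring the method above.
    ratio = float(max_side_len) / max(h, w) if max(h, w) > max_side_len else 1.0
    resize_h = int(round(int(h * ratio) / 32) * 32)
    resize_w = int(round(int(w * ratio) / 32) * 32)
    return resize_h, resize_w

print(resize_dims(720, 1280))  # (544, 960)
```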
2,563 | def metrics(self):
metrics = []
if self._is_compiled:
# TODO(omalleyt): Track `LossesContainer` and `MetricsContainer` objects
# so that attr names are not load-bearing.
if self.compiled_loss is not None:
metrics += self.compiled_loss.metrics
if self.compiled_metrics is not None:
metrics += self.compiled_metrics.metrics
for l in self._flatten_layers():
metrics.extend(l._metrics) # pylint: disable=protected-access
return metrics
| Returns the model's metrics added using `compile()`, `add_metric()` APIs.
Note: Metrics passed to `compile()` are available only after a `keras.Model`
has been trained/evaluated on actual data.
Examples:
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
... inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
... tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
| 128 | 48 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def metrics(self):
metrics = []
if self._is_compiled:
# TODO(omalleyt): Track `LossesContainer` and `MetricsContainer` objects
# so that attr names are not load-bearing.
if self.compiled_loss is not None:
metrics += self.compiled_loss.metrics
if self.compiled_metrics is not None:
metrics += self.compiled_metrics.metrics
for l in self._flatten_layers():
metrics.extend(l._metrics) # pylint: disable=protected-access
return metrics
```
###Assistant : Returns the model's metrics added using `compile()`, `add_metric()` APIs.
Note: Metrics passed to `compile()` are available only after a `keras.Model`
has been trained/evaluated on actual data.
Examples:
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
... inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
... tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
|
2,564 | def set_level(request, level):
if not hasattr(request, "_messages"):
return False
request._messages.level = level
return True
|
Set the minimum level of messages to be recorded, and return ``True`` if
the level was recorded successfully.
If set to ``None``, use the default level (see the get_level() function).
| 30 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def set_level(request, level):
if not hasattr(request, "_messages"):
return False
request._messages.level = level
return True
```
###Assistant :
Set the minimum level of messages to be recorded, and return ``True`` if
the level was recorded successfully.
If set to ``None``, use the default level (see the get_level() function).
|
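In application code this is reached through Django's public messages API rather than called directly. A sketch, assuming a configured project with `MessageMiddleware` enabled (it will not run standalone):

```Python
from django.contrib import messages

def my_view(request):
    messages.set_level(request, messages.DEBUG)  # record everything, DEBUG up
    messages.debug(request, "now recorded")
    messages.set_level(request, None)           # restore the default level
    ...
```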
2,565 | def all_pairs_lowest_common_ancestor(G, pairs=None):
if not nx.is_directed_acyclic_graph(G):
raise nx.NetworkXError("LCA only defined on directed acyclic graphs.")
if len(G) == 0:
raise nx.NetworkXPointlessConcept("LCA meaningless on null graphs.")
if pairs is None:
pairs = combinations_with_replacement(G, 2)
else:
# Convert iterator to iterable, if necessary. Trim duplicates.
pairs = dict.fromkeys(pairs)
# Verify that each of the nodes in the provided pairs is in G
nodeset = set(G)
for pair in pairs:
if set(pair) - nodeset:
raise nx.NodeNotFound(
f"Node(s) {set(pair) - nodeset} from pair {pair} not in G."
)
# Once input validation is done, construct the generator | Return the lowest common ancestor of all pairs or the provided pairs
Parameters
----------
G : NetworkX directed graph
pairs : iterable of pairs of nodes, optional (default: all pairs)
The pairs of nodes of interest.
If None, will find the LCA of all pairs of nodes.
Yields
------
((node1, node2), lca) : 2-tuple
Where lca is least common ancestor of node1 and node2.
Note that for the default case, the order of the node pair is not considered,
e.g. you will not get both ``(a, b)`` and ``(b, a)``
Raises
------
NetworkXPointlessConcept
If `G` is null.
NetworkXError
If `G` is not a DAG.
Examples
--------
The default behavior is to yield the lowest common ancestor for all
possible combinations of nodes in `G`, including self-pairings:
>>> G = nx.DiGraph([(0, 1), (0, 3), (1, 2)])
>>> dict(nx.all_pairs_lowest_common_ancestor(G))
{(0, 0): 0, (0, 1): 0, (0, 3): 0, (0, 2): 0, (1, 1): 1, (1, 3): 0, (1, 2): 1, (3, 3): 3, (3, 2): 0, (2, 2): 2}
The pairs argument can be used to limit the output to only the
specified node pairings:
>>> dict(nx.all_pairs_lowest_common_ancestor(G, pairs=[(1, 2), (2, 3)]))
{(1, 2): 1, (2, 3): 0}
Notes
-----
Only defined on non-null directed acyclic graphs.
See Also
--------
lowest_common_ancestor
| 208 | 92 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def all_pairs_lowest_common_ancestor(G, pairs=None):
if not nx.is_directed_acyclic_graph(G):
raise nx.NetworkXError("LCA only defined on directed acyclic graphs.")
if len(G) == 0:
raise nx.NetworkXPointlessConcept("LCA meaningless on null graphs.")
if pairs is None:
pairs = combinations_with_replacement(G, 2)
else:
# Convert iterator to iterable, if necessary. Trim duplicates.
pairs = dict.fromkeys(pairs)
# Verify that each of the nodes in the provided pairs is in G
nodeset = set(G)
for pair in pairs:
if set(pair) - nodeset:
raise nx.NodeNotFound(
f"Node(s) {set(pair) - nodeset} from pair {pair} not in G."
)
# Once input validation is done, construct the generator
```
###Assistant : Return the lowest common ancestor of all pairs or the provided pairs
Parameters
----------
G : NetworkX directed graph
pairs : iterable of pairs of nodes, optional (default: all pairs)
The pairs of nodes of interest.
If None, will find the LCA of all pairs of nodes.
Yields
------
((node1, node2), lca) : 2-tuple
Where lca is least common ancestor of node1 and node2.
Note that for the default case, the order of the node pair is not considered,
e.g. you will not get both ``(a, b)`` and ``(b, a)``
Raises
------
NetworkXPointlessConcept
If `G` is null.
NetworkXError
If `G` is not a DAG.
Examples
--------
The default behavior is to yield the lowest common ancestor for all
possible combinations of nodes in `G`, including self-pairings:
>>> G = nx.DiGraph([(0, 1), (0, 3), (1, 2)])
>>> dict(nx.all_pairs_lowest_common_ancestor(G))
{(0, 0): 0, (0, 1): 0, (0, 3): 0, (0, 2): 0, (1, 1): 1, (1, 3): 0, (1, 2): 1, (3, 3): 3, (3, 2): 0, (2, 2): 2}
The pairs argument can be used to limit the output to only the
specified node pairings:
>>> dict(nx.all_pairs_lowest_common_ancestor(G, pairs=[(1, 2), (2, 3)]))
{(1, 2): 1, (2, 3): 0}
Notes
-----
Only defined on non-null directed acyclic graphs.
See Also
--------
lowest_common_ancestor
|
2,566 | async def test_is_pickleable_after_start(self, task_runner):
task_runner.client_kwargs["set_as_default"] = True |
The task_runner must be picklable as it is attached to `PrefectFuture` objects
Reimplemented to set Dask client as default to allow unpickling
| 22 | 7 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_is_pickleable_after_start(self, task_runner):
task_runner.client_kwargs["set_as_default"] = True
```
###Assistant :
The task_runner must be picklable as it is attached to `PrefectFuture` objects
Reimplemented to set Dask client as default to allow unpickling
|
2,567 | def _add_callback_signalsafe(self, handle):
self._add_callback(handle)
self._write_to_self()
| Like _add_callback() but called from a signal handler. | 8 | 5 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _add_callback_signalsafe(self, handle):
self._add_callback(handle)
self._write_to_self()
```
###Assistant : Like _add_callback() but called from a signal handler.
|
2,568 | def get_local_ffmpeg() -> Optional[Path]:
ffmpeg_path = Path(
get_spotdl_path(), "ffmpeg" + ".exe" if platform.system() == "Windows" else ""
)
if ffmpeg_path.is_file():
return ffmpeg_path
return None
|
Get local ffmpeg binary path or None if not found.
| 10 | 24 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_local_ffmpeg() -> Optional[Path]:
ffmpeg_path = Path(
get_spotdl_path(), "ffmpeg" + ".exe" if platform.system() == "Windows" else ""
)
if ffmpeg_path.is_file():
return ffmpeg_path
return None
```
###Assistant :
Get local ffmpeg binary path or None if not found.
|
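Note that in the body above, `"ffmpeg" + ".exe" if platform.system() == "Windows" else ""` binds as `("ffmpeg" + ".exe") if ... else ""`, so on non-Windows systems the file name collapses to an empty string — almost certainly unintended. A generalized sketch with the suffix properly parenthesized:

```Python
import platform
from pathlib import Path

def local_binary(name: str, base_dir: Path):
    # Only the suffix is conditional; the base name is always kept.
    suffix = ".exe" if platform.system() == "Windows" else ""
    candidate = base_dir / (name + suffix)
    return candidate if candidate.is_file() else None

print(local_binary("ffmpeg", Path.home() / ".spotdl"))  # Path or None
```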
2,569 | def _render_cmd(cmd, cwd, template, saltenv=None, pillarenv=None, pillar_override=None):
if saltenv is None:
saltenv = __opts__.get("saltenv", "base")
if not template:
return (cmd, cwd)
# render the path as a template using path_template_engine as the engine
if template not in salt.utils.templates.TEMPLATE_REGISTRY:
raise CommandExecutionError(
"Attempted to render file paths with unavailable engine {}".format(template)
)
kwargs = {}
kwargs["salt"] = __salt__
if pillarenv is not None or pillar_override is not None:
pillarenv = pillarenv or __opts__["pillarenv"]
kwargs["pillar"] = _gather_pillar(pillarenv, pillar_override)
else:
kwargs["pillar"] = __pillar__
kwargs["grains"] = __grains__
kwargs["opts"] = __opts__
kwargs["saltenv"] = saltenv
|
If template is a valid template engine, process the cmd and cwd through
that engine.
| 15 | 88 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _render_cmd(cmd, cwd, template, saltenv=None, pillarenv=None, pillar_override=None):
if saltenv is None:
saltenv = __opts__.get("saltenv", "base")
if not template:
return (cmd, cwd)
# render the path as a template using path_template_engine as the engine
if template not in salt.utils.templates.TEMPLATE_REGISTRY:
raise CommandExecutionError(
"Attempted to render file paths with unavailable engine {}".format(template)
)
kwargs = {}
kwargs["salt"] = __salt__
if pillarenv is not None or pillar_override is not None:
pillarenv = pillarenv or __opts__["pillarenv"]
kwargs["pillar"] = _gather_pillar(pillarenv, pillar_override)
else:
kwargs["pillar"] = __pillar__
kwargs["grains"] = __grains__
kwargs["opts"] = __opts__
kwargs["saltenv"] = saltenv
```
###Assistant :
If template is a valid template engine, process the cmd and cwd through
that engine.
|
2,570 | def test_windows_1252(self) -> None:
html = b
tree = decode_body(html, "http://example.com/test.html")
og = parse_html_to_open_graph(tree, "http://example.com/test.html")
self.assertEqual(og, {"og:title": "ó", "og:description": "Some text."})
| A body which uses cp1252, but doesn't declare that.
<html>
<head><title>\xf3</title></head>
<body>
Some text.
</body>
</html>
| 16 | 21 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_windows_1252(self) -> None:
html = b
tree = decode_body(html, "http://example.com/test.html")
og = parse_html_to_open_graph(tree, "http://example.com/test.html")
self.assertEqual(og, {"og:title": "ó", "og:description": "Some text."})
```
###Assistant : A body which uses cp1252, but doesn't declare that.
<html>
<head><title>\xf3</title></head>
<body>
Some text.
</body>
</html>
|
2,571 | def square_root(value, default=_SENTINEL):
try:
return math.sqrt(float(value))
except (ValueError, TypeError):
if default is _SENTINEL:
raise_no_default("sqrt", value)
return default
| Filter and function to get square root of the value. | 10 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def square_root(value, default=_SENTINEL):
try:
return math.sqrt(float(value))
except (ValueError, TypeError):
if default is _SENTINEL:
raise_no_default("sqrt", value)
return default
```
###Assistant : Filter and function to get square root of the value.
|
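The `_SENTINEL` default lets callers pass `default=None` explicitly and still have that distinguished from "no default given". A standalone sketch of the pattern, with `raise_no_default` replaced by a plain `ValueError` so it runs on its own:

```Python
import math

_SENTINEL = object()  # unique marker: distinguishes "no default" from None

def square_root(value, default=_SENTINEL):
    try:
        return math.sqrt(float(value))
    except (ValueError, TypeError):
        if default is _SENTINEL:
            raise ValueError(f"sqrt: invalid input {value!r}") from None
        return default

print(square_root(9))           # 3.0
print(square_root("oops", 0))   # 0 -- explicit default suppresses the error
```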
2,572 | async def test_track_task_functions(event_loop):
hass = ha.HomeAssistant()
try:
assert hass._track_task
hass.async_stop_track_tasks()
assert not hass._track_task
hass.async_track_tasks()
assert hass._track_task
finally:
await hass.async_stop()
| Test function to start/stop track task and initial state. | 9 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_track_task_functions(event_loop):
hass = ha.HomeAssistant()
try:
assert hass._track_task
hass.async_stop_track_tasks()
assert not hass._track_task
hass.async_track_tasks()
assert hass._track_task
finally:
await hass.async_stop()
```
###Assistant : Test function to start/stop track task and initial state.
|
2,573 | def media_position_updated_at(self) -> datetime | None:
if self._device.movie.play_status in KALEIDESCAPE_PLAYING_STATES:
return utcnow()
return None
| When was the position of the current playing media valid. | 10 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def media_position_updated_at(self) -> datetime | None:
if self._device.movie.play_status in KALEIDESCAPE_PLAYING_STATES:
return utcnow()
return None
```
###Assistant : When was the position of the current playing media valid.
|
2,574 | def write(self, data):
if not isinstance(data, (bytes, bytearray, memoryview)):
raise TypeError(f"data: expecting a bytes-like instance, "
f"got {type(data).__name__}")
if not data:
return
self._ssl_protocol._write_appdata(data)
| Write some data bytes to the transport.
This does not block; it buffers the data and arranges for it
to be sent out asynchronously.
| 24 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def write(self, data):
if not isinstance(data, (bytes, bytearray, memoryview)):
raise TypeError(f"data: expecting a bytes-like instance, "
f"got {type(data).__name__}")
if not data:
return
self._ssl_protocol._write_appdata(data)
```
###Assistant : Write some data bytes to the transport.
This does not block; it buffers the data and arranges for it
to be sent out asynchronously.
|
2,575 | def _extract_color_tags(self):
tags = re.finditer(
r'<color\s+col="([^"]+)"(\s+offset="([^"]+)")?>(.+?)</color>',
self.original_text,
re.S,
)
colormap = []
for tag in tags:
start = self._count_real_chars(self.original_text[: tag.start(0)])
end = start + self._count_real_chars(tag.group(4))
offsets = tag.group(3).split(",") if tag.group(3) else [0]
start_offset = int(offsets[0]) if offsets[0] else 0
end_offset = int(offsets[1]) if len(offsets) == 2 and offsets[1] else 0
colormap.append(
{
"start": start,
"end": end,
"color": tag.group(1),
"start_offset": start_offset,
"end_offset": end_offset,
},
)
self.text = re.sub("<color[^>]+>(.+?)</color>", r"\1", self.text, 0, re.S)
return colormap
| Used to determine which parts (if any) of the string should be formatted
with a custom color.
Removes the ``<color>`` tag, as it is not part of Pango's markup and would cause an error.
Note: Using the ``<color>`` tags is deprecated. As soon as the legacy syntax is gone, this function
will be removed.
| 54 | 73 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _extract_color_tags(self):
tags = re.finditer(
r'<color\s+col="([^"]+)"(\s+offset="([^"]+)")?>(.+?)</color>',
self.original_text,
re.S,
)
colormap = []
for tag in tags:
start = self._count_real_chars(self.original_text[: tag.start(0)])
end = start + self._count_real_chars(tag.group(4))
offsets = tag.group(3).split(",") if tag.group(3) else [0]
start_offset = int(offsets[0]) if offsets[0] else 0
end_offset = int(offsets[1]) if len(offsets) == 2 and offsets[1] else 0
colormap.append(
{
"start": start,
"end": end,
"color": tag.group(1),
"start_offset": start_offset,
"end_offset": end_offset,
},
)
self.text = re.sub("<color[^>]+>(.+?)</color>", r"\1", self.text, 0, re.S)
return colormap
```
###Assistant : Used to determine which parts (if any) of the string should be formatted
with a custom color.
Removes the ``<color>`` tag, as it is not part of Pango's markup and would cause an error.
Note: Using the ``<color>`` tags is deprecated. As soon as the legacy syntax is gone, this function
will be removed.
|
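A quick demonstration of the tag syntax the method parses; the regexes are the ones used above, the sample markup is made up.

```Python
import re

sample = 'plain <color col="RED" offset="1,-1">tinted part</color> plain'

# Same pattern as in the method: captures color, optional offsets, body.
tag = re.search(r'<color\s+col="([^"]+)"(\s+offset="([^"]+)")?>(.+?)</color>',
                sample, re.S)
print(tag.group(1), tag.group(3), tag.group(4))  # RED 1,-1 tinted part

# Same substitution as in the method: strip the tag, keep its contents.
print(re.sub("<color[^>]+>(.+?)</color>", r"\1", sample, 0, re.S))
# plain tinted part plain
```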
2,576 | def write_readme(self, file_path, parametric_eq_peqs=None, fixed_band_eq_peq=None):
file_path = os.path.abspath(file_path)
dir_path = os.path.dirname(file_path)
model = self.name
# Write model
s = '# {}\n'.format(model)
s += 'See [usage instructions](https://github.com/jaakkopasanen/AutoEq#usage) for more options and info.\n\n'
# Add parametric EQ settings
if parametric_eq_peqs is not None:
s += '### Parametric EQs\n'
if len(parametric_eq_peqs) > 1:
compound = PEQ(self.frequency.copy(), parametric_eq_peqs[0].fs, [])
n = 0
filter_ranges = ''
preamps = ''
for i, peq in enumerate(parametric_eq_peqs):
for filt in peq.filters:
compound.add_filter(filt)
filter_ranges += f'1-{len(peq.filters) + n}'
preamps += f'{-compound.max_gain - 0.1:.1f} dB'
if i < len(parametric_eq_peqs) - 2:
filter_ranges += ', '
preamps += ', '
elif i == len(parametric_eq_peqs) - 2:
filter_ranges += ' or '
preamps += ' or '
n += len(peq.filters)
s += f'You can use filters {filter_ranges}. Apply preamp of {preamps}, respectively.\n\n'
else:
compound = PEQ(self.frequency.copy(), parametric_eq_peqs[0].fs, [])
for peq in parametric_eq_peqs:
for filt in peq.filters:
compound.add_filter(filt)
s += f'Apply preamp of -{compound.max_gain + 0.1:.1f} dB when using parametric equalizer.\n\n'
s += compound.markdown_table() + '\n\n'
# Add fixed band eq
if fixed_band_eq_peq is not None:
s += f'### Fixed Band EQs\nWhen using fixed band (also called graphic) equalizer, apply preamp of ' \
f'**-{fixed_band_eq_peq.max_gain + 0.1:.1f} dB** (if available) and set gains manually with these ' \
f'parameters.\n\n{fixed_band_eq_peq.markdown_table()}\n\n'
# Write image link
img_path = os.path.join(dir_path, model + '.png')
if os.path.isfile(img_path):
img_url = f'./{os.path.split(img_path)[1]}'
img_url = urllib.parse.quote(img_url, safe="%/:=&?~#+!$,;'@()*[]")
s += f'### Graphs\n\n'
# Write file
with open(file_path, 'w', encoding='utf-8') as f:
f.write(s)
| Writes README.md with picture and Equalizer APO settings. | 8 | 239 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def write_readme(self, file_path, parametric_eq_peqs=None, fixed_band_eq_peq=None):
file_path = os.path.abspath(file_path)
dir_path = os.path.dirname(file_path)
model = self.name
# Write model
s = '# {}\n'.format(model)
s += 'See [usage instructions](https://github.com/jaakkopasanen/AutoEq#usage) for more options and info.\n\n'
# Add parametric EQ settings
if parametric_eq_peqs is not None:
s += '### Parametric EQs\n'
if len(parametric_eq_peqs) > 1:
compound = PEQ(self.frequency.copy(), parametric_eq_peqs[0].fs, [])
n = 0
filter_ranges = ''
preamps = ''
for i, peq in enumerate(parametric_eq_peqs):
for filt in peq.filters:
compound.add_filter(filt)
filter_ranges += f'1-{len(peq.filters) + n}'
preamps += f'{-compound.max_gain - 0.1:.1f} dB'
if i < len(parametric_eq_peqs) - 2:
filter_ranges += ', '
preamps += ', '
elif i == len(parametric_eq_peqs) - 2:
filter_ranges += ' or '
preamps += ' or '
n += len(peq.filters)
s += f'You can use filters {filter_ranges}. Apply preamp of {preamps}, respectively.\n\n'
else:
compound = PEQ(self.frequency.copy(), parametric_eq_peqs[0].fs, [])
for peq in parametric_eq_peqs:
for filt in peq.filters:
compound.add_filter(filt)
s += f'Apply preamp of -{compound.max_gain + 0.1:.1f} dB when using parametric equalizer.\n\n'
s += compound.markdown_table() + '\n\n'
# Add fixed band eq
if fixed_band_eq_peq is not None:
s += f'### Fixed Band EQs\nWhen using fixed band (also called graphic) equalizer, apply preamp of ' \
f'**-{fixed_band_eq_peq.max_gain + 0.1:.1f} dB** (if available) and set gains manually with these ' \
f'parameters.\n\n{fixed_band_eq_peq.markdown_table()}\n\n'
# Write image link
img_path = os.path.join(dir_path, model + '.png')
if os.path.isfile(img_path):
img_url = f'./{os.path.split(img_path)[1]}'
img_url = urllib.parse.quote(img_url, safe="%/:=&?~#+!$,;'@()*[]")
s += f'### Graphs\n\n'
# Write file
with open(file_path, 'w', encoding='utf-8') as f:
f.write(s)
```
###Assistant : Writes README.md with picture and Equalizer APO settings.
|
2,577 | def _get_execution_environment():
if os.environ.get("CI", "False").lower() == "true":
execution_env = "ci"
elif "google.colab" in sys.modules:
execution_env = "colab"
elif "KUBERNETES_SERVICE_HOST" in os.environ:
execution_env = "kubernetes"
elif HAYSTACK_DOCKER_CONTAINER in os.environ:
execution_env = os.environ.get(HAYSTACK_DOCKER_CONTAINER)
# check if pytest is imported
elif "pytest" in sys.modules:
execution_env = "test"
else:
try:
execution_env = get_ipython().__class__.__name__ # pylint: disable=undefined-variable
except NameError:
execution_env = "script"
return execution_env
|
Identifies the execution environment that Haystack is running in.
Options are: colab notebook, kubernetes, CPU/GPU docker container, test environment, jupyter notebook, python script
| 23 | 59 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_execution_environment():
if os.environ.get("CI", "False").lower() == "true":
execution_env = "ci"
elif "google.colab" in sys.modules:
execution_env = "colab"
elif "KUBERNETES_SERVICE_HOST" in os.environ:
execution_env = "kubernetes"
elif HAYSTACK_DOCKER_CONTAINER in os.environ:
execution_env = os.environ.get(HAYSTACK_DOCKER_CONTAINER)
# check if pytest is imported
elif "pytest" in sys.modules:
execution_env = "test"
else:
try:
execution_env = get_ipython().__class__.__name__ # pylint: disable=undefined-variable
except NameError:
execution_env = "script"
return execution_env
```
###Assistant :
Identifies the execution environment that Haystack is running in.
Options are: colab notebook, kubernetes, CPU/GPU docker container, test environment, jupyter notebook, python script
|
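A trimmed, standalone variant of the detection above, keeping only the checks that need no Haystack-specific constants; `get_ipython` is injected into builtins by IPython/Jupyter, hence the `NameError` fallback.

```Python
import os
import sys

def detect_execution_env() -> str:
    if os.environ.get("CI", "False").lower() == "true":
        return "ci"
    if "google.colab" in sys.modules:
        return "colab"
    if "KUBERNETES_SERVICE_HOST" in os.environ:
        return "kubernetes"
    if "pytest" in sys.modules:
        return "test"
    try:
        # get_ipython is only defined when running under IPython/Jupyter.
        return get_ipython().__class__.__name__  # type: ignore[name-defined]
    except NameError:
        return "script"

print(detect_execution_env())
```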
2,578 | def _ensure_html_response(url, session):
# type: (str, PipSession) -> None
scheme, netloc, path, query, fragment = urllib.parse.urlsplit(url)
if scheme not in {'http', 'https'}:
raise _NotHTTP()
resp = session.head(url, allow_redirects=True)
raise_for_status(resp)
_ensure_html_header(resp)
| Send a HEAD request to the URL, and ensure the response contains HTML.
Raises `_NotHTTP` if the URL is not available for a HEAD request, or
`_NotHTML` if the content type is not text/html.
| 34 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _ensure_html_response(url, session):
# type: (str, PipSession) -> None
scheme, netloc, path, query, fragment = urllib.parse.urlsplit(url)
if scheme not in {'http', 'https'}:
raise _NotHTTP()
resp = session.head(url, allow_redirects=True)
raise_for_status(resp)
_ensure_html_header(resp)
```
###Assistant : Send a HEAD request to the URL, and ensure the response contains HTML.
Raises `_NotHTTP` if the URL is not available for a HEAD request, or
`_NotHTML` if the content type is not text/html.
|
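The same check expressed with plain `requests` instead of pip's internal session, as a sketch (the dedicated exception types are simplified to a boolean result):

```Python
import urllib.parse
import requests  # assumed available; pip's vendored session is not public API

def head_is_html(url: str, timeout: float = 10.0) -> bool:
    # HEAD the URL, fail on HTTP errors, then inspect Content-Type.
    scheme = urllib.parse.urlsplit(url).scheme
    if scheme not in {"http", "https"}:
        return False
    resp = requests.head(url, allow_redirects=True, timeout=timeout)
    resp.raise_for_status()
    return resp.headers.get("Content-Type", "").split(";")[0] == "text/html"
```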
2,579 | def _parse_item(self) -> Optional[Tuple[Optional[Key], Item]]:
self.mark()
with self._state as state:
while True:
c = self._current
if c == "\n":
# Found a newline; Return all whitespace found up to this point.
self.inc()
return None, Whitespace(self.extract())
elif c in " \t\r":
# Skip whitespace.
if not self.inc():
return None, Whitespace(self.extract())
elif c == "#":
# Found a comment, parse it
indent = self.extract()
cws, comment, trail = self._parse_comment_trail()
return None, Comment(Trivia(indent, cws, comment, trail))
elif c == "[":
# Found a table, delegate to the calling function.
return
else:
# Beginning of a KV pair.
# Return to beginning of whitespace so it gets included
# as indentation for the KV about to be parsed.
state.restore = True
break
return self._parse_key_value(True)
|
Attempts to parse the next item and returns it, along with its key
if the item is value-like.
| 18 | 120 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _parse_item(self) -> Optional[Tuple[Optional[Key], Item]]:
self.mark()
with self._state as state:
while True:
c = self._current
if c == "\n":
# Found a newline; Return all whitespace found up to this point.
self.inc()
return None, Whitespace(self.extract())
elif c in " \t\r":
# Skip whitespace.
if not self.inc():
return None, Whitespace(self.extract())
elif c == "#":
# Found a comment, parse it
indent = self.extract()
cws, comment, trail = self._parse_comment_trail()
return None, Comment(Trivia(indent, cws, comment, trail))
elif c == "[":
# Found a table, delegate to the calling function.
return
else:
# Beginning of a KV pair.
# Return to beginning of whitespace so it gets included
# as indentation for the KV about to be parsed.
state.restore = True
break
return self._parse_key_value(True)
```
###Assistant :
Attempts to parse the next item and returns it, along with its key
if the item is value-like.
|
2,580 | def _xreplace(self, rule):
if self in rule:
return rule[self], True
elif rule:
rule = self._dedupe_indices_in_rule(rule)
args = []
changed = False
for a in self.args:
_xreplace = getattr(a, '_xreplace', None)
if _xreplace is not None:
a_xr = _xreplace(rule)
args.append(a_xr[0])
changed |= a_xr[1]
else:
args.append(a)
args = tuple(args)
if changed:
return self.func(*args), True
return self, False
|
Helper for xreplace. Tracks whether a replacement actually occurred.
Given that the rule has entries {old:new, ...}, this handles the fact
that if a dummy index in new is the same as an index in self, the
dummy index in new must be renamed.
| 44 | 55 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _xreplace(self, rule):
if self in rule:
return rule[self], True
elif rule:
rule = self._dedupe_indices_in_rule(rule)
args = []
changed = False
for a in self.args:
_xreplace = getattr(a, '_xreplace', None)
if _xreplace is not None:
a_xr = _xreplace(rule)
args.append(a_xr[0])
changed |= a_xr[1]
else:
args.append(a)
args = tuple(args)
if changed:
return self.func(*args), True
return self, False
```
###Assistant :
Helper for xreplace. Tracks whether a replacement actually occurred.
Given that the rule has entries {old:new, ...}, this handles the fact
that if a dummy index in new is the same as an index in self, the
dummy index in new must be renamed.
|
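The helper above sits behind SymPy's public `xreplace`, which does exact structural (subtree) replacement rather than pattern matching; a small usage sketch:

```Python
from sympy import symbols

x, y, z = symbols("x y z")
expr = x + y

print(expr.xreplace({x: z}))      # y + z  -- x matched as a subtree
print(expr.xreplace({x * y: z}))  # x + y  -- x*y is not a subtree, unchanged
```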
2,581 | async def async_turn_on(self) -> None:
await self._client.play()
await self._update_playlists(no_throttle=True)
| Service to send the MPD the command to start playing. | 10 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def async_turn_on(self) -> None:
await self._client.play()
await self._update_playlists(no_throttle=True)
```
###Assistant : Service to send the MPD the command to start playing.
|
2,582 | def _build_network_on_replica(model, mode, inputs=None, targets=None):
# Need to do imports here since we run into a circular dependency error.
from keras import models # pylint: disable=g-import-not-at-top
from keras.engine import sequential # pylint: disable=g-import-not-at-top
# We rely on the internal methods to avoid having share_weights weights in the
# public API.
if isinstance(model, sequential.Sequential):
updated_model = models._clone_sequential_model(
model, input_tensors=inputs, layer_fn=models.share_weights
)
else:
updated_model = models._clone_functional_model(
model, input_tensors=inputs, layer_fn=models.share_weights
)
# Callable losses added directly to a functional Model need to be added
# here.
updated_model._callable_losses = model._callable_losses
# Recast all low precision outputs back to float32 since we only cast
# the inputs to bfloat16 and not targets. This is done so that we can preserve
# precision when calculating the loss value. | Build an updated model on replicas.
We create a new Keras model while sharing the variables from the old graph.
Building a new sub-graph is required since the original keras model creates
placeholders for the input and the output that are not accessible till we
call iterator.get_next() inside the step_fn for `fit`/`evaluate`/`predict`.
The sharing of weights and layers between the old and the new model guarantee
that we're using Strategy variables and any updates on either model are
reflected correctly in callbacks and loop iterations.
We need to make sure we share the optimizers between the old and the new model
as well so that optimizer state is not lost if the user is running fit
multiple times.
Args:
model: Model to be replicated across Replicas
mode: Which of fit/eval/predict is building the distributed network
inputs: Input variables to be passed to the model
targets: Target tensor to be passed to model.compile
Returns:
A new model with shared layers with the old model.
| 163 | 122 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _build_network_on_replica(model, mode, inputs=None, targets=None):
# Need to do imports here since we run into a circular dependency error.
from keras import models # pylint: disable=g-import-not-at-top
from keras.engine import sequential # pylint: disable=g-import-not-at-top
# We rely on the internal methods to avoid having share_weights weights in the
# public API.
if isinstance(model, sequential.Sequential):
updated_model = models._clone_sequential_model(
model, input_tensors=inputs, layer_fn=models.share_weights
)
else:
updated_model = models._clone_functional_model(
model, input_tensors=inputs, layer_fn=models.share_weights
)
# Callable losses added directly to a functional Model need to be added
# here.
updated_model._callable_losses = model._callable_losses
# Recast all low precision outputs back to float32 since we only cast
# the inputs to bfloat16 and not targets. This is done so that we can preserve
# precision when calculating the loss value.
```
###Assistant : Build an updated model on replicas.
We create a new Keras model while sharing the variables from the old graph.
Building a new sub-graph is required since the original keras model creates
placeholders for the input and the output that are not accessible till we
call iterator.get_next() inside the step_fn for `fit`/`evaluate`/`predict`.
The sharing of weights and layers between the old and the new model guarantee
that we're using Strategy variables and any updates on either model are
reflected correctly in callbacks and loop iterations.
We need to make sure we share the optimizers between the old and the new model
as well so that optimizer state is not lost if the user is running fit
multiple times.
Args:
model: Model to be replicated across Replicas
mode: Which of fit/eval/predict is building the distributed network
inputs: Input variables to be passed to the model
targets: Target tensor to be passed to model.compile
Returns:
A new model with shared layers with the old model.
|
2,583 | def can_jit_compile(warn=False):
if platform.system() == "Darwin" and "arm" in platform.processor().lower():
if warn:
logging.warning(
"Tensorflow is not compiled with XLA on Mac M1 Arm processors, "
"so cannot set `jit_compile` to True."
)
return False
return True
| Returns True if TensorFlow XLA is available for the platform. | 10 | 36 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def can_jit_compile(warn=False):
if platform.system() == "Darwin" and "arm" in platform.processor().lower():
if warn:
logging.warning(
"Tensorflow is not compiled with XLA on Mac M1 Arm processors, "
"so cannot set `jit_compile` to True."
)
return False
return True
```
###Assistant : Returns True if TensorFlow XLA is available for the platform.
|
2,584 | def __setstate__(self, state) -> None:
# TODO (sven): Validate that our config and the config in state are compatible.
# For example, the model architectures may differ.
# Also, what should the behavior be if e.g. some training parameter
# (e.g. lr) changed?
if hasattr(self, "workers") and "worker" in state:
self.workers.local_worker().set_state(state["worker"])
remote_state = ray.put(state["worker"])
for r in self.workers.remote_workers():
r.set_state.remote(remote_state)
if self.evaluation_workers:
# If evaluation workers are used, also restore the policies
# there in case they are used for evaluation purpose.
for r in self.evaluation_workers.remote_workers():
r.set_state.remote(remote_state)
# If necessary, restore replay data as well.
if self.local_replay_buffer is not None:
# TODO: Experimental functionality: Restore contents of replay
# buffer from checkpoint, only if user has configured this.
if self.config.get("store_buffer_in_checkpoints"):
if "local_replay_buffer" in state:
self.local_replay_buffer.set_state(state["local_replay_buffer"])
else:
logger.warning(
"`store_buffer_in_checkpoints` is True, but no replay "
"data found in state!"
)
elif "local_replay_buffer" in state and log_once(
"no_store_buffer_in_checkpoints_but_data_found"
):
logger.warning(
"`store_buffer_in_checkpoints` is False, but some replay "
"data found in state!"
)
if self.train_exec_impl is not None:
self.train_exec_impl.shared_metrics.get().restore(state["train_exec_impl"])
| Sets the algorithm to the provided state.
Args:
state: The state dict to restore this Algorithm instance to. `state` may
have been returned by a call to an Algorithm's `__getstate__()` method.
| 31 | 165 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __setstate__(self, state) -> None:
# TODO (sven): Validate that our config and the config in state are compatible.
# For example, the model architectures may differ.
# Also, what should the behavior be if e.g. some training parameter
# (e.g. lr) changed?
if hasattr(self, "workers") and "worker" in state:
self.workers.local_worker().set_state(state["worker"])
remote_state = ray.put(state["worker"])
for r in self.workers.remote_workers():
r.set_state.remote(remote_state)
if self.evaluation_workers:
# If evaluation workers are used, also restore the policies
# there in case they are used for evaluation purposes.
for r in self.evaluation_workers.remote_workers():
r.set_state.remote(remote_state)
# If necessary, restore replay data as well.
if self.local_replay_buffer is not None:
# TODO: Experimental functionality: Restore contents of replay
# buffer from checkpoint, only if user has configured this.
if self.config.get("store_buffer_in_checkpoints"):
if "local_replay_buffer" in state:
self.local_replay_buffer.set_state(state["local_replay_buffer"])
else:
logger.warning(
"`store_buffer_in_checkpoints` is True, but no replay "
"data found in state!"
)
elif "local_replay_buffer" in state and log_once(
"no_store_buffer_in_checkpoints_but_data_found"
):
logger.warning(
"`store_buffer_in_checkpoints` is False, but some replay "
"data found in state!"
)
if self.train_exec_impl is not None:
self.train_exec_impl.shared_metrics.get().restore(state["train_exec_impl"])
```
###Assistant : Sets the algorithm to the provided state.
Args:
state: The state dict to restore this Algorithm instance to. `state` may
have been returned by a call to an Algorithm's `__getstate__()` method.
|
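A toy sketch of the `__getstate__`/`__setstate__` contract the method above implements; the `Trainer` class here is a hypothetical stand-in, not RLlib's `Algorithm`.
```Python
import pickle

class Trainer:
    def __init__(self):
        self.weights = {"w": 0.0}  # stand-in for worker/policy state
        self.steps = 0

    def __getstate__(self):
        # Pack only serializable training state into the checkpoint dict.
        return {"worker": dict(self.weights), "steps": self.steps}

    def __setstate__(self, state):
        # Mirror __getstate__: rebuild attributes from the checkpoint dict.
        self.weights = state["worker"]
        self.steps = state["steps"]

old = Trainer()
old.weights["w"] = 1.5
old.steps = 100
restored = pickle.loads(pickle.dumps(old))  # round-trip through a checkpoint
assert restored.weights == {"w": 1.5} and restored.steps == 100
```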
2,585 | def test_http2_client_aborts(tctx, stream, when, how):
server = Placeholder(Server)
flow = Placeholder(HTTPFlow)
playbook, cff = start_h2_client(tctx)
resp = Placeholder(bytes)
|
Test handling of the case where a client aborts during request or response transmission.
If the client aborts the request transmission, we must trigger an error hook;
if the client disconnects during response transmission, no error hook is triggered.
| 39 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_http2_client_aborts(tctx, stream, when, how):
server = Placeholder(Server)
flow = Placeholder(HTTPFlow)
playbook, cff = start_h2_client(tctx)
resp = Placeholder(bytes)
```
###Assistant :
Test handling of the case where a client aborts during request or response transmission.
If the client aborts the request transmission, we must trigger an error hook;
if the client disconnects during response transmission, no error hook is triggered.
|
2,586 | def get_changelist_instance(self, request):
list_display = self.get_list_display(request)
list_display_links = self.get_list_display_links(request, list_display)
# Add the action checkboxes if any actions are available.
if self.get_actions(request):
list_display = ["action_checkbox", *list_display]
sortable_by = self.get_sortable_by(request)
ChangeList = self.get_changelist(request)
return ChangeList(
request,
self.model,
list_display,
list_display_links,
self.get_list_filter(request),
self.date_hierarchy,
self.get_search_fields(request),
self.get_list_select_related(request),
self.list_per_page,
self.list_max_show_all,
self.list_editable,
self,
sortable_by,
self.search_help_text,
)
|
Return a `ChangeList` instance based on `request`. May raise
`IncorrectLookupParameters`.
| 10 | 49 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_changelist_instance(self, request):
list_display = self.get_list_display(request)
list_display_links = self.get_list_display_links(request, list_display)
# Add the action checkboxes if any actions are available.
if self.get_actions(request):
list_display = ["action_checkbox", *list_display]
sortable_by = self.get_sortable_by(request)
ChangeList = self.get_changelist(request)
return ChangeList(
request,
self.model,
list_display,
list_display_links,
self.get_list_filter(request),
self.date_hierarchy,
self.get_search_fields(request),
self.get_list_select_related(request),
self.list_per_page,
self.list_max_show_all,
self.list_editable,
self,
sortable_by,
self.search_help_text,
)
```
###Assistant :
Return a `ChangeList` instance based on `request`. May raise
`IncorrectLookupParameters`.
|
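A sketch of how the per-request hooks consumed by `get_changelist_instance` are typically customised; it assumes a configured Django project and a hypothetical `Book` model.
```Python
from django.contrib import admin
from myapp.models import Book  # hypothetical app and model

@admin.register(Book)
class BookAdmin(admin.ModelAdmin):
    list_display = ["title", "author"]

    def get_list_display(self, request):
        # get_changelist_instance() calls hooks like this one per request,
        # so the ChangeList it returns reflects any per-user customisation.
        if request.user.is_superuser:
            return ["title", "author", "created"]
        return super().get_list_display(request)
```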
2,587 | def _use_cholesky(u, m, n, params):
a, b, c = params
_, N = u.shape
x = c * (u.T.conj() @ u) + jnp.eye(N, dtype=jnp.dtype(u))
# Pads the lower-right corner with the identity matrix to prevent the Cholesky
# decomposition from failing due to the matrix not being PSD if padded with
# zeros.
x = _mask(x, (n, n), jnp.eye(N, dtype=x.dtype))
# `y` is lower triangular.
y = lax_linalg.cholesky(x, symmetrize_input=False)
z = lax_linalg.triangular_solve(
y, u.T, left_side=True, lower=True, conjugate_a=True).conj()
z = lax_linalg.triangular_solve(y, z, left_side=True, lower=True,
transpose_a=True, conjugate_a=True).T.conj()
e = b / c
u = e * u + (a - e) * z
return u
| QDWH iteration using Cholesky decomposition.
Args:
u: a matrix, with static (padded) shape M x N
m, n: the dynamic shape of the matrix, where m <= M and n <= N.
params: the QDWH parameters.
| 36 | 103 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _use_cholesky(u, m, n, params):
a, b, c = params
_, N = u.shape
x = c * (u.T.conj() @ u) + jnp.eye(N, dtype=jnp.dtype(u))
# Pads the lower-right corner with the identity matrix to prevent the Cholesky
# decomposition from failing due to the matrix not being PSD if padded with
# zeros.
x = _mask(x, (n, n), jnp.eye(N, dtype=x.dtype))
# `y` is lower triangular.
y = lax_linalg.cholesky(x, symmetrize_input=False)
z = lax_linalg.triangular_solve(
y, u.T, left_side=True, lower=True, conjugate_a=True).conj()
z = lax_linalg.triangular_solve(y, z, left_side=True, lower=True,
transpose_a=True, conjugate_a=True).T.conj()
e = b / c
u = e * u + (a - e) * z
return u
```
###Assistant : QDWH iteration using Cholesky decomposition.
Args:
u: a matrix, with static (padded) shape M x N
m, n: the dynamic shape of the matrix, where m <= M and n <= N.
params: the QDWH parameters.
|
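For intuition, here is a NumPy sketch of the simpler Newton variant of polar iteration; it converges to the same unitary polar factor that the dynamically weighted QDWH update above targets, just more slowly.
```Python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4))

# Scale so the spectrum lies in (0, 1]; then iterate X <- (X + X^{-H}) / 2.
u = a / np.linalg.norm(a, 2)
for _ in range(30):
    u = 0.5 * (u + np.linalg.inv(u.conj().T))

# u is now (numerically) orthogonal: the unitary polar factor of a.
print(np.allclose(u.conj().T @ u, np.eye(4), atol=1e-8))  # True
```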
2,588 | def piecewise_integrate(self, x, **kwargs):
from sympy.integrals import integrate
return self.func(*[(integrate(e, x, **kwargs), c) for e, c in self.args])
| Return the Piecewise with each expression being
replaced with its antiderivative. To obtain a continuous
antiderivative, use the :func:`~.integrate` function or method.
Examples
========
>>> from sympy import Piecewise
>>> from sympy.abc import x
>>> p = Piecewise((0, x < 0), (1, x < 1), (2, True))
>>> p.piecewise_integrate(x)
Piecewise((0, x < 0), (x, x < 1), (2*x, True))
Note that this does not give a continuous function, e.g.
at x = 1 the 3rd condition applies and the antiderivative
there is 2*x so the value of the antiderivative is 2:
>>> anti = _
>>> anti.subs(x, 1)
2
The continuous antiderivative accounts for the integral *up to*
the point of interest, however:
>>> p.integrate(x)
Piecewise((0, x < 0), (x, x < 1), (2*x - 1, True))
>>> _.subs(x, 1)
1
See Also
========
Piecewise._eval_integral
| 135 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def piecewise_integrate(self, x, **kwargs):
from sympy.integrals import integrate
return self.func(*[(integrate(e, x, **kwargs), c) for e, c in self.args])
```
###Assistant : Return the Piecewise with each expression being
replaced with its antiderivative. To obtain a continuous
antiderivative, use the :func:`~.integrate` function or method.
Examples
========
>>> from sympy import Piecewise
>>> from sympy.abc import x
>>> p = Piecewise((0, x < 0), (1, x < 1), (2, True))
>>> p.piecewise_integrate(x)
Piecewise((0, x < 0), (x, x < 1), (2*x, True))
Note that this does not give a continuous function, e.g.
at x = 1 the 3rd condition applies and the antiderivative
there is 2*x so the value of the antiderivative is 2:
>>> anti = _
>>> anti.subs(x, 1)
2
The continuous antiderivative accounts for the integral *up to*
the point of interest, however:
>>> p.integrate(x)
Piecewise((0, x < 0), (x, x < 1), (2*x - 1, True))
>>> _.subs(x, 1)
1
See Also
========
Piecewise._eval_integral
|
2,589 | def check_send_to_ereader(entry):
formats = list()
book_formats = list()
if len(entry.data):
for ele in iter(entry.data):
if ele.uncompressed_size < config.mail_size:
formats.append(ele.format)
if 'EPUB' in formats:
book_formats.append({'format': 'Epub',
'convert': 0,
'text': _('Send %(format)s to E-Reader', format='Epub')})
if 'MOBI' in formats:
book_formats.append({'format': 'Mobi',
'convert': 0,
'text': _('Send %(format)s to E-Reader', format='Mobi')})
if 'PDF' in formats:
book_formats.append({'format': 'Pdf',
'convert': 0,
'text': _('Send %(format)s to E-Reader', format='Pdf')})
if 'AZW' in formats:
book_formats.append({'format': 'Azw',
'convert': 0,
'text': _('Send %(format)s to E-Reader', format='Azw')})
if config.config_converterpath:
book_formats.extend(check_send_to_ereader_with_converter(formats))
return book_formats
else:
log.error(u'Cannot find book entry %d', entry.id)
return None
# Check if a reader exists for any of the book formats; if not, return an empty list, otherwise return a
# list with supported formats |
returns all available book formats for sending to E-Reader
| 9 | 114 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def check_send_to_ereader(entry):
formats = list()
book_formats = list()
if len(entry.data):
for ele in iter(entry.data):
if ele.uncompressed_size < config.mail_size:
formats.append(ele.format)
if 'EPUB' in formats:
book_formats.append({'format': 'Epub',
'convert': 0,
'text': _('Send %(format)s to E-Reader', format='Epub')})
if 'MOBI' in formats:
book_formats.append({'format': 'Mobi',
'convert': 0,
'text': _('Send %(format)s to E-Reader', format='Mobi')})
if 'PDF' in formats:
book_formats.append({'format': 'Pdf',
'convert': 0,
'text': _('Send %(format)s to E-Reader', format='Pdf')})
if 'AZW' in formats:
book_formats.append({'format': 'Azw',
'convert': 0,
'text': _('Send %(format)s to E-Reader', format='Azw')})
if config.config_converterpath:
book_formats.extend(check_send_to_ereader_with_converter(formats))
return book_formats
else:
log.error(u'Cannot find book entry %d', entry.id)
return None
# Check if a reader exists for any of the book formats; if not, return an empty list, otherwise return a
# list with supported formats
```
###Assistant :
returns all available book formats for sending to E-Reader
|
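The four near-identical format blocks above could be driven from one table; a hypothetical standalone sketch (translation via `_()` is omitted here):
```Python
SENDABLE = ["EPUB", "MOBI", "PDF", "AZW"]

def ereader_options(formats):
    # One loop replaces the repeated if-blocks above.
    return [
        {"format": fmt.capitalize(), "convert": 0,
         "text": f"Send {fmt.capitalize()} to E-Reader"}
        for fmt in SENDABLE if fmt in formats
    ]

print(ereader_options(["EPUB", "PDF"]))
# [{'format': 'Epub', ...}, {'format': 'Pdf', ...}]
```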
2,590 | def Multinomial(syms, n, *p):
if not isinstance(p[0], list):
p = (list(p), )
return multivariate_rv(MultinomialDistribution, syms, n, p[0])
#-------------------------------------------------------------------------------
# Negative Multinomial Distribution --------------------------------------------
|
Creates a discrete random variable with Multinomial Distribution.
The density of the distribution can be found at [1].
Parameters
==========
n : Positive integer
Represents the number of trials
p : List of event probabilities
Must be in the range [0, 1]
Returns
=======
RandomSymbol
Examples
========
>>> from sympy.stats import density, Multinomial, marginal_distribution
>>> from sympy import symbols
>>> x1, x2, x3 = symbols('x1, x2, x3', nonnegative=True, integer=True)
>>> p1, p2, p3 = symbols('p1, p2, p3', positive=True)
>>> M = Multinomial('M', 3, p1, p2, p3)
>>> density(M)(x1, x2, x3)
Piecewise((6*p1**x1*p2**x2*p3**x3/(factorial(x1)*factorial(x2)*factorial(x3)),
Eq(x1 + x2 + x3, 3)), (0, True))
>>> marginal_distribution(M, M[0])(x1).subs(x1, 1)
3*p1*p2**2 + 6*p1*p2*p3 + 3*p1*p3**2
References
==========
.. [1] https://en.wikipedia.org/wiki/Multinomial_distribution
.. [2] http://mathworld.wolfram.com/MultinomialDistribution.html
| 117 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def Multinomial(syms, n, *p):
if not isinstance(p[0], list):
p = (list(p), )
return multivariate_rv(MultinomialDistribution, syms, n, p[0])
#-------------------------------------------------------------------------------
# Negative Multinomial Distribution --------------------------------------------
```
###Assistant :
Creates a discrete random variable with Multinomial Distribution.
The density of the distribution can be found at [1].
Parameters
==========
n : Positive integer
Represents the number of trials
p : List of event probabilities
Must be in the range [0, 1]
Returns
=======
RandomSymbol
Examples
========
>>> from sympy.stats import density, Multinomial, marginal_distribution
>>> from sympy import symbols
>>> x1, x2, x3 = symbols('x1, x2, x3', nonnegative=True, integer=True)
>>> p1, p2, p3 = symbols('p1, p2, p3', positive=True)
>>> M = Multinomial('M', 3, p1, p2, p3)
>>> density(M)(x1, x2, x3)
Piecewise((6*p1**x1*p2**x2*p3**x3/(factorial(x1)*factorial(x2)*factorial(x3)),
Eq(x1 + x2 + x3, 3)), (0, True))
>>> marginal_distribution(M, M[0])(x1).subs(x1, 1)
3*p1*p2**2 + 6*p1*p2*p3 + 3*p1*p3**2
References
==========
.. [1] https://en.wikipedia.org/wiki/Multinomial_distribution
.. [2] http://mathworld.wolfram.com/MultinomialDistribution.html
|
2,591 | def psi_n(n, x, m, omega):
# sympify arguments
n, x, m, omega = map(S, [n, x, m, omega])
nu = m * omega / hbar
# normalization coefficient
C = (nu/pi)**Rational(1, 4) * sqrt(1/(2**n*factorial(n)))
return C * exp(-nu* x**2 /2) * hermite(n, sqrt(nu)*x)
|
Returns the wavefunction psi_{n} for the One-dimensional harmonic oscillator.
Parameters
==========
n :
the "nodal" quantum number. Corresponds to the number of nodes in the
wavefunction. ``n >= 0``
x :
x coordinate.
m :
Mass of the particle.
omega :
Angular frequency of the oscillator.
Examples
========
>>> from sympy.physics.qho_1d import psi_n
>>> from sympy.abc import m, x, omega
>>> psi_n(0, x, m, omega)
(m*omega)**(1/4)*exp(-m*omega*x**2/(2*hbar))/(hbar**(1/4)*pi**(1/4))
| 66 | 43 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def psi_n(n, x, m, omega):
# sympify arguments
n, x, m, omega = map(S, [n, x, m, omega])
nu = m * omega / hbar
# normalization coefficient
C = (nu/pi)**Rational(1, 4) * sqrt(1/(2**n*factorial(n)))
return C * exp(-nu* x**2 /2) * hermite(n, sqrt(nu)*x)
```
###Assistant :
Returns the wavefunction psi_{n} for the One-dimensional harmonic oscillator.
Parameters
==========
n :
the "nodal" quantum number. Corresponds to the number of nodes in the
wavefunction. ``n >= 0``
x :
x coordinate.
m :
Mass of the particle.
omega :
Angular frequency of the oscillator.
Examples
========
>>> from sympy.physics.qho_1d import psi_n
>>> from sympy.abc import m, x, omega
>>> psi_n(0, x, m, omega)
(m*omega)**(1/4)*exp(-m*omega*x**2/(2*hbar))/(hbar**(1/4)*pi**(1/4))
|
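A quick SymPy sanity check (assuming these symbolic Gaussian integrals evaluate, which they should for positive `m` and `omega`): the n=0 state is normalized and orthogonal to the n=1 state.
```Python
from sympy import integrate, oo, symbols
from sympy.physics.qho_1d import psi_n

x = symbols('x', real=True)
m, omega = symbols('m omega', positive=True)

# Normalization of the ground state and orthogonality to the first excited state.
print(integrate(psi_n(0, x, m, omega)**2, (x, -oo, oo)))                       # 1
print(integrate(psi_n(0, x, m, omega) * psi_n(1, x, m, omega), (x, -oo, oo)))  # 0
```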
2,592 | def getsourcefile(object):
filename = getfile(object)
all_bytecode_suffixes = importlib.machinery.DEBUG_BYTECODE_SUFFIXES[:]
all_bytecode_suffixes += importlib.machinery.OPTIMIZED_BYTECODE_SUFFIXES[:]
if any(filename.endswith(s) for s in all_bytecode_suffixes):
filename = (os.path.splitext(filename)[0] +
importlib.machinery.SOURCE_SUFFIXES[0])
elif any(filename.endswith(s) for s in
importlib.machinery.EXTENSION_SUFFIXES):
return None
if os.path.exists(filename):
return filename
# only return a non-existent filename if the module has a PEP 302 loader
module = getmodule(object, filename)
if getattr(module, '__loader__', None) is not None:
return filename
elif getattr(getattr(module, "__spec__", None), "loader", None) is not None:
return filename
# or it is in the linecache
elif filename in linecache.cache:
return filename
| Return the filename that can be used to locate an object's source.
Return None if no way can be identified to get the source.
| 24 | 85 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def getsourcefile(object):
filename = getfile(object)
all_bytecode_suffixes = importlib.machinery.DEBUG_BYTECODE_SUFFIXES[:]
all_bytecode_suffixes += importlib.machinery.OPTIMIZED_BYTECODE_SUFFIXES[:]
if any(filename.endswith(s) for s in all_bytecode_suffixes):
filename = (os.path.splitext(filename)[0] +
importlib.machinery.SOURCE_SUFFIXES[0])
elif any(filename.endswith(s) for s in
importlib.machinery.EXTENSION_SUFFIXES):
return None
if os.path.exists(filename):
return filename
# only return a non-existent filename if the module has a PEP 302 loader
module = getmodule(object, filename)
if getattr(module, '__loader__', None) is not None:
return filename
elif getattr(getattr(module, "__spec__", None), "loader", None) is not None:
return filename
# or it is in the linecache
elif filename in linecache.cache:
return filename
```
###Assistant : Return the filename that can be used to locate an object's source.
Return None if no way can be identified to get the source.
|
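Usage through the public `inspect` wrapper; note that C builtins have no Python source, so the underlying `getfile()` raises `TypeError` for them.
```Python
import inspect
import json

print(inspect.getsourcefile(json))        # .../json/__init__.py
print(inspect.getsourcefile(json.dumps))  # same file, resolved via the function

try:
    inspect.getsourcefile(len)  # built-in: no Python source to locate
except TypeError as exc:
    print(exc)
```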
2,593 | def module_repr(self, module):
warnings.warn("importlib.abc.Loader.module_repr() is deprecated and "
"slated for removal in Python 3.12", DeprecationWarning)
# The exception will cause ModuleType.__repr__ to ignore this method.
raise NotImplementedError
| Return a module's repr.
Used by the module type when the method does not raise
NotImplementedError.
This method is deprecated.
| 20 | 27 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def module_repr(self, module):
warnings.warn("importlib.abc.Loader.module_repr() is deprecated and "
"slated for removal in Python 3.12", DeprecationWarning)
# The exception will cause ModuleType.__repr__ to ignore this method.
raise NotImplementedError
```
###Assistant : Return a module's repr.
Used by the module type when the method does not raise
NotImplementedError.
This method is deprecated.
|
2,594 | def test_need_validated_email(self):
with self.assertRaises(SynapseError) as cm:
self.get_success_or_raise(
self.hs.get_pusherpool().add_or_update_pusher(
user_id=self.user_id,
access_token=self.token_id,
kind="email",
app_id="m.email",
app_display_name="Email Notifications",
device_display_name="[email protected]",
pushkey="[email protected]",
lang=None,
data={},
)
)
self.assertEqual(400, cm.exception.code)
self.assertEqual(Codes.THREEPID_NOT_FOUND, cm.exception.errcode)
| Test that we can only add an email pusher if the user has validated
their email.
| 16 | 24 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_need_validated_email(self):
with self.assertRaises(SynapseError) as cm:
self.get_success_or_raise(
self.hs.get_pusherpool().add_or_update_pusher(
user_id=self.user_id,
access_token=self.token_id,
kind="email",
app_id="m.email",
app_display_name="Email Notifications",
device_display_name="[email protected]",
pushkey="[email protected]",
lang=None,
data={},
)
)
self.assertEqual(400, cm.exception.code)
self.assertEqual(Codes.THREEPID_NOT_FOUND, cm.exception.errcode)
```
###Assistant : Test that we can only add an email pusher if the user has validated
their email.
|
2,595 | def test_get_existing_comments(self):
doc = Document.objects.create(
title="test",
mime_type="application/pdf",
content="this is a document which will have comments!",
)
comment = Comment.objects.create(
comment="This is a comment.",
document=doc,
user=self.user,
)
response = self.client.get(
f"/api/documents/{doc.pk}/comments/",
format="json",
)
self.assertEqual(response.status_code, 200)
resp_data = response.json()
self.assertEqual(len(resp_data), 1)
resp_data = resp_data[0]
del resp_data["created"]
self.assertDictEqual(
resp_data,
{
"id": comment.id,
"comment": comment.comment,
"user": {
"id": comment.user.id,
"username": comment.user.username,
"firstname": comment.user.first_name,
"lastname": comment.user.last_name,
},
},
)
|
GIVEN:
- A document with a single comment
WHEN:
- API request for document comments is made
THEN:
- The associated comment is returned
| 24 | 64 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_get_existing_comments(self):
doc = Document.objects.create(
title="test",
mime_type="application/pdf",
content="this is a document which will have comments!",
)
comment = Comment.objects.create(
comment="This is a comment.",
document=doc,
user=self.user,
)
response = self.client.get(
f"/api/documents/{doc.pk}/comments/",
format="json",
)
self.assertEqual(response.status_code, 200)
resp_data = response.json()
self.assertEqual(len(resp_data), 1)
resp_data = resp_data[0]
del resp_data["created"]
self.assertDictEqual(
resp_data,
{
"id": comment.id,
"comment": comment.comment,
"user": {
"id": comment.user.id,
"username": comment.user.username,
"firstname": comment.user.first_name,
"lastname": comment.user.last_name,
},
},
)
```
###Assistant :
GIVEN:
- A document with a single comment
WHEN:
- API request for document comments is made
THEN:
- The associated comment is returned
|
2,596 | def collate_full_clips(batch):
max_mel_length = max([b[0].shape[1] for b in batch]) if len(batch) > 1 else batch[0][0].shape[1]
max_audio_length = max([b[1].shape[0] for b in batch]) if len(batch) > 1 else batch[0][1].shape[0]
mels = torch.zeros([len(batch), batch[0][0].shape[0], max_mel_length])
audios = torch.zeros([len(batch), max_audio_length])
for idx, b in enumerate(batch):
mel = b[0]
audio = b[1]
mels[idx, :, : mel.shape[1]] = mel
audios[idx, : audio.shape[0]] = audio
return mels, audios
| This is used in tune_wavegrad.py.
It pads sequences to the max length. | 12 | 62 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def collate_full_clips(batch):
max_mel_length = max([b[0].shape[1] for b in batch]) if len(batch) > 1 else batch[0][0].shape[1]
max_audio_length = max([b[1].shape[0] for b in batch]) if len(batch) > 1 else batch[0][1].shape[0]
mels = torch.zeros([len(batch), batch[0][0].shape[0], max_mel_length])
audios = torch.zeros([len(batch), max_audio_length])
for idx, b in enumerate(batch):
mel = b[0]
audio = b[1]
mels[idx, :, : mel.shape[1]] = mel
audios[idx, : audio.shape[0]] = audio
return mels, audios
```
###Assistant : This is used in tune_wavegrad.py.
It pads sequences to the max length.
|
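A usage sketch assuming `collate_full_clips` above is in scope: hand it to a `DataLoader` so variable-length (mel, audio) pairs are zero-padded into one batch.
```Python
import torch
from torch.utils.data import DataLoader

# Fake dataset: three clips of different lengths, 80 mel bins each.
clips = [(torch.randn(80, n), torch.randn(n * 256)) for n in (40, 55, 62)]

loader = DataLoader(clips, batch_size=3, collate_fn=collate_full_clips)
mels, audios = next(iter(loader))
print(mels.shape)    # torch.Size([3, 80, 62])
print(audios.shape)  # torch.Size([3, 15872])
```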
2,597 | def feature_test(self, name, force_flags=None, macros=[]):
if force_flags is None:
force_flags = self.feature_flags(name)
self.dist_log(
"testing feature '%s' with flags (%s)" % (
name, ' '.join(force_flags)
))
# Each CPU feature must have C source code containing at
# least one intrinsic or instruction related to this feature.
test_path = os.path.join(
self.conf_check_path, "cpu_%s.c" % name.lower()
)
if not os.path.exists(test_path):
self.dist_fatal("feature test file does not exist", test_path)
test = self.dist_test(
test_path, force_flags + self.cc_flags["werror"], macros=macros
)
if not test:
self.dist_log("testing failed", stderr=True)
return test
|
Test a certain CPU feature against the compiler through its own
check file.
Parameters
----------
name : str
Supported CPU feature name.
force_flags : list or None, optional
If None (default), the returned flags from `feature_flags()`
will be used.
macros : list of tuples, optional
A list of C macro definitions.
| 50 | 81 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def feature_test(self, name, force_flags=None, macros=[]):
if force_flags is None:
force_flags = self.feature_flags(name)
self.dist_log(
"testing feature '%s' with flags (%s)" % (
name, ' '.join(force_flags)
))
# Each CPU feature must have C source code containing at
# least one intrinsic or instruction related to this feature.
test_path = os.path.join(
self.conf_check_path, "cpu_%s.c" % name.lower()
)
if not os.path.exists(test_path):
self.dist_fatal("feature test file does not exist", test_path)
test = self.dist_test(
test_path, force_flags + self.cc_flags["werror"], macros=macros
)
if not test:
self.dist_log("testing failed", stderr=True)
return test
```
###Assistant :
Test a certain CPU feature against the compiler through its own
check file.
Parameters
----------
name : str
Supported CPU feature name.
force_flags : list or None, optional
If None(default), the returned flags from `feature_flags()`
will be used.
macros : list of tuples, optional
A list of C macro definitions.
|
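A standalone sketch of the same probe-by-compiling idea, outside numpy's build machinery; it assumes a `cc` compiler on PATH.
```Python
import os
import subprocess
import tempfile

def probe_flag(flag, src="int main(void) { return 0; }\n"):
    with tempfile.TemporaryDirectory() as tmp:
        c_path = os.path.join(tmp, "probe.c")
        with open(c_path, "w") as fh:
            fh.write(src)
        # -Werror mirrors the cc_flags["werror"] used above: warnings fail the probe.
        res = subprocess.run(
            ["cc", flag, "-Werror", "-o", os.devnull, c_path],
            capture_output=True,
        )
        return res.returncode == 0

print(probe_flag("-mavx2"))
```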
2,598 | def get_file_path(self) -> str:
if self.file_name is None:
raise ValueError("Must specify file for SVGMobject")
return get_full_vector_image_path(self.file_name)
| Search for an existing file based on the specified file name. | 11 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_file_path(self) -> str:
if self.file_name is None:
raise ValueError("Must specify file for SVGMobject")
return get_full_vector_image_path(self.file_name)
```
###Assistant : Search for an existing file based on the specified file name.
|
2,599 | def default_config(self) -> Dict[str, Any]:
base = super().default_config()
base["redis"] = {"enabled": True}
return base
|
Overrides the default config to enable Redis.
Even if the test only uses make_worker_hs, the main process needs Redis
enabled; otherwise it won't create a Fake Redis server to listen on the
Redis port and accept fake TCP connections.
| 39 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def default_config(self) -> Dict[str, Any]:
base = super().default_config()
base["redis"] = {"enabled": True}
return base
```
###Assistant :
Overrides the default config to enable Redis.
Even if the test only uses make_worker_hs, the main process needs Redis
enabled; otherwise it won't create a Fake Redis server to listen on the
Redis port and accept fake TCP connections.
|