Unnamed: 0 (int64, 0–2.93k) | code (string, 101–62.2k chars) | docs (string, 51–10.7k chars) | doc_len (int64, 4–1.74k) | words (int64, 4–4.82k) | lang (string, 1 class) | prompt (string, 320–71.2k chars)
---|---|---|---|---|---|---|
2,200 | def _setup_room_for_insertion_backfill_tests(self) -> _BackfillSetupInfo:
room_id = "!backfill-room-test:some-host"
depth_map: Dict[str, int] = {
"1": 1,
"2": 2,
"insertion_eventA": 3,
"3": 4,
"insertion_eventB": 5,
"4": 6,
"5": 7,
}
|
Sets up a room with various insertion event backward extremities to test
backfill functions against.
Returns:
_BackfillSetupInfo including the `room_id` to test against and
`depth_map` of events in the room
| 30 | 27 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _setup_room_for_insertion_backfill_tests(self) -> _BackfillSetupInfo:
room_id = "!backfill-room-test:some-host"
depth_map: Dict[str, int] = {
"1": 1,
"2": 2,
"insertion_eventA": 3,
"3": 4,
"insertion_eventB": 5,
"4": 6,
"5": 7,
}
```
###Assistant :
Sets up a room with various insertion event backward extremities to test
backfill functions against.
Returns:
_BackfillSetupInfo including the `room_id` to test against and
`depth_map` of events in the room
|
2,201 | def postprocessing(data):
if type_to_string(type(data)) == "torch.Tensor":
try:
import torch
from torchvision import transforms
# By default Torch tensors are displayed as images. To display them as JSON,
# the user can simply convert them to numpy arrays.
transformer = transforms.ToPILImage()
return transformer(torch.squeeze(data))
except ModuleNotFoundError:
logger.warning(
"Module `torchvision` isn't installed, unable to process torch tensor."
)
return data
return data
| Add support for types that are not supported by Gradio.
Some data types like PyTorch tensors, cannot be processed and displayed through
Gradio. Thus we extend support to these data types by transforming them into a form
that Gradio can process and display.
| 43 | 59 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def postprocessing(data):
if type_to_string(type(data)) == "torch.Tensor":
try:
import torch
from torchvision import transforms
# By default Torch tensors are displayed as images. To display them as JSON,
# the user can simply convert them to numpy arrays.
transformer = transforms.ToPILImage()
return transformer(torch.squeeze(data))
except ModuleNotFoundError:
logger.warning(
"Module `torchvision` isn't installed, unable to process torch tensor."
)
return data
return data
```
###Assistant : Add support for types that are not supported by Gradio.
Some data types like PyTorch tensors, cannot be processed and displayed through
Gradio. Thus we extend support to these data types by transforming them into a form
that Gradio can process and display.
|
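A minimal standalone sketch of the same tensor-to-image conversion, assuming `torch` and `torchvision` are installed (the tensor shape and values here are made up for illustration):
```Python
import torch
from torchvision import transforms

# A batched single RGB image in [0, 1]; squeeze drops the leading batch dimension
tensor = torch.rand(1, 3, 64, 64)
image = transforms.ToPILImage()(torch.squeeze(tensor))
print(image.size)  # (64, 64)
```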
2,202 | def get_scrap_item_details(bom_no):
scrap_items = {}
for item in frappe.db.sql(
,
bom_no,
as_dict=1,
):
scrap_items[item.item_code] = item.stock_qty
return scrap_items
| select item_code, stock_qty from `tabBOM Scrap Item`
where parent = %s | 11 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_scrap_item_details(bom_no):
scrap_items = {}
for item in frappe.db.sql(
,
bom_no,
as_dict=1,
):
scrap_items[item.item_code] = item.stock_qty
return scrap_items
```
###Assistant : select item_code, stock_qty from `tabBOM Scrap Item`
where parent = %s
|
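The bare comma after `frappe.db.sql(` is where the triple-quoted query shown as the documentation belongs. A hedged reconstruction, assuming a Frappe environment and the standard `frappe.db.sql(query, values, as_dict=1)` call signature:
```Python
import frappe

def get_scrap_item_details(bom_no):
	scrap_items = {}
	for item in frappe.db.sql(
		"""select item_code, stock_qty from `tabBOM Scrap Item`
		where parent = %s""",
		bom_no,
		as_dict=1,
	):
		scrap_items[item.item_code] = item.stock_qty
	return scrap_items
```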
2,203 | def get_is_active(self, session=NEW_SESSION) -> Optional[bool]:
return session.query(DagModel.is_active).filter(DagModel.dag_id == self.dag_id).scalar()
| Returns a boolean indicating whether this DAG is active | 9 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_is_active(self, session=NEW_SESSION) -> Optional[bool]:
return session.query(DagModel.is_active).filter(DagModel.dag_id == self.dag_id).scalar()
```
###Assistant : Returns a boolean indicating whether this DAG is active
|
2,204 | def is_homepage(self) -> bool:
return self.is_top_level and self.is_index and self.file.url in ('.', './', 'index.html')
previous_page: Optional[Page]
next_page: Optional[Page]
parent: Optional[Section]
children: None = None
is_section: bool = False
is_page: bool = True
is_link: bool = False
| Evaluates to `True` for the homepage of the site and `False` for all other pages.
The [page][mkdocs.structure.pages.Page] object for the previous page or `None`.
The value will be `None` if the current page is the first item in the site navigation
or if the current page is not included in the navigation at all.
The [page][mkdocs.structure.pages.Page] object for the next page or `None`.
The value will be `None` if the current page is the last item in the site navigation
or if the current page is not included in the navigation at all.
The immediate parent of the page in the site navigation. `None` if the
page is at the top level.
Pages do not contain children and the attribute is always `None`.
Indicates that the navigation object is a "section" object. Always `False` for page objects.
Indicates that the navigation object is a "page" object. Always `True` for page objects.
Indicates that the navigation object is a "link" object. Always `False` for page objects. | 158 | 36 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def is_homepage(self) -> bool:
return self.is_top_level and self.is_index and self.file.url in ('.', './', 'index.html')
previous_page: Optional[Page]
next_page: Optional[Page]
parent: Optional[Section]
children: None = None
is_section: bool = False
is_page: bool = True
is_link: bool = False
```
###Assistant : Evaluates to `True` for the homepage of the site and `False` for all other pages.
The [page][mkdocs.structure.pages.Page] object for the previous page or `None`.
The value will be `None` if the current page is the first item in the site navigation
or if the current page is not included in the navigation at all.
The [page][mkdocs.structure.pages.Page] object for the next page or `None`.
The value will be `None` if the current page is the last item in the site navigation
or if the current page is not included in the navigation at all.
The immediate parent of the page in the site navigation. `None` if the
page is at the top level.
Pages do not contain children and the attribute is always `None`.
Indicates that the navigation object is a "section" object. Always `False` for page objects.
Indicates that the navigation object is a "page" object. Always `True` for page objects.
Indicates that the navigation object is a "link" object. Always `False` for page objects.
|
2,205 | def _filetypes(self):
all_files = ("All files", "*.*")
filetypes = dict(
default=(all_files,),
alignments=[("Faceswap Alignments", "*.fsa"), all_files],
config_project=[("Faceswap Project files", "*.fsw"), all_files],
config_task=[("Faceswap Task files", "*.fst"), all_files],
config_all=[("Faceswap Project and Task files", "*.fst *.fsw"), all_files],
csv=[("Comma separated values", "*.csv"), all_files],
image=[("Bitmap", "*.bmp"),
("JPG", "*.jpeg *.jpg"),
("PNG", "*.png"),
("TIFF", "*.tif *.tiff"),
all_files],
ini=[("Faceswap config files", "*.ini"), all_files],
json=[("JSON file", "*.json"), all_files],
model=[("Keras model files", "*.h5"), all_files],
state=[("State files", "*.json"), all_files],
log=[("Log files", "*.log"), all_files],
video=[("Audio Video Interleave", "*.avi"),
("Flash Video", "*.flv"),
("Matroska", "*.mkv"),
("MOV", "*.mov"),
("MP4", "*.mp4"),
("MPEG", "*.mpeg *.mpg *.ts *.vob"),
("WebM", "*.webm"),
("Windows Media Video", "*.wmv"),
all_files])
# Add in multi-select options and upper case extensions for Linux
for key in filetypes:
if platform.system() == "Linux":
filetypes[key] = [item
if item[0] == "All files"
else (item[0], f"{item[1]} {item[1].upper()}")
for item in filetypes[key]]
if len(filetypes[key]) > 2:
multi = [f"{key.title()} Files"]
multi.append(" ".join([ftype[1]
for ftype in filetypes[key] if ftype[0] != "All files"]))
filetypes[key].insert(0, tuple(multi))
return filetypes
| dict: The accepted extensions for each file type for opening/saving | 10 | 154 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _filetypes(self):
all_files = ("All files", "*.*")
filetypes = dict(
default=(all_files,),
alignments=[("Faceswap Alignments", "*.fsa"), all_files],
config_project=[("Faceswap Project files", "*.fsw"), all_files],
config_task=[("Faceswap Task files", "*.fst"), all_files],
config_all=[("Faceswap Project and Task files", "*.fst *.fsw"), all_files],
csv=[("Comma separated values", "*.csv"), all_files],
image=[("Bitmap", "*.bmp"),
("JPG", "*.jpeg *.jpg"),
("PNG", "*.png"),
("TIFF", "*.tif *.tiff"),
all_files],
ini=[("Faceswap config files", "*.ini"), all_files],
json=[("JSON file", "*.json"), all_files],
model=[("Keras model files", "*.h5"), all_files],
state=[("State files", "*.json"), all_files],
log=[("Log files", "*.log"), all_files],
video=[("Audio Video Interleave", "*.avi"),
("Flash Video", "*.flv"),
("Matroska", "*.mkv"),
("MOV", "*.mov"),
("MP4", "*.mp4"),
("MPEG", "*.mpeg *.mpg *.ts *.vob"),
("WebM", "*.webm"),
("Windows Media Video", "*.wmv"),
all_files])
# Add in multi-select options and upper case extensions for Linux
for key in filetypes:
if platform.system() == "Linux":
filetypes[key] = [item
if item[0] == "All files"
else (item[0], f"{item[1]} {item[1].upper()}")
for item in filetypes[key]]
if len(filetypes[key]) > 2:
multi = [f"{key.title()} Files"]
multi.append(" ".join([ftype[1]
for ftype in filetypes[key] if ftype[0] != "All files"]))
filetypes[key].insert(0, tuple(multi))
return filetypes
```
###Assistant : dict: The accepted extensions for each file type for opening/saving
|
2,206 | def result(self):
if self._state == _CANCELLED:
exc = self._make_cancelled_error()
raise exc
if self._state != _FINISHED:
raise exceptions.InvalidStateError('Result is not ready.')
self.__log_traceback = False
if self._exception is not None:
raise self._exception
return self._result
| Return the result this future represents.
If the future has been cancelled, raises CancelledError. If the
future's result isn't yet available, raises InvalidStateError. If
the future is done and has an exception set, this exception is raised.
| 37 | 32 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def result(self):
if self._state == _CANCELLED:
exc = self._make_cancelled_error()
raise exc
if self._state != _FINISHED:
raise exceptions.InvalidStateError('Result is not ready.')
self.__log_traceback = False
if self._exception is not None:
raise self._exception
return self._result
```
###Assistant : Return the result this future represents.
If the future has been cancelled, raises CancelledError. If the
future's result isn't yet available, raises InvalidStateError. If
the future is done and has an exception set, this exception is raised.
|
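A short usage sketch with the public `asyncio` API (the row above shows the internal implementation of this behaviour): the result is only retrievable once the future is done.
```Python
import asyncio

async def main():
    fut = asyncio.get_running_loop().create_future()
    # Calling fut.result() here would raise InvalidStateError: the result is not ready yet
    fut.set_result(42)
    print(fut.result())  # 42

asyncio.run(main())
```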
2,207 | def _set_fsspec_for_multiprocess() -> None:
fsspec.asyn.iothread[0] = None
fsspec.asyn.loop[0] = None
|
Clear reference to the loop and thread.
This is necessary otherwise HTTPFileSystem hangs in the ML training loop.
Only required for fsspec >= 0.9.0
See https://github.com/fsspec/gcsfs/issues/379
| 26 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _set_fsspec_for_multiprocess() -> None:
fsspec.asyn.iothread[0] = None
fsspec.asyn.loop[0] = None
```
###Assistant :
Clear reference to the loop and thread.
This is necessary otherwise HTTPFileSystem hangs in the ML training loop.
Only required for fsspec >= 0.9.0
See https://github.com/fsspec/gcsfs/issues/379
|
2,208 | def stable_cumsum(arr, axis=None, rtol=1e-05, atol=1e-08):
out = np.cumsum(arr, axis=axis, dtype=np.float64)
expected = np.sum(arr, axis=axis, dtype=np.float64)
if not np.all(
np.isclose(
out.take(-1, axis=axis), expected, rtol=rtol, atol=atol, equal_nan=True
)
):
warnings.warn(
"cumsum was found to be unstable: "
"its last element does not correspond to sum",
RuntimeWarning,
)
return out
| Use high precision for cumsum and check that final value matches sum.
Warns if the final cumulative sum does not match the sum (up to the chosen
tolerance).
Parameters
----------
arr : array-like
To be cumulatively summed as flat.
axis : int, default=None
Axis along which the cumulative sum is computed.
The default (None) is to compute the cumsum over the flattened array.
rtol : float, default=1e-05
Relative tolerance, see ``np.allclose``.
atol : float, default=1e-08
Absolute tolerance, see ``np.allclose``.
Returns
-------
out : ndarray
Array with the cumulative sums along the chosen axis.
| 93 | 47 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def stable_cumsum(arr, axis=None, rtol=1e-05, atol=1e-08):
out = np.cumsum(arr, axis=axis, dtype=np.float64)
expected = np.sum(arr, axis=axis, dtype=np.float64)
if not np.all(
np.isclose(
out.take(-1, axis=axis), expected, rtol=rtol, atol=atol, equal_nan=True
)
):
warnings.warn(
"cumsum was found to be unstable: "
"its last element does not correspond to sum",
RuntimeWarning,
)
return out
```
###Assistant : Use high precision for cumsum and check that final value matches sum.
Warns if the final cumulative sum does not match the sum (up to the chosen
tolerance).
Parameters
----------
arr : array-like
To be cumulatively summed as flat.
axis : int, default=None
Axis along which the cumulative sum is computed.
The default (None) is to compute the cumsum over the flattened array.
rtol : float, default=1e-05
Relative tolerance, see ``np.allclose``.
atol : float, default=1e-08
Absolute tolerance, see ``np.allclose``.
Returns
-------
out : ndarray
Array with the cumulative sums along the chosen axis.
|
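The same stability check can be reproduced outside scikit-learn with plain NumPy; a minimal sketch, assuming float32 input data:
```Python
import numpy as np

arr = np.random.rand(1_000_000).astype(np.float32)
out = np.cumsum(arr, dtype=np.float64)       # accumulate in high precision
expected = np.sum(arr, dtype=np.float64)
assert np.isclose(out[-1], expected)         # last cumulative value matches the sum
```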
2,209 | def confirm(self):
args = request.args
dag_id = args.get('dag_id')
task_id = args.get('task_id')
dag_run_id = args.get('dag_run_id')
state = args.get('state')
origin = args.get('origin')
if 'map_index' not in args:
map_indexes: Optional[List[int]] = None
else:
map_indexes = args.getlist('map_index', type=int)
upstream = to_boolean(args.get('upstream'))
downstream = to_boolean(args.get('downstream'))
future = to_boolean(args.get('future'))
past = to_boolean(args.get('past'))
origin = origin or url_for('Airflow.index')
dag = get_airflow_app().dag_bag.get_dag(dag_id)
if not dag:
msg = f'DAG {dag_id} not found'
return redirect_or_json(origin, msg, status='error', status_code=404)
try:
task = dag.get_task(task_id)
except airflow.exceptions.TaskNotFound:
msg = f"Task {task_id} not found"
return redirect_or_json(origin, msg, status='error', status_code=404)
task.dag = dag
if state not in (
'success',
'failed',
):
msg = f"Invalid state {state}, must be either 'success' or 'failed'"
return redirect_or_json(origin, msg, status='error', status_code=400)
latest_execution_date = dag.get_latest_execution_date()
if not latest_execution_date:
msg = f"Cannot mark tasks as {state}, seem that dag {dag_id} has never run"
return redirect_or_json(origin, msg, status='error', status_code=400)
if map_indexes is None:
tasks: Union[List[Operator], List[Tuple[Operator, int]]] = [task]
else:
tasks = [(task, map_index) for map_index in map_indexes]
to_be_altered = set_state(
tasks=tasks,
run_id=dag_run_id,
upstream=upstream,
downstream=downstream,
future=future,
past=past,
state=state,
commit=False,
)
if request.headers.get('Accept') == 'application/json':
details = [str(t) for t in to_be_altered]
return htmlsafe_json_dumps(details, separators=(',', ':'))
details = "\n".join(str(t) for t in to_be_altered)
response = self.render_template(
"airflow/confirm.html",
endpoint=url_for(f'Airflow.{state}'),
message=f"Task instances you are about to mark as {state}:",
details=details,
)
return response
| Show confirmation page for marking tasks as success or failed. | 10 | 208 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def confirm(self):
args = request.args
dag_id = args.get('dag_id')
task_id = args.get('task_id')
dag_run_id = args.get('dag_run_id')
state = args.get('state')
origin = args.get('origin')
if 'map_index' not in args:
map_indexes: Optional[List[int]] = None
else:
map_indexes = args.getlist('map_index', type=int)
upstream = to_boolean(args.get('upstream'))
downstream = to_boolean(args.get('downstream'))
future = to_boolean(args.get('future'))
past = to_boolean(args.get('past'))
origin = origin or url_for('Airflow.index')
dag = get_airflow_app().dag_bag.get_dag(dag_id)
if not dag:
msg = f'DAG {dag_id} not found'
return redirect_or_json(origin, msg, status='error', status_code=404)
try:
task = dag.get_task(task_id)
except airflow.exceptions.TaskNotFound:
msg = f"Task {task_id} not found"
return redirect_or_json(origin, msg, status='error', status_code=404)
task.dag = dag
if state not in (
'success',
'failed',
):
msg = f"Invalid state {state}, must be either 'success' or 'failed'"
return redirect_or_json(origin, msg, status='error', status_code=400)
latest_execution_date = dag.get_latest_execution_date()
if not latest_execution_date:
msg = f"Cannot mark tasks as {state}, seem that dag {dag_id} has never run"
return redirect_or_json(origin, msg, status='error', status_code=400)
if map_indexes is None:
tasks: Union[List[Operator], List[Tuple[Operator, int]]] = [task]
else:
tasks = [(task, map_index) for map_index in map_indexes]
to_be_altered = set_state(
tasks=tasks,
run_id=dag_run_id,
upstream=upstream,
downstream=downstream,
future=future,
past=past,
state=state,
commit=False,
)
if request.headers.get('Accept') == 'application/json':
details = [str(t) for t in to_be_altered]
return htmlsafe_json_dumps(details, separators=(',', ':'))
details = "\n".join(str(t) for t in to_be_altered)
response = self.render_template(
"airflow/confirm.html",
endpoint=url_for(f'Airflow.{state}'),
message=f"Task instances you are about to mark as {state}:",
details=details,
)
return response
```
###Assistant : Show confirmation page for marking tasks as success or failed.
|
2,210 | def resample(self) -> Dict[str, Any]:
result = {}
for module in self.nas_modules:
result.update(module.resample(memo=result))
return result
| Trigger the resample for each ``nas_module``.
Sometimes (e.g., in differentiable cases), it does nothing.
Returns
-------
dict
Sampled architecture.
| 19 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def resample(self) -> Dict[str, Any]:
result = {}
for module in self.nas_modules:
result.update(module.resample(memo=result))
return result
```
###Assistant : Trigger the resample for each ``nas_module``.
Sometimes (e.g., in differentiable cases), it does nothing.
Returns
-------
dict
Sampled architecture.
|
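An illustrative stand-in (these classes are not NNI's real ones) for the memo pattern described above: each module contributes its own sample, and passing the partial `result` as `memo` lets modules that share a label reuse an earlier decision.
```Python
import random

class FakeChoice:
    def __init__(self, label, options):
        self.label, self.options = label, options

    def resample(self, memo):
        # Only sample if no earlier module already decided this label
        if self.label in memo:
            return {}
        return {self.label: random.choice(self.options)}

modules = [FakeChoice("conv", ["3x3", "5x5"]), FakeChoice("conv", ["3x3", "5x5"])]
result = {}
for module in modules:
    result.update(module.resample(memo=result))
print(result)  # a single shared decision, e.g. {'conv': '5x5'}
```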
2,211 | def test_json_to_doc_attribute_consistency(doc):
doc_json = doc.to_json()
doc_json["tokens"][1].pop("morph")
with pytest.raises(ValueError):
Doc(doc.vocab).from_json(doc_json)
| Test that Doc.from_json() raises an exception if tokens don't all have the same set of properties. | 16 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_json_to_doc_attribute_consistency(doc):
doc_json = doc.to_json()
doc_json["tokens"][1].pop("morph")
with pytest.raises(ValueError):
Doc(doc.vocab).from_json(doc_json)
```
###Assistant : Test that Doc.from_json() raises an exception if tokens don't all have the same set of properties.
|
2,212 | def test_index_css_classes(self):
# General index page
response = self.client.get(reverse("admin:index"))
self.assertContains(response, '<div class="app-admin_views module')
self.assertContains(response, '<tr class="model-actor">')
self.assertContains(response, '<tr class="model-album">')
# App index page
response = self.client.get(reverse("admin:app_list", args=("admin_views",)))
self.assertContains(response, '<div class="app-admin_views module')
self.assertContains(response, '<tr class="model-actor">')
self.assertContains(response, '<tr class="model-album">')
|
CSS class names are used for each app and model on the admin index
pages (#17050).
| 16 | 37 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_index_css_classes(self):
# General index page
response = self.client.get(reverse("admin:index"))
self.assertContains(response, '<div class="app-admin_views module')
self.assertContains(response, '<tr class="model-actor">')
self.assertContains(response, '<tr class="model-album">')
# App index page
response = self.client.get(reverse("admin:app_list", args=("admin_views",)))
self.assertContains(response, '<div class="app-admin_views module')
self.assertContains(response, '<tr class="model-actor">')
self.assertContains(response, '<tr class="model-album">')
```
###Assistant :
CSS class names are used for each app and model on the admin index
pages (#17050).
|
2,213 | def test_action_column_class(self):
response = self.client.get(reverse("admin:admin_views_subscriber_changelist"))
self.assertIsNotNone(response.context["action_form"])
self.assertContains(response, "action-checkbox-column")
| The checkbox column class is present in the response. | 9 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_action_column_class(self):
response = self.client.get(reverse("admin:admin_views_subscriber_changelist"))
self.assertIsNotNone(response.context["action_form"])
self.assertContains(response, "action-checkbox-column")
```
###Assistant : The checkbox column class is present in the response.
|
2,214 | def hashkey(cls, *args, **kwargs):
return cachetools.keys.hashkey(f"<{cls.__name__}>", *args, **kwargs)
|
Usage of @cachetools.cached has changed to @cachetools.cachedmethod
The previous cachetools decorator called the hash function and passed in (self, key).
The new cachetools decorator calls the hash function with just (key).
Ideally, we would continue to pass self, however, the cachetools decorator interface
does not allow us to.
This hashkey function is to maintain that the key generated looks like
('<SettingsWrapper>', key). The thought is that maybe it is important to namespace
our cache to the SettingsWrapper scope in case some other usage of this cache exists.
I can not think of how any other system could and would use our private cache, but
for safety sake we are ensuring the key schema does not change.
| 116 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def hashkey(cls, *args, **kwargs):
return cachetools.keys.hashkey(f"<{cls.__name__}>", *args, **kwargs)
```
###Assistant :
Usage of @cachetools.cached has changed to @cachetools.cachedmethod
The previous cachetools decorator called the hash function and passed in (self, key).
The new cachetools decorator calls the hash function with just (key).
Ideally, we would continue to pass self, however, the cachetools decorator interface
does not allow us to.
This hashkey function is to maintain that the key generated looks like
('<SettingsWrapper>', key). The thought is that maybe it is important to namespace
our cache to the SettingsWrapper scope in case some other usage of this cache exists.
I can not think of how any other system could and would use our private cache, but
for safety sake we are ensuring the key schema does not change.
|
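A minimal sketch of the namespacing trick described above, using only the public `cachetools.keys.hashkey` helper (the setting name passed in is just for illustration):
```Python
from cachetools.keys import hashkey as cachetools_hashkey

class SettingsWrapper:
    @classmethod
    def hashkey(cls, *args, **kwargs):
        # Prepend the class name so cache keys stay namespaced to this wrapper
        return cachetools_hashkey(f"<{cls.__name__}>", *args, **kwargs)

print(SettingsWrapper.hashkey("some_setting"))
# ('<SettingsWrapper>', 'some_setting')
```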
2,215 | def load(cls, path):
with open(path) as yaml_file:
data = yaml.safe_load(yaml_file)
if not isinstance(data, dict):
        raise TypeError(f'Content of config file {path} is not a dict/object')
utils.set_base_path(Path(path).parent)
config = cls(**data)
utils.unset_base_path()
return config
|
Load a YAML config file from file system.
Since YAML is a superset of JSON, it can also load JSON files.
This method raises exception if:
- The file is not available
- The file content is not valid YAML
- Top level value of the YAML is not object
- The YAML contains not supported fields
It does not raise exception when the YAML misses fields or contains bad fields.
Parameters
----------
path : PathLike
Path of the config file.
Returns
-------
cls
An object of ConfigBase subclass.
| 89 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def load(cls, path):
with open(path) as yaml_file:
data = yaml.safe_load(yaml_file)
if not isinstance(data, dict):
        raise TypeError(f'Content of config file {path} is not a dict/object')
utils.set_base_path(Path(path).parent)
config = cls(**data)
utils.unset_base_path()
return config
```
###Assistant :
Load a YAML config file from file system.
Since YAML is a superset of JSON, it can also load JSON files.
This method raises exception if:
- The file is not available
- The file content is not valid YAML
- Top level value of the YAML is not object
- The YAML contains not supported fields
It does not raise exception when the YAML misses fields or contains bad fields.
Parameters
----------
path : PathLike
Path of the config file.
Returns
-------
cls
An object of ConfigBase subclass.
|
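A hedged standalone version of the same validation steps using only PyYAML, without the NNI-specific base-path handling or field checking:
```Python
import yaml

def load_config(path):
    with open(path) as yaml_file:
        data = yaml.safe_load(yaml_file)
    # YAML is a superset of JSON, so JSON files load through the same path
    if not isinstance(data, dict):
        raise TypeError(f"Content of config file {path} is not a dict/object")
    return data
```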
2,216 | def get_leave_allocation_for_period(employee, leave_type, from_date, to_date):
leave_allocated = 0
leave_allocations = frappe.db.sql(
,
{"from_date": from_date, "to_date": to_date, "employee": employee, "leave_type": leave_type},
as_dict=1,
)
if leave_allocations:
for leave_alloc in leave_allocations:
leave_allocated += leave_alloc.total_leaves_allocated
return leave_allocated
@frappe.whitelist() |
select employee, leave_type, from_date, to_date, total_leaves_allocated
from `tabLeave Allocation`
where employee=%(employee)s and leave_type=%(leave_type)s
and docstatus=1
and (from_date between %(from_date)s and %(to_date)s
or to_date between %(from_date)s and %(to_date)s
or (from_date < %(from_date)s and to_date > %(to_date)s))
| 35 | 34 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_leave_allocation_for_period(employee, leave_type, from_date, to_date):
leave_allocated = 0
leave_allocations = frappe.db.sql(
,
{"from_date": from_date, "to_date": to_date, "employee": employee, "leave_type": leave_type},
as_dict=1,
)
if leave_allocations:
for leave_alloc in leave_allocations:
leave_allocated += leave_alloc.total_leaves_allocated
return leave_allocated
@frappe.whitelist()
```
###Assistant :
select employee, leave_type, from_date, to_date, total_leaves_allocated
from `tabLeave Allocation`
where employee=%(employee)s and leave_type=%(leave_type)s
and docstatus=1
and (from_date between %(from_date)s and %(to_date)s
or to_date between %(from_date)s and %(to_date)s
or (from_date < %(from_date)s and to_date > %(to_date)s))
|
2,217 | def test_proxy_model_content_type_is_used_for_log_entries(self):
proxy_content_type = ContentType.objects.get_for_model(
ArticleProxy, for_concrete_model=False
)
post_data = {
"site": self.site.pk,
"title": "Foo",
"hist": "Bar",
"created_0": "2015-12-25",
"created_1": "00:00",
}
changelist_url = reverse("admin:admin_utils_articleproxy_changelist")
# add
proxy_add_url = reverse("admin:admin_utils_articleproxy_add")
response = self.client.post(proxy_add_url, post_data)
self.assertRedirects(response, changelist_url)
proxy_addition_log = LogEntry.objects.latest("id")
self.assertEqual(proxy_addition_log.action_flag, ADDITION)
self.assertEqual(proxy_addition_log.content_type, proxy_content_type)
# change
article_id = proxy_addition_log.object_id
proxy_change_url = reverse(
"admin:admin_utils_articleproxy_change", args=(article_id,)
)
post_data["title"] = "New"
response = self.client.post(proxy_change_url, post_data)
self.assertRedirects(response, changelist_url)
proxy_change_log = LogEntry.objects.latest("id")
self.assertEqual(proxy_change_log.action_flag, CHANGE)
self.assertEqual(proxy_change_log.content_type, proxy_content_type)
# delete
proxy_delete_url = reverse(
"admin:admin_utils_articleproxy_delete", args=(article_id,)
)
response = self.client.post(proxy_delete_url, {"post": "yes"})
self.assertRedirects(response, changelist_url)
proxy_delete_log = LogEntry.objects.latest("id")
self.assertEqual(proxy_delete_log.action_flag, DELETION)
self.assertEqual(proxy_delete_log.content_type, proxy_content_type)
|
Log entries for proxy models should have the proxy model's contenttype
(#21084).
| 12 | 92 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_proxy_model_content_type_is_used_for_log_entries(self):
proxy_content_type = ContentType.objects.get_for_model(
ArticleProxy, for_concrete_model=False
)
post_data = {
"site": self.site.pk,
"title": "Foo",
"hist": "Bar",
"created_0": "2015-12-25",
"created_1": "00:00",
}
changelist_url = reverse("admin:admin_utils_articleproxy_changelist")
# add
proxy_add_url = reverse("admin:admin_utils_articleproxy_add")
response = self.client.post(proxy_add_url, post_data)
self.assertRedirects(response, changelist_url)
proxy_addition_log = LogEntry.objects.latest("id")
self.assertEqual(proxy_addition_log.action_flag, ADDITION)
self.assertEqual(proxy_addition_log.content_type, proxy_content_type)
# change
article_id = proxy_addition_log.object_id
proxy_change_url = reverse(
"admin:admin_utils_articleproxy_change", args=(article_id,)
)
post_data["title"] = "New"
response = self.client.post(proxy_change_url, post_data)
self.assertRedirects(response, changelist_url)
proxy_change_log = LogEntry.objects.latest("id")
self.assertEqual(proxy_change_log.action_flag, CHANGE)
self.assertEqual(proxy_change_log.content_type, proxy_content_type)
# delete
proxy_delete_url = reverse(
"admin:admin_utils_articleproxy_delete", args=(article_id,)
)
response = self.client.post(proxy_delete_url, {"post": "yes"})
self.assertRedirects(response, changelist_url)
proxy_delete_log = LogEntry.objects.latest("id")
self.assertEqual(proxy_delete_log.action_flag, DELETION)
self.assertEqual(proxy_delete_log.content_type, proxy_content_type)
```
###Assistant :
Log entries for proxy models should have the proxy model's contenttype
(#21084).
|
2,218 | def typename(typ, short=False) -> str:
if not isinstance(typ, type):
return typename(type(typ))
try:
if not typ.__module__ or typ.__module__ == "builtins":
return typ.__name__
else:
if short:
module, *_ = typ.__module__.split(".")
else:
module = typ.__module__
return module + "." + typ.__name__
except AttributeError:
return str(typ)
|
Return the name of a type
Examples
--------
>>> typename(int)
'int'
>>> from dask.core import literal
>>> typename(literal)
'dask.core.literal'
>>> typename(literal, short=True)
'dask.literal'
| 23 | 42 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def typename(typ, short=False) -> str:
if not isinstance(typ, type):
return typename(type(typ))
try:
if not typ.__module__ or typ.__module__ == "builtins":
return typ.__name__
else:
if short:
module, *_ = typ.__module__.split(".")
else:
module = typ.__module__
return module + "." + typ.__name__
except AttributeError:
return str(typ)
```
###Assistant :
Return the name of a type
Examples
--------
>>> typename(int)
'int'
>>> from dask.core import literal
>>> typename(literal)
'dask.core.literal'
>>> typename(literal, short=True)
'dask.literal'
|
2,219 | def fetch_command(self, subcommand):
# Get commands outside of try block to prevent swallowing exceptions
commands = get_commands()
try:
app_name = commands[subcommand]
except KeyError:
if os.environ.get("DJANGO_SETTINGS_MODULE"):
# If `subcommand` is missing due to misconfigured settings, the
# following line will retrigger an ImproperlyConfigured exception
# (get_commands() swallows the original one) so the user is
# informed about it.
settings.INSTALLED_APPS
elif not settings.configured:
sys.stderr.write("No Django settings specified.\n")
possible_matches = get_close_matches(subcommand, commands)
sys.stderr.write("Unknown command: %r" % subcommand)
if possible_matches:
sys.stderr.write(". Did you mean %s?" % possible_matches[0])
sys.stderr.write("\nType '%s help' for usage.\n" % self.prog_name)
sys.exit(1)
if isinstance(app_name, BaseCommand):
# If the command is already loaded, use it directly.
klass = app_name
else:
klass = load_command_class(app_name, subcommand)
return klass
|
Try to fetch the given subcommand, printing a message with the
appropriate command called from the command line (usually
"django-admin" or "manage.py") if it can't be found.
| 27 | 114 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def fetch_command(self, subcommand):
# Get commands outside of try block to prevent swallowing exceptions
commands = get_commands()
try:
app_name = commands[subcommand]
except KeyError:
if os.environ.get("DJANGO_SETTINGS_MODULE"):
# If `subcommand` is missing due to misconfigured settings, the
# following line will retrigger an ImproperlyConfigured exception
# (get_commands() swallows the original one) so the user is
# informed about it.
settings.INSTALLED_APPS
elif not settings.configured:
sys.stderr.write("No Django settings specified.\n")
possible_matches = get_close_matches(subcommand, commands)
sys.stderr.write("Unknown command: %r" % subcommand)
if possible_matches:
sys.stderr.write(". Did you mean %s?" % possible_matches[0])
sys.stderr.write("\nType '%s help' for usage.\n" % self.prog_name)
sys.exit(1)
if isinstance(app_name, BaseCommand):
# If the command is already loaded, use it directly.
klass = app_name
else:
klass = load_command_class(app_name, subcommand)
return klass
```
###Assistant :
Try to fetch the given subcommand, printing a message with the
appropriate command called from the command line (usually
"django-admin" or "manage.py") if it can't be found.
|
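The "Did you mean ...?" suggestion comes from the standard library's `difflib.get_close_matches`; a quick illustration with made-up command names:
```Python
from difflib import get_close_matches

commands = ["migrate", "shell", "runserver", "startapp"]
print(get_close_matches("migrat", commands))  # ['migrate']
```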
2,220 | def url(self, name):
name = self._normalize_name(clean_name(name))
blob = self.bucket.blob(name)
blob_params = self.get_object_parameters(name)
no_signed_url = (
blob_params.get('acl', self.default_acl) == 'publicRead' or not self.querystring_auth)
if not self.custom_endpoint and no_signed_url:
return blob.public_url
elif no_signed_url:
out = '{storage_base_url}/{quoted_name}'.format(
storage_base_url=self.custom_endpoint,
quoted_name=_quote(name, safe=b"/~"),
)
return out
elif not self.custom_endpoint:
out2 = blob.generate_signed_url(
expiration=self.expiration,
version="v4",
**self._get_signing_kwargs()
)
return out2
else:
out3 = blob.generate_signed_url(
bucket_bound_hostname=self.custom_endpoint,
expiration=self.expiration,
version="v4",
**self._get_signing_kwargs()
)
return out3
|
Return public url or a signed url for the Blob.
This DOES NOT check for existence of the Blob - that makes code too slow
for many use cases.
Overridden to force the use of the IAM signBlob API.
See https://github.com/googleapis/python-storage/blob/519074112775c19742522158f612b467cf590219/google/cloud/storage/_signing.py#L628 # NOQA
| 42 | 63 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def url(self, name):
name = self._normalize_name(clean_name(name))
blob = self.bucket.blob(name)
blob_params = self.get_object_parameters(name)
no_signed_url = (
blob_params.get('acl', self.default_acl) == 'publicRead' or not self.querystring_auth)
if not self.custom_endpoint and no_signed_url:
return blob.public_url
elif no_signed_url:
out = '{storage_base_url}/{quoted_name}'.format(
storage_base_url=self.custom_endpoint,
quoted_name=_quote(name, safe=b"/~"),
)
return out
elif not self.custom_endpoint:
out2 = blob.generate_signed_url(
expiration=self.expiration,
version="v4",
**self._get_signing_kwargs()
)
return out2
else:
out3 = blob.generate_signed_url(
bucket_bound_hostname=self.custom_endpoint,
expiration=self.expiration,
version="v4",
**self._get_signing_kwargs()
)
return out3
```
###Assistant :
Return public url or a signed url for the Blob.
This DOES NOT check for existence of the Blob - that makes code too slow
for many use cases.
Overridden to force the use of the IAM signBlob API.
See https://github.com/googleapis/python-storage/blob/519074112775c19742522158f612b467cf590219/google/cloud/storage/_signing.py#L628 # NOQA
|
2,221 | def check_is_fitted(estimator, attributes=None, *, msg=None, all_or_any=all):
if isclass(estimator):
raise TypeError("{} is a class, not an instance.".format(estimator))
if msg is None:
msg = (
"This %(name)s instance is not fitted yet. Call 'fit' with "
"appropriate arguments before using this estimator."
)
if not hasattr(estimator, "fit"):
raise TypeError("%s is not an estimator instance." % (estimator))
if attributes is not None:
if not isinstance(attributes, (list, tuple)):
attributes = [attributes]
fitted = all_or_any([hasattr(estimator, attr) for attr in attributes])
elif hasattr(estimator, "__sklearn_is_fitted__"):
fitted = estimator.__sklearn_is_fitted__()
else:
fitted = [
v for v in vars(estimator) if v.endswith("_") and not v.startswith("__")
]
if not fitted:
raise NotFittedError(msg % {"name": type(estimator).__name__})
| Perform is_fitted validation for estimator.
Checks if the estimator is fitted by verifying the presence of
fitted attributes (ending with a trailing underscore) and otherwise
raises a NotFittedError with the given message.
If an estimator does not set any attributes with a trailing underscore, it
can define a ``__sklearn_is_fitted__`` method returning a boolean to specify if the
estimator is fitted or not.
Parameters
----------
estimator : estimator instance
Estimator instance for which the check is performed.
attributes : str, list or tuple of str, default=None
Attribute name(s) given as string or a list/tuple of strings
Eg.: ``["coef_", "estimator_", ...], "coef_"``
If `None`, `estimator` is considered fitted if there exists an
attribute that ends with an underscore and does not start with a double
underscore.
msg : str, default=None
The default error message is, "This %(name)s instance is not fitted
yet. Call 'fit' with appropriate arguments before using this
estimator."
For custom messages if "%(name)s" is present in the message string,
it is substituted for the estimator name.
Eg. : "Estimator, %(name)s, must be fitted before sparsifying".
all_or_any : callable, {all, any}, default=all
Specify whether all or any of the given attributes must exist.
Raises
------
TypeError
If the estimator is a class or not an estimator instance
NotFittedError
If the attributes are not found.
| 213 | 104 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def check_is_fitted(estimator, attributes=None, *, msg=None, all_or_any=all):
if isclass(estimator):
raise TypeError("{} is a class, not an instance.".format(estimator))
if msg is None:
msg = (
"This %(name)s instance is not fitted yet. Call 'fit' with "
"appropriate arguments before using this estimator."
)
if not hasattr(estimator, "fit"):
raise TypeError("%s is not an estimator instance." % (estimator))
if attributes is not None:
if not isinstance(attributes, (list, tuple)):
attributes = [attributes]
fitted = all_or_any([hasattr(estimator, attr) for attr in attributes])
elif hasattr(estimator, "__sklearn_is_fitted__"):
fitted = estimator.__sklearn_is_fitted__()
else:
fitted = [
v for v in vars(estimator) if v.endswith("_") and not v.startswith("__")
]
if not fitted:
raise NotFittedError(msg % {"name": type(estimator).__name__})
```
###Assistant : Perform is_fitted validation for estimator.
Checks if the estimator is fitted by verifying the presence of
fitted attributes (ending with a trailing underscore) and otherwise
raises a NotFittedError with the given message.
If an estimator does not set any attributes with a trailing underscore, it
can define a ``__sklearn_is_fitted__`` method returning a boolean to specify if the
estimator is fitted or not.
Parameters
----------
estimator : estimator instance
Estimator instance for which the check is performed.
attributes : str, list or tuple of str, default=None
Attribute name(s) given as string or a list/tuple of strings
Eg.: ``["coef_", "estimator_", ...], "coef_"``
If `None`, `estimator` is considered fitted if there exists an
attribute that ends with an underscore and does not start with a double
underscore.
msg : str, default=None
The default error message is, "This %(name)s instance is not fitted
yet. Call 'fit' with appropriate arguments before using this
estimator."
For custom messages if "%(name)s" is present in the message string,
it is substituted for the estimator name.
Eg. : "Estimator, %(name)s, must be fitted before sparsifying".
all_or_any : callable, {all, any}, default=all
Specify whether all or any of the given attributes must exist.
Raises
------
TypeError
If the estimator is a class or not an estimator instance
NotFittedError
If the attributes are not found.
|
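A short usage sketch with scikit-learn's public API (assuming scikit-learn is installed):
```Python
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import LogisticRegression
from sklearn.utils.validation import check_is_fitted

clf = LogisticRegression()
try:
    check_is_fitted(clf)
except NotFittedError as exc:
    print(exc)  # "This LogisticRegression instance is not fitted yet. ..."

clf.fit([[0.0], [1.0]], [0, 1])
check_is_fitted(clf)  # passes once fitted attributes such as coef_ exist
```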
2,222 | def p_mean_variance(self, model, x, t, transformer_out, clip_denoised=True, model_kwargs=None):
if model_kwargs is None:
model_kwargs = {}
B, C = x.shape[:2]
assert t.shape == (B,)
model_output = model(x, t, transformer_out)
assert model_output.shape == (B, C * 2, *x.shape[2:])
model_output, model_var_values = torch.split(model_output, C, dim=1)
min_log = _extract_into_tensor(self.noise_scheduler.posterior_log_variance_clipped, t, x.shape)
max_log = _extract_into_tensor(np.log(self.noise_scheduler.betas), t, x.shape)
# The model_var_values is [-1, 1] for [min_var, max_var].
frac = (model_var_values + 1) / 2
model_log_variance = frac * max_log + (1 - frac) * min_log
model_variance = torch.exp(model_log_variance)
pred_xstart = self._predict_xstart_from_eps(x_t=x, t=t, eps=model_output)
if clip_denoised:
pred_xstart = pred_xstart.clamp(-1, 1)
model_mean, _, _ = self.q_posterior_mean_variance(x_start=pred_xstart, x_t=x, t=t)
assert model_mean.shape == model_log_variance.shape == pred_xstart.shape == x.shape
return model_mean, model_variance, model_log_variance, pred_xstart
|
Apply the model to get p(x_{t-1} | x_t), as well as a prediction of
the initial x, x_0.
:param model: the model, which takes a signal and a batch of timesteps
as input.
:param x: the [N x C x ...] tensor at time t.
:param t: a 1-D Tensor of timesteps.
:param clip_denoised: if True, clip the denoised signal into [-1, 1].
:param model_kwargs: if not None, a dict of extra keyword arguments to
pass to the model. This can be used for conditioning.
:return: a dict with the following keys:
- 'mean': the model mean output.
- 'variance': the model variance output.
- 'log_variance': the log of 'variance'.
- 'pred_xstart': the prediction for x_0.
| 116 | 113 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def p_mean_variance(self, model, x, t, transformer_out, clip_denoised=True, model_kwargs=None):
if model_kwargs is None:
model_kwargs = {}
B, C = x.shape[:2]
assert t.shape == (B,)
model_output = model(x, t, transformer_out)
assert model_output.shape == (B, C * 2, *x.shape[2:])
model_output, model_var_values = torch.split(model_output, C, dim=1)
min_log = _extract_into_tensor(self.noise_scheduler.posterior_log_variance_clipped, t, x.shape)
max_log = _extract_into_tensor(np.log(self.noise_scheduler.betas), t, x.shape)
# The model_var_values is [-1, 1] for [min_var, max_var].
frac = (model_var_values + 1) / 2
model_log_variance = frac * max_log + (1 - frac) * min_log
model_variance = torch.exp(model_log_variance)
pred_xstart = self._predict_xstart_from_eps(x_t=x, t=t, eps=model_output)
if clip_denoised:
pred_xstart = pred_xstart.clamp(-1, 1)
model_mean, _, _ = self.q_posterior_mean_variance(x_start=pred_xstart, x_t=x, t=t)
assert model_mean.shape == model_log_variance.shape == pred_xstart.shape == x.shape
return model_mean, model_variance, model_log_variance, pred_xstart
```
###Assistant :
Apply the model to get p(x_{t-1} | x_t), as well as a prediction of
the initial x, x_0.
:param model: the model, which takes a signal and a batch of timesteps
as input.
:param x: the [N x C x ...] tensor at time t.
:param t: a 1-D Tensor of timesteps.
:param clip_denoised: if True, clip the denoised signal into [-1, 1].
:param model_kwargs: if not None, a dict of extra keyword arguments to
pass to the model. This can be used for conditioning.
:return: a dict with the following keys:
- 'mean': the model mean output.
- 'variance': the model variance output.
- 'log_variance': the log of 'variance'.
- 'pred_xstart': the prediction for x_0.
|
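The variance interpolation in the middle of the method can be checked in isolation; a small sketch with hypothetical minimum/maximum log-variance bounds:
```Python
import numpy as np

min_log, max_log = np.log(1e-4), np.log(2e-2)   # hypothetical per-timestep bounds
model_var_values = np.array([-1.0, 0.0, 1.0])   # network output in [-1, 1]
frac = (model_var_values + 1) / 2
model_log_variance = frac * max_log + (1 - frac) * min_log
print(np.exp(model_log_variance))               # approx. [1.0e-04, 1.4e-03, 2.0e-02]
```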
2,223 | def test_invalid_parameters_in_stacking():
stacker = StackingClassifier(estimators=[])
html_output = estimator_html_repr(stacker)
assert html.escape(str(stacker)) in html_output
| Invalid stacking configuration uses the default repr.
Non-regression test for #24009.
| 10 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_invalid_parameters_in_stacking():
stacker = StackingClassifier(estimators=[])
html_output = estimator_html_repr(stacker)
assert html.escape(str(stacker)) in html_output
```
###Assistant : Invalid stacking configuration uses the default repr.
Non-regression test for #24009.
|
2,224 | def add_permissions():
for doctype in ("South Africa VAT Settings", "South Africa VAT Account"):
add_permission(doctype, "All", 0)
for role in ("Accounts Manager", "Accounts User", "System Manager"):
add_permission(doctype, role, 0)
update_permission_property(doctype, role, 0, "write", 1)
update_permission_property(doctype, role, 0, "create", 1)
if not frappe.db.get_value("Custom Role", dict(report="VAT Audit Report")):
frappe.get_doc(
dict(
doctype="Custom Role",
report="VAT Audit Report",
roles=[dict(role="Accounts User"), dict(role="Accounts Manager"), dict(role="Auditor")],
)
).insert()
| Add Permissions for South Africa VAT Settings and South Africa VAT Account
and VAT Audit Report | 16 | 59 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def add_permissions():
for doctype in ("South Africa VAT Settings", "South Africa VAT Account"):
add_permission(doctype, "All", 0)
for role in ("Accounts Manager", "Accounts User", "System Manager"):
add_permission(doctype, role, 0)
update_permission_property(doctype, role, 0, "write", 1)
update_permission_property(doctype, role, 0, "create", 1)
if not frappe.db.get_value("Custom Role", dict(report="VAT Audit Report")):
frappe.get_doc(
dict(
doctype="Custom Role",
report="VAT Audit Report",
roles=[dict(role="Accounts User"), dict(role="Accounts Manager"), dict(role="Auditor")],
)
).insert()
```
###Assistant : Add Permissions for South Africa VAT Settings and South Africa VAT Account
and VAT Audit Report
|
2,225 | def has_delete_permission(self, request, obj=None):
opts = self.opts
codename = get_permission_codename("delete", opts)
return request.user.has_perm("%s.%s" % (opts.app_label, codename))
|
Return True if the given request has permission to change the given
Django model instance, the default implementation doesn't examine the
`obj` parameter.
Can be overridden by the user in subclasses. In such case it should
return True if the given request has permission to delete the `obj`
model instance. If `obj` is None, this should return True if the given
request has permission to delete *any* object of the given type.
| 72 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def has_delete_permission(self, request, obj=None):
opts = self.opts
codename = get_permission_codename("delete", opts)
return request.user.has_perm("%s.%s" % (opts.app_label, codename))
```
###Assistant :
Return True if the given request has permission to change the given
Django model instance, the default implementation doesn't examine the
`obj` parameter.
Can be overridden by the user in subclasses. In such case it should
return True if the given request has permission to delete the `obj`
model instance. If `obj` is None, this should return True if the given
request has permission to delete *any* object of the given type.
|
2,226 | def set_3d_properties(self, zs=0, zdir='z'):
xs = self.get_xdata()
ys = self.get_ydata()
zs = cbook._to_unmasked_float_array(zs).ravel()
zs = np.broadcast_to(zs, len(xs))
self._verts3d = juggle_axes(xs, ys, zs, zdir)
self.stale = True
|
Set the *z* position and direction of the line.
Parameters
----------
zs : float or array of floats
The location along the *zdir* axis in 3D space to position the
line.
zdir : {'x', 'y', 'z'}
Plane to plot line orthogonal to. Default: 'z'.
See `.get_dir_vector` for a description of the values.
| 52 | 26 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def set_3d_properties(self, zs=0, zdir='z'):
xs = self.get_xdata()
ys = self.get_ydata()
zs = cbook._to_unmasked_float_array(zs).ravel()
zs = np.broadcast_to(zs, len(xs))
self._verts3d = juggle_axes(xs, ys, zs, zdir)
self.stale = True
```
###Assistant :
Set the *z* position and direction of the line.
Parameters
----------
zs : float or array of floats
The location along the *zdir* axis in 3D space to position the
line.
zdir : {'x', 'y', 'z'}
Plane to plot line orthogonal to. Default: 'z'.
See `.get_dir_vector` for a description of the values.
|
2,227 | def clear_tasks(self):
logger.debug("Clearing stored tasks")
self._tasks = {}
| Clears all of the stored tasks.
This is required when loading a task stored in a legacy project file, and is only to be
called by :class:`Project` when a project has been loaded which is in fact a task.
| 39 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def clear_tasks(self):
logger.debug("Clearing stored tasks")
self._tasks = {}
```
###Assistant : Clears all of the stored tasks.
This is required when loading a task stored in a legacy project file, and is only to be
called by :class:`Project` when a project has been loaded which is in fact a task.
|
2,228 | def add_department_leaves(events, start, end, employee, company):
department = frappe.db.get_value("Employee", employee, "department")
if not department:
return
# department leaves
department_employees = frappe.db.sql_list(
,
(department, company),
)
filter_conditions = ' and employee in ("%s")' % '", "'.join(department_employees)
add_leaves(events, start, end, filter_conditions=filter_conditions)
| select name from tabEmployee where department=%s
and company=%s | 8 | 39 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def add_department_leaves(events, start, end, employee, company):
department = frappe.db.get_value("Employee", employee, "department")
if not department:
return
# department leaves
department_employees = frappe.db.sql_list(
,
(department, company),
)
filter_conditions = ' and employee in ("%s")' % '", "'.join(department_employees)
add_leaves(events, start, end, filter_conditions=filter_conditions)
```
###Assistant : select name from tabEmployee where department=%s
and company=%s
|
2,229 | def is_file(self, follow_links=None):
if follow_links is None:
follow_links = True
node_stat = self._stat if follow_links else self._lstat
return stat.S_ISREG(node_stat.st_mode)
|
Get whether the entry is a regular file.
*follow_links* (:class:`bool` or :data:`None`) is whether to follow
symbolic links. If this is :data:`True`, a symlink to a regular file
will result in :data:`True`. Default is :data:`None` for :data:`True`.
Returns whether the entry is a regular file (:class:`bool`).
| 46 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def is_file(self, follow_links=None):
if follow_links is None:
follow_links = True
node_stat = self._stat if follow_links else self._lstat
return stat.S_ISREG(node_stat.st_mode)
```
###Assistant :
Get whether the entry is a regular file.
*follow_links* (:class:`bool` or :data:`None`) is whether to follow
symbolic links. If this is :data:`True`, a symlink to a regular file
will result in :data:`True`. Default is :data:`None` for :data:`True`.
Returns whether the entry is a regular file (:class:`bool`).
|
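A standalone analogue built on the standard library, assuming nothing beyond `os` and `stat`: `S_ISREG` identifies regular files, and `lstat()` is the variant that does not follow symlinks.
```Python
import os
import stat

def is_regular_file(path, follow_links=True):
    node_stat = os.stat(path) if follow_links else os.lstat(path)
    return stat.S_ISREG(node_stat.st_mode)

print(is_regular_file(__file__))  # True when run from a saved script file
```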
2,230 | def test_tabular_model_form_meta_readonly_field(self):
response = self.client.get(reverse("admin:admin_inlines_someparentmodel_add"))
self.assertContains(
response,
'<img src="/static/admin/img/icon-unknown.svg" '
'class="help help-tooltip" width="10" height="10" '
'alt="(Help text from ModelForm.Meta)" '
'title="Help text from ModelForm.Meta">',
)
self.assertContains(response, "Label from ModelForm.Meta")
|
Tabular inlines use ModelForm.Meta.help_texts and labels for read-only
fields.
| 9 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_tabular_model_form_meta_readonly_field(self):
response = self.client.get(reverse("admin:admin_inlines_someparentmodel_add"))
self.assertContains(
response,
'<img src="/static/admin/img/icon-unknown.svg" '
'class="help help-tooltip" width="10" height="10" '
'alt="(Help text from ModelForm.Meta)" '
'title="Help text from ModelForm.Meta">',
)
self.assertContains(response, "Label from ModelForm.Meta")
```
###Assistant :
Tabular inlines use ModelForm.Meta.help_texts and labels for read-only
fields.
|
2,231 | def test_edit_get_unlocked_no_lock_permission(self):
# Use edit permission only
self.set_permissions(["change"])
# Get the edit page
response = self.client.get(self.get_url("edit"))
html = response.content.decode()
lock_url = self.get_url("lock")
# Should not show lock message
self.assertNotContains(
response,
"<b>'I'm a lockable snippet!' was locked</b>",
)
# Should show unlocked information in the side panel
self.assertContains(
response,
f"Anyone can edit this {self.model_name}.",
)
# Should not show info to lock the object in the side panel
self.assertNotContains(
response,
"Lock it to prevent others from editing.",
)
# Should show Save action menu item
self.assertContains(
response,
f"<em>{self.save_button_label}</em>",
html=True,
)
# Should not show Locked action menu item
self.assertTagInHTML(
'<button type="submit" disabled>Locked</button>',
html,
count=0,
allow_extra_attrs=True,
)
# Should not show the lock button
self.assertTagInHTML(
f'<button type="button" data-url="{lock_url}" data-action-lock-unlock>Lock</button>',
html,
count=0,
allow_extra_attrs=True,
)
| A user cannot lock an object without the lock permission. | 10 | 121 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_edit_get_unlocked_no_lock_permission(self):
# Use edit permission only
self.set_permissions(["change"])
# Get the edit page
response = self.client.get(self.get_url("edit"))
html = response.content.decode()
lock_url = self.get_url("lock")
# Should not show lock message
self.assertNotContains(
response,
"<b>'I'm a lockable snippet!' was locked</b>",
)
# Should show unlocked information in the side panel
self.assertContains(
response,
f"Anyone can edit this {self.model_name}.",
)
# Should not show info to lock the object in the side panel
self.assertNotContains(
response,
"Lock it to prevent others from editing.",
)
# Should show Save action menu item
self.assertContains(
response,
f"<em>{self.save_button_label}</em>",
html=True,
)
# Should not show Locked action menu item
self.assertTagInHTML(
'<button type="submit" disabled>Locked</button>',
html,
count=0,
allow_extra_attrs=True,
)
# Should not show the lock button
self.assertTagInHTML(
f'<button type="button" data-url="{lock_url}" data-action-lock-unlock>Lock</button>',
html,
count=0,
allow_extra_attrs=True,
)
```
###Assistant : A user cannot lock an object without the lock permission.
|
2,232 | def increment_project_counter(project, delta=1, using="default"):
if delta <= 0:
raise ValueError("There is only one way, and that's up.")
sample_rate = options.get("store.projectcounter-modern-upsert-sample-rate")
modern_upsert = sample_rate and random.random() <= sample_rate
# To prevent the statement_timeout leaking into the session we need to use
# set local which can be used only within a transaction
with transaction.atomic(using=using):
cur = connections[using].cursor()
try:
statement_timeout = None
if settings.SENTRY_PROJECT_COUNTER_STATEMENT_TIMEOUT:
# WARNING: This is not a proper fix and should be removed once
# we have better way of generating next_short_id.
cur.execute("show statement_timeout")
statement_timeout = cur.fetchone()[0]
cur.execute(
"set local statement_timeout = %s",
[settings.SENTRY_PROJECT_COUNTER_STATEMENT_TIMEOUT],
)
if modern_upsert:
# Our postgres wrapper thing does not allow for named arguments
cur.execute(
"insert into sentry_projectcounter (project_id, value) "
"values (%s, %s) "
"on conflict (project_id) do update "
"set value = sentry_projectcounter.value + %s "
"returning value",
[project.id, delta, delta],
)
else:
cur.execute(
"select sentry_increment_project_counter(%s, %s)",
[project.id, delta],
)
project_counter = cur.fetchone()[0]
if statement_timeout is not None:
cur.execute(
"set local statement_timeout = %s",
[statement_timeout],
)
return project_counter
finally:
cur.close()
# this must be idempotent because it seems to execute twice
# (at least during test runs) | This method primarily exists so that south code can use it. | 11 | 184 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def increment_project_counter(project, delta=1, using="default"):
if delta <= 0:
raise ValueError("There is only one way, and that's up.")
sample_rate = options.get("store.projectcounter-modern-upsert-sample-rate")
modern_upsert = sample_rate and random.random() <= sample_rate
# To prevent the statement_timeout leaking into the session we need to use
# set local which can be used only within a transaction
with transaction.atomic(using=using):
cur = connections[using].cursor()
try:
statement_timeout = None
if settings.SENTRY_PROJECT_COUNTER_STATEMENT_TIMEOUT:
# WARNING: This is not a proper fix and should be removed once
# we have a better way of generating next_short_id.
cur.execute("show statement_timeout")
statement_timeout = cur.fetchone()[0]
cur.execute(
"set local statement_timeout = %s",
[settings.SENTRY_PROJECT_COUNTER_STATEMENT_TIMEOUT],
)
if modern_upsert:
# Our postgres wrapper thing does not allow for named arguments
cur.execute(
"insert into sentry_projectcounter (project_id, value) "
"values (%s, %s) "
"on conflict (project_id) do update "
"set value = sentry_projectcounter.value + %s "
"returning value",
[project.id, delta, delta],
)
else:
cur.execute(
"select sentry_increment_project_counter(%s, %s)",
[project.id, delta],
)
project_counter = cur.fetchone()[0]
if statement_timeout is not None:
cur.execute(
"set local statement_timeout = %s",
[statement_timeout],
)
return project_counter
finally:
cur.close()
# this must be idempotent because it seems to execute twice
# (at least during test runs)
```
###Assistant : This method primarily exists so that south code can use it.
|
2,233 | def test_multiple_gen_nexts_closed_in_different_order(self) -> None:
id_gen = self._create_id_generator()
| Check that we handle overlapping calls to gen_next, even when their IDs are
created and persisted in different orders. | 18 | 7 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_multiple_gen_nexts_closed_in_different_order(self) -> None:
id_gen = self._create_id_generator()
```
###Assistant : Check that we handle overlapping calls to gen_next, even when their IDs are
created and persisted in different orders.
|
2,234 | def str_presenter(dumper, data):
if len(data.splitlines()) > 1: # check for multiline string
return dumper.represent_scalar("tag:yaml.org,2002:str", data, style="|")
return dumper.represent_scalar("tag:yaml.org,2002:str", data)
yaml.add_representer(str, str_presenter)
yaml.representer.SafeRepresenter.add_representer(str, str_presenter)
deployment_app = PrefectTyper(
name="deployment", help="Commands for working with deployments."
)
app.add_typer(deployment_app)
|
configures yaml for dumping multiline strings
Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data
| 8 | 34 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def str_presenter(dumper, data):
if len(data.splitlines()) > 1: # check for multiline string
return dumper.represent_scalar("tag:yaml.org,2002:str", data, style="|")
return dumper.represent_scalar("tag:yaml.org,2002:str", data)
yaml.add_representer(str, str_presenter)
yaml.representer.SafeRepresenter.add_representer(str, str_presenter)
deployment_app = PrefectTyper(
name="deployment", help="Commands for working with deployments."
)
app.add_typer(deployment_app)
```
###Assistant :
configures yaml for dumping multiline strings
Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data
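A quick check of the effect, assuming the presenter above has been registered: multiline strings come out in block (`|`) style while single-line strings stay in the default flow style:
```Python
import yaml

doc = {"single": "one line", "multi": "line one\nline two\n"}
print(yaml.dump(doc))
# multi: |
#   line one
#   line two
# single: one line
```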
|
2,235 | def _get_permissions(self, user_obj, obj, from_name):
if not user_obj.is_active or user_obj.is_anonymous or obj is not None:
return set()
perm_cache_name = "_effective_permissions_cache"
if not getattr(user_obj, perm_cache_name, None):
perms = getattr(self, f"_get_{from_name}_permissions")(user_obj)
perms = perms.values_list("content_type__app_label", "codename").order_by()
setattr(user_obj, perm_cache_name, {f"{ct}.{name}" for ct, name in perms})
return getattr(user_obj, perm_cache_name)
| Return the permissions of `user_obj` from `from_name`.
`from_name` can be either "group" or "user" to return permissions from
`_get_group_permissions` or `_get_user_permissions` respectively.
| 22 | 44 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_permissions(self, user_obj, obj, from_name):
if not user_obj.is_active or user_obj.is_anonymous or obj is not None:
return set()
perm_cache_name = "_effective_permissions_cache"
if not getattr(user_obj, perm_cache_name, None):
perms = getattr(self, f"_get_{from_name}_permissions")(user_obj)
perms = perms.values_list("content_type__app_label", "codename").order_by()
setattr(user_obj, perm_cache_name, {f"{ct}.{name}" for ct, name in perms})
return getattr(user_obj, perm_cache_name)
```
###Assistant : Return the permissions of `user_obj` from `from_name`.
`from_name` can be either "group" or "user" to return permissions from
`_get_group_permissions` or `_get_user_permissions` respectively.
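For illustration, this is how the queryset rows are folded into the cached permission strings (a small stand-in with plain tuples; the app label and codenames are made up):
```Python
rows = [("blog", "add_post"), ("blog", "change_post")]
perm_strings = {f"{ct}.{name}" for ct, name in rows}
print(perm_strings)  # e.g. {'blog.add_post', 'blog.change_post'} -- the format user.has_perm() expects
```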
|
2,236 | def test_trainable_layers(self):
model = self._get_model()
# Set the last layer to *not* be trainable.
model.layers[-1].trainable = False
self._train_model(model, use_dataset=True)
loaded = self._save_and_load(model)
self._test_evaluation(model, loaded)
self.assertFalse(model.layers[-1].trainable)
self.assertFalse(loaded.layers[-1].trainable)
| Tests that trainable status of individual layers is preserved. | 9 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_trainable_layers(self):
model = self._get_model()
# Set the last layer to *not* be trainable.
model.layers[-1].trainable = False
self._train_model(model, use_dataset=True)
loaded = self._save_and_load(model)
self._test_evaluation(model, loaded)
self.assertFalse(model.layers[-1].trainable)
self.assertFalse(loaded.layers[-1].trainable)
```
###Assistant : Tests that trainable status of individual layers is preserved.
|
2,237 | def _reorder_labels(self, row_positions=None, col_positions=None):
if row_positions is not None:
ordered_rows = self._partition_mgr_cls.map_axis_partitions(
0, self._partitions, lambda df: df.iloc[row_positions]
)
row_idx = self.index[row_positions]
else:
ordered_rows = self._partitions
row_idx = self.index
if col_positions is not None:
ordered_cols = self._partition_mgr_cls.map_axis_partitions(
1, ordered_rows, lambda df: df.iloc[:, col_positions]
)
col_idx = self.columns[col_positions]
else:
ordered_cols = ordered_rows
col_idx = self.columns
return self.__constructor__(ordered_cols, row_idx, col_idx)
|
Reorder the columns and/or rows in this DataFrame.
Parameters
----------
row_positions : list of int, optional
The ordered list of new row orders such that each position within the list
indicates the new position.
col_positions : list of int, optional
The ordered list of new column orders such that each position within the
list indicates the new position.
Returns
-------
PandasDataframe
A new PandasDataframe with reordered columns and/or rows.
| 70 | 57 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _reorder_labels(self, row_positions=None, col_positions=None):
if row_positions is not None:
ordered_rows = self._partition_mgr_cls.map_axis_partitions(
0, self._partitions, lambda df: df.iloc[row_positions]
)
row_idx = self.index[row_positions]
else:
ordered_rows = self._partitions
row_idx = self.index
if col_positions is not None:
ordered_cols = self._partition_mgr_cls.map_axis_partitions(
1, ordered_rows, lambda df: df.iloc[:, col_positions]
)
col_idx = self.columns[col_positions]
else:
ordered_cols = ordered_rows
col_idx = self.columns
return self.__constructor__(ordered_cols, row_idx, col_idx)
```
###Assistant :
Reorder the columns and/or rows in this DataFrame.
Parameters
----------
row_positions : list of int, optional
The ordered list of new row orders such that each position within the list
indicates the new position.
col_positions : list of int, optional
The ordered list of new column orders such that each position within the
list indicates the new position.
Returns
-------
PandasDataframe
A new PandasDataframe with reordered columns and/or rows.
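The per-partition behaviour mirrors plain pandas positional indexing; a small pandas-only sketch of the same reordering:
```Python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}, index=["x", "y", "z"])
row_positions, col_positions = [2, 0, 1], [1, 0]
reordered = df.iloc[row_positions].iloc[:, col_positions]
print(reordered)
#    b  a
# z  6  3
# x  4  1
# y  5  2
```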
|
2,238 | def get_actual_sle_dict(name):
sles = frappe.db.sql(
,
name,
as_dict=1,
)
sle_dict = {}
for d in sles:
sle_dict[(d.item_code, d.warehouse)] = {
"actual_qty": d.actual_qty,
"stock_value_difference": d.stock_value_difference,
}
return sle_dict
|
select
item_code, warehouse,
sum(actual_qty) as actual_qty,
sum(stock_value_difference) as stock_value_difference
from `tabStock Ledger Entry`
where voucher_type = 'Asset Capitalization' and voucher_no = %s
group by item_code, warehouse
having actual_qty != 0
| 30 | 27 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_actual_sle_dict(name):
sles = frappe.db.sql(
,
name,
as_dict=1,
)
sle_dict = {}
for d in sles:
sle_dict[(d.item_code, d.warehouse)] = {
"actual_qty": d.actual_qty,
"stock_value_difference": d.stock_value_difference,
}
return sle_dict
```
###Assistant :
select
item_code, warehouse,
sum(actual_qty) as actual_qty,
sum(stock_value_difference) as stock_value_difference
from `tabStock Ledger Entry`
where voucher_type = 'Asset Capitalization' and voucher_no = %s
group by item_code, warehouse
having actual_qty != 0
|
2,239 | async def _async_process_on_unload(self) -> None:
if self._on_unload is not None:
while self._on_unload:
self._on_unload.pop()()
while self._pending_tasks:
pending = [task for task in self._pending_tasks if not task.done()]
self._pending_tasks.clear()
if pending:
await asyncio.gather(*pending)
| Process the on_unload callbacks and wait for pending tasks. | 9 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def _async_process_on_unload(self) -> None:
if self._on_unload is not None:
while self._on_unload:
self._on_unload.pop()()
while self._pending_tasks:
pending = [task for task in self._pending_tasks if not task.done()]
self._pending_tasks.clear()
if pending:
await asyncio.gather(*pending)
```
###Assistant : Process the on_unload callbacks and wait for pending tasks.
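The pending-task loop is a generic asyncio "drain" pattern; a standalone sketch with illustrative names:
```Python
import asyncio

async def drain(pending_tasks):
    # Keep awaiting until nothing is left; clearing the list before gathering
    # lets awaited tasks enqueue new work without it being lost.
    while pending_tasks:
        pending = [t for t in pending_tasks if not t.done()]
        pending_tasks.clear()
        if pending:
            await asyncio.gather(*pending)

async def main():
    tasks = [asyncio.create_task(asyncio.sleep(0.01)) for _ in range(3)]
    await drain(tasks)

asyncio.run(main())
```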
|
2,240 | def apply_func(partition, func, *args, **kwargs):
result = func(partition, *args, **kwargs)
return result, get_ip()
|
Execute a function on the partition in a worker process.
Parameters
----------
partition : pandas.DataFrame
A pandas DataFrame the function needs to be executed on.
func : callable
The function to perform.
*args : list
Positional arguments to pass to ``func``.
**kwargs : dict
Keyword arguments to pass to ``func``.
Returns
-------
pandas.DataFrame
The resulting pandas DataFrame.
str
The node IP address of the worker process.
Notes
-----
Directly passing a call queue entry (i.e. a list of [func, args, kwargs]) instead of
destructuring it causes a performance penalty.
| 89 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def apply_func(partition, func, *args, **kwargs):
result = func(partition, *args, **kwargs)
return result, get_ip()
```
###Assistant :
Execute a function on the partition in a worker process.
Parameters
----------
partition : pandas.DataFrame
A pandas DataFrame the function needs to be executed on.
func : callable
The function to perform.
*args : list
Positional arguments to pass to ``func``.
**kwargs : dict
Keyword arguments to pass to ``func``.
Returns
-------
pandas.DataFrame
The resulting pandas DataFrame.
str
The node IP address of the worker process.
Notes
-----
Directly passing a call queue entry (i.e. a list of [func, args, kwargs]) instead of
destructuring it causes a performance penalty.
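Outside a Modin worker the same shape of result can be reproduced with the standard library; `socket.gethostname()` stands in for `get_ip()` in this local sketch:
```Python
import socket
import pandas as pd

def apply_func_local(partition, func, *args, **kwargs):
    # Local stand-in for the worker-side helper shown above.
    return func(partition, *args, **kwargs), socket.gethostname()

result, node = apply_func_local(pd.DataFrame({"a": [1, 2, 3]}), lambda df: df.sum())
print(result["a"], node)  # 6 <hostname>
```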
|
2,241 | def _proc_function_remote(self, *, fun, low, user, tag, jid, daemonize=True):
if daemonize and not salt.utils.platform.is_windows():
# Shutdown the multiprocessing before daemonizing
salt.log.setup.shutdown_multiprocessing_logging()
salt.utils.process.daemonize()
# Reconfigure multiprocessing logging after daemonizing
salt.log.setup.setup_multiprocessing_logging()
# pack a few things into low
low["__jid__"] = jid
low["__user__"] = user
low["__tag__"] = tag
try:
return self.cmd_sync(low)
except salt.exceptions.EauthAuthenticationError as exc:
log.error(exc)
|
Run this method in a multiprocess target to execute the function on the
master and fire the return data on the event bus
| 23 | 53 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _proc_function_remote(self, *, fun, low, user, tag, jid, daemonize=True):
if daemonize and not salt.utils.platform.is_windows():
# Shutdown the multiprocessing before daemonizing
salt.log.setup.shutdown_multiprocessing_logging()
salt.utils.process.daemonize()
# Reconfigure multiprocessing logging after daemonizing
salt.log.setup.setup_multiprocessing_logging()
# pack a few things into low
low["__jid__"] = jid
low["__user__"] = user
low["__tag__"] = tag
try:
return self.cmd_sync(low)
except salt.exceptions.EauthAuthenticationError as exc:
log.error(exc)
```
###Assistant :
Run this method in a multiprocess target to execute the function on the
master and fire the return data on the event bus
|
2,242 | def array_safe(a, like, **kwargs):
from dask.array.routines import array
return _array_like_safe(np.array, array, a, like, **kwargs)
|
If `a` is `dask.array`, return `dask.array.asarray(a, **kwargs)`,
otherwise return `np.asarray(a, like=like, **kwargs)`, dispatching
the call to the library that implements the like array. Note that
when `a` is a `dask.Array` backed by `cupy.ndarray` but `like`
isn't, this function will call `a.compute(scheduler="sync")`
before `np.array`, as downstream libraries are unlikely to know how
to convert a `dask.Array` and CuPy doesn't implement `__array__` to
prevent implicit copies to host.
| 66 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def array_safe(a, like, **kwargs):
from dask.array.routines import array
return _array_like_safe(np.array, array, a, like, **kwargs)
```
###Assistant :
If `a` is `dask.array`, return `dask.array.asarray(a, **kwargs)`,
otherwise return `np.asarray(a, like=like, **kwargs)`, dispatching
the call to the library that implements the like array. Note that
when `a` is a `dask.Array` backed by `cupy.ndarray` but `like`
isn't, this function will call `a.compute(scheduler="sync")`
before `np.array`, as downstream libraries are unlikely to know how
to convert a `dask.Array` and CuPy doesn't implement `__array__` to
prevent implicit copies to host.
|
2,243 | def get_tables(self) -> StatusResponse:
query =
result = self.native_query(query)
df = result.data_frame
df = df.drop(['type'], axis=1)
result.data_frame = df.rename(columns={'name': 'table_name'})
return result
|
Return list of entities that will be accessible as tables.
Returns:
HandlerResponse
SHOW TABLES;
| 14 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_tables(self) -> StatusResponse:
query =
result = self.native_query(query)
df = result.data_frame
df = df.drop(['type'], axis=1)
result.data_frame = df.rename(columns={'name': 'table_name'})
return result
```
###Assistant :
Return list of entities that will be accessible as tables.
Returns:
HandlerResponse
SHOW TABLES;
|
2,244 | def accuracy(self, params, X, Y, averaged=True):
Y_hat = self.apply(params, X)
Y_hat = d2l.reshape(Y_hat, (-1, Y_hat.shape[-1]))
preds = d2l.astype(d2l.argmax(Y_hat, axis=1), Y.dtype)
compare = d2l.astype(preds == d2l.reshape(Y, -1), d2l.float32)
return d2l.reduce_mean(compare) if averaged else compare
| Compute the number of correct predictions.
Defined in :numref:`sec_classification` | 9 | 33 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def accuracy(self, params, X, Y, averaged=True):
Y_hat = self.apply(params, X)
Y_hat = d2l.reshape(Y_hat, (-1, Y_hat.shape[-1]))
preds = d2l.astype(d2l.argmax(Y_hat, axis=1), Y.dtype)
compare = d2l.astype(preds == d2l.reshape(Y, -1), d2l.float32)
return d2l.reduce_mean(compare) if averaged else compare
```
###Assistant : Compute the number of correct predictions.
Defined in :numref:`sec_classification`
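A NumPy-only sketch of the same computation (the `d2l` calls above just wrap framework-specific equivalents of these ops):
```Python
import numpy as np

Y_hat = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])  # per-class scores
Y = np.array([1, 0, 0])                                  # true labels
preds = Y_hat.argmax(axis=1)
compare = (preds == Y).astype(np.float32)
print(compare.mean())  # 0.6666667 -> two of the three predictions are correct
```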
|
2,245 | def open_metadata(self, book, custom_columns):
if config.config_use_google_drive:
if not gdriveutils.is_gdrive_ready():
raise Exception('Google Drive is configured but not ready')
web_content_link = gdriveutils.get_metadata_backup_via_gdrive(book.path)
if not web_content_link:
raise Exception('Google Drive cover url not found')
stream = None
try:
stream = urlopen(web_content_link)
except Exception as ex:
# Bubble exception to calling function
self.log.debug('Error reading metadata.opf: ' + str(ex)) # ToDo Check whats going on
raise ex
finally:
if stream is not None:
stream.close()
else:
# ToDo: Handle book folder not found or not readable
book_metadata_filepath = os.path.join(config.config_calibre_dir, book.path, 'metadata.opf')
#if not os.path.isfile(book_metadata_filepath):
self.create_new_metadata_backup(book, custom_columns, book_metadata_filepath)
# else:
| namespaces = {'dc': PURL_NAMESPACE, 'opf': OPF_NAMESPACE}
test = etree.parse(book_metadata_filepath)
root = test.getroot()
for i in root.iter():
self.log.info(i)
title = root.find("dc:metadata", namespaces)
pass
with open(book_metadata_filepath, "rb") as f:
xml = f.read()
root = objectify.fromstring(xml)
# root.metadata['{http://purl.org/dc/elements/1.1/}title']
# root.metadata[PURL + 'title']
# getattr(root.metadata, PURL +'title')
# test = objectify.parse()
pass
# backup not found has to be created
#raise Exception('Book cover file not found') | 62 | 92 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def open_metadata(self, book, custom_columns):
if config.config_use_google_drive:
if not gdriveutils.is_gdrive_ready():
raise Exception('Google Drive is configured but not ready')
web_content_link = gdriveutils.get_metadata_backup_via_gdrive(book.path)
if not web_content_link:
raise Exception('Google Drive cover url not found')
stream = None
try:
stream = urlopen(web_content_link)
except Exception as ex:
# Bubble exception to calling function
self.log.debug('Error reading metadata.opf: ' + str(ex)) # ToDo Check whats going on
raise ex
finally:
if stream is not None:
stream.close()
else:
# ToDo: Handle book folder not found or not readable
book_metadata_filepath = os.path.join(config.config_calibre_dir, book.path, 'metadata.opf')
#if not os.path.isfile(book_metadata_filepath):
self.create_new_metadata_backup(book, custom_columns, book_metadata_filepath)
# else:
```
###Assistant : namespaces = {'dc': PURL_NAMESPACE, 'opf': OPF_NAMESPACE}
test = etree.parse(book_metadata_filepath)
root = test.getroot()
for i in root.iter():
self.log.info(i)
title = root.find("dc:metadata", namespaces)
pass
with open(book_metadata_filepath, "rb") as f:
xml = f.read()
root = objectify.fromstring(xml)
# root.metadata['{http://purl.org/dc/elements/1.1/}title']
# root.metadata[PURL + 'title']
# getattr(root.metadata, PURL +'title')
# test = objectify.parse()
pass
# backup not found has to be created
#raise Exception('Book cover file not found')
|
2,246 | def test_gevent_monkey(pyi_builder):
pyi_builder.test_source()
# The tkinter module may be available for import, but not actually importable due to missing shared libraries.
# Therefore, we need to use `can_import_module`-based skip decorator instead of `@importorskip`.
@pytest.mark.skipif(not can_import_module("tkinter"), reason="tkinter cannot be imported.") |
from gevent.monkey import patch_all
patch_all()
| 5 | 39 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_gevent_monkey(pyi_builder):
pyi_builder.test_source()
# The tkinter module may be available for import, but not actually importable due to missing shared libraries.
# Therefore, we need to use `can_import_module`-based skip decorator instead of `@importorskip`.
@pytest.mark.skipif(not can_import_module("tkinter"), reason="tkinter cannot be imported.")
```
###Assistant :
from gevent.monkey import patch_all
patch_all()
|
2,247 | def split_auth_netloc_from_url(url):
# type: (str) -> Tuple[str, str, Tuple[str, str]]
url_without_auth, (netloc, auth) = _transform_url(url, _get_netloc)
return url_without_auth, netloc, auth
|
Parse a url into separate netloc, auth, and url with no auth.
Returns: (url_without_auth, netloc, (username, password))
| 17 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def split_auth_netloc_from_url(url):
# type: (str) -> Tuple[str, str, Tuple[str, str]]
url_without_auth, (netloc, auth) = _transform_url(url, _get_netloc)
return url_without_auth, netloc, auth
```
###Assistant :
Parse a url into separate netloc, auth, and url with no auth.
Returns: (url_without_auth, netloc, (username, password))
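A self-contained illustration of the same decomposition using only the standard library (this mirrors the documented return shape; pip's own helper may normalise the pieces slightly differently):
```Python
from urllib.parse import urlsplit, urlunsplit

url = "https://user:s3cret@example.com/simple/"
parts = urlsplit(url)
auth = (parts.username, parts.password)
netloc_no_auth = parts.hostname + (f":{parts.port}" if parts.port else "")
url_without_auth = urlunsplit(parts._replace(netloc=netloc_no_auth))
print((url_without_auth, netloc_no_auth, auth))
# ('https://example.com/simple/', 'example.com', ('user', 's3cret'))
```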
|
2,248 | def test_non_str_color():
text = Text("test_color_inheritance", color=Color("blue"))
markup_text = MarkupText("test_color_inheritance", color=Color("blue"))
| Test that the Text and MarkupText can accept non_str color values
i.e. colour.Color(red). | 13 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_non_str_color():
text = Text("test_color_inheritance", color=Color("blue"))
markup_text = MarkupText("test_color_inheritance", color=Color("blue"))
```
###Assistant : Test that the Text and MarkupText can accept non_str color values
i.e. colour.Color(red).
|
2,249 | async def test_strategy_no_network_settings(pick_radio, mock_app, hass):
mock_app.load_network_info = MagicMock(side_effect=NetworkNotFormed())
result, port = await pick_radio(RadioType.ezsp)
assert (
config_flow.FORMATION_REUSE_SETTINGS
not in result["data_schema"].schema["next_step_id"].container
)
| Test formation strategy when no network settings are present. | 9 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_strategy_no_network_settings(pick_radio, mock_app, hass):
mock_app.load_network_info = MagicMock(side_effect=NetworkNotFormed())
result, port = await pick_radio(RadioType.ezsp)
assert (
config_flow.FORMATION_REUSE_SETTINGS
not in result["data_schema"].schema["next_step_id"].container
)
```
###Assistant : Test formation strategy when no network settings are present.
|
2,250 | def detrend(x, key=None, axis=None):
if key is None or key in ['constant', 'mean', 'default']:
return detrend(x, key=detrend_mean, axis=axis)
elif key == 'linear':
return detrend(x, key=detrend_linear, axis=axis)
elif key == 'none':
return detrend(x, key=detrend_none, axis=axis)
elif callable(key):
x = np.asarray(x)
if axis is not None and axis + 1 > x.ndim:
raise ValueError(f'axis(={axis}) out of bounds')
if (axis is None and x.ndim == 0) or (not axis and x.ndim == 1):
return key(x)
# try to use the 'axis' argument if the function supports it,
# otherwise use apply_along_axis to do it
try:
return key(x, axis=axis)
except TypeError:
return np.apply_along_axis(key, axis=axis, arr=x)
else:
raise ValueError(
f"Unknown value for key: {key!r}, must be one of: 'default', "
f"'constant', 'mean', 'linear', or a function")
|
Return *x* with its trend removed.
Parameters
----------
x : array or sequence
Array or sequence containing the data.
key : {'default', 'constant', 'mean', 'linear', 'none'} or function
The detrending algorithm to use. 'default', 'mean', and 'constant' are
the same as `detrend_mean`. 'linear' is the same as `detrend_linear`.
'none' is the same as `detrend_none`. The default is 'mean'. See the
corresponding functions for more details regarding the algorithms. Can
also be a function that carries out the detrend operation.
axis : int
The axis along which to do the detrending.
See Also
--------
detrend_mean : Implementation of the 'mean' algorithm.
detrend_linear : Implementation of the 'linear' algorithm.
detrend_none : Implementation of the 'none' algorithm.
| 114 | 121 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def detrend(x, key=None, axis=None):
if key is None or key in ['constant', 'mean', 'default']:
return detrend(x, key=detrend_mean, axis=axis)
elif key == 'linear':
return detrend(x, key=detrend_linear, axis=axis)
elif key == 'none':
return detrend(x, key=detrend_none, axis=axis)
elif callable(key):
x = np.asarray(x)
if axis is not None and axis + 1 > x.ndim:
raise ValueError(f'axis(={axis}) out of bounds')
if (axis is None and x.ndim == 0) or (not axis and x.ndim == 1):
return key(x)
# try to use the 'axis' argument if the function supports it,
# otherwise use apply_along_axis to do it
try:
return key(x, axis=axis)
except TypeError:
return np.apply_along_axis(key, axis=axis, arr=x)
else:
raise ValueError(
f"Unknown value for key: {key!r}, must be one of: 'default', "
f"'constant', 'mean', 'linear', or a function")
```
###Assistant :
Return *x* with its trend removed.
Parameters
----------
x : array or sequence
Array or sequence containing the data.
key : {'default', 'constant', 'mean', 'linear', 'none'} or function
The detrending algorithm to use. 'default', 'mean', and 'constant' are
the same as `detrend_mean`. 'linear' is the same as `detrend_linear`.
'none' is the same as `detrend_none`. The default is 'mean'. See the
corresponding functions for more details regarding the algorithms. Can
also be a function that carries out the detrend operation.
axis : int
The axis along which to do the detrending.
See Also
--------
detrend_mean : Implementation of the 'mean' algorithm.
detrend_linear : Implementation of the 'linear' algorithm.
detrend_none : Implementation of the 'none' algorithm.
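A small usage sketch against the public `matplotlib.mlab` API (assuming a recent Matplotlib is installed):
```Python
import numpy as np
from matplotlib import mlab

t = np.arange(100, dtype=float)
x = 3.0 * t + 2.0 + np.random.default_rng(0).normal(size=t.size)

detrended = mlab.detrend(x, key="linear")  # remove the best-fit line
centered = mlab.detrend(x)                 # default 'mean': just subtract the mean
print(round(detrended.mean(), 6), round(centered.mean(), 6))  # both ~0.0
```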
|
2,251 | def has_unrendered_errors(bound_field):
return bound_field.errors and not hasattr(
bound_field.field.widget, "render_with_errors"
)
@register.filter(is_safe=True)
@stringfilter |
Return true if this field has errors that were not accounted for by render_with_errors, because
the widget does not support the render_with_errors method
| 23 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def has_unrendered_errors(bound_field):
return bound_field.errors and not hasattr(
bound_field.field.widget, "render_with_errors"
)
@register.filter(is_safe=True)
@stringfilter
```
###Assistant :
Return true if this field has errors that were not accounted for by render_with_errors, because
the widget does not support the render_with_errors method
|
2,252 | def test_raw_id_threshold_page_permission_inline_admin(self):
with self.settings(CMS_RAW_ID_USERS=1):
with self.assertNumQueries(1):
self.assertEqual(PagePermissionInlineAdmin.raw_id_fields, [])
# Create users to check if threshold is honored
self._get_guys()
with self.settings(CMS_RAW_ID_USERS=False):
with self.assertNumQueries(0):
self.assertEqual(PagePermissionInlineAdmin.raw_id_fields, [])
with self.settings(CMS_RAW_ID_USERS=True):
with self.assertNumQueries(0):
self.assertEqual(PagePermissionInlineAdmin.raw_id_fields, ['user'])
with self.settings(CMS_RAW_ID_USERS=1):
with self.assertNumQueries(1):
self.assertEqual(PagePermissionInlineAdmin.raw_id_fields, ['user'])
|
Only count users when using an integer value as threshold for
CMS_RAW_ID_USERS.
| 12 | 36 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_raw_id_threshold_page_permission_inline_admin(self):
with self.settings(CMS_RAW_ID_USERS=1):
with self.assertNumQueries(1):
self.assertEqual(PagePermissionInlineAdmin.raw_id_fields, [])
# Create users to check if threshold is honored
self._get_guys()
with self.settings(CMS_RAW_ID_USERS=False):
with self.assertNumQueries(0):
self.assertEqual(PagePermissionInlineAdmin.raw_id_fields, [])
with self.settings(CMS_RAW_ID_USERS=True):
with self.assertNumQueries(0):
self.assertEqual(PagePermissionInlineAdmin.raw_id_fields, ['user'])
with self.settings(CMS_RAW_ID_USERS=1):
with self.assertNumQueries(1):
self.assertEqual(PagePermissionInlineAdmin.raw_id_fields, ['user'])
```
###Assistant :
Only count users when using an integer value as threshold for
CMS_RAW_ID_USERS.
|
2,253 | def get_breaks(self, filename, lineno):
filename = self.canonic(filename)
return filename in self.breaks and \
lineno in self.breaks[filename] and \
Breakpoint.bplist[filename, lineno] or []
| Return all breakpoints for filename:lineno.
If no breakpoints are set, return an empty list.
| 14 | 22 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_breaks(self, filename, lineno):
filename = self.canonic(filename)
return filename in self.breaks and \
lineno in self.breaks[filename] and \
Breakpoint.bplist[filename, lineno] or []
```
###Assistant : Return all breakpoints for filename:lineno.
If no breakpoints are set, return an empty list.
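A minimal sketch with the standard-library `bdb` module (line 1 is used because it is guaranteed to exist in the file):
```Python
import bdb

debugger = bdb.Bdb()
filename = debugger.canonic(__file__)
debugger.set_break(filename, 1)           # returns None on success, an error message otherwise
print(debugger.get_breaks(filename, 1))   # [<bdb.Breakpoint object at 0x...>]
print(debugger.get_breaks(filename, 99))  # [] -- no breakpoint registered there
```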
|
2,254 | def test_override(self) -> None:
self.get_success(
self.store.register_user(
self.user_id,
self.pwhash,
approved=True,
)
)
user = self.get_success(self.store.get_user_by_id(self.user_id))
self.assertIsNotNone(user)
assert user is not None
self.assertEqual(user["approved"], 1)
approved = self.get_success(self.store.is_user_approved(self.user_id))
self.assertTrue(approved)
| Tests that if we require approval for new accounts, but we explicitly say the
new user should be considered approved, they're marked as approved.
| 24 | 26 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_override(self) -> None:
self.get_success(
self.store.register_user(
self.user_id,
self.pwhash,
approved=True,
)
)
user = self.get_success(self.store.get_user_by_id(self.user_id))
self.assertIsNotNone(user)
assert user is not None
self.assertEqual(user["approved"], 1)
approved = self.get_success(self.store.is_user_approved(self.user_id))
self.assertTrue(approved)
```
###Assistant : Tests that if we require approval for new accounts, but we explicitly say the
new user should be considered approved, they're marked as approved.
|
2,255 | def tokenize(lines, token='word'):
if token == 'word':
return [line.split() for line in lines]
elif token == 'char':
return [list(line) for line in lines]
else:
print('ERROR: unknown token type: ' + token)
| Split text lines into word or character tokens.
Defined in :numref:`sec_text_preprocessing` | 11 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def tokenize(lines, token='word'):
if token == 'word':
return [line.split() for line in lines]
elif token == 'char':
return [list(line) for line in lines]
else:
print('ERROR: unknown token type: ' + token)
```
###Assistant : Split text lines into word or character tokens.
Defined in :numref:`sec_text_preprocessing`
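Assuming the function above is in scope, a quick demonstration of both token types:
```Python
lines = ["the time machine", "by h g wells"]
print(tokenize(lines, token="word"))
# [['the', 'time', 'machine'], ['by', 'h', 'g', 'wells']]
print(tokenize(lines, token="char")[0][:6])
# ['t', 'h', 'e', ' ', 't', 'i']
```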
|
2,256 | def make_purchase_order_for_default_supplier(source_name, selected_items=None, target_doc=None):
if not selected_items:
return
if isinstance(selected_items, str):
selected_items = json.loads(selected_items)
def set_missing_values(source, target):
target.supplier = supplier
target.apply_discount_on = ""
target.additional_discount_percentage = 0.0
target.discount_amount = 0.0
target.inter_company_order_reference = ""
default_price_list = frappe.get_value("Supplier", supplier, "default_price_list")
if default_price_list:
target.buying_price_list = default_price_list
if any(item.delivered_by_supplier == 1 for item in source.items):
if source.shipping_address_name:
target.shipping_address = source.shipping_address_name
target.shipping_address_display = source.shipping_address
else:
target.shipping_address = source.customer_address
target.shipping_address_display = source.address_display
target.customer_contact_person = source.contact_person
target.customer_contact_display = source.contact_display
target.customer_contact_mobile = source.contact_mobile
target.customer_contact_email = source.contact_email
else:
target.customer = ""
target.customer_name = ""
target.run_method("set_missing_values")
target.run_method("calculate_taxes_and_totals")
def update_item(source, target, source_parent):
target.schedule_date = source.delivery_date
target.qty = flt(source.qty) - (flt(source.ordered_qty) / flt(source.conversion_factor))
target.stock_qty = flt(source.stock_qty) - flt(source.ordered_qty)
target.project = source_parent.project
suppliers = [item.get("supplier") for item in selected_items if item.get("supplier")]
suppliers = list(dict.fromkeys(suppliers)) # remove duplicates while preserving order
items_to_map = [item.get("item_code") for item in selected_items if item.get("item_code")]
items_to_map = list(set(items_to_map))
if not suppliers:
frappe.throw(
_("Please set a Supplier against the Items to be considered in the Purchase Order.")
)
purchase_orders = []
for supplier in suppliers:
doc = get_mapped_doc(
"Sales Order",
source_name,
{
"Sales Order": {
"doctype": "Purchase Order",
"field_no_map": [
"address_display",
"contact_display",
"contact_mobile",
"contact_email",
"contact_person",
"taxes_and_charges",
"shipping_address",
"terms",
],
"validation": {"docstatus": ["=", 1]},
},
"Sales Order Item": {
"doctype": "Purchase Order Item",
"field_map": [
["name", "sales_order_item"],
["parent", "sales_order"],
["stock_uom", "stock_uom"],
["uom", "uom"],
["conversion_factor", "conversion_factor"],
["delivery_date", "schedule_date"],
],
"field_no_map": [
"rate",
"price_list_rate",
"item_tax_template",
"discount_percentage",
"discount_amount",
"pricing_rules",
],
"postprocess": update_item,
"condition": lambda doc: doc.ordered_qty < doc.stock_qty
and doc.supplier == supplier
and doc.item_code in items_to_map,
},
},
target_doc,
set_missing_values,
)
doc.insert()
frappe.db.commit()
purchase_orders.append(doc)
return purchase_orders
@frappe.whitelist() | Creates Purchase Order for each Supplier. Returns a list of doc objects. | 12 | 252 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def make_purchase_order_for_default_supplier(source_name, selected_items=None, target_doc=None):
if not selected_items:
return
if isinstance(selected_items, str):
selected_items = json.loads(selected_items)
def set_missing_values(source, target):
target.supplier = supplier
target.apply_discount_on = ""
target.additional_discount_percentage = 0.0
target.discount_amount = 0.0
target.inter_company_order_reference = ""
default_price_list = frappe.get_value("Supplier", supplier, "default_price_list")
if default_price_list:
target.buying_price_list = default_price_list
if any(item.delivered_by_supplier == 1 for item in source.items):
if source.shipping_address_name:
target.shipping_address = source.shipping_address_name
target.shipping_address_display = source.shipping_address
else:
target.shipping_address = source.customer_address
target.shipping_address_display = source.address_display
target.customer_contact_person = source.contact_person
target.customer_contact_display = source.contact_display
target.customer_contact_mobile = source.contact_mobile
target.customer_contact_email = source.contact_email
else:
target.customer = ""
target.customer_name = ""
target.run_method("set_missing_values")
target.run_method("calculate_taxes_and_totals")
def update_item(source, target, source_parent):
target.schedule_date = source.delivery_date
target.qty = flt(source.qty) - (flt(source.ordered_qty) / flt(source.conversion_factor))
target.stock_qty = flt(source.stock_qty) - flt(source.ordered_qty)
target.project = source_parent.project
suppliers = [item.get("supplier") for item in selected_items if item.get("supplier")]
suppliers = list(dict.fromkeys(suppliers)) # remove duplicates while preserving order
items_to_map = [item.get("item_code") for item in selected_items if item.get("item_code")]
items_to_map = list(set(items_to_map))
if not suppliers:
frappe.throw(
_("Please set a Supplier against the Items to be considered in the Purchase Order.")
)
purchase_orders = []
for supplier in suppliers:
doc = get_mapped_doc(
"Sales Order",
source_name,
{
"Sales Order": {
"doctype": "Purchase Order",
"field_no_map": [
"address_display",
"contact_display",
"contact_mobile",
"contact_email",
"contact_person",
"taxes_and_charges",
"shipping_address",
"terms",
],
"validation": {"docstatus": ["=", 1]},
},
"Sales Order Item": {
"doctype": "Purchase Order Item",
"field_map": [
["name", "sales_order_item"],
["parent", "sales_order"],
["stock_uom", "stock_uom"],
["uom", "uom"],
["conversion_factor", "conversion_factor"],
["delivery_date", "schedule_date"],
],
"field_no_map": [
"rate",
"price_list_rate",
"item_tax_template",
"discount_percentage",
"discount_amount",
"pricing_rules",
],
"postprocess": update_item,
"condition": lambda doc: doc.ordered_qty < doc.stock_qty
and doc.supplier == supplier
and doc.item_code in items_to_map,
},
},
target_doc,
set_missing_values,
)
doc.insert()
frappe.db.commit()
purchase_orders.append(doc)
return purchase_orders
@frappe.whitelist()
```
###Assistant : Creates Purchase Order for each Supplier. Returns a list of doc objects.
|
2,257 | def resolve_relation(model, app_label=None, model_name=None):
if isinstance(model, str):
if model == RECURSIVE_RELATIONSHIP_CONSTANT:
if app_label is None or model_name is None:
raise TypeError(
'app_label and model_name must be provided to resolve '
'recursive relationships.'
)
return app_label, model_name
if '.' in model:
app_label, model_name = model.split('.', 1)
return app_label, model_name.lower()
if app_label is None:
raise TypeError(
'app_label must be provided to resolve unscoped model relationships.'
)
return app_label, model.lower()
return model._meta.app_label, model._meta.model_name
|
Turn a model class or model reference string and return a model tuple.
app_label and model_name are used to resolve the scope of recursive and
unscoped model relationships.
| 28 | 70 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def resolve_relation(model, app_label=None, model_name=None):
if isinstance(model, str):
if model == RECURSIVE_RELATIONSHIP_CONSTANT:
if app_label is None or model_name is None:
raise TypeError(
'app_label and model_name must be provided to resolve '
'recursive relationships.'
)
return app_label, model_name
if '.' in model:
app_label, model_name = model.split('.', 1)
return app_label, model_name.lower()
if app_label is None:
raise TypeError(
'app_label must be provided to resolve unscoped model relationships.'
)
return app_label, model.lower()
return model._meta.app_label, model._meta.model_name
```
###Assistant :
Turn a model class or model reference string and return a model tuple.
app_label and model_name are used to resolve the scope of recursive and
unscoped model relationships.
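Assuming the function above is in scope, the three branches give, for example:
```Python
print(resolve_relation("auth.User"))                                   # ('auth', 'user')
print(resolve_relation("User", app_label="blog"))                      # ('blog', 'user')
print(resolve_relation("self", app_label="blog", model_name="post"))   # ('blog', 'post')
```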
|
2,258 | def load_tf_weights(model, resolved_archive_file, ignore_mismatched_sizes=False, _prefix=None):
missing_layers = []
unexpected_layers = []
mismatched_layers = []
# Read the H5 file
with h5py.File(resolved_archive_file, "r") as sharded_checkpoint_file:
# Retrieve the name of each layer from the H5 file
saved_h5_model_layers_name = set(
hdf5_format.load_attributes_from_hdf5_group(sharded_checkpoint_file, "layer_names")
)
# Find the missing layers from the high level list of layers
missing_layers = list(set([layer.name for layer in model.layers]) - saved_h5_model_layers_name)
# Find the unexpected layers from the high level list of layers
unexpected_layers = list(saved_h5_model_layers_name - set([layer.name for layer in model.layers]))
saved_weight_names_set = set()
symbolic_weights_names = set()
weight_value_tuples = []
# Compute missing and unexpected sub layers
# Store the weights in list of tuples that looks like [(weight_object, value_of_weight),...]
for layer in model.layers:
# if layer_name from the H5 file belongs to the layers from the instantiated model
if layer.name in saved_h5_model_layers_name:
# Get the H5 layer object from its name
h5_layer_object = sharded_checkpoint_file[layer.name]
# Get all the weights as a list from the layer object
symbolic_weights = layer.trainable_weights + layer.non_trainable_weights
saved_weights = {}
# Create a dict from the H5 saved model that looks like {"weight_name": weight_value}
# And a set with only the names
for weight_name in hdf5_format.load_attributes_from_hdf5_group(h5_layer_object, "weight_names"):
# TF names always start with the model name so we ignore it
name = "/".join(weight_name.split("/")[1:])
if _prefix is not None:
name = _prefix + "/" + name
saved_weights[name] = np.asarray(h5_layer_object[weight_name])
# Add the updated name to the final list for computing missing/unexpected values
saved_weight_names_set.add(name)
# Loop over each weights from the instantiated model and compare with the weights from the H5 file
for symbolic_weight in symbolic_weights:
# TF names always start with the model name so we ignore it
if _prefix is not None:
delimeter = len(_prefix.split("/"))
symbolic_weight_name = "/".join(
symbolic_weight.name.split("/")[:delimeter]
+ symbolic_weight.name.split("/")[delimeter + 1 :]
)
else:
symbolic_weight_name = "/".join(symbolic_weight.name.split("/")[1:])
# here we check if the current weight is among the weights from the H5 file
# If yes, get the weight_value of the corresponding weight from the H5 file
# If not, make the value to None
saved_weight_value = saved_weights.get(symbolic_weight_name, None)
# Add the updated name to the final list for computing missing/unexpected values
symbolic_weights_names.add(symbolic_weight_name)
# If the current weight is found
if saved_weight_value is not None:
# Check if the shape of the current weight and the one from the H5 file are different
if K.int_shape(symbolic_weight) != saved_weight_value.shape:
# If yes we reshape the weight from the H5 file accordingly to the current weight
# If the two shapes are not compatible we raise an issue
try:
array = np.reshape(saved_weight_value, K.int_shape(symbolic_weight))
except ValueError as e:
if ignore_mismatched_sizes:
mismatched_layers.append(
(symbolic_weight_name, saved_weight_value.shape, K.int_shape(symbolic_weight))
)
continue
else:
raise e
else:
array = saved_weight_value
# We create the tuple that will be loaded and add it to the final list
weight_value_tuples.append((symbolic_weight, array))
# Load all the weights
K.batch_set_value(weight_value_tuples)
# Compute the missing and unexpected layers
missing_layers.extend(list(symbolic_weights_names - saved_weight_names_set))
unexpected_layers.extend(list(saved_weight_names_set - symbolic_weights_names))
return missing_layers, unexpected_layers, mismatched_layers
|
Detect missing and unexpected layers and load the TF weights from the shard file according to their names and
shapes.
Args:
model (`tf.keras.models.Model`):
The model to load the weights into.
resolved_archive_file (`str`):
The location of the H5 file.
ignore_mismatched_sizes (`bool`, *optional*, defaults to `False`):
Whether or not to ignore weights with shapes that don't match between the checkpoint of the model.
Returns:
Three lists, one for the missing layers, another one for the unexpected layers, and a last one for the
mismatched layers.
| 83 | 479 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def load_tf_weights(model, resolved_archive_file, ignore_mismatched_sizes=False, _prefix=None):
missing_layers = []
unexpected_layers = []
mismatched_layers = []
# Read the H5 file
with h5py.File(resolved_archive_file, "r") as sharded_checkpoint_file:
# Retrieve the name of each layer from the H5 file
saved_h5_model_layers_name = set(
hdf5_format.load_attributes_from_hdf5_group(sharded_checkpoint_file, "layer_names")
)
# Find the missing layers from the high level list of layers
missing_layers = list(set([layer.name for layer in model.layers]) - saved_h5_model_layers_name)
# Find the unexpected layers from the high level list of layers
unexpected_layers = list(saved_h5_model_layers_name - set([layer.name for layer in model.layers]))
saved_weight_names_set = set()
symbolic_weights_names = set()
weight_value_tuples = []
# Compute missing and unexpected sub layers
# Store the weights in list of tuples that looks like [(weight_object, value_of_weight),...]
for layer in model.layers:
# if layer_name from the H5 file belongs to the layers from the instantiated model
if layer.name in saved_h5_model_layers_name:
# Get the H5 layer object from its name
h5_layer_object = sharded_checkpoint_file[layer.name]
# Get all the weights as a list from the layer object
symbolic_weights = layer.trainable_weights + layer.non_trainable_weights
saved_weights = {}
# Create a dict from the H5 saved model that looks like {"weight_name": weight_value}
# And a set with only the names
for weight_name in hdf5_format.load_attributes_from_hdf5_group(h5_layer_object, "weight_names"):
# TF names always start with the model name so we ignore it
name = "/".join(weight_name.split("/")[1:])
if _prefix is not None:
name = _prefix + "/" + name
saved_weights[name] = np.asarray(h5_layer_object[weight_name])
# Add the updated name to the final list for computing missing/unexpected values
saved_weight_names_set.add(name)
# Loop over each weights from the instantiated model and compare with the weights from the H5 file
for symbolic_weight in symbolic_weights:
# TF names always start with the model name so we ignore it
if _prefix is not None:
delimeter = len(_prefix.split("/"))
symbolic_weight_name = "/".join(
symbolic_weight.name.split("/")[:delimeter]
+ symbolic_weight.name.split("/")[delimeter + 1 :]
)
else:
symbolic_weight_name = "/".join(symbolic_weight.name.split("/")[1:])
# here we check if the current weight is among the weights from the H5 file
# If yes, get the weight_value of the corresponding weight from the H5 file
# If not, make the value to None
saved_weight_value = saved_weights.get(symbolic_weight_name, None)
# Add the updated name to the final list for computing missing/unexpected values
symbolic_weights_names.add(symbolic_weight_name)
# If the current weight is found
if saved_weight_value is not None:
# Check if the shape of the current weight and the one from the H5 file are different
if K.int_shape(symbolic_weight) != saved_weight_value.shape:
# If yes we reshape the weight from the H5 file accordingly to the current weight
# If the two shapes are not compatible we raise an issue
try:
array = np.reshape(saved_weight_value, K.int_shape(symbolic_weight))
except ValueError as e:
if ignore_mismatched_sizes:
mismatched_layers.append(
(symbolic_weight_name, saved_weight_value.shape, K.int_shape(symbolic_weight))
)
continue
else:
raise e
else:
array = saved_weight_value
# We create the tuple that will be loaded and add it to the final list
weight_value_tuples.append((symbolic_weight, array))
# Load all the weights
K.batch_set_value(weight_value_tuples)
# Compute the missing and unexpected layers
missing_layers.extend(list(symbolic_weights_names - saved_weight_names_set))
unexpected_layers.extend(list(saved_weight_names_set - symbolic_weights_names))
return missing_layers, unexpected_layers, mismatched_layers
```
###Assistant :
Detect missing and unexpected layers and load the TF weights from the shard file according to their names and
shapes.
Args:
model (`tf.keras.models.Model`):
The model to load the weights into.
resolved_archive_file (`str`):
The location of the H5 file.
ignore_mismatched_sizes (`bool`, *optional*, defaults to `False`):
Whether or not to ignore weights with shapes that don't match between the checkpoint of the model.
Returns:
Three lists, one for the missing layers, another one for the unexpected layers, and a last one for the
mismatched layers.
|
2,259 | def css_classes(self, extra_classes=None):
if hasattr(extra_classes, "split"):
extra_classes = extra_classes.split()
extra_classes = set(extra_classes or [])
if self.errors and hasattr(self.form, "error_css_class"):
extra_classes.add(self.form.error_css_class)
if self.field.required and hasattr(self.form, "required_css_class"):
extra_classes.add(self.form.required_css_class)
return " ".join(extra_classes)
|
Return a string of space-separated CSS classes for this field.
| 10 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def css_classes(self, extra_classes=None):
if hasattr(extra_classes, "split"):
extra_classes = extra_classes.split()
extra_classes = set(extra_classes or [])
if self.errors and hasattr(self.form, "error_css_class"):
extra_classes.add(self.form.error_css_class)
if self.field.required and hasattr(self.form, "required_css_class"):
extra_classes.add(self.form.required_css_class)
return " ".join(extra_classes)
```
###Assistant :
Return a string of space-separated CSS classes for this field.
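A hedged standalone sketch (the form and field names are made up; `settings.configure()` is only there so the form can be exercised outside a project):
```Python
import django
from django.conf import settings

settings.configure()  # minimal settings so forms work outside a project
django.setup()

from django import forms

class ContactForm(forms.Form):
    required_css_class = "required"
    error_css_class = "error"
    email = forms.EmailField()

form = ContactForm(data={})               # bound form, required field missing -> error
print(form["email"].css_classes("wide"))  # e.g. "error required wide" (set order varies)
```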
|
2,260 | def _get_free_vram(self) -> List[float]:
vram = [pynvml.nvmlDeviceGetMemoryInfo(handle).free / (1024 * 1024)
for handle in self._handles]
self._log("debug", f"GPU VRAM free: {vram}")
return vram
| Obtain the amount of VRAM that is available, in Megabytes, for each connected Nvidia
GPU.
Returns
-------
list
List of `float`s containing the amount of VRAM available, in Megabytes, for each
connected GPU as corresponding to the values in :attr:`_handles`
| 40 | 22 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_free_vram(self) -> List[float]:
vram = [pynvml.nvmlDeviceGetMemoryInfo(handle).free / (1024 * 1024)
for handle in self._handles]
self._log("debug", f"GPU VRAM free: {vram}")
return vram
```
###Assistant : Obtain the amount of VRAM that is available, in Megabytes, for each connected Nvidia
GPU.
Returns
-------
list
List of `float`s containing the amount of VRAM available, in Megabytes, for each
connected GPU as corresponding to the values in :attr:`_handles`
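A standalone equivalent with the `pynvml` package (it needs an NVIDIA driver present to actually run):
```Python
import pynvml

pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(pynvml.nvmlDeviceGetCount())]
free_mb = [pynvml.nvmlDeviceGetMemoryInfo(h).free / (1024 * 1024) for h in handles]
print(free_mb)  # e.g. [10240.5, 10238.0]
pynvml.nvmlShutdown()
```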
|
2,261 | def get_project(doctype, txt, searchfield, start, page_len, filters):
from erpnext.controllers.queries import get_match_cond
meta = frappe.get_meta(doctype)
searchfields = meta.get_search_fields()
search_columns = ", " + ", ".join(searchfields) if searchfields else ""
search_cond = " or " + " or ".join(field + " like %(txt)s" for field in searchfields)
return frappe.db.sql(
.format(
search_columns=search_columns, search_condition=search_cond
),
{
"key": searchfield,
"txt": "%" + txt + "%",
"mcond": get_match_cond(doctype),
"start": start,
"page_len": page_len,
},
)
@frappe.whitelist() | select name {search_columns} from `tabProject`
where %(key)s like %(txt)s
%(mcond)s
{search_condition}
order by name
limit %(start)s, %(page_len)s | 17 | 69 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_project(doctype, txt, searchfield, start, page_len, filters):
from erpnext.controllers.queries import get_match_cond
meta = frappe.get_meta(doctype)
searchfields = meta.get_search_fields()
search_columns = ", " + ", ".join(searchfields) if searchfields else ""
search_cond = " or " + " or ".join(field + " like %(txt)s" for field in searchfields)
return frappe.db.sql(
.format(
search_columns=search_columns, search_condition=search_cond
),
{
"key": searchfield,
"txt": "%" + txt + "%",
"mcond": get_match_cond(doctype),
"start": start,
"page_len": page_len,
},
)
@frappe.whitelist()
```
###Assistant : select name {search_columns} from `tabProject`
where %(key)s like %(txt)s
%(mcond)s
{search_condition}
order by name
limit %(start)s, %(page_len)s
|
2,262 | def build(self, var_list):
super().build(var_list)
if getattr(self, "_built", False):
return
self._built = True
self._momentums = []
self._velocities = []
self._u_product = tf.Variable(1.0, dtype=var_list[0].dtype)
# Keep a counter on how many times of _u_product has been computed to
# avoid duplicated computations.
self._u_product_counter = 1
for var in var_list:
self._momentums.append(
self.add_variable_from_reference(
model_variable=var, variable_name="m"
)
)
self._velocities.append(
self.add_variable_from_reference(
model_variable=var, variable_name="v"
)
)
| Initialize optimizer variables.
Nadam optimizer has 2 types of variables: momentums and velocities.
Args:
var_list: list of model variables to build Nadam variables on.
| 24 | 59 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def build(self, var_list):
super().build(var_list)
if getattr(self, "_built", False):
return
self._built = True
self._momentums = []
self._velocities = []
self._u_product = tf.Variable(1.0, dtype=var_list[0].dtype)
# Keep a counter on how many times of _u_product has been computed to
# avoid duplicated computations.
self._u_product_counter = 1
for var in var_list:
self._momentums.append(
self.add_variable_from_reference(
model_variable=var, variable_name="m"
)
)
self._velocities.append(
self.add_variable_from_reference(
model_variable=var, variable_name="v"
)
)
```
###Assistant : Initialize optimizer variables.
Nadam optimizer has 2 types of variables: momentums and velocities.
Args:
var_list: list of model variables to build Nadam variables on.
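In normal use `build()` is invoked lazily on the first gradient application; a tiny sketch that triggers it with the stock optimizer:
```Python
import tensorflow as tf

opt = tf.keras.optimizers.Nadam(learning_rate=1e-3)
var = tf.Variable([1.0, 2.0])
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(var ** 2)
grads = tape.gradient(loss, [var])
opt.apply_gradients(zip(grads, [var]))  # build() runs here, creating the momentum/velocity slots
```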
|
2,263 | def list_distinfo_files(self, absolute=False):
record_path = os.path.join(self.path, 'installed-files.txt')
if os.path.exists(record_path):
skip = True
with codecs.open(record_path, 'r', encoding='utf-8') as f:
for line in f:
line = line.strip()
if line == './':
skip = False
continue
if not skip:
p = os.path.normpath(os.path.join(self.path, line))
if p.startswith(self.path):
if absolute:
yield p
else:
yield line
|
Iterates over the ``installed-files.txt`` entries and returns paths for
each line if the path is pointing to a file located in the
``.egg-info`` directory or one of its subdirectories.
:parameter absolute: If *absolute* is ``True``, each returned path is
transformed into a local absolute path. Otherwise the
raw value from ``installed-files.txt`` is returned.
:type absolute: boolean
:returns: iterator of paths
| 60 | 49 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def list_distinfo_files(self, absolute=False):
record_path = os.path.join(self.path, 'installed-files.txt')
if os.path.exists(record_path):
skip = True
with codecs.open(record_path, 'r', encoding='utf-8') as f:
for line in f:
line = line.strip()
if line == './':
skip = False
continue
if not skip:
p = os.path.normpath(os.path.join(self.path, line))
if p.startswith(self.path):
if absolute:
yield p
else:
yield line
```
###Assistant :
Iterates over the ``installed-files.txt`` entries and returns paths for
each line if the path is pointing to a file located in the
``.egg-info`` directory or one of its subdirectories.
:parameter absolute: If *absolute* is ``True``, each returned path is
transformed into a local absolute path. Otherwise the
raw value from ``installed-files.txt`` is returned.
:type absolute: boolean
:returns: iterator of paths
|
2,264 | def mixin_scalable_deployment_parser(parser):
gp = mixin_base_deployment_parser(parser, title='Scalable Deployment')
gp.add_argument(
'--polling',
type=str,
default=PollingType.ANY.name,
help=,
)
gp.add_argument(
'--shards',
type=int,
default=1,
help='The number of shards in the deployment running at the same time. For more details check '
'https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies',
)
gp.add_argument(
'--replicas',
type=int,
default=1,
help='The number of replicas in the deployment',
)
gp.add_argument(
'--native',
action='store_true',
default=False,
help='If set, only native Executors are allowed, and the Executor is always run inside WorkerRuntime.',
)
| Mixing in arguments required by a scalable deployment into the given parser.
The deployment is scalable and can have shards, replicas and polling
:param parser: the parser instance to which we add arguments
The polling strategy of the Deployment and its endpoints (when `shards>1`).
Can be defined for all endpoints of a Deployment or by endpoint.
Define per Deployment:
- ANY: only one (whoever is idle) Pod polls the message
- ALL: all Pods poll the message (like a broadcast)
Define per Endpoint:
JSON dict, {endpoint: PollingType}
{'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}
| 93 | 68 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def mixin_scalable_deployment_parser(parser):
gp = mixin_base_deployment_parser(parser, title='Scalable Deployment')
gp.add_argument(
'--polling',
type=str,
default=PollingType.ANY.name,
help=,
)
gp.add_argument(
'--shards',
type=int,
default=1,
help='The number of shards in the deployment running at the same time. For more details check '
'https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies',
)
gp.add_argument(
'--replicas',
type=int,
default=1,
help='The number of replicas in the deployment',
)
gp.add_argument(
'--native',
action='store_true',
default=False,
help='If set, only native Executors is allowed, and the Executor is always run inside WorkerRuntime.',
)
```
###Assistant : Mixing in arguments required by a scalable deployment into the given parser.
The deployment is scalable and can have shards, replicas and polling
:param parser: the parser instance to which we add arguments
The polling strategy of the Deployment and its endpoints (when `shards>1`).
Can be defined for all endpoints of a Deployment or by endpoint.
Define per Deployment:
- ANY: only one (whoever is idle) Pod polls the message
- ALL: all Pods poll the message (like a broadcast)
Define per Endpoint:
JSON dict, {endpoint: PollingType}
{'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}
|
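A minimal argparse sketch of the same pattern, independent of jina itself: an argument group with polling, shards and replicas flags whose names mirror the snippet above; the defaults and help text are illustrative only.

```Python
import argparse

parser = argparse.ArgumentParser()
group = parser.add_argument_group("Scalable Deployment")
# Flag names mirror the snippet above; defaults are illustrative.
group.add_argument("--polling", type=str, default="ANY",
                   help="Polling strategy: ANY, ALL, or a JSON dict per endpoint.")
group.add_argument("--shards", type=int, default=1,
                   help="Number of shards running at the same time.")
group.add_argument("--replicas", type=int, default=1,
                   help="Number of replicas in the deployment.")

args = parser.parse_args(["--shards", "2", "--replicas", "3"])
print(args.polling, args.shards, args.replicas)
```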
2,265 | def test_explorer_list_private(self):
response = self.client.get(
reverse("wagtailadmin_explore", args=(self.private_page.id,))
)
# Check the response
self.assertEqual(response.status_code, 200)
# Must have one privacy icon (next to the private child page)
self.assertContains(
response,
'<span class="indicator privacy-indicator icon icon-no-view"',
count=1,
)
|
This tests that there is a padlock displayed
next to the private child page in the private pages explorer listing
| 20 | 35 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_explorer_list_private(self):
response = self.client.get(
reverse("wagtailadmin_explore", args=(self.private_page.id,))
)
# Check the response
self.assertEqual(response.status_code, 200)
# Must have one privacy icon (next to the private child page)
self.assertContains(
response,
'<span class="indicator privacy-indicator icon icon-no-view"',
count=1,
)
```
###Assistant :
This tests that there is a padlock displayed
next to the private child page in the private pages explorer listing
|
2,266 | def next(self):
self._check("ra")
if self.firstmember is not None:
m = self.firstmember
self.firstmember = None
return m
# Read the next block.
self.fileobj.seek(self.offset)
tarinfo = None
while True:
try:
tarinfo = self.tarinfo.fromtarfile(self)
except EOFHeaderError as e:
if self.ignore_zeros:
self._dbg(2, "0x%X: %s" % (self.offset, e))
self.offset += BLOCKSIZE
continue
except InvalidHeaderError as e:
if self.ignore_zeros:
self._dbg(2, "0x%X: %s" % (self.offset, e))
self.offset += BLOCKSIZE
continue
elif self.offset == 0:
raise ReadError(str(e))
except EmptyHeaderError:
if self.offset == 0:
raise ReadError("empty file")
except TruncatedHeaderError as e:
if self.offset == 0:
raise ReadError(str(e))
except SubsequentHeaderError as e:
raise ReadError(str(e))
break
if tarinfo is not None:
self.members.append(tarinfo)
else:
self._loaded = True
return tarinfo
#--------------------------------------------------------------------------
# Little helper methods:
| Return the next member of the archive as a TarInfo object, when
TarFile is opened for reading. Return None if there is no more
available.
| 25 | 112 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def next(self):
self._check("ra")
if self.firstmember is not None:
m = self.firstmember
self.firstmember = None
return m
# Read the next block.
self.fileobj.seek(self.offset)
tarinfo = None
while True:
try:
tarinfo = self.tarinfo.fromtarfile(self)
except EOFHeaderError as e:
if self.ignore_zeros:
self._dbg(2, "0x%X: %s" % (self.offset, e))
self.offset += BLOCKSIZE
continue
except InvalidHeaderError as e:
if self.ignore_zeros:
self._dbg(2, "0x%X: %s" % (self.offset, e))
self.offset += BLOCKSIZE
continue
elif self.offset == 0:
raise ReadError(str(e))
except EmptyHeaderError:
if self.offset == 0:
raise ReadError("empty file")
except TruncatedHeaderError as e:
if self.offset == 0:
raise ReadError(str(e))
except SubsequentHeaderError as e:
raise ReadError(str(e))
break
if tarinfo is not None:
self.members.append(tarinfo)
else:
self._loaded = True
return tarinfo
#--------------------------------------------------------------------------
# Little helper methods:
```
###Assistant : Return the next member of the archive as a TarInfo object, when
TarFile is opened for reading. Return None if there is no more
available.
|
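A small runnable sketch of how next() is typically used: build an in-memory archive with the standard tarfile module, then pull members one at a time until None is returned. The member name and payload are made up.

```Python
import io
import tarfile

# Build a tiny in-memory archive to iterate over.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"hello"
    info = tarfile.TarInfo(name="hello.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Walk the archive member by member; next() returns None when exhausted.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    member = tar.next()
    while member is not None:
        print(member.name, member.size)
        member = tar.next()
```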
2,267 | def predict_proba(self, X):
check_is_fitted(self)
ovr = self.multi_class in ["ovr", "warn"] or (
self.multi_class == "auto"
and (
self.classes_.size <= 2
or self.solver in ("liblinear", "newton-cholesky")
)
)
if ovr:
return super()._predict_proba_lr(X)
else:
decision = self.decision_function(X)
if decision.ndim == 1:
# Workaround for multi_class="multinomial" and binary outcomes
# which requires softmax prediction with only a 1D decision.
decision_2d = np.c_[-decision, decision]
else:
decision_2d = decision
return softmax(decision_2d, copy=False)
|
Probability estimates.
The returned estimates for all classes are ordered by the
label of classes.
For a multi_class problem, if multi_class is set to be "multinomial"
the softmax function is used to find the predicted probability of
each class.
Else use a one-vs-rest approach, i.e. calculate the probability
of each class assuming it to be positive using the logistic function,
and normalize these values across all the classes.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Vector to be scored, where `n_samples` is the number of samples and
`n_features` is the number of features.
Returns
-------
T : array-like of shape (n_samples, n_classes)
Returns the probability of the sample for each class in the model,
where classes are ordered as they are in ``self.classes_``.
| 125 | 67 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def predict_proba(self, X):
check_is_fitted(self)
ovr = self.multi_class in ["ovr", "warn"] or (
self.multi_class == "auto"
and (
self.classes_.size <= 2
or self.solver in ("liblinear", "newton-cholesky")
)
)
if ovr:
return super()._predict_proba_lr(X)
else:
decision = self.decision_function(X)
if decision.ndim == 1:
# Workaround for multi_class="multinomial" and binary outcomes
# which requires softmax prediction with only a 1D decision.
decision_2d = np.c_[-decision, decision]
else:
decision_2d = decision
return softmax(decision_2d, copy=False)
```
###Assistant :
Probability estimates.
The returned estimates for all classes are ordered by the
label of classes.
For a multi_class problem, if multi_class is set to be "multinomial"
the softmax function is used to find the predicted probability of
each class.
Else use a one-vs-rest approach, i.e. calculate the probability
of each class assuming it to be positive using the logistic function,
and normalize these values across all the classes.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Vector to be scored, where `n_samples` is the number of samples and
`n_features` is the number of features.
Returns
-------
T : array-like of shape (n_samples, n_classes)
Returns the probability of the sample for each class in the model,
where classes are ordered as they are in ``self.classes_``.
|
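A NumPy-only sketch of the binary workaround described above — expanding a 1-D decision into two columns before the softmax. softmax_probabilities is a hypothetical helper, not part of scikit-learn's API.

```Python
import numpy as np

def softmax_probabilities(decision):
    # For a 1-D (binary) decision, build the two-column form [-d, d] first,
    # mirroring the workaround in the method above.
    decision = np.asarray(decision, dtype=float)
    decision_2d = np.c_[-decision, decision] if decision.ndim == 1 else decision
    # Numerically stable softmax over the class axis.
    shifted = decision_2d - decision_2d.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

print(softmax_probabilities(np.array([0.0, 2.0, -1.5])))
```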
2,268 | def selu(x):
return tf.nn.selu(x)
@keras_export("keras.activations.softplus")
@tf.__internal__.dispatch.add_dispatch_support | Scaled Exponential Linear Unit (SELU).
The Scaled Exponential Linear Unit (SELU) activation function is defined as:
- `if x > 0: return scale * x`
- `if x < 0: return scale * alpha * (exp(x) - 1)`
where `alpha` and `scale` are pre-defined constants
(`alpha=1.67326324` and `scale=1.05070098`).
Basically, the SELU activation function multiplies `scale` (> 1) with the
output of the `tf.keras.activations.elu` function to ensure a slope larger
than one for positive inputs.
The values of `alpha` and `scale` are
chosen so that the mean and variance of the inputs are preserved
between two consecutive layers as long as the weights are initialized
correctly (see `tf.keras.initializers.LecunNormal` initializer)
and the number of input units is "large enough"
(see reference paper for more information).
Example Usage:
>>> num_classes = 10 # 10-class problem
>>> model = tf.keras.Sequential()
>>> model.add(tf.keras.layers.Dense(64, kernel_initializer='lecun_normal',
... activation='selu'))
>>> model.add(tf.keras.layers.Dense(32, kernel_initializer='lecun_normal',
... activation='selu'))
>>> model.add(tf.keras.layers.Dense(16, kernel_initializer='lecun_normal',
... activation='selu'))
>>> model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))
Args:
x: A tensor or variable to compute the activation function for.
Returns:
The scaled exponential unit activation: `scale * elu(x, alpha)`.
Notes:
- To be used together with the
`tf.keras.initializers.LecunNormal` initializer.
- To be used together with the dropout variant
`tf.keras.layers.AlphaDropout` (not regular dropout).
References:
- [Klambauer et al., 2017](https://arxiv.org/abs/1706.02515)
| 205 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def selu(x):
return tf.nn.selu(x)
@keras_export("keras.activations.softplus")
@tf.__internal__.dispatch.add_dispatch_support
```
###Assistant : Scaled Exponential Linear Unit (SELU).
The Scaled Exponential Linear Unit (SELU) activation function is defined as:
- `if x > 0: return scale * x`
- `if x < 0: return scale * alpha * (exp(x) - 1)`
where `alpha` and `scale` are pre-defined constants
(`alpha=1.67326324` and `scale=1.05070098`).
Basically, the SELU activation function multiplies `scale` (> 1) with the
output of the `tf.keras.activations.elu` function to ensure a slope larger
than one for positive inputs.
The values of `alpha` and `scale` are
chosen so that the mean and variance of the inputs are preserved
between two consecutive layers as long as the weights are initialized
correctly (see `tf.keras.initializers.LecunNormal` initializer)
and the number of input units is "large enough"
(see reference paper for more information).
Example Usage:
>>> num_classes = 10 # 10-class problem
>>> model = tf.keras.Sequential()
>>> model.add(tf.keras.layers.Dense(64, kernel_initializer='lecun_normal',
... activation='selu'))
>>> model.add(tf.keras.layers.Dense(32, kernel_initializer='lecun_normal',
... activation='selu'))
>>> model.add(tf.keras.layers.Dense(16, kernel_initializer='lecun_normal',
... activation='selu'))
>>> model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))
Args:
x: A tensor or variable to compute the activation function for.
Returns:
The scaled exponential unit activation: `scale * elu(x, alpha)`.
Notes:
- To be used together with the
`tf.keras.initializers.LecunNormal` initializer.
- To be used together with the dropout variant
`tf.keras.layers.AlphaDropout` (not regular dropout).
References:
- [Klambauer et al., 2017](https://arxiv.org/abs/1706.02515)
|
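A NumPy reference sketch of the SELU formula quoted in the docstring; selu_reference is a hypothetical helper, not the Keras implementation.

```Python
import numpy as np

def selu_reference(x, alpha=1.67326324, scale=1.05070098):
    # scale * x for positive inputs, scale * alpha * (exp(x) - 1) otherwise.
    x = np.asarray(x, dtype=float)
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

print(selu_reference([-1.0, 0.0, 1.0]))
```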
2,269 | def _find_safe_escape_tokens(self, text):
for i, t, v in self._filter_to(
self.lang.get_tokens_unprocessed(text),
lambda t: t in Token.Comment or t in Token.String
):
if t is None:
for i2, t2, v2 in self._find_escape_tokens(v):
yield i + i2, t2, v2
else:
yield i, None, v
| find escape tokens that are not in strings or comments | 10 | 41 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _find_safe_escape_tokens(self, text):
for i, t, v in self._filter_to(
self.lang.get_tokens_unprocessed(text),
lambda t: t in Token.Comment or t in Token.String
):
if t is None:
for i2, t2, v2 in self._find_escape_tokens(v):
yield i + i2, t2, v2
else:
yield i, None, v
```
###Assistant : find escape tokens that are not in strings or comments
|
2,270 | def set_up_fileselector(quteproc, py_proc, kind, files, output_type):
cmd, args = py_proc(r)
args += files.split(' ')
if output_type == "a temporary file":
args += ['--file={}']
fileselect_cmd = json.dumps([cmd, *args])
quteproc.set_setting('fileselect.handler', 'external')
quteproc.set_setting(f'fileselect.{kind}.command', fileselect_cmd)
| Set up fileselect.xxx.command to select the file(s).
import os
import sys
tmp_file = None
for i, arg in enumerate(sys.argv):
if arg.startswith('--file='):
tmp_file = arg[len('--file='):]
sys.argv.pop(i)
break
selected_files = sys.argv[1:]
if tmp_file is None:
for selected_file in selected_files:
print(os.path.abspath(selected_file))
else:
with open(tmp_file, 'w') as f:
for selected_file in selected_files:
f.write(os.path.abspath(selected_file) + '\n')
| 51 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def set_up_fileselector(quteproc, py_proc, kind, files, output_type):
cmd, args = py_proc(r)
args += files.split(' ')
if output_type == "a temporary file":
args += ['--file={}']
fileselect_cmd = json.dumps([cmd, *args])
quteproc.set_setting('fileselect.handler', 'external')
quteproc.set_setting(f'fileselect.{kind}.command', fileselect_cmd)
```
###Assistant : Set up fileselect.xxx.command to select the file(s).
import os
import sys
tmp_file = None
for i, arg in enumerate(sys.argv):
if arg.startswith('--file='):
tmp_file = arg[len('--file='):]
sys.argv.pop(i)
break
selected_files = sys.argv[1:]
if tmp_file is None:
for selected_file in selected_files:
print(os.path.abspath(selected_file))
else:
with open(tmp_file, 'w') as f:
for selected_file in selected_files:
f.write(os.path.abspath(selected_file) + '\n')
|
2,271 | def task_runner(request):
if not hasattr(request.param, "_pytestfixturefunction"):
raise TypeError("Received invalid `task_runner` parameter. Expected fixture.")
yield request.getfixturevalue(request.param.__name__)
|
An indirect fixture that expects to receive a pytest fixture that yields a task
runner.
| 15 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def task_runner(request):
if not hasattr(request.param, "_pytestfixturefunction"):
raise TypeError("Received invalid `task_runner` parameter. Expected fixture.")
yield request.getfixturevalue(request.param.__name__)
```
###Assistant :
An indirect fixture that expects to receive a pytest fixture that yields a task
runner.
|
2,272 | def test_get_apns_context(self) -> None:
import zerver.lib.push_notifications
zerver.lib.push_notifications.get_apns_context.cache_clear()
try:
with self.settings(APNS_CERT_FILE="/foo.pem"), mock.patch("aioapns.APNs") as mock_apns:
apns_context = get_apns_context()
assert apns_context is not None
try:
self.assertEqual(mock_apns.return_value, apns_context.apns)
finally:
apns_context.loop.close()
finally:
# Reset the cache for `get_apns_context` so that we don't
# leak changes to the rest of the world.
zerver.lib.push_notifications.get_apns_context.cache_clear()
| This test is pretty hacky, and needs to carefully reset the state
it modifies in order to avoid leaking state that can lead to
nondeterministic results for other tests.
| 29 | 47 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_get_apns_context(self) -> None:
import zerver.lib.push_notifications
zerver.lib.push_notifications.get_apns_context.cache_clear()
try:
with self.settings(APNS_CERT_FILE="/foo.pem"), mock.patch("aioapns.APNs") as mock_apns:
apns_context = get_apns_context()
assert apns_context is not None
try:
self.assertEqual(mock_apns.return_value, apns_context.apns)
finally:
apns_context.loop.close()
finally:
# Reset the cache for `get_apns_context` so that we don't
# leak changes to the rest of the world.
zerver.lib.push_notifications.get_apns_context.cache_clear()
```
###Assistant : This test is pretty hacky, and needs to carefully reset the state
it modifies in order to avoid leaking state that can lead to
nondeterministic results for other tests.
|
2,273 | def _validate(self) -> None:
if (self._args.writer == "ffmpeg" and
not self._images.is_video and
self._args.reference_video is None):
raise FaceswapError("Output as video selected, but using frames as input. You must "
"provide a reference video ('-ref', '--reference-video').")
if (self._args.on_the_fly and
self._args.mask_type not in ("none", "extended", "components")):
logger.warning("You have selected an incompatible mask type ('%s') for On-The-Fly "
"conversion. Switching to 'extended'", self._args.mask_type)
self._args.mask_type = "extended"
if (not self._args.on_the_fly and
self._args.mask_type not in ("none", "predicted") and
not self._alignments.mask_is_valid(self._args.mask_type)):
msg = (f"You have selected the Mask Type `{self._args.mask_type}` but at least one "
"face does not have this mask stored in the Alignments File.\nYou should "
"generate the required masks with the Mask Tool or set the Mask Type option to "
"an existing Mask Type.\nA summary of existing masks is as follows:\nTotal "
f"faces: {self._alignments.faces_count}, "
f"Masks: {self._alignments.mask_summary}")
raise FaceswapError(msg)
if self._args.mask_type == "predicted" and not self._predictor.has_predicted_mask:
available_masks = [k for k, v in self._alignments.mask_summary.items()
if k != "none" and v == self._alignments.faces_count]
if not available_masks:
msg = ("Predicted Mask selected, but the model was not trained with a mask and no "
"masks are stored in the Alignments File.\nYou should generate the "
"required masks with the Mask Tool or set the Mask Type to `none`.")
raise FaceswapError(msg)
mask_type = available_masks[0]
logger.warning("Predicted Mask selected, but the model was not trained with a "
"mask. Selecting first available mask: '%s'", mask_type)
self._args.mask_type = mask_type
| Validate the Command Line Options.
Ensure that certain cli selections are valid and won't result in an error. Checks:
* If frames have been passed in with video output, ensure user supplies reference
video.
* If "on-the-fly" and a Neural Network mask is selected, warn and switch to 'extended'
* If a mask-type is selected, ensure it exists in the alignments file.
* If a predicted mask-type is selected, ensure model has been trained with a mask
otherwise attempt to select the first available mask; if none exist, raise an error.
Raises
------
FaceswapError
If an invalid selection has been found.
| 97 | 230 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _validate(self) -> None:
if (self._args.writer == "ffmpeg" and
not self._images.is_video and
self._args.reference_video is None):
raise FaceswapError("Output as video selected, but using frames as input. You must "
"provide a reference video ('-ref', '--reference-video').")
if (self._args.on_the_fly and
self._args.mask_type not in ("none", "extended", "components")):
logger.warning("You have selected an incompatible mask type ('%s') for On-The-Fly "
"conversion. Switching to 'extended'", self._args.mask_type)
self._args.mask_type = "extended"
if (not self._args.on_the_fly and
self._args.mask_type not in ("none", "predicted") and
not self._alignments.mask_is_valid(self._args.mask_type)):
msg = (f"You have selected the Mask Type `{self._args.mask_type}` but at least one "
"face does not have this mask stored in the Alignments File.\nYou should "
"generate the required masks with the Mask Tool or set the Mask Type option to "
"an existing Mask Type.\nA summary of existing masks is as follows:\nTotal "
f"faces: {self._alignments.faces_count}, "
f"Masks: {self._alignments.mask_summary}")
raise FaceswapError(msg)
if self._args.mask_type == "predicted" and not self._predictor.has_predicted_mask:
available_masks = [k for k, v in self._alignments.mask_summary.items()
if k != "none" and v == self._alignments.faces_count]
if not available_masks:
msg = ("Predicted Mask selected, but the model was not trained with a mask and no "
"masks are stored in the Alignments File.\nYou should generate the "
"required masks with the Mask Tool or set the Mask Type to `none`.")
raise FaceswapError(msg)
mask_type = available_masks[0]
logger.warning("Predicted Mask selected, but the model was not trained with a "
"mask. Selecting first available mask: '%s'", mask_type)
self._args.mask_type = mask_type
```
###Assistant : Validate the Command Line Options.
Ensure that certain cli selections are valid and won't result in an error. Checks:
* If frames have been passed in with video output, ensure user supplies reference
video.
* If "on-the-fly" and a Neural Network mask is selected, warn and switch to 'extended'
* If a mask-type is selected, ensure it exists in the alignments file.
* If a predicted mask-type is selected, ensure model has been trained with a mask
otherwise attempt to select the first available mask; if none exist, raise an error.
Raises
------
FaceswapError
If an invalid selection has been found.
|
2,274 | def get_authenticators(self) -> List[BaseAuthentication]:
# TODO: Increase test coverage and get this working for monolith mode.
if SiloMode.get_current_mode() == SiloMode.MONOLITH:
return super().get_authenticators()
last_api_authenticator = ApiAuthentication([])
result: List[BaseAuthentication] = []
for authenticator_cls in self.authentication_classes:
auth_type = ApiAuthenticatorType.from_authenticator(authenticator_cls)
if auth_type:
last_api_authenticator.types.append(auth_type)
else:
if last_api_authenticator.types:
result.append(last_api_authenticator)
last_api_authenticator = ApiAuthentication([])
result.append(authenticator_cls())
if last_api_authenticator.types:
result.append(last_api_authenticator)
return result
|
Instantiates and returns the list of authenticators that this view can use.
Aggregates together authenticators that can be supported using HybridCloud.
| 21 | 52 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_authenticators(self) -> List[BaseAuthentication]:
# TODO: Increase test coverage and get this working for monolith mode.
if SiloMode.get_current_mode() == SiloMode.MONOLITH:
return super().get_authenticators()
last_api_authenticator = ApiAuthentication([])
result: List[BaseAuthentication] = []
for authenticator_cls in self.authentication_classes:
auth_type = ApiAuthenticatorType.from_authenticator(authenticator_cls)
if auth_type:
last_api_authenticator.types.append(auth_type)
else:
if last_api_authenticator.types:
result.append(last_api_authenticator)
last_api_authenticator = ApiAuthentication([])
result.append(authenticator_cls())
if last_api_authenticator.types:
result.append(last_api_authenticator)
return result
```
###Assistant :
Instantiates and returns the list of authenticators that this view can use.
Aggregates together authenticators that can be supported using HybridCloud.
|
2,275 | def test_naive_lowest_common_ancestor2(self):
G = nx.DiGraph()
G.add_edge(0, 1)
G.add_edge(2, 0)
G.add_edge(2, 3)
G.add_edge(4, 0)
G.add_edge(5, 2)
assert naive_lca(G, 1, 3) == 2
| Test that the one-pair function works for issue #4942. | 9 | 21 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_naive_lowest_common_ancestor2(self):
G = nx.DiGraph()
G.add_edge(0, 1)
G.add_edge(2, 0)
G.add_edge(2, 3)
G.add_edge(4, 0)
G.add_edge(5, 2)
assert naive_lca(G, 1, 3) == 2
```
###Assistant : Test that the one-pair function works for issue #4942.
|
2,276 | def get_references(state, model_tuple, field_tuple=()):
for state_model_tuple, model_state in state.models.items():
for name, field in model_state.fields.items():
reference = field_references(
state_model_tuple, field, model_tuple, *field_tuple
)
if reference:
yield model_state, name, field, reference
|
Generator of (model_state, name, field, reference) referencing
provided context.
If field_tuple is provided only references to this particular field of
model_tuple will be generated.
| 24 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_references(state, model_tuple, field_tuple=()):
for state_model_tuple, model_state in state.models.items():
for name, field in model_state.fields.items():
reference = field_references(
state_model_tuple, field, model_tuple, *field_tuple
)
if reference:
yield model_state, name, field, reference
```
###Assistant :
Generator of (model_state, name, field, reference) referencing
provided context.
If field_tuple is provided only references to this particular field of
model_tuple will be generated.
|
2,277 | async def test_media_player_eq_bands_not_supported(hass):
device = (
"media_player.test_bands",
"on",
{
"friendly_name": "Test media player",
"supported_features": SUPPORT_SELECT_SOUND_MODE,
"sound_mode": "tv",
"sound_mode_list": ["movie", "music", "night", "sport", "tv", "rocknroll"],
},
)
await discovery_test(device, hass)
context = Context()
# Test for SetBands Error
request = get_new_request(
"Alexa.EqualizerController", "SetBands", "media_player#test_bands"
)
request["directive"]["payload"] = {"bands": [{"name": "BASS", "value": -2}]}
msg = await smart_home.async_handle_message(
hass, get_default_config(hass), request, context
)
assert "event" in msg
msg = msg["event"]
assert msg["header"]["name"] == "ErrorResponse"
assert msg["header"]["namespace"] == "Alexa"
assert msg["payload"]["type"] == "INVALID_DIRECTIVE"
# Test for AdjustBands Error
request = get_new_request(
"Alexa.EqualizerController", "AdjustBands", "media_player#test_bands"
)
request["directive"]["payload"] = {
"bands": [{"name": "BASS", "levelDelta": 3, "levelDirection": "UP"}]
}
msg = await smart_home.async_handle_message(
hass, get_default_config(hass), request, context
)
assert "event" in msg
msg = msg["event"]
assert msg["header"]["name"] == "ErrorResponse"
assert msg["header"]["namespace"] == "Alexa"
assert msg["payload"]["type"] == "INVALID_DIRECTIVE"
# Test for ResetBands Error
request = get_new_request(
"Alexa.EqualizerController", "ResetBands", "media_player#test_bands"
)
request["directive"]["payload"] = {
"bands": [{"name": "BASS", "levelDelta": 3, "levelDirection": "UP"}]
}
msg = await smart_home.async_handle_message(
hass, get_default_config(hass), request, context
)
assert "event" in msg
msg = msg["event"]
assert msg["header"]["name"] == "ErrorResponse"
assert msg["header"]["namespace"] == "Alexa"
assert msg["payload"]["type"] == "INVALID_DIRECTIVE"
| Test EqualizerController bands directive not supported. | 6 | 181 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_media_player_eq_bands_not_supported(hass):
device = (
"media_player.test_bands",
"on",
{
"friendly_name": "Test media player",
"supported_features": SUPPORT_SELECT_SOUND_MODE,
"sound_mode": "tv",
"sound_mode_list": ["movie", "music", "night", "sport", "tv", "rocknroll"],
},
)
await discovery_test(device, hass)
context = Context()
# Test for SetBands Error
request = get_new_request(
"Alexa.EqualizerController", "SetBands", "media_player#test_bands"
)
request["directive"]["payload"] = {"bands": [{"name": "BASS", "value": -2}]}
msg = await smart_home.async_handle_message(
hass, get_default_config(hass), request, context
)
assert "event" in msg
msg = msg["event"]
assert msg["header"]["name"] == "ErrorResponse"
assert msg["header"]["namespace"] == "Alexa"
assert msg["payload"]["type"] == "INVALID_DIRECTIVE"
# Test for AdjustBands Error
request = get_new_request(
"Alexa.EqualizerController", "AdjustBands", "media_player#test_bands"
)
request["directive"]["payload"] = {
"bands": [{"name": "BASS", "levelDelta": 3, "levelDirection": "UP"}]
}
msg = await smart_home.async_handle_message(
hass, get_default_config(hass), request, context
)
assert "event" in msg
msg = msg["event"]
assert msg["header"]["name"] == "ErrorResponse"
assert msg["header"]["namespace"] == "Alexa"
assert msg["payload"]["type"] == "INVALID_DIRECTIVE"
# Test for ResetBands Error
request = get_new_request(
"Alexa.EqualizerController", "ResetBands", "media_player#test_bands"
)
request["directive"]["payload"] = {
"bands": [{"name": "BASS", "levelDelta": 3, "levelDirection": "UP"}]
}
msg = await smart_home.async_handle_message(
hass, get_default_config(hass), request, context
)
assert "event" in msg
msg = msg["event"]
assert msg["header"]["name"] == "ErrorResponse"
assert msg["header"]["namespace"] == "Alexa"
assert msg["payload"]["type"] == "INVALID_DIRECTIVE"
```
###Assistant : Test EqualizerController bands directive not supported.
|
2,278 | def lstsq(a, b):
q, r = qr(a)
x = solve_triangular(r, q.T.conj().dot(b))
residuals = b - a.dot(x)
residuals = abs(residuals**2).sum(axis=0, keepdims=b.ndim == 1)
token = tokenize(a, b)
# r must be a triangular with single block
# rank
rname = "lstsq-rank-" + token
rdsk = {(rname,): (np.linalg.matrix_rank, (r.name, 0, 0))}
graph = HighLevelGraph.from_collections(rname, rdsk, dependencies=[r])
# rank must be an integer
rank = Array(graph, rname, shape=(), chunks=(), dtype=int)
# singular
sname = "lstsq-singular-" + token
rt = r.T.conj()
sdsk = {
(sname, 0): (
_reverse,
(np.sqrt, (np.linalg.eigvalsh, (np.dot, (rt.name, 0, 0), (r.name, 0, 0)))),
)
}
graph = HighLevelGraph.from_collections(sname, sdsk, dependencies=[rt, r])
meta = meta_from_array(residuals, 1)
s = Array(graph, sname, shape=(r.shape[0],), chunks=r.shape[0], meta=meta)
return x, residuals, rank, s
@derived_from(np.linalg) |
Return the least-squares solution to a linear matrix equation using
QR decomposition.
Solves the equation `a x = b` by computing a vector `x` that
minimizes the Euclidean 2-norm `|| b - a x ||^2`. The equation may
be under-, well-, or over- determined (i.e., the number of
linearly independent rows of `a` can be less than, equal to, or
greater than its number of linearly independent columns). If `a`
is square and of full rank, then `x` (but for round-off error) is
the "exact" solution of the equation.
Parameters
----------
a : (M, N) array_like
"Coefficient" matrix.
b : {(M,), (M, K)} array_like
Ordinate or "dependent variable" values. If `b` is two-dimensional,
the least-squares solution is calculated for each of the `K` columns
of `b`.
Returns
-------
x : {(N,), (N, K)} Array
Least-squares solution. If `b` is two-dimensional,
the solutions are in the `K` columns of `x`.
residuals : {(1,), (K,)} Array
Sums of residuals; squared Euclidean 2-norm for each column in
``b - a*x``.
If `b` is 1-dimensional, this is a (1,) shape array.
Otherwise the shape is (K,).
rank : Array
Rank of matrix `a`.
s : (min(M, N),) Array
Singular values of `a`.
| 198 | 118 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def lstsq(a, b):
q, r = qr(a)
x = solve_triangular(r, q.T.conj().dot(b))
residuals = b - a.dot(x)
residuals = abs(residuals**2).sum(axis=0, keepdims=b.ndim == 1)
token = tokenize(a, b)
# r must be a triangular with single block
# rank
rname = "lstsq-rank-" + token
rdsk = {(rname,): (np.linalg.matrix_rank, (r.name, 0, 0))}
graph = HighLevelGraph.from_collections(rname, rdsk, dependencies=[r])
# rank must be an integer
rank = Array(graph, rname, shape=(), chunks=(), dtype=int)
# singular
sname = "lstsq-singular-" + token
rt = r.T.conj()
sdsk = {
(sname, 0): (
_reverse,
(np.sqrt, (np.linalg.eigvalsh, (np.dot, (rt.name, 0, 0), (r.name, 0, 0)))),
)
}
graph = HighLevelGraph.from_collections(sname, sdsk, dependencies=[rt, r])
meta = meta_from_array(residuals, 1)
s = Array(graph, sname, shape=(r.shape[0],), chunks=r.shape[0], meta=meta)
return x, residuals, rank, s
@derived_from(np.linalg)
```
###Assistant :
Return the least-squares solution to a linear matrix equation using
QR decomposition.
Solves the equation `a x = b` by computing a vector `x` that
minimizes the Euclidean 2-norm `|| b - a x ||^2`. The equation may
be under-, well-, or over- determined (i.e., the number of
linearly independent rows of `a` can be less than, equal to, or
greater than its number of linearly independent columns). If `a`
is square and of full rank, then `x` (but for round-off error) is
the "exact" solution of the equation.
Parameters
----------
a : (M, N) array_like
"Coefficient" matrix.
b : {(M,), (M, K)} array_like
Ordinate or "dependent variable" values. If `b` is two-dimensional,
the least-squares solution is calculated for each of the `K` columns
of `b`.
Returns
-------
x : {(N,), (N, K)} Array
Least-squares solution. If `b` is two-dimensional,
the solutions are in the `K` columns of `x`.
residuals : {(1,), (K,)} Array
Sums of residuals; squared Euclidean 2-norm for each column in
``b - a*x``.
If `b` is 1-dimensional, this is a (1,) shape array.
Otherwise the shape is (K,).
rank : Array
Rank of matrix `a`.
s : (min(M, N),) Array
Singular values of `a`.
|
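NumPy's np.linalg.lstsq returns the same four values described above, so a small dense example illustrates the interface; the data values are arbitrary.

```Python
import numpy as np

# Fit y ~ c0 + c1 * t with three observations (arbitrary data).
a = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

x, residuals, rank, s = np.linalg.lstsq(a, b, rcond=None)
print("solution:", x)
print("residuals:", residuals)
print("rank:", rank)
print("singular values:", s)
```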
2,279 | def docker_environment(): # type: () -> t.Dict[str, str]
env = common_environment()
env.update(dict((key, os.environ[key]) for key in os.environ if key.startswith('DOCKER_') or key.startswith('CONTAINER_')))
return env
| Return a dictionary of docker related environment variables found in the current environment. | 13 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def docker_environment(): # type: () -> t.Dict[str, str]
env = common_environment()
env.update(dict((key, os.environ[key]) for key in os.environ if key.startswith('DOCKER_') or key.startswith('CONTAINER_')))
return env
```
###Assistant : Return a dictionary of docker related environment variables found in the current environment.
|
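The prefix filtering above can be sketched on its own with a dict comprehension over os.environ; the helper name is hypothetical.

```Python
import os

def env_with_prefixes(*prefixes):
    # str.startswith accepts a tuple, so one comprehension covers all
    # requested prefixes (e.g. DOCKER_ and CONTAINER_).
    return {key: value for key, value in os.environ.items()
            if key.startswith(prefixes)}

print(env_with_prefixes("DOCKER_", "CONTAINER_"))
```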
2,280 | def test_remove_from_figure(use_gridspec):
fig, ax = plt.subplots()
sc = ax.scatter([1, 2], [3, 4])
sc.set_array(np.array([5, 6]))
pre_position = ax.get_position()
cb = fig.colorbar(sc, use_gridspec=use_gridspec)
fig.subplots_adjust()
cb.remove()
fig.subplots_adjust()
post_position = ax.get_position()
assert (pre_position.get_points() == post_position.get_points()).all()
|
Test `remove` with the specified ``use_gridspec`` setting
| 7 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_remove_from_figure(use_gridspec):
fig, ax = plt.subplots()
sc = ax.scatter([1, 2], [3, 4])
sc.set_array(np.array([5, 6]))
pre_position = ax.get_position()
cb = fig.colorbar(sc, use_gridspec=use_gridspec)
fig.subplots_adjust()
cb.remove()
fig.subplots_adjust()
post_position = ax.get_position()
assert (pre_position.get_points() == post_position.get_points()).all()
```
###Assistant :
Test `remove` with the specified ``use_gridspec`` setting
|
2,281 | def groupby(func, seq):
d = {}
for item in seq:
key = func(item)
if key not in d:
d[key] = []
d[key].append(item)
return d
| Group a collection by a key function
>>> from sympy.multipledispatch.utils import groupby
>>> names = ['Alice', 'Bob', 'Charlie', 'Dan', 'Edith', 'Frank']
>>> groupby(len, names) # doctest: +SKIP
{3: ['Bob', 'Dan'], 5: ['Alice', 'Edith', 'Frank'], 7: ['Charlie']}
>>> iseven = lambda x: x % 2 == 0
>>> groupby(iseven, [1, 2, 3, 4, 5, 6, 7, 8]) # doctest: +SKIP
{False: [1, 3, 5, 7], True: [2, 4, 6, 8]}
See Also:
``countby``
| 72 | 24 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def groupby(func, seq):
d = {}
for item in seq:
key = func(item)
if key not in d:
d[key] = []
d[key].append(item)
return d
```
###Assistant : Group a collection by a key function
>>> from sympy.multipledispatch.utils import groupby
>>> names = ['Alice', 'Bob', 'Charlie', 'Dan', 'Edith', 'Frank']
>>> groupby(len, names) # doctest: +SKIP
{3: ['Bob', 'Dan'], 5: ['Alice', 'Edith', 'Frank'], 7: ['Charlie']}
>>> iseven = lambda x: x % 2 == 0
>>> groupby(iseven, [1, 2, 3, 4, 5, 6, 7, 8]) # doctest: +SKIP
{False: [1, 3, 5, 7], True: [2, 4, 6, 8]}
See Also:
``countby``
|
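An equivalent sketch using collections.defaultdict, which removes the explicit membership check; the behaviour matches the doctest examples above.

```Python
from collections import defaultdict

def groupby_defaultdict(func, seq):
    # defaultdict(list) creates the bucket on first access, so no
    # "key not in d" branch is needed.
    d = defaultdict(list)
    for item in seq:
        d[func(item)].append(item)
    return dict(d)

print(groupby_defaultdict(len, ['Alice', 'Bob', 'Charlie', 'Dan']))
```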
2,282 | def input_files(self) -> List[str]:
metadata = self._plan.execute().get_metadata()
files = set()
for m in metadata:
for f in m.input_files:
files.add(f)
return list(files)
| Return the list of input files for the dataset.
Time complexity: O(num input files)
Returns:
The list of input files used to create the dataset, or an empty
list if the input files are not known.
| 36 | 21 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def input_files(self) -> List[str]:
metadata = self._plan.execute().get_metadata()
files = set()
for m in metadata:
for f in m.input_files:
files.add(f)
return list(files)
```
###Assistant : Return the list of input files for the dataset.
Time complexity: O(num input files)
Returns:
The list of input files used to create the dataset, or an empty
list if the input files are not known.
|
2,283 | def get_nccl_reduce_op(reduce_op):
if reduce_op not in NCCL_REDUCE_OP_MAP:
raise RuntimeError("NCCL does not support reduce op: '{}'.".format(reduce_op))
return NCCL_REDUCE_OP_MAP[reduce_op]
| Map the reduce op to NCCL reduce op type.
Args:
reduce_op (ReduceOp): ReduceOp Enum (SUM/PRODUCT/MIN/MAX).
Returns:
(nccl.ncclRedOp_t): the mapped NCCL reduce op.
| 22 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_nccl_reduce_op(reduce_op):
if reduce_op not in NCCL_REDUCE_OP_MAP:
raise RuntimeError("NCCL does not support reduce op: '{}'.".format(reduce_op))
return NCCL_REDUCE_OP_MAP[reduce_op]
```
###Assistant : Map the reduce op to NCCL reduce op type.
Args:
reduce_op (ReduceOp): ReduceOp Enum (SUM/PRODUCT/MIN/MAX).
Returns:
(nccl.ncclRedOp_t): the mapped NCCL reduce op.
|
2,284 | def aligned(self) -> AlignedFace:
assert self._aligned is not None
return self._aligned
| The aligned face connected to this detected face. | 8 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def aligned(self) -> AlignedFace:
assert self._aligned is not None
return self._aligned
```
###Assistant : The aligned face connected to this detected face.
|
2,285 | def upgrade():
conn = op.get_bind()
if conn.dialect.name == "sqlite":
# in sqlite TEXT and STRING column types are the same
return
if conn.dialect.name == "mysql":
op.alter_column(
'connection',
'description',
existing_type=sa.String(length=5000),
type_=sa.Text(length=5000),
existing_nullable=True,
)
else:
# postgres does not allow size modifier for text type
op.alter_column('connection', 'description', existing_type=sa.String(length=5000), type_=sa.Text())
| Apply Fix description field in ``connection`` to be ``text`` | 9 | 47 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def upgrade():
conn = op.get_bind()
if conn.dialect.name == "sqlite":
# in sqlite TEXT and STRING column types are the same
return
if conn.dialect.name == "mysql":
op.alter_column(
'connection',
'description',
existing_type=sa.String(length=5000),
type_=sa.Text(length=5000),
existing_nullable=True,
)
else:
# postgres does not allow size modifier for text type
op.alter_column('connection', 'description', existing_type=sa.String(length=5000), type_=sa.Text())
```
###Assistant : Apply Fix description field in ``connection`` to be ``text``
|
2,286 | def test_memory(self):
params = sum(map(list, six.itervalues(self.net.params)), [])
blobs = self.net.blobs.values()
del self.net
# now sum everything (forcing all memory to be read)
total = 0
for p in params:
total += p.data.sum() + p.diff.sum()
for bl in blobs:
total += bl.data.sum() + bl.diff.sum()
| Check that holding onto blob data beyond the life of a Net is OK | 14 | 43 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_memory(self):
params = sum(map(list, six.itervalues(self.net.params)), [])
blobs = self.net.blobs.values()
del self.net
# now sum everything (forcing all memory to be read)
total = 0
for p in params:
total += p.data.sum() + p.diff.sum()
for bl in blobs:
total += bl.data.sum() + bl.diff.sum()
```
###Assistant : Check that holding onto blob data beyond the life of a Net is OK
|
2,287 | async def test_timeout_stops_execution_in_sync_subflows(self, tmp_path):
canary_file = tmp_path / "canary"
|
Sync flow runs can be cancelled after a timeout once a task is called
| 14 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_timeout_stops_execution_in_sync_subflows(self, tmp_path):
canary_file = tmp_path / "canary"
```
###Assistant :
Sync flow runs can be cancelled after a timeout once a task is called
|
2,288 | def get_income_account(doctype, txt, searchfield, start, page_len, filters):
from erpnext.controllers.queries import get_match_cond
# income account can be any Credit account,
# but can also be a Asset account with account_type='Income Account' in special circumstances.
# Hence the first condition is an "OR"
if not filters:
filters = {}
condition = ""
if filters.get("company"):
condition += "and tabAccount.company = %(company)s"
return frappe.db.sql(
.format(
condition=condition, match_condition=get_match_cond(doctype), key=searchfield
),
{"txt": "%" + txt + "%", "company": filters.get("company", "")},
)
@frappe.whitelist()
@frappe.validate_and_sanitize_search_inputs | select tabAccount.name from `tabAccount`
where (tabAccount.report_type = "Profit and Loss"
or tabAccount.account_type in ("Income Account", "Temporary"))
and tabAccount.is_group=0
and tabAccount.`{key}` LIKE %(txt)s
{condition} {match_condition}
order by idx desc, name | 29 | 77 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_income_account(doctype, txt, searchfield, start, page_len, filters):
from erpnext.controllers.queries import get_match_cond
# income account can be any Credit account,
# but can also be a Asset account with account_type='Income Account' in special circumstances.
# Hence the first condition is an "OR"
if not filters:
filters = {}
condition = ""
if filters.get("company"):
condition += "and tabAccount.company = %(company)s"
return frappe.db.sql(
.format(
condition=condition, match_condition=get_match_cond(doctype), key=searchfield
),
{"txt": "%" + txt + "%", "company": filters.get("company", "")},
)
@frappe.whitelist()
@frappe.validate_and_sanitize_search_inputs
```
###Assistant : select tabAccount.name from `tabAccount`
where (tabAccount.report_type = "Profit and Loss"
or tabAccount.account_type in ("Income Account", "Temporary"))
and tabAccount.is_group=0
and tabAccount.`{key}` LIKE %(txt)s
{condition} {match_condition}
order by idx desc, name
|
2,289 | def win_find_exe(filename, installsubdir=None, env="ProgramFiles"):
# type: (str, Optional[Any], str) -> str
fns = [filename] if filename.endswith(".exe") else [filename + ".exe", filename] # noqa: E501
for fn in fns:
try:
if installsubdir is None:
path = _where(fn)
else:
path = _where(fn, dirs=[os.path.join(os.environ[env], installsubdir)]) # noqa: E501
except IOError:
path = None
else:
break
return path or ""
| Find executable in current dir, system path or in the
given ProgramFiles subdir, and return its absolute path.
| 18 | 56 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def win_find_exe(filename, installsubdir=None, env="ProgramFiles"):
# type: (str, Optional[Any], str) -> str
fns = [filename] if filename.endswith(".exe") else [filename + ".exe", filename] # noqa: E501
for fn in fns:
try:
if installsubdir is None:
path = _where(fn)
else:
path = _where(fn, dirs=[os.path.join(os.environ[env], installsubdir)]) # noqa: E501
except IOError:
path = None
else:
break
return path or ""
```
###Assistant : Find executable in current dir, system path or in the
given ProgramFiles subdir, and return its absolute path.
|
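For comparison, the standard library offers shutil.which, which searches the current PATH and handles the .exe suffix on Windows; it does not take the ProgramFiles subdirectory argument shown above.

```Python
import shutil

# Look up an executable on the current PATH; returns None when not found.
path = shutil.which("python")
print(path or "")
```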
2,290 | def test_custom_page_queryset(self):
self.assertIs(type(CustomManagerPage.objects.all()), CustomPageQuerySet)
self.assertIs(type(CustomManagerPage.objects.about_spam()), CustomPageQuerySet)
self.assertIs(
type(CustomManagerPage.objects.all().about_spam()), CustomPageQuerySet
)
self.assertIs(
type(CustomManagerPage.objects.about_spam().all()), CustomPageQuerySet
)
|
Managers that are constructed from a custom PageQuerySet
(via PageManager.from_queryset(CustomPageQuerySet)) should return
querysets of that type
| 16 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_custom_page_queryset(self):
self.assertIs(type(CustomManagerPage.objects.all()), CustomPageQuerySet)
self.assertIs(type(CustomManagerPage.objects.about_spam()), CustomPageQuerySet)
self.assertIs(
type(CustomManagerPage.objects.all().about_spam()), CustomPageQuerySet
)
self.assertIs(
type(CustomManagerPage.objects.about_spam().all()), CustomPageQuerySet
)
```
###Assistant :
Managers that are constructed from a custom PageQuerySet
(via PageManager.from_queryset(CustomPageQuerySet)) should return
querysets of that type
|
2,291 | def distro_release_info(self):
# type: () -> Dict[str, str]
return self._distro_release_info
|
Return a dictionary containing key-value pairs for the information
items from the distro release file data source of the OS
distribution.
For details, see :func:`distro.distro_release_info`.
| 25 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def distro_release_info(self):
# type: () -> Dict[str, str]
return self._distro_release_info
```
###Assistant :
Return a dictionary containing key-value pairs for the information
items from the distro release file data source of the OS
distribution.
For details, see :func:`distro.distro_release_info`.
|
2,292 | def user_documents_dir(self) -> str:
return os.path.normpath(get_win_folder("CSIDL_PERSONAL"))
|
:return: documents directory tied to the user e.g. ``%USERPROFILE%\\Documents``
| 9 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def user_documents_dir(self) -> str:
return os.path.normpath(get_win_folder("CSIDL_PERSONAL"))
```
###Assistant :
:return: documents directory tied to the user e.g. ``%USERPROFILE%\\Documents``
|
2,293 | def round_robin_reduce_idx_iterator(self):
idx = 0
round_idx = 0
while idx < self.output_num_blocks:
for merge_idx in range(self.num_merge_tasks_per_round):
if merge_idx < self._partitions_with_extra_task:
reduce_idx = merge_idx * (self.merge_partition_size + 1)
partition_size = self.merge_partition_size + 1
else:
reduce_idx = self._partitions_with_extra_task * (
self.merge_partition_size + 1
)
merge_idx -= self._partitions_with_extra_task
reduce_idx += merge_idx * self.merge_partition_size
partition_size = self.merge_partition_size
if round_idx >= partition_size:
continue
reduce_idx += round_idx
yield reduce_idx
idx += 1
round_idx += 1
|
When there are multiple nodes, merge tasks are spread throughout the
cluster to improve load-balancing. Each merge task produces outputs for
a contiguous partition of reduce tasks. This method creates an iterator
that returns reduce task indices round-robin across the merge tasks.
This can be used to submit reduce tasks in a way that spreads the load
evenly across the cluster.
| 61 | 69 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def round_robin_reduce_idx_iterator(self):
idx = 0
round_idx = 0
while idx < self.output_num_blocks:
for merge_idx in range(self.num_merge_tasks_per_round):
if merge_idx < self._partitions_with_extra_task:
reduce_idx = merge_idx * (self.merge_partition_size + 1)
partition_size = self.merge_partition_size + 1
else:
reduce_idx = self._partitions_with_extra_task * (
self.merge_partition_size + 1
)
merge_idx -= self._partitions_with_extra_task
reduce_idx += merge_idx * self.merge_partition_size
partition_size = self.merge_partition_size
if round_idx >= partition_size:
continue
reduce_idx += round_idx
yield reduce_idx
idx += 1
round_idx += 1
```
###Assistant :
When there are multiple nodes, merge tasks are spread throughout the
cluster to improve load-balancing. Each merge task produces outputs for
a contiguous partition of reduce tasks. This method creates an iterator
that returns reduce task indices round-robin across the merge tasks.
This can be used to submit reduce tasks in a way that spreads the load
evenly across the cluster.
|
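A simplified standalone sketch of the round-robin order described above, assuming equal-sized merge partitions and ignoring the extra-task handling; the sizes below are illustrative.

```Python
def round_robin_indices(output_num_blocks, num_merge_tasks, partition_size):
    # Yield one reduce index per merge task per round, so consecutive
    # submissions are spread across merge tasks instead of being grouped.
    idx = 0
    round_idx = 0
    while idx < output_num_blocks:
        for merge_idx in range(num_merge_tasks):
            if round_idx >= partition_size:
                continue
            yield merge_idx * partition_size + round_idx
            idx += 1
        round_idx += 1

# 6 reduce tasks split across 3 merge tasks of 2 partitions each.
print(list(round_robin_indices(6, 3, 2)))
```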
2,294 | def _check_index_name(self, result):
if self._by is not None:
# pandas does not name the index for this case
result._query_compiler.set_index_name(None)
return result
|
Check the result of groupby aggregation on the need of resetting index name.
Parameters
----------
result : DataFrame
Group by aggregation result.
Returns
-------
DataFrame
| 25 | 21 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _check_index_name(self, result):
if self._by is not None:
# pandas does not name the index for this case
result._query_compiler.set_index_name(None)
return result
```
###Assistant :
Check the result of groupby aggregation on the need of resetting index name.
Parameters
----------
result : DataFrame
Group by aggregation result.
Returns
-------
DataFrame
|
2,295 | def check_points_in_rotated_boxes(points, boxes):
# [B, N, 5] -> [B, N, 4, 2]
corners = box2corners(boxes)
# [1, L, 2] -> [1, 1, L, 2]
points = points.unsqueeze(0)
# [B, N, 4, 2] -> [B, N, 1, 2]
a, b, c, d = corners.split(4, axis=2)
ab = b - a
ad = d - a
# [B, N, L, 2]
ap = points - a
# [B, N, L]
norm_ab = paddle.sum(ab * ab, axis=-1)
# [B, N, L]
norm_ad = paddle.sum(ad * ad, axis=-1)
# [B, N, L] dot product
ap_dot_ab = paddle.sum(ap * ab, axis=-1)
# [B, N, L] dot product
ap_dot_ad = paddle.sum(ap * ad, axis=-1)
# [B, N, L] <A, B> = |A|*|B|*cos(theta)
is_in_box = (ap_dot_ab >= 0) & (ap_dot_ab <= norm_ab) & (ap_dot_ad >= 0) & (
ap_dot_ad <= norm_ad)
return is_in_box
| Check whether points are in rotated boxes
Args:
points (tensor): (1, L, 2) anchor points
boxes (tensor): [B, N, 5] gt_bboxes
eps (float): default 1e-9
Returns:
is_in_box (tensor): (B, N, L)
| 31 | 136 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def check_points_in_rotated_boxes(points, boxes):
# [B, N, 5] -> [B, N, 4, 2]
corners = box2corners(boxes)
# [1, L, 2] -> [1, 1, L, 2]
points = points.unsqueeze(0)
# [B, N, 4, 2] -> [B, N, 1, 2]
a, b, c, d = corners.split(4, axis=2)
ab = b - a
ad = d - a
# [B, N, L, 2]
ap = points - a
# [B, N, L]
norm_ab = paddle.sum(ab * ab, axis=-1)
# [B, N, L]
norm_ad = paddle.sum(ad * ad, axis=-1)
# [B, N, L] dot product
ap_dot_ab = paddle.sum(ap * ab, axis=-1)
# [B, N, L] dot product
ap_dot_ad = paddle.sum(ap * ad, axis=-1)
# [B, N, L] <A, B> = |A|*|B|*cos(theta)
is_in_box = (ap_dot_ab >= 0) & (ap_dot_ab <= norm_ab) & (ap_dot_ad >= 0) & (
ap_dot_ad <= norm_ad)
return is_in_box
```
###Assistant : Check whether points are in rotated boxes
Args:
points (tensor): (1, L, 2) anchor points
boxes (tensor): [B, N, 5] gt_bboxes
eps (float): default 1e-9
Returns:
is_in_box (tensor): (B, N, L)
|
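For a single point and box, the projection test above reduces to two dot-product range checks against adjacent edges; a NumPy sketch with made-up corners:

```Python
import numpy as np

def point_in_parallelogram(p, a, b, d):
    # p lies inside the parallelogram spanned by adjacent corners a, b, d
    # iff 0 <= ap.ab <= |ab|^2 and 0 <= ap.ad <= |ad|^2.
    ab, ad, ap = b - a, d - a, p - a
    return bool(0 <= ap @ ab <= ab @ ab and 0 <= ap @ ad <= ad @ ad)

a, b, d = np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 1.0])
print(point_in_parallelogram(np.array([1.0, 0.5]), a, b, d))  # inside
print(point_in_parallelogram(np.array([3.0, 0.5]), a, b, d))  # outside
```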
2,296 | def write_ssh_wrapper(module):
try:
# make sure we have full permission to the module_dir, which
# may not be the case if we're sudo'ing to a non-root user
if os.access(module.tmpdir, os.W_OK | os.R_OK | os.X_OK):
fd, wrapper_path = tempfile.mkstemp(prefix=module.tmpdir + '/')
else:
raise OSError
except (IOError, OSError):
fd, wrapper_path = tempfile.mkstemp()
# use existing git_ssh/ssh_command, fallback to 'ssh'
template = b( % os.environ.get('GIT_SSH', os.environ.get('GIT_SSH_COMMAND', 'ssh')))
# write it
with os.fdopen(fd, 'w+b') as fh:
fh.write(template)
# set execute
st = os.stat(wrapper_path)
os.chmod(wrapper_path, st.st_mode | stat.S_IEXEC)
module.debug('Wrote temp git ssh wrapper (%s): %s' % (wrapper_path, template))
# ensure we cleanup after ourselves
module.add_cleanup_file(path=wrapper_path)
return wrapper_path
|
This writes a shell wrapper for ssh options to be used with git;
this is only relevant for older versions of git that cannot
handle the options themselves. Returns the path to the script
#!/bin/sh
%s $GIT_SSH_OPTS
| 35 | 102 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def write_ssh_wrapper(module):
try:
# make sure we have full permission to the module_dir, which
# may not be the case if we're sudo'ing to a non-root user
if os.access(module.tmpdir, os.W_OK | os.R_OK | os.X_OK):
fd, wrapper_path = tempfile.mkstemp(prefix=module.tmpdir + '/')
else:
raise OSError
except (IOError, OSError):
fd, wrapper_path = tempfile.mkstemp()
# use existing git_ssh/ssh_command, fallback to 'ssh'
template = b( % os.environ.get('GIT_SSH', os.environ.get('GIT_SSH_COMMAND', 'ssh')))
# write it
with os.fdopen(fd, 'w+b') as fh:
fh.write(template)
# set execute
st = os.stat(wrapper_path)
os.chmod(wrapper_path, st.st_mode | stat.S_IEXEC)
module.debug('Wrote temp git ssh wrapper (%s): %s' % (wrapper_path, template))
# ensure we cleanup after ourselves
module.add_cleanup_file(path=wrapper_path)
return wrapper_path
```
###Assistant :
This writes a shell wrapper for ssh options to be used with git;
this is only relevant for older versions of git that cannot
handle the options themselves. Returns the path to the script
#!/bin/sh
%s $GIT_SSH_OPTS
|
2,297 | def alpha_composite(self, im, dest=(0, 0), source=(0, 0)):
if not isinstance(source, (list, tuple)):
msg = "Source must be a tuple"
raise ValueError(msg)
if not isinstance(dest, (list, tuple)):
msg = "Destination must be a tuple"
raise ValueError(msg)
if not len(source) in (2, 4):
msg = "Source must be a 2 or 4-tuple"
raise ValueError(msg)
if not len(dest) == 2:
msg = "Destination must be a 2-tuple"
raise ValueError(msg)
if min(source) < 0:
msg = "Source must be non-negative"
raise ValueError(msg)
if len(source) == 2:
source = source + im.size
# over image, crop if it's not the whole thing.
if source == (0, 0) + im.size:
overlay = im
else:
overlay = im.crop(source)
# target for the paste
box = dest + (dest[0] + overlay.width, dest[1] + overlay.height)
# destination image. don't copy if we're using the whole image.
if box == (0, 0) + self.size:
background = self
else:
background = self.crop(box)
result = alpha_composite(background, overlay)
self.paste(result, box)
| 'In-place' analog of Image.alpha_composite. Composites an image
onto this image.
:param im: image to composite over this one
:param dest: Optional 2 tuple (left, top) specifying the upper
left corner in this (destination) image.
:param source: Optional 2 (left, top) tuple for the upper left
corner in the overlay source image, or 4 tuple (left, top, right,
bottom) for the bounds of the source rectangle
Performance Note: Not currently implemented in-place in the core layer.
| 75 | 157 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def alpha_composite(self, im, dest=(0, 0), source=(0, 0)):
if not isinstance(source, (list, tuple)):
msg = "Source must be a tuple"
raise ValueError(msg)
if not isinstance(dest, (list, tuple)):
msg = "Destination must be a tuple"
raise ValueError(msg)
if not len(source) in (2, 4):
msg = "Source must be a 2 or 4-tuple"
raise ValueError(msg)
if not len(dest) == 2:
msg = "Destination must be a 2-tuple"
raise ValueError(msg)
if min(source) < 0:
msg = "Source must be non-negative"
raise ValueError(msg)
if len(source) == 2:
source = source + im.size
# over image, crop if it's not the whole thing.
if source == (0, 0) + im.size:
overlay = im
else:
overlay = im.crop(source)
# target for the paste
box = dest + (dest[0] + overlay.width, dest[1] + overlay.height)
# destination image. don't copy if we're using the whole image.
if box == (0, 0) + self.size:
background = self
else:
background = self.crop(box)
result = alpha_composite(background, overlay)
self.paste(result, box)
```
###Assistant : 'In-place' analog of Image.alpha_composite. Composites an image
onto this image.
:param im: image to composite over this one
:param dest: Optional 2 tuple (left, top) specifying the upper
left corner in this (destination) image.
:param source: Optional 2 (left, top) tuple for the upper left
corner in the overlay source image, or 4 tuple (left, top, right,
bottom) for the bounds of the source rectangle
Performance Note: Not currently implemented in-place in the core layer.
|
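For reference, a short usage sketch of the in-place `alpha_composite` method documented in the record above (assumes Pillow 5.0+ is installed; the image sizes and colours are illustrative only):
```Python
# Composite a semi-transparent red square onto an opaque green
# background, in place, with its top-left corner at (30, 30).
from PIL import Image

background = Image.new("RGBA", (100, 100), (0, 128, 0, 255))
overlay = Image.new("RGBA", (40, 40), (255, 0, 0, 128))

background.alpha_composite(overlay, dest=(30, 30))
background.save("composited.png")
```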
2,298 | def test_form_field_clean_name_override(self):
field = ExtendedFormField.objects.create(
page=self.form_page,
sort_order=1,
label="quanti ge·là·to?",
field_type="number", # only number fields will add the ID as a prefix to the clean_name
required=True,
)
self.assertEqual(field.clean_name, "number_field--quanti_gelato")
|
Creating a new field should use the overridden method
See ExtendedFormField get_field_clean_name method
| 13 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_form_field_clean_name_override(self):
field = ExtendedFormField.objects.create(
page=self.form_page,
sort_order=1,
label="quanti ge·là·to?",
field_type="number", # only number fields will add the ID as a prefix to the clean_name
required=True,
)
self.assertEqual(field.clean_name, "number_field--quanti_gelato")
```
###Assistant :
Creating a new field should use the overridden method
See ExtendedFormField get_field_clean_name method
|
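For context, the kind of override the test above exercises might look roughly like the following. This is a hedged sketch only — the real `ExtendedFormField` lives in Wagtail's test app, so the parent page model name and the exact prefixing rule here are assumptions rather than the actual implementation:
```Python
# Illustrative custom form field that overrides get_field_clean_name.
# Assumption: the default clean name for the label "quanti ge·là·to?"
# is "quanti_gelato", and number fields get a "number_field--" prefix.
from django.db import models
from modelcluster.fields import ParentalKey
from wagtail.contrib.forms.models import AbstractFormField


class ExtendedFormField(AbstractFormField):
    # Hypothetical parent page model; the real test app wires this up itself.
    page = ParentalKey(
        "tests.FormPage", on_delete=models.CASCADE, related_name="extended_form_fields"
    )

    def get_field_clean_name(self):
        clean_name = super().get_field_clean_name()
        if self.field_type == "number":
            # e.g. "number_field--quanti_gelato"
            return "number_field--" + clean_name
        return clean_name
```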
2,299 | def polyder(p, m=1):
_check_arraylike("polyder", p)
m = core.concrete_or_error(operator.index, m, "'m' argument of jnp.polyder")
p, = _promote_dtypes_inexact(p)
if m < 0:
raise ValueError("Order of derivative must be positive")
if m == 0:
return p
coeff = (arange(len(p), m, -1)[np.newaxis, :] - 1 - arange(m)[:, np.newaxis]).prod(0)
return p[:-m] * coeff
_LEADING_ZEROS_DOC =
@_wraps(np.polymul, lax_description=_LEADING_ZEROS_DOC) | \
Setting trim_leading_zeros=True makes the output match that of numpy.
But prevents the function from being able to be used in compiled code.
| 23 | 52 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def polyder(p, m=1):
_check_arraylike("polyder", p)
m = core.concrete_or_error(operator.index, m, "'m' argument of jnp.polyder")
p, = _promote_dtypes_inexact(p)
if m < 0:
raise ValueError("Order of derivative must be positive")
if m == 0:
return p
coeff = (arange(len(p), m, -1)[np.newaxis, :] - 1 - arange(m)[:, np.newaxis]).prod(0)
return p[:-m] * coeff
_LEADING_ZEROS_DOC =
@_wraps(np.polymul, lax_description=_LEADING_ZEROS_DOC)
```
###Assistant : \
Setting trim_leading_zeros=True makes the output match that of numpy.
But prevents the function from being able to be used in compiled code.
|
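For reference, a quick usage sketch of `jnp.polyder` as documented in the record above (assumes JAX is installed):
```Python
# Differentiate p(x) = 3x^2 + 2x + 1; coefficients are highest power first.
import jax.numpy as jnp

p = jnp.array([3.0, 2.0, 1.0])

dp = jnp.polyder(p)        # first derivative: 6x + 2  -> [6., 2.]
d2p = jnp.polyder(p, m=2)  # second derivative: 6      -> [6.]

print(dp, d2p)
```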