Unnamed: 0 (int64, 0–2.93k) | code (string, length 101–62.2k) | docs (string, length 51–10.7k) | doc_len (int64, 4–1.74k) | words (int64, 4–4.82k) | lang (string, 1 class) | prompt (string, length 320–71.2k)
---|---|---|---|---|---|---
1,600 | def find_asteroidal_triple(G):
r
V = set(G.nodes)
if len(V) < 6:
# An asteroidal triple cannot exist in a graph with 5 or less vertices.
return None
component_structure = create_component_structure(G)
E_complement = set(nx.complement(G).edges)
for e in E_complement:
u = e[0]
v = e[1]
u_neighborhood = set(G[u]).union([u])
v_neighborhood = set(G[v]).union([v])
union_of_neighborhoods = u_neighborhood.union(v_neighborhood)
for w in V - union_of_neighborhoods:
# Check for each pair of vertices whether they belong to the
# same connected component when the closed neighborhood of the
# third is removed.
if (
component_structure[u][v] == component_structure[u][w]
and component_structure[v][u] == component_structure[v][w]
and component_structure[w][u] == component_structure[w][v]
):
return [u, v, w]
return None
@not_implemented_for("directed")
@not_implemented_for("multigraph") | Find an asteroidal triple in the given graph.
An asteroidal triple is a triple of non-adjacent vertices such that
there exists a path between any two of them which avoids the closed
neighborhood of the third. The function checks all independent triples of
vertices and whether they form an asteroidal triple. This is done with the
help of a data structure called a component structure.
A component structure encodes which vertices belong to the same connected
component when the closed neighborhood of a given vertex is removed from
the graph. The algorithm used is the trivial one outlined in [1]_, which
has a runtime of :math:`O(|V||\overline{E}| + |V||E|)`, where the second
term is the creation of the component structure.
Parameters
----------
G : NetworkX Graph
The graph to check whether it is AT-free or not
Returns
-------
list or None
An asteroidal triple is returned as a list of nodes. If no asteroidal
triple exists, i.e. the graph is AT-free, then None is returned.
Notes
-----
The component structure and the algorithm are described in [1]_. The current
implementation uses the trivial algorithm for simple graphs.
References
----------
.. [1] Ekkehard Köhler,
"Recognizing Graphs without asteroidal triples",
Journal of Discrete Algorithms 2, pages 439-452, 2004.
https://www.sciencedirect.com/science/article/pii/S157086670400019X
| 253 | 105 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def find_asteroidal_triple(G):
r
V = set(G.nodes)
if len(V) < 6:
# An asteroidal triple cannot exist in a graph with 5 or less vertices.
return None
component_structure = create_component_structure(G)
E_complement = set(nx.complement(G).edges)
for e in E_complement:
u = e[0]
v = e[1]
u_neighborhood = set(G[u]).union([u])
v_neighborhood = set(G[v]).union([v])
union_of_neighborhoods = u_neighborhood.union(v_neighborhood)
for w in V - union_of_neighborhoods:
# Check for each pair of vertices whether they belong to the
# same connected component when the closed neighborhood of the
# third is removed.
if (
component_structure[u][v] == component_structure[u][w]
and component_structure[v][u] == component_structure[v][w]
and component_structure[w][u] == component_structure[w][v]
):
return [u, v, w]
return None
@not_implemented_for("directed")
@not_implemented_for("multigraph")
```
###Assistant : Find an asteroidal triple in the given graph.
An asteroidal triple is a triple of non-adjacent vertices such that
there exists a path between any two of them which avoids the closed
neighborhood of the third. The function checks all independent triples of
vertices and whether they form an asteroidal triple. This is done with the
help of a data structure called a component structure.
A component structure encodes which vertices belong to the same connected
component when the closed neighborhood of a given vertex is removed from
the graph. The algorithm used is the trivial one outlined in [1]_, which
has a runtime of :math:`O(|V||\overline{E}| + |V||E|)`, where the second
term is the creation of the component structure.
Parameters
----------
G : NetworkX Graph
The graph to check whether it is AT-free or not
Returns
-------
list or None
An asteroidal triple is returned as a list of nodes. If no asteroidal
triple exists, i.e. the graph is AT-free, then None is returned.
Notes
-----
The component structure and the algorithm are described in [1]_. The current
implementation uses the trivial algorithm for simple graphs.
References
----------
.. [1] Ekkehard Köhler,
"Recognizing Graphs without asteroidal triples",
Journal of Discrete Algorithms 2, pages 439-452, 2004.
https://www.sciencedirect.com/science/article/pii/S157086670400019X
|
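The function above corresponds to NetworkX's public `find_asteroidal_triple`. A minimal usage sketch, assuming a standard `networkx` installation that exposes it as `nx.find_asteroidal_triple` (the graphs chosen here are illustrative):
```Python
import networkx as nx

# A 6-cycle contains the asteroidal triple {0, 2, 4}: the three vertices are
# pairwise non-adjacent and each pair is joined by a path avoiding the closed
# neighborhood of the third.
print(nx.find_asteroidal_triple(nx.cycle_graph(6)))  # e.g. [0, 2, 4] (order may vary)

# Path graphs are interval graphs and therefore AT-free, so None is returned.
print(nx.find_asteroidal_triple(nx.path_graph(6)))  # None
```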
1,601 | def get_leave_entries(employee, leave_type, from_date, to_date):
return frappe.db.sql(
,
{"from_date": from_date, "to_date": to_date, "employee": employee, "leave_type": leave_type},
as_dict=1,
)
@frappe.whitelist() | Returns leave entries between from_date and to_date.
SELECT
employee, leave_type, from_date, to_date, leaves, transaction_name, transaction_type, holiday_list,
is_carry_forward, is_expired
FROM `tabLeave Ledger Entry`
WHERE employee=%(employee)s AND leave_type=%(leave_type)s
AND docstatus=1
AND (leaves<0
OR is_expired=1)
AND (from_date between %(from_date)s AND %(to_date)s
OR to_date between %(from_date)s AND %(to_date)s
OR (from_date < %(from_date)s AND to_date > %(to_date)s))
| 52 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_leave_entries(employee, leave_type, from_date, to_date):
return frappe.db.sql(
,
{"from_date": from_date, "to_date": to_date, "employee": employee, "leave_type": leave_type},
as_dict=1,
)
@frappe.whitelist()
```
###Assistant : Returns leave entries between from_date and to_date.
SELECT
employee, leave_type, from_date, to_date, leaves, transaction_name, transaction_type, holiday_list,
is_carry_forward, is_expired
FROM `tabLeave Ledger Entry`
WHERE employee=%(employee)s AND leave_type=%(leave_type)s
AND docstatus=1
AND (leaves<0
OR is_expired=1)
AND (from_date between %(from_date)s AND %(to_date)s
OR to_date between %(from_date)s AND %(to_date)s
OR (from_date < %(from_date)s AND to_date > %(to_date)s))
|
1,602 | def addslashes(value):
return value.replace("\\", "\\\\").replace('"', '\\"').replace("'", "\\'")
@register.filter(is_safe=True)
@stringfilter |
Add slashes before quotes. Useful for escaping strings in CSV, for
example. Less useful for escaping JavaScript; use the ``escapejs``
filter instead.
| 22 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def addslashes(value):
return value.replace("\\", "\\\\").replace('"', '\\"').replace("'", "\\'")
@register.filter(is_safe=True)
@stringfilter
```
###Assistant :
Add slashes before quotes. Useful for escaping strings in CSV, for
example. Less useful for escaping JavaScript; use the ``escapejs``
filter instead.
|
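Since the filter is just three chained `str.replace` calls, its effect is easy to see outside a template; a small sketch (the sample strings are made up):
```Python
def addslashes(value):
    # Escape backslashes first so the added quote-escapes are not doubled.
    return value.replace("\\", "\\\\").replace('"', '\\"').replace("'", "\\'")

print(addslashes('He said "it\'s fine"'))  # He said \"it\'s fine\"
print(addslashes("C:\\temp\\new"))         # C:\\temp\\new
```
In a Django template the same logic is applied with `{{ value|addslashes }}`.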
1,603 | def _get_raw_from_config(cls) -> str:
if cls.varname is None:
raise TypeError("varname should not be None")
return os.environ[cls.varname]
|
Read the value from environment variable.
Returns
-------
str
Config raw value.
Raises
------
TypeError
If `varname` is None.
KeyError
If value is absent.
| 24 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_raw_from_config(cls) -> str:
if cls.varname is None:
raise TypeError("varname should not be None")
return os.environ[cls.varname]
```
###Assistant :
Read the value from environment variable.
Returns
-------
str
Config raw value.
Raises
------
TypeError
If `varname` is None.
KeyError
If value is absent.
|
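A self-contained sketch of the same pattern; the class and variable names below are stand-ins, not the original project's:
```Python
import os

class EnvSetting:
    varname = "MY_APP_SETTING"  # hypothetical environment variable name

    @classmethod
    def _get_raw_from_config(cls) -> str:
        if cls.varname is None:
            raise TypeError("varname should not be None")
        # Raises KeyError if the environment variable is absent.
        return os.environ[cls.varname]

os.environ["MY_APP_SETTING"] = "42"
print(EnvSetting._get_raw_from_config())  # 42
```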
1,604 | def is_monotonic_decreasing(self):
return SeriesDefault.register(pandas.Series.is_monotonic_decreasing)(self)
|
Return boolean if values in the object are monotonically decreasing.
Returns
-------
bool
| 13 | 4 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def is_monotonic_decreasing(self):
return SeriesDefault.register(pandas.Series.is_monotonic_decreasing)(self)
```
###Assistant :
Return boolean if values in the object are monotonically decreasing.
Returns
-------
bool
|
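The method above simply defaults to pandas, so the delegated behaviour can be checked directly on a pandas `Series`:
```Python
import pandas as pd

print(pd.Series([3, 2, 2, 1]).is_monotonic_decreasing)  # True (non-strict: equal neighbours allowed)
print(pd.Series([3, 2, 4]).is_monotonic_decreasing)     # False
```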
1,605 | def get(self, model, columns, filters):
key = self._make_key(model, filters)
conn = self.cluster.get_local_client_for_key(key)
pipe = conn.pipeline()
for col in columns:
pipe.hget(key, f"i+{col}")
results = pipe.execute()
return {
col: (int(results[i]) if results[i] is not None else 0) for i, col in enumerate(columns)
}
|
Fetches buffered values for a model/filter. Passed columns must be integer columns.
| 12 | 41 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get(self, model, columns, filters):
key = self._make_key(model, filters)
conn = self.cluster.get_local_client_for_key(key)
pipe = conn.pipeline()
for col in columns:
pipe.hget(key, f"i+{col}")
results = pipe.execute()
return {
col: (int(results[i]) if results[i] is not None else 0) for i, col in enumerate(columns)
}
```
###Assistant :
Fetches buffered values for a model/filter. Passed columns must be integer columns.
|
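The core of the method is a Redis pipeline that batches one `HGET` per column; a stripped-down sketch using plain `redis-py` (key and column names are made up, and the cluster routing from the original is omitted):
```Python
import redis

conn = redis.Redis()
key = "buffer:model:123"  # hypothetical buffer key
columns = ["times_seen", "events"]

pipe = conn.pipeline()
for col in columns:
    pipe.hget(key, f"i+{col}")
results = pipe.execute()

# Missing hash fields come back as None and are treated as 0.
values = {col: int(results[i]) if results[i] is not None else 0
          for i, col in enumerate(columns)}
print(values)
```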
1,606 | def test_non_ascii_subscription_for_principal(self) -> None:
iago = self.example_user("iago")
self.assert_adding_subscriptions_for_principal(
iago.id, get_realm("zulip"), ["hümbüǵ"], policy_name="Public"
)
|
You can subscribe other people to streams even if the stream names contain
non-ASCII characters.
| 13 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_non_ascii_subscription_for_principal(self) -> None:
iago = self.example_user("iago")
self.assert_adding_subscriptions_for_principal(
iago.id, get_realm("zulip"), ["hümbüǵ"], policy_name="Public"
)
```
###Assistant :
You can subscribe other people to streams even if the stream names contain
non-ASCII characters.
|
1,607 | def add_provs(self, reader):
fileids = reader.fileids()
for fileid in fileids:
prov, langfile = os.path.split(fileid)
file_name, file_extension = os.path.splitext(langfile)
if file_extension == ".tab":
lang = file_name.split("-")[-1]
if lang in self.provenances.keys():
# We already have another resource for this lang,
# so we need to further specify the lang id:
lang = f"{lang}_{prov}"
self.provenances[lang] = prov
| Add languages from Multilingual Wordnet to the provenance dictionary | 9 | 54 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def add_provs(self, reader):
fileids = reader.fileids()
for fileid in fileids:
prov, langfile = os.path.split(fileid)
file_name, file_extension = os.path.splitext(langfile)
if file_extension == ".tab":
lang = file_name.split("-")[-1]
if lang in self.provenances.keys():
# We already have another resource for this lang,
# so we need to further specify the lang id:
lang = f"{lang}_{prov}"
self.provenances[lang] = prov
```
###Assistant : Add languages from Multilingual Wordnet to the provenance dictionary
|
1,608 | def test_thread_with_bundled_aggregations_for_latest(self) -> None:
self._send_relation(RelationTypes.THREAD, "m.room.test")
channel = self._send_relation(RelationTypes.THREAD, "m.room.test")
thread_2 = channel.json_body["event_id"]
self._send_relation(
RelationTypes.ANNOTATION, "m.reaction", "a", parent_id=thread_2
)
|
Bundled aggregations should get applied to the latest thread event.
| 10 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_thread_with_bundled_aggregations_for_latest(self) -> None:
self._send_relation(RelationTypes.THREAD, "m.room.test")
channel = self._send_relation(RelationTypes.THREAD, "m.room.test")
thread_2 = channel.json_body["event_id"]
self._send_relation(
RelationTypes.ANNOTATION, "m.reaction", "a", parent_id=thread_2
)
```
###Assistant :
Bundled aggregations should get applied to the latest thread event.
|
1,609 | def from_environment(cls):
return cls.from_file(path=KUBE_CONFIG_DEFAULT_LOCATION)
|
Factory method to produce an instance of this class using the default kube config location
| 15 | 4 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def from_environment(cls):
return cls.from_file(path=KUBE_CONFIG_DEFAULT_LOCATION)
```
###Assistant :
Factory method to produce an instance of this class using the default kube config location
|
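A sketch of the factory-method pattern this one-liner illustrates; the constant and class below are stand-ins rather than the original library's definitions:
```Python
import os

KUBE_CONFIG_DEFAULT_LOCATION = os.path.expanduser("~/.kube/config")  # assumed default path

class KubeConfig:
    def __init__(self, path):
        self.path = path

    @classmethod
    def from_file(cls, path):
        return cls(path=path)

    @classmethod
    def from_environment(cls):
        # Delegate to from_file with the conventional default location.
        return cls.from_file(path=KUBE_CONFIG_DEFAULT_LOCATION)

print(KubeConfig.from_environment().path)
```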
1,610 | def test_thumbnail_repeated_thumbnail(self) -> None:
self._test_thumbnail(
"scale", self.test_image.expected_scaled, self.test_image.expected_found
)
if not self.test_image.expected_found:
return
# Fetching again should work, without re-requesting the image from the
# remote.
params = "?width=32&height=32&method=scale"
channel = make_request(
self.reactor,
FakeSite(self.thumbnail_resource, self.reactor),
"GET",
self.media_id + params,
shorthand=False,
await_result=False,
)
self.pump()
self.assertEqual(channel.code, 200)
if self.test_image.expected_scaled:
self.assertEqual(
channel.result["body"],
self.test_image.expected_scaled,
channel.result["body"],
)
# Deleting the thumbnail on disk then re-requesting it should work as
# Synapse should regenerate missing thumbnails.
origin, media_id = self.media_id.split("/")
info = self.get_success(self.store.get_cached_remote_media(origin, media_id))
file_id = info["filesystem_id"]
thumbnail_dir = self.media_repo.filepaths.remote_media_thumbnail_dir(
origin, file_id
)
shutil.rmtree(thumbnail_dir, ignore_errors=True)
channel = make_request(
self.reactor,
FakeSite(self.thumbnail_resource, self.reactor),
"GET",
self.media_id + params,
shorthand=False,
await_result=False,
)
self.pump()
self.assertEqual(channel.code, 200)
if self.test_image.expected_scaled:
self.assertEqual(
channel.result["body"],
self.test_image.expected_scaled,
channel.result["body"],
)
| Test that fetching the same thumbnail works, and deleting the on disk
thumbnail regenerates it.
| 15 | 112 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_thumbnail_repeated_thumbnail(self) -> None:
self._test_thumbnail(
"scale", self.test_image.expected_scaled, self.test_image.expected_found
)
if not self.test_image.expected_found:
return
# Fetching again should work, without re-requesting the image from the
# remote.
params = "?width=32&height=32&method=scale"
channel = make_request(
self.reactor,
FakeSite(self.thumbnail_resource, self.reactor),
"GET",
self.media_id + params,
shorthand=False,
await_result=False,
)
self.pump()
self.assertEqual(channel.code, 200)
if self.test_image.expected_scaled:
self.assertEqual(
channel.result["body"],
self.test_image.expected_scaled,
channel.result["body"],
)
# Deleting the thumbnail on disk then re-requesting it should work as
# Synapse should regenerate missing thumbnails.
origin, media_id = self.media_id.split("/")
info = self.get_success(self.store.get_cached_remote_media(origin, media_id))
file_id = info["filesystem_id"]
thumbnail_dir = self.media_repo.filepaths.remote_media_thumbnail_dir(
origin, file_id
)
shutil.rmtree(thumbnail_dir, ignore_errors=True)
channel = make_request(
self.reactor,
FakeSite(self.thumbnail_resource, self.reactor),
"GET",
self.media_id + params,
shorthand=False,
await_result=False,
)
self.pump()
self.assertEqual(channel.code, 200)
if self.test_image.expected_scaled:
self.assertEqual(
channel.result["body"],
self.test_image.expected_scaled,
channel.result["body"],
)
```
###Assistant : Test that fetching the same thumbnail works, and deleting the on disk
thumbnail regenerates it.
|
1,611 | def update(self) -> None:
ping_cmd = [
"ping",
"-c",
"1",
"-W",
str(DEFAULT_PING_TIMEOUT),
str(self._host),
]
status = sp.call(ping_cmd, stdout=sp.DEVNULL, stderr=sp.DEVNULL)
self._state = not bool(status)
| Check if device is on and update the state. Only called if assumed state is false. | 16 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def update(self) -> None:
ping_cmd = [
"ping",
"-c",
"1",
"-W",
str(DEFAULT_PING_TIMEOUT),
str(self._host),
]
status = sp.call(ping_cmd, stdout=sp.DEVNULL, stderr=sp.DEVNULL)
self._state = not bool(status)
```
###Assistant : Check if device is on and update the state. Only called if assumed state is false.
|
1,612 | def get_transactions(self):
df = self.__transactions[
[
"Date",
"Type",
"Ticker",
"Side",
"Price",
"Quantity",
"Fees",
"Investment",
"Currency",
"Sector",
"Industry",
"Country",
"Region",
]
]
df = df.replace(np.nan, "-")
df["Date"] = df["Date"].dt.strftime("%Y-%m-%d")
df.sort_values(by="Date", ascending=False, inplace=True)
return df
| Get formatted transactions
Returns
-------
pd.DataFrame: formatted transactions
| 8 | 33 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_transactions(self):
df = self.__transactions[
[
"Date",
"Type",
"Ticker",
"Side",
"Price",
"Quantity",
"Fees",
"Investment",
"Currency",
"Sector",
"Industry",
"Country",
"Region",
]
]
df = df.replace(np.nan, "-")
df["Date"] = df["Date"].dt.strftime("%Y-%m-%d")
df.sort_values(by="Date", ascending=False, inplace=True)
return df
```
###Assistant : Get formatted transactions
Returns
-------
pd.DataFrame: formatted transactions
|
1,613 | def test_pick_colors(self) -> None:
used_colors: Set[str] = set()
color_map: Dict[int, str] = {}
recipient_ids = list(range(30))
user_color_map = pick_colors(used_colors, color_map, recipient_ids)
self.assertEqual(
user_color_map,
{
0: "#76ce90",
1: "#fae589",
2: "#a6c7e5",
3: "#e79ab5",
4: "#bfd56f",
5: "#f4ae55",
6: "#b0a5fd",
7: "#addfe5",
8: "#f5ce6e",
9: "#c2726a",
10: "#94c849",
11: "#bd86e5",
12: "#ee7e4a",
13: "#a6dcbf",
14: "#95a5fd",
15: "#53a063",
16: "#9987e1",
17: "#e4523d",
18: "#c2c2c2",
19: "#4f8de4",
20: "#c6a8ad",
21: "#e7cc4d",
22: "#c8bebf",
23: "#a47462",
# start repeating
24: "#76ce90",
25: "#fae589",
26: "#a6c7e5",
27: "#e79ab5",
28: "#bfd56f",
29: "#f4ae55",
},
)
color_map = {98: "color98", 99: "color99"}
used_colors = set(STREAM_ASSIGNMENT_COLORS) - {"#c6a8ad", "#9987e1"}
recipient_ids = [99, 98, 1, 2, 3, 4]
user_color_map = pick_colors(used_colors, color_map, recipient_ids)
self.assertEqual(
user_color_map,
{98: "color98", 99: "color99", 1: "#9987e1", 2: "#c6a8ad", 3: "#e79ab5", 4: "#bfd56f"},
)
used_colors = set(STREAM_ASSIGNMENT_COLORS)
color_map = {}
recipient_ids = [2, 26, 50, 74]
user_color_map = pick_colors(used_colors, color_map, recipient_ids)
self.assertEqual(
user_color_map,
{2: "#a6c7e5", 26: "#a6c7e5", 50: "#a6c7e5", 74: "#a6c7e5"},
)
|
If we are assigning colors to a user with 24+ streams, we have to start
re-using old colors. Our algorithm basically uses recipient_id % 24, so
the following code reflects the worst-case scenario that our new
streams have recipient ids spaced out by exact multiples of 24. We
don't try to work around this edge case, since users who really depend
on the stream colors can always just assign themselves custom colors
for the streams that they really want to stand out.
Even if recipient_ids were completely random, the odds of collisions
are low, but it's often the case that bulk-adds are done for streams
that either were or are being created at roughly the same time, so the
recipient_ids tend to have even fewer collisions.
| 127 | 157 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_pick_colors(self) -> None:
used_colors: Set[str] = set()
color_map: Dict[int, str] = {}
recipient_ids = list(range(30))
user_color_map = pick_colors(used_colors, color_map, recipient_ids)
self.assertEqual(
user_color_map,
{
0: "#76ce90",
1: "#fae589",
2: "#a6c7e5",
3: "#e79ab5",
4: "#bfd56f",
5: "#f4ae55",
6: "#b0a5fd",
7: "#addfe5",
8: "#f5ce6e",
9: "#c2726a",
10: "#94c849",
11: "#bd86e5",
12: "#ee7e4a",
13: "#a6dcbf",
14: "#95a5fd",
15: "#53a063",
16: "#9987e1",
17: "#e4523d",
18: "#c2c2c2",
19: "#4f8de4",
20: "#c6a8ad",
21: "#e7cc4d",
22: "#c8bebf",
23: "#a47462",
# start repeating
24: "#76ce90",
25: "#fae589",
26: "#a6c7e5",
27: "#e79ab5",
28: "#bfd56f",
29: "#f4ae55",
},
)
color_map = {98: "color98", 99: "color99"}
used_colors = set(STREAM_ASSIGNMENT_COLORS) - {"#c6a8ad", "#9987e1"}
recipient_ids = [99, 98, 1, 2, 3, 4]
user_color_map = pick_colors(used_colors, color_map, recipient_ids)
self.assertEqual(
user_color_map,
{98: "color98", 99: "color99", 1: "#9987e1", 2: "#c6a8ad", 3: "#e79ab5", 4: "#bfd56f"},
)
used_colors = set(STREAM_ASSIGNMENT_COLORS)
color_map = {}
recipient_ids = [2, 26, 50, 74]
user_color_map = pick_colors(used_colors, color_map, recipient_ids)
self.assertEqual(
user_color_map,
{2: "#a6c7e5", 26: "#a6c7e5", 50: "#a6c7e5", 74: "#a6c7e5"},
)
```
###Assistant :
If we are assigning colors to a user with 24+ streams, we have to start
re-using old colors. Our algorithm basically uses recipient_id % 24, so
the following code reflects the worst-case scenario that our new
streams have recipient ids spaced out by exact multiples of 24. We
don't try to work around this edge case, since users who really depend
on the stream colors can always just assign themselves custom colors
for the streams that they really want to stand out.
Even if recipient_ids were completely random, the odds of collisions
are low, but it's often the case that bulk-adds are done for streams
that either were or are being created at roughly the same time, so the
recipient_ids tend to have even fewer collisions.
|
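A simplified sketch of the modular assignment idea the test exercises; this is not Zulip's actual `pick_colors`, which also avoids colors that are already in use:
```Python
# Shortened palette for illustration; the real one has 24 entries.
STREAM_ASSIGNMENT_COLORS = ["#76ce90", "#fae589", "#a6c7e5"]

def pick_color(recipient_id: int) -> str:
    # Colors repeat once there are more streams than palette entries, so ids
    # spaced by exact multiples of the palette size all collide on one color.
    return STREAM_ASSIGNMENT_COLORS[recipient_id % len(STREAM_ASSIGNMENT_COLORS)]

print([pick_color(i) for i in (0, 3, 6)])  # ['#76ce90', '#76ce90', '#76ce90']
```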
1,614 | def test_import(self):
data = (
('name', 'slug', 'status', 'cf_text', 'cf_longtext', 'cf_integer', 'cf_boolean', 'cf_date', 'cf_url', 'cf_json', 'cf_select', 'cf_multiselect'),
('Site 1', 'site-1', 'active', 'ABC', 'Foo', '123', 'True', '2020-01-01', 'http://example.com/1', '{"foo": 123}', 'Choice A', '"Choice A,Choice B"'),
('Site 2', 'site-2', 'active', 'DEF', 'Bar', '456', 'False', '2020-01-02', 'http://example.com/2', '{"bar": 456}', 'Choice B', '"Choice B,Choice C"'),
('Site 3', 'site-3', 'active', '', '', '', '', '', '', '', '', ''),
)
csv_data = '\n'.join(','.join(row) for row in data)
response = self.client.post(reverse('dcim:site_import'), {'csv': csv_data})
self.assertEqual(response.status_code, 200)
self.assertEqual(Site.objects.count(), 3)
# Validate data for site 1
site1 = Site.objects.get(name='Site 1')
self.assertEqual(len(site1.custom_field_data), 9)
self.assertEqual(site1.custom_field_data['text'], 'ABC')
self.assertEqual(site1.custom_field_data['longtext'], 'Foo')
self.assertEqual(site1.custom_field_data['integer'], 123)
self.assertEqual(site1.custom_field_data['boolean'], True)
self.assertEqual(site1.custom_field_data['date'], '2020-01-01')
self.assertEqual(site1.custom_field_data['url'], 'http://example.com/1')
self.assertEqual(site1.custom_field_data['json'], {"foo": 123})
self.assertEqual(site1.custom_field_data['select'], 'Choice A')
self.assertEqual(site1.custom_field_data['multiselect'], ['Choice A', 'Choice B'])
# Validate data for site 2
site2 = Site.objects.get(name='Site 2')
self.assertEqual(len(site2.custom_field_data), 9)
self.assertEqual(site2.custom_field_data['text'], 'DEF')
self.assertEqual(site2.custom_field_data['longtext'], 'Bar')
self.assertEqual(site2.custom_field_data['integer'], 456)
self.assertEqual(site2.custom_field_data['boolean'], False)
self.assertEqual(site2.custom_field_data['date'], '2020-01-02')
self.assertEqual(site2.custom_field_data['url'], 'http://example.com/2')
self.assertEqual(site2.custom_field_data['json'], {"bar": 456})
self.assertEqual(site2.custom_field_data['select'], 'Choice B')
self.assertEqual(site2.custom_field_data['multiselect'], ['Choice B', 'Choice C'])
# No custom field data should be set for site 3
site3 = Site.objects.get(name='Site 3')
self.assertFalse(any(site3.custom_field_data.values()))
|
Import a Site in CSV format, including a value for each CustomField.
| 12 | 167 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_import(self):
data = (
('name', 'slug', 'status', 'cf_text', 'cf_longtext', 'cf_integer', 'cf_boolean', 'cf_date', 'cf_url', 'cf_json', 'cf_select', 'cf_multiselect'),
('Site 1', 'site-1', 'active', 'ABC', 'Foo', '123', 'True', '2020-01-01', 'http://example.com/1', '{"foo": 123}', 'Choice A', '"Choice A,Choice B"'),
('Site 2', 'site-2', 'active', 'DEF', 'Bar', '456', 'False', '2020-01-02', 'http://example.com/2', '{"bar": 456}', 'Choice B', '"Choice B,Choice C"'),
('Site 3', 'site-3', 'active', '', '', '', '', '', '', '', '', ''),
)
csv_data = '\n'.join(','.join(row) for row in data)
response = self.client.post(reverse('dcim:site_import'), {'csv': csv_data})
self.assertEqual(response.status_code, 200)
self.assertEqual(Site.objects.count(), 3)
# Validate data for site 1
site1 = Site.objects.get(name='Site 1')
self.assertEqual(len(site1.custom_field_data), 9)
self.assertEqual(site1.custom_field_data['text'], 'ABC')
self.assertEqual(site1.custom_field_data['longtext'], 'Foo')
self.assertEqual(site1.custom_field_data['integer'], 123)
self.assertEqual(site1.custom_field_data['boolean'], True)
self.assertEqual(site1.custom_field_data['date'], '2020-01-01')
self.assertEqual(site1.custom_field_data['url'], 'http://example.com/1')
self.assertEqual(site1.custom_field_data['json'], {"foo": 123})
self.assertEqual(site1.custom_field_data['select'], 'Choice A')
self.assertEqual(site1.custom_field_data['multiselect'], ['Choice A', 'Choice B'])
# Validate data for site 2
site2 = Site.objects.get(name='Site 2')
self.assertEqual(len(site2.custom_field_data), 9)
self.assertEqual(site2.custom_field_data['text'], 'DEF')
self.assertEqual(site2.custom_field_data['longtext'], 'Bar')
self.assertEqual(site2.custom_field_data['integer'], 456)
self.assertEqual(site2.custom_field_data['boolean'], False)
self.assertEqual(site2.custom_field_data['date'], '2020-01-02')
self.assertEqual(site2.custom_field_data['url'], 'http://example.com/2')
self.assertEqual(site2.custom_field_data['json'], {"bar": 456})
self.assertEqual(site2.custom_field_data['select'], 'Choice B')
self.assertEqual(site2.custom_field_data['multiselect'], ['Choice B', 'Choice C'])
# No custom field data should be set for site 3
site3 = Site.objects.get(name='Site 3')
self.assertFalse(any(site3.custom_field_data.values()))
```
###Assistant :
Import a Site in CSV format, including a value for each CustomField.
|
1,615 | def build_pattern():
#bare = set()
for module, replace in list(MAPPING.items()):
for old_attr, new_attr in list(replace.items()):
LOOKUP[(module, old_attr)] = new_attr
#bare.add(module)
#bare.add(old_attr)
#yield % (module, module)
yield % (module, old_attr, old_attr)
yield % (module, old_attr)
#yield % alternates(bare)
|
# import_name< 'import' (module=%r
# | dotted_as_names< any* module=%r any* >) >
#
import_from< 'from' module_name=%r 'import'
( attr_name=%r | import_as_name< attr_name=%r 'as' any >) >
power< module_name=%r trailer< '.' attr_name=%r > any* >
bare_name=%s | 35 | 37 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def build_pattern():
#bare = set()
for module, replace in list(MAPPING.items()):
for old_attr, new_attr in list(replace.items()):
LOOKUP[(module, old_attr)] = new_attr
#bare.add(module)
#bare.add(old_attr)
#yield % (module, module)
yield % (module, old_attr, old_attr)
yield % (module, old_attr)
#yield % alternates(bare)
```
###Assistant :
# import_name< 'import' (module=%r
# | dotted_as_names< any* module=%r any* >) >
#
import_from< 'from' module_name=%r 'import'
( attr_name=%r | import_as_name< attr_name=%r 'as' any >) >
power< module_name=%r trailer< '.' attr_name=%r > any* >
bare_name=%s
|
1,616 | async def async_refresh_sensor(self) -> None:
_LOGGER.debug("Refreshing library sensor for '%s'", self.name)
try:
await self.hass.async_add_executor_job(self._update_state_and_attrs)
self._attr_available = True
except NotFound:
self._attr_available = False
except requests.exceptions.RequestException as err:
_LOGGER.error(
"Could not update library sensor for '%s': %s",
self.library_section.title,
err,
)
self._attr_available = False
self.async_write_ha_state()
| Update state and attributes for the library sensor. | 8 | 42 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def async_refresh_sensor(self) -> None:
_LOGGER.debug("Refreshing library sensor for '%s'", self.name)
try:
await self.hass.async_add_executor_job(self._update_state_and_attrs)
self._attr_available = True
except NotFound:
self._attr_available = False
except requests.exceptions.RequestException as err:
_LOGGER.error(
"Could not update library sensor for '%s': %s",
self.library_section.title,
err,
)
self._attr_available = False
self.async_write_ha_state()
```
###Assistant : Update state and attributes for the library sensor.
|
1,617 | def __hash__(self):
return hash(self.name) + hash(self.version) + hash(self.source_url)
|
Compute hash in a way which matches the equality test.
| 10 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __hash__(self):
return hash(self.name) + hash(self.version) + hash(self.source_url)
```
###Assistant :
Compute hash in a way which matches the equality test.
|
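A short sketch of why the hash is built from the same fields as the equality test; the class below is illustrative, not the original distribution class:
```Python
class Distribution:
    def __init__(self, name, version, source_url):
        self.name, self.version, self.source_url = name, version, source_url

    def __eq__(self, other):
        return (self.name, self.version, self.source_url) == \
               (other.name, other.version, other.source_url)

    def __hash__(self):
        # Mirrors __eq__: equal objects hash equally, so duplicates collapse in sets/dicts.
        return hash(self.name) + hash(self.version) + hash(self.source_url)

a = Distribution("pkg", "1.0", "https://example.com/pkg-1.0.tar.gz")
b = Distribution("pkg", "1.0", "https://example.com/pkg-1.0.tar.gz")
print(a == b, hash(a) == hash(b), len({a, b}))  # True True 1
```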
1,618 | def _min_nodes(self) -> int:
if self._side == "gblock":
return self._config["fc_gblock_min_nodes"]
retval = self._scale_filters(self._config["fc_min_filters"])
retval = int(retval * self._config["fc_dimensions"] ** 2)
return retval
| int: The number of nodes for the first Dense. For non g-block layers this will be the
given minimum filters multiplied by the dimensions squared. For g-block layers, this is the
given value | 33 | 22 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _min_nodes(self) -> int:
if self._side == "gblock":
return self._config["fc_gblock_min_nodes"]
retval = self._scale_filters(self._config["fc_min_filters"])
retval = int(retval * self._config["fc_dimensions"] ** 2)
return retval
```
###Assistant : int: The number of nodes for the first Dense. For non g-block layers this will be the
given minimum filters multiplied by the dimensions squared. For g-block layers, this is the
given value
|
1,619 | def _get(cls) -> dict:
custom_parameters = super().get()
result = cls.default.copy()
result.update(
{key.replace("-", "_"): value for key, value in custom_parameters.items()}
)
return result
|
Get the resulting command-line options.
Returns
-------
dict
Decoded and verified config value.
| 13 | 22 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get(cls) -> dict:
custom_parameters = super().get()
result = cls.default.copy()
result.update(
{key.replace("-", "_"): value for key, value in custom_parameters.items()}
)
return result
```
###Assistant :
Get the resulting command-line options.
Returns
-------
dict
Decoded and verified config value.
|
1,620 | def add(self, node, *predecessors):
if self._ready_nodes is not None:
raise ValueError("Nodes cannot be added after a call to prepare()")
# Create the node -> predecessor edges
nodeinfo = self._get_nodeinfo(node)
nodeinfo.npredecessors += len(predecessors)
# Create the predecessor -> node edges
for pred in predecessors:
pred_info = self._get_nodeinfo(pred)
pred_info.successors.append(node)
| Add a new node and its predecessors to the graph.
Both the *node* and all elements in *predecessors* must be hashable.
If called multiple times with the same node argument, the set of dependencies
will be the union of all dependencies passed in.
It is possible to add a node with no dependencies (*predecessors* is not provided)
as well as provide a dependency twice. If a node that has not been provided before
is included among *predecessors* it will be automatically added to the graph with
no predecessors of its own.
Raises ValueError if called after "prepare".
| 97 | 47 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def add(self, node, *predecessors):
if self._ready_nodes is not None:
raise ValueError("Nodes cannot be added after a call to prepare()")
# Create the node -> predecessor edges
nodeinfo = self._get_nodeinfo(node)
nodeinfo.npredecessors += len(predecessors)
# Create the predecessor -> node edges
for pred in predecessors:
pred_info = self._get_nodeinfo(pred)
pred_info.successors.append(node)
```
###Assistant : Add a new node and its predecessors to the graph.
Both the *node* and all elements in *predecessors* must be hashable.
If called multiple times with the same node argument, the set of dependencies
will be the union of all dependencies passed in.
It is possible to add a node with no dependencies (*predecessors* is not provided)
as well as provide a dependency twice. If a node that has not been provided before
is included among *predecessors* it will be automatically added to the graph with
no predecessors of its own.
Raises ValueError if called after "prepare".
|
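This matches `graphlib.TopologicalSorter.add` in the Python standard library (3.9+); a short usage sketch with made-up task names:
```Python
from graphlib import TopologicalSorter

ts = TopologicalSorter()
ts.add("build", "compile", "lint")  # "build" depends on "compile" and "lint"
ts.add("compile", "fetch")          # "compile" depends on "fetch"; "fetch" is added implicitly
ts.add("lint")                      # node with no predecessors

# One valid linear order; independent nodes may come out in any order.
print(list(ts.static_order()))  # e.g. ['lint', 'fetch', 'compile', 'build']
```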
1,621 | def test_stream_slices_with_state_and_slices(self, api, async_manager_mock, start_date):
end_date = start_date + duration(days=10)
cursor_value = start_date + duration(days=5)
state = {
AdsInsights.cursor_field: cursor_value.date().isoformat(),
"slices": [(cursor_value + duration(days=1)).date().isoformat(), (cursor_value + duration(days=3)).date().isoformat()],
}
stream = AdsInsights(api=api, start_date=start_date, end_date=end_date)
async_manager_mock.completed_jobs.return_value = [1, 2, 3]
slices = list(stream.stream_slices(stream_state=state, sync_mode=SyncMode.incremental))
assert slices == [{"insight_job": 1}, {"insight_job": 2}, {"insight_job": 3}]
async_manager_mock.assert_called_once()
args, kwargs = async_manager_mock.call_args
generated_jobs = list(kwargs["jobs"])
assert len(generated_jobs) == (end_date - cursor_value).days - 2, "should be 2 slices short because of state"
assert generated_jobs[0].interval.start == cursor_value.date() + duration(days=2)
assert generated_jobs[1].interval.start == cursor_value.date() + duration(days=4)
| Stream will use cursor_value from state, but will skip saved slices | 11 | 87 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_stream_slices_with_state_and_slices(self, api, async_manager_mock, start_date):
end_date = start_date + duration(days=10)
cursor_value = start_date + duration(days=5)
state = {
AdsInsights.cursor_field: cursor_value.date().isoformat(),
"slices": [(cursor_value + duration(days=1)).date().isoformat(), (cursor_value + duration(days=3)).date().isoformat()],
}
stream = AdsInsights(api=api, start_date=start_date, end_date=end_date)
async_manager_mock.completed_jobs.return_value = [1, 2, 3]
slices = list(stream.stream_slices(stream_state=state, sync_mode=SyncMode.incremental))
assert slices == [{"insight_job": 1}, {"insight_job": 2}, {"insight_job": 3}]
async_manager_mock.assert_called_once()
args, kwargs = async_manager_mock.call_args
generated_jobs = list(kwargs["jobs"])
assert len(generated_jobs) == (end_date - cursor_value).days - 2, "should be 2 slices short because of state"
assert generated_jobs[0].interval.start == cursor_value.date() + duration(days=2)
assert generated_jobs[1].interval.start == cursor_value.date() + duration(days=4)
```
###Assistant : Stream will use cursor_value from state, but will skip saved slices
|
1,622 | def _test_readonly_foreignkey_links(self, admin_site):
chapter = Chapter.objects.create(
title="Chapter 1",
content="content",
book=Book.objects.create(name="Book 1"),
)
language = Language.objects.create(iso="_40", name="Test")
obj = ReadOnlyRelatedField.objects.create(
chapter=chapter,
language=language,
user=self.superuser,
)
response = self.client.get(
reverse(
f"{admin_site}:admin_views_readonlyrelatedfield_change", args=(obj.pk,)
),
)
# Related ForeignKey object registered in admin.
user_url = reverse(f"{admin_site}:auth_user_change", args=(self.superuser.pk,))
self.assertContains(
response,
'<div class="readonly"><a href="%s">super</a></div>' % user_url,
html=True,
)
# Related ForeignKey with the string primary key registered in admin.
language_url = reverse(
f"{admin_site}:admin_views_language_change",
args=(quote(language.pk),),
)
self.assertContains(
response,
'<div class="readonly"><a href="%s">_40</a></div>' % language_url,
html=True,
)
# Related ForeignKey object not registered in admin.
self.assertContains(
response, '<div class="readonly">Chapter 1</div>', html=True
)
|
ForeignKey readonly fields render as links if the target model is
registered in admin.
| 14 | 92 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _test_readonly_foreignkey_links(self, admin_site):
chapter = Chapter.objects.create(
title="Chapter 1",
content="content",
book=Book.objects.create(name="Book 1"),
)
language = Language.objects.create(iso="_40", name="Test")
obj = ReadOnlyRelatedField.objects.create(
chapter=chapter,
language=language,
user=self.superuser,
)
response = self.client.get(
reverse(
f"{admin_site}:admin_views_readonlyrelatedfield_change", args=(obj.pk,)
),
)
# Related ForeignKey object registered in admin.
user_url = reverse(f"{admin_site}:auth_user_change", args=(self.superuser.pk,))
self.assertContains(
response,
'<div class="readonly"><a href="%s">super</a></div>' % user_url,
html=True,
)
# Related ForeignKey with the string primary key registered in admin.
language_url = reverse(
f"{admin_site}:admin_views_language_change",
args=(quote(language.pk),),
)
self.assertContains(
response,
'<div class="readonly"><a href="%s">_40</a></div>' % language_url,
html=True,
)
# Related ForeignKey object not registered in admin.
self.assertContains(
response, '<div class="readonly">Chapter 1</div>', html=True
)
```
###Assistant :
ForeignKey readonly fields render as links if the target model is
registered in admin.
|
1,623 | def execute():
for project in frappe.get_all("Project", fields=["name", "percent_complete_method"]):
total = frappe.db.count("Task", dict(project=project.name))
if project.percent_complete_method == "Task Completion" and total > 0:
completed = frappe.db.sql(
,
project.name,
)[0][0]
percent_complete = flt(flt(completed) / total * 100, 2)
if project.percent_complete != percent_complete:
frappe.db.set_value("Project", project.name, "percent_complete", percent_complete)
if percent_complete == 100:
frappe.db.set_value("Project", project.name, "status", "Completed")
| select count(name) from tabTask where
project=%s and status in ('Cancelled', 'Completed') | 11 | 51 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def execute():
for project in frappe.get_all("Project", fields=["name", "percent_complete_method"]):
total = frappe.db.count("Task", dict(project=project.name))
if project.percent_complete_method == "Task Completion" and total > 0:
completed = frappe.db.sql(
,
project.name,
)[0][0]
percent_complete = flt(flt(completed) / total * 100, 2)
if project.percent_complete != percent_complete:
frappe.db.set_value("Project", project.name, "percent_complete", percent_complete)
if percent_complete == 100:
frappe.db.set_value("Project", project.name, "status", "Completed")
```
###Assistant : select count(name) from tabTask where
project=%s and status in ('Cancelled', 'Completed')
|
1,624 | def mathematica(s, additional_translations=None):
parser = MathematicaParser(additional_translations)
if additional_translations is not None:
SymPyDeprecationWarning(
feature="additional_translations parameter for the Mathematica parser",
last_supported_version="1.9",
useinstead="Use SymPy's .replace( ) or .subs( ) methods on the output expression",
issue="23042",
).warn()
return sympify(parser._parse_old(s))
return parser.parse(s)
|
Translate a string containing a Wolfram Mathematica expression to a SymPy
expression.
If the translator is unable to find a suitable SymPy expression, the
``FullForm`` of the Mathematica expression will be output, using SymPy
``Function`` objects as nodes of the syntax tree.
Examples
========
>>> from sympy.parsing.mathematica import mathematica
>>> mathematica("Sin[x]^2 Tan[y]")
sin(x)**2*tan(y)
>>> e = mathematica("F[7,5,3]")
>>> e
F(7, 5, 3)
>>> from sympy import Function, Max, Min
>>> e.replace(Function("F"), lambda *x: Max(*x)*Min(*x))
21
Both standard input form and Mathematica full form are supported:
>>> mathematica("x*(a + b)")
x*(a + b)
>>> mathematica("Times[x, Plus[a, b]]")
x*(a + b)
To get a matrix from Wolfram's code:
>>> m = mathematica("{{a, b}, {c, d}}")
>>> m
((a, b), (c, d))
>>> from sympy import Matrix
>>> Matrix(m)
Matrix([
[a, b],
[c, d]])
If the translation into equivalent SymPy expressions fails, a SymPy
expression equivalent to Wolfram Mathematica's "FullForm" will be created:
>>> mathematica("x_.")
Optional(Pattern(x, Blank()))
>>> mathematica("Plus @@ {x, y, z}")
Apply(Plus, (x, y, z))
>>> mathematica("f[x_, 3] := x^3 /; x > 0")
SetDelayed(f(Pattern(x, Blank()), 3), Condition(x**3, x > 0))
| 180 | 37 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def mathematica(s, additional_translations=None):
parser = MathematicaParser(additional_translations)
if additional_translations is not None:
SymPyDeprecationWarning(
feature="additional_translations parameter for the Mathematica parser",
last_supported_version="1.9",
useinstead="Use SymPy's .replace( ) or .subs( ) methods on the output expression",
issue="23042",
).warn()
return sympify(parser._parse_old(s))
return parser.parse(s)
```
###Assistant :
Translate a string containing a Wolfram Mathematica expression to a SymPy
expression.
If the translator is unable to find a suitable SymPy expression, the
``FullForm`` of the Mathematica expression will be output, using SymPy
``Function`` objects as nodes of the syntax tree.
Examples
========
>>> from sympy.parsing.mathematica import mathematica
>>> mathematica("Sin[x]^2 Tan[y]")
sin(x)**2*tan(y)
>>> e = mathematica("F[7,5,3]")
>>> e
F(7, 5, 3)
>>> from sympy import Function, Max, Min
>>> e.replace(Function("F"), lambda *x: Max(*x)*Min(*x))
21
Both standard input form and Mathematica full form are supported:
>>> mathematica("x*(a + b)")
x*(a + b)
>>> mathematica("Times[x, Plus[a, b]]")
x*(a + b)
To get a matrix from Wolfram's code:
>>> m = mathematica("{{a, b}, {c, d}}")
>>> m
((a, b), (c, d))
>>> from sympy import Matrix
>>> Matrix(m)
Matrix([
[a, b],
[c, d]])
If the translation into equivalent SymPy expressions fails, a SymPy
expression equivalent to Wolfram Mathematica's "FullForm" will be created:
>>> mathematica("x_.")
Optional(Pattern(x, Blank()))
>>> mathematica("Plus @@ {x, y, z}")
Apply(Plus, (x, y, z))
>>> mathematica("f[x_, 3] := x^3 /; x > 0")
SetDelayed(f(Pattern(x, Blank()), 3), Condition(x**3, x > 0))
|
1,625 | def update_company_current_month_sales(company):
current_month_year = formatdate(today(), "MM-yyyy")
results = frappe.db.sql(
.format(
current_month_year=current_month_year, company=frappe.db.escape(company)
),
as_dict=True,
)
monthly_total = results[0]["total"] if len(results) > 0 else 0
frappe.db.set_value("Company", company, "total_monthly_sales", monthly_total)
|
SELECT
SUM(base_grand_total) AS total,
DATE_FORMAT(`posting_date`, '%m-%Y') AS month_year
FROM
`tabSales Invoice`
WHERE
DATE_FORMAT(`posting_date`, '%m-%Y') = '{current_month_year}'
AND docstatus = 1
AND company = {company}
GROUP BY
month_year
| 27 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def update_company_current_month_sales(company):
current_month_year = formatdate(today(), "MM-yyyy")
results = frappe.db.sql(
.format(
current_month_year=current_month_year, company=frappe.db.escape(company)
),
as_dict=True,
)
monthly_total = results[0]["total"] if len(results) > 0 else 0
frappe.db.set_value("Company", company, "total_monthly_sales", monthly_total)
```
###Assistant :
SELECT
SUM(base_grand_total) AS total,
DATE_FORMAT(`posting_date`, '%m-%Y') AS month_year
FROM
`tabSales Invoice`
WHERE
DATE_FORMAT(`posting_date`, '%m-%Y') = '{current_month_year}'
AND docstatus = 1
AND company = {company}
GROUP BY
month_year
|
1,626 | def metadata_version(self):
# type: () -> Optional[str]
raise NotImplementedError()
| Value of "Metadata-Version:" in the distribution, if available. | 8 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def metadata_version(self):
# type: () -> Optional[str]
raise NotImplementedError()
```
###Assistant : Value of "Metadata-Version:" in the distribution, if available.
|
1,627 | def call_ca(self, _):
if self.screen_tickers:
self.queue = ca_controller.ComparisonAnalysisController(
self.screen_tickers, self.queue
).menu(custom_path_menu_above="/stocks/")
else:
print("Some tickers must be screened first through one of the presets!\n")
| Call the comparison analysis menu with selected tickers | 8 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def call_ca(self, _):
if self.screen_tickers:
self.queue = ca_controller.ComparisonAnalysisController(
self.screen_tickers, self.queue
).menu(custom_path_menu_above="/stocks/")
else:
print("Some tickers must be screened first through one of the presets!\n")
```
###Assistant : Call the comparison analysis menu with selected tickers
|
1,628 | def delete_tasks_predictions(project, queryset, **kwargs):
task_ids = queryset.values_list('id', flat=True)
predictions = Prediction.objects.filter(task__id__in=task_ids)
count = predictions.count()
predictions.delete()
queryset.update(updated_at=datetime.now())
return {'processed_items': count, 'detail': 'Deleted ' + str(count) + ' predictions'}
actions = [
{
'entry_point': retrieve_tasks_predictions,
'permission': all_permissions.predictions_any,
'title': 'Retrieve Predictions',
'order': 90,
'dialog': {
'text': 'Send the selected tasks to all ML backends connected to the project.'
'This operation might be abruptly interrupted due to a timeout. '
'The recommended way to get predictions is to update tasks using the Label Studio API.'
'<a href="https://labelstud.io/guide/ml.html>See more in the documentation</a>.'
'Please confirm your action.',
'type': 'confirm'
}
},
{
'entry_point': delete_tasks,
'permission': all_permissions.tasks_delete,
'title': 'Delete Tasks',
'order': 100,
'reload': True,
'dialog': {
'text': 'You are going to delete the selected tasks. Please confirm your action.',
'type': 'confirm'
}
},
{
'entry_point': delete_tasks_annotations,
'permission': all_permissions.tasks_delete,
'title': 'Delete Annotations',
'order': 101,
'dialog': {
'text': 'You are going to delete all annotations from the selected tasks. Please confirm your action.',
'type': 'confirm'
}
},
{
'entry_point': delete_tasks_predictions,
'permission': all_permissions.predictions_any,
'title': 'Delete Predictions',
'order': 102,
'dialog': {
'text': 'You are going to delete all predictions from the selected tasks. Please confirm your action.',
'type': 'confirm'
}
}
]
| Delete all predictions by tasks ids
:param project: project instance
:param queryset: filtered tasks db queryset
| 16 | 191 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def delete_tasks_predictions(project, queryset, **kwargs):
task_ids = queryset.values_list('id', flat=True)
predictions = Prediction.objects.filter(task__id__in=task_ids)
count = predictions.count()
predictions.delete()
queryset.update(updated_at=datetime.now())
return {'processed_items': count, 'detail': 'Deleted ' + str(count) + ' predictions'}
actions = [
{
'entry_point': retrieve_tasks_predictions,
'permission': all_permissions.predictions_any,
'title': 'Retrieve Predictions',
'order': 90,
'dialog': {
'text': 'Send the selected tasks to all ML backends connected to the project.'
'This operation might be abruptly interrupted due to a timeout. '
'The recommended way to get predictions is to update tasks using the Label Studio API.'
'<a href="https://labelstud.io/guide/ml.html>See more in the documentation</a>.'
'Please confirm your action.',
'type': 'confirm'
}
},
{
'entry_point': delete_tasks,
'permission': all_permissions.tasks_delete,
'title': 'Delete Tasks',
'order': 100,
'reload': True,
'dialog': {
'text': 'You are going to delete the selected tasks. Please confirm your action.',
'type': 'confirm'
}
},
{
'entry_point': delete_tasks_annotations,
'permission': all_permissions.tasks_delete,
'title': 'Delete Annotations',
'order': 101,
'dialog': {
'text': 'You are going to delete all annotations from the selected tasks. Please confirm your action.',
'type': 'confirm'
}
},
{
'entry_point': delete_tasks_predictions,
'permission': all_permissions.predictions_any,
'title': 'Delete Predictions',
'order': 102,
'dialog': {
'text': 'You are going to delete all predictions from the selected tasks. Please confirm your action.',
'type': 'confirm'
}
}
]
```
###Assistant : Delete all predictions by tasks ids
:param project: project instance
:param queryset: filtered tasks db queryset
|
1,629 | def clean_up(self):
if not self._cleaned:
for ref in self.referenced_paths():
self._reference_counter[ref] -= 1
if self._reference_counter[ref] <= 0:
os.remove(ref)
if self._reference_counter[ref] < 0:
_logger.warning('Referance counter error, the number of %s is %d',
ref, self._reference_counter[ref])
self._cleaned = True
else:
_logger.warning('Already clean up task %d', self.task_id)
|
Decrement the reference counter of each referenced file path by 1. If a counter reaches 0, delete the file.
| 16 | 42 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def clean_up(self):
if not self._cleaned:
for ref in self.referenced_paths():
self._reference_counter[ref] -= 1
if self._reference_counter[ref] <= 0:
os.remove(ref)
if self._reference_counter[ref] < 0:
_logger.warning('Referance counter error, the number of %s is %d',
ref, self._reference_counter[ref])
self._cleaned = True
else:
_logger.warning('Already clean up task %d', self.task_id)
```
###Assistant :
Decrement the reference counter of each referenced file path by 1. If a counter reaches 0, delete the file.
|
1,630 | def load_backend(backend_name):
# This backend was renamed in Django 1.9.
if backend_name == "django.db.backends.postgresql_psycopg2":
backend_name = "django.db.backends.postgresql"
try:
return import_module("%s.base" % backend_name)
except ImportError as e_user:
# The database backend wasn't found. Display a helpful error message
# listing all built-in database backends.
import django.db.backends
builtin_backends = [
name
for _, name, ispkg in pkgutil.iter_modules(django.db.backends.__path__)
if ispkg and name not in {"base", "dummy"}
]
if backend_name not in ["django.db.backends.%s" % b for b in builtin_backends]:
backend_reprs = map(repr, sorted(builtin_backends))
raise ImproperlyConfigured(
"%r isn't an available database backend or couldn't be "
"imported. Check the above exception. To use one of the "
"built-in backends, use 'django.db.backends.XXX', where XXX "
"is one of:\n"
" %s" % (backend_name, ", ".join(backend_reprs))
) from e_user
else:
# If there's some other error, this must be an error in Django
raise
|
Return a database backend's "base" module given a fully qualified database
backend name, or raise an error if it doesn't exist.
| 21 | 136 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def load_backend(backend_name):
# This backend was renamed in Django 1.9.
if backend_name == "django.db.backends.postgresql_psycopg2":
backend_name = "django.db.backends.postgresql"
try:
return import_module("%s.base" % backend_name)
except ImportError as e_user:
# The database backend wasn't found. Display a helpful error message
# listing all built-in database backends.
import django.db.backends
builtin_backends = [
name
for _, name, ispkg in pkgutil.iter_modules(django.db.backends.__path__)
if ispkg and name not in {"base", "dummy"}
]
if backend_name not in ["django.db.backends.%s" % b for b in builtin_backends]:
backend_reprs = map(repr, sorted(builtin_backends))
raise ImproperlyConfigured(
"%r isn't an available database backend or couldn't be "
"imported. Check the above exception. To use one of the "
"built-in backends, use 'django.db.backends.XXX', where XXX "
"is one of:\n"
" %s" % (backend_name, ", ".join(backend_reprs))
) from e_user
else:
# If there's some other error, this must be an error in Django
raise
```
###Assistant :
Return a database backend's "base" module given a fully qualified database
backend name, or raise an error if it doesn't exist.
|
1,631 | def explode_superdims(sizes, dims):
strides_to_sizes = {stride: size for size, stride in zip(sizes, strides_for_sizes(sizes))}
dims = list(reversed(dims))
final_dims = []
for size, stride in dims:
target_size = strides_to_sizes[stride]
new_dims = []
while size > target_size:
assert target_size > 1 # Ensure progress
assert size % target_size == 0
new_dims.append((target_size, stride))
size //= target_size
stride *= target_size
target_size = strides_to_sizes[stride]
assert size == target_size
new_dims.append((size, stride))
final_dims += reversed(new_dims)
return final_dims
| Explode superdims to fit a known shape.
The unflattening process might mistakenly generate too few too large dimensions.
For example, ``unflatten_superdims(np.arange(n))`` always returns ``[(n, 1)]``.
This function takes a list of such contiguous super-dimensions and splits them
into smaller dimensions such that::
set(map(fst, explode_superdims(sizes, dims))) == set(sizes)
| 47 | 69 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def explode_superdims(sizes, dims):
strides_to_sizes = {stride: size for size, stride in zip(sizes, strides_for_sizes(sizes))}
dims = list(reversed(dims))
final_dims = []
for size, stride in dims:
target_size = strides_to_sizes[stride]
new_dims = []
while size > target_size:
assert target_size > 1 # Ensure progress
assert size % target_size == 0
new_dims.append((target_size, stride))
size //= target_size
stride *= target_size
target_size = strides_to_sizes[stride]
assert size == target_size
new_dims.append((size, stride))
final_dims += reversed(new_dims)
return final_dims
```
###Assistant : Explode superdims to fit a known shape.
The unflattening process might mistakenly generate too few too large dimensions.
For example, ``unflatten_superdims(np.arange(n))`` always returns ``[(n, 1)]``.
This function takes a list of such contiguous super-dimensions and splits them
into smaller dimensions such that::
set(map(fst, explode_superdims(sizes, dims))) == set(sizes)
|
1,632 | def path_weight(G, path, weight):
multigraph = G.is_multigraph()
cost = 0
if not nx.is_path(G, path):
raise nx.NetworkXNoPath("path does not exist")
for node, nbr in nx.utils.pairwise(path):
if multigraph:
cost += min(v[weight] for v in G[node][nbr].values())
else:
cost += G[node][nbr][weight]
return cost
| Returns total cost associated with specified path and weight
Parameters
----------
G : graph
A NetworkX graph.
path: list
A list of node labels which defines the path to traverse
weight: string
A string indicating which edge attribute to use for path cost
Returns
-------
cost: int or float
An integer or a float representing the total cost with respect to the
specified weight of the specified path
Raises
------
NetworkXNoPath
If the specified edge does not exist.
| 78 | 39 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def path_weight(G, path, weight):
multigraph = G.is_multigraph()
cost = 0
if not nx.is_path(G, path):
raise nx.NetworkXNoPath("path does not exist")
for node, nbr in nx.utils.pairwise(path):
if multigraph:
cost += min(v[weight] for v in G[node][nbr].values())
else:
cost += G[node][nbr][weight]
return cost
```
###Assistant : Returns total cost associated with specified path and weight
Parameters
----------
G : graph
A NetworkX graph.
path: list
A list of node labels which defines the path to traverse
weight: string
A string indicating which edge attribute to use for path cost
Returns
-------
cost: int or float
An integer or a float representing the total cost with respect to the
specified weight of the specified path
Raises
------
NetworkXNoPath
If the specified edge does not exist.
|
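A minimal usage sketch for the `path_weight` helper documented in the entry above, assuming NetworkX is installed; recent NetworkX releases expose the same functionality as `nx.path_weight`, which is what the sketch calls.
```Python
import networkx as nx

# Sum the 'weight' attribute along the explicit path a -> b -> c.
G = nx.Graph()
G.add_edge("a", "b", weight=2)
G.add_edge("b", "c", weight=3)

print(nx.path_weight(G, ["a", "b", "c"], weight="weight"))  # 5
```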
1,633 | def _run_sql(self, sql, params, raw=True, output=False, latest=False):
toget = 'source_raw' if raw else 'source'
sqlfrom = "history"
if output:
sqlfrom = "history LEFT JOIN output_history USING (session, line)"
toget = "history.%s, output_history.output" % toget
if latest:
toget += ", MAX(session * 128 * 1024 + line)"
this_querry = "SELECT session, line, %s FROM %s " % (toget, sqlfrom) + sql
cur = self.db.execute(this_querry, params)
if latest:
cur = (row[:-1] for row in cur)
if output: # Regroup into 3-tuples, and parse JSON
return ((ses, lin, (inp, out)) for ses, lin, inp, out in cur)
return cur
| Prepares and runs an SQL query for the history database.
Parameters
----------
sql : str
Any filtering expressions to go after SELECT ... FROM ...
params : tuple
Parameters passed to the SQL query (to replace "?")
raw, output : bool
See :meth:`get_range`
latest : bool
Select rows with max (session, line)
Returns
-------
Tuples as :meth:`get_range`
| 57 | 96 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _run_sql(self, sql, params, raw=True, output=False, latest=False):
toget = 'source_raw' if raw else 'source'
sqlfrom = "history"
if output:
sqlfrom = "history LEFT JOIN output_history USING (session, line)"
toget = "history.%s, output_history.output" % toget
if latest:
toget += ", MAX(session * 128 * 1024 + line)"
this_querry = "SELECT session, line, %s FROM %s " % (toget, sqlfrom) + sql
cur = self.db.execute(this_querry, params)
if latest:
cur = (row[:-1] for row in cur)
if output: # Regroup into 3-tuples, and parse JSON
return ((ses, lin, (inp, out)) for ses, lin, inp, out in cur)
return cur
```
###Assistant : Prepares and runs an SQL query for the history database.
Parameters
----------
sql : str
Any filtering expressions to go after SELECT ... FROM ...
params : tuple
Parameters passed to the SQL query (to replace "?")
raw, output : bool
See :meth:`get_range`
latest : bool
Select rows with max (session, line)
Returns
-------
Tuples as :meth:`get_range`
|
1,634 | def __add__(self, other):
if isinstance(other, PathSpec):
return PathSpec(self.patterns + other.patterns)
else:
return NotImplemented
|
Combines the :attr:`PathSpec.patterns` patterns from two
:class:`PathSpec` instances.
| 8 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __add__(self, other):
if isinstance(other, PathSpec):
return PathSpec(self.patterns + other.patterns)
else:
return NotImplemented
```
###Assistant :
Combines the :attr:`PathSpec.patterns` patterns from two
:class:`PathSpec` instances.
|
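A hedged usage sketch for the `__add__` overload documented in the entry above, assuming the `pathspec` package with its `gitwildmatch` pattern factory and a version that ships this operator; the pattern strings are illustrative only.
```Python
from pathspec import PathSpec

# Combine the patterns of two PathSpec instances with the "+" operator.
spec_a = PathSpec.from_lines("gitwildmatch", ["*.pyc"])
spec_b = PathSpec.from_lines("gitwildmatch", ["build/"])
combined = spec_a + spec_b

print(combined.match_file("module.pyc"))   # True
print(combined.match_file("build/out.o"))  # True
print(combined.match_file("README.md"))    # False
```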
1,635 | def address(self):
if use_gcs_for_bootstrap():
return self._gcs_address
return self._redis_address
| Get the address for bootstrapping, e.g. the address to pass to
`ray start` or `ray.init()` to start worker nodes, that has been
converted to ip:port format.
| 26 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def address(self):
if use_gcs_for_bootstrap():
return self._gcs_address
return self._redis_address
```
###Assistant : Get the address for bootstrapping, e.g. the address to pass to
`ray start` or `ray.init()` to start worker nodes, that has been
converted to ip:port format.
|
1,636 | def similarity(self, texts=[], data={}, use_gpu=False, batch_size=1):
if use_gpu:
try:
_places = os.environ["CUDA_VISIBLE_DEVICES"]
int(_places[0])
except:
raise RuntimeError(
"Environment Variable CUDA_VISIBLE_DEVICES is not set correctly. If you wanna use gpu, please set CUDA_VISIBLE_DEVICES as cuda_device_id."
)
data = self.check_data(texts, data)
start_idx = 0
iteration = int(math.ceil(len(data['text_1']) / batch_size))
results = []
for i in range(iteration):
batch_data = {'text_1': [], 'text_2': []}
if i < (iteration - 1):
batch_data['text_1'] = data['text_1'][start_idx:(start_idx + batch_size)]
batch_data['text_2'] = data['text_2'][start_idx:(start_idx + batch_size)]
else:
batch_data['text_1'] = data['text_1'][start_idx:(start_idx + batch_size)]
batch_data['text_2'] = data['text_2'][start_idx:(start_idx + batch_size)]
start_idx = start_idx + batch_size
processed_results = preprocess(self.word_seg_module, self.vocab, batch_data, use_gpu, batch_size)
data_1, lod_1, shape_1 = self._texts_process(processed_results["text_1"])
data_2, lod_2, shape_2 = self._texts_process(processed_results["text_2"])
predictor = self.gpu_predictor if use_gpu else self.cpu_predictor
input_names = predictor.get_input_names()
input_handle = predictor.get_input_handle(input_names[0])
input_handle.copy_from_cpu(data_1)
input_handle.set_lod(lod_1)
input_handle.reshape(shape_1)
input_handle = predictor.get_input_handle(input_names[1])
input_handle.copy_from_cpu(data_2)
input_handle.set_lod(lod_2)
input_handle.reshape(shape_2)
predictor.run()
output_names = predictor.get_output_names()
output_handle = predictor.get_output_handle(output_names[1])
batch_out = output_handle.copy_to_cpu()
batch_result = postprocess(batch_out, processed_results)
results += batch_result
return results
|
Get the similarity prediction results with the texts as input
Args:
texts(list): the input texts to be predicted, where the first element is text_1(list)
and the second element is text_2(list), such as [['这道题很难'], ['这道题不简单']];
used when `data` is not provided.
data(dict): key must be 'text_1' and 'text_2', value is the texts(list) to be predicted
use_gpu(bool): whether to use gpu to predict or not
batch_size(int): the number of samples the program deals with in each batch
Returns:
results(list): the text similarity prediction results
| 75 | 149 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def similarity(self, texts=[], data={}, use_gpu=False, batch_size=1):
if use_gpu:
try:
_places = os.environ["CUDA_VISIBLE_DEVICES"]
int(_places[0])
except:
raise RuntimeError(
"Environment Variable CUDA_VISIBLE_DEVICES is not set correctly. If you wanna use gpu, please set CUDA_VISIBLE_DEVICES as cuda_device_id."
)
data = self.check_data(texts, data)
start_idx = 0
iteration = int(math.ceil(len(data['text_1']) / batch_size))
results = []
for i in range(iteration):
batch_data = {'text_1': [], 'text_2': []}
if i < (iteration - 1):
batch_data['text_1'] = data['text_1'][start_idx:(start_idx + batch_size)]
batch_data['text_2'] = data['text_2'][start_idx:(start_idx + batch_size)]
else:
batch_data['text_1'] = data['text_1'][start_idx:(start_idx + batch_size)]
batch_data['text_2'] = data['text_2'][start_idx:(start_idx + batch_size)]
start_idx = start_idx + batch_size
processed_results = preprocess(self.word_seg_module, self.vocab, batch_data, use_gpu, batch_size)
data_1, lod_1, shape_1 = self._texts_process(processed_results["text_1"])
data_2, lod_2, shape_2 = self._texts_process(processed_results["text_2"])
predictor = self.gpu_predictor if use_gpu else self.cpu_predictor
input_names = predictor.get_input_names()
input_handle = predictor.get_input_handle(input_names[0])
input_handle.copy_from_cpu(data_1)
input_handle.set_lod(lod_1)
input_handle.reshape(shape_1)
input_handle = predictor.get_input_handle(input_names[1])
input_handle.copy_from_cpu(data_2)
input_handle.set_lod(lod_2)
input_handle.reshape(shape_2)
predictor.run()
output_names = predictor.get_output_names()
output_handle = predictor.get_output_handle(output_names[1])
batch_out = output_handle.copy_to_cpu()
batch_result = postprocess(batch_out, processed_results)
results += batch_result
return results
```
###Assistant :
Get the similarity prediction results with the texts as input
Args:
texts(list): the input texts to be predicted, where the first element is text_1(list)
and the second element is text_2(list), such as [['这道题很难'], ['这道题不简单']];
used when `data` is not provided.
data(dict): key must be 'text_1' and 'text_2', value is the texts(list) to be predicted
use_gpu(bool): whether to use gpu to predict or not
batch_size(int): the number of samples the program deals with in each batch
Returns:
results(list): the text similarity prediction results
|
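The `similarity` method in the entry above is bound to a PaddleHub text-matching module; the sketch below is a hedged invocation that assumes the module is published on PaddleHub as `simnet_bow` (an assumption, swap in whichever module actually hosts this method).
```Python
import paddlehub as hub

# Assumption: "simnet_bow" is a PaddleHub text-matching module exposing similarity().
module = hub.Module(name="simnet_bow")
results = module.similarity(
    texts=[["这道题很难"], ["这道题不简单"]],
    use_gpu=False,
    batch_size=1,
)
print(results)
```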
1,637 | def cli_collect_weights(sys_argv):
parser = argparse.ArgumentParser(
description="This script loads a pretrained model " "and uses it collect weights.",
prog="ludwig collect_weights",
usage="%(prog)s [options]",
)
# ----------------
# Model parameters
# ----------------
parser.add_argument("-m", "--model_path", help="model to load", required=True)
parser.add_argument("-t", "--tensors", help="tensors to collect", nargs="+", required=True)
# -------------------------
# Output results parameters
# -------------------------
parser.add_argument(
"-od", "--output_directory", type=str, default="results", help="directory that contains the results"
)
# ------------------
# Runtime parameters
# ------------------
parser.add_argument(
"-l",
"--logging_level",
default="info",
help="the level of logging to use",
choices=["critical", "error", "warning", "info", "debug", "notset"],
)
add_contrib_callback_args(parser)
args = parser.parse_args(sys_argv)
args.callbacks = args.callbacks or []
for callback in args.callbacks:
callback.on_cmdline("collect_weights", *sys_argv)
args.logging_level = get_logging_level_registry()[args.logging_level]
logging.getLogger("ludwig").setLevel(args.logging_level)
global logger
logger = logging.getLogger("ludwig.collect")
print_ludwig("Collect Weights", LUDWIG_VERSION)
collect_weights(**vars(args))
| Command Line Interface to collecting the weights for the model.
--m: Input model that is necessary to collect the tensors; this is a
required *option*
--t: Tensors to collect
--od: Output directory of the model, defaults to results
--v: Verbose: Defines the logging level that the user will be exposed to
| 52 | 113 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def cli_collect_weights(sys_argv):
parser = argparse.ArgumentParser(
description="This script loads a pretrained model " "and uses it collect weights.",
prog="ludwig collect_weights",
usage="%(prog)s [options]",
)
# ----------------
# Model parameters
# ----------------
parser.add_argument("-m", "--model_path", help="model to load", required=True)
parser.add_argument("-t", "--tensors", help="tensors to collect", nargs="+", required=True)
# -------------------------
# Output results parameters
# -------------------------
parser.add_argument(
"-od", "--output_directory", type=str, default="results", help="directory that contains the results"
)
# ------------------
# Runtime parameters
# ------------------
parser.add_argument(
"-l",
"--logging_level",
default="info",
help="the level of logging to use",
choices=["critical", "error", "warning", "info", "debug", "notset"],
)
add_contrib_callback_args(parser)
args = parser.parse_args(sys_argv)
args.callbacks = args.callbacks or []
for callback in args.callbacks:
callback.on_cmdline("collect_weights", *sys_argv)
args.logging_level = get_logging_level_registry()[args.logging_level]
logging.getLogger("ludwig").setLevel(args.logging_level)
global logger
logger = logging.getLogger("ludwig.collect")
print_ludwig("Collect Weights", LUDWIG_VERSION)
collect_weights(**vars(args))
```
###Assistant : Command Line Interface to collecting the weights for the model.
--m: Input model that is necessary to collect the tensors; this is a
required *option*
--t: Tensors to collect
--od: Output directory of the model, defaults to results
--v: Verbose: Defines the logging level that the user will be exposed to
|
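A hedged sketch of driving the CLI entry point in the entry above directly from Python; the import path assumes the function lives in `ludwig.collect` as in recent Ludwig releases, and the model path and tensor name are hypothetical placeholders.
```Python
from ludwig.collect import cli_collect_weights  # assumed import location

# Equivalent to: ludwig collect_weights -m <model_dir> -t <tensor_name>
cli_collect_weights(
    ["--model_path", "results/experiment_run/model", "--tensors", "dense/kernel"]
)
```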
1,638 | def get_user_emails_from_group(group):
group_doc = group
if isinstance(group_doc, str):
group_doc = frappe.get_doc("Daily Work Summary Group", group)
emails = get_users_email(group_doc)
return emails
| Returns list of emails of enabled users from the given group
:param group: Daily Work Summary Group `name` | 18 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_user_emails_from_group(group):
group_doc = group
if isinstance(group_doc, str):
group_doc = frappe.get_doc("Daily Work Summary Group", group)
emails = get_users_email(group_doc)
return emails
```
###Assistant : Returns list of emails of enabled users from the given group
:param group: Daily Work Summary Group `name`
|
1,639 | def adapt_unknown_value(self, value):
if isinstance(value, datetime.datetime): # must be before date
return self.adapt_datetimefield_value(value)
elif isinstance(value, datetime.date):
return self.adapt_datefield_value(value)
elif isinstance(value, datetime.time):
return self.adapt_timefield_value(value)
elif isinstance(value, decimal.Decimal):
return self.adapt_decimalfield_value(value)
else:
return value
|
Transform a value to something compatible with the backend driver.
This method only depends on the type of the value. It's designed for
cases where the target type isn't known, such as .raw() SQL queries.
As a consequence it may not work perfectly in all circumstances.
| 46 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def adapt_unknown_value(self, value):
if isinstance(value, datetime.datetime): # must be before date
return self.adapt_datetimefield_value(value)
elif isinstance(value, datetime.date):
return self.adapt_datefield_value(value)
elif isinstance(value, datetime.time):
return self.adapt_timefield_value(value)
elif isinstance(value, decimal.Decimal):
return self.adapt_decimalfield_value(value)
else:
return value
```
###Assistant :
Transform a value to something compatible with the backend driver.
This method only depends on the type of the value. It's designed for
cases where the target type isn't known, such as .raw() SQL queries.
As a consequence it may not work perfectly in all circumstances.
|
1,640 | def test_get_settings_variable_assignment_request_context(self):
request = self.get_request(site=self.other_site)
context = Context({"request": request})
template = Template(
"{% load wagtailsettings_tags %}"
"{% get_settings as wagtail_settings %}"
"{{ wagtail_settings.tests.testsetting.title}}"
)
self.assertEqual(template.render(context), self.other_site_settings.title)
# Also check that the default 'settings' variable hasn't been set
template = Template(
"{% load wagtailsettings_tags %}"
"{% get_settings as wagtail_settings %}"
"{{ settings.tests.testsetting.title}}"
)
self.assertEqual(template.render(context), "")
|
Check that assigning the setting to a context variable with
{% get_settings as wagtail_settings %} works.
| 16 | 54 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_get_settings_variable_assignment_request_context(self):
request = self.get_request(site=self.other_site)
context = Context({"request": request})
template = Template(
"{% load wagtailsettings_tags %}"
"{% get_settings as wagtail_settings %}"
"{{ wagtail_settings.tests.testsetting.title}}"
)
self.assertEqual(template.render(context), self.other_site_settings.title)
# Also check that the default 'settings' variable hasn't been set
template = Template(
"{% load wagtailsettings_tags %}"
"{% get_settings as wagtail_settings %}"
"{{ settings.tests.testsetting.title}}"
)
self.assertEqual(template.render(context), "")
```
###Assistant :
Check that assigning the setting to a context variable with
{% get_settings as wagtail_settings %} works.
|
1,641 | def get_dataset(cls, path, engine, storage_options):
if engine == "auto":
# We follow in concordance with pandas
engine_classes = [PyArrowDataset, FastParquetDataset]
error_msgs = ""
for engine_class in engine_classes:
try:
return engine_class(path, storage_options)
except ImportError as err:
error_msgs += "\n - " + str(err)
raise ImportError(
"Unable to find a usable engine; "
+ "tried using: 'pyarrow', 'fastparquet'.\n"
+ "A suitable version of "
+ "pyarrow or fastparquet is required for parquet "
+ "support.\n"
+ "Trying to import the above resulted in these errors:"
+ f"{error_msgs}"
)
elif engine == "pyarrow":
return PyArrowDataset(path, storage_options)
elif engine == "fastparquet":
return FastParquetDataset(path, storage_options)
else:
raise ValueError("engine must be one of 'pyarrow', 'fastparquet'")
|
Retrieve Parquet engine specific Dataset implementation.
Parameters
----------
path : str, path object or file-like object
The filepath of the parquet file in local filesystem or hdfs.
engine : str
Parquet library to use (only 'PyArrow' is supported for now).
storage_options : dict
Parameters for specific storage engine.
Returns
-------
Dataset
Either a PyArrowDataset or FastParquetDataset object.
| 57 | 109 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_dataset(cls, path, engine, storage_options):
if engine == "auto":
# We follow in concordance with pandas
engine_classes = [PyArrowDataset, FastParquetDataset]
error_msgs = ""
for engine_class in engine_classes:
try:
return engine_class(path, storage_options)
except ImportError as err:
error_msgs += "\n - " + str(err)
raise ImportError(
"Unable to find a usable engine; "
+ "tried using: 'pyarrow', 'fastparquet'.\n"
+ "A suitable version of "
+ "pyarrow or fastparquet is required for parquet "
+ "support.\n"
+ "Trying to import the above resulted in these errors:"
+ f"{error_msgs}"
)
elif engine == "pyarrow":
return PyArrowDataset(path, storage_options)
elif engine == "fastparquet":
return FastParquetDataset(path, storage_options)
else:
raise ValueError("engine must be one of 'pyarrow', 'fastparquet'")
```
###Assistant :
Retrieve Parquet engine specific Dataset implementation.
Parameters
----------
path : str, path object or file-like object
The filepath of the parquet file in local filesystem or hdfs.
engine : str
Parquet library to use (only 'PyArrow' is supported for now).
storage_options : dict
Parameters for specific storage engine.
Returns
-------
Dataset
Either a PyArrowDataset or FastParquetDataset object.
|
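The `get_dataset` classmethod above backs Modin's Parquet reader; a hedged end-user sketch follows, assuming `modin.pandas` is installed and that a local `data.parquet` file exists (both are assumptions).
```Python
import modin.pandas as pd

# engine="auto" mirrors the dispatch above: try pyarrow first, then fastparquet.
df = pd.read_parquet("data.parquet", engine="auto")
print(df.head())
```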
1,642 | def test_causal_lm_model_past_with_attn_mask(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
self.model_tester.create_and_check_causal_lm_model_past_with_attn_mask(*config_and_inputs)
| Test the causal LM model with `past_key_values` and `attention_mask` | 9 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_causal_lm_model_past_with_attn_mask(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
self.model_tester.create_and_check_causal_lm_model_past_with_attn_mask(*config_and_inputs)
```
###Assistant : Test the causal LM model with `past_key_values` and `attention_mask`
|
1,643 | def _whatsnd(data):
hdr = data[:512]
fakefile = BytesIO(hdr)
for testfn in sndhdr.tests:
res = testfn(hdr, fakefile)
if res is not None:
return _sndhdr_MIMEmap.get(res[0])
return None | Try to identify a sound file type.
sndhdr.what() has a pretty cruddy interface, unfortunately. This is why
we re-do it here. It would be easier to reverse engineer the Unix 'file'
command and use the standard 'magic' file, as shipped with a modern Unix.
| 44 | 25 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _whatsnd(data):
hdr = data[:512]
fakefile = BytesIO(hdr)
for testfn in sndhdr.tests:
res = testfn(hdr, fakefile)
if res is not None:
return _sndhdr_MIMEmap.get(res[0])
return None
```
###Assistant : Try to identify a sound file type.
sndhdr.what() has a pretty cruddy interface, unfortunately. This is why
we re-do it here. It would be easier to reverse engineer the Unix 'file'
command and use the standard 'magic' file, as shipped with a modern Unix.
|
1,644 | async def test_protect_loop_debugger_sleep(caplog):
block_async_io.enable()
with patch(
"homeassistant.util.async_.extract_stack",
return_value=[
Mock(
filename="/home/paulus/homeassistant/.venv/blah/pydevd.py",
lineno="23",
line="do_something()",
),
Mock(
filename="/home/paulus/homeassistant/util/async.py",
lineno="123",
line="protected_loop_func",
),
Mock(
filename="/home/paulus/homeassistant/util/async.py",
lineno="123",
line="check_loop()",
),
],
):
time.sleep(0)
assert "Detected blocking call inside the event loop" not in caplog.text
| Test time.sleep injected by the debugger is not reported. | 9 | 37 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_protect_loop_debugger_sleep(caplog):
block_async_io.enable()
with patch(
"homeassistant.util.async_.extract_stack",
return_value=[
Mock(
filename="/home/paulus/homeassistant/.venv/blah/pydevd.py",
lineno="23",
line="do_something()",
),
Mock(
filename="/home/paulus/homeassistant/util/async.py",
lineno="123",
line="protected_loop_func",
),
Mock(
filename="/home/paulus/homeassistant/util/async.py",
lineno="123",
line="check_loop()",
),
],
):
time.sleep(0)
assert "Detected blocking call inside the event loop" not in caplog.text
```
###Assistant : Test time.sleep injected by the debugger is not reported.
|
1,645 | def update_worker_pea_args(self):
self.peas_args['peas'] = self._set_peas_args(self.args)
| Update args of all its worker peas based on Pod args. Does not touch head and tail | 17 | 5 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def update_worker_pea_args(self):
self.peas_args['peas'] = self._set_peas_args(self.args)
```
###Assistant : Update args of all its worker peas based on Pod args. Does not touch head and tail
|
1,646 | def prevprime(n):
n = _as_int_ceiling(n)
if n < 3:
raise ValueError("no preceding primes")
if n < 8:
return {3: 2, 4: 3, 5: 3, 6: 5, 7: 5}[n]
if n <= sieve._list[-1]:
l, u = sieve.search(n)
if l == u:
return sieve[l-1]
else:
return sieve[l]
nn = 6*(n//6)
if n - nn <= 1:
n = nn - 1
if isprime(n):
return n
n -= 4
else:
n = nn + 1
while 1:
if isprime(n):
return n
n -= 2
if isprime(n):
return n
n -= 4
| Return the largest prime smaller than n.
Notes
=====
Potential primes are located at 6*j +/- 1. This
property is used during searching.
>>> from sympy import prevprime
>>> [(i, prevprime(i)) for i in range(10, 15)]
[(10, 7), (11, 7), (12, 11), (13, 11), (14, 13)]
See Also
========
nextprime : Return the ith prime greater than n
primerange : Generates all primes in a given range
| 67 | 88 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def prevprime(n):
n = _as_int_ceiling(n)
if n < 3:
raise ValueError("no preceding primes")
if n < 8:
return {3: 2, 4: 3, 5: 3, 6: 5, 7: 5}[n]
if n <= sieve._list[-1]:
l, u = sieve.search(n)
if l == u:
return sieve[l-1]
else:
return sieve[l]
nn = 6*(n//6)
if n - nn <= 1:
n = nn - 1
if isprime(n):
return n
n -= 4
else:
n = nn + 1
while 1:
if isprime(n):
return n
n -= 2
if isprime(n):
return n
n -= 4
```
###Assistant : Return the largest prime smaller than n.
Notes
=====
Potential primes are located at 6*j +/- 1. This
property is used during searching.
>>> from sympy import prevprime
>>> [(i, prevprime(i)) for i in range(10, 15)]
[(10, 7), (11, 7), (12, 11), (13, 11), (14, 13)]
See Also
========
nextprime : Return the ith prime greater than n
primerange : Generates all primes in a given range
|
1,647 | def test_duplicate_statistics_handle_integrity_error(hass_recorder, caplog):
hass = hass_recorder()
wait_recording_done(hass)
period1 = dt_util.as_utc(dt_util.parse_datetime("2021-09-01 00:00:00"))
period2 = dt_util.as_utc(dt_util.parse_datetime("2021-09-30 23:00:00"))
external_energy_metadata_1 = {
"has_mean": False,
"has_sum": True,
"name": "Total imported energy",
"source": "test",
"state_unit_of_measurement": "kWh",
"statistic_id": "test:total_energy_import_tariff_1",
"unit_of_measurement": "kWh",
}
external_energy_statistics_1 = [
{
"start": period1,
"last_reset": None,
"state": 3,
"sum": 5,
},
]
external_energy_statistics_2 = [
{
"start": period2,
"last_reset": None,
"state": 3,
"sum": 6,
}
]
with patch.object(
statistics, "_statistics_exists", return_value=False
), patch.object(
statistics, "_insert_statistics", wraps=statistics._insert_statistics
) as insert_statistics_mock:
async_add_external_statistics(
hass, external_energy_metadata_1, external_energy_statistics_1
)
async_add_external_statistics(
hass, external_energy_metadata_1, external_energy_statistics_1
)
async_add_external_statistics(
hass, external_energy_metadata_1, external_energy_statistics_2
)
wait_recording_done(hass)
assert insert_statistics_mock.call_count == 3
with session_scope(hass=hass) as session:
tmp = session.query(recorder.db_schema.Statistics).all()
assert len(tmp) == 2
assert "Blocked attempt to insert duplicated statistic rows" in caplog.text
| Test the recorder does not blow up if statistics is duplicated. | 11 | 117 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_duplicate_statistics_handle_integrity_error(hass_recorder, caplog):
hass = hass_recorder()
wait_recording_done(hass)
period1 = dt_util.as_utc(dt_util.parse_datetime("2021-09-01 00:00:00"))
period2 = dt_util.as_utc(dt_util.parse_datetime("2021-09-30 23:00:00"))
external_energy_metadata_1 = {
"has_mean": False,
"has_sum": True,
"name": "Total imported energy",
"source": "test",
"state_unit_of_measurement": "kWh",
"statistic_id": "test:total_energy_import_tariff_1",
"unit_of_measurement": "kWh",
}
external_energy_statistics_1 = [
{
"start": period1,
"last_reset": None,
"state": 3,
"sum": 5,
},
]
external_energy_statistics_2 = [
{
"start": period2,
"last_reset": None,
"state": 3,
"sum": 6,
}
]
with patch.object(
statistics, "_statistics_exists", return_value=False
), patch.object(
statistics, "_insert_statistics", wraps=statistics._insert_statistics
) as insert_statistics_mock:
async_add_external_statistics(
hass, external_energy_metadata_1, external_energy_statistics_1
)
async_add_external_statistics(
hass, external_energy_metadata_1, external_energy_statistics_1
)
async_add_external_statistics(
hass, external_energy_metadata_1, external_energy_statistics_2
)
wait_recording_done(hass)
assert insert_statistics_mock.call_count == 3
with session_scope(hass=hass) as session:
tmp = session.query(recorder.db_schema.Statistics).all()
assert len(tmp) == 2
assert "Blocked attempt to insert duplicated statistic rows" in caplog.text
```
###Assistant : Test the recorder does not blow up if statistics is duplicated.
|
1,648 | def testRequestResourcesRaceConditionWithResourceDemands(self):
config = copy.deepcopy(MULTI_WORKER_CLUSTER)
config["available_node_types"].update(
{
"empty_node": {
"node_config": {},
"resources": {"CPU": 2, "GPU": 1},
"max_workers": 1,
},
"def_worker": {
"node_config": {},
"resources": {"CPU": 2, "GPU": 1, "WORKER": 1},
"max_workers": 3,
},
}
)
config["idle_timeout_minutes"] = 0
config_path = self.write_config(config)
self.provider = MockProvider()
self.provider.create_node(
{},
{
TAG_RAY_NODE_KIND: "head",
TAG_RAY_NODE_STATUS: STATUS_UP_TO_DATE,
TAG_RAY_USER_NODE_TYPE: "empty_node",
},
1,
)
runner = MockProcessRunner()
runner.respond_to_call("json .Config.Env", ["[]" for i in range(2)])
lm = LoadMetrics()
autoscaler = MockAutoscaler(
config_path,
lm,
MockNodeInfoStub(),
max_failures=0,
process_runner=runner,
update_interval_s=0,
)
lm.update(
"127.0.0.0",
mock_raylet_id(),
{"CPU": 2, "GPU": 1},
{"CPU": 2},
{},
waiting_bundles=[{"CPU": 2}],
)
autoscaler.load_metrics.set_resource_requests([{"CPU": 2, "GPU": 1}] * 2)
autoscaler.update()
# 1 head, 1 worker.
self.waitForNodes(2)
lm.update(
"127.0.0.0",
mock_raylet_id(),
{"CPU": 2, "GPU": 1},
{"CPU": 2},
{},
waiting_bundles=[{"CPU": 2}],
)
# make sure it stays consistent.
for _ in range(10):
autoscaler.update()
self.waitForNodes(2)
| Test request_resources() with resource_demands.
Tests when request_resources() is called simultaneously with resource
demands in multiple orders.
| 16 | 130 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def testRequestResourcesRaceConditionWithResourceDemands(self):
config = copy.deepcopy(MULTI_WORKER_CLUSTER)
config["available_node_types"].update(
{
"empty_node": {
"node_config": {},
"resources": {"CPU": 2, "GPU": 1},
"max_workers": 1,
},
"def_worker": {
"node_config": {},
"resources": {"CPU": 2, "GPU": 1, "WORKER": 1},
"max_workers": 3,
},
}
)
config["idle_timeout_minutes"] = 0
config_path = self.write_config(config)
self.provider = MockProvider()
self.provider.create_node(
{},
{
TAG_RAY_NODE_KIND: "head",
TAG_RAY_NODE_STATUS: STATUS_UP_TO_DATE,
TAG_RAY_USER_NODE_TYPE: "empty_node",
},
1,
)
runner = MockProcessRunner()
runner.respond_to_call("json .Config.Env", ["[]" for i in range(2)])
lm = LoadMetrics()
autoscaler = MockAutoscaler(
config_path,
lm,
MockNodeInfoStub(),
max_failures=0,
process_runner=runner,
update_interval_s=0,
)
lm.update(
"127.0.0.0",
mock_raylet_id(),
{"CPU": 2, "GPU": 1},
{"CPU": 2},
{},
waiting_bundles=[{"CPU": 2}],
)
autoscaler.load_metrics.set_resource_requests([{"CPU": 2, "GPU": 1}] * 2)
autoscaler.update()
# 1 head, 1 worker.
self.waitForNodes(2)
lm.update(
"127.0.0.0",
mock_raylet_id(),
{"CPU": 2, "GPU": 1},
{"CPU": 2},
{},
waiting_bundles=[{"CPU": 2}],
)
# make sure it stays consistent.
for _ in range(10):
autoscaler.update()
self.waitForNodes(2)
```
###Assistant : Test request_resources() with resource_demands.
Tests when request_resources() is called simultaneously with resource
demands in multiple orders.
|
1,649 | def test_sitemap_unpublished_titles(self):
sitemap = CMSSitemap()
locations = []
urlset = sitemap.get_urls()
unpublished_titles = set()
for item in urlset:
locations.append(item['location'])
for page in Page.objects.drafts():
if page.get_public_object():
set1 = set(page.get_public_object().title_set.values_list('path', flat=True))
set2 = set(page.title_set.values_list('path', flat=True))
unpublished_titles.update(set2.difference(set1))
else:
unpublished_titles.update(page.title_set.values_list('path', flat=True))
for path in unpublished_titles:
title = Title.objects.get(path=path)
if title.path:
url = f'http://example.com/{title.language}/{title.path}/'
else:
url = f'http://example.com/{title.language}/{title.path}'
self.assertFalse(url in locations)
|
Check that titles attached to unpublished pages are not in the urlset.
As titles are 'published' depending on their attached page, we create a
set of unpublished titles by checking titles attached to the draft and
public version of each page
| 41 | 56 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_sitemap_unpublished_titles(self):
sitemap = CMSSitemap()
locations = []
urlset = sitemap.get_urls()
unpublished_titles = set()
for item in urlset:
locations.append(item['location'])
for page in Page.objects.drafts():
if page.get_public_object():
set1 = set(page.get_public_object().title_set.values_list('path', flat=True))
set2 = set(page.title_set.values_list('path', flat=True))
unpublished_titles.update(set2.difference(set1))
else:
unpublished_titles.update(page.title_set.values_list('path', flat=True))
for path in unpublished_titles:
title = Title.objects.get(path=path)
if title.path:
url = f'http://example.com/{title.language}/{title.path}/'
else:
url = f'http://example.com/{title.language}/{title.path}'
self.assertFalse(url in locations)
```
###Assistant :
Check that titles attached to unpublished pages are not in the urlset.
As titles are 'published' depending on their attached page, we create a
set of unpublished titles by checking titles attached to the draft and
public version of each page
|
1,650 | def mixin_gateway_parser(parser):
gp = add_arg_group(parser, title='Gateway')
_add_host(gp)
_add_proxy(gp)
gp.add_argument(
'--uses',
type=str,
default=None,
# TODO: add Jina Hub Gateway
help=,
)
gp.add_argument(
'--uses-with',
action=KVAppendAction,
metavar='KEY: VALUE',
nargs='*',
help=,
)
gp.add_argument(
'--py-modules',
type=str,
nargs='*',
metavar='PATH',
help=,
)
mixin_base_runtime_parser(gp)
gp.add_argument(
'--port-expose',
type=int,
dest='port',
default=helper.random_port(),
help='The port that the gateway exposes for clients for GRPC connections.',
)
parser.add_argument(
'--graph-description',
type=str,
help='Routing graph for the gateway',
default='{}',
)
parser.add_argument(
'--graph-conditions',
type=str,
help='Dictionary stating which filtering conditions each Executor in the graph requires to receive Documents.',
default='{}',
)
parser.add_argument(
'--deployments-addresses',
type=str,
help='dictionary JSON with the input addresses of each Deployment',
default='{}',
)
parser.add_argument(
'--deployments-disable-reduce',
type=str,
help='list JSON disabling the built-in merging mechanism for each Deployment listed',
default='[]',
)
gp.add_argument(
'--compression',
choices=['NoCompression', 'Deflate', 'Gzip'],
help='The compression mechanism used when sending requests from the Head to the WorkerRuntimes. For more details, '
'check https://grpc.github.io/grpc/python/grpc.html#compression.',
)
gp.add_argument(
'--timeout-send',
type=int,
default=None,
help='The timeout in milliseconds used when sending data requests to Executors, -1 means no timeout, disabled by default',
)
| Add the options for remote expose at the Gateway
:param parser: the parser
The config of the gateway, it could be one of the followings:
* the string literal of a Gateway class name
* a Gateway YAML file (.yml, .yaml, .jaml)
* a docker image (must start with `docker://`)
* the string literal of a YAML config (must start with `!` or `jtype: `)
* the string literal of a JSON config
When use it under Python, one can use the following values additionally:
- a Python dict that represents the config
- a text file stream has `.read()` interface
Dictionary of keyword arguments that will override the `with` configuration in `uses`
The customized python modules need to be imported before loading the gateway
Note that the recommended way is to only import a single module - a simple python file, if your
gateway can be defined in a single file, or an ``__init__.py`` file if you have multiple files,
which should be structured as a python package.
| 169 | 160 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def mixin_gateway_parser(parser):
gp = add_arg_group(parser, title='Gateway')
_add_host(gp)
_add_proxy(gp)
gp.add_argument(
'--uses',
type=str,
default=None,
# TODO: add Jina Hub Gateway
help=,
)
gp.add_argument(
'--uses-with',
action=KVAppendAction,
metavar='KEY: VALUE',
nargs='*',
help=,
)
gp.add_argument(
'--py-modules',
type=str,
nargs='*',
metavar='PATH',
help=,
)
mixin_base_runtime_parser(gp)
gp.add_argument(
'--port-expose',
type=int,
dest='port',
default=helper.random_port(),
help='The port that the gateway exposes for clients for GRPC connections.',
)
parser.add_argument(
'--graph-description',
type=str,
help='Routing graph for the gateway',
default='{}',
)
parser.add_argument(
'--graph-conditions',
type=str,
help='Dictionary stating which filtering conditions each Executor in the graph requires to receive Documents.',
default='{}',
)
parser.add_argument(
'--deployments-addresses',
type=str,
help='dictionary JSON with the input addresses of each Deployment',
default='{}',
)
parser.add_argument(
'--deployments-disable-reduce',
type=str,
help='list JSON disabling the built-in merging mechanism for each Deployment listed',
default='[]',
)
gp.add_argument(
'--compression',
choices=['NoCompression', 'Deflate', 'Gzip'],
help='The compression mechanism used when sending requests from the Head to the WorkerRuntimes. For more details, '
'check https://grpc.github.io/grpc/python/grpc.html#compression.',
)
gp.add_argument(
'--timeout-send',
type=int,
default=None,
help='The timeout in milliseconds used when sending data requests to Executors, -1 means no timeout, disabled by default',
)
```
###Assistant : Add the options for remote expose at the Gateway
:param parser: the parser
The config of the gateway, it could be one of the followings:
* the string literal of a Gateway class name
* a Gateway YAML file (.yml, .yaml, .jaml)
* a docker image (must start with `docker://`)
* the string literal of a YAML config (must start with `!` or `jtype: `)
* the string literal of a JSON config
When use it under Python, one can use the following values additionally:
- a Python dict that represents the config
- a text file stream has `.read()` interface
Dictionary of keyword arguments that will override the `with` configuration in `uses`
The customized python modules need to be imported before loading the gateway
Note that the recommended way is to only import a single module - a simple python file, if your
gateway can be defined in a single file, or an ``__init__.py`` file if you have multiple files,
which should be structured as a python package.
|
1,651 | def _format_changelog(self, changelog):
if not changelog:
return changelog
new_changelog = []
for line in changelog.strip().split('\n'):
line = line.strip()
if line[0] == '*':
new_changelog.extend(['', line])
elif line[0] == '-':
new_changelog.append(line)
else:
new_changelog.append(' ' + line)
# strip trailing newline inserted by first changelog entry
if not new_changelog[0]:
del new_changelog[0]
return new_changelog
| Format the changelog correctly and convert it to a list of strings
| 12 | 50 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _format_changelog(self, changelog):
if not changelog:
return changelog
new_changelog = []
for line in changelog.strip().split('\n'):
line = line.strip()
if line[0] == '*':
new_changelog.extend(['', line])
elif line[0] == '-':
new_changelog.append(line)
else:
new_changelog.append(' ' + line)
# strip trailing newline inserted by first changelog entry
if not new_changelog[0]:
del new_changelog[0]
return new_changelog
```
###Assistant : Format the changelog correctly and convert it to a list of strings
|
1,652 | def _get_time(self) -> float:
# N.B. We could remove this method and always call `self._timer.get_time()` internally,
# but it's handy to have in mocking situations
return self._timer.get_time()
| Get the current wall clock time, via the internal Timer. | 10 | 27 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_time(self) -> float:
# N.B. We could remove this method and always call `self._timer.get_time()` internally,
# but it's handy to have in mocking situations
return self._timer.get_time()
```
###Assistant : Get the current wall clock time, via the internal Timer.
|
1,653 | def _generate(self, pset, min_, max_, condition, type_=None):
if type_ is None:
type_ = pset.ret
expr = []
height = np.random.randint(min_, max_)
stack = [(0, type_)]
while len(stack) != 0:
depth, type_ = stack.pop()
# We've added a type_ parameter to the condition function
if condition(height, depth, type_):
try:
term = np.random.choice(pset.terminals[type_])
except IndexError:
_, _, traceback = sys.exc_info()
raise IndexError(
"The gp.generate function tried to add "
"a terminal of type {}, but there is"
"none available. {}".format(type_, traceback)
)
if inspect.isclass(term):
term = term()
expr.append(term)
else:
try:
prim = np.random.choice(pset.primitives[type_])
except IndexError:
_, _, traceback = sys.exc_info()
raise IndexError(
"The gp.generate function tried to add "
"a primitive of type {}, but there is"
"none available. {}".format(type_, traceback)
)
expr.append(prim)
for arg in reversed(prim.args):
stack.append((depth + 1, arg))
return expr
| Generate a Tree as a list of lists.
The tree is built from the root to the leaves, and it stops growing when
the condition is fulfilled.
Parameters
----------
pset: PrimitiveSetTyped
Primitive set from which primitives are selected.
min_: int
Minimum height of the produced trees.
max_: int
Maximum height of the produced trees.
condition: function
The condition is a function that takes two arguments,
the height of the tree to build and the current
depth in the tree.
type_: class
The type that should return the tree when called, when
:obj:None (default) no return type is enforced.
Returns
-------
individual: list
A grown tree with leaves at possibly different depths
depending on the condition function.
| 116 | 131 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _generate(self, pset, min_, max_, condition, type_=None):
if type_ is None:
type_ = pset.ret
expr = []
height = np.random.randint(min_, max_)
stack = [(0, type_)]
while len(stack) != 0:
depth, type_ = stack.pop()
# We've added a type_ parameter to the condition function
if condition(height, depth, type_):
try:
term = np.random.choice(pset.terminals[type_])
except IndexError:
_, _, traceback = sys.exc_info()
raise IndexError(
"The gp.generate function tried to add "
"a terminal of type {}, but there is"
"none available. {}".format(type_, traceback)
)
if inspect.isclass(term):
term = term()
expr.append(term)
else:
try:
prim = np.random.choice(pset.primitives[type_])
except IndexError:
_, _, traceback = sys.exc_info()
raise IndexError(
"The gp.generate function tried to add "
"a primitive of type {}, but there is"
"none available. {}".format(type_, traceback)
)
expr.append(prim)
for arg in reversed(prim.args):
stack.append((depth + 1, arg))
return expr
```
###Assistant : Generate a Tree as a list of lists.
The tree is built from the root to the leaves, and it stops growing when
the condition is fulfilled.
Parameters
----------
pset: PrimitiveSetTyped
Primitive set from which primitives are selected.
min_: int
Minimum height of the produced trees.
max_: int
Maximum height of the produced trees.
condition: function
The condition is a function that takes two arguments,
the height of the tree to build and the current
depth in the tree.
type_: class
The type that should return the tree when called, when
:obj:None (default) no return type is enforced.
Returns
-------
individual: list
A grown tree with leaves at possibly different depths
depending on the condition function.
|
1,654 | def test_ohe_infrequent_multiple_categories_dtypes():
pd = pytest.importorskip("pandas")
X = pd.DataFrame(
{
"str": ["a", "f", "c", "f", "f", "a", "c", "b", "b"],
"int": [5, 3, 0, 10, 10, 12, 0, 3, 5],
},
columns=["str", "int"],
)
ohe = OneHotEncoder(
categories="auto", max_categories=3, handle_unknown="infrequent_if_exist"
)
# X[:, 0] 'a', 'b', 'c' have the same frequency. 'a' and 'b' will be
# considered infrequent because they are greater
# X[:, 1] 0, 3, 5, 10 has frequency 2 and 12 has frequency 1.
# 0, 3, 12 will be considered infrequent
X_trans = ohe.fit_transform(X).toarray()
assert_array_equal(ohe.infrequent_categories_[0], ["a", "b"])
assert_array_equal(ohe.infrequent_categories_[1], [0, 3, 12])
expected = [
[0, 0, 1, 1, 0, 0],
[0, 1, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 1],
[0, 1, 0, 0, 1, 0],
[0, 1, 0, 0, 1, 0],
[0, 0, 1, 0, 0, 1],
[1, 0, 0, 0, 0, 1],
[0, 0, 1, 0, 0, 1],
[0, 0, 1, 1, 0, 0],
]
assert_allclose(expected, X_trans)
X_test = pd.DataFrame({"str": ["b", "f"], "int": [14, 12]}, columns=["str", "int"])
expected = [[0, 0, 1, 0, 0, 1], [0, 1, 0, 0, 0, 1]]
X_test_trans = ohe.transform(X_test)
assert_allclose(expected, X_test_trans.toarray())
X_inv = ohe.inverse_transform(X_test_trans)
expected_inv = np.array(
[["infrequent_sklearn", "infrequent_sklearn"], ["f", "infrequent_sklearn"]],
dtype=object,
)
assert_array_equal(expected_inv, X_inv)
# only infrequent or known categories
X_test = pd.DataFrame({"str": ["c", "b"], "int": [12, 5]}, columns=["str", "int"])
X_test_trans = ohe.transform(X_test).toarray()
expected = [[1, 0, 0, 0, 0, 1], [0, 0, 1, 1, 0, 0]]
assert_allclose(expected, X_test_trans)
X_inv = ohe.inverse_transform(X_test_trans)
expected_inv = np.array(
[["c", "infrequent_sklearn"], ["infrequent_sklearn", 5]], dtype=object
)
assert_array_equal(expected_inv, X_inv)
@pytest.mark.parametrize("kwargs", [{"min_frequency": 21, "max_categories": 1}]) | Test infrequent categories with a pandas dataframe with multiple dtypes. | 10 | 252 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_ohe_infrequent_multiple_categories_dtypes():
pd = pytest.importorskip("pandas")
X = pd.DataFrame(
{
"str": ["a", "f", "c", "f", "f", "a", "c", "b", "b"],
"int": [5, 3, 0, 10, 10, 12, 0, 3, 5],
},
columns=["str", "int"],
)
ohe = OneHotEncoder(
categories="auto", max_categories=3, handle_unknown="infrequent_if_exist"
)
# X[:, 0] 'a', 'b', 'c' have the same frequency. 'a' and 'b' will be
# considered infrequent because they are greater
# X[:, 1] 0, 3, 5, 10 has frequency 2 and 12 has frequency 1.
# 0, 3, 12 will be considered infrequent
X_trans = ohe.fit_transform(X).toarray()
assert_array_equal(ohe.infrequent_categories_[0], ["a", "b"])
assert_array_equal(ohe.infrequent_categories_[1], [0, 3, 12])
expected = [
[0, 0, 1, 1, 0, 0],
[0, 1, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 1],
[0, 1, 0, 0, 1, 0],
[0, 1, 0, 0, 1, 0],
[0, 0, 1, 0, 0, 1],
[1, 0, 0, 0, 0, 1],
[0, 0, 1, 0, 0, 1],
[0, 0, 1, 1, 0, 0],
]
assert_allclose(expected, X_trans)
X_test = pd.DataFrame({"str": ["b", "f"], "int": [14, 12]}, columns=["str", "int"])
expected = [[0, 0, 1, 0, 0, 1], [0, 1, 0, 0, 0, 1]]
X_test_trans = ohe.transform(X_test)
assert_allclose(expected, X_test_trans.toarray())
X_inv = ohe.inverse_transform(X_test_trans)
expected_inv = np.array(
[["infrequent_sklearn", "infrequent_sklearn"], ["f", "infrequent_sklearn"]],
dtype=object,
)
assert_array_equal(expected_inv, X_inv)
# only infrequent or known categories
X_test = pd.DataFrame({"str": ["c", "b"], "int": [12, 5]}, columns=["str", "int"])
X_test_trans = ohe.transform(X_test).toarray()
expected = [[1, 0, 0, 0, 0, 1], [0, 0, 1, 1, 0, 0]]
assert_allclose(expected, X_test_trans)
X_inv = ohe.inverse_transform(X_test_trans)
expected_inv = np.array(
[["c", "infrequent_sklearn"], ["infrequent_sklearn", 5]], dtype=object
)
assert_array_equal(expected_inv, X_inv)
@pytest.mark.parametrize("kwargs", [{"min_frequency": 21, "max_categories": 1}])
```
###Assistant : Test infrequent categories with a pandas dataframe with multiple dtypes.
|
1,655 | def _set_random_id(self):
if getattr(self, "persistence", False):
raise RuntimeError(
)
if "dash_snapshots" in sys.modules:
raise RuntimeError(
)
if not hasattr(self, "id"):
v = str(uuid.UUID(int=rd.randint(0, 2 ** 128)))
setattr(self, "id", v)
return getattr(self, "id")
|
Attempting to use an auto-generated ID with the `persistence` prop.
This is prohibited because persistence is tied to component IDs and
auto-generated IDs can easily change.
Please assign an explicit ID to this component.
Attempting to use an auto-generated ID in an app with `dash_snapshots`.
This is prohibited because snapshots saves the whole app layout,
including component IDs, and auto-generated IDs can easily change.
Callbacks referencing the new IDs will not work with old snapshots.
Please assign an explicit ID to this component.
| 82 | 32 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _set_random_id(self):
if getattr(self, "persistence", False):
raise RuntimeError(
)
if "dash_snapshots" in sys.modules:
raise RuntimeError(
)
if not hasattr(self, "id"):
v = str(uuid.UUID(int=rd.randint(0, 2 ** 128)))
setattr(self, "id", v)
return getattr(self, "id")
```
###Assistant :
Attempting to use an auto-generated ID with the `persistence` prop.
This is prohibited because persistence is tied to component IDs and
auto-generated IDs can easily change.
Please assign an explicit ID to this component.
Attempting to use an auto-generated ID in an app with `dash_snapshots`.
This is prohibited because snapshots saves the whole app layout,
including component IDs, and auto-generated IDs can easily change.
Callbacks referencing the new IDs will not work with old snapshots.
Please assign an explicit ID to this component.
|
1,656 | def final():
head = []
head.append(("layernorm.weight", "norm.weight"))
head.append(("layernorm.bias", "norm.bias"))
head.append(("classifier.weight", "head.weight"))
head.append(("classifier.bias", "head.bias"))
return head
|
Function helps in renaming final classification layer
| 7 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def final():
head = []
head.append(("layernorm.weight", "norm.weight"))
head.append(("layernorm.bias", "norm.bias"))
head.append(("classifier.weight", "head.weight"))
head.append(("classifier.bias", "head.bias"))
return head
```
###Assistant :
Function helps in renaming final classification layer
|
1,657 | def test_delete_index(self, ds, documents):
ds.write_documents(documents, index="custom_index")
assert ds.get_document_count(index="custom_index") == len(documents)
ds.delete_index(index="custom_index")
assert ds.get_document_count(index="custom_index") == 0
| Contrary to other Document Stores, SQLDocumentStore doesn't raise if the index is empty | 13 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_delete_index(self, ds, documents):
ds.write_documents(documents, index="custom_index")
assert ds.get_document_count(index="custom_index") == len(documents)
ds.delete_index(index="custom_index")
assert ds.get_document_count(index="custom_index") == 0
```
###Assistant : Contrary to other Document Stores, SQLDocumentStore doesn't raise if the index is empty
|
1,658 | def add_holidays(events, start, end, employee, company):
applicable_holiday_list = get_holiday_list_for_employee(employee, company)
if not applicable_holiday_list:
return
for holiday in frappe.db.sql(
,
(applicable_holiday_list, start, end),
as_dict=True,
):
events.append(
{
"doctype": "Holiday",
"from_date": holiday.holiday_date,
"to_date": holiday.holiday_date,
"title": _("Holiday") + ": " + cstr(holiday.description),
"name": holiday.name,
}
)
@frappe.whitelist() | select name, holiday_date, description
from `tabHoliday` where parent=%s and holiday_date between %s and %s | 14 | 44 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def add_holidays(events, start, end, employee, company):
applicable_holiday_list = get_holiday_list_for_employee(employee, company)
if not applicable_holiday_list:
return
for holiday in frappe.db.sql(
,
(applicable_holiday_list, start, end),
as_dict=True,
):
events.append(
{
"doctype": "Holiday",
"from_date": holiday.holiday_date,
"to_date": holiday.holiday_date,
"title": _("Holiday") + ": " + cstr(holiday.description),
"name": holiday.name,
}
)
@frappe.whitelist()
```
###Assistant : select name, holiday_date, description
from `tabHoliday` where parent=%s and holiday_date between %s and %s
|
1,659 | def get_video_input_devices_names() -> List[str]:
# based on https://docs.microsoft.com/ru-ru/windows/win32/directshow/selecting-a-capture-device
names = []
sys_dev_enum = strmif.ICreateDevEnum()
if ole32.CoCreateInstance(uuids.CLSID_SystemDeviceEnum, None, ole32.CLSCTX.CLSCTX_INPROC_SERVER, strmif.ICreateDevEnum.IID, sys_dev_enum) == wintypes.ERROR.SUCCESS:
pEnumCat = objidl.IEnumMoniker()
if sys_dev_enum.CreateClassEnumerator(uuids.CLSID_VideoInputDeviceCategory, pEnumCat, 0) == wintypes.ERROR.SUCCESS:
moniker = objidl.IMoniker()
while pEnumCat.Next(1, moniker, None) == wintypes.ERROR.SUCCESS:
prop_bag = oaidl.IPropertyBag()
if moniker.BindToStorage(None, None, oaidl.IPropertyBag.IID, prop_bag) == wintypes.ERROR.SUCCESS:
var = wintypes.VARIANT()
hr = prop_bag.Read(wintypes.LPCOLESTR('Description'), var, None )
if hr != wintypes.ERROR.SUCCESS:
hr = prop_bag.Read(wintypes.LPCOLESTR('FriendlyName'), var, None )
names.append(var.value.bstrVal.value if hr == wintypes.ERROR.SUCCESS else 'unnamed')
prop_bag.Release()
moniker.Release()
pEnumCat.Release()
sys_dev_enum.Release()
return names |
returns a list of available names of VideoInputDevices
ole32 should be initialized before use
| 14 | 82 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_video_input_devices_names() -> List[str]:
# based on https://docs.microsoft.com/ru-ru/windows/win32/directshow/selecting-a-capture-device
names = []
sys_dev_enum = strmif.ICreateDevEnum()
if ole32.CoCreateInstance(uuids.CLSID_SystemDeviceEnum, None, ole32.CLSCTX.CLSCTX_INPROC_SERVER, strmif.ICreateDevEnum.IID, sys_dev_enum) == wintypes.ERROR.SUCCESS:
pEnumCat = objidl.IEnumMoniker()
if sys_dev_enum.CreateClassEnumerator(uuids.CLSID_VideoInputDeviceCategory, pEnumCat, 0) == wintypes.ERROR.SUCCESS:
moniker = objidl.IMoniker()
while pEnumCat.Next(1, moniker, None) == wintypes.ERROR.SUCCESS:
prop_bag = oaidl.IPropertyBag()
if moniker.BindToStorage(None, None, oaidl.IPropertyBag.IID, prop_bag) == wintypes.ERROR.SUCCESS:
var = wintypes.VARIANT()
hr = prop_bag.Read(wintypes.LPCOLESTR('Description'), var, None )
if hr != wintypes.ERROR.SUCCESS:
hr = prop_bag.Read(wintypes.LPCOLESTR('FriendlyName'), var, None )
names.append(var.value.bstrVal.value if hr == wintypes.ERROR.SUCCESS else 'unnamed')
prop_bag.Release()
moniker.Release()
pEnumCat.Release()
sys_dev_enum.Release()
return names
```
###Assistant :
returns a list of available names of VideoInputDevices
ole32 should be initialized before use
|
1,660 | def check_for_updates():
version_message = get_update_status()
print(version_message)
|
Check for updates to the current version.
| 7 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def check_for_updates():
version_message = get_update_status()
print(version_message)
```
###Assistant :
Check for updates to the current version.
|
1,661 | def _get_base_knot_positions(X, n_knots=10, knots="uniform", sample_weight=None):
if knots == "quantile":
percentiles = 100 * np.linspace(
start=0, stop=1, num=n_knots, dtype=np.float64
)
if sample_weight is None:
knots = np.percentile(X, percentiles, axis=0)
else:
knots = np.array(
[
_weighted_percentile(X, sample_weight, percentile)
for percentile in percentiles
]
)
else:
# knots == 'uniform':
# Note that the variable `knots` has already been validated and
# `else` is therefore safe.
# Disregard observations with zero weight.
mask = slice(None, None, 1) if sample_weight is None else sample_weight > 0
x_min = np.amin(X[mask], axis=0)
x_max = np.amax(X[mask], axis=0)
knots = np.linspace(
start=x_min,
stop=x_max,
num=n_knots,
endpoint=True,
dtype=np.float64,
)
return knots
| Calculate base knot positions.
Base knots such that first knot <= feature <= last knot. For the
B-spline construction with scipy.interpolate.BSpline, 2*degree knots
beyond the base interval are added.
Returns
-------
knots : ndarray of shape (n_knots, n_features), dtype=np.float64
Knot positions (points) of base interval.
| 45 | 101 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_base_knot_positions(X, n_knots=10, knots="uniform", sample_weight=None):
if knots == "quantile":
percentiles = 100 * np.linspace(
start=0, stop=1, num=n_knots, dtype=np.float64
)
if sample_weight is None:
knots = np.percentile(X, percentiles, axis=0)
else:
knots = np.array(
[
_weighted_percentile(X, sample_weight, percentile)
for percentile in percentiles
]
)
else:
# knots == 'uniform':
# Note that the variable `knots` has already been validated and
# `else` is therefore safe.
# Disregard observations with zero weight.
mask = slice(None, None, 1) if sample_weight is None else sample_weight > 0
x_min = np.amin(X[mask], axis=0)
x_max = np.amax(X[mask], axis=0)
knots = np.linspace(
start=x_min,
stop=x_max,
num=n_knots,
endpoint=True,
dtype=np.float64,
)
return knots
```
###Assistant : Calculate base knot positions.
Base knots such that first knot <= feature <= last knot. For the
B-spline construction with scipy.interpolate.BSpline, 2*degree knots
beyond the base interval are added.
Returns
-------
knots : ndarray of shape (n_knots, n_features), dtype=np.float64
Knot positions (points) of base interval.
|
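A minimal, self-contained sketch of the two knot-placement strategies described in the `_get_base_knot_positions` entry above (plain NumPy, no sample weights; the toy data and knot count are illustrative, not from the original code):
```Python
import numpy as np

X = np.random.default_rng(0).exponential(size=(200, 1))  # skewed toy feature
n_knots = 5

# 'uniform': equally spaced knots between the feature's min and max.
uniform_knots = np.linspace(X.min(axis=0), X.max(axis=0), num=n_knots)

# 'quantile': knots at evenly spaced percentiles, denser where the data is dense.
percentiles = 100 * np.linspace(0, 1, num=n_knots)
quantile_knots = np.percentile(X, percentiles, axis=0)

print(uniform_knots.ravel())
print(quantile_knots.ravel())
```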
1,662 | def _num_tokens(self, data):
if tf_utils.is_sparse(data):
flat_values = data.values
elif tf_utils.is_ragged(data):
flat_values = data.flat_values
else:
flat_values = tf.reshape(data, [-1])
tokens, _, counts = tf.unique_with_counts(flat_values, out_idx=tf.int64)
return tokens, counts
| Count the number of tokens in a ragged, sparse or dense tensor. | 12 | 27 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _num_tokens(self, data):
if tf_utils.is_sparse(data):
flat_values = data.values
elif tf_utils.is_ragged(data):
flat_values = data.flat_values
else:
flat_values = tf.reshape(data, [-1])
tokens, _, counts = tf.unique_with_counts(flat_values, out_idx=tf.int64)
return tokens, counts
```
###Assistant : Count the number of tokens in a ragged, sparse or dense tensor.
|
1,663 | def no_devicess_fixture():
return json.loads(load_fixture("awair/no_devices.json"))
@pytest.fixture(name="awair_offline", scope="session") | Fixture representing when no devices are found in Awair's cloud API. | 11 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def no_devicess_fixture():
return json.loads(load_fixture("awair/no_devices.json"))
@pytest.fixture(name="awair_offline", scope="session")
```
###Assistant : Fixture representing when no devices are found in Awair's cloud API.
|
1,664 | def data():
return pd.array(
[True, False] * 4 + [np.nan] + [True, False] * 44 + [np.nan] + [True, False],
dtype="boolean",
)
@pytest.mark.parametrize(
"values, exp_any, exp_all, exp_any_noskip, exp_all_noskip",
[
([True, pd.NA], True, True, True, pd.NA),
([False, pd.NA], False, False, pd.NA, False),
([pd.NA], False, True, pd.NA, pd.NA),
([], False, True, False, True),
# GH-33253: all True / all False values buggy with skipna=False
([True, True], True, True, True, True),
([False, False], False, False, False, False),
],
) | Fixture returning boolean array, with valid and missing values. | 9 | 76 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def data():
return pd.array(
[True, False] * 4 + [np.nan] + [True, False] * 44 + [np.nan] + [True, False],
dtype="boolean",
)
@pytest.mark.parametrize(
"values, exp_any, exp_all, exp_any_noskip, exp_all_noskip",
[
([True, pd.NA], True, True, True, pd.NA),
([False, pd.NA], False, False, pd.NA, False),
([pd.NA], False, True, pd.NA, pd.NA),
([], False, True, False, True),
# GH-33253: all True / all False values buggy with skipna=False
([True, True], True, True, True, True),
([False, False], False, False, False, False),
],
)
```
###Assistant : Fixture returning boolean array, with valid and missing values.
|
1,665 | def should_toggle_mask(self) -> bool:
with self._lock:
retval = self._toggle_mask
if retval:
logger.debug("Sending toggle mask")
self._toggle_mask = False
return retval
| Check whether the mask should be toggled and return the value. If ``True`` is returned
    then :attr:`_toggle_mask` resets back to ``False``
Returns
-------
bool
``True`` if the mask should be toggled otherwise ``False``. | 33 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def should_toggle_mask(self) -> bool:
with self._lock:
retval = self._toggle_mask
if retval:
logger.debug("Sending toggle mask")
self._toggle_mask = False
return retval
```
###Assistant : Check whether the mask should be toggled and return the value. If ``True`` is returned
    then :attr:`_toggle_mask` resets back to ``False``
Returns
-------
bool
``True`` if the mask should be toggled otherwise ``False``.
|
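The `should_toggle_mask` entry above describes a lock-guarded check-and-reset flag. A small standalone sketch of the same pattern (the `ToggleFlag` class is hypothetical, not part of the original code):
```Python
import threading

class ToggleFlag:
    """Thread-safe one-shot flag: reading a True value resets it to False."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._value = False

    def set(self) -> None:
        with self._lock:
            self._value = True

    def consume(self) -> bool:
        with self._lock:
            value = self._value
            if value:
                self._value = False  # reset so the toggle only fires once
            return value

flag = ToggleFlag()
flag.set()
print(flag.consume(), flag.consume())  # True False
```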
1,666 | def load_mtpl2(n_samples=100000):
# freMTPL2freq dataset from https://www.openml.org/d/41214
df_freq = fetch_openml(data_id=41214, as_frame=True, parser="pandas").data
df_freq["IDpol"] = df_freq["IDpol"].astype(int)
df_freq.set_index("IDpol", inplace=True)
# freMTPL2sev dataset from https://www.openml.org/d/41215
df_sev = fetch_openml(data_id=41215, as_frame=True, parser="pandas").data
# sum ClaimAmount over identical IDs
df_sev = df_sev.groupby("IDpol").sum()
df = df_freq.join(df_sev, how="left")
df["ClaimAmount"].fillna(0, inplace=True)
# unquote string fields
for column_name in df.columns[df.dtypes.values == object]:
df[column_name] = df[column_name].str.strip("'")
return df.iloc[:n_samples]
| Fetch the French Motor Third-Party Liability Claims dataset.
Parameters
----------
n_samples: int, default=100000
number of samples to select (for faster run time). Full dataset has
678013 samples.
| 27 | 57 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def load_mtpl2(n_samples=100000):
# freMTPL2freq dataset from https://www.openml.org/d/41214
df_freq = fetch_openml(data_id=41214, as_frame=True, parser="pandas").data
df_freq["IDpol"] = df_freq["IDpol"].astype(int)
df_freq.set_index("IDpol", inplace=True)
# freMTPL2sev dataset from https://www.openml.org/d/41215
df_sev = fetch_openml(data_id=41215, as_frame=True, parser="pandas").data
# sum ClaimAmount over identical IDs
df_sev = df_sev.groupby("IDpol").sum()
df = df_freq.join(df_sev, how="left")
df["ClaimAmount"].fillna(0, inplace=True)
# unquote string fields
for column_name in df.columns[df.dtypes.values == object]:
df[column_name] = df[column_name].str.strip("'")
return df.iloc[:n_samples]
```
###Assistant : Fetch the French Motor Third-Party Liability Claims dataset.
Parameters
----------
n_samples: int, default=100000
number of samples to select (for faster run time). Full dataset has
678013 samples.
|
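A hedged sketch of the join-and-fill step that `load_mtpl2` performs, using tiny synthetic frames instead of the OpenML downloads so it runs offline (column names mirror the docs above; the values are made up):
```Python
import pandas as pd

# Tiny synthetic stand-ins for the freMTPL2 frequency and severity tables.
df_freq = pd.DataFrame(
    {"IDpol": [1, 2, 3], "Exposure": [0.5, 1.0, 0.25]}
).set_index("IDpol")
df_sev = pd.DataFrame({"IDpol": [1, 1, 3], "ClaimAmount": [100.0, 50.0, 200.0]})

# Sum claim amounts per policy, left-join onto the frequency table and
# treat policies without any recorded claim as a zero claim amount.
df_sev = df_sev.groupby("IDpol").sum()
df = df_freq.join(df_sev, how="left")
df["ClaimAmount"] = df["ClaimAmount"].fillna(0)
print(df)
```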
1,667 | def _get_driver(self) -> str:
try:
driver = pynvml.nvmlSystemGetDriverVersion().decode("utf-8")
except pynvml.NVMLError as err:
self._log("debug", f"Unable to obtain driver. Original error: {str(err)}")
driver = "No Nvidia driver found"
self._log("debug", f"GPU Driver: {driver}")
return driver
| Obtain the Nvidia driver version currently in use.
Returns
-------
str
The current GPU driver version
| 16 | 32 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_driver(self) -> str:
try:
driver = pynvml.nvmlSystemGetDriverVersion().decode("utf-8")
except pynvml.NVMLError as err:
self._log("debug", f"Unable to obtain driver. Original error: {str(err)}")
driver = "No Nvidia driver found"
self._log("debug", f"GPU Driver: {driver}")
return driver
```
###Assistant : Obtain the Nvidia driver version currently in use.
Returns
-------
str
The current GPU driver version
|
1,668 | def _load_serializers():
global _serializers
serializers = {}
for format in BUILTIN_SERIALIZERS:
register_serializer(format, BUILTIN_SERIALIZERS[format], serializers)
if hasattr(settings, "SERIALIZATION_MODULES"):
for format in settings.SERIALIZATION_MODULES:
register_serializer(
format, settings.SERIALIZATION_MODULES[format], serializers
)
_serializers = serializers
|
Register built-in and settings-defined serializers. This is done lazily so
that user code has a chance to (e.g.) set up custom settings without
needing to be careful of import order.
| 30 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _load_serializers():
global _serializers
serializers = {}
for format in BUILTIN_SERIALIZERS:
register_serializer(format, BUILTIN_SERIALIZERS[format], serializers)
if hasattr(settings, "SERIALIZATION_MODULES"):
for format in settings.SERIALIZATION_MODULES:
register_serializer(
format, settings.SERIALIZATION_MODULES[format], serializers
)
_serializers = serializers
```
###Assistant :
Register built-in and settings-defined serializers. This is done lazily so
that user code has a chance to (e.g.) set up custom settings without
needing to be careful of import order.
|
1,669 | def pop(self):
if self.keyorder:
value = self.keys()[0]
self.remove(value)
return value
return None
|
Pops the top element from the sorted keys if it exists. Returns None otherwise.
| 14 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def pop(self):
if self.keyorder:
value = self.keys()[0]
self.remove(value)
return value
return None
```
###Assistant :
Pops the top element from the sorted keys if it exists. Returns None otherwise.
|
1,670 | def probs_to_pianoroll_viterbi(frame_probs, onset_probs, alpha=0.5):
n, d = onset_probs.shape
loss_matrix = np.zeros([n, d, 2], dtype=float)
path_matrix = np.zeros([n, d, 2], dtype=bool)
frame_losses = (1 - alpha) * -np.log(np.stack([1 - frame_probs,
frame_probs], axis=-1))
onset_losses = alpha * -np.log(np.stack([1 - onset_probs,
onset_probs], axis=-1))
loss_matrix[0, :, :] = frame_losses[0, :, :] + onset_losses[0, :, :]
for i in range(1, n):
transition_loss = np.tile(loss_matrix[i - 1, :, :][:, :, np.newaxis],
[1, 1, 2])
transition_loss[:, 0, 0] += onset_losses[i, :, 0]
transition_loss[:, 0, 1] += onset_losses[i, :, 1]
transition_loss[:, 1, 0] += onset_losses[i, :, 0]
transition_loss[:, 1, 1] += onset_losses[i, :, 0]
path_matrix[i, :, :] = np.argmin(transition_loss, axis=1)
loss_matrix[i, :, 0] = transition_loss[
np.arange(d), path_matrix[i, :, 0].astype(int), 0]
loss_matrix[i, :, 1] = transition_loss[
np.arange(d), path_matrix[i, :, 1].astype(int), 1]
loss_matrix[i, :, :] += frame_losses[i, :, :]
pianoroll = np.zeros([n, d], dtype=bool)
pianoroll[n - 1, :] = np.argmin(loss_matrix[n - 1, :, :], axis=-1)
for i in range(n - 2, -1, -1):
pianoroll[i, :] = path_matrix[
i + 1, np.arange(d), pianoroll[i + 1, :].astype(int)]
return pianoroll
| Viterbi decoding of frame & onset probabilities to pianoroll.
Args:
frame_probs: A numpy array (num-frames-by-num-pitches) of frame
probabilities.
onset_probs: A numpy array (num-frames-by-num-pitches) of onset
probabilities.
alpha: Relative weight of onset and frame loss, a float between 0 and 1.
With alpha = 0, onset probabilities will be ignored. With alpha = 1, frame
probabilities will be ignored.
Returns:
A numpy array (num-frames-by-num-pitches) representing the boolean-valued
pianoroll.
| 67 | 167 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def probs_to_pianoroll_viterbi(frame_probs, onset_probs, alpha=0.5):
n, d = onset_probs.shape
loss_matrix = np.zeros([n, d, 2], dtype=float)
path_matrix = np.zeros([n, d, 2], dtype=bool)
frame_losses = (1 - alpha) * -np.log(np.stack([1 - frame_probs,
frame_probs], axis=-1))
onset_losses = alpha * -np.log(np.stack([1 - onset_probs,
onset_probs], axis=-1))
loss_matrix[0, :, :] = frame_losses[0, :, :] + onset_losses[0, :, :]
for i in range(1, n):
transition_loss = np.tile(loss_matrix[i - 1, :, :][:, :, np.newaxis],
[1, 1, 2])
transition_loss[:, 0, 0] += onset_losses[i, :, 0]
transition_loss[:, 0, 1] += onset_losses[i, :, 1]
transition_loss[:, 1, 0] += onset_losses[i, :, 0]
transition_loss[:, 1, 1] += onset_losses[i, :, 0]
path_matrix[i, :, :] = np.argmin(transition_loss, axis=1)
loss_matrix[i, :, 0] = transition_loss[
np.arange(d), path_matrix[i, :, 0].astype(int), 0]
loss_matrix[i, :, 1] = transition_loss[
np.arange(d), path_matrix[i, :, 1].astype(int), 1]
loss_matrix[i, :, :] += frame_losses[i, :, :]
pianoroll = np.zeros([n, d], dtype=bool)
pianoroll[n - 1, :] = np.argmin(loss_matrix[n - 1, :, :], axis=-1)
for i in range(n - 2, -1, -1):
pianoroll[i, :] = path_matrix[
i + 1, np.arange(d), pianoroll[i + 1, :].astype(int)]
return pianoroll
```
###Assistant : Viterbi decoding of frame & onset probabilities to pianoroll.
Args:
frame_probs: A numpy array (num-frames-by-num-pitches) of frame
probabilities.
onset_probs: A numpy array (num-frames-by-num-pitches) of onset
probabilities.
alpha: Relative weight of onset and frame loss, a float between 0 and 1.
With alpha = 0, onset probabilities will be ignored. With alpha = 1, frame
probabilities will be ignored.
Returns:
A numpy array (num-frames-by-num-pitches) representing the boolean-valued
pianoroll.
|
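To make the weighting in `probs_to_pianoroll_viterbi` concrete, a short NumPy sketch of how frame and onset probabilities become per-state negative-log losses (toy values, assuming the same `alpha` convention as above):
```Python
import numpy as np

# Toy probabilities: 3 frames x 2 pitches.
frame_probs = np.array([[0.9, 0.2], [0.8, 0.1], [0.7, 0.6]])
onset_probs = np.array([[0.8, 0.1], [0.1, 0.1], [0.1, 0.5]])
alpha = 0.5

# Negative log-likelihood losses for the "off" (index 0) and "on" (index 1)
# states, weighted the same way the decoder above balances frame vs. onset.
frame_losses = (1 - alpha) * -np.log(np.stack([1 - frame_probs, frame_probs], axis=-1))
onset_losses = alpha * -np.log(np.stack([1 - onset_probs, onset_probs], axis=-1))

print(frame_losses.shape, onset_losses.shape)  # (3, 2, 2) (3, 2, 2)
print(frame_losses[0, 0])  # small "on" loss: pitch 0 is likely active in frame 0
```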
1,671 | def to_native_string(string, encoding="ascii"):
if isinstance(string, builtin_str):
out = string
else:
out = string.decode(encoding)
return out
| Given a string object, regardless of type, returns a representation of
that string in the native string type, encoding and decoding where
necessary. This assumes ASCII unless told otherwise.
| 29 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def to_native_string(string, encoding="ascii"):
if isinstance(string, builtin_str):
out = string
else:
out = string.decode(encoding)
return out
```
###Assistant : Given a string object, regardless of type, returns a representation of
that string in the native string type, encoding and decoding where
necessary. This assumes ASCII unless told otherwise.
|
1,672 | def mathematica(s, additional_translations=None):
parser = MathematicaParser(additional_translations)
if additional_translations is not None:
sympy_deprecation_warning(
,
deprecated_since_version="1.11",
active_deprecations_target="mathematica-parser-additional-translations",
)
return sympify(parser._parse_old(s))
return parser.parse(s)
|
Translate a string containing a Wolfram Mathematica expression to a SymPy
expression.
If the translator is unable to find a suitable SymPy expression, the
``FullForm`` of the Mathematica expression will be output, using SymPy
``Function`` objects as nodes of the syntax tree.
Examples
========
>>> from sympy.parsing.mathematica import mathematica
>>> mathematica("Sin[x]^2 Tan[y]")
sin(x)**2*tan(y)
>>> e = mathematica("F[7,5,3]")
>>> e
F(7, 5, 3)
>>> from sympy import Function, Max, Min
>>> e.replace(Function("F"), lambda *x: Max(*x)*Min(*x))
21
Both standard input form and Mathematica full form are supported:
>>> mathematica("x*(a + b)")
x*(a + b)
>>> mathematica("Times[x, Plus[a, b]]")
x*(a + b)
To get a matrix from Wolfram's code:
>>> m = mathematica("{{a, b}, {c, d}}")
>>> m
((a, b), (c, d))
>>> from sympy import Matrix
>>> Matrix(m)
Matrix([
[a, b],
[c, d]])
    If the translation into equivalent SymPy expressions fails, a SymPy
expression equivalent to Wolfram Mathematica's "FullForm" will be created:
>>> mathematica("x_.")
Optional(Pattern(x, Blank()))
>>> mathematica("Plus @@ {x, y, z}")
Apply(Plus, (x, y, z))
>>> mathematica("f[x_, 3] := x^3 /; x > 0")
SetDelayed(f(Pattern(x, Blank()), 3), Condition(x**3, x > 0))
The ``additional_translations`` parameter for the Mathematica parser is now deprecated.
Use SymPy's .replace( ) or .subs( ) methods on the output expression instead. | 203 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def mathematica(s, additional_translations=None):
parser = MathematicaParser(additional_translations)
if additional_translations is not None:
sympy_deprecation_warning(
,
deprecated_since_version="1.11",
active_deprecations_target="mathematica-parser-additional-translations",
)
return sympify(parser._parse_old(s))
return parser.parse(s)
```
###Assistant :
Translate a string containing a Wolfram Mathematica expression to a SymPy
expression.
If the translator is unable to find a suitable SymPy expression, the
``FullForm`` of the Mathematica expression will be output, using SymPy
``Function`` objects as nodes of the syntax tree.
Examples
========
>>> from sympy.parsing.mathematica import mathematica
>>> mathematica("Sin[x]^2 Tan[y]")
sin(x)**2*tan(y)
>>> e = mathematica("F[7,5,3]")
>>> e
F(7, 5, 3)
>>> from sympy import Function, Max, Min
>>> e.replace(Function("F"), lambda *x: Max(*x)*Min(*x))
21
Both standard input form and Mathematica full form are supported:
>>> mathematica("x*(a + b)")
x*(a + b)
>>> mathematica("Times[x, Plus[a, b]]")
x*(a + b)
To get a matrix from Wolfram's code:
>>> m = mathematica("{{a, b}, {c, d}}")
>>> m
((a, b), (c, d))
>>> from sympy import Matrix
>>> Matrix(m)
Matrix([
[a, b],
[c, d]])
    If the translation into equivalent SymPy expressions fails, a SymPy
expression equivalent to Wolfram Mathematica's "FullForm" will be created:
>>> mathematica("x_.")
Optional(Pattern(x, Blank()))
>>> mathematica("Plus @@ {x, y, z}")
Apply(Plus, (x, y, z))
>>> mathematica("f[x_, 3] := x^3 /; x > 0")
SetDelayed(f(Pattern(x, Blank()), 3), Condition(x**3, x > 0))
The ``additional_translations`` parameter for the Mathematica parser is now deprecated.
Use SymPy's .replace( ) or .subs( ) methods on the output expression instead.
|
1,673 | def evaluation(self):
# adding info about the eval tasks
if self.eval_tasks == self.train_tasks:
msg = "For evalution, we used the same training datasets; check the [Datasets Used](#datasets-used) section for more information"
eval_list = ''
else:
msg = f"This model was evaluated on the datasets below (use the `parlai display_data` commands to show data). Visit the {make_link('task (dataset) list', task_site)} for more details about the datasets.\n"
eval_list = get_dataset_info(self.eval_tasks)
eval_list = '\n' + '\n'.join(eval_list)
content = [msg + eval_list]
# validation metric info: getting metric name and description
splitted = re.sub(r'_+', ' ', self.valid_metric).split()
key = splitted[-1]
if extra_metric_info.get(key):
mname, description = extra_metric_info[key]
elif METRICS_DISPLAY_DATA.get(key):
mname = METRICS_DISPLAY_DATA[key].title
description = METRICS_DISPLAY_DATA[key].description
else:
description, mname = (None, None)
# adding description for validation metric and re-wording it:
msg = f"\n\nWe used the metric {metric_format(self.valid_metric)}"
if len(splitted) == 3 and splitted[0] == 'class' and mname:
msg += f", the {mname.lower()} scores for the class {splitted[1]}"
content.append(msg + ' as the validation metric. ')
if description:
description = description[0].lower() + description[1:]
content[-1] += f"Recall that `{self.valid_metric}` is {description}."
# evaluation table
# getting list of subtasks and making columns
eval_tasks = self.eval_tasks
if len(self.eval_tasks) > 1:
eval_tasks.insert(0, 'All')
columns = [' '] + [taskname(subtask) for subtask in eval_tasks]
# only one row: validation
row = [metric_format(self.valid_metric)]
for subtask in eval_tasks:
# creating the key to get metric and formatting
pre = '' if subtask == 'All' or len(eval_tasks) == 1 else subtask + '/'
key = pre + self.valid_metric
fmt = '{:.4f}' if self.valid_metric in not_percent else '{:.2%}'
row.append(fmt.format(self.eval_results[key]))
return '\n'.join(content) + '\n\n' + '\n'.join(make_md_table([row], columns))
|
returns a section with dataset info about the eval tasks if they exist,
    information about the validation metric if it exists, and creates a table with
the validation metric.
| 29 | 262 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def evaluation(self):
# adding info about the eval tasks
if self.eval_tasks == self.train_tasks:
msg = "For evalution, we used the same training datasets; check the [Datasets Used](#datasets-used) section for more information"
eval_list = ''
else:
msg = f"This model was evaluated on the datasets below (use the `parlai display_data` commands to show data). Visit the {make_link('task (dataset) list', task_site)} for more details about the datasets.\n"
eval_list = get_dataset_info(self.eval_tasks)
eval_list = '\n' + '\n'.join(eval_list)
content = [msg + eval_list]
# validation metric info: getting metric name and description
splitted = re.sub(r'_+', ' ', self.valid_metric).split()
key = splitted[-1]
if extra_metric_info.get(key):
mname, description = extra_metric_info[key]
elif METRICS_DISPLAY_DATA.get(key):
mname = METRICS_DISPLAY_DATA[key].title
description = METRICS_DISPLAY_DATA[key].description
else:
description, mname = (None, None)
# adding description for validation metric and re-wording it:
msg = f"\n\nWe used the metric {metric_format(self.valid_metric)}"
if len(splitted) == 3 and splitted[0] == 'class' and mname:
msg += f", the {mname.lower()} scores for the class {splitted[1]}"
content.append(msg + ' as the validation metric. ')
if description:
description = description[0].lower() + description[1:]
content[-1] += f"Recall that `{self.valid_metric}` is {description}."
# evaluation table
# getting list of subtasks and making columns
eval_tasks = self.eval_tasks
if len(self.eval_tasks) > 1:
eval_tasks.insert(0, 'All')
columns = [' '] + [taskname(subtask) for subtask in eval_tasks]
# only one row: validation
row = [metric_format(self.valid_metric)]
for subtask in eval_tasks:
# creating the key to get metric and formatting
pre = '' if subtask == 'All' or len(eval_tasks) == 1 else subtask + '/'
key = pre + self.valid_metric
fmt = '{:.4f}' if self.valid_metric in not_percent else '{:.2%}'
row.append(fmt.format(self.eval_results[key]))
return '\n'.join(content) + '\n\n' + '\n'.join(make_md_table([row], columns))
```
###Assistant :
returns a section with dataset info about the eval tasks if they exist,
    information about the validation metric if it exists, and creates a table with
the validation metric.
|
1,674 | def _train_with_recompute(n_steps):
img_dim, n_channels, batch_size = 256, 1, 4
x, y = _get_dummy_data(img_dim, n_channels, batch_size)
# This model is the same model as _get_big_cnn_model but split into 3 parts.
models = _get_split_cnn_model(
img_dim, n_channels, num_partitions=3, blocks_per_partition=2
)
model1, model2, model3 = models
# Apply gradient checkpointing to the submodels using tf.recompute_grad.
model1_re = tf.recompute_grad(model1)
model2_re = tf.recompute_grad(model2)
model3_re = tf.recompute_grad(model3)
optimizer = optimizers.SGD()
tr_vars = (
model1.trainable_variables
+ model2.trainable_variables
+ model3.trainable_variables
)
losses = []
for _ in range(n_steps):
with tf.GradientTape() as tape:
logits1 = model1_re(x)
logits2 = model2_re(logits1)
logits3 = model3_re(logits2)
loss = _compute_loss(logits3, y)
losses.append(loss)
grads = tape.gradient(loss, tr_vars) # tr_vars
optimizer.apply_gradients(zip(grads, tr_vars))
del grads
return losses
@tf_test_utils.with_eager_op_as_function | Trains a single large model with gradient checkpointing using tf.recompute_grad. | 10 | 110 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _train_with_recompute(n_steps):
img_dim, n_channels, batch_size = 256, 1, 4
x, y = _get_dummy_data(img_dim, n_channels, batch_size)
# This model is the same model as _get_big_cnn_model but split into 3 parts.
models = _get_split_cnn_model(
img_dim, n_channels, num_partitions=3, blocks_per_partition=2
)
model1, model2, model3 = models
# Apply gradient checkpointing to the submodels using tf.recompute_grad.
model1_re = tf.recompute_grad(model1)
model2_re = tf.recompute_grad(model2)
model3_re = tf.recompute_grad(model3)
optimizer = optimizers.SGD()
tr_vars = (
model1.trainable_variables
+ model2.trainable_variables
+ model3.trainable_variables
)
losses = []
for _ in range(n_steps):
with tf.GradientTape() as tape:
logits1 = model1_re(x)
logits2 = model2_re(logits1)
logits3 = model3_re(logits2)
loss = _compute_loss(logits3, y)
losses.append(loss)
grads = tape.gradient(loss, tr_vars) # tr_vars
optimizer.apply_gradients(zip(grads, tr_vars))
del grads
return losses
@tf_test_utils.with_eager_op_as_function
```
###Assistant : Trains a single large model with gradient checkpointing using tf.recompute_grad.
|
1,675 | def test_syncer_callback_dead_node_log_error(caplog, ray_start_2_cpus, temp_data_dirs):
caplog.set_level(logging.ERROR, logger="ray.tune.syncer")
tmp_source, tmp_target = temp_data_dirs
syncer_callback = TestSyncerCallback(
sync_period=0,
local_logdir_override=tmp_target,
)
trial1 = MockTrial(trial_id="a", logdir=tmp_source, on_dead_node=True)
syncer_callback.on_trial_result(iteration=1, trials=[], trial=trial1, result={})
assert (
"An error occurred when trying to get the node ip where this trial is running"
in caplog.text
)
 | Check that we catch + log errors when trying to sync with a dead remote node. | 15 | 45 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_syncer_callback_dead_node_log_error(caplog, ray_start_2_cpus, temp_data_dirs):
caplog.set_level(logging.ERROR, logger="ray.tune.syncer")
tmp_source, tmp_target = temp_data_dirs
syncer_callback = TestSyncerCallback(
sync_period=0,
local_logdir_override=tmp_target,
)
trial1 = MockTrial(trial_id="a", logdir=tmp_source, on_dead_node=True)
syncer_callback.on_trial_result(iteration=1, trials=[], trial=trial1, result={})
assert (
"An error occurred when trying to get the node ip where this trial is running"
in caplog.text
)
```
###Assistant : Check that we catch + log errors when trying to sync with a dead remote node.
|
1,676 | def directed_modularity_matrix(G, nodelist=None, weight=None):
import numpy as np
if nodelist is None:
nodelist = list(G)
A = nx.to_scipy_sparse_array(G, nodelist=nodelist, weight=weight, format="csr")
k_in = A.sum(axis=0)
k_out = A.sum(axis=1)
m = k_in.sum()
# Expected adjacency matrix
X = np.outer(k_out, k_in) / m
return A - X
| Returns the directed modularity matrix of G.
The modularity matrix is the matrix B = A - <A>, where A is the adjacency
matrix and <A> is the expected adjacency matrix, assuming that the graph
is described by the configuration model.
More specifically, the element B_ij of B is defined as
.. math::
B_{ij} = A_{ij} - k_i^{out} k_j^{in} / m
where :math:`k_i^{in}` is the in degree of node i, and :math:`k_j^{out}` is the out degree
of node j, with m the number of edges in the graph. When weight is set
to a name of an attribute edge, Aij, k_i, k_j and m are computed using
its value.
Parameters
----------
G : DiGraph
A NetworkX DiGraph
nodelist : list, optional
The rows and columns are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
weight : string or None, optional (default=None)
The edge attribute that holds the numerical value used for
the edge weight. If None then all edge weights are 1.
Returns
-------
B : Numpy array
The modularity matrix of G.
Examples
--------
>>> G = nx.DiGraph()
>>> G.add_edges_from(
... (
... (1, 2),
... (1, 3),
... (3, 1),
... (3, 2),
... (3, 5),
... (4, 5),
... (4, 6),
... (5, 4),
... (5, 6),
... (6, 4),
... )
... )
>>> B = nx.directed_modularity_matrix(G)
Notes
-----
NetworkX defines the element A_ij of the adjacency matrix as 1 if there
is a link going from node i to node j. Leicht and Newman use the opposite
definition. This explains the different expression for B_ij.
See Also
--------
to_numpy_array
modularity_spectrum
adjacency_matrix
modularity_matrix
References
----------
.. [1] E. A. Leicht, M. E. J. Newman,
"Community structure in directed networks",
Phys. Rev Lett., vol. 100, no. 11, p. 118703, 2008.
| 303 | 44 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def directed_modularity_matrix(G, nodelist=None, weight=None):
import numpy as np
if nodelist is None:
nodelist = list(G)
A = nx.to_scipy_sparse_array(G, nodelist=nodelist, weight=weight, format="csr")
k_in = A.sum(axis=0)
k_out = A.sum(axis=1)
m = k_in.sum()
# Expected adjacency matrix
X = np.outer(k_out, k_in) / m
return A - X
```
###Assistant : Returns the directed modularity matrix of G.
The modularity matrix is the matrix B = A - <A>, where A is the adjacency
matrix and <A> is the expected adjacency matrix, assuming that the graph
is described by the configuration model.
More specifically, the element B_ij of B is defined as
.. math::
B_{ij} = A_{ij} - k_i^{out} k_j^{in} / m
where :math:`k_i^{in}` is the in degree of node i, and :math:`k_j^{out}` is the out degree
of node j, with m the number of edges in the graph. When weight is set
to a name of an attribute edge, Aij, k_i, k_j and m are computed using
its value.
Parameters
----------
G : DiGraph
A NetworkX DiGraph
nodelist : list, optional
The rows and columns are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
weight : string or None, optional (default=None)
The edge attribute that holds the numerical value used for
the edge weight. If None then all edge weights are 1.
Returns
-------
B : Numpy array
The modularity matrix of G.
Examples
--------
>>> G = nx.DiGraph()
>>> G.add_edges_from(
... (
... (1, 2),
... (1, 3),
... (3, 1),
... (3, 2),
... (3, 5),
... (4, 5),
... (4, 6),
... (5, 4),
... (5, 6),
... (6, 4),
... )
... )
>>> B = nx.directed_modularity_matrix(G)
Notes
-----
NetworkX defines the element A_ij of the adjacency matrix as 1 if there
is a link going from node i to node j. Leicht and Newman use the opposite
definition. This explains the different expression for B_ij.
See Also
--------
to_numpy_array
modularity_spectrum
adjacency_matrix
modularity_matrix
References
----------
.. [1] E. A. Leicht, M. E. J. Newman,
"Community structure in directed networks",
Phys. Rev Lett., vol. 100, no. 11, p. 118703, 2008.
|
1,677 | def project_columns(self, columns):
if columns == self.columns:
return self
return ParquetFunctionWrapper(
self.engine,
self.fs,
self.meta,
columns,
self.index,
None, # Already merged into common_kwargs
self.common_kwargs,
)
| Return a new ParquetFunctionWrapper object
with a sub-column projection.
| 9 | 24 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def project_columns(self, columns):
if columns == self.columns:
return self
return ParquetFunctionWrapper(
self.engine,
self.fs,
self.meta,
columns,
self.index,
None, # Already merged into common_kwargs
self.common_kwargs,
)
```
###Assistant : Return a new ParquetFunctionWrapper object
with a sub-column projection.
|
1,678 | def sort_args_by_name(self):
expr = self.expr
if not isinstance(expr, ArrayTensorProduct):
return self
args = expr.args
sorted_data = sorted(enumerate(args), key=lambda x: default_sort_key(x[1]))
pos_sorted, args_sorted = zip(*sorted_data)
reordering_map = {i: pos_sorted.index(i) for i, arg in enumerate(args)}
contraction_tuples = self._get_contraction_tuples()
contraction_tuples = [[(reordering_map[j], k) for j, k in i] for i in contraction_tuples]
c_tp = _array_tensor_product(*args_sorted)
new_contr_indices = self._contraction_tuples_to_contraction_indices(
c_tp,
contraction_tuples
)
return _array_contraction(c_tp, *new_contr_indices)
|
Sort arguments in the tensor product so that their order is lexicographical.
Examples
========
>>> from sympy.tensor.array.expressions.from_matrix_to_array import convert_matrix_to_array
>>> from sympy import MatrixSymbol
>>> from sympy.abc import N
>>> A = MatrixSymbol("A", N, N)
>>> B = MatrixSymbol("B", N, N)
>>> C = MatrixSymbol("C", N, N)
>>> D = MatrixSymbol("D", N, N)
>>> cg = convert_matrix_to_array(C*D*A*B)
>>> cg
ArrayContraction(ArrayTensorProduct(A, D, C, B), (0, 3), (1, 6), (2, 5))
>>> cg.sort_args_by_name()
ArrayContraction(ArrayTensorProduct(A, D, B, C), (0, 3), (1, 4), (2, 7))
| 81 | 61 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def sort_args_by_name(self):
expr = self.expr
if not isinstance(expr, ArrayTensorProduct):
return self
args = expr.args
sorted_data = sorted(enumerate(args), key=lambda x: default_sort_key(x[1]))
pos_sorted, args_sorted = zip(*sorted_data)
reordering_map = {i: pos_sorted.index(i) for i, arg in enumerate(args)}
contraction_tuples = self._get_contraction_tuples()
contraction_tuples = [[(reordering_map[j], k) for j, k in i] for i in contraction_tuples]
c_tp = _array_tensor_product(*args_sorted)
new_contr_indices = self._contraction_tuples_to_contraction_indices(
c_tp,
contraction_tuples
)
return _array_contraction(c_tp, *new_contr_indices)
```
###Assistant :
Sort arguments in the tensor product so that their order is lexicographical.
Examples
========
>>> from sympy.tensor.array.expressions.from_matrix_to_array import convert_matrix_to_array
>>> from sympy import MatrixSymbol
>>> from sympy.abc import N
>>> A = MatrixSymbol("A", N, N)
>>> B = MatrixSymbol("B", N, N)
>>> C = MatrixSymbol("C", N, N)
>>> D = MatrixSymbol("D", N, N)
>>> cg = convert_matrix_to_array(C*D*A*B)
>>> cg
ArrayContraction(ArrayTensorProduct(A, D, C, B), (0, 3), (1, 6), (2, 5))
>>> cg.sort_args_by_name()
ArrayContraction(ArrayTensorProduct(A, D, B, C), (0, 3), (1, 4), (2, 7))
|
1,679 | def gather_async(self, batch_ms=0, num_async=1) -> "LocalIterator[T]":
if num_async < 1:
raise ValueError("queue depth must be positive")
if batch_ms < 0:
raise ValueError("batch time must be positive")
# Forward reference to the returned iterator.
local_iter = None
| Returns a local iterable for asynchronous iteration.
New items will be fetched from the shards asynchronously as soon as
the previous one is computed. Items arrive in non-deterministic order.
Arguments:
batch_ms (int): Batches items for batch_ms milliseconds
on each shard before retrieving it.
Increasing batch_ms increases latency but improves throughput.
If this value is 0, then items are returned immediately.
num_async (int): The max number of async requests in flight
per actor. Increasing this improves the amount of pipeline
parallelism in the iterator.
Examples:
>>> it = from_range(100, 1).gather_async()
>>> next(it)
... 3
>>> next(it)
... 0
>>> next(it)
... 1
| 101 | 36 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def gather_async(self, batch_ms=0, num_async=1) -> "LocalIterator[T]":
if num_async < 1:
raise ValueError("queue depth must be positive")
if batch_ms < 0:
raise ValueError("batch time must be positive")
# Forward reference to the returned iterator.
local_iter = None
```
###Assistant : Returns a local iterable for asynchronous iteration.
New items will be fetched from the shards asynchronously as soon as
the previous one is computed. Items arrive in non-deterministic order.
Arguments:
batch_ms (int): Batches items for batch_ms milliseconds
on each shard before retrieving it.
Increasing batch_ms increases latency but improves throughput.
If this value is 0, then items are returned immediately.
num_async (int): The max number of async requests in flight
per actor. Increasing this improves the amount of pipeline
parallelism in the iterator.
Examples:
>>> it = from_range(100, 1).gather_async()
>>> next(it)
... 3
>>> next(it)
... 0
>>> next(it)
... 1
|
1,680 | def test_dict_checkpoint_dict(self):
checkpoint = self._prepare_dict_checkpoint()
# Convert into dict checkpoint
data_dict = checkpoint.to_dict()
self.assertIsInstance(data_dict, dict)
# Create from dict
checkpoint = Checkpoint.from_dict(data_dict)
self.assertTrue(checkpoint._data_dict)
self._assert_dict_checkpoint(checkpoint)
| Test conversion from dict to dict checkpoint and back. | 9 | 24 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_dict_checkpoint_dict(self):
checkpoint = self._prepare_dict_checkpoint()
# Convert into dict checkpoint
data_dict = checkpoint.to_dict()
self.assertIsInstance(data_dict, dict)
# Create from dict
checkpoint = Checkpoint.from_dict(data_dict)
self.assertTrue(checkpoint._data_dict)
self._assert_dict_checkpoint(checkpoint)
```
###Assistant : Test conversion from dict to dict checkpoint and back.
|
1,681 | def test_5_model(self):
query =
predict_query =
for cid, char in [(CID_A, 'a'), (CID_B, 'b')]:
self.sql_via_http(
query.format(char, char),
company_id=cid,
expected_resp_type=RESPONSE_TYPE.OK
)
response = self.sql_via_http(
predict_query.format(char),
company_id=cid,
expected_resp_type=RESPONSE_TYPE.TABLE
)
self.assertTrue(len(response['data']), 1)
|
CREATE MODEL mindsdb.model_{}
FROM test_integration_{} (
select * from test_data.home_rentals limit 50
) PREDICT rental_price
USING join_learn_process=true, time_aim=5
select * from mindsdb.model_{} where sqft = 100
| 26 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_5_model(self):
query =
predict_query =
for cid, char in [(CID_A, 'a'), (CID_B, 'b')]:
self.sql_via_http(
query.format(char, char),
company_id=cid,
expected_resp_type=RESPONSE_TYPE.OK
)
response = self.sql_via_http(
predict_query.format(char),
company_id=cid,
expected_resp_type=RESPONSE_TYPE.TABLE
)
self.assertTrue(len(response['data']), 1)
```
###Assistant :
CREATE MODEL mindsdb.model_{}
FROM test_integration_{} (
select * from test_data.home_rentals limit 50
) PREDICT rental_price
USING join_learn_process=true, time_aim=5
select * from mindsdb.model_{} where sqft = 100
|
1,682 | def putpalette(self, data, rawmode="RGB"):
from . import ImagePalette
if self.mode not in ("L", "LA", "P", "PA"):
raise ValueError("illegal image mode")
if isinstance(data, ImagePalette.ImagePalette):
palette = ImagePalette.raw(data.rawmode, data.palette)
else:
if not isinstance(data, bytes):
data = bytes(data)
palette = ImagePalette.raw(rawmode, data)
self.mode = "PA" if "A" in self.mode else "P"
self.palette = palette
self.palette.mode = "RGB"
self.load() # install new palette
|
Attaches a palette to this image. The image must be a "P", "PA", "L"
or "LA" image.
The palette sequence must contain at most 256 colors, made up of one
integer value for each channel in the raw mode.
For example, if the raw mode is "RGB", then it can contain at most 768
values, made up of red, green and blue values for the corresponding pixel
index in the 256 colors.
If the raw mode is "RGBA", then it can contain at most 1024 values,
containing red, green, blue and alpha values.
Alternatively, an 8-bit string may be used instead of an integer sequence.
:param data: A palette sequence (either a list or a string).
:param rawmode: The raw mode of the palette. Either "RGB", "RGBA", or a mode
that can be transformed to "RGB" or "RGBA" (e.g. "R", "BGR;15", "RGBA;L").
| 142 | 59 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def putpalette(self, data, rawmode="RGB"):
from . import ImagePalette
if self.mode not in ("L", "LA", "P", "PA"):
raise ValueError("illegal image mode")
if isinstance(data, ImagePalette.ImagePalette):
palette = ImagePalette.raw(data.rawmode, data.palette)
else:
if not isinstance(data, bytes):
data = bytes(data)
palette = ImagePalette.raw(rawmode, data)
self.mode = "PA" if "A" in self.mode else "P"
self.palette = palette
self.palette.mode = "RGB"
self.load() # install new palette
```
###Assistant :
Attaches a palette to this image. The image must be a "P", "PA", "L"
or "LA" image.
The palette sequence must contain at most 256 colors, made up of one
integer value for each channel in the raw mode.
For example, if the raw mode is "RGB", then it can contain at most 768
values, made up of red, green and blue values for the corresponding pixel
index in the 256 colors.
If the raw mode is "RGBA", then it can contain at most 1024 values,
containing red, green, blue and alpha values.
Alternatively, an 8-bit string may be used instead of an integer sequence.
:param data: A palette sequence (either a list or a string).
:param rawmode: The raw mode of the palette. Either "RGB", "RGBA", or a mode
that can be transformed to "RGB" or "RGBA" (e.g. "R", "BGR;15", "RGBA;L").
|
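A small usage sketch for `putpalette`, assuming Pillow is installed; the palette values are an arbitrary blue-to-red ramp chosen purely for illustration:
```Python
from PIL import Image

# 16x16 "P" image whose pixel values are simply the palette indices 0..255.
im = Image.new("P", (16, 16))
im.putdata(list(range(256)))

# 768-value palette: one (R, G, B) triple per index, here a blue-to-red ramp.
palette = []
for i in range(256):
    palette.extend([i, 0, 255 - i])
im.putpalette(palette)  # rawmode defaults to "RGB"

print(im.mode, im.convert("RGB").getpixel((15, 15)))  # P (255, 0, 0)
```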
1,683 | def get_timeout() -> t.Optional[t.Dict[str, t.Any]]:
if not os.path.exists(TIMEOUT_PATH):
return None
data = read_json_file(TIMEOUT_PATH)
data['deadline'] = datetime.datetime.strptime(data['deadline'], '%Y-%m-%dT%H:%M:%SZ')
return data
| Return details about the currently set timeout, if any, otherwise return None. | 12 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_timeout() -> t.Optional[t.Dict[str, t.Any]]:
if not os.path.exists(TIMEOUT_PATH):
return None
data = read_json_file(TIMEOUT_PATH)
data['deadline'] = datetime.datetime.strptime(data['deadline'], '%Y-%m-%dT%H:%M:%SZ')
return data
```
###Assistant : Return details about the currently set timeout, if any, otherwise return None.
|
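A minimal sketch of the deadline parsing that `get_timeout` performs, with a hypothetical payload standing in for the contents of `TIMEOUT_PATH`:
```Python
import datetime

# Hypothetical payload matching the timestamp format used above.
data = {"deadline": "2024-01-31T12:00:00Z"}
data["deadline"] = datetime.datetime.strptime(data["deadline"], "%Y-%m-%dT%H:%M:%SZ")

print(data["deadline"], data["deadline"] > datetime.datetime(2024, 1, 1))
```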
1,684 | def my_mean_squared_error(y_true, y_pred):
return backend.mean(tf.math.squared_difference(y_pred, y_true), axis=-1)
module_my_mean_squared_error = my_mean_squared_error
@test_utils.run_v2_only | Identical to built-in `mean_squared_error`, added here as a custom
func.
| 10 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def my_mean_squared_error(y_true, y_pred):
return backend.mean(tf.math.squared_difference(y_pred, y_true), axis=-1)
module_my_mean_squared_error = my_mean_squared_error
@test_utils.run_v2_only
```
###Assistant : Identical to built-in `mean_squared_error`, added here as a custom
func.
|
1,685 | def _amd_predict_with_optimized_batchsizes(self, feed, batch_size):
if isinstance(feed, np.ndarray):
feed = [feed]
items = feed[0].shape[0]
done_items = 0
results = []
while done_items < items:
if batch_size < 4: # Not much difference in BS < 4
batch_size = 1
batch_items = ((items - done_items) // batch_size) * batch_size
if batch_items:
pred_data = [x[done_items:done_items + batch_items] for x in feed]
pred = self._model.predict(pred_data, batch_size=batch_size)
done_items += batch_items
results.append(pred)
batch_size //= 2
if isinstance(results[0], np.ndarray):
return np.concatenate(results)
return [np.concatenate(x) for x in zip(*results)]
 | Minimizes the number of kernels to be compiled when using the ``amd`` backend with
varying batch sizes while trying to keep the batchsize as high as possible.
Parameters
----------
feed: numpy.ndarray or list
The feed to be provided to the model as input. This should be a ``numpy.ndarray``
for single inputs or a ``list`` of ``numpy.ndarray`` objects for multiple inputs.
batch_size: int
The upper batchsize to use.
| 67 | 80 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _amd_predict_with_optimized_batchsizes(self, feed, batch_size):
if isinstance(feed, np.ndarray):
feed = [feed]
items = feed[0].shape[0]
done_items = 0
results = []
while done_items < items:
if batch_size < 4: # Not much difference in BS < 4
batch_size = 1
batch_items = ((items - done_items) // batch_size) * batch_size
if batch_items:
pred_data = [x[done_items:done_items + batch_items] for x in feed]
pred = self._model.predict(pred_data, batch_size=batch_size)
done_items += batch_items
results.append(pred)
batch_size //= 2
if isinstance(results[0], np.ndarray):
return np.concatenate(results)
return [np.concatenate(x) for x in zip(*results)]
```
###Assistant : Minimizes the number of kernels to be compiled when using the ``amd`` backend with
varying batch sizes while trying to keep the batchsize as high as possible.
Parameters
----------
feed: numpy.ndarray or list
The feed to be provided to the model as input. This should be a ``numpy.ndarray``
for single inputs or a ``list`` of ``numpy.ndarray`` objects for multiple inputs.
batch_size: int
The upper batchsize to use.
|
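The batch-splitting arithmetic in `_amd_predict_with_optimized_batchsizes` can be isolated from Keras entirely; a sketch of just the scheduling logic (the `batch_schedule` helper is illustrative, not part of the original class):
```Python
def batch_schedule(items, batch_size):
    """Return (num_items, batch_size) chunks the way the method above splits work."""
    schedule = []
    done = 0
    while done < items:
        if batch_size < 4:  # little benefit below a batch size of 4
            batch_size = 1
        chunk = ((items - done) // batch_size) * batch_size
        if chunk:
            schedule.append((chunk, batch_size))
            done += chunk
        batch_size //= 2
    return schedule

print(batch_schedule(100, 16))  # [(96, 16), (4, 4)]
```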
1,686 | def set_positions(self, posA, posB):
if posA is not None:
self._posA_posB[0] = posA
if posB is not None:
self._posA_posB[1] = posB
self.stale = True
|
Set the start and end positions of the connecting path.
Parameters
----------
posA, posB : None, tuple
(x, y) coordinates of arrow tail and arrow head respectively. If
`None` use current value.
| 32 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def set_positions(self, posA, posB):
if posA is not None:
self._posA_posB[0] = posA
if posB is not None:
self._posA_posB[1] = posB
self.stale = True
```
###Assistant :
Set the start and end positions of the connecting path.
Parameters
----------
posA, posB : None, tuple
(x, y) coordinates of arrow tail and arrow head respectively. If
`None` use current value.
|
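A short usage sketch for `set_positions` on a Matplotlib `FancyArrowPatch`, assuming Matplotlib is installed; passing `None` keeps the current coordinate, as the docs above state:
```Python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
from matplotlib.patches import FancyArrowPatch

fig, ax = plt.subplots()
arrow = FancyArrowPatch((0.1, 0.1), (0.9, 0.9), arrowstyle="->", mutation_scale=20)
ax.add_patch(arrow)

# Move only the arrow head; None keeps the current tail position.
arrow.set_positions(None, (0.9, 0.5))
fig.savefig("arrow.png")
```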
1,687 | def test_orderby_percentile_with_many_fields_multiple_entities_with_missing_data(self):
for tag, value, numbers in (
("transaction", "/foo/", [10, 11, 12]),
("transaction", "/bar/", [4, 5, 6]),
):
for subvalue in numbers:
self.store_performance_metric(
name=TransactionMRI.MEASUREMENTS_LCP.value,
tags={tag: value},
value=subvalue,
)
response = self.get_success_response(
self.organization.slug,
field=[
f"p50({TransactionMetricKey.MEASUREMENTS_LCP.value})",
f"count_unique({TransactionMetricKey.USER.value})",
],
statsPeriod="1h",
interval="1h",
groupBy=["project_id", "transaction"],
orderBy=f"p50({TransactionMetricKey.MEASUREMENTS_LCP.value})",
useCase="performance",
)
groups = response.data["groups"]
assert len(groups) == 2
expected = [
("/bar/", 5.0, 5),
("/foo/", 11.0, 1),
]
for (expected_tag_value, expected_lcp_count, users), group in zip(expected, groups):
# With orderBy, you only get totals:
assert group["by"] == {"transaction": expected_tag_value, "project_id": self.project.id}
assert group["totals"] == {
f"count_unique({TransactionMetricKey.USER.value})": 0,
f"p50({TransactionMetricKey.MEASUREMENTS_LCP.value})": expected_lcp_count,
}
assert group["series"] == {
f"count_unique({TransactionMetricKey.USER.value})": [0],
f"p50({TransactionMetricKey.MEASUREMENTS_LCP.value})": [expected_lcp_count],
}
|
    Test that ensures that, when the transactions table has null values for some fields (i.e. fields
    with a different entity than the entity of the field in the order by), the table gets
populated accordingly
| 34 | 101 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_orderby_percentile_with_many_fields_multiple_entities_with_missing_data(self):
for tag, value, numbers in (
("transaction", "/foo/", [10, 11, 12]),
("transaction", "/bar/", [4, 5, 6]),
):
for subvalue in numbers:
self.store_performance_metric(
name=TransactionMRI.MEASUREMENTS_LCP.value,
tags={tag: value},
value=subvalue,
)
response = self.get_success_response(
self.organization.slug,
field=[
f"p50({TransactionMetricKey.MEASUREMENTS_LCP.value})",
f"count_unique({TransactionMetricKey.USER.value})",
],
statsPeriod="1h",
interval="1h",
groupBy=["project_id", "transaction"],
orderBy=f"p50({TransactionMetricKey.MEASUREMENTS_LCP.value})",
useCase="performance",
)
groups = response.data["groups"]
assert len(groups) == 2
expected = [
("/bar/", 5.0, 5),
("/foo/", 11.0, 1),
]
for (expected_tag_value, expected_lcp_count, users), group in zip(expected, groups):
# With orderBy, you only get totals:
assert group["by"] == {"transaction": expected_tag_value, "project_id": self.project.id}
assert group["totals"] == {
f"count_unique({TransactionMetricKey.USER.value})": 0,
f"p50({TransactionMetricKey.MEASUREMENTS_LCP.value})": expected_lcp_count,
}
assert group["series"] == {
f"count_unique({TransactionMetricKey.USER.value})": [0],
f"p50({TransactionMetricKey.MEASUREMENTS_LCP.value})": [expected_lcp_count],
}
```
###Assistant :
    Test that ensures that, when the transactions table has null values for some fields (i.e. fields
    with a different entity than the entity of the field in the order by), the table gets
populated accordingly
|
1,688 | def get_earning_components_max_benefits(employee, date, earning_component):
salary_structure = get_assigned_salary_structure(employee, date)
amount = frappe.db.sql(
,
salary_structure,
earning_component,
)
return amount if amount else 0
|
select amount
from `tabSalary Detail`
where parent = %s and is_flexible_benefit = 1
and salary_component = %s
order by name
| 20 | 21 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_earning_components_max_benefits(employee, date, earning_component):
salary_structure = get_assigned_salary_structure(employee, date)
amount = frappe.db.sql(
,
salary_structure,
earning_component,
)
return amount if amount else 0
```
###Assistant :
select amount
from `tabSalary Detail`
where parent = %s and is_flexible_benefit = 1
and salary_component = %s
order by name
|
1,689 | def get_input_mask_at(self, node_index):
inputs = self.get_input_at(node_index)
if isinstance(inputs, list):
return [getattr(x, "_keras_mask", None) for x in inputs]
else:
return getattr(inputs, "_keras_mask", None)
| Retrieves the input mask tensor(s) of a layer at a given node.
Args:
node_index: Integer, index of the node
from which to retrieve the attribute.
E.g. `node_index=0` will correspond to the
first time the layer was called.
Returns:
A mask tensor
(or list of tensors if the layer has multiple inputs).
| 51 | 22 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_input_mask_at(self, node_index):
inputs = self.get_input_at(node_index)
if isinstance(inputs, list):
return [getattr(x, "_keras_mask", None) for x in inputs]
else:
return getattr(inputs, "_keras_mask", None)
```
###Assistant : Retrieves the input mask tensor(s) of a layer at a given node.
Args:
node_index: Integer, index of the node
from which to retrieve the attribute.
E.g. `node_index=0` will correspond to the
first time the layer was called.
Returns:
A mask tensor
(or list of tensors if the layer has multiple inputs).
|
1,690 | def get_default_grpc_options():
return [
('grpc.max_send_message_length', -1),
('grpc.max_receive_message_length', -1),
]
|
Returns a list of default options used for creating grpc channels.
Documentation is here https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/grpc_types.h
:returns: list of tuples defining grpc parameters
| 22 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_default_grpc_options():
return [
('grpc.max_send_message_length', -1),
('grpc.max_receive_message_length', -1),
]
```
###Assistant :
Returns a list of default options used for creating grpc channels.
Documentation is here https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/grpc_types.h
:returns: list of tuples defining grpc parameters
|
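A brief sketch of how such an options list is typically consumed, assuming `grpcio` is installed; the target address is a placeholder:
```Python
import grpc

options = [
    ("grpc.max_send_message_length", -1),     # -1 removes the send size cap
    ("grpc.max_receive_message_length", -1),  # -1 removes the receive size cap
]

# The tuples are passed straight through when a channel is opened.
channel = grpc.insecure_channel("localhost:50051", options=options)
channel.close()
```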
1,691 | def _cancel_futures_kwargs(self):
if sys.version_info[:2] < (3, 9):
return {}
return dict(cancel_futures=True)
| Shim older Pythons that do not have Executor.shutdown(...cancel_futures=).
Remove this code when support for Python 3.8 is dropped.
| 18 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _cancel_futures_kwargs(self):
if sys.version_info[:2] < (3, 9):
return {}
return dict(cancel_futures=True)
```
###Assistant : Shim older Pythons that do not have Executor.shutdown(...cancel_futures=).
Remove this code when support for Python 3.8 is dropped.
|
1,692 | def _estimate_available_parallelism() -> int:
cur_pg = ray.util.get_current_placement_group()
return _estimate_avail_cpus(cur_pg)
| Estimates the available CPU parallelism for this Dataset in the cluster.
If we are currently in a placement group, take that into account. | 23 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _estimate_available_parallelism() -> int:
cur_pg = ray.util.get_current_placement_group()
return _estimate_avail_cpus(cur_pg)
```
###Assistant : Estimates the available CPU parallelism for this Dataset in the cluster.
If we are currently in a placement group, take that into account.
|
1,693 | def bin(num, max_bits=None):
ceiling = 2 ** (num).bit_length()
if num >= 0:
s = bltns.bin(num + ceiling).replace('1', '0', 1)
else:
s = bltns.bin(~num ^ (ceiling - 1) + ceiling)
sign = s[:3]
digits = s[3:]
if max_bits is not None:
if len(digits) < max_bits:
digits = (sign[-1] * max_bits + digits)[-max_bits:]
return "%s %s" % (sign, digits)
|
Like built-in bin(), except negative values are represented in
    twos-complement, and the leading bit always indicates sign
(0=positive, 1=negative).
>>> bin(10)
'0b0 1010'
>>> bin(~10) # ~10 is -11
'0b1 0101'
| 31 | 57 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def bin(num, max_bits=None):
ceiling = 2 ** (num).bit_length()
if num >= 0:
s = bltns.bin(num + ceiling).replace('1', '0', 1)
else:
s = bltns.bin(~num ^ (ceiling - 1) + ceiling)
sign = s[:3]
digits = s[3:]
if max_bits is not None:
if len(digits) < max_bits:
digits = (sign[-1] * max_bits + digits)[-max_bits:]
return "%s %s" % (sign, digits)
```
###Assistant :
Like built-in bin(), except negative values are represented in
    twos-complement, and the leading bit always indicates sign
(0=positive, 1=negative).
>>> bin(10)
'0b0 1010'
>>> bin(~10) # ~10 is -11
'0b1 0101'
|
1,694 | def axis0_safe_slice(X, mask, len_mask):
if len_mask != 0:
return X[safe_mask(X, mask), :]
return np.zeros(shape=(0, X.shape[1]))
| Return a mask which is safer to use on X than safe_mask.
This mask is safer than safe_mask since it returns an
empty array, when a sparse matrix is sliced with a boolean mask
with all False, instead of raising an unhelpful error in older
versions of SciPy.
See: https://github.com/scipy/scipy/issues/5361
Also note that we can avoid doing the dot product by checking if
the len_mask is not zero in _huber_loss_and_gradient but this
is not going to be the bottleneck, since the number of outliers
and non_outliers are typically non-zero and it makes the code
tougher to follow.
Parameters
----------
X : {array-like, sparse matrix}
Data on which to apply mask.
mask : ndarray
Mask to be used on X.
len_mask : int
The length of the mask.
Returns
-------
mask : ndarray
Array that is safe to use on X.
| 140 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def axis0_safe_slice(X, mask, len_mask):
if len_mask != 0:
return X[safe_mask(X, mask), :]
return np.zeros(shape=(0, X.shape[1]))
```
###Assistant : Return a mask which is safer to use on X than safe_mask.
This mask is safer than safe_mask since it returns an
empty array, when a sparse matrix is sliced with a boolean mask
with all False, instead of raising an unhelpful error in older
versions of SciPy.
See: https://github.com/scipy/scipy/issues/5361
Also note that we can avoid doing the dot product by checking if
the len_mask is not zero in _huber_loss_and_gradient but this
is not going to be the bottleneck, since the number of outliers
and non_outliers are typically non-zero and it makes the code
tougher to follow.
Parameters
----------
X : {array-like, sparse matrix}
Data on which to apply mask.
mask : ndarray
Mask to be used on X.
len_mask : int
The length of the mask.
Returns
-------
mask : ndarray
Array that is safe to use on X.
|
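A dense-array sketch of the contract described for `axis0_safe_slice`: an all-False mask yields an explicit empty `(0, n_features)` array (plain NumPy stand-in, not the scikit-learn helper itself):
```Python
import numpy as np

X = np.arange(12.0).reshape(4, 3)
mask = np.zeros(4, dtype=bool)  # no rows selected

# Same contract: an all-False mask yields an explicit empty (0, n_features)
# array rather than relying on the masked slice.
rows = X[mask, :] if mask.any() else np.zeros((0, X.shape[1]))
print(rows.shape)  # (0, 3)
```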
1,695 | def _most_frequent(array, extra_value, n_repeat):
# Compute the most frequent value in array only
if array.size > 0:
if array.dtype == object:
# scipy.stats.mode is slow with object dtype array.
# Python Counter is more efficient
counter = Counter(array)
most_frequent_count = counter.most_common(1)[0][1]
# tie breaking similarly to scipy.stats.mode
most_frequent_value = min(
value
for value, count in counter.items()
if count == most_frequent_count
)
else:
mode = _mode(array)
most_frequent_value = mode[0][0]
most_frequent_count = mode[1][0]
else:
most_frequent_value = 0
most_frequent_count = 0
# Compare to array + [extra_value] * n_repeat
if most_frequent_count == 0 and n_repeat == 0:
return np.nan
elif most_frequent_count < n_repeat:
return extra_value
elif most_frequent_count > n_repeat:
return most_frequent_value
elif most_frequent_count == n_repeat:
# tie breaking similarly to scipy.stats.mode
return min(most_frequent_value, extra_value)
| Compute the most frequent value in a 1d array extended with
    [extra_value] * n_repeat, where extra_value is assumed not to be part
of the array. | 25 | 121 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _most_frequent(array, extra_value, n_repeat):
# Compute the most frequent value in array only
if array.size > 0:
if array.dtype == object:
# scipy.stats.mode is slow with object dtype array.
# Python Counter is more efficient
counter = Counter(array)
most_frequent_count = counter.most_common(1)[0][1]
# tie breaking similarly to scipy.stats.mode
most_frequent_value = min(
value
for value, count in counter.items()
if count == most_frequent_count
)
else:
mode = _mode(array)
most_frequent_value = mode[0][0]
most_frequent_count = mode[1][0]
else:
most_frequent_value = 0
most_frequent_count = 0
# Compare to array + [extra_value] * n_repeat
if most_frequent_count == 0 and n_repeat == 0:
return np.nan
elif most_frequent_count < n_repeat:
return extra_value
elif most_frequent_count > n_repeat:
return most_frequent_value
elif most_frequent_count == n_repeat:
# tie breaking similarly to scipy.stats.mode
return min(most_frequent_value, extra_value)
```
###Assistant : Compute the most frequent value in a 1d array extended with
[extra_value] * n_repeat, where extra_value is assumed to be not part
of the array.
|
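For `_most_frequent`, a self-contained sketch of the same tie-breaking logic using only `collections.Counter` and numpy; `most_frequent_with_extra` is a hypothetical stand-in for the private scikit-learn helper and skips its `_mode` fast path:
```Python
from collections import Counter
import numpy as np

def most_frequent_with_extra(array, extra_value, n_repeat):
    # Most frequent value of `array` extended (conceptually) with
    # [extra_value] * n_repeat; ties broken by the smallest value,
    # as scipy.stats.mode does.
    if array.size > 0:
        counter = Counter(array.tolist())
        top_count = counter.most_common(1)[0][1]
        top_value = min(v for v, c in counter.items() if c == top_count)
    else:
        top_value, top_count = 0, 0

    if top_count == 0 and n_repeat == 0:
        return np.nan
    if top_count < n_repeat:
        return extra_value
    if top_count > n_repeat:
        return top_value
    return min(top_value, extra_value)   # exact tie

print(most_frequent_with_extra(np.array([1, 2, 2, 3]), extra_value=5, n_repeat=1))  # 2
print(most_frequent_with_extra(np.array([1, 2]), extra_value=0, n_repeat=3))        # 0
print(most_frequent_with_extra(np.array([4, 4]), extra_value=3, n_repeat=2))        # 3 (tie, smaller wins)
```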
1,696 | def _cov(X, shrinkage=None, covariance_estimator=None):
if covariance_estimator is None:
shrinkage = "empirical" if shrinkage is None else shrinkage
if isinstance(shrinkage, str):
if shrinkage == "auto":
sc = StandardScaler() # standardize features
X = sc.fit_transform(X)
s = ledoit_wolf(X)[0]
# rescale
s = sc.scale_[:, np.newaxis] * s * sc.scale_[np.newaxis, :]
elif shrinkage == "empirical":
s = empirical_covariance(X)
else:
raise ValueError("unknown shrinkage parameter")
elif isinstance(shrinkage, Real):
if shrinkage < 0 or shrinkage > 1:
raise ValueError("shrinkage parameter must be between 0 and 1")
s = shrunk_covariance(empirical_covariance(X), shrinkage)
else:
raise TypeError("shrinkage must be a float or a string")
else:
if shrinkage is not None and shrinkage != 0:
raise ValueError(
"covariance_estimator and shrinkage parameters "
"are not None. Only one of the two can be set."
)
covariance_estimator.fit(X)
if not hasattr(covariance_estimator, "covariance_"):
raise ValueError(
"%s does not have a covariance_ attribute"
% covariance_estimator.__class__.__name__
)
s = covariance_estimator.covariance_
return s
| Estimate covariance matrix (using optional covariance_estimator).
Parameters
----------
X : array-like of shape (n_samples, n_features)
Input data.
shrinkage : {'empirical', 'auto'} or float, default=None
Shrinkage parameter, possible values:
- None or 'empirical': no shrinkage (default).
- 'auto': automatic shrinkage using the Ledoit-Wolf lemma.
- float between 0 and 1: fixed shrinkage parameter.
Shrinkage parameter is ignored if `covariance_estimator`
is not None.
covariance_estimator : estimator, default=None
If not None, `covariance_estimator` is used to estimate
the covariance matrices instead of relying on the empirical
covariance estimator (with potential shrinkage).
The object should have a fit method and a ``covariance_`` attribute
like the estimators in :mod:`sklearn.covariance`.
If None, the shrinkage parameter drives the estimate.
.. versionadded:: 0.24
Returns
-------
s : ndarray of shape (n_features, n_features)
Estimated covariance matrix.
| 126 | 144 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _cov(X, shrinkage=None, covariance_estimator=None):
if covariance_estimator is None:
shrinkage = "empirical" if shrinkage is None else shrinkage
if isinstance(shrinkage, str):
if shrinkage == "auto":
sc = StandardScaler() # standardize features
X = sc.fit_transform(X)
s = ledoit_wolf(X)[0]
# rescale
s = sc.scale_[:, np.newaxis] * s * sc.scale_[np.newaxis, :]
elif shrinkage == "empirical":
s = empirical_covariance(X)
else:
raise ValueError("unknown shrinkage parameter")
elif isinstance(shrinkage, Real):
if shrinkage < 0 or shrinkage > 1:
raise ValueError("shrinkage parameter must be between 0 and 1")
s = shrunk_covariance(empirical_covariance(X), shrinkage)
else:
raise TypeError("shrinkage must be a float or a string")
else:
if shrinkage is not None and shrinkage != 0:
raise ValueError(
"covariance_estimator and shrinkage parameters "
"are not None. Only one of the two can be set."
)
covariance_estimator.fit(X)
if not hasattr(covariance_estimator, "covariance_"):
raise ValueError(
"%s does not have a covariance_ attribute"
% covariance_estimator.__class__.__name__
)
s = covariance_estimator.covariance_
return s
```
###Assistant : Estimate covariance matrix (using optional covariance_estimator).
Parameters
----------
X : array-like of shape (n_samples, n_features)
Input data.
shrinkage : {'empirical', 'auto'} or float, default=None
Shrinkage parameter, possible values:
- None or 'empirical': no shrinkage (default).
- 'auto': automatic shrinkage using the Ledoit-Wolf lemma.
- float between 0 and 1: fixed shrinkage parameter.
Shrinkage parameter is ignored if `covariance_estimator`
is not None.
covariance_estimator : estimator, default=None
If not None, `covariance_estimator` is used to estimate
the covariance matrices instead of relying on the empirical
covariance estimator (with potential shrinkage).
The object should have a fit method and a ``covariance_`` attribute
like the estimators in :mod:`sklearn.covariance`.
If None, the shrinkage parameter drives the estimate.
.. versionadded:: 0.24
Returns
-------
s : ndarray of shape (n_features, n_features)
Estimated covariance matrix.
|
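`_cov` is a private helper, but each of its branches can be reproduced with public `sklearn.covariance` estimators. A sketch, assuming scikit-learn is installed; the `OAS` estimator stands in for any object exposing `fit` and a `covariance_` attribute:
```Python
import numpy as np
from sklearn.covariance import empirical_covariance, ledoit_wolf, shrunk_covariance, OAS
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X = rng.randn(100, 3)

# shrinkage=None / 'empirical': plain empirical covariance
s_emp = empirical_covariance(X)

# shrinkage='auto': Ledoit-Wolf on standardized features, rescaled back
sc = StandardScaler().fit(X)
s_auto = ledoit_wolf(sc.transform(X))[0]
s_auto = sc.scale_[:, np.newaxis] * s_auto * sc.scale_[np.newaxis, :]

# fixed float shrinkage in [0, 1]
s_fixed = shrunk_covariance(empirical_covariance(X), shrinkage=0.2)

# covariance_estimator: any fitted object with a covariance_ attribute
s_est = OAS().fit(X).covariance_

print(s_emp.shape, s_auto.shape, s_fixed.shape, s_est.shape)
```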
1,697 | def from_key_val_list(value):
if value is None:
return None
if isinstance(value, (str, bytes, bool, int)):
raise ValueError("cannot encode objects that are not 2-tuples")
return OrderedDict(value)
| Take an object and test to see if it can be represented as a
dictionary. If it can be represented as such, return an
OrderedDict, e.g.,
::
>>> from_key_val_list([('key', 'val')])
OrderedDict([('key', 'val')])
>>> from_key_val_list('string')
Traceback (most recent call last):
...
ValueError: cannot encode objects that are not 2-tuples
>>> from_key_val_list({'key': 'val'})
OrderedDict([('key', 'val')])
:rtype: OrderedDict
| 56 | 24 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def from_key_val_list(value):
if value is None:
return None
if isinstance(value, (str, bytes, bool, int)):
raise ValueError("cannot encode objects that are not 2-tuples")
return OrderedDict(value)
```
###Assistant : Take an object and test to see if it can be represented as a
dictionary. If it can be represented as such, return an
OrderedDict, e.g.,
::
>>> from_key_val_list([('key', 'val')])
OrderedDict([('key', 'val')])
>>> from_key_val_list('string')
Traceback (most recent call last):
...
ValueError: cannot encode objects that are not 2-tuples
>>> from_key_val_list({'key': 'val'})
OrderedDict([('key', 'val')])
:rtype: OrderedDict
|
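A short usage sketch for `from_key_val_list`, assuming it is importable as `requests.utils.from_key_val_list` (it is an internal utility, so the import path may change between releases):
```Python
from requests.utils import from_key_val_list  # internal helper; path may vary

print(from_key_val_list([("Accept", "application/json"), ("key", "val")]))
# OrderedDict([('Accept', 'application/json'), ('key', 'val')])
print(from_key_val_list({"key": "val"}))   # OrderedDict([('key', 'val')])
print(from_key_val_list(None))             # None

try:
    from_key_val_list("not-a-mapping")
except ValueError as exc:
    print(exc)                             # cannot encode objects that are not 2-tuples
```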
1,698 | def load_lexer_from_file(filename, lexername="CustomLexer", **options):
try:
# This empty dict will contain the namespace for the exec'd file
custom_namespace = {}
with open(filename, 'rb') as f:
exec(f.read(), custom_namespace)
# Retrieve the class `lexername` from that namespace
if lexername not in custom_namespace:
raise ClassNotFound('no valid %s class found in %s' %
(lexername, filename))
lexer_class = custom_namespace[lexername]
# And finally instantiate it with the options
return lexer_class(**options)
except OSError as err:
raise ClassNotFound('cannot read %s: %s' % (filename, err))
except ClassNotFound:
raise
except Exception as err:
raise ClassNotFound('error when loading custom lexer: %s' % err)
| Load a lexer from a file.
This method expects a file located relative to the current working
directory, which contains a Lexer class. By default, it expects the
Lexer to be named CustomLexer; you can specify your own class name
as the second argument to this function.
Users should be very careful with the input, because this method
is equivalent to running eval on the input file.
Raises ClassNotFound if there are any problems importing the Lexer.
.. versionadded:: 2.2
| 80 | 92 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def load_lexer_from_file(filename, lexername="CustomLexer", **options):
try:
# This empty dict will contain the namespace for the exec'd file
custom_namespace = {}
with open(filename, 'rb') as f:
exec(f.read(), custom_namespace)
# Retrieve the class `lexername` from that namespace
if lexername not in custom_namespace:
raise ClassNotFound('no valid %s class found in %s' %
(lexername, filename))
lexer_class = custom_namespace[lexername]
# And finally instantiate it with the options
return lexer_class(**options)
except OSError as err:
raise ClassNotFound('cannot read %s: %s' % (filename, err))
except ClassNotFound:
raise
except Exception as err:
raise ClassNotFound('error when loading custom lexer: %s' % err)
```
###Assistant : Load a lexer from a file.
This method expects a file located relative to the current working
directory, which contains a Lexer class. By default, it expects the
Lexer to be named CustomLexer; you can specify your own class name
as the second argument to this function.
Users should be very careful with the input, because this method
is equivalent to running eval on the input file.
Raises ClassNotFound if there are any problems importing the Lexer.
.. versionadded:: 2.2
|
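A usage sketch for `load_lexer_from_file`, assuming Pygments >= 2.2 and that the function is importable from `pygments.lexers`; the custom lexer module is written to disk here only to keep the example self-contained, and the file name is hypothetical:
```Python
from pathlib import Path
from pygments.lexers import load_lexer_from_file

# A minimal custom lexer module for the demo (hypothetical file name).
Path("mylexer.py").write_text(
    "from pygments.lexers.python import PythonLexer\n"
    "class CustomLexer(PythonLexer):\n"
    "    name = 'My custom Python lexer'\n"
)

lexer = load_lexer_from_file("mylexer.py")                         # default class name
other = load_lexer_from_file("mylexer.py", "CustomLexer", stripnl=False)
print(lexer.name)      # 'My custom Python lexer'
print(other.stripnl)   # False -- **options are forwarded to the lexer constructor
```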
1,699 | def get_price(item_code, price_list, customer_group, company, qty=1):
from erpnext.e_commerce.shopping_cart.cart import get_party
template_item_code = frappe.db.get_value("Item", item_code, "variant_of")
if price_list:
price = frappe.get_all("Item Price", fields=["price_list_rate", "currency"],
filters={"price_list": price_list, "item_code": item_code})
if template_item_code and not price:
price = frappe.get_all("Item Price", fields=["price_list_rate", "currency"],
filters={"price_list": price_list, "item_code": template_item_code})
if price:
party = get_party()
pricing_rule_dict = frappe._dict({
"item_code": item_code,
"qty": qty,
"stock_qty": qty,
"transaction_type": "selling",
"price_list": price_list,
"customer_group": customer_group,
"company": company,
"conversion_rate": 1,
"for_shopping_cart": True,
"currency": frappe.db.get_value("Price List", price_list, "currency")
})
if party and party.doctype == "Customer":
pricing_rule_dict.update({"customer": party.name})
pricing_rule = get_pricing_rule_for_item(pricing_rule_dict)
price_obj = price[0]
if pricing_rule:
# price without any rules applied
mrp = price_obj.price_list_rate or 0
if pricing_rule.pricing_rule_for == "Discount Percentage":
price_obj.discount_percent = pricing_rule.discount_percentage
price_obj.formatted_discount_percent = str(flt(pricing_rule.discount_percentage, 0)) + "%"
price_obj.price_list_rate = flt(price_obj.price_list_rate * (1.0 - (flt(pricing_rule.discount_percentage) / 100.0)))
if pricing_rule.pricing_rule_for == "Rate":
rate_discount = flt(mrp) - flt(pricing_rule.price_list_rate)
if rate_discount > 0:
price_obj.formatted_discount_rate = fmt_money(rate_discount, currency=price_obj["currency"])
price_obj.price_list_rate = pricing_rule.price_list_rate or 0
if price_obj:
price_obj["formatted_price"] = fmt_money(price_obj["price_list_rate"], currency=price_obj["currency"])
if mrp != price_obj["price_list_rate"]:
price_obj["formatted_mrp"] = fmt_money(mrp, currency=price_obj["currency"])
price_obj["currency_symbol"] = not cint(frappe.db.get_default("hide_currency_symbol")) \
and (frappe.db.get_value("Currency", price_obj.currency, "symbol", cache=True) or price_obj.currency) \
or ""
uom_conversion_factor = frappe.db.sql("""select C.conversion_factor
from `tabUOM Conversion Detail` C
inner join `tabItem` I on C.parent = I.name and C.uom = I.sales_uom
where I.name = %s""", item_code)
uom_conversion_factor = uom_conversion_factor[0][0] if uom_conversion_factor else 1
price_obj["formatted_price_sales_uom"] = fmt_money(price_obj["price_list_rate"] * uom_conversion_factor, currency=price_obj["currency"])
if not price_obj["price_list_rate"]:
price_obj["price_list_rate"] = 0
if not price_obj["currency"]:
price_obj["currency"] = ""
if not price_obj["formatted_price"]:
price_obj["formatted_price"], price_obj["formatted_mrp"] = "", ""
return price_obj
| select C.conversion_factor
from `tabUOM Conversion Detail` C
inner join `tabItem` I on C.parent = I.name and C.uom = I.sales_uom
where I.name = %s | 23 | 214 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_price(item_code, price_list, customer_group, company, qty=1):
from erpnext.e_commerce.shopping_cart.cart import get_party
template_item_code = frappe.db.get_value("Item", item_code, "variant_of")
if price_list:
price = frappe.get_all("Item Price", fields=["price_list_rate", "currency"],
filters={"price_list": price_list, "item_code": item_code})
if template_item_code and not price:
price = frappe.get_all("Item Price", fields=["price_list_rate", "currency"],
filters={"price_list": price_list, "item_code": template_item_code})
if price:
party = get_party()
pricing_rule_dict = frappe._dict({
"item_code": item_code,
"qty": qty,
"stock_qty": qty,
"transaction_type": "selling",
"price_list": price_list,
"customer_group": customer_group,
"company": company,
"conversion_rate": 1,
"for_shopping_cart": True,
"currency": frappe.db.get_value("Price List", price_list, "currency")
})
if party and party.doctype == "Customer":
pricing_rule_dict.update({"customer": party.name})
pricing_rule = get_pricing_rule_for_item(pricing_rule_dict)
price_obj = price[0]
if pricing_rule:
# price without any rules applied
mrp = price_obj.price_list_rate or 0
if pricing_rule.pricing_rule_for == "Discount Percentage":
price_obj.discount_percent = pricing_rule.discount_percentage
price_obj.formatted_discount_percent = str(flt(pricing_rule.discount_percentage, 0)) + "%"
price_obj.price_list_rate = flt(price_obj.price_list_rate * (1.0 - (flt(pricing_rule.discount_percentage) / 100.0)))
if pricing_rule.pricing_rule_for == "Rate":
rate_discount = flt(mrp) - flt(pricing_rule.price_list_rate)
if rate_discount > 0:
price_obj.formatted_discount_rate = fmt_money(rate_discount, currency=price_obj["currency"])
price_obj.price_list_rate = pricing_rule.price_list_rate or 0
if price_obj:
price_obj["formatted_price"] = fmt_money(price_obj["price_list_rate"], currency=price_obj["currency"])
if mrp != price_obj["price_list_rate"]:
price_obj["formatted_mrp"] = fmt_money(mrp, currency=price_obj["currency"])
price_obj["currency_symbol"] = not cint(frappe.db.get_default("hide_currency_symbol")) \
and (frappe.db.get_value("Currency", price_obj.currency, "symbol", cache=True) or price_obj.currency) \
or ""
uom_conversion_factor = frappe.db.sql("""select C.conversion_factor
from `tabUOM Conversion Detail` C
inner join `tabItem` I on C.parent = I.name and C.uom = I.sales_uom
where I.name = %s""", item_code)
uom_conversion_factor = uom_conversion_factor[0][0] if uom_conversion_factor else 1
price_obj["formatted_price_sales_uom"] = fmt_money(price_obj["price_list_rate"] * uom_conversion_factor, currency=price_obj["currency"])
if not price_obj["price_list_rate"]:
price_obj["price_list_rate"] = 0
if not price_obj["currency"]:
price_obj["currency"] = ""
if not price_obj["formatted_price"]:
price_obj["formatted_price"], price_obj["formatted_mrp"] = "", ""
return price_obj
```
###Assistant : select C.conversion_factor
from `tabUOM Conversion Detail` C
inner join `tabItem` I on C.parent = I.name and C.uom = I.sales_uom
where I.name = %s
|
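`get_price` needs a running Frappe/ERPNext site (database, Price List and Pricing Rule records), so it cannot be exercised standalone. Below is a framework-free sketch of just the two discount branches it applies; `apply_pricing_rule_sketch` and its parameter names are hypothetical illustrations, not ERPNext API:
```Python
def apply_pricing_rule_sketch(price_list_rate, pricing_rule_for,
                              discount_percentage=0.0, rule_rate=0.0):
    # Mirror of the two branches above: percentage discount vs. fixed rate.
    mrp = price_list_rate or 0
    if pricing_rule_for == "Discount Percentage":
        discounted = mrp * (1.0 - discount_percentage / 100.0)
        return round(discounted, 2), f"{discount_percentage:g}%"
    if pricing_rule_for == "Rate":
        rate_discount = mrp - rule_rate
        return (rule_rate or 0), (rate_discount if rate_discount > 0 else 0)
    return mrp, None

print(apply_pricing_rule_sketch(200.0, "Discount Percentage", discount_percentage=10))
# (180.0, '10%')
print(apply_pricing_rule_sketch(200.0, "Rate", rule_rate=150.0))
# (150.0, 50.0)
```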