problem_id stringlengths 18-22 | source stringclasses 1 value | task_type stringclasses 1 value | in_source_id stringlengths 13-58 | prompt stringlengths 1.1k-25.4k | golden_diff stringlengths 145-5.13k | verification_info stringlengths 582-39.1k | num_tokens int64 271-4.1k | num_tokens_diff int64 47-1.02k |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_54621 | rasdani/github-patches | git_diff | ibis-project__ibis-4790 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
docs: infinite build when using `mkdocs serve`
It appears that when using `mkdocs serve` the docs are repeatedly rebuilt to no end.
I suspect there's a file that we're generating (maybe the operation matrix?) that is being considered new and triggering a rebuild.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gen_matrix.py`
Content:
```
1 from pathlib import Path
2
3 import pandas as pd
4 import tomli
5
6 import ibis
7 import ibis.expr.operations as ops
8
9
10 def get_backends():
11 pyproject = tomli.loads(Path("pyproject.toml").read_text())
12 backends = pyproject["tool"]["poetry"]["plugins"]["ibis.backends"]
13 del backends["spark"]
14 return [(backend, getattr(ibis, backend)) for backend in sorted(backends.keys())]
15
16
17 def get_leaf_classes(op):
18 for child_class in op.__subclasses__():
19 if not child_class.__subclasses__():
20 yield child_class
21 else:
22 yield from get_leaf_classes(child_class)
23
24
25 EXCLUDED_OPS = {
26 # Never translates into anything
27 ops.UnresolvedExistsSubquery,
28 ops.UnresolvedNotExistsSubquery,
29 ops.ScalarParameter,
30 }
31
32 INCLUDED_OPS = {
33 # Parent class of MultiQuantile so it's ignored by `get_backends()`
34 ops.Quantile,
35 }
36
37
38 ICONS = {
39 True: ":material-check-decagram:{ .verified }",
40 False: ":material-cancel:{ .cancel }",
41 }
42
43
44 def main():
45 possible_ops = (
46 frozenset(get_leaf_classes(ops.Value)) | INCLUDED_OPS
47 ) - EXCLUDED_OPS
48
49 support = {"operation": [f"`{op.__name__}`" for op in possible_ops]}
50 support.update(
51 (name, list(map(backend.has_operation, possible_ops)))
52 for name, backend in get_backends()
53 )
54
55 df = pd.DataFrame(support).set_index("operation").sort_index()
56
57 counts = df.sum().sort_values(ascending=False)
58 num_ops = len(possible_ops)
59 coverage = (
60 counts.map(lambda n: f"_{n} ({round(100 * n / num_ops)}%)_")
61 .to_frame(name="**API Coverage**")
62 .T
63 )
64
65 ops_table = df.loc[:, counts.index].replace(ICONS)
66 table = pd.concat([coverage, ops_table])
67 dst = Path(__file__).parent.joinpath(
68 "docs",
69 "backends",
70 "support_matrix.csv",
71 )
72 table.to_csv(dst, index_label="Backends")
73
74
75 main()
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gen_matrix.py b/gen_matrix.py
--- a/gen_matrix.py
+++ b/gen_matrix.py
@@ -69,7 +69,15 @@
"backends",
"support_matrix.csv",
)
- table.to_csv(dst, index_label="Backends")
+
+ if dst.exists():
+ old = pd.read_csv(dst, index_col="Backends")
+ should_write = not old.equals(table)
+ else:
+ should_write = True
+
+ if should_write:
+ table.to_csv(dst, index_label="Backends")
main()
|
{"golden_diff": "diff --git a/gen_matrix.py b/gen_matrix.py\n--- a/gen_matrix.py\n+++ b/gen_matrix.py\n@@ -69,7 +69,15 @@\n \"backends\",\n \"support_matrix.csv\",\n )\n- table.to_csv(dst, index_label=\"Backends\")\n+\n+ if dst.exists():\n+ old = pd.read_csv(dst, index_col=\"Backends\")\n+ should_write = not old.equals(table)\n+ else:\n+ should_write = True\n+\n+ if should_write:\n+ table.to_csv(dst, index_label=\"Backends\")\n \n \n main()\n", "issue": "docs: infinite build when using `mkdocs serve`\nIt appears that when using `mkdocs serve` the docs are repeatedly rebuilt to no end.\r\n\r\nI suspect there's a file that we're generating (maybe the operation matrix?) that is being considered new and triggering a rebuild.\n", "before_files": [{"content": "from pathlib import Path\n\nimport pandas as pd\nimport tomli\n\nimport ibis\nimport ibis.expr.operations as ops\n\n\ndef get_backends():\n pyproject = tomli.loads(Path(\"pyproject.toml\").read_text())\n backends = pyproject[\"tool\"][\"poetry\"][\"plugins\"][\"ibis.backends\"]\n del backends[\"spark\"]\n return [(backend, getattr(ibis, backend)) for backend in sorted(backends.keys())]\n\n\ndef get_leaf_classes(op):\n for child_class in op.__subclasses__():\n if not child_class.__subclasses__():\n yield child_class\n else:\n yield from get_leaf_classes(child_class)\n\n\nEXCLUDED_OPS = {\n # Never translates into anything\n ops.UnresolvedExistsSubquery,\n ops.UnresolvedNotExistsSubquery,\n ops.ScalarParameter,\n}\n\nINCLUDED_OPS = {\n # Parent class of MultiQuantile so it's ignored by `get_backends()`\n ops.Quantile,\n}\n\n\nICONS = {\n True: \":material-check-decagram:{ .verified }\",\n False: \":material-cancel:{ .cancel }\",\n}\n\n\ndef main():\n possible_ops = (\n frozenset(get_leaf_classes(ops.Value)) | INCLUDED_OPS\n ) - EXCLUDED_OPS\n\n support = {\"operation\": [f\"`{op.__name__}`\" for op in possible_ops]}\n support.update(\n (name, list(map(backend.has_operation, possible_ops)))\n for name, backend in get_backends()\n )\n\n df = pd.DataFrame(support).set_index(\"operation\").sort_index()\n\n counts = df.sum().sort_values(ascending=False)\n num_ops = len(possible_ops)\n coverage = (\n counts.map(lambda n: f\"_{n} ({round(100 * n / num_ops)}%)_\")\n .to_frame(name=\"**API Coverage**\")\n .T\n )\n\n ops_table = df.loc[:, counts.index].replace(ICONS)\n table = pd.concat([coverage, ops_table])\n dst = Path(__file__).parent.joinpath(\n \"docs\",\n \"backends\",\n \"support_matrix.csv\",\n )\n table.to_csv(dst, index_label=\"Backends\")\n\n\nmain()\n", "path": "gen_matrix.py"}], "after_files": [{"content": "from pathlib import Path\n\nimport pandas as pd\nimport tomli\n\nimport ibis\nimport ibis.expr.operations as ops\n\n\ndef get_backends():\n pyproject = tomli.loads(Path(\"pyproject.toml\").read_text())\n backends = pyproject[\"tool\"][\"poetry\"][\"plugins\"][\"ibis.backends\"]\n del backends[\"spark\"]\n return [(backend, getattr(ibis, backend)) for backend in sorted(backends.keys())]\n\n\ndef get_leaf_classes(op):\n for child_class in op.__subclasses__():\n if not child_class.__subclasses__():\n yield child_class\n else:\n yield from get_leaf_classes(child_class)\n\n\nEXCLUDED_OPS = {\n # Never translates into anything\n ops.UnresolvedExistsSubquery,\n ops.UnresolvedNotExistsSubquery,\n ops.ScalarParameter,\n}\n\nINCLUDED_OPS = {\n # Parent class of MultiQuantile so it's ignored by `get_backends()`\n ops.Quantile,\n}\n\n\nICONS = {\n True: \":material-check-decagram:{ .verified }\",\n False: \":material-cancel:{ .cancel 
}\",\n}\n\n\ndef main():\n possible_ops = (\n frozenset(get_leaf_classes(ops.Value)) | INCLUDED_OPS\n ) - EXCLUDED_OPS\n\n support = {\"operation\": [f\"`{op.__name__}`\" for op in possible_ops]}\n support.update(\n (name, list(map(backend.has_operation, possible_ops)))\n for name, backend in get_backends()\n )\n\n df = pd.DataFrame(support).set_index(\"operation\").sort_index()\n\n counts = df.sum().sort_values(ascending=False)\n num_ops = len(possible_ops)\n coverage = (\n counts.map(lambda n: f\"_{n} ({round(100 * n / num_ops)}%)_\")\n .to_frame(name=\"**API Coverage**\")\n .T\n )\n\n ops_table = df.loc[:, counts.index].replace(ICONS)\n table = pd.concat([coverage, ops_table])\n dst = Path(__file__).parent.joinpath(\n \"docs\",\n \"backends\",\n \"support_matrix.csv\",\n )\n\n if dst.exists():\n old = pd.read_csv(dst, index_col=\"Backends\")\n should_write = not old.equals(table)\n else:\n should_write = True\n\n if should_write:\n table.to_csv(dst, index_label=\"Backends\")\n\n\nmain()\n", "path": "gen_matrix.py"}]}
| 945 | 129 |
gh_patches_debug_36560 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-2099 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: day_switch_time does not seem to be working correctly
### I Have A Problem With:
The integration in general
### What's Your Problem
I have day switch time set to `20:00` but the day switches at `01:19`
<img width="228" alt="Screenshot 2024-05-08 at 07 24 31" src="https://github.com/mampfes/hacs_waste_collection_schedule/assets/49797976/c84d1086-1fd8-462a-a206-77ed846838a0">
config:
```
waste_collection_schedule:
sources:
- name: maldon_gov_uk
args:
uprn: "uprn"
customize:
- type: Refuse Collection
- type: Recycling
day_switch_time: "20:00"
fetch_time: 01:00
```
### Source (if relevant)
Maldon District Council / maldon.gov.uk
### Logs
_No response_
### Relevant Configuration
```YAML
waste_collection_schedule:
sources:
- name: maldon_gov_uk
args:
uprn: "uprn"
customize:
- type: Refuse Collection
- type: Recycling
day_switch_time: "20:00"
fetch_time: 01:00
```
### Checklist Source Error
- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [ ] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/maldon_gov_uk.py`
Content:
```
1 import re
2 from datetime import datetime
3
4 import requests
5 from bs4 import BeautifulSoup
6 from waste_collection_schedule import Collection
7
8 TITLE = "Maldon District Council"
9
10 DESCRIPTION = ("Source for www.maldon.gov.uk services for Maldon, UK")
11
12 URL = "https://www.maldon.gov.uk/"
13
14 TEST_CASES = {
15 "test 1": {"uprn": "200000917928"},
16 "test 2": {"uprn": "100091258454"},
17 }
18
19 API_URL = "https://maldon.suez.co.uk/maldon/ServiceSummary?uprn="
20
21 ICON_MAP = {
22 "Refuse Collection": "mdi:trash-can",
23 "Recycling": "mdi:recycle",
24 "Green": "mdi:leaf",
25 "Food": "mdi:food-apple",
26 }
27
28 class Source:
29 def __init__(self, uprn: str):
30 self._uprn = uprn
31
32 def _extract_future_date(self, text):
33 # parse both dates and return the future one
34 dates = re.findall(r'\d{2}/\d{2}/\d{4}', text)
35 dates = [datetime.strptime(date, '%d/%m/%Y').date() for date in dates]
36 return max(dates)
37
38 def fetch(self):
39 entries = []
40
41 session = requests.Session()
42
43 r = session.get(f"{API_URL}{self._uprn}")
44 soup = BeautifulSoup(r.text, features="html.parser")
45 collections = soup.find_all("div", {"class": "panel-default"})
46
47 if not collections:
48 raise Exception("No collections found for given UPRN")
49
50 for collection in collections:
51 # check is a collection row
52 title = collection.find("h2", {"class": "panel-title"}).text.strip()
53
54 if title == "Other Services" or "You are not currently subscribed" in collection.text:
55 continue
56
57 entries.append(
58 Collection(
59 date=self._extract_future_date(collection.text),
60 t=title,
61 icon=ICON_MAP.get(title),
62 )
63 )
64
65 return entries
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/maldon_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/maldon_gov_uk.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/maldon_gov_uk.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/maldon_gov_uk.py
@@ -3,17 +3,17 @@
import requests
from bs4 import BeautifulSoup
-from waste_collection_schedule import Collection
+from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "Maldon District Council"
-DESCRIPTION = ("Source for www.maldon.gov.uk services for Maldon, UK")
+DESCRIPTION = "Source for www.maldon.gov.uk services for Maldon, UK"
URL = "https://www.maldon.gov.uk/"
TEST_CASES = {
"test 1": {"uprn": "200000917928"},
- "test 2": {"uprn": "100091258454"},
+ "test 2": {"uprn": 100091258454},
}
API_URL = "https://maldon.suez.co.uk/maldon/ServiceSummary?uprn="
@@ -25,15 +25,15 @@
"Food": "mdi:food-apple",
}
+
class Source:
def __init__(self, uprn: str):
self._uprn = uprn
- def _extract_future_date(self, text):
+ def _extract_dates(self, text):
# parse both dates and return the future one
- dates = re.findall(r'\d{2}/\d{2}/\d{4}', text)
- dates = [datetime.strptime(date, '%d/%m/%Y').date() for date in dates]
- return max(dates)
+ dates = re.findall(r"\d{2}/\d{2}/\d{4}", text)
+ return [datetime.strptime(date, "%d/%m/%Y").date() for date in dates]
def fetch(self):
entries = []
@@ -51,15 +51,19 @@
# check is a collection row
title = collection.find("h2", {"class": "panel-title"}).text.strip()
- if title == "Other Services" or "You are not currently subscribed" in collection.text:
+ if (
+ title == "Other Services"
+ or "You are not currently subscribed" in collection.text
+ ):
continue
- entries.append(
- Collection(
- date=self._extract_future_date(collection.text),
- t=title,
- icon=ICON_MAP.get(title),
+ for date in self._extract_dates(collection.text):
+ entries.append(
+ Collection(
+ date=date,
+ t=title,
+ icon=ICON_MAP.get(title),
+ )
)
- )
return entries
|
{"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/maldon_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/maldon_gov_uk.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/maldon_gov_uk.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/maldon_gov_uk.py\n@@ -3,17 +3,17 @@\n \n import requests\n from bs4 import BeautifulSoup\n-from waste_collection_schedule import Collection\n+from waste_collection_schedule import Collection # type: ignore[attr-defined]\n \n TITLE = \"Maldon District Council\"\n \n-DESCRIPTION = (\"Source for www.maldon.gov.uk services for Maldon, UK\")\n+DESCRIPTION = \"Source for www.maldon.gov.uk services for Maldon, UK\"\n \n URL = \"https://www.maldon.gov.uk/\"\n \n TEST_CASES = {\n \"test 1\": {\"uprn\": \"200000917928\"},\n- \"test 2\": {\"uprn\": \"100091258454\"},\n+ \"test 2\": {\"uprn\": 100091258454},\n }\n \n API_URL = \"https://maldon.suez.co.uk/maldon/ServiceSummary?uprn=\"\n@@ -25,15 +25,15 @@\n \"Food\": \"mdi:food-apple\",\n }\n \n+\n class Source:\n def __init__(self, uprn: str):\n self._uprn = uprn\n \n- def _extract_future_date(self, text):\n+ def _extract_dates(self, text):\n # parse both dates and return the future one\n- dates = re.findall(r'\\d{2}/\\d{2}/\\d{4}', text)\n- dates = [datetime.strptime(date, '%d/%m/%Y').date() for date in dates]\n- return max(dates)\n+ dates = re.findall(r\"\\d{2}/\\d{2}/\\d{4}\", text)\n+ return [datetime.strptime(date, \"%d/%m/%Y\").date() for date in dates]\n \n def fetch(self):\n entries = []\n@@ -51,15 +51,19 @@\n # check is a collection row\n title = collection.find(\"h2\", {\"class\": \"panel-title\"}).text.strip()\n \n- if title == \"Other Services\" or \"You are not currently subscribed\" in collection.text:\n+ if (\n+ title == \"Other Services\"\n+ or \"You are not currently subscribed\" in collection.text\n+ ):\n continue\n \n- entries.append(\n- Collection(\n- date=self._extract_future_date(collection.text),\n- t=title,\n- icon=ICON_MAP.get(title),\n+ for date in self._extract_dates(collection.text):\n+ entries.append(\n+ Collection(\n+ date=date,\n+ t=title,\n+ icon=ICON_MAP.get(title),\n+ )\n )\n- )\n \n return entries\n", "issue": "[Bug]: day_switch_time does not seem to be working correctly\n### I Have A Problem With:\n\nThe integration in general\n\n### What's Your Problem\n\nI have day switch time set to `20:00` but the day switches at `01:19`\r\n\r\n<img width=\"228\" alt=\"Screenshot 2024-05-08 at 07 24 31\" src=\"https://github.com/mampfes/hacs_waste_collection_schedule/assets/49797976/c84d1086-1fd8-462a-a206-77ed846838a0\">\r\n\r\nconfig:\r\n\r\n```\r\nwaste_collection_schedule:\r\n sources:\r\n - name: maldon_gov_uk\r\n args:\r\n uprn: \"uprn\"\r\n customize:\r\n - type: Refuse Collection\r\n - type: Recycling\r\n day_switch_time: \"20:00\"\r\n fetch_time: 01:00\r\n```\r\n\n\n### Source (if relevant)\n\nMaldon District Council / maldon.gov.uk\n\n### Logs\n\n_No response_\n\n### Relevant Configuration\n\n```YAML\nwaste_collection_schedule:\r\n sources:\r\n - name: maldon_gov_uk\r\n args:\r\n uprn: \"uprn\"\r\n customize:\r\n - type: Refuse Collection\r\n - type: Recycling\r\n day_switch_time: \"20:00\"\r\n fetch_time: 01:00\n```\n\n\n### Checklist Source Error\n\n- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [X] 
Checked that the website of your service provider is still working\n- [ ] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "import re\nfrom datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection\n\nTITLE = \"Maldon District Council\"\n\nDESCRIPTION = (\"Source for www.maldon.gov.uk services for Maldon, UK\")\n\nURL = \"https://www.maldon.gov.uk/\"\n\nTEST_CASES = {\n \"test 1\": {\"uprn\": \"200000917928\"},\n \"test 2\": {\"uprn\": \"100091258454\"},\n}\n\nAPI_URL = \"https://maldon.suez.co.uk/maldon/ServiceSummary?uprn=\"\n\nICON_MAP = {\n \"Refuse Collection\": \"mdi:trash-can\",\n \"Recycling\": \"mdi:recycle\",\n \"Green\": \"mdi:leaf\",\n \"Food\": \"mdi:food-apple\",\n}\n\nclass Source:\n def __init__(self, uprn: str):\n self._uprn = uprn\n\n def _extract_future_date(self, text):\n # parse both dates and return the future one\n dates = re.findall(r'\\d{2}/\\d{2}/\\d{4}', text)\n dates = [datetime.strptime(date, '%d/%m/%Y').date() for date in dates]\n return max(dates)\n\n def fetch(self):\n entries = []\n\n session = requests.Session()\n\n r = session.get(f\"{API_URL}{self._uprn}\")\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n collections = soup.find_all(\"div\", {\"class\": \"panel-default\"})\n\n if not collections:\n raise Exception(\"No collections found for given UPRN\")\n\n for collection in collections:\n # check is a collection row\n title = collection.find(\"h2\", {\"class\": \"panel-title\"}).text.strip()\n\n if title == \"Other Services\" or \"You are not currently subscribed\" in collection.text:\n continue\n\n entries.append(\n Collection(\n date=self._extract_future_date(collection.text),\n t=title,\n icon=ICON_MAP.get(title),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/maldon_gov_uk.py"}], "after_files": [{"content": "import re\nfrom datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Maldon District Council\"\n\nDESCRIPTION = \"Source for www.maldon.gov.uk services for Maldon, UK\"\n\nURL = \"https://www.maldon.gov.uk/\"\n\nTEST_CASES = {\n \"test 1\": {\"uprn\": \"200000917928\"},\n \"test 2\": {\"uprn\": 100091258454},\n}\n\nAPI_URL = \"https://maldon.suez.co.uk/maldon/ServiceSummary?uprn=\"\n\nICON_MAP = {\n \"Refuse Collection\": \"mdi:trash-can\",\n \"Recycling\": \"mdi:recycle\",\n \"Green\": \"mdi:leaf\",\n \"Food\": \"mdi:food-apple\",\n}\n\n\nclass Source:\n def __init__(self, uprn: str):\n self._uprn = uprn\n\n def _extract_dates(self, text):\n # parse both dates and return the future one\n dates = re.findall(r\"\\d{2}/\\d{2}/\\d{4}\", text)\n return [datetime.strptime(date, \"%d/%m/%Y\").date() for date in dates]\n\n def 
fetch(self):\n entries = []\n\n session = requests.Session()\n\n r = session.get(f\"{API_URL}{self._uprn}\")\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n collections = soup.find_all(\"div\", {\"class\": \"panel-default\"})\n\n if not collections:\n raise Exception(\"No collections found for given UPRN\")\n\n for collection in collections:\n # check is a collection row\n title = collection.find(\"h2\", {\"class\": \"panel-title\"}).text.strip()\n\n if (\n title == \"Other Services\"\n or \"You are not currently subscribed\" in collection.text\n ):\n continue\n\n for date in self._extract_dates(collection.text):\n entries.append(\n Collection(\n date=date,\n t=title,\n icon=ICON_MAP.get(title),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/maldon_gov_uk.py"}]}
| 1,408 | 678 |
gh_patches_debug_17215 | rasdani/github-patches | git_diff | pymedusa__Medusa-9333 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Nebulance - Cached results for manual search get duplicated
**Describe the bug**
As described in #8787
Provider Nebulance duplicates cache results.

If you need access to the Provider reach out to me in DM on Discord.
I can also provide the cache.db if needed.
PS: Should rarbg already be deduplicated?

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `medusa/providers/torrent/html/nebulance.py`
Content:
```
1 # coding=utf-8
2
3 """Provider code for Nebulance."""
4
5 from __future__ import unicode_literals
6
7 import logging
8 import re
9
10 from medusa import tv
11 from medusa.bs4_parser import BS4Parser
12 from medusa.helper.common import (
13 convert_size,
14 try_int,
15 )
16 from medusa.helper.exceptions import AuthException
17 from medusa.logger.adapters.style import BraceAdapter
18 from medusa.providers.torrent.torrent_provider import TorrentProvider
19
20 from requests.compat import urljoin
21 from requests.utils import dict_from_cookiejar
22
23 log = BraceAdapter(logging.getLogger(__name__))
24 log.logger.addHandler(logging.NullHandler())
25
26
27 class NebulanceProvider(TorrentProvider):
28 """Nebulance Torrent provider."""
29
30 def __init__(self):
31 """Initialize the class."""
32 super(NebulanceProvider, self).__init__('Nebulance')
33
34 # Credentials
35 self.username = None
36 self.password = None
37
38 # URLs
39 self.url = 'https://nebulance.io/'
40 self.urls = {
41 'login': urljoin(self.url, '/login.php'),
42 'search': urljoin(self.url, '/torrents.php'),
43 }
44
45 # Proper Strings
46
47 # Miscellaneous Options
48 self.freeleech = None
49
50 # Cache
51 self.cache = tv.Cache(self)
52
53 def search(self, search_strings, age=0, ep_obj=None, **kwargs):
54 """
55 Search a provider and parse the results.
56
57 :param search_strings: A dict with mode (key) and the search value (value)
58 :param age: Not used
59 :param ep_obj: Not used
60 :returns: A list of search results (structure)
61 """
62 results = []
63 if not self.login():
64 return results
65
66 for mode in search_strings:
67 log.debug('Search mode: {0}', mode)
68
69 for search_string in search_strings[mode]:
70
71 if mode != 'RSS':
72 log.debug('Search string: {search}',
73 {'search': search_string})
74
75 search_params = {
76 'searchtext': search_string,
77 'filter_freeleech': (0, 1)[self.freeleech is True],
78 'order_by': ('seeders', 'time')[mode == 'RSS'],
79 'order_way': 'desc',
80 }
81
82 if not search_string:
83 del search_params['searchtext']
84
85 response = self.session.get(self.urls['search'], params=search_params)
86 if not response or not response.text:
87 log.debug('No data returned from provider')
88 continue
89
90 results += self.parse(response.text, mode)
91
92 return results
93
94 def parse(self, data, mode):
95 """
96 Parse search results for items.
97
98 :param data: The raw response from a search
99 :param mode: The current mode used to search, e.g. RSS
100
101 :return: A list of items found
102 """
103 items = []
104
105 with BS4Parser(data, 'html5lib') as html:
106 torrent_table = html.find('table', {'id': 'torrent_table'})
107
108 # Continue only if at least one release is found
109 if not torrent_table:
110 log.debug('Data returned from provider does not contain any torrents')
111 return items
112
113 torrent_rows = torrent_table('tr', {'class': 'torrent'})
114
115 # Continue only if one Release is found
116 if not torrent_rows:
117 log.debug('Data returned from provider does not contain any torrents')
118 return items
119
120 for row in torrent_rows:
121 try:
122 freeleech = row.find('img', alt='Freeleech') is not None
123 if self.freeleech and not freeleech:
124 continue
125
126 download_item = row.find('a', {'title': [
127 'Download Torrent', # Download link
128 'Previously Grabbed Torrent File', # Already Downloaded
129 'Currently Seeding Torrent', # Seeding
130 'Currently Leeching Torrent', # Leeching
131 ]})
132
133 if not download_item:
134 continue
135
136 download_url = urljoin(self.url, download_item['href'])
137
138 temp_anchor = row.find('a', {'data-src': True})
139 title = temp_anchor['data-src']
140 if not all([title, download_url]):
141 continue
142
143 cells = row('td')
144 seeders = try_int(cells[5].text.strip())
145 leechers = try_int(cells[6].text.strip())
146
147 # Filter unseeded torrent
148 if seeders < self.minseed:
149 if mode != 'RSS':
150 log.debug("Discarding torrent because it doesn't meet the"
151 ' minimum seeders: {0}. Seeders: {1}',
152 title, seeders)
153 continue
154
155 torrent_size = cells[2].find('div').get_text(strip=True)
156 size = convert_size(torrent_size) or -1
157
158 pubdate_raw = cells[3].find('span')['title']
159 pubdate = self.parse_pubdate(pubdate_raw)
160
161 item = {
162 'title': title,
163 'link': download_url,
164 'size': size,
165 'seeders': seeders,
166 'leechers': leechers,
167 'pubdate': pubdate,
168 }
169 if mode != 'RSS':
170 log.debug('Found result: {0} with {1} seeders and {2} leechers',
171 title, seeders, leechers)
172
173 items.append(item)
174 except (AttributeError, TypeError, KeyError, ValueError, IndexError):
175 log.exception('Failed parsing provider.')
176
177 return items
178
179 def login(self):
180 """Login method used for logging in before doing search and torrent downloads."""
181 if any(dict_from_cookiejar(self.session.cookies).values()):
182 return True
183
184 login_params = {
185 'username': self.username,
186 'password': self.password,
187 'keeplogged': 'on',
188 'login': 'Login'
189 }
190
191 response = self.session.post(self.urls['login'], data=login_params)
192 if not response or not response.text:
193 log.warning('Unable to connect to provider')
194 return False
195
196 if any([re.search('Username Incorrect', response.text),
197 re.search('Password Incorrect', response.text), ]):
198 log.warning('Invalid username or password. Check your settings')
199 return False
200
201 return True
202
203 def _check_auth(self):
204 """Check if user credentials."""
205 if not self.username or not self.password:
206 raise AuthException('Your authentication credentials for {0} are missing,'
207 ' check your config.'.format(self.name))
208
209 return True
210
211
212 provider = NebulanceProvider()
213
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/medusa/providers/torrent/html/nebulance.py b/medusa/providers/torrent/html/nebulance.py
--- a/medusa/providers/torrent/html/nebulance.py
+++ b/medusa/providers/torrent/html/nebulance.py
@@ -27,6 +27,8 @@
class NebulanceProvider(TorrentProvider):
"""Nebulance Torrent provider."""
+ IDENTIFIER_REGEX = re.compile(r'.+id=([0-9]+)&')
+
def __init__(self):
"""Initialize the class."""
super(NebulanceProvider, self).__init__('Nebulance')
@@ -208,5 +210,18 @@
return True
+ @staticmethod
+ def _get_identifier(item):
+ """
+ Return the identifier for the item.
+
+ Cut the apikey from it, as this might change over time.
+ So we'd like to prevent adding duplicates to cache.
+ """
+ url = NebulanceProvider.IDENTIFIER_REGEX.match(item.url)
+ if url:
+ return url.group(1)
+ return item.url
+
provider = NebulanceProvider()
|
{"golden_diff": "diff --git a/medusa/providers/torrent/html/nebulance.py b/medusa/providers/torrent/html/nebulance.py\n--- a/medusa/providers/torrent/html/nebulance.py\n+++ b/medusa/providers/torrent/html/nebulance.py\n@@ -27,6 +27,8 @@\n class NebulanceProvider(TorrentProvider):\n \"\"\"Nebulance Torrent provider.\"\"\"\n \n+ IDENTIFIER_REGEX = re.compile(r'.+id=([0-9]+)&')\n+\n def __init__(self):\n \"\"\"Initialize the class.\"\"\"\n super(NebulanceProvider, self).__init__('Nebulance')\n@@ -208,5 +210,18 @@\n \n return True\n \n+ @staticmethod\n+ def _get_identifier(item):\n+ \"\"\"\n+ Return the identifier for the item.\n+\n+ Cut the apikey from it, as this might change over time.\n+ So we'd like to prevent adding duplicates to cache.\n+ \"\"\"\n+ url = NebulanceProvider.IDENTIFIER_REGEX.match(item.url)\n+ if url:\n+ return url.group(1)\n+ return item.url\n+\n \n provider = NebulanceProvider()\n", "issue": "Nebulance - Cached results for manual search get duplicated\n**Describe the bug**\r\nAs described in #8787 \r\nProvider Nebulance duplicates cache results.\r\n\r\n\r\nIf you need access to the Provider reach out to me in DM on Discord.\r\nI can also provide the cache.db if needed.\r\n\r\nPS: Should rarbg already be deduplicated?\r\n\r\n\n", "before_files": [{"content": "# coding=utf-8\n\n\"\"\"Provider code for Nebulance.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport logging\nimport re\n\nfrom medusa import tv\nfrom medusa.bs4_parser import BS4Parser\nfrom medusa.helper.common import (\n convert_size,\n try_int,\n)\nfrom medusa.helper.exceptions import AuthException\nfrom medusa.logger.adapters.style import BraceAdapter\nfrom medusa.providers.torrent.torrent_provider import TorrentProvider\n\nfrom requests.compat import urljoin\nfrom requests.utils import dict_from_cookiejar\n\nlog = BraceAdapter(logging.getLogger(__name__))\nlog.logger.addHandler(logging.NullHandler())\n\n\nclass NebulanceProvider(TorrentProvider):\n \"\"\"Nebulance Torrent provider.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize the class.\"\"\"\n super(NebulanceProvider, self).__init__('Nebulance')\n\n # Credentials\n self.username = None\n self.password = None\n\n # URLs\n self.url = 'https://nebulance.io/'\n self.urls = {\n 'login': urljoin(self.url, '/login.php'),\n 'search': urljoin(self.url, '/torrents.php'),\n }\n\n # Proper Strings\n\n # Miscellaneous Options\n self.freeleech = None\n\n # Cache\n self.cache = tv.Cache(self)\n\n def search(self, search_strings, age=0, ep_obj=None, **kwargs):\n \"\"\"\n Search a provider and parse the results.\n\n :param search_strings: A dict with mode (key) and the search value (value)\n :param age: Not used\n :param ep_obj: Not used\n :returns: A list of search results (structure)\n \"\"\"\n results = []\n if not self.login():\n return results\n\n for mode in search_strings:\n log.debug('Search mode: {0}', mode)\n\n for search_string in search_strings[mode]:\n\n if mode != 'RSS':\n log.debug('Search string: {search}',\n {'search': search_string})\n\n search_params = {\n 'searchtext': search_string,\n 'filter_freeleech': (0, 1)[self.freeleech is True],\n 'order_by': ('seeders', 'time')[mode == 'RSS'],\n 'order_way': 'desc',\n }\n\n if not search_string:\n del search_params['searchtext']\n\n response = self.session.get(self.urls['search'], params=search_params)\n if not response or not response.text:\n log.debug('No data returned from provider')\n continue\n\n results += self.parse(response.text, mode)\n\n return results\n\n def parse(self, data, mode):\n 
\"\"\"\n Parse search results for items.\n\n :param data: The raw response from a search\n :param mode: The current mode used to search, e.g. RSS\n\n :return: A list of items found\n \"\"\"\n items = []\n\n with BS4Parser(data, 'html5lib') as html:\n torrent_table = html.find('table', {'id': 'torrent_table'})\n\n # Continue only if at least one release is found\n if not torrent_table:\n log.debug('Data returned from provider does not contain any torrents')\n return items\n\n torrent_rows = torrent_table('tr', {'class': 'torrent'})\n\n # Continue only if one Release is found\n if not torrent_rows:\n log.debug('Data returned from provider does not contain any torrents')\n return items\n\n for row in torrent_rows:\n try:\n freeleech = row.find('img', alt='Freeleech') is not None\n if self.freeleech and not freeleech:\n continue\n\n download_item = row.find('a', {'title': [\n 'Download Torrent', # Download link\n 'Previously Grabbed Torrent File', # Already Downloaded\n 'Currently Seeding Torrent', # Seeding\n 'Currently Leeching Torrent', # Leeching\n ]})\n\n if not download_item:\n continue\n\n download_url = urljoin(self.url, download_item['href'])\n\n temp_anchor = row.find('a', {'data-src': True})\n title = temp_anchor['data-src']\n if not all([title, download_url]):\n continue\n\n cells = row('td')\n seeders = try_int(cells[5].text.strip())\n leechers = try_int(cells[6].text.strip())\n\n # Filter unseeded torrent\n if seeders < self.minseed:\n if mode != 'RSS':\n log.debug(\"Discarding torrent because it doesn't meet the\"\n ' minimum seeders: {0}. Seeders: {1}',\n title, seeders)\n continue\n\n torrent_size = cells[2].find('div').get_text(strip=True)\n size = convert_size(torrent_size) or -1\n\n pubdate_raw = cells[3].find('span')['title']\n pubdate = self.parse_pubdate(pubdate_raw)\n\n item = {\n 'title': title,\n 'link': download_url,\n 'size': size,\n 'seeders': seeders,\n 'leechers': leechers,\n 'pubdate': pubdate,\n }\n if mode != 'RSS':\n log.debug('Found result: {0} with {1} seeders and {2} leechers',\n title, seeders, leechers)\n\n items.append(item)\n except (AttributeError, TypeError, KeyError, ValueError, IndexError):\n log.exception('Failed parsing provider.')\n\n return items\n\n def login(self):\n \"\"\"Login method used for logging in before doing search and torrent downloads.\"\"\"\n if any(dict_from_cookiejar(self.session.cookies).values()):\n return True\n\n login_params = {\n 'username': self.username,\n 'password': self.password,\n 'keeplogged': 'on',\n 'login': 'Login'\n }\n\n response = self.session.post(self.urls['login'], data=login_params)\n if not response or not response.text:\n log.warning('Unable to connect to provider')\n return False\n\n if any([re.search('Username Incorrect', response.text),\n re.search('Password Incorrect', response.text), ]):\n log.warning('Invalid username or password. 
Check your settings')\n return False\n\n return True\n\n def _check_auth(self):\n \"\"\"Check if user credentials.\"\"\"\n if not self.username or not self.password:\n raise AuthException('Your authentication credentials for {0} are missing,'\n ' check your config.'.format(self.name))\n\n return True\n\n\nprovider = NebulanceProvider()\n", "path": "medusa/providers/torrent/html/nebulance.py"}], "after_files": [{"content": "# coding=utf-8\n\n\"\"\"Provider code for Nebulance.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport logging\nimport re\n\nfrom medusa import tv\nfrom medusa.bs4_parser import BS4Parser\nfrom medusa.helper.common import (\n convert_size,\n try_int,\n)\nfrom medusa.helper.exceptions import AuthException\nfrom medusa.logger.adapters.style import BraceAdapter\nfrom medusa.providers.torrent.torrent_provider import TorrentProvider\n\nfrom requests.compat import urljoin\nfrom requests.utils import dict_from_cookiejar\n\nlog = BraceAdapter(logging.getLogger(__name__))\nlog.logger.addHandler(logging.NullHandler())\n\n\nclass NebulanceProvider(TorrentProvider):\n \"\"\"Nebulance Torrent provider.\"\"\"\n\n IDENTIFIER_REGEX = re.compile(r'.+id=([0-9]+)&')\n\n def __init__(self):\n \"\"\"Initialize the class.\"\"\"\n super(NebulanceProvider, self).__init__('Nebulance')\n\n # Credentials\n self.username = None\n self.password = None\n\n # URLs\n self.url = 'https://nebulance.io/'\n self.urls = {\n 'login': urljoin(self.url, '/login.php'),\n 'search': urljoin(self.url, '/torrents.php'),\n }\n\n # Proper Strings\n\n # Miscellaneous Options\n self.freeleech = None\n\n # Cache\n self.cache = tv.Cache(self)\n\n def search(self, search_strings, age=0, ep_obj=None, **kwargs):\n \"\"\"\n Search a provider and parse the results.\n\n :param search_strings: A dict with mode (key) and the search value (value)\n :param age: Not used\n :param ep_obj: Not used\n :returns: A list of search results (structure)\n \"\"\"\n results = []\n if not self.login():\n return results\n\n for mode in search_strings:\n log.debug('Search mode: {0}', mode)\n\n for search_string in search_strings[mode]:\n\n if mode != 'RSS':\n log.debug('Search string: {search}',\n {'search': search_string})\n\n search_params = {\n 'searchtext': search_string,\n 'filter_freeleech': (0, 1)[self.freeleech is True],\n 'order_by': ('seeders', 'time')[mode == 'RSS'],\n 'order_way': 'desc',\n }\n\n if not search_string:\n del search_params['searchtext']\n\n response = self.session.get(self.urls['search'], params=search_params)\n if not response or not response.text:\n log.debug('No data returned from provider')\n continue\n\n results += self.parse(response.text, mode)\n\n return results\n\n def parse(self, data, mode):\n \"\"\"\n Parse search results for items.\n\n :param data: The raw response from a search\n :param mode: The current mode used to search, e.g. 
RSS\n\n :return: A list of items found\n \"\"\"\n items = []\n\n with BS4Parser(data, 'html5lib') as html:\n torrent_table = html.find('table', {'id': 'torrent_table'})\n\n # Continue only if at least one release is found\n if not torrent_table:\n log.debug('Data returned from provider does not contain any torrents')\n return items\n\n torrent_rows = torrent_table('tr', {'class': 'torrent'})\n\n # Continue only if one Release is found\n if not torrent_rows:\n log.debug('Data returned from provider does not contain any torrents')\n return items\n\n for row in torrent_rows:\n try:\n freeleech = row.find('img', alt='Freeleech') is not None\n if self.freeleech and not freeleech:\n continue\n\n download_item = row.find('a', {'title': [\n 'Download Torrent', # Download link\n 'Previously Grabbed Torrent File', # Already Downloaded\n 'Currently Seeding Torrent', # Seeding\n 'Currently Leeching Torrent', # Leeching\n ]})\n\n if not download_item:\n continue\n\n download_url = urljoin(self.url, download_item['href'])\n\n temp_anchor = row.find('a', {'data-src': True})\n title = temp_anchor['data-src']\n if not all([title, download_url]):\n continue\n\n cells = row('td')\n seeders = try_int(cells[5].text.strip())\n leechers = try_int(cells[6].text.strip())\n\n # Filter unseeded torrent\n if seeders < self.minseed:\n if mode != 'RSS':\n log.debug(\"Discarding torrent because it doesn't meet the\"\n ' minimum seeders: {0}. Seeders: {1}',\n title, seeders)\n continue\n\n torrent_size = cells[2].find('div').get_text(strip=True)\n size = convert_size(torrent_size) or -1\n\n pubdate_raw = cells[3].find('span')['title']\n pubdate = self.parse_pubdate(pubdate_raw)\n\n item = {\n 'title': title,\n 'link': download_url,\n 'size': size,\n 'seeders': seeders,\n 'leechers': leechers,\n 'pubdate': pubdate,\n }\n if mode != 'RSS':\n log.debug('Found result: {0} with {1} seeders and {2} leechers',\n title, seeders, leechers)\n\n items.append(item)\n except (AttributeError, TypeError, KeyError, ValueError, IndexError):\n log.exception('Failed parsing provider.')\n\n return items\n\n def login(self):\n \"\"\"Login method used for logging in before doing search and torrent downloads.\"\"\"\n if any(dict_from_cookiejar(self.session.cookies).values()):\n return True\n\n login_params = {\n 'username': self.username,\n 'password': self.password,\n 'keeplogged': 'on',\n 'login': 'Login'\n }\n\n response = self.session.post(self.urls['login'], data=login_params)\n if not response or not response.text:\n log.warning('Unable to connect to provider')\n return False\n\n if any([re.search('Username Incorrect', response.text),\n re.search('Password Incorrect', response.text), ]):\n log.warning('Invalid username or password. Check your settings')\n return False\n\n return True\n\n def _check_auth(self):\n \"\"\"Check if user credentials.\"\"\"\n if not self.username or not self.password:\n raise AuthException('Your authentication credentials for {0} are missing,'\n ' check your config.'.format(self.name))\n\n return True\n\n @staticmethod\n def _get_identifier(item):\n \"\"\"\n Return the identifier for the item.\n\n Cut the apikey from it, as this might change over time.\n So we'd like to prevent adding duplicates to cache.\n \"\"\"\n url = NebulanceProvider.IDENTIFIER_REGEX.match(item.url)\n if url:\n return url.group(1)\n return item.url\n\n\nprovider = NebulanceProvider()\n", "path": "medusa/providers/torrent/html/nebulance.py"}]}
| 2,427 | 254 |
gh_patches_debug_798 | rasdani/github-patches | git_diff | conan-io__conan-8167 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug] YCM generator uses deprecated FlagsForFile method instead of Settings
<!--
Please don't forget to update the issue title.
Include all applicable information to help us reproduce your problem.
To help us debug your issue please explain:
-->
### Environment Details (include every applicable attribute)
* Operating System+version: macOS 10.14.5
* Compiler+version: clang 10.0.1
* Conan version: 1.31.4
* Python version: 3.9.0
### Steps to reproduce (Include if Applicable)
Follow instructions at https://docs.conan.io/en/latest/integrations/ide/youcompleteme.html#youcompleteme-integration to configure `.ycm_extra_conf` and `conan_ycm_flags.json`:
conanfile.txt
```
[generators]
ycm
```
```bash
# from your base folder
$ cp build/conan_ycm_extra_conf.py .ycm_extra_conf.py
$ ln -s build/conan_ycm_flags.json conan_ycm_flags.json
```
Install `gtest` as a package, and then import it in a source file.
### Logs (Executed commands with output) (Include/Attach if Applicable)
<!--
Your log content should be related to the bug description, it can be:
- Conan command output
- Server output (Artifactory, conan_server)
-->
YCM was unable to find the gtest package as installed by conan. YCM Debug Info:
```
Printing YouCompleteMe debug information...
-- Resolve completions: Up front
-- Client logfile: /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/ycm_x9dk66na.log
-- Server Python interpreter: /usr/local/opt/[email protected]/bin/python3.9
-- Server Python version: 3.9.0
-- Server has Clang support compiled in: True
-- Clang version: clang version 10.0.0
-- Extra configuration file found and loaded
-- Extra configuration path: /Users/username/home/projects/project/.ycm_extra_conf.py
-- C-family completer debug information:
-- Clangd running
-- Clangd process ID: 56305
-- Clangd executable: ['/Users/username/.vim/plugged/YouCompleteMe/third_party/ycmd/third_party/clangd/output/bin/clangd', '-header-insertion-decorators=0', '-resource-dir=/Users/
username/.vim/plugged/YouCompleteMe/third_party/ycmd/third_party/clang/lib/clang/10.0.0', '-limit-results=500', '-log=verbose']
-- Clangd logfiles:
-- /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/clangd_stderr615mhccn.log
-- Clangd Server State: Initialized
-- Clangd Project Directory: /Users/username/home/projects/project
-- Clangd Settings: {}
-- Clangd Compilation Command: False
-- Server running at: http://127.0.0.1:50225
-- Server process ID: 56303
-- Server logfiles:
-- /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/ycmd_50225_stdout_nstboyjy.log
-- /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/ycmd_50225_stderr_ey11rfes.log
```
As can be seen, `clangd` is not using the flags `'-x', 'c++'` as defined in the default `flags` list in the generated `.ycm_extra_conf.py`, or the `gtest` package as installed by conan. The generated `conan_ycm_flags.json` file contains the following:
```
{
"includes": [
"-isystem/Users/username/.conan/data/gtest/1.10.0/_/_/package/03ad53d73db1da068548d1d6a87ac3219077b5c0/include",
"-isystem/Users/username/.conan/data/rapidjson/1.1.0/_/_/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9/include"
],
"defines": [],
"flags": []
}
```
These flags are also not included in the compilation arguments.
The issue appears to be caused by the fact that the [generator](https://github.com/conan-io/conan/blob/develop/conans/client/generators/ycm.py) uses the deprecated `FlagsForFile` method instead of it's replacement, `Settings`. This can be resolved by modifying line 143 from:
```python
def FlagsForFile( filename, **kwargs ):
```
to
```python
def Settings( filename, **kwargs):
```
As a new user of YCM and conan, this took an inordinate amount of time to troubleshoot, though it is relatively trivial.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/generators/ycm.py`
Content:
```
1 import json
2
3 from conans.model import Generator
4
5
6 class YouCompleteMeGenerator(Generator):
7 template = '''
8 # This file is NOT licensed under the GPLv3, which is the license for the rest
9 # of YouCompleteMe.
10 #
11 # Here's the license text for this file:
12 #
13 # This is free and unencumbered software released into the public domain.
14 #
15 # Anyone is free to copy, modify, publish, use, compile, sell, or
16 # distribute this software, either in source code form or as a compiled
17 # binary, for any purpose, commercial or non-commercial, and by any
18 # means.
19 #
20 # In jurisdictions that recognize copyright laws, the author or authors
21 # of this software dedicate any and all copyright interest in the
22 # software to the public domain. We make this dedication for the benefit
23 # of the public at large and to the detriment of our heirs and
24 # successors. We intend this dedication to be an overt act of
25 # relinquishment in perpetuity of all present and future rights to this
26 # software under copyright law.
27 #
28 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
29 # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
30 # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
31 # IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
32 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
33 # ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
34 # OTHER DEALINGS IN THE SOFTWARE.
35 #
36 # For more information, please refer to <http://unlicense.org/>
37
38 import os
39 import json
40 import ycm_core
41 import logging
42
43
44 _logger = logging.getLogger(__name__)
45
46
47 def DirectoryOfThisScript():
48 return os.path.dirname( os.path.abspath( __file__ ) )
49
50
51 # These are the compilation flags that will be used in case there's no
52 # compilation database set (by default, one is not set).
53 # CHANGE THIS LIST OF FLAGS. YES, THIS IS THE DROID YOU HAVE BEEN LOOKING FOR.
54 flags = [
55 '-x', 'c++'
56 ]
57
58 conan_flags = json.loads(open("conan_ycm_flags.json", "r").read())
59
60 flags.extend(conan_flags["flags"])
61 flags.extend(conan_flags["defines"])
62 flags.extend(conan_flags["includes"])
63
64
65 # Set this to the absolute path to the folder (NOT the file!) containing the
66 # compile_commands.json file to use that instead of 'flags'. See here for
67 # more details: http://clang.llvm.org/docs/JSONCompilationDatabase.html
68 #
69 # You can get CMake to generate this file for you by adding:
70 # set( CMAKE_EXPORT_COMPILE_COMMANDS 1 )
71 # to your CMakeLists.txt file.
72 #
73 # Most projects will NOT need to set this to anything; you can just change the
74 # 'flags' list of compilation flags. Notice that YCM itself uses that approach.
75 compilation_database_folder = os.path.join(DirectoryOfThisScript(), 'Debug')
76
77 if os.path.exists( compilation_database_folder ):
78 database = ycm_core.CompilationDatabase( compilation_database_folder )
79 if not database.DatabaseSuccessfullyLoaded():
80 _logger.warn("Failed to load database")
81 database = None
82 else:
83 database = None
84
85 SOURCE_EXTENSIONS = [ '.cpp', '.cxx', '.cc', '.c', '.m', '.mm' ]
86
87 def GetAbsolutePath(include_path, working_directory):
88 if os.path.isabs(include_path):
89 return include_path
90 return os.path.join(working_directory, include_path)
91
92
93 def MakeRelativePathsInFlagsAbsolute( flags, working_directory ):
94 if not working_directory:
95 return list( flags )
96 new_flags = []
97 make_next_absolute = False
98 path_flags = [ '-isystem', '-I', '-iquote', '--sysroot=' ]
99 for flag in flags:
100 new_flag = flag
101
102 if make_next_absolute:
103 make_next_absolute = False
104 new_flag = GetAbsolutePath(flag, working_directory)
105
106 for path_flag in path_flags:
107 if flag == path_flag:
108 make_next_absolute = True
109 break
110
111 if flag.startswith( path_flag ):
112 path = flag[ len( path_flag ): ]
113 new_flag = flag[:len(path_flag)] + GetAbsolutePath(path, working_directory)
114 break
115
116 if new_flag:
117 new_flags.append( new_flag )
118 return new_flags
119
120
121 def IsHeaderFile( filename ):
122 extension = os.path.splitext( filename )[ 1 ]
123 return extension.lower() in [ '.h', '.hxx', '.hpp', '.hh' ]
124
125
126 def GetCompilationInfoForFile( filename ):
127 # The compilation_commands.json file generated by CMake does not have entries
128 # for header files. So we do our best by asking the db for flags for a
129 # corresponding source file, if any. If one exists, the flags for that file
130 # should be good enough.
131 if IsHeaderFile( filename ):
132 basename = os.path.splitext( filename )[ 0 ]
133 for extension in SOURCE_EXTENSIONS:
134 replacement_file = basename + extension
135 if os.path.exists( replacement_file ):
136 compilation_info = database.GetCompilationInfoForFile( replacement_file )
137 if compilation_info.compiler_flags_:
138 return compilation_info
139 return None
140 return database.GetCompilationInfoForFile( filename )
141
142
143 def FlagsForFile( filename, **kwargs ):
144 relative_to = None
145 compiler_flags = None
146
147 if database:
148 # Bear in mind that compilation_info.compiler_flags_ does NOT return a
149 # python list, but a "list-like" StringVec object
150 compilation_info = GetCompilationInfoForFile( filename )
151 if compilation_info is None:
152 relative_to = DirectoryOfThisScript()
153 compiler_flags = flags
154 else:
155 relative_to = compilation_info.compiler_working_dir_
156 compiler_flags = compilation_info.compiler_flags_
157
158 else:
159 relative_to = DirectoryOfThisScript()
160 compiler_flags = flags
161
162 final_flags = MakeRelativePathsInFlagsAbsolute( compiler_flags, relative_to )
163 for flag in final_flags:
164 if flag.startswith("-W"):
165 final_flags.remove(flag)
166 _logger.info("Final flags for %s are %s" % (filename, ' '.join(final_flags)))
167
168 return {{
169 'flags': final_flags + ["-I/usr/include", "-I/usr/include/c++/{cxx_version}"],
170 'do_cache': True
171 }}
172 '''
173
174 @property
175 def filename(self):
176 pass
177
178 @property
179 def content(self):
180 def prefixed(prefix, values):
181 return [prefix + x for x in values]
182
183 conan_flags = {
184 "includes": prefixed("-isystem", self.deps_build_info.include_paths),
185 "defines": prefixed("-D", self.deps_build_info.defines),
186 "flags": self.deps_build_info.cxxflags
187 }
188
189 cxx_version = ''
190 try:
191 cxx_version = str(self.settings.compiler.version).split('.')[0]
192 except Exception:
193 pass
194
195 ycm_data = self.template.format(cxx_version=cxx_version)
196 return {"conan_ycm_extra_conf.py": ycm_data,
197 "conan_ycm_flags.json": json.dumps(conan_flags, indent=2)}
198
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conans/client/generators/ycm.py b/conans/client/generators/ycm.py
--- a/conans/client/generators/ycm.py
+++ b/conans/client/generators/ycm.py
@@ -140,7 +140,7 @@
return database.GetCompilationInfoForFile( filename )
-def FlagsForFile( filename, **kwargs ):
+def Settings( filename, **kwargs ):
relative_to = None
compiler_flags = None
|
{"golden_diff": "diff --git a/conans/client/generators/ycm.py b/conans/client/generators/ycm.py\n--- a/conans/client/generators/ycm.py\n+++ b/conans/client/generators/ycm.py\n@@ -140,7 +140,7 @@\n return database.GetCompilationInfoForFile( filename )\n \n \n-def FlagsForFile( filename, **kwargs ):\n+def Settings( filename, **kwargs ):\n relative_to = None\n compiler_flags = None\n", "issue": "[bug] YCM generator uses deprecated FlagsForFile method instead of Settings\n<!--\r\n Please don't forget to update the issue title.\r\n Include all applicable information to help us reproduce your problem.\r\n\r\n To help us debug your issue please explain:\r\n-->\r\n\r\n### Environment Details (include every applicable attribute)\r\n * Operating System+version: macOS 10.14.5\r\n * Compiler+version: clang 10.0.1\r\n * Conan version: 1.31.4\r\n * Python version: 3.9.0\r\n\r\n### Steps to reproduce (Include if Applicable)\r\nFollow instructions at https://docs.conan.io/en/latest/integrations/ide/youcompleteme.html#youcompleteme-integration to configure `.ycm_extra_conf` and `conan_ycm_flags.json`:\r\n\r\nconanfile.txt\r\n```\r\n [generators]\r\n ycm\r\n```\r\n\r\n```bash\r\n# from your base folder\r\n$ cp build/conan_ycm_extra_conf.py .ycm_extra_conf.py\r\n$ ln -s build/conan_ycm_flags.json conan_ycm_flags.json\r\n```\r\nInstall `gtest` as a package, and then import it in a source file.\r\n\r\n\r\n### Logs (Executed commands with output) (Include/Attach if Applicable)\r\n\r\n<!--\r\n Your log content should be related to the bug description, it can be:\r\n - Conan command output\r\n - Server output (Artifactory, conan_server)\r\n-->\r\nYCM was unable to find the gtest package as installed by conan. YCM Debug Info:\r\n```\r\nPrinting YouCompleteMe debug information...\r\n-- Resolve completions: Up front\r\n-- Client logfile: /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/ycm_x9dk66na.log\r\n-- Server Python interpreter: /usr/local/opt/[email protected]/bin/python3.9\r\n-- Server Python version: 3.9.0\r\n-- Server has Clang support compiled in: True\r\n-- Clang version: clang version 10.0.0\r\n-- Extra configuration file found and loaded\r\n-- Extra configuration path: /Users/username/home/projects/project/.ycm_extra_conf.py\r\n-- C-family completer debug information:\r\n-- Clangd running\r\n-- Clangd process ID: 56305\r\n-- Clangd executable: ['/Users/username/.vim/plugged/YouCompleteMe/third_party/ycmd/third_party/clangd/output/bin/clangd', '-header-insertion-decorators=0', '-resource-dir=/Users/\r\nusername/.vim/plugged/YouCompleteMe/third_party/ycmd/third_party/clang/lib/clang/10.0.0', '-limit-results=500', '-log=verbose']\r\n-- Clangd logfiles:\r\n-- /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/clangd_stderr615mhccn.log\r\n-- Clangd Server State: Initialized\r\n-- Clangd Project Directory: /Users/username/home/projects/project\r\n-- Clangd Settings: {}\r\n-- Clangd Compilation Command: False\r\n-- Server running at: http://127.0.0.1:50225\r\n-- Server process ID: 56303\r\n-- Server logfiles:\r\n-- /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/ycmd_50225_stdout_nstboyjy.log\r\n-- /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/ycmd_50225_stderr_ey11rfes.log\r\n```\r\nAs can be seen, `clangd` is not using the flags `'-x', 'c++'` as defined in the default `flags` list in the generated `.ycm_extra_conf.py`, or the `gtest` package as installed by conan. 
The generated `conan_ycm_flags.json` file contains the following:\r\n\r\n```\r\n{\r\n \"includes\": [\r\n \"-isystem/Users/username/.conan/data/gtest/1.10.0/_/_/package/03ad53d73db1da068548d1d6a87ac3219077b5c0/include\",\r\n \"-isystem/Users/username/.conan/data/rapidjson/1.1.0/_/_/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9/include\"\r\n ],\r\n \"defines\": [],\r\n \"flags\": []\r\n}\r\n```\r\nThese flags are also not included in the compilation arguments.\r\n\r\nThe issue appears to be caused by the fact that the [generator](https://github.com/conan-io/conan/blob/develop/conans/client/generators/ycm.py) uses the deprecated `FlagsForFile` method instead of it's replacement, `Settings`. This can be resolved by modifying line 143 from:\r\n\r\n```python\r\ndef FlagsForFile( filename, **kwargs ):\r\n```\r\nto\r\n```python\r\ndef Settings( filename, **kwargs):\r\n```\r\n\r\nAs a new user of YCM and conan, this took an inordinate amount of time to troubleshoot, though it is relatively trivial.\n", "before_files": [{"content": "import json\n\nfrom conans.model import Generator\n\n\nclass YouCompleteMeGenerator(Generator):\n template = '''\n# This file is NOT licensed under the GPLv3, which is the license for the rest\n# of YouCompleteMe.\n#\n# Here's the license text for this file:\n#\n# This is free and unencumbered software released into the public domain.\n#\n# Anyone is free to copy, modify, publish, use, compile, sell, or\n# distribute this software, either in source code form or as a compiled\n# binary, for any purpose, commercial or non-commercial, and by any\n# means.\n#\n# In jurisdictions that recognize copyright laws, the author or authors\n# of this software dedicate any and all copyright interest in the\n# software to the public domain. We make this dedication for the benefit\n# of the public at large and to the detriment of our heirs and\n# successors. We intend this dedication to be an overt act of\n# relinquishment in perpetuity of all present and future rights to this\n# software under copyright law.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\n# IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,\n# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR\n# OTHER DEALINGS IN THE SOFTWARE.\n#\n# For more information, please refer to <http://unlicense.org/>\n\nimport os\nimport json\nimport ycm_core\nimport logging\n\n\n_logger = logging.getLogger(__name__)\n\n\ndef DirectoryOfThisScript():\n return os.path.dirname( os.path.abspath( __file__ ) )\n\n\n# These are the compilation flags that will be used in case there's no\n# compilation database set (by default, one is not set).\n# CHANGE THIS LIST OF FLAGS. YES, THIS IS THE DROID YOU HAVE BEEN LOOKING FOR.\nflags = [\n '-x', 'c++'\n]\n\nconan_flags = json.loads(open(\"conan_ycm_flags.json\", \"r\").read())\n\nflags.extend(conan_flags[\"flags\"])\nflags.extend(conan_flags[\"defines\"])\nflags.extend(conan_flags[\"includes\"])\n\n\n# Set this to the absolute path to the folder (NOT the file!) containing the\n# compile_commands.json file to use that instead of 'flags'. 
See here for\n# more details: http://clang.llvm.org/docs/JSONCompilationDatabase.html\n#\n# You can get CMake to generate this file for you by adding:\n# set( CMAKE_EXPORT_COMPILE_COMMANDS 1 )\n# to your CMakeLists.txt file.\n#\n# Most projects will NOT need to set this to anything; you can just change the\n# 'flags' list of compilation flags. Notice that YCM itself uses that approach.\ncompilation_database_folder = os.path.join(DirectoryOfThisScript(), 'Debug')\n\nif os.path.exists( compilation_database_folder ):\n database = ycm_core.CompilationDatabase( compilation_database_folder )\n if not database.DatabaseSuccessfullyLoaded():\n _logger.warn(\"Failed to load database\")\n database = None\nelse:\n database = None\n\nSOURCE_EXTENSIONS = [ '.cpp', '.cxx', '.cc', '.c', '.m', '.mm' ]\n\ndef GetAbsolutePath(include_path, working_directory):\n if os.path.isabs(include_path):\n return include_path\n return os.path.join(working_directory, include_path)\n\n\ndef MakeRelativePathsInFlagsAbsolute( flags, working_directory ):\n if not working_directory:\n return list( flags )\n new_flags = []\n make_next_absolute = False\n path_flags = [ '-isystem', '-I', '-iquote', '--sysroot=' ]\n for flag in flags:\n new_flag = flag\n\n if make_next_absolute:\n make_next_absolute = False\n new_flag = GetAbsolutePath(flag, working_directory)\n\n for path_flag in path_flags:\n if flag == path_flag:\n make_next_absolute = True\n break\n\n if flag.startswith( path_flag ):\n path = flag[ len( path_flag ): ]\n new_flag = flag[:len(path_flag)] + GetAbsolutePath(path, working_directory)\n break\n\n if new_flag:\n new_flags.append( new_flag )\n return new_flags\n\n\ndef IsHeaderFile( filename ):\n extension = os.path.splitext( filename )[ 1 ]\n return extension.lower() in [ '.h', '.hxx', '.hpp', '.hh' ]\n\n\ndef GetCompilationInfoForFile( filename ):\n # The compilation_commands.json file generated by CMake does not have entries\n # for header files. So we do our best by asking the db for flags for a\n # corresponding source file, if any. 
If one exists, the flags for that file\n # should be good enough.\n if IsHeaderFile( filename ):\n basename = os.path.splitext( filename )[ 0 ]\n for extension in SOURCE_EXTENSIONS:\n replacement_file = basename + extension\n if os.path.exists( replacement_file ):\n compilation_info = database.GetCompilationInfoForFile( replacement_file )\n if compilation_info.compiler_flags_:\n return compilation_info\n return None\n return database.GetCompilationInfoForFile( filename )\n\n\ndef FlagsForFile( filename, **kwargs ):\n relative_to = None\n compiler_flags = None\n\n if database:\n # Bear in mind that compilation_info.compiler_flags_ does NOT return a\n # python list, but a \"list-like\" StringVec object\n compilation_info = GetCompilationInfoForFile( filename )\n if compilation_info is None:\n relative_to = DirectoryOfThisScript()\n compiler_flags = flags\n else:\n relative_to = compilation_info.compiler_working_dir_\n compiler_flags = compilation_info.compiler_flags_\n\n else:\n relative_to = DirectoryOfThisScript()\n compiler_flags = flags\n\n final_flags = MakeRelativePathsInFlagsAbsolute( compiler_flags, relative_to )\n for flag in final_flags:\n if flag.startswith(\"-W\"):\n final_flags.remove(flag)\n _logger.info(\"Final flags for %s are %s\" % (filename, ' '.join(final_flags)))\n\n return {{\n 'flags': final_flags + [\"-I/usr/include\", \"-I/usr/include/c++/{cxx_version}\"],\n 'do_cache': True\n }}\n'''\n\n @property\n def filename(self):\n pass\n\n @property\n def content(self):\n def prefixed(prefix, values):\n return [prefix + x for x in values]\n\n conan_flags = {\n \"includes\": prefixed(\"-isystem\", self.deps_build_info.include_paths),\n \"defines\": prefixed(\"-D\", self.deps_build_info.defines),\n \"flags\": self.deps_build_info.cxxflags\n }\n\n cxx_version = ''\n try:\n cxx_version = str(self.settings.compiler.version).split('.')[0]\n except Exception:\n pass\n\n ycm_data = self.template.format(cxx_version=cxx_version)\n return {\"conan_ycm_extra_conf.py\": ycm_data,\n \"conan_ycm_flags.json\": json.dumps(conan_flags, indent=2)}\n", "path": "conans/client/generators/ycm.py"}], "after_files": [{"content": "import json\n\nfrom conans.model import Generator\n\n\nclass YouCompleteMeGenerator(Generator):\n template = '''\n# This file is NOT licensed under the GPLv3, which is the license for the rest\n# of YouCompleteMe.\n#\n# Here's the license text for this file:\n#\n# This is free and unencumbered software released into the public domain.\n#\n# Anyone is free to copy, modify, publish, use, compile, sell, or\n# distribute this software, either in source code form or as a compiled\n# binary, for any purpose, commercial or non-commercial, and by any\n# means.\n#\n# In jurisdictions that recognize copyright laws, the author or authors\n# of this software dedicate any and all copyright interest in the\n# software to the public domain. We make this dedication for the benefit\n# of the public at large and to the detriment of our heirs and\n# successors. 
We intend this dedication to be an overt act of\n# relinquishment in perpetuity of all present and future rights to this\n# software under copyright law.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\n# IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,\n# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR\n# OTHER DEALINGS IN THE SOFTWARE.\n#\n# For more information, please refer to <http://unlicense.org/>\n\nimport os\nimport json\nimport ycm_core\nimport logging\n\n\n_logger = logging.getLogger(__name__)\n\n\ndef DirectoryOfThisScript():\n return os.path.dirname( os.path.abspath( __file__ ) )\n\n\n# These are the compilation flags that will be used in case there's no\n# compilation database set (by default, one is not set).\n# CHANGE THIS LIST OF FLAGS. YES, THIS IS THE DROID YOU HAVE BEEN LOOKING FOR.\nflags = [\n '-x', 'c++'\n]\n\nconan_flags = json.loads(open(\"conan_ycm_flags.json\", \"r\").read())\n\nflags.extend(conan_flags[\"flags\"])\nflags.extend(conan_flags[\"defines\"])\nflags.extend(conan_flags[\"includes\"])\n\n\n# Set this to the absolute path to the folder (NOT the file!) containing the\n# compile_commands.json file to use that instead of 'flags'. See here for\n# more details: http://clang.llvm.org/docs/JSONCompilationDatabase.html\n#\n# You can get CMake to generate this file for you by adding:\n# set( CMAKE_EXPORT_COMPILE_COMMANDS 1 )\n# to your CMakeLists.txt file.\n#\n# Most projects will NOT need to set this to anything; you can just change the\n# 'flags' list of compilation flags. Notice that YCM itself uses that approach.\ncompilation_database_folder = os.path.join(DirectoryOfThisScript(), 'Debug')\n\nif os.path.exists( compilation_database_folder ):\n database = ycm_core.CompilationDatabase( compilation_database_folder )\n if not database.DatabaseSuccessfullyLoaded():\n _logger.warn(\"Failed to load database\")\n database = None\nelse:\n database = None\n\nSOURCE_EXTENSIONS = [ '.cpp', '.cxx', '.cc', '.c', '.m', '.mm' ]\n\ndef GetAbsolutePath(include_path, working_directory):\n if os.path.isabs(include_path):\n return include_path\n return os.path.join(working_directory, include_path)\n\n\ndef MakeRelativePathsInFlagsAbsolute( flags, working_directory ):\n if not working_directory:\n return list( flags )\n new_flags = []\n make_next_absolute = False\n path_flags = [ '-isystem', '-I', '-iquote', '--sysroot=' ]\n for flag in flags:\n new_flag = flag\n\n if make_next_absolute:\n make_next_absolute = False\n new_flag = GetAbsolutePath(flag, working_directory)\n\n for path_flag in path_flags:\n if flag == path_flag:\n make_next_absolute = True\n break\n\n if flag.startswith( path_flag ):\n path = flag[ len( path_flag ): ]\n new_flag = flag[:len(path_flag)] + GetAbsolutePath(path, working_directory)\n break\n\n if new_flag:\n new_flags.append( new_flag )\n return new_flags\n\n\ndef IsHeaderFile( filename ):\n extension = os.path.splitext( filename )[ 1 ]\n return extension.lower() in [ '.h', '.hxx', '.hpp', '.hh' ]\n\n\ndef GetCompilationInfoForFile( filename ):\n # The compilation_commands.json file generated by CMake does not have entries\n # for header files. So we do our best by asking the db for flags for a\n # corresponding source file, if any. 
If one exists, the flags for that file\n # should be good enough.\n if IsHeaderFile( filename ):\n basename = os.path.splitext( filename )[ 0 ]\n for extension in SOURCE_EXTENSIONS:\n replacement_file = basename + extension\n if os.path.exists( replacement_file ):\n compilation_info = database.GetCompilationInfoForFile( replacement_file )\n if compilation_info.compiler_flags_:\n return compilation_info\n return None\n return database.GetCompilationInfoForFile( filename )\n\n\ndef Settings( filename, **kwargs ):\n relative_to = None\n compiler_flags = None\n\n if database:\n # Bear in mind that compilation_info.compiler_flags_ does NOT return a\n # python list, but a \"list-like\" StringVec object\n compilation_info = GetCompilationInfoForFile( filename )\n if compilation_info is None:\n relative_to = DirectoryOfThisScript()\n compiler_flags = flags\n else:\n relative_to = compilation_info.compiler_working_dir_\n compiler_flags = compilation_info.compiler_flags_\n\n else:\n relative_to = DirectoryOfThisScript()\n compiler_flags = flags\n\n final_flags = MakeRelativePathsInFlagsAbsolute( compiler_flags, relative_to )\n for flag in final_flags:\n if flag.startswith(\"-W\"):\n final_flags.remove(flag)\n _logger.info(\"Final flags for %s are %s\" % (filename, ' '.join(final_flags)))\n\n return {{\n 'flags': final_flags + [\"-I/usr/include\", \"-I/usr/include/c++/{cxx_version}\"],\n 'do_cache': True\n }}\n'''\n\n @property\n def filename(self):\n pass\n\n @property\n def content(self):\n def prefixed(prefix, values):\n return [prefix + x for x in values]\n\n conan_flags = {\n \"includes\": prefixed(\"-isystem\", self.deps_build_info.include_paths),\n \"defines\": prefixed(\"-D\", self.deps_build_info.defines),\n \"flags\": self.deps_build_info.cxxflags\n }\n\n cxx_version = ''\n try:\n cxx_version = str(self.settings.compiler.version).split('.')[0]\n except Exception:\n pass\n\n ycm_data = self.template.format(cxx_version=cxx_version)\n return {\"conan_ycm_extra_conf.py\": ycm_data,\n \"conan_ycm_flags.json\": json.dumps(conan_flags, indent=2)}\n", "path": "conans/client/generators/ycm.py"}]}
| 3,511 | 109 |
gh_patches_debug_4220
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-6586
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clean up outdated references to Python 3.5
*This is a good first issue for new contributors to take on; if you have any questions, please ask on the task or in our [Gitter room](https://gitter.im/freedomofpress/securedrop)!*
## Description
SecureDrop now runs on focal, which uses Python 3.8. But there are still references to Python 3.5 that need to be cleaned up. Some should be dropped outright, others should be switched to 3.8.
Some examples:
```
$ rg python3\\.5
install_files/securedrop-grsec-focal/opt/securedrop/paxctld.conf
98:/usr/bin/python3.5 E
molecule/testinfra/vars/app-qubes-staging.yml
13:securedrop_venv_site_packages: "{{ securedrop_venv }}/lib/python3.5/site-packages"
molecule/testinfra/vars/prodVM.yml
12:securedrop_venv_site_packages: "/opt/venvs/securedrop-app-code/lib/python3.5/site-packages"
install_files/ansible-base/roles/build-securedrop-app-code-deb-pkg/files/usr.sbin.apache2
71: /etc/python3.5/sitecustomize.py r,
109: /usr/local/lib/python3.5/dist-packages/ r,
117: /opt/venvs/securedrop-app-code/lib/python3.5/ r,
118: /opt/venvs/securedrop-app-code/lib/python3.5/** rm,
securedrop/scripts/rqrequeue
9:sys.path.insert(0, "/opt/venvs/securedrop-app-code/lib/python3.5/site-packages") # noqa: E402
securedrop/scripts/shredder
14: 0, "/opt/venvs/securedrop-app-code/lib/python3.5/site-packages"
securedrop/scripts/source_deleter
14: 0, "/opt/venvs/securedrop-app-code/lib/python3.5/site-packages"
$ rg 3\\.5 --type=py
molecule/builder-focal/tests/test_build_dependencies.py
6:SECUREDROP_PYTHON_VERSION = os.environ.get("SECUREDROP_PYTHON_VERSION", "3.5")
setup.py
14: python_requires=">=3.5",
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import setuptools
2
3 long_description = "The SecureDrop whistleblower platform."
4
5 setuptools.setup(
6 name="securedrop-app-code",
7 version="2.5.0~rc1",
8 author="Freedom of the Press Foundation",
9 author_email="[email protected]",
10 description="SecureDrop Server",
11 long_description=long_description,
12 long_description_content_type="text/markdown",
13 license="AGPLv3+",
14 python_requires=">=3.5",
15 url="https://github.com/freedomofpress/securedrop",
16 classifiers=(
17 "Development Status :: 5 - Stable",
18 "Programming Language :: Python :: 3",
19 "Topic :: Software Development :: Libraries :: Python Modules",
20 "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
21 "Intended Audience :: Developers",
22 "Operating System :: OS Independent",
23 ),
24 )
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -11,7 +11,7 @@
long_description=long_description,
long_description_content_type="text/markdown",
license="AGPLv3+",
- python_requires=">=3.5",
+ python_requires=">=3.8",
url="https://github.com/freedomofpress/securedrop",
classifiers=(
"Development Status :: 5 - Stable",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -11,7 +11,7 @@\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n license=\"AGPLv3+\",\n- python_requires=\">=3.5\",\n+ python_requires=\">=3.8\",\n url=\"https://github.com/freedomofpress/securedrop\",\n classifiers=(\n \"Development Status :: 5 - Stable\",\n", "issue": "Clean up outdated references to Python 3.5\n*This is a good first issue for new contributors to take on, if you have any questions, please ask on the task or in our [Gitter room](https://gitter.im/freedomofpress/securedrop)!*\r\n\r\n## Description\r\n\r\nSecureDrop now runs on focal, which uses Python 3.8. But there are still references to Python 3.5 that need to be cleaned up. Some should be dropped outright, others should be switched to 3.8.\r\n\r\n\r\nSome examples:\r\n```\r\n$ rg python3\\\\.5\r\ninstall_files/securedrop-grsec-focal/opt/securedrop/paxctld.conf\r\n98:/usr/bin/python3.5\t\tE\r\n\r\nmolecule/testinfra/vars/app-qubes-staging.yml\r\n13:securedrop_venv_site_packages: \"{{ securedrop_venv }}/lib/python3.5/site-packages\"\r\n\r\nmolecule/testinfra/vars/prodVM.yml\r\n12:securedrop_venv_site_packages: \"/opt/venvs/securedrop-app-code/lib/python3.5/site-packages\"\r\n\r\ninstall_files/ansible-base/roles/build-securedrop-app-code-deb-pkg/files/usr.sbin.apache2\r\n71: /etc/python3.5/sitecustomize.py r,\r\n109: /usr/local/lib/python3.5/dist-packages/ r,\r\n117: /opt/venvs/securedrop-app-code/lib/python3.5/ r,\r\n118: /opt/venvs/securedrop-app-code/lib/python3.5/** rm,\r\n\r\nsecuredrop/scripts/rqrequeue\r\n9:sys.path.insert(0, \"/opt/venvs/securedrop-app-code/lib/python3.5/site-packages\") # noqa: E402\r\n\r\nsecuredrop/scripts/shredder\r\n14: 0, \"/opt/venvs/securedrop-app-code/lib/python3.5/site-packages\"\r\n\r\nsecuredrop/scripts/source_deleter\r\n14: 0, \"/opt/venvs/securedrop-app-code/lib/python3.5/site-packages\"\r\n$ rg 3\\\\.5 --type=py\r\nmolecule/builder-focal/tests/test_build_dependencies.py\r\n6:SECUREDROP_PYTHON_VERSION = os.environ.get(\"SECUREDROP_PYTHON_VERSION\", \"3.5\")\r\n\r\nsetup.py\r\n14: python_requires=\">=3.5\",\r\n```\n", "before_files": [{"content": "import setuptools\n\nlong_description = \"The SecureDrop whistleblower platform.\"\n\nsetuptools.setup(\n name=\"securedrop-app-code\",\n version=\"2.5.0~rc1\",\n author=\"Freedom of the Press Foundation\",\n author_email=\"[email protected]\",\n description=\"SecureDrop Server\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n license=\"AGPLv3+\",\n python_requires=\">=3.5\",\n url=\"https://github.com/freedomofpress/securedrop\",\n classifiers=(\n \"Development Status :: 5 - Stable\",\n \"Programming Language :: Python :: 3\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Intended Audience :: Developers\",\n \"Operating System :: OS Independent\",\n ),\n)\n", "path": "setup.py"}], "after_files": [{"content": "import setuptools\n\nlong_description = \"The SecureDrop whistleblower platform.\"\n\nsetuptools.setup(\n name=\"securedrop-app-code\",\n version=\"2.5.0~rc1\",\n author=\"Freedom of the Press Foundation\",\n author_email=\"[email protected]\",\n description=\"SecureDrop Server\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n license=\"AGPLv3+\",\n python_requires=\">=3.8\",\n 
url=\"https://github.com/freedomofpress/securedrop\",\n classifiers=(\n \"Development Status :: 5 - Stable\",\n \"Programming Language :: Python :: 3\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Intended Audience :: Developers\",\n \"Operating System :: OS Independent\",\n ),\n)\n", "path": "setup.py"}]}
| 1,001 | 107 |
gh_patches_debug_45504
|
rasdani/github-patches
|
git_diff
|
Project-MONAI__MONAI-2062
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support to move data to `device` after inverting
**Is your feature request related to a problem? Please describe.**
Need to enhance the `TransformInverter` handler to move data to the expected `device`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `monai/handlers/transform_inverter.py`
Content:
```
1 # Copyright 2020 - 2021 MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 import warnings
13 from copy import deepcopy
14 from typing import TYPE_CHECKING, Callable, Optional, Sequence, Union
15
16 from torch.utils.data import DataLoader as TorchDataLoader
17
18 from monai.data import BatchInverseTransform
19 from monai.data.utils import no_collation
20 from monai.engines.utils import CommonKeys, IterationEvents
21 from monai.transforms import InvertibleTransform, ToTensor, allow_missing_keys_mode, convert_inverse_interp_mode
22 from monai.utils import InverseKeys, ensure_tuple, ensure_tuple_rep, exact_version, optional_import
23
24 Events, _ = optional_import("ignite.engine", "0.4.4", exact_version, "Events")
25 if TYPE_CHECKING:
26 from ignite.engine import Engine
27 else:
28 Engine, _ = optional_import("ignite.engine", "0.4.4", exact_version, "Engine")
29
30
31 class TransformInverter:
32 """
33 Ignite handler to automatically invert `transforms`.
34 It takes `engine.state.output` as the input data and uses the transforms information from `engine.state.batch`.
35 The outputs are stored in `engine.state.output` with key: "{output_key}_{postfix}".
36 """
37
38 def __init__(
39 self,
40 transform: InvertibleTransform,
41 loader: TorchDataLoader,
42 output_keys: Union[str, Sequence[str]] = CommonKeys.PRED,
43 batch_keys: Union[str, Sequence[str]] = CommonKeys.IMAGE,
44 meta_key_postfix: str = "meta_dict",
45 collate_fn: Optional[Callable] = no_collation,
46 postfix: str = "inverted",
47 nearest_interp: Union[bool, Sequence[bool]] = True,
48 post_func: Union[Callable, Sequence[Callable]] = lambda x: x,
49 num_workers: Optional[int] = 0,
50 ) -> None:
51 """
52 Args:
53 transform: a callable data transform on input data.
54 loader: data loader used to run transforms and generate the batch of data.
55 output_keys: the key of expected data in `ignite.engine.output`, invert transforms on it.
56 it also can be a list of keys, will invert transform for each of them. Default to "pred".
57 batch_keys: the key of input data in `ignite.engine.batch`. will get the applied transforms
58 for this input data, then invert them for the expected data with `output_keys`.
59 It can also be a list of keys, each matches to the `output_keys` data. default to "image".
60 meta_key_postfix: use `{batch_key}_{postfix}` to to fetch the meta data according to the key data,
61 default is `meta_dict`, the meta data is a dictionary object.
62 For example, to handle key `image`, read/write affine matrices from the
63 metadata `image_meta_dict` dictionary's `affine` field.
64 collate_fn: how to collate data after inverse transformations.
65 default won't do any collation, so the output will be a list of size batch size.
66 postfix: will save the inverted result into `ignite.engine.output` with key `{output_key}_{postfix}`.
67 nearest_interp: whether to use `nearest` interpolation mode when inverting the spatial transforms,
68 default to `True`. If `False`, use the same interpolation mode as the original transform.
69 it also can be a list of bool, each matches to the `output_keys` data.
70 post_func: post processing for the inverted data, should be a callable function.
71 it also can be a list of callable, each matches to the `output_keys` data.
72 num_workers: number of workers when run data loader for inverse transforms,
73 default to 0 as only run one iteration and multi-processing may be even slower.
74 Set to `None`, to use the `num_workers` of the input transform data loader.
75
76 """
77 self.transform = transform
78 self.inverter = BatchInverseTransform(
79 transform=transform,
80 loader=loader,
81 collate_fn=collate_fn,
82 num_workers=num_workers,
83 )
84 self.output_keys = ensure_tuple(output_keys)
85 self.batch_keys = ensure_tuple_rep(batch_keys, len(self.output_keys))
86 self.meta_key_postfix = meta_key_postfix
87 self.postfix = postfix
88 self.nearest_interp = ensure_tuple_rep(nearest_interp, len(self.output_keys))
89 self.post_func = ensure_tuple_rep(post_func, len(self.output_keys))
90 self._totensor = ToTensor()
91
92 def attach(self, engine: Engine) -> None:
93 """
94 Args:
95 engine: Ignite Engine, it can be a trainer, validator or evaluator.
96 """
97 engine.add_event_handler(IterationEvents.MODEL_COMPLETED, self)
98
99 def __call__(self, engine: Engine) -> None:
100 """
101 Args:
102 engine: Ignite Engine, it can be a trainer, validator or evaluator.
103 """
104 for output_key, batch_key, nearest_interp, post_funct in zip(
105 self.output_keys, self.batch_keys, self.nearest_interp, self.post_func
106 ):
107 transform_key = batch_key + InverseKeys.KEY_SUFFIX
108 if transform_key not in engine.state.batch:
109 warnings.warn(f"all the transforms on `{batch_key}` are not InvertibleTransform.")
110 continue
111
112 transform_info = engine.state.batch[transform_key]
113 if nearest_interp:
114 transform_info = convert_inverse_interp_mode(
115 trans_info=deepcopy(transform_info),
116 mode="nearest",
117 align_corners=None,
118 )
119
120 segs_dict = {
121 batch_key: engine.state.output[output_key].detach().cpu(),
122 transform_key: transform_info,
123 }
124 meta_dict_key = f"{batch_key}_{self.meta_key_postfix}"
125 if meta_dict_key in engine.state.batch:
126 segs_dict[meta_dict_key] = engine.state.batch[meta_dict_key]
127
128 with allow_missing_keys_mode(self.transform): # type: ignore
129 inverted_key = f"{output_key}_{self.postfix}"
130 engine.state.output[inverted_key] = [
131 post_funct(self._totensor(i[batch_key])) for i in self.inverter(segs_dict)
132 ]
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/monai/handlers/transform_inverter.py b/monai/handlers/transform_inverter.py
--- a/monai/handlers/transform_inverter.py
+++ b/monai/handlers/transform_inverter.py
@@ -13,6 +13,7 @@
from copy import deepcopy
from typing import TYPE_CHECKING, Callable, Optional, Sequence, Union
+import torch
from torch.utils.data import DataLoader as TorchDataLoader
from monai.data import BatchInverseTransform
@@ -32,7 +33,7 @@
"""
Ignite handler to automatically invert `transforms`.
It takes `engine.state.output` as the input data and uses the transforms information from `engine.state.batch`.
- The outputs are stored in `engine.state.output` with key: "{output_key}_{postfix}".
+ The inverted results are stored in `engine.state.output` with key: "{output_key}_{postfix}".
"""
def __init__(
@@ -45,6 +46,8 @@
collate_fn: Optional[Callable] = no_collation,
postfix: str = "inverted",
nearest_interp: Union[bool, Sequence[bool]] = True,
+ to_tensor: Union[bool, Sequence[bool]] = True,
+ device: Union[Union[str, torch.device], Sequence[Union[str, torch.device]]] = "cpu",
post_func: Union[Callable, Sequence[Callable]] = lambda x: x,
num_workers: Optional[int] = 0,
) -> None:
@@ -67,6 +70,11 @@
nearest_interp: whether to use `nearest` interpolation mode when inverting the spatial transforms,
default to `True`. If `False`, use the same interpolation mode as the original transform.
it also can be a list of bool, each matches to the `output_keys` data.
+ to_tensor: whether to convert the inverted data into PyTorch Tensor first, default to `True`.
+ it also can be a list of bool, each matches to the `output_keys` data.
+ device: if converted to Tensor, move the inverted results to target device before `post_func`,
+ default to "cpu", it also can be a list of string or `torch.device`,
+ each matches to the `output_keys` data.
post_func: post processing for the inverted data, should be a callable function.
it also can be a list of callable, each matches to the `output_keys` data.
num_workers: number of workers when run data loader for inverse transforms,
@@ -86,6 +94,8 @@
self.meta_key_postfix = meta_key_postfix
self.postfix = postfix
self.nearest_interp = ensure_tuple_rep(nearest_interp, len(self.output_keys))
+ self.to_tensor = ensure_tuple_rep(to_tensor, len(self.output_keys))
+ self.device = ensure_tuple_rep(device, len(self.output_keys))
self.post_func = ensure_tuple_rep(post_func, len(self.output_keys))
self._totensor = ToTensor()
@@ -101,8 +111,8 @@
Args:
engine: Ignite Engine, it can be a trainer, validator or evaluator.
"""
- for output_key, batch_key, nearest_interp, post_funct in zip(
- self.output_keys, self.batch_keys, self.nearest_interp, self.post_func
+ for output_key, batch_key, nearest_interp, to_tensor, device, post_func in zip(
+ self.output_keys, self.batch_keys, self.nearest_interp, self.to_tensor, self.device, self.post_func
):
transform_key = batch_key + InverseKeys.KEY_SUFFIX
if transform_key not in engine.state.batch:
@@ -118,7 +128,7 @@
)
segs_dict = {
- batch_key: engine.state.output[output_key].detach().cpu(),
+ batch_key: engine.state.output[output_key],
transform_key: transform_info,
}
meta_dict_key = f"{batch_key}_{self.meta_key_postfix}"
@@ -128,5 +138,6 @@
with allow_missing_keys_mode(self.transform): # type: ignore
inverted_key = f"{output_key}_{self.postfix}"
engine.state.output[inverted_key] = [
- post_funct(self._totensor(i[batch_key])) for i in self.inverter(segs_dict)
+ post_func(self._totensor(i[batch_key]).to(device) if to_tensor else i[batch_key])
+ for i in self.inverter(segs_dict)
]
|
{"golden_diff": "diff --git a/monai/handlers/transform_inverter.py b/monai/handlers/transform_inverter.py\n--- a/monai/handlers/transform_inverter.py\n+++ b/monai/handlers/transform_inverter.py\n@@ -13,6 +13,7 @@\n from copy import deepcopy\n from typing import TYPE_CHECKING, Callable, Optional, Sequence, Union\n \n+import torch\n from torch.utils.data import DataLoader as TorchDataLoader\n \n from monai.data import BatchInverseTransform\n@@ -32,7 +33,7 @@\n \"\"\"\n Ignite handler to automatically invert `transforms`.\n It takes `engine.state.output` as the input data and uses the transforms information from `engine.state.batch`.\n- The outputs are stored in `engine.state.output` with key: \"{output_key}_{postfix}\".\n+ The inverted results are stored in `engine.state.output` with key: \"{output_key}_{postfix}\".\n \"\"\"\n \n def __init__(\n@@ -45,6 +46,8 @@\n collate_fn: Optional[Callable] = no_collation,\n postfix: str = \"inverted\",\n nearest_interp: Union[bool, Sequence[bool]] = True,\n+ to_tensor: Union[bool, Sequence[bool]] = True,\n+ device: Union[Union[str, torch.device], Sequence[Union[str, torch.device]]] = \"cpu\",\n post_func: Union[Callable, Sequence[Callable]] = lambda x: x,\n num_workers: Optional[int] = 0,\n ) -> None:\n@@ -67,6 +70,11 @@\n nearest_interp: whether to use `nearest` interpolation mode when inverting the spatial transforms,\n default to `True`. If `False`, use the same interpolation mode as the original transform.\n it also can be a list of bool, each matches to the `output_keys` data.\n+ to_tensor: whether to convert the inverted data into PyTorch Tensor first, default to `True`.\n+ it also can be a list of bool, each matches to the `output_keys` data.\n+ device: if converted to Tensor, move the inverted results to target device before `post_func`,\n+ default to \"cpu\", it also can be a list of string or `torch.device`,\n+ each matches to the `output_keys` data.\n post_func: post processing for the inverted data, should be a callable function.\n it also can be a list of callable, each matches to the `output_keys` data.\n num_workers: number of workers when run data loader for inverse transforms,\n@@ -86,6 +94,8 @@\n self.meta_key_postfix = meta_key_postfix\n self.postfix = postfix\n self.nearest_interp = ensure_tuple_rep(nearest_interp, len(self.output_keys))\n+ self.to_tensor = ensure_tuple_rep(to_tensor, len(self.output_keys))\n+ self.device = ensure_tuple_rep(device, len(self.output_keys))\n self.post_func = ensure_tuple_rep(post_func, len(self.output_keys))\n self._totensor = ToTensor()\n \n@@ -101,8 +111,8 @@\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n- for output_key, batch_key, nearest_interp, post_funct in zip(\n- self.output_keys, self.batch_keys, self.nearest_interp, self.post_func\n+ for output_key, batch_key, nearest_interp, to_tensor, device, post_func in zip(\n+ self.output_keys, self.batch_keys, self.nearest_interp, self.to_tensor, self.device, self.post_func\n ):\n transform_key = batch_key + InverseKeys.KEY_SUFFIX\n if transform_key not in engine.state.batch:\n@@ -118,7 +128,7 @@\n )\n \n segs_dict = {\n- batch_key: engine.state.output[output_key].detach().cpu(),\n+ batch_key: engine.state.output[output_key],\n transform_key: transform_info,\n }\n meta_dict_key = f\"{batch_key}_{self.meta_key_postfix}\"\n@@ -128,5 +138,6 @@\n with allow_missing_keys_mode(self.transform): # type: ignore\n inverted_key = f\"{output_key}_{self.postfix}\"\n engine.state.output[inverted_key] = [\n- 
post_funct(self._totensor(i[batch_key])) for i in self.inverter(segs_dict)\n+ post_func(self._totensor(i[batch_key]).to(device) if to_tensor else i[batch_key])\n+ for i in self.inverter(segs_dict)\n ]\n", "issue": "Add support to move data to `device` after inverting\n**Is your feature request related to a problem? Please describe.**\r\nNeed to enhance the `TransformInverter` handler to move data to expected `device`.\r\n\n", "before_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport warnings\nfrom copy import deepcopy\nfrom typing import TYPE_CHECKING, Callable, Optional, Sequence, Union\n\nfrom torch.utils.data import DataLoader as TorchDataLoader\n\nfrom monai.data import BatchInverseTransform\nfrom monai.data.utils import no_collation\nfrom monai.engines.utils import CommonKeys, IterationEvents\nfrom monai.transforms import InvertibleTransform, ToTensor, allow_missing_keys_mode, convert_inverse_interp_mode\nfrom monai.utils import InverseKeys, ensure_tuple, ensure_tuple_rep, exact_version, optional_import\n\nEvents, _ = optional_import(\"ignite.engine\", \"0.4.4\", exact_version, \"Events\")\nif TYPE_CHECKING:\n from ignite.engine import Engine\nelse:\n Engine, _ = optional_import(\"ignite.engine\", \"0.4.4\", exact_version, \"Engine\")\n\n\nclass TransformInverter:\n \"\"\"\n Ignite handler to automatically invert `transforms`.\n It takes `engine.state.output` as the input data and uses the transforms information from `engine.state.batch`.\n The outputs are stored in `engine.state.output` with key: \"{output_key}_{postfix}\".\n \"\"\"\n\n def __init__(\n self,\n transform: InvertibleTransform,\n loader: TorchDataLoader,\n output_keys: Union[str, Sequence[str]] = CommonKeys.PRED,\n batch_keys: Union[str, Sequence[str]] = CommonKeys.IMAGE,\n meta_key_postfix: str = \"meta_dict\",\n collate_fn: Optional[Callable] = no_collation,\n postfix: str = \"inverted\",\n nearest_interp: Union[bool, Sequence[bool]] = True,\n post_func: Union[Callable, Sequence[Callable]] = lambda x: x,\n num_workers: Optional[int] = 0,\n ) -> None:\n \"\"\"\n Args:\n transform: a callable data transform on input data.\n loader: data loader used to run transforms and generate the batch of data.\n output_keys: the key of expected data in `ignite.engine.output`, invert transforms on it.\n it also can be a list of keys, will invert transform for each of them. Default to \"pred\".\n batch_keys: the key of input data in `ignite.engine.batch`. will get the applied transforms\n for this input data, then invert them for the expected data with `output_keys`.\n It can also be a list of keys, each matches to the `output_keys` data. 
default to \"image\".\n meta_key_postfix: use `{batch_key}_{postfix}` to to fetch the meta data according to the key data,\n default is `meta_dict`, the meta data is a dictionary object.\n For example, to handle key `image`, read/write affine matrices from the\n metadata `image_meta_dict` dictionary's `affine` field.\n collate_fn: how to collate data after inverse transformations.\n default won't do any collation, so the output will be a list of size batch size.\n postfix: will save the inverted result into `ignite.engine.output` with key `{output_key}_{postfix}`.\n nearest_interp: whether to use `nearest` interpolation mode when inverting the spatial transforms,\n default to `True`. If `False`, use the same interpolation mode as the original transform.\n it also can be a list of bool, each matches to the `output_keys` data.\n post_func: post processing for the inverted data, should be a callable function.\n it also can be a list of callable, each matches to the `output_keys` data.\n num_workers: number of workers when run data loader for inverse transforms,\n default to 0 as only run one iteration and multi-processing may be even slower.\n Set to `None`, to use the `num_workers` of the input transform data loader.\n\n \"\"\"\n self.transform = transform\n self.inverter = BatchInverseTransform(\n transform=transform,\n loader=loader,\n collate_fn=collate_fn,\n num_workers=num_workers,\n )\n self.output_keys = ensure_tuple(output_keys)\n self.batch_keys = ensure_tuple_rep(batch_keys, len(self.output_keys))\n self.meta_key_postfix = meta_key_postfix\n self.postfix = postfix\n self.nearest_interp = ensure_tuple_rep(nearest_interp, len(self.output_keys))\n self.post_func = ensure_tuple_rep(post_func, len(self.output_keys))\n self._totensor = ToTensor()\n\n def attach(self, engine: Engine) -> None:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n engine.add_event_handler(IterationEvents.MODEL_COMPLETED, self)\n\n def __call__(self, engine: Engine) -> None:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n for output_key, batch_key, nearest_interp, post_funct in zip(\n self.output_keys, self.batch_keys, self.nearest_interp, self.post_func\n ):\n transform_key = batch_key + InverseKeys.KEY_SUFFIX\n if transform_key not in engine.state.batch:\n warnings.warn(f\"all the transforms on `{batch_key}` are not InvertibleTransform.\")\n continue\n\n transform_info = engine.state.batch[transform_key]\n if nearest_interp:\n transform_info = convert_inverse_interp_mode(\n trans_info=deepcopy(transform_info),\n mode=\"nearest\",\n align_corners=None,\n )\n\n segs_dict = {\n batch_key: engine.state.output[output_key].detach().cpu(),\n transform_key: transform_info,\n }\n meta_dict_key = f\"{batch_key}_{self.meta_key_postfix}\"\n if meta_dict_key in engine.state.batch:\n segs_dict[meta_dict_key] = engine.state.batch[meta_dict_key]\n\n with allow_missing_keys_mode(self.transform): # type: ignore\n inverted_key = f\"{output_key}_{self.postfix}\"\n engine.state.output[inverted_key] = [\n post_funct(self._totensor(i[batch_key])) for i in self.inverter(segs_dict)\n ]\n", "path": "monai/handlers/transform_inverter.py"}], "after_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless 
required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport warnings\nfrom copy import deepcopy\nfrom typing import TYPE_CHECKING, Callable, Optional, Sequence, Union\n\nimport torch\nfrom torch.utils.data import DataLoader as TorchDataLoader\n\nfrom monai.data import BatchInverseTransform\nfrom monai.data.utils import no_collation\nfrom monai.engines.utils import CommonKeys, IterationEvents\nfrom monai.transforms import InvertibleTransform, ToTensor, allow_missing_keys_mode, convert_inverse_interp_mode\nfrom monai.utils import InverseKeys, ensure_tuple, ensure_tuple_rep, exact_version, optional_import\n\nEvents, _ = optional_import(\"ignite.engine\", \"0.4.4\", exact_version, \"Events\")\nif TYPE_CHECKING:\n from ignite.engine import Engine\nelse:\n Engine, _ = optional_import(\"ignite.engine\", \"0.4.4\", exact_version, \"Engine\")\n\n\nclass TransformInverter:\n \"\"\"\n Ignite handler to automatically invert `transforms`.\n It takes `engine.state.output` as the input data and uses the transforms information from `engine.state.batch`.\n The inverted results are stored in `engine.state.output` with key: \"{output_key}_{postfix}\".\n \"\"\"\n\n def __init__(\n self,\n transform: InvertibleTransform,\n loader: TorchDataLoader,\n output_keys: Union[str, Sequence[str]] = CommonKeys.PRED,\n batch_keys: Union[str, Sequence[str]] = CommonKeys.IMAGE,\n meta_key_postfix: str = \"meta_dict\",\n collate_fn: Optional[Callable] = no_collation,\n postfix: str = \"inverted\",\n nearest_interp: Union[bool, Sequence[bool]] = True,\n to_tensor: Union[bool, Sequence[bool]] = True,\n device: Union[Union[str, torch.device], Sequence[Union[str, torch.device]]] = \"cpu\",\n post_func: Union[Callable, Sequence[Callable]] = lambda x: x,\n num_workers: Optional[int] = 0,\n ) -> None:\n \"\"\"\n Args:\n transform: a callable data transform on input data.\n loader: data loader used to run transforms and generate the batch of data.\n output_keys: the key of expected data in `ignite.engine.output`, invert transforms on it.\n it also can be a list of keys, will invert transform for each of them. Default to \"pred\".\n batch_keys: the key of input data in `ignite.engine.batch`. will get the applied transforms\n for this input data, then invert them for the expected data with `output_keys`.\n It can also be a list of keys, each matches to the `output_keys` data. default to \"image\".\n meta_key_postfix: use `{batch_key}_{postfix}` to to fetch the meta data according to the key data,\n default is `meta_dict`, the meta data is a dictionary object.\n For example, to handle key `image`, read/write affine matrices from the\n metadata `image_meta_dict` dictionary's `affine` field.\n collate_fn: how to collate data after inverse transformations.\n default won't do any collation, so the output will be a list of size batch size.\n postfix: will save the inverted result into `ignite.engine.output` with key `{output_key}_{postfix}`.\n nearest_interp: whether to use `nearest` interpolation mode when inverting the spatial transforms,\n default to `True`. 
If `False`, use the same interpolation mode as the original transform.\n it also can be a list of bool, each matches to the `output_keys` data.\n to_tensor: whether to convert the inverted data into PyTorch Tensor first, default to `True`.\n it also can be a list of bool, each matches to the `output_keys` data.\n device: if converted to Tensor, move the inverted results to target device before `post_func`,\n default to \"cpu\", it also can be a list of string or `torch.device`,\n each matches to the `output_keys` data.\n post_func: post processing for the inverted data, should be a callable function.\n it also can be a list of callable, each matches to the `output_keys` data.\n num_workers: number of workers when run data loader for inverse transforms,\n default to 0 as only run one iteration and multi-processing may be even slower.\n Set to `None`, to use the `num_workers` of the input transform data loader.\n\n \"\"\"\n self.transform = transform\n self.inverter = BatchInverseTransform(\n transform=transform,\n loader=loader,\n collate_fn=collate_fn,\n num_workers=num_workers,\n )\n self.output_keys = ensure_tuple(output_keys)\n self.batch_keys = ensure_tuple_rep(batch_keys, len(self.output_keys))\n self.meta_key_postfix = meta_key_postfix\n self.postfix = postfix\n self.nearest_interp = ensure_tuple_rep(nearest_interp, len(self.output_keys))\n self.to_tensor = ensure_tuple_rep(to_tensor, len(self.output_keys))\n self.device = ensure_tuple_rep(device, len(self.output_keys))\n self.post_func = ensure_tuple_rep(post_func, len(self.output_keys))\n self._totensor = ToTensor()\n\n def attach(self, engine: Engine) -> None:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n engine.add_event_handler(IterationEvents.MODEL_COMPLETED, self)\n\n def __call__(self, engine: Engine) -> None:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n for output_key, batch_key, nearest_interp, to_tensor, device, post_func in zip(\n self.output_keys, self.batch_keys, self.nearest_interp, self.to_tensor, self.device, self.post_func\n ):\n transform_key = batch_key + InverseKeys.KEY_SUFFIX\n if transform_key not in engine.state.batch:\n warnings.warn(f\"all the transforms on `{batch_key}` are not InvertibleTransform.\")\n continue\n\n transform_info = engine.state.batch[transform_key]\n if nearest_interp:\n transform_info = convert_inverse_interp_mode(\n trans_info=deepcopy(transform_info),\n mode=\"nearest\",\n align_corners=None,\n )\n\n segs_dict = {\n batch_key: engine.state.output[output_key],\n transform_key: transform_info,\n }\n meta_dict_key = f\"{batch_key}_{self.meta_key_postfix}\"\n if meta_dict_key in engine.state.batch:\n segs_dict[meta_dict_key] = engine.state.batch[meta_dict_key]\n\n with allow_missing_keys_mode(self.transform): # type: ignore\n inverted_key = f\"{output_key}_{self.postfix}\"\n engine.state.output[inverted_key] = [\n post_func(self._totensor(i[batch_key]).to(device) if to_tensor else i[batch_key])\n for i in self.inverter(segs_dict)\n ]\n", "path": "monai/handlers/transform_inverter.py"}]}
| 2,030 | 1,004 |
gh_patches_debug_32132
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-4016
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
internal: allow Sentry DSN to be configured by end user
## Details
The existing Sentry instrumentation is fairly valuable for performance optimization purposes, but having Sentry hard-coded to report to `https://sentry.beryju.org` makes it a pain to override during local development.
## Changes
### New Features
* Allow the Sentry DSN to be overridden via `AUTHENTIK_ERROR_REPORTING__SENTRY_DSN`.
## Additional
I updated the configuration docs to list the new configuration setting, and _slightly_ tweaked the wording of the surrounding documentation to explain the dichotomy between using the default DSN -- useful for debugging issues in collaboration with authentik devs, anonymous performance data, etc. -- and specifying a custom one.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/lib/sentry.py`
Content:
```
1 """authentik sentry integration"""
2 from asyncio.exceptions import CancelledError
3 from typing import Any, Optional
4
5 from billiard.exceptions import SoftTimeLimitExceeded, WorkerLostError
6 from celery.exceptions import CeleryError
7 from channels.middleware import BaseMiddleware
8 from channels_redis.core import ChannelFull
9 from django.conf import settings
10 from django.core.exceptions import ImproperlyConfigured, SuspiciousOperation, ValidationError
11 from django.db import DatabaseError, InternalError, OperationalError, ProgrammingError
12 from django.http.response import Http404
13 from django_redis.exceptions import ConnectionInterrupted
14 from docker.errors import DockerException
15 from h11 import LocalProtocolError
16 from ldap3.core.exceptions import LDAPException
17 from redis.exceptions import ConnectionError as RedisConnectionError
18 from redis.exceptions import RedisError, ResponseError
19 from rest_framework.exceptions import APIException
20 from sentry_sdk import HttpTransport, Hub
21 from sentry_sdk import init as sentry_sdk_init
22 from sentry_sdk.api import set_tag
23 from sentry_sdk.integrations.celery import CeleryIntegration
24 from sentry_sdk.integrations.django import DjangoIntegration
25 from sentry_sdk.integrations.redis import RedisIntegration
26 from sentry_sdk.integrations.threading import ThreadingIntegration
27 from sentry_sdk.tracing import Transaction
28 from structlog.stdlib import get_logger
29 from websockets.exceptions import WebSocketException
30
31 from authentik import __version__, get_build_hash
32 from authentik.lib.config import CONFIG
33 from authentik.lib.utils.http import authentik_user_agent
34 from authentik.lib.utils.reflection import class_to_path, get_env
35
36 LOGGER = get_logger()
37 SENTRY_DSN = "https://[email protected]/8"
38
39
40 class SentryWSMiddleware(BaseMiddleware):
41 """Sentry Websocket middleweare to set the transaction name based on
42 consumer class path"""
43
44 async def __call__(self, scope, receive, send):
45 transaction: Optional[Transaction] = Hub.current.scope.transaction
46 class_path = class_to_path(self.inner.consumer_class)
47 if transaction:
48 transaction.name = class_path
49 return await self.inner(scope, receive, send)
50
51
52 class SentryIgnoredException(Exception):
53 """Base Class for all errors that are suppressed, and not sent to sentry."""
54
55
56 class SentryTransport(HttpTransport):
57 """Custom sentry transport with custom user-agent"""
58
59 def __init__(self, options: dict[str, Any]) -> None:
60 super().__init__(options)
61 self._auth = self.parsed_dsn.to_auth(authentik_user_agent())
62
63
64 def sentry_init(**sentry_init_kwargs):
65 """Configure sentry SDK"""
66 sentry_env = CONFIG.y("error_reporting.environment", "customer")
67 kwargs = {
68 "environment": sentry_env,
69 "send_default_pii": CONFIG.y_bool("error_reporting.send_pii", False),
70 }
71 kwargs.update(**sentry_init_kwargs)
72 # pylint: disable=abstract-class-instantiated
73 sentry_sdk_init(
74 dsn=SENTRY_DSN,
75 integrations=[
76 DjangoIntegration(transaction_style="function_name"),
77 CeleryIntegration(),
78 RedisIntegration(),
79 ThreadingIntegration(propagate_hub=True),
80 ],
81 before_send=before_send,
82 traces_sampler=traces_sampler,
83 release=f"authentik@{__version__}",
84 transport=SentryTransport,
85 **kwargs,
86 )
87 set_tag("authentik.build_hash", get_build_hash("tagged"))
88 set_tag("authentik.env", get_env())
89 set_tag("authentik.component", "backend")
90
91
92 def traces_sampler(sampling_context: dict) -> float:
93 """Custom sampler to ignore certain routes"""
94 path = sampling_context.get("asgi_scope", {}).get("path", "")
95 # Ignore all healthcheck routes
96 if path.startswith("/-/health") or path.startswith("/-/metrics"):
97 return 0
98 return float(CONFIG.y("error_reporting.sample_rate", 0.1))
99
100
101 def before_send(event: dict, hint: dict) -> Optional[dict]:
102 """Check if error is database error, and ignore if so"""
103 # pylint: disable=no-name-in-module
104 from psycopg2.errors import Error
105
106 ignored_classes = (
107 # Inbuilt types
108 KeyboardInterrupt,
109 ConnectionResetError,
110 OSError,
111 PermissionError,
112 # Django Errors
113 Error,
114 ImproperlyConfigured,
115 DatabaseError,
116 OperationalError,
117 InternalError,
118 ProgrammingError,
119 SuspiciousOperation,
120 ValidationError,
121 # Redis errors
122 RedisConnectionError,
123 ConnectionInterrupted,
124 RedisError,
125 ResponseError,
126 # websocket errors
127 ChannelFull,
128 WebSocketException,
129 LocalProtocolError,
130 # rest_framework error
131 APIException,
132 # celery errors
133 WorkerLostError,
134 CeleryError,
135 SoftTimeLimitExceeded,
136 # custom baseclass
137 SentryIgnoredException,
138 # ldap errors
139 LDAPException,
140 # Docker errors
141 DockerException,
142 # End-user errors
143 Http404,
144 # AsyncIO
145 CancelledError,
146 )
147 exc_value = None
148 if "exc_info" in hint:
149 _, exc_value, _ = hint["exc_info"]
150 if isinstance(exc_value, ignored_classes):
151 LOGGER.debug("dropping exception", exc=exc_value)
152 return None
153 if "logger" in event:
154 if event["logger"] in [
155 "kombu",
156 "asyncio",
157 "multiprocessing",
158 "django_redis",
159 "django.security.DisallowedHost",
160 "django_redis.cache",
161 "celery.backends.redis",
162 "celery.worker",
163 "paramiko.transport",
164 ]:
165 return None
166 LOGGER.debug("sending event to sentry", exc=exc_value, source_logger=event.get("logger", None))
167 if settings.DEBUG:
168 return None
169 return event
170
```
Path: `authentik/api/v3/config.py`
Content:
```
1 """core Configs API"""
2 from os import path
3
4 from django.conf import settings
5 from django.db import models
6 from drf_spectacular.utils import extend_schema
7 from rest_framework.fields import (
8 BooleanField,
9 CharField,
10 ChoiceField,
11 FloatField,
12 IntegerField,
13 ListField,
14 )
15 from rest_framework.permissions import AllowAny
16 from rest_framework.request import Request
17 from rest_framework.response import Response
18 from rest_framework.views import APIView
19
20 from authentik.core.api.utils import PassiveSerializer
21 from authentik.events.geo import GEOIP_READER
22 from authentik.lib.config import CONFIG
23
24
25 class Capabilities(models.TextChoices):
26 """Define capabilities which influence which APIs can/should be used"""
27
28 CAN_SAVE_MEDIA = "can_save_media"
29 CAN_GEO_IP = "can_geo_ip"
30 CAN_IMPERSONATE = "can_impersonate"
31 CAN_DEBUG = "can_debug"
32
33
34 class ErrorReportingConfigSerializer(PassiveSerializer):
35 """Config for error reporting"""
36
37 enabled = BooleanField(read_only=True)
38 environment = CharField(read_only=True)
39 send_pii = BooleanField(read_only=True)
40 traces_sample_rate = FloatField(read_only=True)
41
42
43 class ConfigSerializer(PassiveSerializer):
44 """Serialize authentik Config into DRF Object"""
45
46 error_reporting = ErrorReportingConfigSerializer(required=True)
47 capabilities = ListField(child=ChoiceField(choices=Capabilities.choices))
48
49 cache_timeout = IntegerField(required=True)
50 cache_timeout_flows = IntegerField(required=True)
51 cache_timeout_policies = IntegerField(required=True)
52 cache_timeout_reputation = IntegerField(required=True)
53
54
55 class ConfigView(APIView):
56 """Read-only view set that returns the current session's Configs"""
57
58 permission_classes = [AllowAny]
59
60 def get_capabilities(self) -> list[Capabilities]:
61 """Get all capabilities this server instance supports"""
62 caps = []
63 deb_test = settings.DEBUG or settings.TEST
64 if path.ismount(settings.MEDIA_ROOT) or deb_test:
65 caps.append(Capabilities.CAN_SAVE_MEDIA)
66 if GEOIP_READER.enabled:
67 caps.append(Capabilities.CAN_GEO_IP)
68 if CONFIG.y_bool("impersonation"):
69 caps.append(Capabilities.CAN_IMPERSONATE)
70 if settings.DEBUG:
71 caps.append(Capabilities.CAN_DEBUG)
72 return caps
73
74 def get_config(self) -> ConfigSerializer:
75 """Get Config"""
76 return ConfigSerializer(
77 {
78 "error_reporting": {
79 "enabled": CONFIG.y("error_reporting.enabled"),
80 "environment": CONFIG.y("error_reporting.environment"),
81 "send_pii": CONFIG.y("error_reporting.send_pii"),
82 "traces_sample_rate": float(CONFIG.y("error_reporting.sample_rate", 0.4)),
83 },
84 "capabilities": self.get_capabilities(),
85 "cache_timeout": int(CONFIG.y("redis.cache_timeout")),
86 "cache_timeout_flows": int(CONFIG.y("redis.cache_timeout_flows")),
87 "cache_timeout_policies": int(CONFIG.y("redis.cache_timeout_policies")),
88 "cache_timeout_reputation": int(CONFIG.y("redis.cache_timeout_reputation")),
89 }
90 )
91
92 @extend_schema(responses={200: ConfigSerializer(many=False)})
93 def get(self, request: Request) -> Response:
94 """Retrieve public configuration options"""
95 return Response(self.get_config().data)
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/authentik/api/v3/config.py b/authentik/api/v3/config.py
--- a/authentik/api/v3/config.py
+++ b/authentik/api/v3/config.py
@@ -35,6 +35,7 @@
"""Config for error reporting"""
enabled = BooleanField(read_only=True)
+ sentry_dsn = CharField(read_only=True)
environment = CharField(read_only=True)
send_pii = BooleanField(read_only=True)
traces_sample_rate = FloatField(read_only=True)
@@ -77,6 +78,7 @@
{
"error_reporting": {
"enabled": CONFIG.y("error_reporting.enabled"),
+ "sentry_dsn": CONFIG.y("error_reporting.sentry_dsn"),
"environment": CONFIG.y("error_reporting.environment"),
"send_pii": CONFIG.y("error_reporting.send_pii"),
"traces_sample_rate": float(CONFIG.y("error_reporting.sample_rate", 0.4)),
diff --git a/authentik/lib/sentry.py b/authentik/lib/sentry.py
--- a/authentik/lib/sentry.py
+++ b/authentik/lib/sentry.py
@@ -34,7 +34,6 @@
from authentik.lib.utils.reflection import class_to_path, get_env
LOGGER = get_logger()
-SENTRY_DSN = "https://[email protected]/8"
class SentryWSMiddleware(BaseMiddleware):
@@ -71,7 +70,7 @@
kwargs.update(**sentry_init_kwargs)
# pylint: disable=abstract-class-instantiated
sentry_sdk_init(
- dsn=SENTRY_DSN,
+ dsn=CONFIG.y("error_reporting.sentry_dsn"),
integrations=[
DjangoIntegration(transaction_style="function_name"),
CeleryIntegration(),
|
{"golden_diff": "diff --git a/authentik/api/v3/config.py b/authentik/api/v3/config.py\n--- a/authentik/api/v3/config.py\n+++ b/authentik/api/v3/config.py\n@@ -35,6 +35,7 @@\n \"\"\"Config for error reporting\"\"\"\n \n enabled = BooleanField(read_only=True)\n+ sentry_dsn = CharField(read_only=True)\n environment = CharField(read_only=True)\n send_pii = BooleanField(read_only=True)\n traces_sample_rate = FloatField(read_only=True)\n@@ -77,6 +78,7 @@\n {\n \"error_reporting\": {\n \"enabled\": CONFIG.y(\"error_reporting.enabled\"),\n+ \"sentry_dsn\": CONFIG.y(\"error_reporting.sentry_dsn\"),\n \"environment\": CONFIG.y(\"error_reporting.environment\"),\n \"send_pii\": CONFIG.y(\"error_reporting.send_pii\"),\n \"traces_sample_rate\": float(CONFIG.y(\"error_reporting.sample_rate\", 0.4)),\ndiff --git a/authentik/lib/sentry.py b/authentik/lib/sentry.py\n--- a/authentik/lib/sentry.py\n+++ b/authentik/lib/sentry.py\n@@ -34,7 +34,6 @@\n from authentik.lib.utils.reflection import class_to_path, get_env\n \n LOGGER = get_logger()\n-SENTRY_DSN = \"https://[email protected]/8\"\n \n \n class SentryWSMiddleware(BaseMiddleware):\n@@ -71,7 +70,7 @@\n kwargs.update(**sentry_init_kwargs)\n # pylint: disable=abstract-class-instantiated\n sentry_sdk_init(\n- dsn=SENTRY_DSN,\n+ dsn=CONFIG.y(\"error_reporting.sentry_dsn\"),\n integrations=[\n DjangoIntegration(transaction_style=\"function_name\"),\n CeleryIntegration(),\n", "issue": "internal: allow Sentry DSN to be configured by end user\n## Details\r\n\r\nThe existing Sentry instrumentation is fairly valuable for performance optimization purposes, but having Sentry hard-coded to report to `https://sentry.beryju.org` makes it a pain to override during local development.\r\n\r\n## Changes\r\n\r\n### New Features\r\n\r\n* Allow the Sentry DSN to be overridden via `AUTHENTIK_ERROR_REPORTING__SENTRY_DSN`.\r\n\r\n## Additional\r\n\r\nI updated the configuration docs to list the new configuration setting, and _slightly_ tweaked the wording of the surrounding documentation to explain the dichotomy between the default DSN -- useful for debugging issues in collaboration with authentik devs, anonymous performance data, etc -- and specifying a custom one.\n", "before_files": [{"content": "\"\"\"authentik sentry integration\"\"\"\nfrom asyncio.exceptions import CancelledError\nfrom typing import Any, Optional\n\nfrom billiard.exceptions import SoftTimeLimitExceeded, WorkerLostError\nfrom celery.exceptions import CeleryError\nfrom channels.middleware import BaseMiddleware\nfrom channels_redis.core import ChannelFull\nfrom django.conf import settings\nfrom django.core.exceptions import ImproperlyConfigured, SuspiciousOperation, ValidationError\nfrom django.db import DatabaseError, InternalError, OperationalError, ProgrammingError\nfrom django.http.response import Http404\nfrom django_redis.exceptions import ConnectionInterrupted\nfrom docker.errors import DockerException\nfrom h11 import LocalProtocolError\nfrom ldap3.core.exceptions import LDAPException\nfrom redis.exceptions import ConnectionError as RedisConnectionError\nfrom redis.exceptions import RedisError, ResponseError\nfrom rest_framework.exceptions import APIException\nfrom sentry_sdk import HttpTransport, Hub\nfrom sentry_sdk import init as sentry_sdk_init\nfrom sentry_sdk.api import set_tag\nfrom sentry_sdk.integrations.celery import CeleryIntegration\nfrom sentry_sdk.integrations.django import DjangoIntegration\nfrom sentry_sdk.integrations.redis import RedisIntegration\nfrom sentry_sdk.integrations.threading 
import ThreadingIntegration\nfrom sentry_sdk.tracing import Transaction\nfrom structlog.stdlib import get_logger\nfrom websockets.exceptions import WebSocketException\n\nfrom authentik import __version__, get_build_hash\nfrom authentik.lib.config import CONFIG\nfrom authentik.lib.utils.http import authentik_user_agent\nfrom authentik.lib.utils.reflection import class_to_path, get_env\n\nLOGGER = get_logger()\nSENTRY_DSN = \"https://[email protected]/8\"\n\n\nclass SentryWSMiddleware(BaseMiddleware):\n \"\"\"Sentry Websocket middleweare to set the transaction name based on\n consumer class path\"\"\"\n\n async def __call__(self, scope, receive, send):\n transaction: Optional[Transaction] = Hub.current.scope.transaction\n class_path = class_to_path(self.inner.consumer_class)\n if transaction:\n transaction.name = class_path\n return await self.inner(scope, receive, send)\n\n\nclass SentryIgnoredException(Exception):\n \"\"\"Base Class for all errors that are suppressed, and not sent to sentry.\"\"\"\n\n\nclass SentryTransport(HttpTransport):\n \"\"\"Custom sentry transport with custom user-agent\"\"\"\n\n def __init__(self, options: dict[str, Any]) -> None:\n super().__init__(options)\n self._auth = self.parsed_dsn.to_auth(authentik_user_agent())\n\n\ndef sentry_init(**sentry_init_kwargs):\n \"\"\"Configure sentry SDK\"\"\"\n sentry_env = CONFIG.y(\"error_reporting.environment\", \"customer\")\n kwargs = {\n \"environment\": sentry_env,\n \"send_default_pii\": CONFIG.y_bool(\"error_reporting.send_pii\", False),\n }\n kwargs.update(**sentry_init_kwargs)\n # pylint: disable=abstract-class-instantiated\n sentry_sdk_init(\n dsn=SENTRY_DSN,\n integrations=[\n DjangoIntegration(transaction_style=\"function_name\"),\n CeleryIntegration(),\n RedisIntegration(),\n ThreadingIntegration(propagate_hub=True),\n ],\n before_send=before_send,\n traces_sampler=traces_sampler,\n release=f\"authentik@{__version__}\",\n transport=SentryTransport,\n **kwargs,\n )\n set_tag(\"authentik.build_hash\", get_build_hash(\"tagged\"))\n set_tag(\"authentik.env\", get_env())\n set_tag(\"authentik.component\", \"backend\")\n\n\ndef traces_sampler(sampling_context: dict) -> float:\n \"\"\"Custom sampler to ignore certain routes\"\"\"\n path = sampling_context.get(\"asgi_scope\", {}).get(\"path\", \"\")\n # Ignore all healthcheck routes\n if path.startswith(\"/-/health\") or path.startswith(\"/-/metrics\"):\n return 0\n return float(CONFIG.y(\"error_reporting.sample_rate\", 0.1))\n\n\ndef before_send(event: dict, hint: dict) -> Optional[dict]:\n \"\"\"Check if error is database error, and ignore if so\"\"\"\n # pylint: disable=no-name-in-module\n from psycopg2.errors import Error\n\n ignored_classes = (\n # Inbuilt types\n KeyboardInterrupt,\n ConnectionResetError,\n OSError,\n PermissionError,\n # Django Errors\n Error,\n ImproperlyConfigured,\n DatabaseError,\n OperationalError,\n InternalError,\n ProgrammingError,\n SuspiciousOperation,\n ValidationError,\n # Redis errors\n RedisConnectionError,\n ConnectionInterrupted,\n RedisError,\n ResponseError,\n # websocket errors\n ChannelFull,\n WebSocketException,\n LocalProtocolError,\n # rest_framework error\n APIException,\n # celery errors\n WorkerLostError,\n CeleryError,\n SoftTimeLimitExceeded,\n # custom baseclass\n SentryIgnoredException,\n # ldap errors\n LDAPException,\n # Docker errors\n DockerException,\n # End-user errors\n Http404,\n # AsyncIO\n CancelledError,\n )\n exc_value = None\n if \"exc_info\" in hint:\n _, exc_value, _ = hint[\"exc_info\"]\n if 
isinstance(exc_value, ignored_classes):\n LOGGER.debug(\"dropping exception\", exc=exc_value)\n return None\n if \"logger\" in event:\n if event[\"logger\"] in [\n \"kombu\",\n \"asyncio\",\n \"multiprocessing\",\n \"django_redis\",\n \"django.security.DisallowedHost\",\n \"django_redis.cache\",\n \"celery.backends.redis\",\n \"celery.worker\",\n \"paramiko.transport\",\n ]:\n return None\n LOGGER.debug(\"sending event to sentry\", exc=exc_value, source_logger=event.get(\"logger\", None))\n if settings.DEBUG:\n return None\n return event\n", "path": "authentik/lib/sentry.py"}, {"content": "\"\"\"core Configs API\"\"\"\nfrom os import path\n\nfrom django.conf import settings\nfrom django.db import models\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework.fields import (\n BooleanField,\n CharField,\n ChoiceField,\n FloatField,\n IntegerField,\n ListField,\n)\nfrom rest_framework.permissions import AllowAny\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom authentik.core.api.utils import PassiveSerializer\nfrom authentik.events.geo import GEOIP_READER\nfrom authentik.lib.config import CONFIG\n\n\nclass Capabilities(models.TextChoices):\n \"\"\"Define capabilities which influence which APIs can/should be used\"\"\"\n\n CAN_SAVE_MEDIA = \"can_save_media\"\n CAN_GEO_IP = \"can_geo_ip\"\n CAN_IMPERSONATE = \"can_impersonate\"\n CAN_DEBUG = \"can_debug\"\n\n\nclass ErrorReportingConfigSerializer(PassiveSerializer):\n \"\"\"Config for error reporting\"\"\"\n\n enabled = BooleanField(read_only=True)\n environment = CharField(read_only=True)\n send_pii = BooleanField(read_only=True)\n traces_sample_rate = FloatField(read_only=True)\n\n\nclass ConfigSerializer(PassiveSerializer):\n \"\"\"Serialize authentik Config into DRF Object\"\"\"\n\n error_reporting = ErrorReportingConfigSerializer(required=True)\n capabilities = ListField(child=ChoiceField(choices=Capabilities.choices))\n\n cache_timeout = IntegerField(required=True)\n cache_timeout_flows = IntegerField(required=True)\n cache_timeout_policies = IntegerField(required=True)\n cache_timeout_reputation = IntegerField(required=True)\n\n\nclass ConfigView(APIView):\n \"\"\"Read-only view set that returns the current session's Configs\"\"\"\n\n permission_classes = [AllowAny]\n\n def get_capabilities(self) -> list[Capabilities]:\n \"\"\"Get all capabilities this server instance supports\"\"\"\n caps = []\n deb_test = settings.DEBUG or settings.TEST\n if path.ismount(settings.MEDIA_ROOT) or deb_test:\n caps.append(Capabilities.CAN_SAVE_MEDIA)\n if GEOIP_READER.enabled:\n caps.append(Capabilities.CAN_GEO_IP)\n if CONFIG.y_bool(\"impersonation\"):\n caps.append(Capabilities.CAN_IMPERSONATE)\n if settings.DEBUG:\n caps.append(Capabilities.CAN_DEBUG)\n return caps\n\n def get_config(self) -> ConfigSerializer:\n \"\"\"Get Config\"\"\"\n return ConfigSerializer(\n {\n \"error_reporting\": {\n \"enabled\": CONFIG.y(\"error_reporting.enabled\"),\n \"environment\": CONFIG.y(\"error_reporting.environment\"),\n \"send_pii\": CONFIG.y(\"error_reporting.send_pii\"),\n \"traces_sample_rate\": float(CONFIG.y(\"error_reporting.sample_rate\", 0.4)),\n },\n \"capabilities\": self.get_capabilities(),\n \"cache_timeout\": int(CONFIG.y(\"redis.cache_timeout\")),\n \"cache_timeout_flows\": int(CONFIG.y(\"redis.cache_timeout_flows\")),\n \"cache_timeout_policies\": int(CONFIG.y(\"redis.cache_timeout_policies\")),\n \"cache_timeout_reputation\": 
int(CONFIG.y(\"redis.cache_timeout_reputation\")),\n }\n )\n\n @extend_schema(responses={200: ConfigSerializer(many=False)})\n def get(self, request: Request) -> Response:\n \"\"\"Retrieve public configuration options\"\"\"\n return Response(self.get_config().data)\n", "path": "authentik/api/v3/config.py"}], "after_files": [{"content": "\"\"\"authentik sentry integration\"\"\"\nfrom asyncio.exceptions import CancelledError\nfrom typing import Any, Optional\n\nfrom billiard.exceptions import SoftTimeLimitExceeded, WorkerLostError\nfrom celery.exceptions import CeleryError\nfrom channels.middleware import BaseMiddleware\nfrom channels_redis.core import ChannelFull\nfrom django.conf import settings\nfrom django.core.exceptions import ImproperlyConfigured, SuspiciousOperation, ValidationError\nfrom django.db import DatabaseError, InternalError, OperationalError, ProgrammingError\nfrom django.http.response import Http404\nfrom django_redis.exceptions import ConnectionInterrupted\nfrom docker.errors import DockerException\nfrom h11 import LocalProtocolError\nfrom ldap3.core.exceptions import LDAPException\nfrom redis.exceptions import ConnectionError as RedisConnectionError\nfrom redis.exceptions import RedisError, ResponseError\nfrom rest_framework.exceptions import APIException\nfrom sentry_sdk import HttpTransport, Hub\nfrom sentry_sdk import init as sentry_sdk_init\nfrom sentry_sdk.api import set_tag\nfrom sentry_sdk.integrations.celery import CeleryIntegration\nfrom sentry_sdk.integrations.django import DjangoIntegration\nfrom sentry_sdk.integrations.redis import RedisIntegration\nfrom sentry_sdk.integrations.threading import ThreadingIntegration\nfrom sentry_sdk.tracing import Transaction\nfrom structlog.stdlib import get_logger\nfrom websockets.exceptions import WebSocketException\n\nfrom authentik import __version__, get_build_hash\nfrom authentik.lib.config import CONFIG\nfrom authentik.lib.utils.http import authentik_user_agent\nfrom authentik.lib.utils.reflection import class_to_path, get_env\n\nLOGGER = get_logger()\n\n\nclass SentryWSMiddleware(BaseMiddleware):\n \"\"\"Sentry Websocket middleweare to set the transaction name based on\n consumer class path\"\"\"\n\n async def __call__(self, scope, receive, send):\n transaction: Optional[Transaction] = Hub.current.scope.transaction\n class_path = class_to_path(self.inner.consumer_class)\n if transaction:\n transaction.name = class_path\n return await self.inner(scope, receive, send)\n\n\nclass SentryIgnoredException(Exception):\n \"\"\"Base Class for all errors that are suppressed, and not sent to sentry.\"\"\"\n\n\nclass SentryTransport(HttpTransport):\n \"\"\"Custom sentry transport with custom user-agent\"\"\"\n\n def __init__(self, options: dict[str, Any]) -> None:\n super().__init__(options)\n self._auth = self.parsed_dsn.to_auth(authentik_user_agent())\n\n\ndef sentry_init(**sentry_init_kwargs):\n \"\"\"Configure sentry SDK\"\"\"\n sentry_env = CONFIG.y(\"error_reporting.environment\", \"customer\")\n kwargs = {\n \"environment\": sentry_env,\n \"send_default_pii\": CONFIG.y_bool(\"error_reporting.send_pii\", False),\n }\n kwargs.update(**sentry_init_kwargs)\n # pylint: disable=abstract-class-instantiated\n sentry_sdk_init(\n dsn=CONFIG.y(\"error_reporting.sentry_dsn\"),\n integrations=[\n DjangoIntegration(transaction_style=\"function_name\"),\n CeleryIntegration(),\n RedisIntegration(),\n ThreadingIntegration(propagate_hub=True),\n ],\n before_send=before_send,\n traces_sampler=traces_sampler,\n 
release=f\"authentik@{__version__}\",\n transport=SentryTransport,\n **kwargs,\n )\n set_tag(\"authentik.build_hash\", get_build_hash(\"tagged\"))\n set_tag(\"authentik.env\", get_env())\n set_tag(\"authentik.component\", \"backend\")\n\n\ndef traces_sampler(sampling_context: dict) -> float:\n \"\"\"Custom sampler to ignore certain routes\"\"\"\n path = sampling_context.get(\"asgi_scope\", {}).get(\"path\", \"\")\n # Ignore all healthcheck routes\n if path.startswith(\"/-/health\") or path.startswith(\"/-/metrics\"):\n return 0\n return float(CONFIG.y(\"error_reporting.sample_rate\", 0.1))\n\n\ndef before_send(event: dict, hint: dict) -> Optional[dict]:\n \"\"\"Check if error is database error, and ignore if so\"\"\"\n # pylint: disable=no-name-in-module\n from psycopg2.errors import Error\n\n ignored_classes = (\n # Inbuilt types\n KeyboardInterrupt,\n ConnectionResetError,\n OSError,\n PermissionError,\n # Django Errors\n Error,\n ImproperlyConfigured,\n DatabaseError,\n OperationalError,\n InternalError,\n ProgrammingError,\n SuspiciousOperation,\n ValidationError,\n # Redis errors\n RedisConnectionError,\n ConnectionInterrupted,\n RedisError,\n ResponseError,\n # websocket errors\n ChannelFull,\n WebSocketException,\n LocalProtocolError,\n # rest_framework error\n APIException,\n # celery errors\n WorkerLostError,\n CeleryError,\n SoftTimeLimitExceeded,\n # custom baseclass\n SentryIgnoredException,\n # ldap errors\n LDAPException,\n # Docker errors\n DockerException,\n # End-user errors\n Http404,\n # AsyncIO\n CancelledError,\n )\n exc_value = None\n if \"exc_info\" in hint:\n _, exc_value, _ = hint[\"exc_info\"]\n if isinstance(exc_value, ignored_classes):\n LOGGER.debug(\"dropping exception\", exc=exc_value)\n return None\n if \"logger\" in event:\n if event[\"logger\"] in [\n \"kombu\",\n \"asyncio\",\n \"multiprocessing\",\n \"django_redis\",\n \"django.security.DisallowedHost\",\n \"django_redis.cache\",\n \"celery.backends.redis\",\n \"celery.worker\",\n \"paramiko.transport\",\n ]:\n return None\n LOGGER.debug(\"sending event to sentry\", exc=exc_value, source_logger=event.get(\"logger\", None))\n if settings.DEBUG:\n return None\n return event\n", "path": "authentik/lib/sentry.py"}, {"content": "\"\"\"core Configs API\"\"\"\nfrom os import path\n\nfrom django.conf import settings\nfrom django.db import models\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework.fields import (\n BooleanField,\n CharField,\n ChoiceField,\n FloatField,\n IntegerField,\n ListField,\n)\nfrom rest_framework.permissions import AllowAny\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom authentik.core.api.utils import PassiveSerializer\nfrom authentik.events.geo import GEOIP_READER\nfrom authentik.lib.config import CONFIG\n\n\nclass Capabilities(models.TextChoices):\n \"\"\"Define capabilities which influence which APIs can/should be used\"\"\"\n\n CAN_SAVE_MEDIA = \"can_save_media\"\n CAN_GEO_IP = \"can_geo_ip\"\n CAN_IMPERSONATE = \"can_impersonate\"\n CAN_DEBUG = \"can_debug\"\n\n\nclass ErrorReportingConfigSerializer(PassiveSerializer):\n \"\"\"Config for error reporting\"\"\"\n\n enabled = BooleanField(read_only=True)\n sentry_dsn = CharField(read_only=True)\n environment = CharField(read_only=True)\n send_pii = BooleanField(read_only=True)\n traces_sample_rate = FloatField(read_only=True)\n\n\nclass ConfigSerializer(PassiveSerializer):\n \"\"\"Serialize authentik Config into DRF 
Object\"\"\"\n\n error_reporting = ErrorReportingConfigSerializer(required=True)\n capabilities = ListField(child=ChoiceField(choices=Capabilities.choices))\n\n cache_timeout = IntegerField(required=True)\n cache_timeout_flows = IntegerField(required=True)\n cache_timeout_policies = IntegerField(required=True)\n cache_timeout_reputation = IntegerField(required=True)\n\n\nclass ConfigView(APIView):\n \"\"\"Read-only view set that returns the current session's Configs\"\"\"\n\n permission_classes = [AllowAny]\n\n def get_capabilities(self) -> list[Capabilities]:\n \"\"\"Get all capabilities this server instance supports\"\"\"\n caps = []\n deb_test = settings.DEBUG or settings.TEST\n if path.ismount(settings.MEDIA_ROOT) or deb_test:\n caps.append(Capabilities.CAN_SAVE_MEDIA)\n if GEOIP_READER.enabled:\n caps.append(Capabilities.CAN_GEO_IP)\n if CONFIG.y_bool(\"impersonation\"):\n caps.append(Capabilities.CAN_IMPERSONATE)\n if settings.DEBUG:\n caps.append(Capabilities.CAN_DEBUG)\n return caps\n\n def get_config(self) -> ConfigSerializer:\n \"\"\"Get Config\"\"\"\n return ConfigSerializer(\n {\n \"error_reporting\": {\n \"enabled\": CONFIG.y(\"error_reporting.enabled\"),\n \"sentry_dsn\": CONFIG.y(\"error_reporting.sentry_dsn\"),\n \"environment\": CONFIG.y(\"error_reporting.environment\"),\n \"send_pii\": CONFIG.y(\"error_reporting.send_pii\"),\n \"traces_sample_rate\": float(CONFIG.y(\"error_reporting.sample_rate\", 0.4)),\n },\n \"capabilities\": self.get_capabilities(),\n \"cache_timeout\": int(CONFIG.y(\"redis.cache_timeout\")),\n \"cache_timeout_flows\": int(CONFIG.y(\"redis.cache_timeout_flows\")),\n \"cache_timeout_policies\": int(CONFIG.y(\"redis.cache_timeout_policies\")),\n \"cache_timeout_reputation\": int(CONFIG.y(\"redis.cache_timeout_reputation\")),\n }\n )\n\n @extend_schema(responses={200: ConfigSerializer(many=False)})\n def get(self, request: Request) -> Response:\n \"\"\"Retrieve public configuration options\"\"\"\n return Response(self.get_config().data)\n", "path": "authentik/api/v3/config.py"}]}
| 3,007 | 425 |
gh_patches_debug_29743
|
rasdani/github-patches
|
git_diff
|
ManimCommunity__manim-929
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update Matrix to ensure consistency
## Motivation
<!-- Outline your motivation: In what way do your changes improve the library? -->
As described in #924, ```Matrix``` shows inconsistent behavior depending on whether ```DecimalNumber``` or ```MathTex``` is used as the underlying mobject. Additionally, passing a 1d row-vector results in a column-vector being displayed (note that while this is not a breaking change, it changes the visualization of existing code!).
Closes #924
## Overview / Explanation for Changes
<!-- Give an overview of your changes and explain how they
resolve the situation described in the previous section.
For PRs introducing new features, please provide code snippets
using the newly introduced functionality and ideally even the
expected rendered output. -->
The following change addresses both points mentioned in the motivation.
```py
matrix = np.array(matrix, ndmin=1)
```
to
```py
matrix = np.array(matrix, ndmin=2)
```
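For context, a quick illustrative check of what that one-line change does to a 1d input; this snippet is not part of the patch and only assumes plain NumPy semantics:
```py
import numpy as np

arr = np.array([0.1, 0.3])

# ndmin=1 leaves the input 1-D, which the old code went on to render as a column
print(np.array(arr, ndmin=1).shape)  # (2,)

# ndmin=2 promotes it to a single-row 2-D array, so it displays as a row vector
print(np.array(arr, ndmin=2).shape)  # (1, 2)
```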
The other change, the introduction of the ```transpose``` argument, is for the following reason:
```py
from manim import *
class Test(Scene):
def construct(self):
arr = np.array([0.1, 0.3])
# Transposing a vector does not do anything!
m1 = DecimalMatrix(arr.transpose())
self.add(m1)
# Transposing a vector (remember ndmin=2 in matrix.py) now works!
m2 = DecimalMatrix(arr, transpose=True).next_to(m1, DOWN)
self.add(m2)
self.wait(2)
```
Resulting image:
<img width="443" alt="Screenshot 2021-01-11 at 11 57 04" src="https://user-images.githubusercontent.com/22744609/104172306-5d7ce280-5404-11eb-9bec-ef8941b96951.png">
## Oneline Summary of Changes
```
- Ensure consistent matrix behavior (:pr:`925`)
```
## Testing Status
Just tested with the code above and a part of my codebase (consisting of around 6 different matrices)
## Acknowledgements
- [x] I have read the [Contributing Guidelines](https://docs.manim.community/en/latest/contributing.html)
<!-- Once again, thanks for helping out by contributing to manim! -->
<!-- Do not modify the lines below. -->
## Reviewer Checklist
- [ ] Newly added functions/classes are either private or have a docstring
- [ ] Newly added functions/classes have [tests](https://github.com/ManimCommunity/manim/wiki/Testing) added and (optional) examples in the docs
- [ ] Newly added documentation builds, looks correctly formatted, and adds no additional build warnings
- [ ] The oneline summary has been included [in the wiki](https://github.com/ManimCommunity/manim/wiki/Changelog-for-next-release)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `manim/mobject/matrix.py`
Content:
```
1 """Mobjects representing matrices."""
2
3 __all__ = [
4 "Matrix",
5 "DecimalMatrix",
6 "IntegerMatrix",
7 "MobjectMatrix",
8 "matrix_to_tex_string",
9 "matrix_to_mobject",
10 "vector_coordinate_label",
11 "get_det_text",
12 ]
13
14
15 import numpy as np
16
17 from ..constants import *
18 from ..mobject.numbers import DecimalNumber
19 from ..mobject.numbers import Integer
20 from ..mobject.shape_matchers import BackgroundRectangle
21 from ..mobject.svg.tex_mobject import MathTex
22 from ..mobject.svg.tex_mobject import Tex
23 from ..mobject.types.vectorized_mobject import VGroup
24 from ..mobject.types.vectorized_mobject import VMobject
25 from ..utils.color import WHITE
26
27 VECTOR_LABEL_SCALE_FACTOR = 0.8
28
29
30 def matrix_to_tex_string(matrix):
31 matrix = np.array(matrix).astype("str")
32 if matrix.ndim == 1:
33 matrix = matrix.reshape((matrix.size, 1))
34 n_rows, n_cols = matrix.shape
35 prefix = "\\left[ \\begin{array}{%s}" % ("c" * n_cols)
36 suffix = "\\end{array} \\right]"
37 rows = [" & ".join(row) for row in matrix]
38 return prefix + " \\\\ ".join(rows) + suffix
39
40
41 def matrix_to_mobject(matrix):
42 return MathTex(matrix_to_tex_string(matrix))
43
44
45 def vector_coordinate_label(vector_mob, integer_labels=True, n_dim=2, color=WHITE):
46 vect = np.array(vector_mob.get_end())
47 if integer_labels:
48 vect = np.round(vect).astype(int)
49 vect = vect[:n_dim]
50 vect = vect.reshape((n_dim, 1))
51 label = Matrix(vect, add_background_rectangles_to_entries=True)
52 label.scale(VECTOR_LABEL_SCALE_FACTOR)
53
54 shift_dir = np.array(vector_mob.get_end())
55 if shift_dir[0] >= 0: # Pointing right
56 shift_dir -= label.get_left() + DEFAULT_MOBJECT_TO_MOBJECT_BUFFER * LEFT
57 else: # Pointing left
58 shift_dir -= label.get_right() + DEFAULT_MOBJECT_TO_MOBJECT_BUFFER * RIGHT
59 label.shift(shift_dir)
60 label.set_color(color)
61 label.rect = BackgroundRectangle(label)
62 label.add_to_back(label.rect)
63 return label
64
65
66 class Matrix(VMobject):
67 def __init__(
68 self,
69 matrix,
70 v_buff=0.8,
71 h_buff=1.3,
72 bracket_h_buff=MED_SMALL_BUFF,
73 bracket_v_buff=MED_SMALL_BUFF,
74 add_background_rectangles_to_entries=False,
75 include_background_rectangle=False,
76 element_to_mobject=MathTex,
77 element_to_mobject_config={},
78 element_alignment_corner=DR,
79 left_bracket="\\big[",
80 right_bracket="\\big]",
81 **kwargs
82 ):
83 """
84 Matrix can either either include numbers, tex_strings,
85 or mobjects
86 """
87 self.v_buff = v_buff
88 self.h_buff = h_buff
89 self.bracket_h_buff = bracket_h_buff
90 self.bracket_v_buff = bracket_v_buff
91 self.add_background_rectangles_to_entries = add_background_rectangles_to_entries
92 self.include_background_rectangle = include_background_rectangle
93 self.element_to_mobject = element_to_mobject
94 self.element_to_mobject_config = element_to_mobject_config
95 self.element_alignment_corner = element_alignment_corner
96 self.left_bracket = left_bracket
97 self.right_bracket = right_bracket
98 VMobject.__init__(self, **kwargs)
99 matrix = np.array(matrix, ndmin=1)
100 mob_matrix = self.matrix_to_mob_matrix(matrix)
101 self.organize_mob_matrix(mob_matrix)
102 self.elements = VGroup(*mob_matrix.flatten())
103 self.add(self.elements)
104 self.add_brackets(self.left_bracket, self.right_bracket)
105 self.center()
106 self.mob_matrix = mob_matrix
107 if self.add_background_rectangles_to_entries:
108 for mob in self.elements:
109 mob.add_background_rectangle()
110 if self.include_background_rectangle:
111 self.add_background_rectangle()
112
113 def matrix_to_mob_matrix(self, matrix):
114 return np.vectorize(self.element_to_mobject)(
115 matrix, **self.element_to_mobject_config
116 )
117
118 def organize_mob_matrix(self, matrix):
119 for i, row in enumerate(matrix):
120 for j, elem in enumerate(row):
121 mob = matrix[i][j]
122 mob.move_to(
123 i * self.v_buff * DOWN + j * self.h_buff * RIGHT,
124 self.element_alignment_corner,
125 )
126 return self
127
128 def add_brackets(self, left="\\big[", right="\\big]"):
129 bracket_pair = MathTex(left, right)
130 bracket_pair.scale(2)
131 bracket_pair.stretch_to_fit_height(self.get_height() + 2 * self.bracket_v_buff)
132 l_bracket, r_bracket = bracket_pair.split()
133 l_bracket.next_to(self, LEFT, self.bracket_h_buff)
134 r_bracket.next_to(self, RIGHT, self.bracket_h_buff)
135 self.add(l_bracket, r_bracket)
136 self.brackets = VGroup(l_bracket, r_bracket)
137 return self
138
139 def get_columns(self):
140 return VGroup(
141 *[VGroup(*self.mob_matrix[:, i]) for i in range(self.mob_matrix.shape[1])]
142 )
143
144 def set_column_colors(self, *colors):
145 columns = self.get_columns()
146 for color, column in zip(colors, columns):
147 column.set_color(color)
148 return self
149
150 def get_rows(self):
151 """Return rows of the matrix as VGroups
152
153 Returns
154 --------
155 List[:class:`~.VGroup`]
156 Each VGroup contains a row of the matrix.
157 """
158 return VGroup(
159 *[VGroup(*self.mob_matrix[i, :]) for i in range(self.mob_matrix.shape[0])]
160 )
161
162 def set_row_colors(self, *colors):
163 """Set individual colors for each row of the matrix
164
165 Parameters
166 ----------
167 colors : :class:`str`
168 The list of colors; each color specified corresponds to a row.
169
170 Returns
171 -------
172 :class:`Matrix`
173 The current matrix object (self).
174 """
175 rows = self.get_rows()
176 for color, row in zip(colors, rows):
177 row.set_color(color)
178 return self
179
180 def add_background_to_entries(self):
181 for mob in self.get_entries():
182 mob.add_background_rectangle()
183 return self
184
185 def get_mob_matrix(self):
186 return self.mob_matrix
187
188 def get_entries(self):
189 return VGroup(*self.get_mob_matrix().flatten())
190
191 def get_brackets(self):
192 return self.brackets
193
194
195 class DecimalMatrix(Matrix):
196 def __init__(
197 self,
198 matrix,
199 element_to_mobject=DecimalNumber,
200 element_to_mobject_config={"num_decimal_places": 1},
201 **kwargs
202 ):
203 Matrix.__init__(
204 self,
205 matrix,
206 element_to_mobject=element_to_mobject,
207 element_to_mobject_config=element_to_mobject_config,
208 **kwargs
209 )
210
211
212 class IntegerMatrix(Matrix):
213 def __init__(self, matrix, element_to_mobject=Integer, **kwargs):
214 Matrix.__init__(self, matrix, element_to_mobject=element_to_mobject, **kwargs)
215
216
217 class MobjectMatrix(Matrix):
218 def __init__(self, matrix, element_to_mobject=lambda m: m, **kwargs):
219 Matrix.__init__(self, matrix, element_to_mobject=element_to_mobject, **kwargs)
220
221
222 def get_det_text(
223 matrix, determinant=None, background_rect=False, initial_scale_factor=2
224 ):
225 parens = MathTex("(", ")")
226 parens.scale(initial_scale_factor)
227 parens.stretch_to_fit_height(matrix.get_height())
228 l_paren, r_paren = parens.split()
229 l_paren.next_to(matrix, LEFT, buff=0.1)
230 r_paren.next_to(matrix, RIGHT, buff=0.1)
231 det = Tex("det")
232 det.scale(initial_scale_factor)
233 det.next_to(l_paren, LEFT, buff=0.1)
234 if background_rect:
235 det.add_background_rectangle()
236 det_text = VGroup(det, l_paren, r_paren)
237 if determinant is not None:
238 eq = MathTex("=")
239 eq.next_to(r_paren, RIGHT, buff=0.1)
240 result = MathTex(str(determinant))
241 result.next_to(eq, RIGHT, buff=0.2)
242 det_text.add(eq, result)
243 return det_text
244
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/manim/mobject/matrix.py b/manim/mobject/matrix.py
--- a/manim/mobject/matrix.py
+++ b/manim/mobject/matrix.py
@@ -78,7 +78,7 @@
element_alignment_corner=DR,
left_bracket="\\big[",
right_bracket="\\big]",
- **kwargs
+ **kwargs,
):
"""
Matrix can either either include numbers, tex_strings,
@@ -96,7 +96,11 @@
self.left_bracket = left_bracket
self.right_bracket = right_bracket
VMobject.__init__(self, **kwargs)
- matrix = np.array(matrix, ndmin=1)
+ if len(matrix.shape) < 2:
+ raise ValueError(
+ f"{self.__str__()} class requires a two-dimensional array!"
+ )
+ matrix = np.array(matrix)
mob_matrix = self.matrix_to_mob_matrix(matrix)
self.organize_mob_matrix(mob_matrix)
self.elements = VGroup(*mob_matrix.flatten())
@@ -198,14 +202,14 @@
matrix,
element_to_mobject=DecimalNumber,
element_to_mobject_config={"num_decimal_places": 1},
- **kwargs
+ **kwargs,
):
Matrix.__init__(
self,
matrix,
element_to_mobject=element_to_mobject,
element_to_mobject_config=element_to_mobject_config,
- **kwargs
+ **kwargs,
)
|
{"golden_diff": "diff --git a/manim/mobject/matrix.py b/manim/mobject/matrix.py\n--- a/manim/mobject/matrix.py\n+++ b/manim/mobject/matrix.py\n@@ -78,7 +78,7 @@\n element_alignment_corner=DR,\n left_bracket=\"\\\\big[\",\n right_bracket=\"\\\\big]\",\n- **kwargs\n+ **kwargs,\n ):\n \"\"\"\n Matrix can either either include numbers, tex_strings,\n@@ -96,7 +96,11 @@\n self.left_bracket = left_bracket\n self.right_bracket = right_bracket\n VMobject.__init__(self, **kwargs)\n- matrix = np.array(matrix, ndmin=1)\n+ if len(matrix.shape) < 2:\n+ raise ValueError(\n+ f\"{self.__str__()} class requires a two-dimensional array!\"\n+ )\n+ matrix = np.array(matrix)\n mob_matrix = self.matrix_to_mob_matrix(matrix)\n self.organize_mob_matrix(mob_matrix)\n self.elements = VGroup(*mob_matrix.flatten())\n@@ -198,14 +202,14 @@\n matrix,\n element_to_mobject=DecimalNumber,\n element_to_mobject_config={\"num_decimal_places\": 1},\n- **kwargs\n+ **kwargs,\n ):\n Matrix.__init__(\n self,\n matrix,\n element_to_mobject=element_to_mobject,\n element_to_mobject_config=element_to_mobject_config,\n- **kwargs\n+ **kwargs,\n )\n", "issue": "Update Matrix to ensure consistency\n## Motivation\r\n<!-- Outline your motivation: In what way do your changes improve the library? -->\r\nAs described in #924, ```Matrix``` shows inconsistent behavior depending on whether ```DecimalNumber``` or ```MathTex``` is used as the underlying mobject. Additionally, passing a 1d row-vector results in a column-vector to be displayed (Note that while this is not a breaking change, it changes the visualization of existing code!).\r\n\r\nCloses #924 \r\n\r\n\r\n\r\n## Overview / Explanation for Changes\r\n<!-- Give an overview of your changes and explain how they\r\nresolve the situation described in the previous section.\r\n\r\nFor PRs introducing new features, please provide code snippets\r\nusing the newly introduced functionality and ideally even the\r\nexpected rendered output. -->\r\n\r\nThe following change addresses both points mentioned in the motivation.\r\n```py\r\nmatrix = np.array(matrix, ndmin=1)\r\n```\r\nto\r\n```py\r\nmatrix = np.array(matrix, ndmin=2)\r\n```\r\n\r\nThe other change, which is the introduction of the ```transpose``` argument is for the following reason:\r\n```py\r\nfrom manim import *\r\n\r\n\r\nclass Test(Scene):\r\n def construct(self):\r\n arr = np.array([0.1, 0.3])\r\n # Transposing a vector does not do anything!\r\n m1 = DecimalMatrix(arr.transpose())\r\n self.add(m1)\r\n\r\n # Transposing a vector (remember ndmin=2 in matrix.py) now works!\r\n m2 = DecimalMatrix(arr, transpose=True).next_to(m1, DOWN)\r\n self.add(m2)\r\n\r\n self.wait(2)\r\n```\r\nResulting image:\r\n<img width=\"443\" alt=\"Screenshot 2021-01-11 at 11 57 04\" src=\"https://user-images.githubusercontent.com/22744609/104172306-5d7ce280-5404-11eb-9bec-ef8941b96951.png\">\r\n\r\n\r\n## Oneline Summary of Changes\r\n```\r\n- Ensure consistent matrix behavior (:pr:`925`)\r\n```\r\n\r\n## Testing Status\r\nJust tested with the code above and a part of my codebase (consisting of around 6 different matrices)\r\n\r\n\r\n## Acknowledgements\r\n- [x] I have read the [Contributing Guidelines](https://docs.manim.community/en/latest/contributing.html)\r\n\r\n<!-- Once again, thanks for helping out by contributing to manim! -->\r\n\r\n\r\n<!-- Do not modify the lines below. 
-->\r\n## Reviewer Checklist\r\n- [ ] Newly added functions/classes are either private or have a docstring\r\n- [ ] Newly added functions/classes have [tests](https://github.com/ManimCommunity/manim/wiki/Testing) added and (optional) examples in the docs\r\n- [ ] Newly added documentation builds, looks correctly formatted, and adds no additional build warnings\r\n- [ ] The oneline summary has been included [in the wiki](https://github.com/ManimCommunity/manim/wiki/Changelog-for-next-release)\r\n\n", "before_files": [{"content": "\"\"\"Mobjects representing matrices.\"\"\"\n\n__all__ = [\n \"Matrix\",\n \"DecimalMatrix\",\n \"IntegerMatrix\",\n \"MobjectMatrix\",\n \"matrix_to_tex_string\",\n \"matrix_to_mobject\",\n \"vector_coordinate_label\",\n \"get_det_text\",\n]\n\n\nimport numpy as np\n\nfrom ..constants import *\nfrom ..mobject.numbers import DecimalNumber\nfrom ..mobject.numbers import Integer\nfrom ..mobject.shape_matchers import BackgroundRectangle\nfrom ..mobject.svg.tex_mobject import MathTex\nfrom ..mobject.svg.tex_mobject import Tex\nfrom ..mobject.types.vectorized_mobject import VGroup\nfrom ..mobject.types.vectorized_mobject import VMobject\nfrom ..utils.color import WHITE\n\nVECTOR_LABEL_SCALE_FACTOR = 0.8\n\n\ndef matrix_to_tex_string(matrix):\n matrix = np.array(matrix).astype(\"str\")\n if matrix.ndim == 1:\n matrix = matrix.reshape((matrix.size, 1))\n n_rows, n_cols = matrix.shape\n prefix = \"\\\\left[ \\\\begin{array}{%s}\" % (\"c\" * n_cols)\n suffix = \"\\\\end{array} \\\\right]\"\n rows = [\" & \".join(row) for row in matrix]\n return prefix + \" \\\\\\\\ \".join(rows) + suffix\n\n\ndef matrix_to_mobject(matrix):\n return MathTex(matrix_to_tex_string(matrix))\n\n\ndef vector_coordinate_label(vector_mob, integer_labels=True, n_dim=2, color=WHITE):\n vect = np.array(vector_mob.get_end())\n if integer_labels:\n vect = np.round(vect).astype(int)\n vect = vect[:n_dim]\n vect = vect.reshape((n_dim, 1))\n label = Matrix(vect, add_background_rectangles_to_entries=True)\n label.scale(VECTOR_LABEL_SCALE_FACTOR)\n\n shift_dir = np.array(vector_mob.get_end())\n if shift_dir[0] >= 0: # Pointing right\n shift_dir -= label.get_left() + DEFAULT_MOBJECT_TO_MOBJECT_BUFFER * LEFT\n else: # Pointing left\n shift_dir -= label.get_right() + DEFAULT_MOBJECT_TO_MOBJECT_BUFFER * RIGHT\n label.shift(shift_dir)\n label.set_color(color)\n label.rect = BackgroundRectangle(label)\n label.add_to_back(label.rect)\n return label\n\n\nclass Matrix(VMobject):\n def __init__(\n self,\n matrix,\n v_buff=0.8,\n h_buff=1.3,\n bracket_h_buff=MED_SMALL_BUFF,\n bracket_v_buff=MED_SMALL_BUFF,\n add_background_rectangles_to_entries=False,\n include_background_rectangle=False,\n element_to_mobject=MathTex,\n element_to_mobject_config={},\n element_alignment_corner=DR,\n left_bracket=\"\\\\big[\",\n right_bracket=\"\\\\big]\",\n **kwargs\n ):\n \"\"\"\n Matrix can either either include numbers, tex_strings,\n or mobjects\n \"\"\"\n self.v_buff = v_buff\n self.h_buff = h_buff\n self.bracket_h_buff = bracket_h_buff\n self.bracket_v_buff = bracket_v_buff\n self.add_background_rectangles_to_entries = add_background_rectangles_to_entries\n self.include_background_rectangle = include_background_rectangle\n self.element_to_mobject = element_to_mobject\n self.element_to_mobject_config = element_to_mobject_config\n self.element_alignment_corner = element_alignment_corner\n self.left_bracket = left_bracket\n self.right_bracket = right_bracket\n VMobject.__init__(self, **kwargs)\n matrix = np.array(matrix, ndmin=1)\n 
mob_matrix = self.matrix_to_mob_matrix(matrix)\n self.organize_mob_matrix(mob_matrix)\n self.elements = VGroup(*mob_matrix.flatten())\n self.add(self.elements)\n self.add_brackets(self.left_bracket, self.right_bracket)\n self.center()\n self.mob_matrix = mob_matrix\n if self.add_background_rectangles_to_entries:\n for mob in self.elements:\n mob.add_background_rectangle()\n if self.include_background_rectangle:\n self.add_background_rectangle()\n\n def matrix_to_mob_matrix(self, matrix):\n return np.vectorize(self.element_to_mobject)(\n matrix, **self.element_to_mobject_config\n )\n\n def organize_mob_matrix(self, matrix):\n for i, row in enumerate(matrix):\n for j, elem in enumerate(row):\n mob = matrix[i][j]\n mob.move_to(\n i * self.v_buff * DOWN + j * self.h_buff * RIGHT,\n self.element_alignment_corner,\n )\n return self\n\n def add_brackets(self, left=\"\\\\big[\", right=\"\\\\big]\"):\n bracket_pair = MathTex(left, right)\n bracket_pair.scale(2)\n bracket_pair.stretch_to_fit_height(self.get_height() + 2 * self.bracket_v_buff)\n l_bracket, r_bracket = bracket_pair.split()\n l_bracket.next_to(self, LEFT, self.bracket_h_buff)\n r_bracket.next_to(self, RIGHT, self.bracket_h_buff)\n self.add(l_bracket, r_bracket)\n self.brackets = VGroup(l_bracket, r_bracket)\n return self\n\n def get_columns(self):\n return VGroup(\n *[VGroup(*self.mob_matrix[:, i]) for i in range(self.mob_matrix.shape[1])]\n )\n\n def set_column_colors(self, *colors):\n columns = self.get_columns()\n for color, column in zip(colors, columns):\n column.set_color(color)\n return self\n\n def get_rows(self):\n \"\"\"Return rows of the matrix as VGroups\n\n Returns\n --------\n List[:class:`~.VGroup`]\n Each VGroup contains a row of the matrix.\n \"\"\"\n return VGroup(\n *[VGroup(*self.mob_matrix[i, :]) for i in range(self.mob_matrix.shape[0])]\n )\n\n def set_row_colors(self, *colors):\n \"\"\"Set individual colors for each row of the matrix\n\n Parameters\n ----------\n colors : :class:`str`\n The list of colors; each color specified corresponds to a row.\n\n Returns\n -------\n :class:`Matrix`\n The current matrix object (self).\n \"\"\"\n rows = self.get_rows()\n for color, row in zip(colors, rows):\n row.set_color(color)\n return self\n\n def add_background_to_entries(self):\n for mob in self.get_entries():\n mob.add_background_rectangle()\n return self\n\n def get_mob_matrix(self):\n return self.mob_matrix\n\n def get_entries(self):\n return VGroup(*self.get_mob_matrix().flatten())\n\n def get_brackets(self):\n return self.brackets\n\n\nclass DecimalMatrix(Matrix):\n def __init__(\n self,\n matrix,\n element_to_mobject=DecimalNumber,\n element_to_mobject_config={\"num_decimal_places\": 1},\n **kwargs\n ):\n Matrix.__init__(\n self,\n matrix,\n element_to_mobject=element_to_mobject,\n element_to_mobject_config=element_to_mobject_config,\n **kwargs\n )\n\n\nclass IntegerMatrix(Matrix):\n def __init__(self, matrix, element_to_mobject=Integer, **kwargs):\n Matrix.__init__(self, matrix, element_to_mobject=element_to_mobject, **kwargs)\n\n\nclass MobjectMatrix(Matrix):\n def __init__(self, matrix, element_to_mobject=lambda m: m, **kwargs):\n Matrix.__init__(self, matrix, element_to_mobject=element_to_mobject, **kwargs)\n\n\ndef get_det_text(\n matrix, determinant=None, background_rect=False, initial_scale_factor=2\n):\n parens = MathTex(\"(\", \")\")\n parens.scale(initial_scale_factor)\n parens.stretch_to_fit_height(matrix.get_height())\n l_paren, r_paren = parens.split()\n l_paren.next_to(matrix, LEFT, buff=0.1)\n 
r_paren.next_to(matrix, RIGHT, buff=0.1)\n det = Tex(\"det\")\n det.scale(initial_scale_factor)\n det.next_to(l_paren, LEFT, buff=0.1)\n if background_rect:\n det.add_background_rectangle()\n det_text = VGroup(det, l_paren, r_paren)\n if determinant is not None:\n eq = MathTex(\"=\")\n eq.next_to(r_paren, RIGHT, buff=0.1)\n result = MathTex(str(determinant))\n result.next_to(eq, RIGHT, buff=0.2)\n det_text.add(eq, result)\n return det_text\n", "path": "manim/mobject/matrix.py"}], "after_files": [{"content": "\"\"\"Mobjects representing matrices.\"\"\"\n\n__all__ = [\n \"Matrix\",\n \"DecimalMatrix\",\n \"IntegerMatrix\",\n \"MobjectMatrix\",\n \"matrix_to_tex_string\",\n \"matrix_to_mobject\",\n \"vector_coordinate_label\",\n \"get_det_text\",\n]\n\n\nimport numpy as np\n\nfrom ..constants import *\nfrom ..mobject.numbers import DecimalNumber\nfrom ..mobject.numbers import Integer\nfrom ..mobject.shape_matchers import BackgroundRectangle\nfrom ..mobject.svg.tex_mobject import MathTex\nfrom ..mobject.svg.tex_mobject import Tex\nfrom ..mobject.types.vectorized_mobject import VGroup\nfrom ..mobject.types.vectorized_mobject import VMobject\nfrom ..utils.color import WHITE\n\nVECTOR_LABEL_SCALE_FACTOR = 0.8\n\n\ndef matrix_to_tex_string(matrix):\n matrix = np.array(matrix).astype(\"str\")\n if matrix.ndim == 1:\n matrix = matrix.reshape((matrix.size, 1))\n n_rows, n_cols = matrix.shape\n prefix = \"\\\\left[ \\\\begin{array}{%s}\" % (\"c\" * n_cols)\n suffix = \"\\\\end{array} \\\\right]\"\n rows = [\" & \".join(row) for row in matrix]\n return prefix + \" \\\\\\\\ \".join(rows) + suffix\n\n\ndef matrix_to_mobject(matrix):\n return MathTex(matrix_to_tex_string(matrix))\n\n\ndef vector_coordinate_label(vector_mob, integer_labels=True, n_dim=2, color=WHITE):\n vect = np.array(vector_mob.get_end())\n if integer_labels:\n vect = np.round(vect).astype(int)\n vect = vect[:n_dim]\n vect = vect.reshape((n_dim, 1))\n label = Matrix(vect, add_background_rectangles_to_entries=True)\n label.scale(VECTOR_LABEL_SCALE_FACTOR)\n\n shift_dir = np.array(vector_mob.get_end())\n if shift_dir[0] >= 0: # Pointing right\n shift_dir -= label.get_left() + DEFAULT_MOBJECT_TO_MOBJECT_BUFFER * LEFT\n else: # Pointing left\n shift_dir -= label.get_right() + DEFAULT_MOBJECT_TO_MOBJECT_BUFFER * RIGHT\n label.shift(shift_dir)\n label.set_color(color)\n label.rect = BackgroundRectangle(label)\n label.add_to_back(label.rect)\n return label\n\n\nclass Matrix(VMobject):\n def __init__(\n self,\n matrix,\n v_buff=0.8,\n h_buff=1.3,\n bracket_h_buff=MED_SMALL_BUFF,\n bracket_v_buff=MED_SMALL_BUFF,\n add_background_rectangles_to_entries=False,\n include_background_rectangle=False,\n element_to_mobject=MathTex,\n element_to_mobject_config={},\n element_alignment_corner=DR,\n left_bracket=\"\\\\big[\",\n right_bracket=\"\\\\big]\",\n **kwargs,\n ):\n \"\"\"\n Matrix can either either include numbers, tex_strings,\n or mobjects\n \"\"\"\n self.v_buff = v_buff\n self.h_buff = h_buff\n self.bracket_h_buff = bracket_h_buff\n self.bracket_v_buff = bracket_v_buff\n self.add_background_rectangles_to_entries = add_background_rectangles_to_entries\n self.include_background_rectangle = include_background_rectangle\n self.element_to_mobject = element_to_mobject\n self.element_to_mobject_config = element_to_mobject_config\n self.element_alignment_corner = element_alignment_corner\n self.left_bracket = left_bracket\n self.right_bracket = right_bracket\n VMobject.__init__(self, **kwargs)\n if len(matrix.shape) < 2:\n raise ValueError(\n 
f\"{self.__str__()} class requires a two-dimensional array!\"\n )\n matrix = np.array(matrix)\n mob_matrix = self.matrix_to_mob_matrix(matrix)\n self.organize_mob_matrix(mob_matrix)\n self.elements = VGroup(*mob_matrix.flatten())\n self.add(self.elements)\n self.add_brackets(self.left_bracket, self.right_bracket)\n self.center()\n self.mob_matrix = mob_matrix\n if self.add_background_rectangles_to_entries:\n for mob in self.elements:\n mob.add_background_rectangle()\n if self.include_background_rectangle:\n self.add_background_rectangle()\n\n def matrix_to_mob_matrix(self, matrix):\n return np.vectorize(self.element_to_mobject)(\n matrix, **self.element_to_mobject_config\n )\n\n def organize_mob_matrix(self, matrix):\n for i, row in enumerate(matrix):\n for j, elem in enumerate(row):\n mob = matrix[i][j]\n mob.move_to(\n i * self.v_buff * DOWN + j * self.h_buff * RIGHT,\n self.element_alignment_corner,\n )\n return self\n\n def add_brackets(self, left=\"\\\\big[\", right=\"\\\\big]\"):\n bracket_pair = MathTex(left, right)\n bracket_pair.scale(2)\n bracket_pair.stretch_to_fit_height(self.get_height() + 2 * self.bracket_v_buff)\n l_bracket, r_bracket = bracket_pair.split()\n l_bracket.next_to(self, LEFT, self.bracket_h_buff)\n r_bracket.next_to(self, RIGHT, self.bracket_h_buff)\n self.add(l_bracket, r_bracket)\n self.brackets = VGroup(l_bracket, r_bracket)\n return self\n\n def get_columns(self):\n return VGroup(\n *[VGroup(*self.mob_matrix[:, i]) for i in range(self.mob_matrix.shape[1])]\n )\n\n def set_column_colors(self, *colors):\n columns = self.get_columns()\n for color, column in zip(colors, columns):\n column.set_color(color)\n return self\n\n def get_rows(self):\n \"\"\"Return rows of the matrix as VGroups\n\n Returns\n --------\n List[:class:`~.VGroup`]\n Each VGroup contains a row of the matrix.\n \"\"\"\n return VGroup(\n *[VGroup(*self.mob_matrix[i, :]) for i in range(self.mob_matrix.shape[0])]\n )\n\n def set_row_colors(self, *colors):\n \"\"\"Set individual colors for each row of the matrix\n\n Parameters\n ----------\n colors : :class:`str`\n The list of colors; each color specified corresponds to a row.\n\n Returns\n -------\n :class:`Matrix`\n The current matrix object (self).\n \"\"\"\n rows = self.get_rows()\n for color, row in zip(colors, rows):\n row.set_color(color)\n return self\n\n def add_background_to_entries(self):\n for mob in self.get_entries():\n mob.add_background_rectangle()\n return self\n\n def get_mob_matrix(self):\n return self.mob_matrix\n\n def get_entries(self):\n return VGroup(*self.get_mob_matrix().flatten())\n\n def get_brackets(self):\n return self.brackets\n\n\nclass DecimalMatrix(Matrix):\n def __init__(\n self,\n matrix,\n element_to_mobject=DecimalNumber,\n element_to_mobject_config={\"num_decimal_places\": 1},\n **kwargs,\n ):\n Matrix.__init__(\n self,\n matrix,\n element_to_mobject=element_to_mobject,\n element_to_mobject_config=element_to_mobject_config,\n **kwargs,\n )\n\n\nclass IntegerMatrix(Matrix):\n def __init__(self, matrix, element_to_mobject=Integer, **kwargs):\n Matrix.__init__(self, matrix, element_to_mobject=element_to_mobject, **kwargs)\n\n\nclass MobjectMatrix(Matrix):\n def __init__(self, matrix, element_to_mobject=lambda m: m, **kwargs):\n Matrix.__init__(self, matrix, element_to_mobject=element_to_mobject, **kwargs)\n\n\ndef get_det_text(\n matrix, determinant=None, background_rect=False, initial_scale_factor=2\n):\n parens = MathTex(\"(\", \")\")\n parens.scale(initial_scale_factor)\n 
parens.stretch_to_fit_height(matrix.get_height())\n l_paren, r_paren = parens.split()\n l_paren.next_to(matrix, LEFT, buff=0.1)\n r_paren.next_to(matrix, RIGHT, buff=0.1)\n det = Tex(\"det\")\n det.scale(initial_scale_factor)\n det.next_to(l_paren, LEFT, buff=0.1)\n if background_rect:\n det.add_background_rectangle()\n det_text = VGroup(det, l_paren, r_paren)\n if determinant is not None:\n eq = MathTex(\"=\")\n eq.next_to(r_paren, RIGHT, buff=0.1)\n result = MathTex(str(determinant))\n result.next_to(eq, RIGHT, buff=0.2)\n det_text.add(eq, result)\n return det_text\n", "path": "manim/mobject/matrix.py"}]}
| 3,387 | 334 |
gh_patches_debug_2361
|
rasdani/github-patches
|
git_diff
|
tough-dev-school__education-backend-1502
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
In the admin, show the run (ProductGroup) name in every course name
Right now it is completely unclear which run a course belongs to; you have to guess from the ordering. The name of the ProductGroup the course is linked to should be displayed here (see below).
<img width="1511" alt="Screenshot 2022-06-20 at 10 55 18" src="https://user-images.githubusercontent.com/1592663/174552950-bf6ee7e8-6ba7-43f7-af90-5ba2fededfd7.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/products/models/course.py`
Content:
```
1 from django.apps import apps
2 from django.core.exceptions import ValidationError
3 from django.db.models import OuterRef
4 from django.db.models import QuerySet
5 from django.db.models import Subquery
6 from django.utils.translation import gettext_lazy as _
7
8 from app.files import RandomFileName
9 from app.models import models
10 from mailing.tasks import send_mail
11 from products.models.base import Shippable
12 from users.models import User
13
14
15 class CourseQuerySet(QuerySet):
16 def for_lms(self) -> QuerySet["Course"]:
17 return self.filter(
18 display_in_lms=True,
19 ).with_course_homepage()
20
21 def with_course_homepage(self) -> QuerySet["Course"]:
22 materials = (
23 apps.get_model("notion.Material")
24 .objects.filter(
25 course=OuterRef("pk"),
26 is_home_page=True,
27 )
28 .order_by(
29 "-created",
30 )
31 .values(
32 "page_id",
33 )
34 )
35
36 return self.annotate(
37 home_page_slug=Subquery(materials[:1]),
38 )
39
40
41 CourseManager = models.Manager.from_queryset(CourseQuerySet)
42
43
44 class Course(Shippable):
45 objects = CourseManager()
46
47 name_genitive = models.CharField(_("Genitive name"), max_length=255, help_text="«мастер-класса о TDD». К примеру для записей.")
48 zoomus_webinar_id = models.CharField(
49 _("Zoom.us webinar ID"), max_length=255, null=True, blank=True, help_text=_("If set, every user who purcashes this course gets invited")
50 )
51
52 welcome_letter_template_id = models.CharField(
53 _("Welcome letter template id"), max_length=255, blank=True, null=True, help_text=_("Will be sent upon purchase if set")
54 )
55 gift_welcome_letter_template_id = models.CharField(
56 _("Special welcome letter template id for gifts"), max_length=255, blank=True, null=True, help_text=_("If not set, common welcome letter will be used")
57 )
58 display_in_lms = models.BooleanField(_("Display in LMS"), default=True, help_text=_("If disabled will not be shown in LMS"))
59
60 diploma_template_context = models.JSONField(default=dict, blank=True)
61
62 disable_triggers = models.BooleanField(_("Disable all triggers"), default=False)
63
64 confirmation_template_id = models.CharField(
65 _("Confirmation template id"),
66 max_length=255,
67 null=True,
68 blank=True,
69 help_text=_("If set user sill receive this message upon creating zero-priced order"),
70 )
71 confirmation_success_url = models.URLField(_("Confirmation success URL"), null=True, blank=True)
72
73 cover = models.ImageField(
74 verbose_name=_("Cover image"),
75 upload_to=RandomFileName("courses/covers"),
76 blank=True,
77 help_text=_("The cover image of course"),
78 )
79
80 class Meta:
81 ordering = ["-id"]
82 verbose_name = _("Course")
83 verbose_name_plural = _("Courses")
84 db_table = "courses_course"
85
86 def clean(self):
87 """Check for correct setting of confirmation_template_id and confirmation_success_url"""
88 if not self.confirmation_template_id and not self.confirmation_success_url:
89 return
90
91 if not all([self.confirmation_template_id, self.confirmation_success_url]):
92 raise ValidationError(_("Both confirmation_template_id and confirmation_success_url must be set"))
93
94 if self.price != 0:
95 raise ValidationError(_("Courses with confirmation should have zero price"))
96
97 def get_purchased_users(self) -> QuerySet[User]:
98 return User.objects.filter(
99 pk__in=apps.get_model("studying.Study").objects.filter(course=self).values_list("student", flat=True),
100 )
101
102 def send_email_to_all_purchased_users(self, template_id: str):
103 for user in self.get_purchased_users().iterator():
104 send_mail.delay(
105 to=user.email,
106 template_id=template_id,
107 )
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/products/models/course.py b/src/products/models/course.py
--- a/src/products/models/course.py
+++ b/src/products/models/course.py
@@ -105,3 +105,11 @@
to=user.email,
template_id=template_id,
)
+
+ def __str__(self) -> str:
+ name = getattr(self, "name", None)
+ group = getattr(self, "group", None)
+ if name is not None and group is not None:
+ return f"{name} - {group.name}"
+
+ return super().__str__()
|
{"golden_diff": "diff --git a/src/products/models/course.py b/src/products/models/course.py\n--- a/src/products/models/course.py\n+++ b/src/products/models/course.py\n@@ -105,3 +105,11 @@\n to=user.email,\n template_id=template_id,\n )\n+\n+ def __str__(self) -> str:\n+ name = getattr(self, \"name\", None)\n+ group = getattr(self, \"group\", None)\n+ if name is not None and group is not None:\n+ return f\"{name} - {group.name}\"\n+\n+ return super().__str__()\n", "issue": "\u0412 \u0430\u0434\u043c\u0438\u043d\u043a\u0435 \u0432\u043e \u0432\u0441\u0435\u0445 \u043d\u0430\u0437\u0432\u0430\u043d\u0438\u044f\u0445 \u043a\u0443\u0440\u0441\u0430 \u0432\u044b\u0432\u043e\u0434\u0438\u0442\u044c \u043d\u0430\u0437\u0432\u0430\u043d\u0438\u0435 \u043f\u043e\u0442\u043e\u043a\u0430\n\u0421\u0435\u0439\u0447\u0430\u0441 \u0441\u043e\u0432\u0435\u0440\u0448\u0435\u043d\u043d\u043e \u043d\u0435\u043f\u043e\u043d\u044f\u0442\u043d\u043e, \u043a \u043a\u0430\u043a\u043e\u043c\u0443 \u043f\u043e\u0442\u043e\u043a\u0443 \u043f\u0440\u0438\u043d\u0430\u0434\u043b\u0435\u0436\u0438\u0442 \u043a\u0443\u0440\u0441 \u2014\u00a0\u043f\u0440\u0438\u0445\u043e\u0434\u0438\u0442\u0441\u044f \u0434\u043e\u0433\u0430\u0434\u044b\u0432\u0430\u0442\u044c\u0441\u044f \u043f\u043e \u0441\u0442\u0430\u0440\u0448\u0438\u043d\u0441\u0442\u0432\u0443. \u041d\u0430\u0434\u043e, \u0447\u0442\u043e\u0431\u044b \u0432\u043e\u0442 \u0442\u0443\u0442 (\u0441\u043c. \u043d\u0438\u0436\u0435) \u0432\u044b\u0432\u043e\u0434\u0438\u043b\u043e\u0441\u044c \u043d\u0430\u0437\u0432\u0430\u043d\u0438\u0435 ProductGroup, \u043a \u043a\u043e\u0442\u043e\u0440\u043e\u043c\u0443 \u043f\u0440\u0438\u0432\u044f\u0437\u0430\u043d \u043a\u0443\u0440\u0441.\r\n\r\n<img width=\"1511\" alt=\"Screenshot 2022-06-20 at 10 55 18\" src=\"https://user-images.githubusercontent.com/1592663/174552950-bf6ee7e8-6ba7-43f7-af90-5ba2fededfd7.png\">\r\n\r\n\n", "before_files": [{"content": "from django.apps import apps\nfrom django.core.exceptions import ValidationError\nfrom django.db.models import OuterRef\nfrom django.db.models import QuerySet\nfrom django.db.models import Subquery\nfrom django.utils.translation import gettext_lazy as _\n\nfrom app.files import RandomFileName\nfrom app.models import models\nfrom mailing.tasks import send_mail\nfrom products.models.base import Shippable\nfrom users.models import User\n\n\nclass CourseQuerySet(QuerySet):\n def for_lms(self) -> QuerySet[\"Course\"]:\n return self.filter(\n display_in_lms=True,\n ).with_course_homepage()\n\n def with_course_homepage(self) -> QuerySet[\"Course\"]:\n materials = (\n apps.get_model(\"notion.Material\")\n .objects.filter(\n course=OuterRef(\"pk\"),\n is_home_page=True,\n )\n .order_by(\n \"-created\",\n )\n .values(\n \"page_id\",\n )\n )\n\n return self.annotate(\n home_page_slug=Subquery(materials[:1]),\n )\n\n\nCourseManager = models.Manager.from_queryset(CourseQuerySet)\n\n\nclass Course(Shippable):\n objects = CourseManager()\n\n name_genitive = models.CharField(_(\"Genitive name\"), max_length=255, help_text=\"\u00ab\u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb. 
\u041a \u043f\u0440\u0438\u043c\u0435\u0440\u0443 \u0434\u043b\u044f \u0437\u0430\u043f\u0438\u0441\u0435\u0439.\")\n zoomus_webinar_id = models.CharField(\n _(\"Zoom.us webinar ID\"), max_length=255, null=True, blank=True, help_text=_(\"If set, every user who purcashes this course gets invited\")\n )\n\n welcome_letter_template_id = models.CharField(\n _(\"Welcome letter template id\"), max_length=255, blank=True, null=True, help_text=_(\"Will be sent upon purchase if set\")\n )\n gift_welcome_letter_template_id = models.CharField(\n _(\"Special welcome letter template id for gifts\"), max_length=255, blank=True, null=True, help_text=_(\"If not set, common welcome letter will be used\")\n )\n display_in_lms = models.BooleanField(_(\"Display in LMS\"), default=True, help_text=_(\"If disabled will not be shown in LMS\"))\n\n diploma_template_context = models.JSONField(default=dict, blank=True)\n\n disable_triggers = models.BooleanField(_(\"Disable all triggers\"), default=False)\n\n confirmation_template_id = models.CharField(\n _(\"Confirmation template id\"),\n max_length=255,\n null=True,\n blank=True,\n help_text=_(\"If set user sill receive this message upon creating zero-priced order\"),\n )\n confirmation_success_url = models.URLField(_(\"Confirmation success URL\"), null=True, blank=True)\n\n cover = models.ImageField(\n verbose_name=_(\"Cover image\"),\n upload_to=RandomFileName(\"courses/covers\"),\n blank=True,\n help_text=_(\"The cover image of course\"),\n )\n\n class Meta:\n ordering = [\"-id\"]\n verbose_name = _(\"Course\")\n verbose_name_plural = _(\"Courses\")\n db_table = \"courses_course\"\n\n def clean(self):\n \"\"\"Check for correct setting of confirmation_template_id and confirmation_success_url\"\"\"\n if not self.confirmation_template_id and not self.confirmation_success_url:\n return\n\n if not all([self.confirmation_template_id, self.confirmation_success_url]):\n raise ValidationError(_(\"Both confirmation_template_id and confirmation_success_url must be set\"))\n\n if self.price != 0:\n raise ValidationError(_(\"Courses with confirmation should have zero price\"))\n\n def get_purchased_users(self) -> QuerySet[User]:\n return User.objects.filter(\n pk__in=apps.get_model(\"studying.Study\").objects.filter(course=self).values_list(\"student\", flat=True),\n )\n\n def send_email_to_all_purchased_users(self, template_id: str):\n for user in self.get_purchased_users().iterator():\n send_mail.delay(\n to=user.email,\n template_id=template_id,\n )\n", "path": "src/products/models/course.py"}], "after_files": [{"content": "from django.apps import apps\nfrom django.core.exceptions import ValidationError\nfrom django.db.models import OuterRef\nfrom django.db.models import QuerySet\nfrom django.db.models import Subquery\nfrom django.utils.translation import gettext_lazy as _\n\nfrom app.files import RandomFileName\nfrom app.models import models\nfrom mailing.tasks import send_mail\nfrom products.models.base import Shippable\nfrom users.models import User\n\n\nclass CourseQuerySet(QuerySet):\n def for_lms(self) -> QuerySet[\"Course\"]:\n return self.filter(\n display_in_lms=True,\n ).with_course_homepage()\n\n def with_course_homepage(self) -> QuerySet[\"Course\"]:\n materials = (\n apps.get_model(\"notion.Material\")\n .objects.filter(\n course=OuterRef(\"pk\"),\n is_home_page=True,\n )\n .order_by(\n \"-created\",\n )\n .values(\n \"page_id\",\n )\n )\n\n return self.annotate(\n home_page_slug=Subquery(materials[:1]),\n )\n\n\nCourseManager = 
models.Manager.from_queryset(CourseQuerySet)\n\n\nclass Course(Shippable):\n objects = CourseManager()\n\n name_genitive = models.CharField(_(\"Genitive name\"), max_length=255, help_text=\"\u00ab\u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb. \u041a \u043f\u0440\u0438\u043c\u0435\u0440\u0443 \u0434\u043b\u044f \u0437\u0430\u043f\u0438\u0441\u0435\u0439.\")\n zoomus_webinar_id = models.CharField(\n _(\"Zoom.us webinar ID\"), max_length=255, null=True, blank=True, help_text=_(\"If set, every user who purcashes this course gets invited\")\n )\n\n welcome_letter_template_id = models.CharField(\n _(\"Welcome letter template id\"), max_length=255, blank=True, null=True, help_text=_(\"Will be sent upon purchase if set\")\n )\n gift_welcome_letter_template_id = models.CharField(\n _(\"Special welcome letter template id for gifts\"), max_length=255, blank=True, null=True, help_text=_(\"If not set, common welcome letter will be used\")\n )\n display_in_lms = models.BooleanField(_(\"Display in LMS\"), default=True, help_text=_(\"If disabled will not be shown in LMS\"))\n\n diploma_template_context = models.JSONField(default=dict, blank=True)\n\n disable_triggers = models.BooleanField(_(\"Disable all triggers\"), default=False)\n\n confirmation_template_id = models.CharField(\n _(\"Confirmation template id\"),\n max_length=255,\n null=True,\n blank=True,\n help_text=_(\"If set user sill receive this message upon creating zero-priced order\"),\n )\n confirmation_success_url = models.URLField(_(\"Confirmation success URL\"), null=True, blank=True)\n\n cover = models.ImageField(\n verbose_name=_(\"Cover image\"),\n upload_to=RandomFileName(\"courses/covers\"),\n blank=True,\n help_text=_(\"The cover image of course\"),\n )\n\n class Meta:\n ordering = [\"-id\"]\n verbose_name = _(\"Course\")\n verbose_name_plural = _(\"Courses\")\n db_table = \"courses_course\"\n\n def clean(self):\n \"\"\"Check for correct setting of confirmation_template_id and confirmation_success_url\"\"\"\n if not self.confirmation_template_id and not self.confirmation_success_url:\n return\n\n if not all([self.confirmation_template_id, self.confirmation_success_url]):\n raise ValidationError(_(\"Both confirmation_template_id and confirmation_success_url must be set\"))\n\n if self.price != 0:\n raise ValidationError(_(\"Courses with confirmation should have zero price\"))\n\n def get_purchased_users(self) -> QuerySet[User]:\n return User.objects.filter(\n pk__in=apps.get_model(\"studying.Study\").objects.filter(course=self).values_list(\"student\", flat=True),\n )\n\n def send_email_to_all_purchased_users(self, template_id: str):\n for user in self.get_purchased_users().iterator():\n send_mail.delay(\n to=user.email,\n template_id=template_id,\n )\n\n def __str__(self) -> str:\n name = getattr(self, \"name\", None)\n group = getattr(self, \"group\", None)\n if name is not None and group is not None:\n return f\"{name} - {group.name}\"\n\n return super().__str__()\n", "path": "src/products/models/course.py"}]}
| 1,473 | 130 |
gh_patches_debug_39756
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-2383
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_GIT_4 GitHub "Ensure Secrets are encrypted" is based on a misinterpretation
note: I am not a user of Checkov but I contributed to the GitHub terraform provider so [this comment](https://github.com/integrations/terraform-provider-github/issues/888#issuecomment-1033071672) by @ned1313 made me aware of Checkov behaviour which I believe to be incorrect.
**Describe the issue**
The GitHub [Ensure Secrets are encrypted](https://github.com/bridgecrewio/checkov/blob/0700739e71db58e2dc7987694c33757642b7d38b/checkov/terraform/checks/resource/github/SecretsEncrypted.py) check appears to be based on a misinterpretation of the GitHub functionality and the GitHub terraform provider documentation, because of ambiguity in the argument's naming scheme and the documentation.
Secrets can be provided to GitHub in either a plain text form or an encrypted form. GitHub will encrypt any plain text secrets that arrive, and leave encrypted secrets as is. Regardless of the choice of how to provide the input (plain text or encrypted) it will be stored encrypted by GitHub, then decrypted by GitHub Actions at runtime using the private key.
The choice to use `encrypted_value` over `plaintext_value` is made when you have a secret that you are providing as an input to terraform that you do not want to end up in your terraform state in plain text. If a secret has been generated by a different terraform provider (e.g: a cloud provider access token) then it will already exist in the terraform state, so passing it as a `plaintext_value` to GitHub doesn't introduce any additional exposure.
```python
class SecretsEncrypted(BaseResourceNegativeValueCheck):
def __init__(self) -> None:
# -from github docs "It is also advised that you do not store plaintext values in your code but rather populate
# the encrypted_value using fields from a resource, data source or variable as,
# while encrypted in state, these will be easily accessible in your code"
```
The full quote here the [GitHub documentation](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/actions_environment_secret) is...
> For the purposes of security, the contents of the `plaintext_value` field have been marked as sensitive to Terraform, but it is important to note that this does not hide it from state files. You should treat state as sensitive always. It is also advised that you do not store plaintext values in your code but rather populate the `encrypted_value` using fields from a resource, data source or variable as, while encrypted in state, these will be easily accessible in your code.
Rather than serve to describe the behaviour of `encrypted_value` and `plaintext_value` the statement is actually just a generic warning about terraform best practices.
Here's an example of what Checkov currently produces a warning for, despite it being a valid and secure use:
```hcl
resource "azuread_service_principal" "gh_actions" {
application_id = azuread_application.gh_actions.application_id
owners = [data.azuread_client_config.current.object_id]
}
resource "azuread_service_principal_password" "gh_actions" {
service_principal_id = azuread_service_principal.gh_actions.object_id
}
resource "github_actions_secret" "example_secret" {
repository = "example_repository"
secret_name = "example_secret_name"
plaintext_value = azuread_service_principal_password.gh_actions.value
}
```
The check could either be removed completely (because `plaintext_value` is appropriate to use in most situations) or, if possible, modified to only warn when the value for the `plaintext_value` argument is a string value directly written into the terraform configuration.
I hope that's clear, please let me know if I can clarify anything.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/base_resource_value_check.py`
Content:
```
1 from abc import abstractmethod
2 from collections.abc import Iterable
3 from typing import List, Dict, Any
4
5 import dpath.util
6 import re
7 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
8 from checkov.common.models.enums import CheckResult, CheckCategories
9 from checkov.common.models.consts import ANY_VALUE
10 from checkov.common.util.type_forcers import force_list
11 from checkov.terraform.graph_builder.utils import get_referenced_vertices_in_value
12 from checkov.terraform.parser_functions import handle_dynamic_values
13 from checkov.terraform.parser_utils import find_var_blocks
14
15
16
17 class BaseResourceValueCheck(BaseResourceCheck):
18 def __init__(
19 self,
20 name: str,
21 id: str,
22 categories: "Iterable[CheckCategories]",
23 supported_resources: "Iterable[str]",
24 missing_block_result: CheckResult = CheckResult.FAILED,
25 ) -> None:
26 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
27 self.missing_block_result = missing_block_result
28
29 @staticmethod
30 def _filter_key_path(path: str) -> List[str]:
31 """
32 Filter an attribute path to contain only named attributes by dropping array indices from the path)
33 :param path: valid JSONPath of an attribute
34 :return: List of named attributes with respect to the input JSONPath order
35 """
36 return [x for x in path.split("/") if not re.search(re.compile(r"^\[?\d+]?$"), x)]
37
38 @staticmethod
39 def _is_nesting_key(inspected_attributes: List[str], key: List[str]) -> bool:
40 """
41 Resolves whether a key is a subset of the inspected nesting attributes
42 :param inspected_attributes: list of nesting attributes
43 :param key: JSONPath key of an attribute
44 :return: True/False
45 """
46 return any(x in key for x in inspected_attributes)
47
48 def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:
49 handle_dynamic_values(conf)
50 inspected_key = self.get_inspected_key()
51 expected_values = self.get_expected_values()
52 if dpath.search(conf, inspected_key) != {}:
53 # Inspected key exists
54 value = dpath.get(conf, inspected_key)
55 if isinstance(value, list) and len(value) == 1:
56 value = value[0]
57 if value is None or (isinstance(value, list) and not value):
58 return self.missing_block_result
59 if ANY_VALUE in expected_values and value is not None and (not isinstance(value, str) or value):
60 # Key is found on the configuration - if it accepts any value, the check is PASSED
61 return CheckResult.PASSED
62 if self._is_variable_dependant(value):
63 # If the tested attribute is variable-dependant, then result is PASSED
64 return CheckResult.PASSED
65 if value in expected_values:
66 return CheckResult.PASSED
67 if get_referenced_vertices_in_value(value=value, aliases={}, resources_types=[]):
68 # we don't provide resources_types as we want to stay provider agnostic
69 return CheckResult.UNKNOWN
70 return CheckResult.FAILED
71 else:
72 # Look for the configuration in a bottom-up fashion
73 inspected_attributes = self._filter_key_path(inspected_key)
74 for attribute in reversed(inspected_attributes):
75 for sub_key, sub_conf in dpath.search(conf, f"**/{attribute}", yielded=True):
76 filtered_sub_key = self._filter_key_path(sub_key)
77 # Only proceed with check if full path for key is similar - not partial match
78 if inspected_attributes == filtered_sub_key:
79 if self._is_nesting_key(inspected_attributes, filtered_sub_key):
80 if isinstance(sub_conf, list) and len(sub_conf) == 1:
81 sub_conf = sub_conf[0]
82 if sub_conf in self.get_expected_values():
83 return CheckResult.PASSED
84 if self._is_variable_dependant(sub_conf):
85 # If the tested attribute is variable-dependant, then result is PASSED
86 return CheckResult.PASSED
87
88 return self.missing_block_result
89
90 @abstractmethod
91 def get_inspected_key(self) -> str:
92 """
93 :return: JSONPath syntax path of the checked attribute
94 """
95 raise NotImplementedError()
96
97 def get_expected_values(self) -> List[Any]:
98 """
99 Override the method with the list of acceptable values if the check has more than one possible expected value, given
100 the inspected key
101 :return: List of expected values, defaults to a list of the expected value
102 """
103 return [self.get_expected_value()]
104
105 def get_expected_value(self) -> Any:
106 """
107 Returns the default expected value, governed by provider best practices
108 """
109 return True
110
111 def get_evaluated_keys(self) -> List[str]:
112 return force_list(self.get_inspected_key())
113
```
Path: `checkov/terraform/checks/resource/base_resource_check.py`
Content:
```
1 from abc import abstractmethod
2 from collections.abc import Iterable
3 from typing import Dict, List, Any, Optional
4
5 from checkov.common.checks.base_check import BaseCheck
6 from checkov.common.models.enums import CheckResult, CheckCategories
7 from checkov.terraform.checks.resource.registry import resource_registry
8 from checkov.terraform.parser_functions import handle_dynamic_values
9 from checkov.terraform.parser_utils import find_var_blocks
10
11
12 class BaseResourceCheck(BaseCheck):
13 def __init__(
14 self,
15 name: str,
16 id: str,
17 categories: "Iterable[CheckCategories]",
18 supported_resources: "Iterable[str]",
19 guideline: Optional[str] = None,
20 ) -> None:
21 super().__init__(
22 name=name,
23 id=id,
24 categories=categories,
25 supported_entities=supported_resources,
26 block_type="resource",
27 guideline=guideline,
28 )
29 self.supported_resources = supported_resources
30 resource_registry.register(self)
31
32 @staticmethod
33 def _is_variable_dependant(value: Any) -> bool:
34 if not isinstance(value, str):
35 return False
36
37 if value.startswith(('var.', 'local.', 'module.')):
38 return True
39
40 if "${" not in value:
41 return False
42
43 if find_var_blocks(value):
44 return True
45 return False
46
47 def scan_entity_conf(self, conf: Dict[str, List[Any]], entity_type: str) -> CheckResult:
48 self.entity_type = entity_type
49
50 if conf.get("count") == [0]:
51 return CheckResult.UNKNOWN
52
53 handle_dynamic_values(conf)
54 return self.scan_resource_conf(conf)
55
56 @abstractmethod
57 def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:
58 """
59 self.evaluated_keys should be set with a JSONPath of the attribute inspected.
60 If not relevant it should be set to an empty array so the previous check's value gets overridden in the report.
61 """
62 raise NotImplementedError()
63
```
Path: `checkov/terraform/checks/resource/github/SecretsEncrypted.py`
Content:
```
1 from typing import List, Any
2
3 from checkov.common.models.enums import CheckCategories
4 from checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck
5 from checkov.common.models.consts import ANY_VALUE
6
7
8 class SecretsEncrypted(BaseResourceNegativeValueCheck):
9 def __init__(self) -> None:
10 # -from github docs "It is also advised that you do not store plaintext values in your code but rather populate
11 # the encrypted_value using fields from a resource, data source or variable as,
12 # while encrypted in state, these will be easily accessible in your code"
13 name = "Ensure Secrets are encrypted"
14 id = "CKV_GIT_4"
15 supported_resources = ["github_actions_environment_secret",
16 "github_actions_organization_secret",
17 "github_actions_secret"]
18 categories = [CheckCategories.ENCRYPTION]
19 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
20
21 def get_inspected_key(self) -> str:
22 return "plaintext_value"
23
24 def get_forbidden_values(self) -> List[Any]:
25 return [ANY_VALUE]
26
27
28 check = SecretsEncrypted()
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/terraform/checks/resource/base_resource_check.py b/checkov/terraform/checks/resource/base_resource_check.py
--- a/checkov/terraform/checks/resource/base_resource_check.py
+++ b/checkov/terraform/checks/resource/base_resource_check.py
@@ -9,6 +9,19 @@
from checkov.terraform.parser_utils import find_var_blocks
+PROVIDER_PREFIXES = (
+ "aws_",
+ "azurerm_",
+ "azuread_",
+ "digitalocean_",
+ "google_",
+ "github_",
+ "linode_",
+ "oci_",
+ "openstack_",
+)
+
+
class BaseResourceCheck(BaseCheck):
def __init__(
self,
@@ -34,7 +47,10 @@
if not isinstance(value, str):
return False
- if value.startswith(('var.', 'local.', 'module.')):
+ if value.startswith(("var.", "local.", "module.")):
+ return True
+
+ if value.startswith(PROVIDER_PREFIXES):
return True
if "${" not in value:
diff --git a/checkov/terraform/checks/resource/base_resource_value_check.py b/checkov/terraform/checks/resource/base_resource_value_check.py
--- a/checkov/terraform/checks/resource/base_resource_value_check.py
+++ b/checkov/terraform/checks/resource/base_resource_value_check.py
@@ -10,8 +10,6 @@
from checkov.common.util.type_forcers import force_list
from checkov.terraform.graph_builder.utils import get_referenced_vertices_in_value
from checkov.terraform.parser_functions import handle_dynamic_values
-from checkov.terraform.parser_utils import find_var_blocks
-
class BaseResourceValueCheck(BaseResourceCheck):
diff --git a/checkov/terraform/checks/resource/github/SecretsEncrypted.py b/checkov/terraform/checks/resource/github/SecretsEncrypted.py
--- a/checkov/terraform/checks/resource/github/SecretsEncrypted.py
+++ b/checkov/terraform/checks/resource/github/SecretsEncrypted.py
@@ -1,6 +1,6 @@
-from typing import List, Any
+from typing import List, Any, Dict
-from checkov.common.models.enums import CheckCategories
+from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck
from checkov.common.models.consts import ANY_VALUE
@@ -12,12 +12,21 @@
# while encrypted in state, these will be easily accessible in your code"
name = "Ensure Secrets are encrypted"
id = "CKV_GIT_4"
- supported_resources = ["github_actions_environment_secret",
- "github_actions_organization_secret",
- "github_actions_secret"]
- categories = [CheckCategories.ENCRYPTION]
+ supported_resources = (
+ "github_actions_environment_secret",
+ "github_actions_organization_secret",
+ "github_actions_secret",
+ )
+ categories = (CheckCategories.ENCRYPTION,)
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
+ def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:
+ plaintext = conf.get("plaintext_value")
+ if plaintext and self._is_variable_dependant(plaintext[0]):
+ return CheckResult.PASSED
+
+ return super().scan_resource_conf(conf)
+
def get_inspected_key(self) -> str:
return "plaintext_value"
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/base_resource_check.py b/checkov/terraform/checks/resource/base_resource_check.py\n--- a/checkov/terraform/checks/resource/base_resource_check.py\n+++ b/checkov/terraform/checks/resource/base_resource_check.py\n@@ -9,6 +9,19 @@\n from checkov.terraform.parser_utils import find_var_blocks\n \n \n+PROVIDER_PREFIXES = (\n+ \"aws_\",\n+ \"azurerm_\",\n+ \"azuread_\",\n+ \"digitalocean_\",\n+ \"google_\",\n+ \"github_\",\n+ \"linode_\",\n+ \"oci_\",\n+ \"openstack_\",\n+)\n+\n+\n class BaseResourceCheck(BaseCheck):\n def __init__(\n self,\n@@ -34,7 +47,10 @@\n if not isinstance(value, str):\n return False\n \n- if value.startswith(('var.', 'local.', 'module.')):\n+ if value.startswith((\"var.\", \"local.\", \"module.\")):\n+ return True\n+\n+ if value.startswith(PROVIDER_PREFIXES):\n return True\n \n if \"${\" not in value:\ndiff --git a/checkov/terraform/checks/resource/base_resource_value_check.py b/checkov/terraform/checks/resource/base_resource_value_check.py\n--- a/checkov/terraform/checks/resource/base_resource_value_check.py\n+++ b/checkov/terraform/checks/resource/base_resource_value_check.py\n@@ -10,8 +10,6 @@\n from checkov.common.util.type_forcers import force_list\n from checkov.terraform.graph_builder.utils import get_referenced_vertices_in_value\n from checkov.terraform.parser_functions import handle_dynamic_values\n-from checkov.terraform.parser_utils import find_var_blocks\n-\n \n \n class BaseResourceValueCheck(BaseResourceCheck):\ndiff --git a/checkov/terraform/checks/resource/github/SecretsEncrypted.py b/checkov/terraform/checks/resource/github/SecretsEncrypted.py\n--- a/checkov/terraform/checks/resource/github/SecretsEncrypted.py\n+++ b/checkov/terraform/checks/resource/github/SecretsEncrypted.py\n@@ -1,6 +1,6 @@\n-from typing import List, Any\n+from typing import List, Any, Dict\n \n-from checkov.common.models.enums import CheckCategories\n+from checkov.common.models.enums import CheckCategories, CheckResult\n from checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck\n from checkov.common.models.consts import ANY_VALUE\n \n@@ -12,12 +12,21 @@\n # while encrypted in state, these will be easily accessible in your code\"\n name = \"Ensure Secrets are encrypted\"\n id = \"CKV_GIT_4\"\n- supported_resources = [\"github_actions_environment_secret\",\n- \"github_actions_organization_secret\",\n- \"github_actions_secret\"]\n- categories = [CheckCategories.ENCRYPTION]\n+ supported_resources = (\n+ \"github_actions_environment_secret\",\n+ \"github_actions_organization_secret\",\n+ \"github_actions_secret\",\n+ )\n+ categories = (CheckCategories.ENCRYPTION,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n+ def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n+ plaintext = conf.get(\"plaintext_value\")\n+ if plaintext and self._is_variable_dependant(plaintext[0]):\n+ return CheckResult.PASSED\n+\n+ return super().scan_resource_conf(conf)\n+\n def get_inspected_key(self) -> str:\n return \"plaintext_value\"\n", "issue": "CKV_GIT_4 GitHub \"Ensure Secrets are encrypted\" is based on a misinterpretation\nnote: I am not a user of Checkov but I contributed to the GitHub terraform provider so [this comment](https://github.com/integrations/terraform-provider-github/issues/888#issuecomment-1033071672) by @ned1313 made me aware or Checkov behaviour which I believe to be incorrect.\r\n\r\n**Describe 
the issue**\r\n\r\nThe GitHub [Ensure Secrets are encrypted](https://github.com/bridgecrewio/checkov/blob/0700739e71db58e2dc7987694c33757642b7d38b/checkov/terraform/checks/resource/github/SecretsEncrypted.py) check appears to be based on a misinterpretation of the GitHub functionality and the GitHub terraform provider documentation, because of ambiguity in the argument's naming scheme and the documentation.\r\n\r\nSecrets can be provided to GitHub in either a plain text form or an encrypted form. GitHub will encrypt any plain text secrets that arrive, and leave encrypted secrets as is. Regardless of the choice of how to provide the input (plain text or encrypted) it will be stored encrypted by GitHub, then decrypted by GitHub Actions at runtime using the private key.\r\n\r\nThe choice to use `encrypted_value` over `plaintext_value` is made when you have a secret that you are providing as an input to terraform that you do not want to end up in your terraform state in plain text. If a secret has been generated by a different terraform provider (e.g: a cloud provider access token) then it will already exist in the terraform state, so passing it as a `plaintext_value` to GitHub doesn't introduce any additional exposure.\r\n\r\n```python\r\nclass SecretsEncrypted(BaseResourceNegativeValueCheck):\r\n def __init__(self) -> None:\r\n # -from github docs \"It is also advised that you do not store plaintext values in your code but rather populate\r\n # the encrypted_value using fields from a resource, data source or variable as,\r\n # while encrypted in state, these will be easily accessible in your code\"\r\n```\r\n\r\nThe full quote here the [GitHub documentation](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/actions_environment_secret) is...\r\n\r\n> For the purposes of security, the contents of the `plaintext_value` field have been marked as sensitive to Terraform, but it is important to note that this does not hide it from state files. You should treat state as sensitive always. It is also advised that you do not store plaintext values in your code but rather populate the `encrypted_value` using fields from a resource, data source or variable as, while encrypted in state, these will be easily accessible in your code. 
\r\n\r\nRather than serve to describe the behaviour of `encrypted_value` and `plaintext_value` the statement is actually just a generic warning about terraform best practices.\r\n\r\nHere's an example of what would Checkov currently produces a warning for, despite it being a valid and secure use:\r\n\r\n```hcl\r\nresource \"azuread_service_principal\" \"gh_actions\" {\r\n application_id = azuread_application.gh_actions.application_id\r\n owners = [data.azuread_client_config.current.object_id]\r\n}\r\n\r\nresource \"azuread_service_principal_password\" \"gh_actions\" {\r\n service_principal_id = azuread_service_principal.gh_actions.object_id\r\n}\r\n\r\nresource \"github_actions_secret\" \"example_secret\" {\r\n repository = \"example_repository\"\r\n secret_name = \"example_secret_name\"\r\n plaintext_value = azuread_service_principal_password.gh_actions.value\r\n}\r\n```\r\n\r\nThe check could either be removed completely (because `plaintext_value` is appropriate to use in most situations) or, if possible, modified to only warn when the value for the `plaintext_value` argument is a string value directly written into the terraform configuration.\r\n\r\nI hope that's clear, please let me know if I can clarify anything.\n", "before_files": [{"content": "from abc import abstractmethod\nfrom collections.abc import Iterable\nfrom typing import List, Dict, Any\n\nimport dpath.util\nimport re\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.common.models.consts import ANY_VALUE\nfrom checkov.common.util.type_forcers import force_list\nfrom checkov.terraform.graph_builder.utils import get_referenced_vertices_in_value\nfrom checkov.terraform.parser_functions import handle_dynamic_values\nfrom checkov.terraform.parser_utils import find_var_blocks\n\n\n\nclass BaseResourceValueCheck(BaseResourceCheck):\n def __init__(\n self,\n name: str,\n id: str,\n categories: \"Iterable[CheckCategories]\",\n supported_resources: \"Iterable[str]\",\n missing_block_result: CheckResult = CheckResult.FAILED,\n ) -> None:\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n self.missing_block_result = missing_block_result\n\n @staticmethod\n def _filter_key_path(path: str) -> List[str]:\n \"\"\"\n Filter an attribute path to contain only named attributes by dropping array indices from the path)\n :param path: valid JSONPath of an attribute\n :return: List of named attributes with respect to the input JSONPath order\n \"\"\"\n return [x for x in path.split(\"/\") if not re.search(re.compile(r\"^\\[?\\d+]?$\"), x)]\n\n @staticmethod\n def _is_nesting_key(inspected_attributes: List[str], key: List[str]) -> bool:\n \"\"\"\n Resolves whether a key is a subset of the inspected nesting attributes\n :param inspected_attributes: list of nesting attributes\n :param key: JSONPath key of an attribute\n :return: True/False\n \"\"\"\n return any(x in key for x in inspected_attributes)\n\n def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n handle_dynamic_values(conf)\n inspected_key = self.get_inspected_key()\n expected_values = self.get_expected_values()\n if dpath.search(conf, inspected_key) != {}:\n # Inspected key exists\n value = dpath.get(conf, inspected_key)\n if isinstance(value, list) and len(value) == 1:\n value = value[0]\n if value is None or (isinstance(value, list) and not value):\n return self.missing_block_result\n if 
ANY_VALUE in expected_values and value is not None and (not isinstance(value, str) or value):\n # Key is found on the configuration - if it accepts any value, the check is PASSED\n return CheckResult.PASSED\n if self._is_variable_dependant(value):\n # If the tested attribute is variable-dependant, then result is PASSED\n return CheckResult.PASSED\n if value in expected_values:\n return CheckResult.PASSED\n if get_referenced_vertices_in_value(value=value, aliases={}, resources_types=[]):\n # we don't provide resources_types as we want to stay provider agnostic\n return CheckResult.UNKNOWN\n return CheckResult.FAILED\n else:\n # Look for the configuration in a bottom-up fashion\n inspected_attributes = self._filter_key_path(inspected_key)\n for attribute in reversed(inspected_attributes):\n for sub_key, sub_conf in dpath.search(conf, f\"**/{attribute}\", yielded=True):\n filtered_sub_key = self._filter_key_path(sub_key)\n # Only proceed with check if full path for key is similar - not partial match\n if inspected_attributes == filtered_sub_key:\n if self._is_nesting_key(inspected_attributes, filtered_sub_key):\n if isinstance(sub_conf, list) and len(sub_conf) == 1:\n sub_conf = sub_conf[0]\n if sub_conf in self.get_expected_values():\n return CheckResult.PASSED\n if self._is_variable_dependant(sub_conf):\n # If the tested attribute is variable-dependant, then result is PASSED\n return CheckResult.PASSED\n\n return self.missing_block_result\n\n @abstractmethod\n def get_inspected_key(self) -> str:\n \"\"\"\n :return: JSONPath syntax path of the checked attribute\n \"\"\"\n raise NotImplementedError()\n\n def get_expected_values(self) -> List[Any]:\n \"\"\"\n Override the method with the list of acceptable values if the check has more than one possible expected value, given\n the inspected key\n :return: List of expected values, defaults to a list of the expected value\n \"\"\"\n return [self.get_expected_value()]\n\n def get_expected_value(self) -> Any:\n \"\"\"\n Returns the default expected value, governed by provider best practices\n \"\"\"\n return True\n\n def get_evaluated_keys(self) -> List[str]:\n return force_list(self.get_inspected_key())\n", "path": "checkov/terraform/checks/resource/base_resource_value_check.py"}, {"content": "from abc import abstractmethod\nfrom collections.abc import Iterable\nfrom typing import Dict, List, Any, Optional\n\nfrom checkov.common.checks.base_check import BaseCheck\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.registry import resource_registry\nfrom checkov.terraform.parser_functions import handle_dynamic_values\nfrom checkov.terraform.parser_utils import find_var_blocks\n\n\nclass BaseResourceCheck(BaseCheck):\n def __init__(\n self,\n name: str,\n id: str,\n categories: \"Iterable[CheckCategories]\",\n supported_resources: \"Iterable[str]\",\n guideline: Optional[str] = None,\n ) -> None:\n super().__init__(\n name=name,\n id=id,\n categories=categories,\n supported_entities=supported_resources,\n block_type=\"resource\",\n guideline=guideline,\n )\n self.supported_resources = supported_resources\n resource_registry.register(self)\n\n @staticmethod\n def _is_variable_dependant(value: Any) -> bool:\n if not isinstance(value, str):\n return False\n\n if value.startswith(('var.', 'local.', 'module.')):\n return True\n\n if \"${\" not in value:\n return False\n\n if find_var_blocks(value):\n return True\n return False\n\n def scan_entity_conf(self, conf: Dict[str, List[Any]], entity_type: 
str) -> CheckResult:\n self.entity_type = entity_type\n\n if conf.get(\"count\") == [0]:\n return CheckResult.UNKNOWN\n\n handle_dynamic_values(conf)\n return self.scan_resource_conf(conf)\n\n @abstractmethod\n def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n \"\"\"\n self.evaluated_keys should be set with a JSONPath of the attribute inspected.\n If not relevant it should be set to an empty array so the previous check's value gets overridden in the report.\n \"\"\"\n raise NotImplementedError()\n", "path": "checkov/terraform/checks/resource/base_resource_check.py"}, {"content": "from typing import List, Any\n\nfrom checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck\nfrom checkov.common.models.consts import ANY_VALUE\n\n\nclass SecretsEncrypted(BaseResourceNegativeValueCheck):\n def __init__(self) -> None:\n # -from github docs \"It is also advised that you do not store plaintext values in your code but rather populate\n # the encrypted_value using fields from a resource, data source or variable as,\n # while encrypted in state, these will be easily accessible in your code\"\n name = \"Ensure Secrets are encrypted\"\n id = \"CKV_GIT_4\"\n supported_resources = [\"github_actions_environment_secret\",\n \"github_actions_organization_secret\",\n \"github_actions_secret\"]\n categories = [CheckCategories.ENCRYPTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self) -> str:\n return \"plaintext_value\"\n\n def get_forbidden_values(self) -> List[Any]:\n return [ANY_VALUE]\n\n\ncheck = SecretsEncrypted()\n", "path": "checkov/terraform/checks/resource/github/SecretsEncrypted.py"}], "after_files": [{"content": "from abc import abstractmethod\nfrom collections.abc import Iterable\nfrom typing import List, Dict, Any\n\nimport dpath.util\nimport re\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.common.models.consts import ANY_VALUE\nfrom checkov.common.util.type_forcers import force_list\nfrom checkov.terraform.graph_builder.utils import get_referenced_vertices_in_value\nfrom checkov.terraform.parser_functions import handle_dynamic_values\n\n\nclass BaseResourceValueCheck(BaseResourceCheck):\n def __init__(\n self,\n name: str,\n id: str,\n categories: \"Iterable[CheckCategories]\",\n supported_resources: \"Iterable[str]\",\n missing_block_result: CheckResult = CheckResult.FAILED,\n ) -> None:\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n self.missing_block_result = missing_block_result\n\n @staticmethod\n def _filter_key_path(path: str) -> List[str]:\n \"\"\"\n Filter an attribute path to contain only named attributes by dropping array indices from the path)\n :param path: valid JSONPath of an attribute\n :return: List of named attributes with respect to the input JSONPath order\n \"\"\"\n return [x for x in path.split(\"/\") if not re.search(re.compile(r\"^\\[?\\d+]?$\"), x)]\n\n @staticmethod\n def _is_nesting_key(inspected_attributes: List[str], key: List[str]) -> bool:\n \"\"\"\n Resolves whether a key is a subset of the inspected nesting attributes\n :param inspected_attributes: list of nesting attributes\n :param key: JSONPath key of an attribute\n :return: True/False\n \"\"\"\n return any(x in key for 
x in inspected_attributes)\n\n def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n handle_dynamic_values(conf)\n inspected_key = self.get_inspected_key()\n expected_values = self.get_expected_values()\n if dpath.search(conf, inspected_key) != {}:\n # Inspected key exists\n value = dpath.get(conf, inspected_key)\n if isinstance(value, list) and len(value) == 1:\n value = value[0]\n if value is None or (isinstance(value, list) and not value):\n return self.missing_block_result\n if ANY_VALUE in expected_values and value is not None and (not isinstance(value, str) or value):\n # Key is found on the configuration - if it accepts any value, the check is PASSED\n return CheckResult.PASSED\n if self._is_variable_dependant(value):\n # If the tested attribute is variable-dependant, then result is PASSED\n return CheckResult.PASSED\n if value in expected_values:\n return CheckResult.PASSED\n if get_referenced_vertices_in_value(value=value, aliases={}, resources_types=[]):\n # we don't provide resources_types as we want to stay provider agnostic\n return CheckResult.UNKNOWN\n return CheckResult.FAILED\n else:\n # Look for the configuration in a bottom-up fashion\n inspected_attributes = self._filter_key_path(inspected_key)\n for attribute in reversed(inspected_attributes):\n for sub_key, sub_conf in dpath.search(conf, f\"**/{attribute}\", yielded=True):\n filtered_sub_key = self._filter_key_path(sub_key)\n # Only proceed with check if full path for key is similar - not partial match\n if inspected_attributes == filtered_sub_key:\n if self._is_nesting_key(inspected_attributes, filtered_sub_key):\n if isinstance(sub_conf, list) and len(sub_conf) == 1:\n sub_conf = sub_conf[0]\n if sub_conf in self.get_expected_values():\n return CheckResult.PASSED\n if self._is_variable_dependant(sub_conf):\n # If the tested attribute is variable-dependant, then result is PASSED\n return CheckResult.PASSED\n\n return self.missing_block_result\n\n @abstractmethod\n def get_inspected_key(self) -> str:\n \"\"\"\n :return: JSONPath syntax path of the checked attribute\n \"\"\"\n raise NotImplementedError()\n\n def get_expected_values(self) -> List[Any]:\n \"\"\"\n Override the method with the list of acceptable values if the check has more than one possible expected value, given\n the inspected key\n :return: List of expected values, defaults to a list of the expected value\n \"\"\"\n return [self.get_expected_value()]\n\n def get_expected_value(self) -> Any:\n \"\"\"\n Returns the default expected value, governed by provider best practices\n \"\"\"\n return True\n\n def get_evaluated_keys(self) -> List[str]:\n return force_list(self.get_inspected_key())\n", "path": "checkov/terraform/checks/resource/base_resource_value_check.py"}, {"content": "from abc import abstractmethod\nfrom collections.abc import Iterable\nfrom typing import Dict, List, Any, Optional\n\nfrom checkov.common.checks.base_check import BaseCheck\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.registry import resource_registry\nfrom checkov.terraform.parser_functions import handle_dynamic_values\nfrom checkov.terraform.parser_utils import find_var_blocks\n\n\nPROVIDER_PREFIXES = (\n \"aws_\",\n \"azurerm_\",\n \"azuread_\",\n \"digitalocean_\",\n \"google_\",\n \"github_\",\n \"linode_\",\n \"oci_\",\n \"openstack_\",\n)\n\n\nclass BaseResourceCheck(BaseCheck):\n def __init__(\n self,\n name: str,\n id: str,\n categories: \"Iterable[CheckCategories]\",\n 
supported_resources: \"Iterable[str]\",\n guideline: Optional[str] = None,\n ) -> None:\n super().__init__(\n name=name,\n id=id,\n categories=categories,\n supported_entities=supported_resources,\n block_type=\"resource\",\n guideline=guideline,\n )\n self.supported_resources = supported_resources\n resource_registry.register(self)\n\n @staticmethod\n def _is_variable_dependant(value: Any) -> bool:\n if not isinstance(value, str):\n return False\n\n if value.startswith((\"var.\", \"local.\", \"module.\")):\n return True\n\n if value.startswith(PROVIDER_PREFIXES):\n return True\n\n if \"${\" not in value:\n return False\n\n if find_var_blocks(value):\n return True\n return False\n\n def scan_entity_conf(self, conf: Dict[str, List[Any]], entity_type: str) -> CheckResult:\n self.entity_type = entity_type\n\n if conf.get(\"count\") == [0]:\n return CheckResult.UNKNOWN\n\n handle_dynamic_values(conf)\n return self.scan_resource_conf(conf)\n\n @abstractmethod\n def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n \"\"\"\n self.evaluated_keys should be set with a JSONPath of the attribute inspected.\n If not relevant it should be set to an empty array so the previous check's value gets overridden in the report.\n \"\"\"\n raise NotImplementedError()\n", "path": "checkov/terraform/checks/resource/base_resource_check.py"}, {"content": "from typing import List, Any, Dict\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck\nfrom checkov.common.models.consts import ANY_VALUE\n\n\nclass SecretsEncrypted(BaseResourceNegativeValueCheck):\n def __init__(self) -> None:\n # -from github docs \"It is also advised that you do not store plaintext values in your code but rather populate\n # the encrypted_value using fields from a resource, data source or variable as,\n # while encrypted in state, these will be easily accessible in your code\"\n name = \"Ensure Secrets are encrypted\"\n id = \"CKV_GIT_4\"\n supported_resources = (\n \"github_actions_environment_secret\",\n \"github_actions_organization_secret\",\n \"github_actions_secret\",\n )\n categories = (CheckCategories.ENCRYPTION,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n plaintext = conf.get(\"plaintext_value\")\n if plaintext and self._is_variable_dependant(plaintext[0]):\n return CheckResult.PASSED\n\n return super().scan_resource_conf(conf)\n\n def get_inspected_key(self) -> str:\n return \"plaintext_value\"\n\n def get_forbidden_values(self) -> List[Any]:\n return [ANY_VALUE]\n\n\ncheck = SecretsEncrypted()\n", "path": "checkov/terraform/checks/resource/github/SecretsEncrypted.py"}]}
| 3,262 | 774 |
gh_patches_debug_40716
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-5075
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Duplicate Source Interface session triggers server error
## Description
Initiating more than one SecureDrop session in the Source Interface at a time causes a Server Error for the second session, and causes the first session to be invalidated.
## Steps to Reproduce
1. Launch the Docker-based SecureDrop development environment;
2. Visit http://localhost:8080/generate twice, in two separate private browser tabs.
3. Click "Submit" in the first tab
4. Click "Submit" in the second tab
## Expected Behavior
I can submit messages or documents.
## Actual Behavior
In the second tab, the following error message appears:
> Server error
> Sorry, the website encountered an error and was unable to complete your request.
In the first tab, subsequent page loads take me to the login screen: http://localhost:8080/login
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/source_app/main.py`
Content:
```
1 import operator
2 import os
3 import io
4
5 from datetime import datetime
6 from flask import (Blueprint, render_template, flash, redirect, url_for, g,
7 session, current_app, request, Markup, abort)
8 from flask_babel import gettext
9 from sqlalchemy.exc import IntegrityError
10
11 import store
12
13 from db import db
14 from models import Source, Submission, Reply, get_one_or_else
15 from source_app.decorators import login_required
16 from source_app.utils import (logged_in, generate_unique_codename,
17 async_genkey, normalize_timestamps,
18 valid_codename, get_entropy_estimate)
19 from source_app.forms import LoginForm
20
21
22 def make_blueprint(config):
23 view = Blueprint('main', __name__)
24
25 @view.route('/')
26 def index():
27 return render_template('index.html')
28
29 @view.route('/generate', methods=('GET', 'POST'))
30 def generate():
31 if logged_in():
32 flash(gettext(
33 "You were redirected because you are already logged in. "
34 "If you want to create a new account, you should log out "
35 "first."),
36 "notification")
37 return redirect(url_for('.lookup'))
38
39 codename = generate_unique_codename(config)
40 session['codename'] = codename
41 session['new_user'] = True
42 return render_template('generate.html', codename=codename)
43
44 @view.route('/org-logo')
45 def select_logo():
46 if os.path.exists(os.path.join(current_app.static_folder, 'i',
47 'custom_logo.png')):
48 return redirect(url_for('static', filename='i/custom_logo.png'))
49 else:
50 return redirect(url_for('static', filename='i/logo.png'))
51
52 @view.route('/create', methods=['POST'])
53 def create():
54 filesystem_id = current_app.crypto_util.hash_codename(
55 session['codename'])
56
57 source = Source(filesystem_id, current_app.crypto_util.display_id())
58 db.session.add(source)
59 try:
60 db.session.commit()
61 except IntegrityError as e:
62 db.session.rollback()
63 current_app.logger.error(
64 "Attempt to create a source with duplicate codename: %s" %
65 (e,))
66
67 # Issue 2386: don't log in on duplicates
68 del session['codename']
69
70 # Issue 4361: Delete 'logged_in' if it's in the session
71 try:
72 del session['logged_in']
73 except KeyError:
74 pass
75
76 abort(500)
77 else:
78 os.mkdir(current_app.storage.path(filesystem_id))
79
80 session['logged_in'] = True
81 return redirect(url_for('.lookup'))
82
83 @view.route('/lookup', methods=('GET',))
84 @login_required
85 def lookup():
86 replies = []
87 source_inbox = Reply.query.filter(Reply.source_id == g.source.id) \
88 .filter(Reply.deleted_by_source == False).all() # noqa
89
90 for reply in source_inbox:
91 reply_path = current_app.storage.path(
92 g.filesystem_id,
93 reply.filename,
94 )
95 try:
96 with io.open(reply_path, "rb") as f:
97 contents = f.read()
98 reply_obj = current_app.crypto_util.decrypt(g.codename, contents)
99 reply.decrypted = reply_obj
100 except UnicodeDecodeError:
101 current_app.logger.error("Could not decode reply %s" %
102 reply.filename)
103 else:
104 reply.date = datetime.utcfromtimestamp(
105 os.stat(reply_path).st_mtime)
106 replies.append(reply)
107
108 # Sort the replies by date
109 replies.sort(key=operator.attrgetter('date'), reverse=True)
110
111 # Generate a keypair to encrypt replies from the journalist
112 # Only do this if the journalist has flagged the source as one
113 # that they would like to reply to. (Issue #140.)
114 if not current_app.crypto_util.getkey(g.filesystem_id) and \
115 g.source.flagged:
116 db_uri = current_app.config['SQLALCHEMY_DATABASE_URI']
117 async_genkey(current_app.crypto_util,
118 db_uri,
119 g.filesystem_id,
120 g.codename)
121
122 return render_template(
123 'lookup.html',
124 allow_document_uploads=current_app.instance_config.allow_document_uploads,
125 codename=g.codename,
126 replies=replies,
127 flagged=g.source.flagged,
128 new_user=session.get('new_user', None),
129 haskey=current_app.crypto_util.getkey(
130 g.filesystem_id))
131
132 @view.route('/submit', methods=('POST',))
133 @login_required
134 def submit():
135 allow_document_uploads = current_app.instance_config.allow_document_uploads
136 msg = request.form['msg']
137 fh = None
138 if allow_document_uploads and 'fh' in request.files:
139 fh = request.files['fh']
140
141 # Don't submit anything if it was an "empty" submission. #878
142 if not (msg or fh):
143 if allow_document_uploads:
144 flash(gettext(
145 "You must enter a message or choose a file to submit."),
146 "error")
147 else:
148 flash(gettext("You must enter a message."), "error")
149 return redirect(url_for('main.lookup'))
150
151 fnames = []
152 journalist_filename = g.source.journalist_filename
153 first_submission = g.source.interaction_count == 0
154
155 if msg:
156 g.source.interaction_count += 1
157 fnames.append(
158 current_app.storage.save_message_submission(
159 g.filesystem_id,
160 g.source.interaction_count,
161 journalist_filename,
162 msg))
163 if fh:
164 g.source.interaction_count += 1
165 fnames.append(
166 current_app.storage.save_file_submission(
167 g.filesystem_id,
168 g.source.interaction_count,
169 journalist_filename,
170 fh.filename,
171 fh.stream))
172
173 if first_submission:
174 msg = render_template('first_submission_flashed_message.html')
175 flash(Markup(msg), "success")
176
177 else:
178 if msg and not fh:
179 html_contents = gettext('Thanks! We received your message.')
180 elif not msg and fh:
181 html_contents = gettext('Thanks! We received your document.')
182 else:
183 html_contents = gettext('Thanks! We received your message and '
184 'document.')
185
186 msg = render_template('next_submission_flashed_message.html',
187 html_contents=html_contents)
188 flash(Markup(msg), "success")
189
190 new_submissions = []
191 for fname in fnames:
192 submission = Submission(g.source, fname)
193 db.session.add(submission)
194 new_submissions.append(submission)
195
196 if g.source.pending:
197 g.source.pending = False
198
199 # Generate a keypair now, if there's enough entropy (issue #303)
200 # (gpg reads 300 bytes from /dev/random)
201 entropy_avail = get_entropy_estimate()
202 if entropy_avail >= 2400:
203 db_uri = current_app.config['SQLALCHEMY_DATABASE_URI']
204
205 async_genkey(current_app.crypto_util,
206 db_uri,
207 g.filesystem_id,
208 g.codename)
209 current_app.logger.info("generating key, entropy: {}".format(
210 entropy_avail))
211 else:
212 current_app.logger.warn(
213 "skipping key generation. entropy: {}".format(
214 entropy_avail))
215
216 g.source.last_updated = datetime.utcnow()
217 db.session.commit()
218
219 for sub in new_submissions:
220 store.async_add_checksum_for_file(sub)
221
222 normalize_timestamps(g.filesystem_id)
223
224 return redirect(url_for('main.lookup'))
225
226 @view.route('/delete', methods=('POST',))
227 @login_required
228 def delete():
229 """This deletes the reply from the source's inbox, but preserves
230 the history for journalists such that they can view conversation
231 history.
232 """
233
234 query = Reply.query.filter_by(
235 filename=request.form['reply_filename'],
236 source_id=g.source.id)
237 reply = get_one_or_else(query, current_app.logger, abort)
238 reply.deleted_by_source = True
239 db.session.add(reply)
240 db.session.commit()
241
242 flash(gettext("Reply deleted"), "notification")
243 return redirect(url_for('.lookup'))
244
245 @view.route('/delete-all', methods=('POST',))
246 @login_required
247 def batch_delete():
248 replies = Reply.query.filter(Reply.source_id == g.source.id) \
249 .filter(Reply.deleted_by_source == False).all() # noqa
250 if len(replies) == 0:
251 current_app.logger.error("Found no replies when at least one was "
252 "expected")
253 return redirect(url_for('.lookup'))
254
255 for reply in replies:
256 reply.deleted_by_source = True
257 db.session.add(reply)
258 db.session.commit()
259
260 flash(gettext("All replies have been deleted"), "notification")
261 return redirect(url_for('.lookup'))
262
263 @view.route('/login', methods=('GET', 'POST'))
264 def login():
265 form = LoginForm()
266 if form.validate_on_submit():
267 codename = request.form['codename'].strip()
268 if valid_codename(codename):
269 session.update(codename=codename, logged_in=True)
270 return redirect(url_for('.lookup', from_login='1'))
271 else:
272 current_app.logger.info(
273 "Login failed for invalid codename")
274 flash(gettext("Sorry, that is not a recognized codename."),
275 "error")
276 return render_template('login.html', form=form)
277
278 @view.route('/logout')
279 def logout():
280 if logged_in():
281 msg = render_template('logout_flashed_message.html')
282
283 # Clear the session after we render the message so it's localized
284 # If a user specified a locale, save it and restore it
285 user_locale = g.locale
286 session.clear()
287 session['locale'] = user_locale
288
289 flash(Markup(msg), "important hide-if-not-tor-browser")
290 return redirect(url_for('.index'))
291
292 return view
293
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/securedrop/source_app/main.py b/securedrop/source_app/main.py
--- a/securedrop/source_app/main.py
+++ b/securedrop/source_app/main.py
@@ -2,6 +2,7 @@
import os
import io
+from base64 import urlsafe_b64encode
from datetime import datetime
from flask import (Blueprint, render_template, flash, redirect, url_for, g,
session, current_app, request, Markup, abort)
@@ -37,9 +38,17 @@
return redirect(url_for('.lookup'))
codename = generate_unique_codename(config)
- session['codename'] = codename
+
+ # Generate a unique id for each browser tab and associate the codename with this id.
+ # This will allow retrieval of the codename displayed in the tab from which the source has
+ # clicked to proceed to /generate (ref. issue #4458)
+ tab_id = urlsafe_b64encode(os.urandom(64)).decode()
+ codenames = session.get('codenames', {})
+ codenames[tab_id] = codename
+ session['codenames'] = codenames
+
session['new_user'] = True
- return render_template('generate.html', codename=codename)
+ return render_template('generate.html', codename=codename, tab_id=tab_id)
@view.route('/org-logo')
def select_logo():
@@ -51,33 +60,43 @@
@view.route('/create', methods=['POST'])
def create():
- filesystem_id = current_app.crypto_util.hash_codename(
- session['codename'])
-
- source = Source(filesystem_id, current_app.crypto_util.display_id())
- db.session.add(source)
- try:
- db.session.commit()
- except IntegrityError as e:
- db.session.rollback()
- current_app.logger.error(
- "Attempt to create a source with duplicate codename: %s" %
- (e,))
-
- # Issue 2386: don't log in on duplicates
- del session['codename']
-
- # Issue 4361: Delete 'logged_in' if it's in the session
- try:
- del session['logged_in']
- except KeyError:
- pass
-
- abort(500)
+ if session.get('logged_in', False):
+ flash(gettext("You are already logged in. Please verify your codename below as it " +
+ "may differ from the one displayed on the previous page."),
+ 'notification')
else:
- os.mkdir(current_app.storage.path(filesystem_id))
+ tab_id = request.form['tab_id']
+ codename = session['codenames'][tab_id]
+ session['codename'] = codename
+
+ del session['codenames']
+
+ filesystem_id = current_app.crypto_util.hash_codename(codename)
+
+ source = Source(filesystem_id, current_app.crypto_util.display_id())
+ db.session.add(source)
+ try:
+ db.session.commit()
+ except IntegrityError as e:
+ db.session.rollback()
+ current_app.logger.error(
+ "Attempt to create a source with duplicate codename: %s" %
+ (e,))
+
+ # Issue 2386: don't log in on duplicates
+ del session['codename']
+
+ # Issue 4361: Delete 'logged_in' if it's in the session
+ try:
+ del session['logged_in']
+ except KeyError:
+ pass
+
+ abort(500)
+ else:
+ os.mkdir(current_app.storage.path(filesystem_id))
- session['logged_in'] = True
+ session['logged_in'] = True
return redirect(url_for('.lookup'))
@view.route('/lookup', methods=('GET',))
|
{"golden_diff": "diff --git a/securedrop/source_app/main.py b/securedrop/source_app/main.py\n--- a/securedrop/source_app/main.py\n+++ b/securedrop/source_app/main.py\n@@ -2,6 +2,7 @@\n import os\n import io\n \n+from base64 import urlsafe_b64encode\n from datetime import datetime\n from flask import (Blueprint, render_template, flash, redirect, url_for, g,\n session, current_app, request, Markup, abort)\n@@ -37,9 +38,17 @@\n return redirect(url_for('.lookup'))\n \n codename = generate_unique_codename(config)\n- session['codename'] = codename\n+\n+ # Generate a unique id for each browser tab and associate the codename with this id.\n+ # This will allow retrieval of the codename displayed in the tab from which the source has\n+ # clicked to proceed to /generate (ref. issue #4458)\n+ tab_id = urlsafe_b64encode(os.urandom(64)).decode()\n+ codenames = session.get('codenames', {})\n+ codenames[tab_id] = codename\n+ session['codenames'] = codenames\n+\n session['new_user'] = True\n- return render_template('generate.html', codename=codename)\n+ return render_template('generate.html', codename=codename, tab_id=tab_id)\n \n @view.route('/org-logo')\n def select_logo():\n@@ -51,33 +60,43 @@\n \n @view.route('/create', methods=['POST'])\n def create():\n- filesystem_id = current_app.crypto_util.hash_codename(\n- session['codename'])\n-\n- source = Source(filesystem_id, current_app.crypto_util.display_id())\n- db.session.add(source)\n- try:\n- db.session.commit()\n- except IntegrityError as e:\n- db.session.rollback()\n- current_app.logger.error(\n- \"Attempt to create a source with duplicate codename: %s\" %\n- (e,))\n-\n- # Issue 2386: don't log in on duplicates\n- del session['codename']\n-\n- # Issue 4361: Delete 'logged_in' if it's in the session\n- try:\n- del session['logged_in']\n- except KeyError:\n- pass\n-\n- abort(500)\n+ if session.get('logged_in', False):\n+ flash(gettext(\"You are already logged in. Please verify your codename below as it \" +\n+ \"may differ from the one displayed on the previous page.\"),\n+ 'notification')\n else:\n- os.mkdir(current_app.storage.path(filesystem_id))\n+ tab_id = request.form['tab_id']\n+ codename = session['codenames'][tab_id]\n+ session['codename'] = codename\n+\n+ del session['codenames']\n+\n+ filesystem_id = current_app.crypto_util.hash_codename(codename)\n+\n+ source = Source(filesystem_id, current_app.crypto_util.display_id())\n+ db.session.add(source)\n+ try:\n+ db.session.commit()\n+ except IntegrityError as e:\n+ db.session.rollback()\n+ current_app.logger.error(\n+ \"Attempt to create a source with duplicate codename: %s\" %\n+ (e,))\n+\n+ # Issue 2386: don't log in on duplicates\n+ del session['codename']\n+\n+ # Issue 4361: Delete 'logged_in' if it's in the session\n+ try:\n+ del session['logged_in']\n+ except KeyError:\n+ pass\n+\n+ abort(500)\n+ else:\n+ os.mkdir(current_app.storage.path(filesystem_id))\n \n- session['logged_in'] = True\n+ session['logged_in'] = True\n return redirect(url_for('.lookup'))\n \n @view.route('/lookup', methods=('GET',))\n", "issue": "Duplicate Source Interface session triggers server error\n## Description\r\n\r\nInitiating more than one SecureDrop session in the Source Interface at a time causes a Server Error for the second session, and causes the first session to be invalidated.\r\n\r\n## Steps to Reproduce\r\n\r\n1. Launch the Docker-based SecureDrop development environment;\r\n2. Visit http://localhost:8080/generate twice, in two separate private browser tabs.\r\n3. Click \"Submit\" in the first tab\r\n4. 
Click \"Submit\" in the second tab\r\n\r\n## Expected Behavior\r\nI can submit messages or documents.\r\n\r\n## Actual Behavior\r\n\r\nIn the second tab, the following error message appears:\r\n\r\n> Server error\r\n> Sorry, the website encountered an error and was unable to complete your request.\r\n\r\nIn the first tab, subsequent page loads take me to the login screen: http://localhost:8080/login\n", "before_files": [{"content": "import operator\nimport os\nimport io\n\nfrom datetime import datetime\nfrom flask import (Blueprint, render_template, flash, redirect, url_for, g,\n session, current_app, request, Markup, abort)\nfrom flask_babel import gettext\nfrom sqlalchemy.exc import IntegrityError\n\nimport store\n\nfrom db import db\nfrom models import Source, Submission, Reply, get_one_or_else\nfrom source_app.decorators import login_required\nfrom source_app.utils import (logged_in, generate_unique_codename,\n async_genkey, normalize_timestamps,\n valid_codename, get_entropy_estimate)\nfrom source_app.forms import LoginForm\n\n\ndef make_blueprint(config):\n view = Blueprint('main', __name__)\n\n @view.route('/')\n def index():\n return render_template('index.html')\n\n @view.route('/generate', methods=('GET', 'POST'))\n def generate():\n if logged_in():\n flash(gettext(\n \"You were redirected because you are already logged in. \"\n \"If you want to create a new account, you should log out \"\n \"first.\"),\n \"notification\")\n return redirect(url_for('.lookup'))\n\n codename = generate_unique_codename(config)\n session['codename'] = codename\n session['new_user'] = True\n return render_template('generate.html', codename=codename)\n\n @view.route('/org-logo')\n def select_logo():\n if os.path.exists(os.path.join(current_app.static_folder, 'i',\n 'custom_logo.png')):\n return redirect(url_for('static', filename='i/custom_logo.png'))\n else:\n return redirect(url_for('static', filename='i/logo.png'))\n\n @view.route('/create', methods=['POST'])\n def create():\n filesystem_id = current_app.crypto_util.hash_codename(\n session['codename'])\n\n source = Source(filesystem_id, current_app.crypto_util.display_id())\n db.session.add(source)\n try:\n db.session.commit()\n except IntegrityError as e:\n db.session.rollback()\n current_app.logger.error(\n \"Attempt to create a source with duplicate codename: %s\" %\n (e,))\n\n # Issue 2386: don't log in on duplicates\n del session['codename']\n\n # Issue 4361: Delete 'logged_in' if it's in the session\n try:\n del session['logged_in']\n except KeyError:\n pass\n\n abort(500)\n else:\n os.mkdir(current_app.storage.path(filesystem_id))\n\n session['logged_in'] = True\n return redirect(url_for('.lookup'))\n\n @view.route('/lookup', methods=('GET',))\n @login_required\n def lookup():\n replies = []\n source_inbox = Reply.query.filter(Reply.source_id == g.source.id) \\\n .filter(Reply.deleted_by_source == False).all() # noqa\n\n for reply in source_inbox:\n reply_path = current_app.storage.path(\n g.filesystem_id,\n reply.filename,\n )\n try:\n with io.open(reply_path, \"rb\") as f:\n contents = f.read()\n reply_obj = current_app.crypto_util.decrypt(g.codename, contents)\n reply.decrypted = reply_obj\n except UnicodeDecodeError:\n current_app.logger.error(\"Could not decode reply %s\" %\n reply.filename)\n else:\n reply.date = datetime.utcfromtimestamp(\n os.stat(reply_path).st_mtime)\n replies.append(reply)\n\n # Sort the replies by date\n replies.sort(key=operator.attrgetter('date'), reverse=True)\n\n # Generate a keypair to encrypt replies from 
the journalist\n # Only do this if the journalist has flagged the source as one\n # that they would like to reply to. (Issue #140.)\n if not current_app.crypto_util.getkey(g.filesystem_id) and \\\n g.source.flagged:\n db_uri = current_app.config['SQLALCHEMY_DATABASE_URI']\n async_genkey(current_app.crypto_util,\n db_uri,\n g.filesystem_id,\n g.codename)\n\n return render_template(\n 'lookup.html',\n allow_document_uploads=current_app.instance_config.allow_document_uploads,\n codename=g.codename,\n replies=replies,\n flagged=g.source.flagged,\n new_user=session.get('new_user', None),\n haskey=current_app.crypto_util.getkey(\n g.filesystem_id))\n\n @view.route('/submit', methods=('POST',))\n @login_required\n def submit():\n allow_document_uploads = current_app.instance_config.allow_document_uploads\n msg = request.form['msg']\n fh = None\n if allow_document_uploads and 'fh' in request.files:\n fh = request.files['fh']\n\n # Don't submit anything if it was an \"empty\" submission. #878\n if not (msg or fh):\n if allow_document_uploads:\n flash(gettext(\n \"You must enter a message or choose a file to submit.\"),\n \"error\")\n else:\n flash(gettext(\"You must enter a message.\"), \"error\")\n return redirect(url_for('main.lookup'))\n\n fnames = []\n journalist_filename = g.source.journalist_filename\n first_submission = g.source.interaction_count == 0\n\n if msg:\n g.source.interaction_count += 1\n fnames.append(\n current_app.storage.save_message_submission(\n g.filesystem_id,\n g.source.interaction_count,\n journalist_filename,\n msg))\n if fh:\n g.source.interaction_count += 1\n fnames.append(\n current_app.storage.save_file_submission(\n g.filesystem_id,\n g.source.interaction_count,\n journalist_filename,\n fh.filename,\n fh.stream))\n\n if first_submission:\n msg = render_template('first_submission_flashed_message.html')\n flash(Markup(msg), \"success\")\n\n else:\n if msg and not fh:\n html_contents = gettext('Thanks! We received your message.')\n elif not msg and fh:\n html_contents = gettext('Thanks! We received your document.')\n else:\n html_contents = gettext('Thanks! We received your message and '\n 'document.')\n\n msg = render_template('next_submission_flashed_message.html',\n html_contents=html_contents)\n flash(Markup(msg), \"success\")\n\n new_submissions = []\n for fname in fnames:\n submission = Submission(g.source, fname)\n db.session.add(submission)\n new_submissions.append(submission)\n\n if g.source.pending:\n g.source.pending = False\n\n # Generate a keypair now, if there's enough entropy (issue #303)\n # (gpg reads 300 bytes from /dev/random)\n entropy_avail = get_entropy_estimate()\n if entropy_avail >= 2400:\n db_uri = current_app.config['SQLALCHEMY_DATABASE_URI']\n\n async_genkey(current_app.crypto_util,\n db_uri,\n g.filesystem_id,\n g.codename)\n current_app.logger.info(\"generating key, entropy: {}\".format(\n entropy_avail))\n else:\n current_app.logger.warn(\n \"skipping key generation. 
entropy: {}\".format(\n entropy_avail))\n\n g.source.last_updated = datetime.utcnow()\n db.session.commit()\n\n for sub in new_submissions:\n store.async_add_checksum_for_file(sub)\n\n normalize_timestamps(g.filesystem_id)\n\n return redirect(url_for('main.lookup'))\n\n @view.route('/delete', methods=('POST',))\n @login_required\n def delete():\n \"\"\"This deletes the reply from the source's inbox, but preserves\n the history for journalists such that they can view conversation\n history.\n \"\"\"\n\n query = Reply.query.filter_by(\n filename=request.form['reply_filename'],\n source_id=g.source.id)\n reply = get_one_or_else(query, current_app.logger, abort)\n reply.deleted_by_source = True\n db.session.add(reply)\n db.session.commit()\n\n flash(gettext(\"Reply deleted\"), \"notification\")\n return redirect(url_for('.lookup'))\n\n @view.route('/delete-all', methods=('POST',))\n @login_required\n def batch_delete():\n replies = Reply.query.filter(Reply.source_id == g.source.id) \\\n .filter(Reply.deleted_by_source == False).all() # noqa\n if len(replies) == 0:\n current_app.logger.error(\"Found no replies when at least one was \"\n \"expected\")\n return redirect(url_for('.lookup'))\n\n for reply in replies:\n reply.deleted_by_source = True\n db.session.add(reply)\n db.session.commit()\n\n flash(gettext(\"All replies have been deleted\"), \"notification\")\n return redirect(url_for('.lookup'))\n\n @view.route('/login', methods=('GET', 'POST'))\n def login():\n form = LoginForm()\n if form.validate_on_submit():\n codename = request.form['codename'].strip()\n if valid_codename(codename):\n session.update(codename=codename, logged_in=True)\n return redirect(url_for('.lookup', from_login='1'))\n else:\n current_app.logger.info(\n \"Login failed for invalid codename\")\n flash(gettext(\"Sorry, that is not a recognized codename.\"),\n \"error\")\n return render_template('login.html', form=form)\n\n @view.route('/logout')\n def logout():\n if logged_in():\n msg = render_template('logout_flashed_message.html')\n\n # Clear the session after we render the message so it's localized\n # If a user specified a locale, save it and restore it\n user_locale = g.locale\n session.clear()\n session['locale'] = user_locale\n\n flash(Markup(msg), \"important hide-if-not-tor-browser\")\n return redirect(url_for('.index'))\n\n return view\n", "path": "securedrop/source_app/main.py"}], "after_files": [{"content": "import operator\nimport os\nimport io\n\nfrom base64 import urlsafe_b64encode\nfrom datetime import datetime\nfrom flask import (Blueprint, render_template, flash, redirect, url_for, g,\n session, current_app, request, Markup, abort)\nfrom flask_babel import gettext\nfrom sqlalchemy.exc import IntegrityError\n\nimport store\n\nfrom db import db\nfrom models import Source, Submission, Reply, get_one_or_else\nfrom source_app.decorators import login_required\nfrom source_app.utils import (logged_in, generate_unique_codename,\n async_genkey, normalize_timestamps,\n valid_codename, get_entropy_estimate)\nfrom source_app.forms import LoginForm\n\n\ndef make_blueprint(config):\n view = Blueprint('main', __name__)\n\n @view.route('/')\n def index():\n return render_template('index.html')\n\n @view.route('/generate', methods=('GET', 'POST'))\n def generate():\n if logged_in():\n flash(gettext(\n \"You were redirected because you are already logged in. 
\"\n \"If you want to create a new account, you should log out \"\n \"first.\"),\n \"notification\")\n return redirect(url_for('.lookup'))\n\n codename = generate_unique_codename(config)\n\n # Generate a unique id for each browser tab and associate the codename with this id.\n # This will allow retrieval of the codename displayed in the tab from which the source has\n # clicked to proceed to /generate (ref. issue #4458)\n tab_id = urlsafe_b64encode(os.urandom(64)).decode()\n codenames = session.get('codenames', {})\n codenames[tab_id] = codename\n session['codenames'] = codenames\n\n session['new_user'] = True\n return render_template('generate.html', codename=codename, tab_id=tab_id)\n\n @view.route('/org-logo')\n def select_logo():\n if os.path.exists(os.path.join(current_app.static_folder, 'i',\n 'custom_logo.png')):\n return redirect(url_for('static', filename='i/custom_logo.png'))\n else:\n return redirect(url_for('static', filename='i/logo.png'))\n\n @view.route('/create', methods=['POST'])\n def create():\n if session.get('logged_in', False):\n flash(gettext(\"You are already logged in. Please verify your codename below as it \" +\n \"may differ from the one displayed on the previous page.\"),\n 'notification')\n else:\n tab_id = request.form['tab_id']\n codename = session['codenames'][tab_id]\n session['codename'] = codename\n\n del session['codenames']\n\n filesystem_id = current_app.crypto_util.hash_codename(codename)\n\n source = Source(filesystem_id, current_app.crypto_util.display_id())\n db.session.add(source)\n try:\n db.session.commit()\n except IntegrityError as e:\n db.session.rollback()\n current_app.logger.error(\n \"Attempt to create a source with duplicate codename: %s\" %\n (e,))\n\n # Issue 2386: don't log in on duplicates\n del session['codename']\n\n # Issue 4361: Delete 'logged_in' if it's in the session\n try:\n del session['logged_in']\n except KeyError:\n pass\n\n abort(500)\n else:\n os.mkdir(current_app.storage.path(filesystem_id))\n\n session['logged_in'] = True\n return redirect(url_for('.lookup'))\n\n @view.route('/lookup', methods=('GET',))\n @login_required\n def lookup():\n replies = []\n source_inbox = Reply.query.filter(Reply.source_id == g.source.id) \\\n .filter(Reply.deleted_by_source == False).all() # noqa\n\n for reply in source_inbox:\n reply_path = current_app.storage.path(\n g.filesystem_id,\n reply.filename,\n )\n try:\n with io.open(reply_path, \"rb\") as f:\n contents = f.read()\n reply_obj = current_app.crypto_util.decrypt(g.codename, contents)\n reply.decrypted = reply_obj\n except UnicodeDecodeError:\n current_app.logger.error(\"Could not decode reply %s\" %\n reply.filename)\n else:\n reply.date = datetime.utcfromtimestamp(\n os.stat(reply_path).st_mtime)\n replies.append(reply)\n\n # Sort the replies by date\n replies.sort(key=operator.attrgetter('date'), reverse=True)\n\n # Generate a keypair to encrypt replies from the journalist\n # Only do this if the journalist has flagged the source as one\n # that they would like to reply to. 
(Issue #140.)\n if not current_app.crypto_util.getkey(g.filesystem_id) and \\\n g.source.flagged:\n db_uri = current_app.config['SQLALCHEMY_DATABASE_URI']\n async_genkey(current_app.crypto_util,\n db_uri,\n g.filesystem_id,\n g.codename)\n\n return render_template(\n 'lookup.html',\n allow_document_uploads=current_app.instance_config.allow_document_uploads,\n codename=g.codename,\n replies=replies,\n flagged=g.source.flagged,\n new_user=session.get('new_user', None),\n haskey=current_app.crypto_util.getkey(\n g.filesystem_id))\n\n @view.route('/submit', methods=('POST',))\n @login_required\n def submit():\n allow_document_uploads = current_app.instance_config.allow_document_uploads\n msg = request.form['msg']\n fh = None\n if allow_document_uploads and 'fh' in request.files:\n fh = request.files['fh']\n\n # Don't submit anything if it was an \"empty\" submission. #878\n if not (msg or fh):\n if allow_document_uploads:\n flash(gettext(\n \"You must enter a message or choose a file to submit.\"),\n \"error\")\n else:\n flash(gettext(\"You must enter a message.\"), \"error\")\n return redirect(url_for('main.lookup'))\n\n fnames = []\n journalist_filename = g.source.journalist_filename\n first_submission = g.source.interaction_count == 0\n\n if msg:\n g.source.interaction_count += 1\n fnames.append(\n current_app.storage.save_message_submission(\n g.filesystem_id,\n g.source.interaction_count,\n journalist_filename,\n msg))\n if fh:\n g.source.interaction_count += 1\n fnames.append(\n current_app.storage.save_file_submission(\n g.filesystem_id,\n g.source.interaction_count,\n journalist_filename,\n fh.filename,\n fh.stream))\n\n if first_submission:\n msg = render_template('first_submission_flashed_message.html')\n flash(Markup(msg), \"success\")\n\n else:\n if msg and not fh:\n html_contents = gettext('Thanks! We received your message.')\n elif not msg and fh:\n html_contents = gettext('Thanks! We received your document.')\n else:\n html_contents = gettext('Thanks! We received your message and '\n 'document.')\n\n msg = render_template('next_submission_flashed_message.html',\n html_contents=html_contents)\n flash(Markup(msg), \"success\")\n\n new_submissions = []\n for fname in fnames:\n submission = Submission(g.source, fname)\n db.session.add(submission)\n new_submissions.append(submission)\n\n if g.source.pending:\n g.source.pending = False\n\n # Generate a keypair now, if there's enough entropy (issue #303)\n # (gpg reads 300 bytes from /dev/random)\n entropy_avail = get_entropy_estimate()\n if entropy_avail >= 2400:\n db_uri = current_app.config['SQLALCHEMY_DATABASE_URI']\n\n async_genkey(current_app.crypto_util,\n db_uri,\n g.filesystem_id,\n g.codename)\n current_app.logger.info(\"generating key, entropy: {}\".format(\n entropy_avail))\n else:\n current_app.logger.warn(\n \"skipping key generation. 
entropy: {}\".format(\n entropy_avail))\n\n g.source.last_updated = datetime.utcnow()\n db.session.commit()\n\n for sub in new_submissions:\n store.async_add_checksum_for_file(sub)\n\n normalize_timestamps(g.filesystem_id)\n\n return redirect(url_for('main.lookup'))\n\n @view.route('/delete', methods=('POST',))\n @login_required\n def delete():\n \"\"\"This deletes the reply from the source's inbox, but preserves\n the history for journalists such that they can view conversation\n history.\n \"\"\"\n\n query = Reply.query.filter_by(\n filename=request.form['reply_filename'],\n source_id=g.source.id)\n reply = get_one_or_else(query, current_app.logger, abort)\n reply.deleted_by_source = True\n db.session.add(reply)\n db.session.commit()\n\n flash(gettext(\"Reply deleted\"), \"notification\")\n return redirect(url_for('.lookup'))\n\n @view.route('/delete-all', methods=('POST',))\n @login_required\n def batch_delete():\n replies = Reply.query.filter(Reply.source_id == g.source.id) \\\n .filter(Reply.deleted_by_source == False).all() # noqa\n if len(replies) == 0:\n current_app.logger.error(\"Found no replies when at least one was \"\n \"expected\")\n return redirect(url_for('.lookup'))\n\n for reply in replies:\n reply.deleted_by_source = True\n db.session.add(reply)\n db.session.commit()\n\n flash(gettext(\"All replies have been deleted\"), \"notification\")\n return redirect(url_for('.lookup'))\n\n @view.route('/login', methods=('GET', 'POST'))\n def login():\n form = LoginForm()\n if form.validate_on_submit():\n codename = request.form['codename'].strip()\n if valid_codename(codename):\n session.update(codename=codename, logged_in=True)\n return redirect(url_for('.lookup', from_login='1'))\n else:\n current_app.logger.info(\n \"Login failed for invalid codename\")\n flash(gettext(\"Sorry, that is not a recognized codename.\"),\n \"error\")\n return render_template('login.html', form=form)\n\n @view.route('/logout')\n def logout():\n if logged_in():\n msg = render_template('logout_flashed_message.html')\n\n # Clear the session after we render the message so it's localized\n # If a user specified a locale, save it and restore it\n user_locale = g.locale\n session.clear()\n session['locale'] = user_locale\n\n flash(Markup(msg), \"important hide-if-not-tor-browser\")\n return redirect(url_for('.index'))\n\n return view\n", "path": "securedrop/source_app/main.py"}]}
| 3,315 | 871 |
gh_patches_debug_28795
|
rasdani/github-patches
|
git_diff
|
readthedocs__readthedocs.org-548
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mercurial project imported from bitbucket stuck in 'Triggered' state
The docs for pylibftdi are set to be built (via a POST trigger) from https://bitbucket.org/codedstructure/pylibftdi, but builds (https://readthedocs.org/builds/pylibftdi/) are stuck at 'Triggered'.
Based on comments in #435 I set the project up to build against a github mirror, and that worked successfully, so it seems (from #435) that this is likely an hg issue.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `readthedocs/vcs_support/backends/hg.py`
Content:
```
1 import csv
2 from StringIO import StringIO
3
4 from projects.exceptions import ProjectImportError
5 from vcs_support.base import BaseVCS, VCSVersion
6
7
8 class Backend(BaseVCS):
9 supports_tags = True
10 supports_branches = True
11 fallback_branch = 'default'
12
13 def update(self):
14 super(Backend, self).update()
15 retcode = self.run('hg', 'status')[0]
16 if retcode == 0:
17 return self.pull()
18 else:
19 return self.clone()
20
21 def pull(self):
22 pull_output = self.run('hg', 'pull')
23 if pull_output[0] != 0:
24 raise ProjectImportError(
25 ("Failed to get code from '%s' (hg pull): %s"
26 % (self.repo_url, pull_output[0]))
27 )
28 update_output = self.run('hg', 'update', '-C')[0]
29 if update_output[0] != 0:
30 raise ProjectImportError(
31 ("Failed to get code from '%s' (hg update): %s"
32 % (self.repo_url, pull_output[0]))
33 )
34 return update_output
35
36 def clone(self):
37 output = self.run('hg', 'clone', self.repo_url, '.')
38 if output[0] != 0:
39 raise ProjectImportError(
40 ("Failed to get code from '%s' (hg clone): %s"
41 % (self.repo_url, output[0]))
42 )
43 return output
44
45 @property
46 def branches(self):
47 retcode, stdout = self.run('hg', 'branches', '-q')[:2]
48 # error (or no tags found)
49 if retcode != 0:
50 return []
51 return self.parse_branches(stdout)
52
53 def parse_branches(self, data):
54 """
55 stable
56 default
57 """
58
59 names = [name.lstrip() for name in data.splitlines()]
60 return [VCSVersion(self, name, name) for name in names if name]
61
62 @property
63 def tags(self):
64 retcode, stdout = self.run('hg', 'tags')[:2]
65 # error (or no tags found)
66 if retcode != 0:
67 return []
68 return self.parse_tags(stdout)
69
70 def parse_tags(self, data):
71 """
72 Parses output of show-ref --tags, eg:
73
74 tip 278:c4b2d21db51a
75 0.2.2 152:6b0364d98837
76 0.2.1 117:a14b7b6ffa03
77 0.1 50:30c2c6b3a055
78 """
79 # parse the lines into a list of tuples (commit-hash, tag ref name)
80 raw_tags = csv.reader(StringIO(data), delimiter=' ')
81 vcs_tags = []
82 for row in raw_tags:
83 row = filter(lambda f: f != '', row)
84 if row == []:
85 continue
86 name, commit = row
87 if name == 'tip':
88 continue
89 revision, commit_hash = commit.split(':')
90 vcs_tags.append(VCSVersion(self, commit_hash, name))
91 return vcs_tags
92
93 def checkout(self, identifier=None):
94 super(Backend, self).checkout()
95 if not identifier:
96 identifier = 'tip'
97 retcode = self.run('hg', 'status')[0]
98 if retcode == 0:
99 self.run('hg', 'pull')
100 return self.run('hg', 'update', '-C', identifier)
101 else:
102 self.clone()
103 return self.run('hg', 'update', '-C', identifier)
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/readthedocs/vcs_support/backends/hg.py b/readthedocs/vcs_support/backends/hg.py
--- a/readthedocs/vcs_support/backends/hg.py
+++ b/readthedocs/vcs_support/backends/hg.py
@@ -1,6 +1,3 @@
-import csv
-from StringIO import StringIO
-
from projects.exceptions import ProjectImportError
from vcs_support.base import BaseVCS, VCSVersion
@@ -69,19 +66,24 @@
def parse_tags(self, data):
"""
- Parses output of show-ref --tags, eg:
+ Parses output of `hg tags`, eg:
+
+ tip 278:c4b2d21db51a
+ 0.2.2 152:6b0364d98837
+ 0.2.1 117:a14b7b6ffa03
+ 0.1 50:30c2c6b3a055
+ maintenance release 1 10:f83c32fe8126
- tip 278:c4b2d21db51a
- 0.2.2 152:6b0364d98837
- 0.2.1 117:a14b7b6ffa03
- 0.1 50:30c2c6b3a055
+ Into VCSVersion objects with the tag name as verbose_name and the
+ commit hash as identifier.
"""
- # parse the lines into a list of tuples (commit-hash, tag ref name)
- raw_tags = csv.reader(StringIO(data), delimiter=' ')
vcs_tags = []
- for row in raw_tags:
- row = filter(lambda f: f != '', row)
- if row == []:
+ tag_lines = [line.strip() for line in data.splitlines()]
+ # starting from the rhs of each line, split a single value (changeset)
+ # off at whitespace; the tag name is the string to the left of that
+ tag_pairs = [line.rsplit(None, 1) for line in tag_lines]
+ for row in tag_pairs:
+ if len(row) != 2:
continue
name, commit = row
if name == 'tip':
|
{"golden_diff": "diff --git a/readthedocs/vcs_support/backends/hg.py b/readthedocs/vcs_support/backends/hg.py\n--- a/readthedocs/vcs_support/backends/hg.py\n+++ b/readthedocs/vcs_support/backends/hg.py\n@@ -1,6 +1,3 @@\n-import csv\n-from StringIO import StringIO\n-\n from projects.exceptions import ProjectImportError\n from vcs_support.base import BaseVCS, VCSVersion\n \n@@ -69,19 +66,24 @@\n \n def parse_tags(self, data):\n \"\"\"\n- Parses output of show-ref --tags, eg:\n+ Parses output of `hg tags`, eg:\n+\n+ tip 278:c4b2d21db51a\n+ 0.2.2 152:6b0364d98837\n+ 0.2.1 117:a14b7b6ffa03\n+ 0.1 50:30c2c6b3a055\n+ maintenance release 1 10:f83c32fe8126\n \n- tip 278:c4b2d21db51a\n- 0.2.2 152:6b0364d98837\n- 0.2.1 117:a14b7b6ffa03\n- 0.1 50:30c2c6b3a055\n+ Into VCSVersion objects with the tag name as verbose_name and the\n+ commit hash as identifier.\n \"\"\"\n- # parse the lines into a list of tuples (commit-hash, tag ref name)\n- raw_tags = csv.reader(StringIO(data), delimiter=' ')\n vcs_tags = []\n- for row in raw_tags:\n- row = filter(lambda f: f != '', row)\n- if row == []:\n+ tag_lines = [line.strip() for line in data.splitlines()]\n+ # starting from the rhs of each line, split a single value (changeset)\n+ # off at whitespace; the tag name is the string to the left of that\n+ tag_pairs = [line.rsplit(None, 1) for line in tag_lines]\n+ for row in tag_pairs:\n+ if len(row) != 2:\n continue\n name, commit = row\n if name == 'tip':\n", "issue": "mercurial project imported from bitbucket stuck in 'Triggered' state\nThe docs for pylibftdi are set to be built (via a POST trigger) from https://bitbucket.org/codedstructure/pylibftdi, but builds (https://readthedocs.org/builds/pylibftdi/) are stuck at 'Triggered'.\n\nBased on comments in #435 I set the project up to build against a github mirror, and that worked successfully, so it seems (from #435) that this is likely an hg issue.\n\n", "before_files": [{"content": "import csv\nfrom StringIO import StringIO\n\nfrom projects.exceptions import ProjectImportError\nfrom vcs_support.base import BaseVCS, VCSVersion\n\n\nclass Backend(BaseVCS):\n supports_tags = True\n supports_branches = True\n fallback_branch = 'default'\n\n def update(self):\n super(Backend, self).update()\n retcode = self.run('hg', 'status')[0]\n if retcode == 0:\n return self.pull()\n else:\n return self.clone()\n\n def pull(self):\n pull_output = self.run('hg', 'pull')\n if pull_output[0] != 0:\n raise ProjectImportError(\n (\"Failed to get code from '%s' (hg pull): %s\"\n % (self.repo_url, pull_output[0]))\n )\n update_output = self.run('hg', 'update', '-C')[0]\n if update_output[0] != 0:\n raise ProjectImportError(\n (\"Failed to get code from '%s' (hg update): %s\"\n % (self.repo_url, pull_output[0]))\n )\n return update_output\n\n def clone(self):\n output = self.run('hg', 'clone', self.repo_url, '.')\n if output[0] != 0:\n raise ProjectImportError(\n (\"Failed to get code from '%s' (hg clone): %s\"\n % (self.repo_url, output[0]))\n )\n return output\n\n @property\n def branches(self):\n retcode, stdout = self.run('hg', 'branches', '-q')[:2]\n # error (or no tags found)\n if retcode != 0:\n return []\n return self.parse_branches(stdout)\n\n def parse_branches(self, data):\n \"\"\"\n stable\n default\n \"\"\"\n\n names = [name.lstrip() for name in data.splitlines()]\n return [VCSVersion(self, name, name) for name in names if name]\n\n @property\n def tags(self):\n retcode, stdout = self.run('hg', 'tags')[:2]\n # error (or no tags found)\n if retcode != 0:\n return []\n return 
self.parse_tags(stdout)\n\n def parse_tags(self, data):\n \"\"\"\n Parses output of show-ref --tags, eg:\n\n tip 278:c4b2d21db51a\n 0.2.2 152:6b0364d98837\n 0.2.1 117:a14b7b6ffa03\n 0.1 50:30c2c6b3a055\n \"\"\"\n # parse the lines into a list of tuples (commit-hash, tag ref name)\n raw_tags = csv.reader(StringIO(data), delimiter=' ')\n vcs_tags = []\n for row in raw_tags:\n row = filter(lambda f: f != '', row)\n if row == []:\n continue\n name, commit = row\n if name == 'tip':\n continue\n revision, commit_hash = commit.split(':')\n vcs_tags.append(VCSVersion(self, commit_hash, name))\n return vcs_tags\n\n def checkout(self, identifier=None):\n super(Backend, self).checkout()\n if not identifier:\n identifier = 'tip'\n retcode = self.run('hg', 'status')[0]\n if retcode == 0:\n self.run('hg', 'pull')\n return self.run('hg', 'update', '-C', identifier)\n else:\n self.clone()\n return self.run('hg', 'update', '-C', identifier)\n", "path": "readthedocs/vcs_support/backends/hg.py"}], "after_files": [{"content": "from projects.exceptions import ProjectImportError\nfrom vcs_support.base import BaseVCS, VCSVersion\n\n\nclass Backend(BaseVCS):\n supports_tags = True\n supports_branches = True\n fallback_branch = 'default'\n\n def update(self):\n super(Backend, self).update()\n retcode = self.run('hg', 'status')[0]\n if retcode == 0:\n return self.pull()\n else:\n return self.clone()\n\n def pull(self):\n pull_output = self.run('hg', 'pull')\n if pull_output[0] != 0:\n raise ProjectImportError(\n (\"Failed to get code from '%s' (hg pull): %s\"\n % (self.repo_url, pull_output[0]))\n )\n update_output = self.run('hg', 'update', '-C')[0]\n if update_output[0] != 0:\n raise ProjectImportError(\n (\"Failed to get code from '%s' (hg update): %s\"\n % (self.repo_url, pull_output[0]))\n )\n return update_output\n\n def clone(self):\n output = self.run('hg', 'clone', self.repo_url, '.')\n if output[0] != 0:\n raise ProjectImportError(\n (\"Failed to get code from '%s' (hg clone): %s\"\n % (self.repo_url, output[0]))\n )\n return output\n\n @property\n def branches(self):\n retcode, stdout = self.run('hg', 'branches', '-q')[:2]\n # error (or no tags found)\n if retcode != 0:\n return []\n return self.parse_branches(stdout)\n\n def parse_branches(self, data):\n \"\"\"\n stable\n default\n \"\"\"\n\n names = [name.lstrip() for name in data.splitlines()]\n return [VCSVersion(self, name, name) for name in names if name]\n\n @property\n def tags(self):\n retcode, stdout = self.run('hg', 'tags')[:2]\n # error (or no tags found)\n if retcode != 0:\n return []\n return self.parse_tags(stdout)\n\n def parse_tags(self, data):\n \"\"\"\n Parses output of `hg tags`, eg:\n\n tip 278:c4b2d21db51a\n 0.2.2 152:6b0364d98837\n 0.2.1 117:a14b7b6ffa03\n 0.1 50:30c2c6b3a055\n maintenance release 1 10:f83c32fe8126\n\n Into VCSVersion objects with the tag name as verbose_name and the\n commit hash as identifier.\n \"\"\"\n vcs_tags = []\n tag_lines = [line.strip() for line in data.splitlines()]\n # starting from the rhs of each line, split a single value (changeset)\n # off at whitespace; the tag name is the string to the left of that\n tag_pairs = [line.rsplit(None, 1) for line in tag_lines]\n for row in tag_pairs:\n if len(row) != 2:\n continue\n name, commit = row\n if name == 'tip':\n continue\n revision, commit_hash = commit.split(':')\n vcs_tags.append(VCSVersion(self, commit_hash, name))\n return vcs_tags\n\n def checkout(self, identifier=None):\n super(Backend, self).checkout()\n if not identifier:\n identifier = 'tip'\n 
retcode = self.run('hg', 'status')[0]\n if retcode == 0:\n self.run('hg', 'pull')\n return self.run('hg', 'update', '-C', identifier)\n else:\n self.clone()\n return self.run('hg', 'update', '-C', identifier)\n", "path": "readthedocs/vcs_support/backends/hg.py"}]}
| 1,390 | 559 |
gh_patches_debug_3875
|
rasdani/github-patches
|
git_diff
|
kartoza__prj.app-813
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error 500: Editing Answers.
# Problem
When I select the edit option for the answers on http://changelog.qgis.org/id/inasafe-realtime2/
Then I get error 500.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django_project/lesson/views/answer.py`
Content:
```
1 # coding=utf-8
2 """Answer views."""
3
4 from django.core.urlresolvers import reverse
5 from django.views.generic import (
6 CreateView,
7 DeleteView,
8 UpdateView,
9 )
10 from django.shortcuts import get_object_or_404
11 from django.utils.translation import ugettext_lazy as _
12
13 from braces.views import LoginRequiredMixin
14
15 from lesson.forms.answer import AnswerForm
16 from lesson.models.answer import Answer
17 from lesson.models.worksheet_question import WorksheetQuestion
18
19
20 class AnswerMixin(object):
21 """Mixin class to provide standard settings for Answer."""
22
23 model = Answer
24 form_class = AnswerForm
25
26
27 class AnswerCreateView(
28 LoginRequiredMixin, AnswerMixin, CreateView):
29 """Create view for Answer."""
30
31 context_object_name = 'answer'
32 template_name = 'create.html'
33 creation_label = _('Add answer')
34
35 def get_success_url(self):
36 """Define the redirect URL
37
38 After successful creation of the object, the User will be redirected
39 to the unapproved Version list page for the object's parent Worksheet
40
41 :returns: URL
42 :rtype: HttpResponse
43 """
44 return reverse('worksheet-detail', kwargs={
45 'pk': self.object.question.worksheet.pk,
46 'section_slug': self.object.question.worksheet.section.slug,
47 'project_slug': self.object.question.worksheet.section.project.slug
48 })
49
50 def get_form_kwargs(self):
51 """Get keyword arguments from form.
52
53 :returns keyword argument from the form
54 :rtype dict
55 """
56 kwargs = super(AnswerCreateView, self).get_form_kwargs()
57 pk = self.kwargs['question_pk']
58 kwargs['question'] = get_object_or_404(WorksheetQuestion, pk=pk)
59 return kwargs
60
61
62 # noinspection PyAttributeOutsideInit
63 class AnswerDeleteView(
64 LoginRequiredMixin,
65 AnswerMixin,
66 DeleteView):
67 """Delete view for Answer."""
68
69 context_object_name = 'answer'
70 template_name = 'answer/delete.html'
71
72 def get_success_url(self):
73 """Define the redirect URL.
74
75 After successful deletion of the object, the User will be redirected
76 to the Certifying Organisation list page
77 for the object's parent Worksheet.
78
79 :returns: URL
80 :rtype: HttpResponse
81 """
82 return reverse('worksheet-detail', kwargs={
83 'pk': self.object.question.worksheet.pk,
84 'section_slug': self.object.question.worksheet.section.slug,
85 'project_slug': self.object.question.worksheet.section.project.slug
86 })
87
88
89 # noinspection PyAttributeOutsideInit
90 class AnswerUpdateView(
91 LoginRequiredMixin,
92 AnswerMixin,
93 UpdateView):
94 """Update view for Answer."""
95
96 context_object_name = 'answer'
97 template_name = 'update.html'
98 update_label = _('Update answer')
99
100 def get_form_kwargs(self):
101 """Get keyword arguments from form.
102
103 :returns keyword argument from the form
104 :rtype: dict
105 """
106 kwargs = super(AnswerUpdateView, self).get_form_kwargs()
107 answer = get_object_or_404(Answer, self.pk_url_kwarg)
108 kwargs['question'] = answer.question
109 return kwargs
110
111 def get_success_url(self):
112 """Define the redirect URL.
113
114 After successful update of the object, the User will be redirected to
115 the specification list page for the object's parent Worksheet.
116
117 :returns: URL
118 :rtype: HttpResponse
119 """
120 return reverse('worksheet-detail', kwargs={
121 'pk': self.object.question.worksheet.pk,
122 'section_slug': self.object.question.worksheet.section.slug,
123 'project_slug': self.object.question.worksheet.section.project.slug
124 })
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django_project/lesson/views/answer.py b/django_project/lesson/views/answer.py
--- a/django_project/lesson/views/answer.py
+++ b/django_project/lesson/views/answer.py
@@ -104,7 +104,7 @@
:rtype: dict
"""
kwargs = super(AnswerUpdateView, self).get_form_kwargs()
- answer = get_object_or_404(Answer, self.pk_url_kwarg)
+ answer = get_object_or_404(Answer, pk=kwargs['instance'].pk)
kwargs['question'] = answer.question
return kwargs
|
{"golden_diff": "diff --git a/django_project/lesson/views/answer.py b/django_project/lesson/views/answer.py\n--- a/django_project/lesson/views/answer.py\n+++ b/django_project/lesson/views/answer.py\n@@ -104,7 +104,7 @@\n :rtype: dict\n \"\"\"\n kwargs = super(AnswerUpdateView, self).get_form_kwargs()\n- answer = get_object_or_404(Answer, self.pk_url_kwarg)\n+ answer = get_object_or_404(Answer, pk=kwargs['instance'].pk)\n kwargs['question'] = answer.question\n return kwargs\n", "issue": "Error 500: Editing Answers.\n# Problem\r\n\r\nWhen I select the edit option for the answers on http://changelog.qgis.org/id/inasafe-realtime2/\r\nThen I get error 500.\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"Answer views.\"\"\"\n\nfrom django.core.urlresolvers import reverse\nfrom django.views.generic import (\n CreateView,\n DeleteView,\n UpdateView,\n)\nfrom django.shortcuts import get_object_or_404\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom braces.views import LoginRequiredMixin\n\nfrom lesson.forms.answer import AnswerForm\nfrom lesson.models.answer import Answer\nfrom lesson.models.worksheet_question import WorksheetQuestion\n\n\nclass AnswerMixin(object):\n \"\"\"Mixin class to provide standard settings for Answer.\"\"\"\n\n model = Answer\n form_class = AnswerForm\n\n\nclass AnswerCreateView(\n LoginRequiredMixin, AnswerMixin, CreateView):\n \"\"\"Create view for Answer.\"\"\"\n\n context_object_name = 'answer'\n template_name = 'create.html'\n creation_label = _('Add answer')\n\n def get_success_url(self):\n \"\"\"Define the redirect URL\n\n After successful creation of the object, the User will be redirected\n to the unapproved Version list page for the object's parent Worksheet\n\n :returns: URL\n :rtype: HttpResponse\n \"\"\"\n return reverse('worksheet-detail', kwargs={\n 'pk': self.object.question.worksheet.pk,\n 'section_slug': self.object.question.worksheet.section.slug,\n 'project_slug': self.object.question.worksheet.section.project.slug\n })\n\n def get_form_kwargs(self):\n \"\"\"Get keyword arguments from form.\n\n :returns keyword argument from the form\n :rtype dict\n \"\"\"\n kwargs = super(AnswerCreateView, self).get_form_kwargs()\n pk = self.kwargs['question_pk']\n kwargs['question'] = get_object_or_404(WorksheetQuestion, pk=pk)\n return kwargs\n\n\n# noinspection PyAttributeOutsideInit\nclass AnswerDeleteView(\n LoginRequiredMixin,\n AnswerMixin,\n DeleteView):\n \"\"\"Delete view for Answer.\"\"\"\n\n context_object_name = 'answer'\n template_name = 'answer/delete.html'\n\n def get_success_url(self):\n \"\"\"Define the redirect URL.\n\n After successful deletion of the object, the User will be redirected\n to the Certifying Organisation list page\n for the object's parent Worksheet.\n\n :returns: URL\n :rtype: HttpResponse\n \"\"\"\n return reverse('worksheet-detail', kwargs={\n 'pk': self.object.question.worksheet.pk,\n 'section_slug': self.object.question.worksheet.section.slug,\n 'project_slug': self.object.question.worksheet.section.project.slug\n })\n\n\n# noinspection PyAttributeOutsideInit\nclass AnswerUpdateView(\n LoginRequiredMixin,\n AnswerMixin,\n UpdateView):\n \"\"\"Update view for Answer.\"\"\"\n\n context_object_name = 'answer'\n template_name = 'update.html'\n update_label = _('Update answer')\n\n def get_form_kwargs(self):\n \"\"\"Get keyword arguments from form.\n\n :returns keyword argument from the form\n :rtype: dict\n \"\"\"\n kwargs = super(AnswerUpdateView, self).get_form_kwargs()\n answer = get_object_or_404(Answer, 
self.pk_url_kwarg)\n kwargs['question'] = answer.question\n return kwargs\n\n def get_success_url(self):\n \"\"\"Define the redirect URL.\n\n After successful update of the object, the User will be redirected to\n the specification list page for the object's parent Worksheet.\n\n :returns: URL\n :rtype: HttpResponse\n \"\"\"\n return reverse('worksheet-detail', kwargs={\n 'pk': self.object.question.worksheet.pk,\n 'section_slug': self.object.question.worksheet.section.slug,\n 'project_slug': self.object.question.worksheet.section.project.slug\n })\n", "path": "django_project/lesson/views/answer.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"Answer views.\"\"\"\n\nfrom django.core.urlresolvers import reverse\nfrom django.views.generic import (\n CreateView,\n DeleteView,\n UpdateView,\n)\nfrom django.shortcuts import get_object_or_404\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom braces.views import LoginRequiredMixin\n\nfrom lesson.forms.answer import AnswerForm\nfrom lesson.models.answer import Answer\nfrom lesson.models.worksheet_question import WorksheetQuestion\n\n\nclass AnswerMixin(object):\n \"\"\"Mixin class to provide standard settings for Answer.\"\"\"\n\n model = Answer\n form_class = AnswerForm\n\n\nclass AnswerCreateView(\n LoginRequiredMixin, AnswerMixin, CreateView):\n \"\"\"Create view for Answer.\"\"\"\n\n context_object_name = 'answer'\n template_name = 'create.html'\n creation_label = _('Add answer')\n\n def get_success_url(self):\n \"\"\"Define the redirect URL\n\n After successful creation of the object, the User will be redirected\n to the unapproved Version list page for the object's parent Worksheet\n\n :returns: URL\n :rtype: HttpResponse\n \"\"\"\n return reverse('worksheet-detail', kwargs={\n 'pk': self.object.question.worksheet.pk,\n 'section_slug': self.object.question.worksheet.section.slug,\n 'project_slug': self.object.question.worksheet.section.project.slug\n })\n\n def get_form_kwargs(self):\n \"\"\"Get keyword arguments from form.\n\n :returns keyword argument from the form\n :rtype dict\n \"\"\"\n kwargs = super(AnswerCreateView, self).get_form_kwargs()\n pk = self.kwargs['question_pk']\n kwargs['question'] = get_object_or_404(WorksheetQuestion, pk=pk)\n return kwargs\n\n\n# noinspection PyAttributeOutsideInit\nclass AnswerDeleteView(\n LoginRequiredMixin,\n AnswerMixin,\n DeleteView):\n \"\"\"Delete view for Answer.\"\"\"\n\n context_object_name = 'answer'\n template_name = 'answer/delete.html'\n\n def get_success_url(self):\n \"\"\"Define the redirect URL.\n\n After successful deletion of the object, the User will be redirected\n to the Certifying Organisation list page\n for the object's parent Worksheet.\n\n :returns: URL\n :rtype: HttpResponse\n \"\"\"\n return reverse('worksheet-detail', kwargs={\n 'pk': self.object.question.worksheet.pk,\n 'section_slug': self.object.question.worksheet.section.slug,\n 'project_slug': self.object.question.worksheet.section.project.slug\n })\n\n\n# noinspection PyAttributeOutsideInit\nclass AnswerUpdateView(\n LoginRequiredMixin,\n AnswerMixin,\n UpdateView):\n \"\"\"Update view for Answer.\"\"\"\n\n context_object_name = 'answer'\n template_name = 'update.html'\n update_label = _('Update answer')\n\n def get_form_kwargs(self):\n \"\"\"Get keyword arguments from form.\n\n :returns keyword argument from the form\n :rtype: dict\n \"\"\"\n kwargs = super(AnswerUpdateView, self).get_form_kwargs()\n answer = get_object_or_404(Answer, pk=kwargs['instance'].pk)\n kwargs['question'] = 
answer.question\n return kwargs\n\n def get_success_url(self):\n \"\"\"Define the redirect URL.\n\n After successful update of the object, the User will be redirected to\n the specification list page for the object's parent Worksheet.\n\n :returns: URL\n :rtype: HttpResponse\n \"\"\"\n return reverse('worksheet-detail', kwargs={\n 'pk': self.object.question.worksheet.pk,\n 'section_slug': self.object.question.worksheet.section.slug,\n 'project_slug': self.object.question.worksheet.section.project.slug\n })\n", "path": "django_project/lesson/views/answer.py"}]}
| 1,351 | 140 |
gh_patches_debug_63228
|
rasdani/github-patches
|
git_diff
|
ManimCommunity__manim-501
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
-fp does not work on Ubuntu
On Ubuntu, when passing -f flag (open the file in file browser) and -p (preview) no preview is shown, only the file browser is open, instead of both.
IIRC it is working on windows, as I remember having worked on this flag to make it work on Windows.
Seems like an easy PR
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `manim/__main__.py`
Content:
```
1 import inspect
2 import os
3 import platform
4 import subprocess as sp
5 import sys
6 import re
7 import traceback
8 import importlib.util
9 import types
10
11 from . import constants, logger, console, file_writer_config
12 from .config.config import args
13 from .config import cfg_subcmds
14 from .scene.scene import Scene
15 from .utils.sounds import play_error_sound, play_finish_sound
16 from .utils.file_ops import open_file as open_media_file
17 from . import constants
18
19
20 def open_file_if_needed(file_writer):
21 if file_writer_config["verbosity"] != "DEBUG":
22 curr_stdout = sys.stdout
23 sys.stdout = open(os.devnull, "w")
24
25 open_file = any(
26 [file_writer_config["preview"], file_writer_config["show_in_file_browser"]]
27 )
28 if open_file:
29 current_os = platform.system()
30 file_paths = []
31
32 if file_writer_config["save_last_frame"]:
33 file_paths.append(file_writer.get_image_file_path())
34 if (
35 file_writer_config["write_to_movie"]
36 and not file_writer_config["save_as_gif"]
37 ):
38 file_paths.append(file_writer.get_movie_file_path())
39 if file_writer_config["save_as_gif"]:
40 file_paths.append(file_writer.gif_file_path)
41
42 for file_path in file_paths:
43 open_media_file(file_path, file_writer_config["show_in_file_browser"])
44
45 if file_writer_config["verbosity"] != "DEBUG":
46 sys.stdout.close()
47 sys.stdout = curr_stdout
48
49
50 def is_child_scene(obj, module):
51 return (
52 inspect.isclass(obj)
53 and issubclass(obj, Scene)
54 and obj != Scene
55 and obj.__module__.startswith(module.__name__)
56 )
57
58
59 def prompt_user_for_choice(scene_classes):
60 num_to_class = {}
61 for count, scene_class in enumerate(scene_classes):
62 count += 1 # start with 1 instead of 0
63 name = scene_class.__name__
64 console.print(f"{count}: {name}", style="logging.level.info")
65 num_to_class[count] = scene_class
66 try:
67 user_input = console.input(
68 f"[log.message] {constants.CHOOSE_NUMBER_MESSAGE} [/log.message]"
69 )
70 return [
71 num_to_class[int(num_str)]
72 for num_str in re.split(r"\s*,\s*", user_input.strip())
73 ]
74 except KeyError:
75 logger.error(constants.INVALID_NUMBER_MESSAGE)
76 sys.exit(2)
77 except EOFError:
78 sys.exit(1)
79
80
81 def get_scenes_to_render(scene_classes):
82 if not scene_classes:
83 logger.error(constants.NO_SCENE_MESSAGE)
84 return []
85 if file_writer_config["write_all"]:
86 return scene_classes
87 result = []
88 for scene_name in file_writer_config["scene_names"]:
89 found = False
90 for scene_class in scene_classes:
91 if scene_class.__name__ == scene_name:
92 result.append(scene_class)
93 found = True
94 break
95 if not found and (scene_name != ""):
96 logger.error(constants.SCENE_NOT_FOUND_MESSAGE.format(scene_name))
97 if result:
98 return result
99 return (
100 [scene_classes[0]]
101 if len(scene_classes) == 1
102 else prompt_user_for_choice(scene_classes)
103 )
104
105
106 def get_scene_classes_from_module(module):
107 return [
108 member[1]
109 for member in inspect.getmembers(module, lambda x: is_child_scene(x, module))
110 ]
111
112
113 def get_module(file_name):
114 if file_name == "-":
115 # Since this feature is used for rapid testing, using Scene Caching would be a
116 # hindrance in this case.
117 file_writer_config["disable_caching"] = True
118 module = types.ModuleType("input_scenes")
119 logger.info(
120 "Enter the animation's code & end with an EOF (CTRL+D on Linux/Unix, CTRL+Z on Windows):"
121 )
122 code = sys.stdin.read()
123 if not code.startswith("from manim import"):
124 logger.warning(
125 "Didn't find an import statement for Manim. Importing automatically..."
126 )
127 code = "from manim import *\n" + code
128 logger.info("Rendering animation from typed code...")
129 try:
130 exec(code, module.__dict__)
131 return module
132 except Exception as e:
133 logger.error(f"Failed to render scene: {str(e)}")
134 sys.exit(2)
135 else:
136 if os.path.exists(file_name):
137 if file_name[-3:] != ".py":
138 raise Exception(f"{file_name} is not a valid Manim python script.")
139 module_name = file_name[:-3].replace(os.sep, ".").split(".")[-1]
140 spec = importlib.util.spec_from_file_location(module_name, file_name)
141 module = importlib.util.module_from_spec(spec)
142 spec.loader.exec_module(module)
143 return module
144 else:
145 raise FileNotFoundError(f"{file_name} not found")
146
147
148 def main():
149 if hasattr(args, "subcommands"):
150 if "cfg" in args.subcommands:
151 if args.cfg_subcommand is not None:
152 subcommand = args.cfg_subcommand
153 if subcommand == "write":
154 cfg_subcmds.write(args.level, args.open)
155 elif subcommand == "show":
156 cfg_subcmds.show()
157 elif subcommand == "export":
158 cfg_subcmds.export(args.dir)
159 else:
160 logger.error("No argument provided; Exiting...")
161
162 else:
163 module = get_module(file_writer_config["input_file"])
164 all_scene_classes = get_scene_classes_from_module(module)
165 scene_classes_to_render = get_scenes_to_render(all_scene_classes)
166 sound_on = file_writer_config["sound"]
167 for SceneClass in scene_classes_to_render:
168 try:
169 # By invoking, this renders the full scene
170 scene = SceneClass()
171 open_file_if_needed(scene.file_writer)
172 if sound_on:
173 play_finish_sound()
174 except Exception:
175 print("\n\n")
176 traceback.print_exc()
177 print("\n\n")
178 if sound_on:
179 play_error_sound()
180
181
182 if __name__ == "__main__":
183 main()
184
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/manim/__main__.py b/manim/__main__.py
--- a/manim/__main__.py
+++ b/manim/__main__.py
@@ -40,7 +40,10 @@
file_paths.append(file_writer.gif_file_path)
for file_path in file_paths:
- open_media_file(file_path, file_writer_config["show_in_file_browser"])
+ if file_writer_config["show_in_file_browser"]:
+ open_media_file(file_path, True)
+ if file_writer_config["preview"]:
+ open_media_file(file_path, False)
if file_writer_config["verbosity"] != "DEBUG":
sys.stdout.close()
|
{"golden_diff": "diff --git a/manim/__main__.py b/manim/__main__.py\n--- a/manim/__main__.py\n+++ b/manim/__main__.py\n@@ -40,7 +40,10 @@\n file_paths.append(file_writer.gif_file_path)\n \n for file_path in file_paths:\n- open_media_file(file_path, file_writer_config[\"show_in_file_browser\"])\n+ if file_writer_config[\"show_in_file_browser\"]:\n+ open_media_file(file_path, True)\n+ if file_writer_config[\"preview\"]:\n+ open_media_file(file_path, False)\n \n if file_writer_config[\"verbosity\"] != \"DEBUG\":\n sys.stdout.close()\n", "issue": "-fp does not work on Ubuntu \nOn Ubuntu, when passing -f flag (open the file in file browser) and -p (preview) no preview is shown, only the file browser is open, instead of both. \r\n\r\nIIRC it is working on windows, as I remember having worked on this flag to make it work on Windows. \r\n\r\nSeems like an easy PR \n-fp does not work on Ubuntu \nOn Ubuntu, when passing -f flag (open the file in file browser) and -p (preview) no preview is shown, only the file browser is open, instead of both. \r\n\r\nIIRC it is working on windows, as I remember having worked on this flag to make it work on Windows. \r\n\r\nSeems like an easy PR \n", "before_files": [{"content": "import inspect\nimport os\nimport platform\nimport subprocess as sp\nimport sys\nimport re\nimport traceback\nimport importlib.util\nimport types\n\nfrom . import constants, logger, console, file_writer_config\nfrom .config.config import args\nfrom .config import cfg_subcmds\nfrom .scene.scene import Scene\nfrom .utils.sounds import play_error_sound, play_finish_sound\nfrom .utils.file_ops import open_file as open_media_file\nfrom . import constants\n\n\ndef open_file_if_needed(file_writer):\n if file_writer_config[\"verbosity\"] != \"DEBUG\":\n curr_stdout = sys.stdout\n sys.stdout = open(os.devnull, \"w\")\n\n open_file = any(\n [file_writer_config[\"preview\"], file_writer_config[\"show_in_file_browser\"]]\n )\n if open_file:\n current_os = platform.system()\n file_paths = []\n\n if file_writer_config[\"save_last_frame\"]:\n file_paths.append(file_writer.get_image_file_path())\n if (\n file_writer_config[\"write_to_movie\"]\n and not file_writer_config[\"save_as_gif\"]\n ):\n file_paths.append(file_writer.get_movie_file_path())\n if file_writer_config[\"save_as_gif\"]:\n file_paths.append(file_writer.gif_file_path)\n\n for file_path in file_paths:\n open_media_file(file_path, file_writer_config[\"show_in_file_browser\"])\n\n if file_writer_config[\"verbosity\"] != \"DEBUG\":\n sys.stdout.close()\n sys.stdout = curr_stdout\n\n\ndef is_child_scene(obj, module):\n return (\n inspect.isclass(obj)\n and issubclass(obj, Scene)\n and obj != Scene\n and obj.__module__.startswith(module.__name__)\n )\n\n\ndef prompt_user_for_choice(scene_classes):\n num_to_class = {}\n for count, scene_class in enumerate(scene_classes):\n count += 1 # start with 1 instead of 0\n name = scene_class.__name__\n console.print(f\"{count}: {name}\", style=\"logging.level.info\")\n num_to_class[count] = scene_class\n try:\n user_input = console.input(\n f\"[log.message] {constants.CHOOSE_NUMBER_MESSAGE} [/log.message]\"\n )\n return [\n num_to_class[int(num_str)]\n for num_str in re.split(r\"\\s*,\\s*\", user_input.strip())\n ]\n except KeyError:\n logger.error(constants.INVALID_NUMBER_MESSAGE)\n sys.exit(2)\n except EOFError:\n sys.exit(1)\n\n\ndef get_scenes_to_render(scene_classes):\n if not scene_classes:\n logger.error(constants.NO_SCENE_MESSAGE)\n return []\n if file_writer_config[\"write_all\"]:\n return 
scene_classes\n result = []\n for scene_name in file_writer_config[\"scene_names\"]:\n found = False\n for scene_class in scene_classes:\n if scene_class.__name__ == scene_name:\n result.append(scene_class)\n found = True\n break\n if not found and (scene_name != \"\"):\n logger.error(constants.SCENE_NOT_FOUND_MESSAGE.format(scene_name))\n if result:\n return result\n return (\n [scene_classes[0]]\n if len(scene_classes) == 1\n else prompt_user_for_choice(scene_classes)\n )\n\n\ndef get_scene_classes_from_module(module):\n return [\n member[1]\n for member in inspect.getmembers(module, lambda x: is_child_scene(x, module))\n ]\n\n\ndef get_module(file_name):\n if file_name == \"-\":\n # Since this feature is used for rapid testing, using Scene Caching would be a\n # hindrance in this case.\n file_writer_config[\"disable_caching\"] = True\n module = types.ModuleType(\"input_scenes\")\n logger.info(\n \"Enter the animation's code & end with an EOF (CTRL+D on Linux/Unix, CTRL+Z on Windows):\"\n )\n code = sys.stdin.read()\n if not code.startswith(\"from manim import\"):\n logger.warning(\n \"Didn't find an import statement for Manim. Importing automatically...\"\n )\n code = \"from manim import *\\n\" + code\n logger.info(\"Rendering animation from typed code...\")\n try:\n exec(code, module.__dict__)\n return module\n except Exception as e:\n logger.error(f\"Failed to render scene: {str(e)}\")\n sys.exit(2)\n else:\n if os.path.exists(file_name):\n if file_name[-3:] != \".py\":\n raise Exception(f\"{file_name} is not a valid Manim python script.\")\n module_name = file_name[:-3].replace(os.sep, \".\").split(\".\")[-1]\n spec = importlib.util.spec_from_file_location(module_name, file_name)\n module = importlib.util.module_from_spec(spec)\n spec.loader.exec_module(module)\n return module\n else:\n raise FileNotFoundError(f\"{file_name} not found\")\n\n\ndef main():\n if hasattr(args, \"subcommands\"):\n if \"cfg\" in args.subcommands:\n if args.cfg_subcommand is not None:\n subcommand = args.cfg_subcommand\n if subcommand == \"write\":\n cfg_subcmds.write(args.level, args.open)\n elif subcommand == \"show\":\n cfg_subcmds.show()\n elif subcommand == \"export\":\n cfg_subcmds.export(args.dir)\n else:\n logger.error(\"No argument provided; Exiting...\")\n\n else:\n module = get_module(file_writer_config[\"input_file\"])\n all_scene_classes = get_scene_classes_from_module(module)\n scene_classes_to_render = get_scenes_to_render(all_scene_classes)\n sound_on = file_writer_config[\"sound\"]\n for SceneClass in scene_classes_to_render:\n try:\n # By invoking, this renders the full scene\n scene = SceneClass()\n open_file_if_needed(scene.file_writer)\n if sound_on:\n play_finish_sound()\n except Exception:\n print(\"\\n\\n\")\n traceback.print_exc()\n print(\"\\n\\n\")\n if sound_on:\n play_error_sound()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "manim/__main__.py"}], "after_files": [{"content": "import inspect\nimport os\nimport platform\nimport subprocess as sp\nimport sys\nimport re\nimport traceback\nimport importlib.util\nimport types\n\nfrom . import constants, logger, console, file_writer_config\nfrom .config.config import args\nfrom .config import cfg_subcmds\nfrom .scene.scene import Scene\nfrom .utils.sounds import play_error_sound, play_finish_sound\nfrom .utils.file_ops import open_file as open_media_file\nfrom . 
import constants\n\n\ndef open_file_if_needed(file_writer):\n if file_writer_config[\"verbosity\"] != \"DEBUG\":\n curr_stdout = sys.stdout\n sys.stdout = open(os.devnull, \"w\")\n\n open_file = any(\n [file_writer_config[\"preview\"], file_writer_config[\"show_in_file_browser\"]]\n )\n if open_file:\n current_os = platform.system()\n file_paths = []\n\n if file_writer_config[\"save_last_frame\"]:\n file_paths.append(file_writer.get_image_file_path())\n if (\n file_writer_config[\"write_to_movie\"]\n and not file_writer_config[\"save_as_gif\"]\n ):\n file_paths.append(file_writer.get_movie_file_path())\n if file_writer_config[\"save_as_gif\"]:\n file_paths.append(file_writer.gif_file_path)\n\n for file_path in file_paths:\n if file_writer_config[\"show_in_file_browser\"]:\n open_media_file(file_path, True)\n if file_writer_config[\"preview\"]:\n open_media_file(file_path, False)\n\n if file_writer_config[\"verbosity\"] != \"DEBUG\":\n sys.stdout.close()\n sys.stdout = curr_stdout\n\n\ndef is_child_scene(obj, module):\n return (\n inspect.isclass(obj)\n and issubclass(obj, Scene)\n and obj != Scene\n and obj.__module__.startswith(module.__name__)\n )\n\n\ndef prompt_user_for_choice(scene_classes):\n num_to_class = {}\n for count, scene_class in enumerate(scene_classes):\n count += 1 # start with 1 instead of 0\n name = scene_class.__name__\n console.print(f\"{count}: {name}\", style=\"logging.level.info\")\n num_to_class[count] = scene_class\n try:\n user_input = console.input(\n f\"[log.message] {constants.CHOOSE_NUMBER_MESSAGE} [/log.message]\"\n )\n return [\n num_to_class[int(num_str)]\n for num_str in re.split(r\"\\s*,\\s*\", user_input.strip())\n ]\n except KeyError:\n logger.error(constants.INVALID_NUMBER_MESSAGE)\n sys.exit(2)\n except EOFError:\n sys.exit(1)\n\n\ndef get_scenes_to_render(scene_classes):\n if not scene_classes:\n logger.error(constants.NO_SCENE_MESSAGE)\n return []\n if file_writer_config[\"write_all\"]:\n return scene_classes\n result = []\n for scene_name in file_writer_config[\"scene_names\"]:\n found = False\n for scene_class in scene_classes:\n if scene_class.__name__ == scene_name:\n result.append(scene_class)\n found = True\n break\n if not found and (scene_name != \"\"):\n logger.error(constants.SCENE_NOT_FOUND_MESSAGE.format(scene_name))\n if result:\n return result\n return (\n [scene_classes[0]]\n if len(scene_classes) == 1\n else prompt_user_for_choice(scene_classes)\n )\n\n\ndef get_scene_classes_from_module(module):\n return [\n member[1]\n for member in inspect.getmembers(module, lambda x: is_child_scene(x, module))\n ]\n\n\ndef get_module(file_name):\n if file_name == \"-\":\n # Since this feature is used for rapid testing, using Scene Caching would be a\n # hindrance in this case.\n file_writer_config[\"disable_caching\"] = True\n module = types.ModuleType(\"input_scenes\")\n logger.info(\n \"Enter the animation's code & end with an EOF (CTRL+D on Linux/Unix, CTRL+Z on Windows):\"\n )\n code = sys.stdin.read()\n if not code.startswith(\"from manim import\"):\n logger.warning(\n \"Didn't find an import statement for Manim. 
Importing automatically...\"\n )\n code = \"from manim import *\\n\" + code\n logger.info(\"Rendering animation from typed code...\")\n try:\n exec(code, module.__dict__)\n return module\n except Exception as e:\n logger.error(f\"Failed to render scene: {str(e)}\")\n sys.exit(2)\n else:\n if os.path.exists(file_name):\n if file_name[-3:] != \".py\":\n raise Exception(f\"{file_name} is not a valid Manim python script.\")\n module_name = file_name[:-3].replace(os.sep, \".\").split(\".\")[-1]\n spec = importlib.util.spec_from_file_location(module_name, file_name)\n module = importlib.util.module_from_spec(spec)\n spec.loader.exec_module(module)\n return module\n else:\n raise FileNotFoundError(f\"{file_name} not found\")\n\n\ndef main():\n if hasattr(args, \"subcommands\"):\n if \"cfg\" in args.subcommands:\n if args.cfg_subcommand is not None:\n subcommand = args.cfg_subcommand\n if subcommand == \"write\":\n cfg_subcmds.write(args.level, args.open)\n elif subcommand == \"show\":\n cfg_subcmds.show()\n elif subcommand == \"export\":\n cfg_subcmds.export(args.dir)\n else:\n logger.error(\"No argument provided; Exiting...\")\n\n else:\n module = get_module(file_writer_config[\"input_file\"])\n all_scene_classes = get_scene_classes_from_module(module)\n scene_classes_to_render = get_scenes_to_render(all_scene_classes)\n sound_on = file_writer_config[\"sound\"]\n for SceneClass in scene_classes_to_render:\n try:\n # By invoking, this renders the full scene\n scene = SceneClass()\n open_file_if_needed(scene.file_writer)\n if sound_on:\n play_finish_sound()\n except Exception:\n print(\"\\n\\n\")\n traceback.print_exc()\n print(\"\\n\\n\")\n if sound_on:\n play_error_sound()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "manim/__main__.py"}]}
| 2,150 | 144 |
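The golden diff above stops treating `-p` (preview) and `-f` (show in file browser) as a single decision: each flag now triggers its own call. A standalone, simplified sketch of that flag-handling pattern is below; it is not the manim implementation, and `open_outputs`, the `config` dict, and the fake opener are illustrative stand-ins for the real `file_writer_config` and `open_media_file`:

```python
# Simplified sketch: each requested action is checked independently, so
# enabling the file-browser flag no longer suppresses the preview.
def open_outputs(file_paths, config, open_media_file):
    for path in file_paths:
        if config.get("show_in_file_browser"):
            open_media_file(path, True)   # reveal the file in the OS file manager
        if config.get("preview"):
            open_media_file(path, False)  # open the file with the default viewer


def fake_open(path, in_file_browser):
    print(f"open {path} (file_browser={in_file_browser})")


if __name__ == "__main__":
    # Both actions run for every rendered file when both flags are set.
    open_outputs(
        ["scene.mp4"],
        {"preview": True, "show_in_file_browser": True},
        fake_open,
    )
```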
gh_patches_debug_22768
|
rasdani/github-patches
|
git_diff
|
sql-machine-learning__elasticdl-1384
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[PS-1]Add new RPC services definition in elasticdl.proto according to PS design
[PS design](https://github.com/sql-machine-learning/elasticdl/blob/develop/docs/designs/ps_design.md#rpc-definition) adds some new RPC services.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/python/ps/servicer.py`
Content:
```
1 from google.protobuf import empty_pb2
2
3 from elasticdl.proto import elasticdl_pb2_grpc
4
5
6 class PserverServicer(elasticdl_pb2_grpc.PserverServicer):
7 """PS service implementation"""
8
9 def __init__(
10 self,
11 parameters,
12 grads_to_wait,
13 optimizer,
14 lr_staleness_modulation=False,
15 use_async=False,
16 ):
17 self._parameters = parameters
18 self._grads_to_wait = grads_to_wait
19 self._optimizer = optimizer
20 self._lr_staleness_modulation = lr_staleness_modulation
21 self._use_async = use_async
22 self._version = 0
23
24 def pull_variable(self, request, _):
25 # TODO: implement this RPC service
26 return empty_pb2.Empty()
27
28 def pull_embedding_vector(self, request, _):
29 # TODO: implement this RPC service
30 return empty_pb2.Empty()
31
32 def push_model(self, request, _):
33 # TODO: implement this RPC service
34 return empty_pb2.Empty()
35
36 def push_gradient(self, request, _):
37 # TODO: implement this RPC service
38 return empty_pb2.Empty()
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/elasticdl/python/ps/servicer.py b/elasticdl/python/ps/servicer.py
--- a/elasticdl/python/ps/servicer.py
+++ b/elasticdl/python/ps/servicer.py
@@ -1,6 +1,6 @@
from google.protobuf import empty_pb2
-from elasticdl.proto import elasticdl_pb2_grpc
+from elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc
class PserverServicer(elasticdl_pb2_grpc.PserverServicer):
@@ -23,11 +23,11 @@
def pull_variable(self, request, _):
# TODO: implement this RPC service
- return empty_pb2.Empty()
+ return elasticdl_pb2.PullVariableResponse()
def pull_embedding_vector(self, request, _):
# TODO: implement this RPC service
- return empty_pb2.Empty()
+ return elasticdl_pb2.Tensor()
def push_model(self, request, _):
# TODO: implement this RPC service
@@ -35,4 +35,4 @@
def push_gradient(self, request, _):
# TODO: implement this RPC service
- return empty_pb2.Empty()
+ return elasticdl_pb2.PushGradientResponse()
|
{"golden_diff": "diff --git a/elasticdl/python/ps/servicer.py b/elasticdl/python/ps/servicer.py\n--- a/elasticdl/python/ps/servicer.py\n+++ b/elasticdl/python/ps/servicer.py\n@@ -1,6 +1,6 @@\n from google.protobuf import empty_pb2\n \n-from elasticdl.proto import elasticdl_pb2_grpc\n+from elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc\n \n \n class PserverServicer(elasticdl_pb2_grpc.PserverServicer):\n@@ -23,11 +23,11 @@\n \n def pull_variable(self, request, _):\n # TODO: implement this RPC service\n- return empty_pb2.Empty()\n+ return elasticdl_pb2.PullVariableResponse()\n \n def pull_embedding_vector(self, request, _):\n # TODO: implement this RPC service\n- return empty_pb2.Empty()\n+ return elasticdl_pb2.Tensor()\n \n def push_model(self, request, _):\n # TODO: implement this RPC service\n@@ -35,4 +35,4 @@\n \n def push_gradient(self, request, _):\n # TODO: implement this RPC service\n- return empty_pb2.Empty()\n+ return elasticdl_pb2.PushGradientResponse()\n", "issue": "[PS-1]Add new RPC services definition in elasticdl.proto according to PS design\n[PS design](https://github.com/sql-machine-learning/elasticdl/blob/develop/docs/designs/ps_design.md#rpc-definition) adds some new RPC services.\n", "before_files": [{"content": "from google.protobuf import empty_pb2\n\nfrom elasticdl.proto import elasticdl_pb2_grpc\n\n\nclass PserverServicer(elasticdl_pb2_grpc.PserverServicer):\n \"\"\"PS service implementation\"\"\"\n\n def __init__(\n self,\n parameters,\n grads_to_wait,\n optimizer,\n lr_staleness_modulation=False,\n use_async=False,\n ):\n self._parameters = parameters\n self._grads_to_wait = grads_to_wait\n self._optimizer = optimizer\n self._lr_staleness_modulation = lr_staleness_modulation\n self._use_async = use_async\n self._version = 0\n\n def pull_variable(self, request, _):\n # TODO: implement this RPC service\n return empty_pb2.Empty()\n\n def pull_embedding_vector(self, request, _):\n # TODO: implement this RPC service\n return empty_pb2.Empty()\n\n def push_model(self, request, _):\n # TODO: implement this RPC service\n return empty_pb2.Empty()\n\n def push_gradient(self, request, _):\n # TODO: implement this RPC service\n return empty_pb2.Empty()\n", "path": "elasticdl/python/ps/servicer.py"}], "after_files": [{"content": "from google.protobuf import empty_pb2\n\nfrom elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc\n\n\nclass PserverServicer(elasticdl_pb2_grpc.PserverServicer):\n \"\"\"PS service implementation\"\"\"\n\n def __init__(\n self,\n parameters,\n grads_to_wait,\n optimizer,\n lr_staleness_modulation=False,\n use_async=False,\n ):\n self._parameters = parameters\n self._grads_to_wait = grads_to_wait\n self._optimizer = optimizer\n self._lr_staleness_modulation = lr_staleness_modulation\n self._use_async = use_async\n self._version = 0\n\n def pull_variable(self, request, _):\n # TODO: implement this RPC service\n return elasticdl_pb2.PullVariableResponse()\n\n def pull_embedding_vector(self, request, _):\n # TODO: implement this RPC service\n return elasticdl_pb2.Tensor()\n\n def push_model(self, request, _):\n # TODO: implement this RPC service\n return empty_pb2.Empty()\n\n def push_gradient(self, request, _):\n # TODO: implement this RPC service\n return elasticdl_pb2.PushGradientResponse()\n", "path": "elasticdl/python/ps/servicer.py"}]}
| 636 | 280 |
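The fix above makes each parameter-server stub return the response message type declared for its RPC rather than `Empty`. A minimal illustration of why that matters is sketched below; plain Python classes stand in for the protobuf-generated messages and are not elasticdl's real definitions:

```python
# Plain classes stand in for pb2-generated messages in this sketch only.
class PullVariableResponse:
    """Stand-in for the response message declared for the pull_variable RPC."""


class PserverServicerSketch:
    """Skeleton servicer: even unimplemented RPCs return the declared type."""

    def pull_variable(self, request, context):
        # Generated client stubs expect to deserialize a PullVariableResponse,
        # so returning the declared type keeps them working while the method
        # body is still a placeholder.
        return PullVariableResponse()


response = PserverServicerSketch().pull_variable(request=None, context=None)
assert isinstance(response, PullVariableResponse)
```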
gh_patches_debug_12962
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-615
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Only creating wheels for Python 2.7
Seems I didn't set something up correctly. It looks like this is a limitation of `setup.py bdist_wheel`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from __future__ import print_function
5 from setuptools import setup
6 import re
7 import os
8 import sys
9
10 PY26 = sys.version_info[:2] == (2, 6)
11
12
13 long_description = (
14 "MkDocs is a fast, simple and downright gorgeous static site generator "
15 "that's geared towards building project documentation. Documentation "
16 "source files are written in Markdown, and configured with a single YAML "
17 "configuration file."
18 )
19
20
21 def get_version(package):
22 """Return package version as listed in `__version__` in `init.py`."""
23 init_py = open(os.path.join(package, '__init__.py')).read()
24 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
25
26
27 def get_packages(package):
28 """Return root package and all sub-packages."""
29 return [dirpath
30 for dirpath, dirnames, filenames in os.walk(package)
31 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
32
33
34 def get_package_data(package):
35 """
36 Return all files under the root package, that are not in a
37 package themselves.
38 """
39 walk = [(dirpath.replace(package + os.sep, '', 1), filenames)
40 for dirpath, dirnames, filenames in os.walk(package)
41 if not os.path.exists(os.path.join(dirpath, '__init__.py'))]
42
43 filepaths = []
44 for base, filenames in walk:
45 filepaths.extend([os.path.join(base, filename)
46 for filename in filenames])
47 return {package: filepaths}
48
49 setup(
50 name="mkdocs",
51 version=get_version("mkdocs"),
52 url='http://www.mkdocs.org',
53 license='BSD',
54 description='Project documentation with Markdown.',
55 long_description=long_description,
56 author='Tom Christie',
57 author_email='[email protected]', # SEE NOTE BELOW (*)
58 packages=get_packages("mkdocs"),
59 package_data=get_package_data("mkdocs"),
60 install_requires=[
61 'click>=4.0',
62 'Jinja2>=2.7.1',
63 'livereload>=2.3.2',
64 'Markdown>=2.3.1,<2.5' if PY26 else 'Markdown>=2.3.1',
65 'PyYAML>=3.10',
66 'tornado>=4.1',
67 ],
68 entry_points={
69 'console_scripts': [
70 'mkdocs = mkdocs.cli:cli',
71 ],
72 },
73 classifiers=[
74 'Development Status :: 5 - Production/Stable',
75 'Environment :: Console',
76 'Environment :: Web Environment',
77 'Intended Audience :: Developers',
78 'License :: OSI Approved :: BSD License',
79 'Operating System :: OS Independent',
80 'Programming Language :: Python',
81 'Programming Language :: Python :: 2',
82 'Programming Language :: Python :: 2.6',
83 'Programming Language :: Python :: 2.7',
84 'Programming Language :: Python :: 3',
85 'Programming Language :: Python :: 3.3',
86 'Programming Language :: Python :: 3.4',
87 "Programming Language :: Python :: Implementation :: CPython",
88 'Topic :: Documentation',
89 'Topic :: Text Processing',
90 ],
91 zip_safe=False
92 )
93
94 # (*) Please direct queries to the discussion group:
95 # https://groups.google.com/forum/#!forum/mkdocs
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -46,6 +46,22 @@
for filename in filenames])
return {package: filepaths}
+
+if sys.argv[-1] == 'publish':
+ if os.system("pip freeze | grep wheel"):
+ print("wheel not installed.\nUse `pip install wheel`.\nExiting.")
+ sys.exit()
+ if os.system("pip freeze | grep twine"):
+ print("twine not installed.\nUse `pip install twine`.\nExiting.")
+ sys.exit()
+ os.system("python setup.py sdist bdist_wheel")
+ os.system("twine upload dist/*")
+ print("You probably want to also tag the version now:")
+ print(" git tag -a {0} -m 'version {0}'".format(get_version("mkdocs")))
+ print(" git push --tags")
+ sys.exit()
+
+
setup(
name="mkdocs",
version=get_version("mkdocs"),
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -46,6 +46,22 @@\n for filename in filenames])\n return {package: filepaths}\n \n+\n+if sys.argv[-1] == 'publish':\n+ if os.system(\"pip freeze | grep wheel\"):\n+ print(\"wheel not installed.\\nUse `pip install wheel`.\\nExiting.\")\n+ sys.exit()\n+ if os.system(\"pip freeze | grep twine\"):\n+ print(\"twine not installed.\\nUse `pip install twine`.\\nExiting.\")\n+ sys.exit()\n+ os.system(\"python setup.py sdist bdist_wheel\")\n+ os.system(\"twine upload dist/*\")\n+ print(\"You probably want to also tag the version now:\")\n+ print(\" git tag -a {0} -m 'version {0}'\".format(get_version(\"mkdocs\")))\n+ print(\" git push --tags\")\n+ sys.exit()\n+\n+\n setup(\n name=\"mkdocs\",\n version=get_version(\"mkdocs\"),\n", "issue": "Only creating wheels for Python 2.7\nSeems I didn't set something up correctly. It looks like this is a limitation of `setup.py bdist_wheel`\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\nPY26 = sys.version_info[:2] == (2, 6)\n\n\nlong_description = (\n \"MkDocs is a fast, simple and downright gorgeous static site generator \"\n \"that's geared towards building project documentation. Documentation \"\n \"source files are written in Markdown, and configured with a single YAML \"\n \"configuration file.\"\n)\n\n\ndef get_version(package):\n \"\"\"Return package version as listed in `__version__` in `init.py`.\"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_packages(package):\n \"\"\"Return root package and all sub-packages.\"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\ndef get_package_data(package):\n \"\"\"\n Return all files under the root package, that are not in a\n package themselves.\n \"\"\"\n walk = [(dirpath.replace(package + os.sep, '', 1), filenames)\n for dirpath, dirnames, filenames in os.walk(package)\n if not os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n filepaths = []\n for base, filenames in walk:\n filepaths.extend([os.path.join(base, filename)\n for filename in filenames])\n return {package: filepaths}\n\nsetup(\n name=\"mkdocs\",\n version=get_version(\"mkdocs\"),\n url='http://www.mkdocs.org',\n license='BSD',\n description='Project documentation with Markdown.',\n long_description=long_description,\n author='Tom Christie',\n author_email='[email protected]', # SEE NOTE BELOW (*)\n packages=get_packages(\"mkdocs\"),\n package_data=get_package_data(\"mkdocs\"),\n install_requires=[\n 'click>=4.0',\n 'Jinja2>=2.7.1',\n 'livereload>=2.3.2',\n 'Markdown>=2.3.1,<2.5' if PY26 else 'Markdown>=2.3.1',\n 'PyYAML>=3.10',\n 'tornado>=4.1',\n ],\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.cli:cli',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 
'Programming Language :: Python :: 3.4',\n \"Programming Language :: Python :: Implementation :: CPython\",\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ],\n zip_safe=False\n)\n\n# (*) Please direct queries to the discussion group:\n# https://groups.google.com/forum/#!forum/mkdocs\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\nPY26 = sys.version_info[:2] == (2, 6)\n\n\nlong_description = (\n \"MkDocs is a fast, simple and downright gorgeous static site generator \"\n \"that's geared towards building project documentation. Documentation \"\n \"source files are written in Markdown, and configured with a single YAML \"\n \"configuration file.\"\n)\n\n\ndef get_version(package):\n \"\"\"Return package version as listed in `__version__` in `init.py`.\"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_packages(package):\n \"\"\"Return root package and all sub-packages.\"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\ndef get_package_data(package):\n \"\"\"\n Return all files under the root package, that are not in a\n package themselves.\n \"\"\"\n walk = [(dirpath.replace(package + os.sep, '', 1), filenames)\n for dirpath, dirnames, filenames in os.walk(package)\n if not os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n filepaths = []\n for base, filenames in walk:\n filepaths.extend([os.path.join(base, filename)\n for filename in filenames])\n return {package: filepaths}\n\n\nif sys.argv[-1] == 'publish':\n if os.system(\"pip freeze | grep wheel\"):\n print(\"wheel not installed.\\nUse `pip install wheel`.\\nExiting.\")\n sys.exit()\n if os.system(\"pip freeze | grep twine\"):\n print(\"twine not installed.\\nUse `pip install twine`.\\nExiting.\")\n sys.exit()\n os.system(\"python setup.py sdist bdist_wheel\")\n os.system(\"twine upload dist/*\")\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a {0} -m 'version {0}'\".format(get_version(\"mkdocs\")))\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=\"mkdocs\",\n version=get_version(\"mkdocs\"),\n url='http://www.mkdocs.org',\n license='BSD',\n description='Project documentation with Markdown.',\n long_description=long_description,\n author='Tom Christie',\n author_email='[email protected]', # SEE NOTE BELOW (*)\n packages=get_packages(\"mkdocs\"),\n package_data=get_package_data(\"mkdocs\"),\n install_requires=[\n 'click>=4.0',\n 'Jinja2>=2.7.1',\n 'livereload>=2.3.2',\n 'Markdown>=2.3.1,<2.5' if PY26 else 'Markdown>=2.3.1',\n 'PyYAML>=3.10',\n 'tornado>=4.1',\n ],\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.cli:cli',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n \"Programming Language :: Python :: 
Implementation :: CPython\",\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ],\n zip_safe=False\n)\n\n# (*) Please direct queries to the discussion group:\n# https://groups.google.com/forum/#!forum/mkdocs\n", "path": "setup.py"}]}
| 1,218 | 234 |
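The patch above adds a `publish` shortcut to `setup.py` that builds both an sdist and a wheel before uploading. A hedged, standalone sketch of the same release-helper idea is below, using `subprocess` instead of `os.system`; the `wheel` and `twine` package names are the usual tools, and the exact commands are illustrative rather than the repository's official workflow:

```python
# Hedged sketch of a release helper: verify the build/upload tools exist,
# then build an sdist plus a wheel and upload everything in dist/.
import glob
import importlib.util
import subprocess
import sys


def publish():
    for package in ("wheel", "twine"):
        if importlib.util.find_spec(package) is None:
            sys.exit(f"{package} not installed. Use `pip install {package}`. Exiting.")
    subprocess.check_call([sys.executable, "setup.py", "sdist", "bdist_wheel"])
    subprocess.check_call([sys.executable, "-m", "twine", "upload", *glob.glob("dist/*")])


if __name__ == "__main__" and sys.argv[-1] == "publish":
    publish()
```

As a side note, projects that support Python 2 and 3 from one codebase conventionally also set `universal = 1` under a `[bdist_wheel]` section in `setup.cfg` so a single wheel covers both interpreters; the patch above takes the explicit `sdist bdist_wheel` route instead.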
gh_patches_debug_37791
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__pytorch-lightning-1378
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Training loop temporarily hangs after every 4 steps
I am porting some of my code to pytorch lightning, and everything seems to work fine. However, for some reason after every 4 training steps I see some temporary hanging (~1 second), which is severely slowing down my overall training time. Am I missing some obvious configuration? This is my Trainer configuration:
```
trainer = pl.Trainer(
gpus=8
num_nodes=1,
distributed_backend='ddp',
checkpoint_callback=False,
max_epochs=50,
max_steps=None,
progress_bar_refresh_rate=1,
check_val_every_n_epoch=1,
val_check_interval=1.0,
gradient_clip_val=0.0,
log_save_interval=0,
num_sanity_val_steps=0,
amp_level='O0',
)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_lightning/trainer/data_loading.py`
Content:
```
1 from abc import ABC, abstractmethod
2 from typing import Union, List, Tuple, Callable
3
4 import torch.distributed as torch_distrib
5 from torch.utils.data import SequentialSampler, DataLoader
6 from torch.utils.data.distributed import DistributedSampler
7
8 from pytorch_lightning.core import LightningModule
9 from pytorch_lightning.utilities.exceptions import MisconfigurationException
10
11 try:
12 from apex import amp
13 except ImportError:
14 APEX_AVAILABLE = False
15 else:
16 APEX_AVAILABLE = True
17
18 try:
19 import torch_xla
20 import torch_xla.core.xla_model as xm
21 import torch_xla.distributed.xla_multiprocessing as xmp
22 except ImportError:
23 XLA_AVAILABLE = False
24 else:
25 XLA_AVAILABLE = True
26
27
28 def _has_len(dataloader: DataLoader) -> bool:
29 """ Checks if a given Dataloader has __len__ method implemented i.e. if
30 it is a finite dataloader or infinite dataloader """
31 try:
32 # try getting the length
33 if len(dataloader) == 0:
34 raise ValueError('Dataloader returned 0 length. Please make sure'
35 ' that your Dataloader atleast returns 1 batch')
36 return True
37 except TypeError:
38 return False
39
40
41 class TrainerDataLoadingMixin(ABC):
42
43 # this is just a summary on variables used in this abstract class,
44 # the proper values/initialisation should be done in child class
45 proc_rank: int
46 use_ddp: bool
47 use_ddp2: bool
48 shown_warnings: ...
49 val_check_interval: float
50 use_tpu: bool
51 tpu_local_core_rank: int
52 train_dataloader: DataLoader
53 num_training_batches: Union[int, float]
54 val_check_batch: ...
55 val_dataloaders: List[DataLoader]
56 num_val_batches: Union[int, float]
57 test_dataloaders: List[DataLoader]
58 num_test_batches: Union[int, float]
59 train_percent_check: float
60 val_percent_check: float
61 test_percent_check: float
62
63 @abstractmethod
64 def is_overriden(self, *args):
65 """Warning: this is just empty shell for code implemented in other class."""
66
67 def _percent_range_check(self, name: str) -> None:
68 value = getattr(self, name)
69 msg = f'`{name}` must lie in the range [0.0, 1.0], but got {value:.3f}.'
70 if name == 'val_check_interval':
71 msg += ' If you want to disable validation set `val_percent_check` to 0.0 instead.'
72
73 if not 0. <= value <= 1.:
74 raise ValueError(msg)
75
76 def auto_add_sampler(self, dataloader: DataLoader, train: bool) -> DataLoader:
77
78 # don't do anything if it's not a dataloader
79 if not isinstance(dataloader, DataLoader):
80 return dataloader
81
82 need_dist_sampler = self.use_ddp or self.use_ddp2 or self.use_tpu
83 no_sampler_added = dataloader.sampler is None
84
85 if need_dist_sampler and no_sampler_added:
86
87 skip_keys = ['sampler', 'batch_sampler', 'dataset_kind']
88
89 dl_args = {
90 k: v for k, v in dataloader.__dict__.items() if not k.startswith('_') and k not in skip_keys
91 }
92
93 if self.use_tpu:
94 sampler = DistributedSampler(
95 dataloader.dataset,
96 num_replicas=xm.xrt_world_size(),
97 rank=xm.get_ordinal()
98 )
99 else:
100 sampler = DistributedSampler(dataloader.dataset)
101
102 dl_args['sampler'] = sampler
103 dataloader = type(dataloader)(**dl_args)
104
105 return dataloader
106
107 def reset_train_dataloader(self, model: LightningModule) -> None:
108 """Resets the train dataloader and initialises required variables
109 (number of batches, when to validate, etc.).
110
111 Args:
112 model: The current `LightningModule`
113 """
114 self.train_dataloader = self.request_dataloader(model.train_dataloader)
115 self.num_training_batches = 0
116
117 # automatically add samplers
118 self.train_dataloader = self.auto_add_sampler(self.train_dataloader, train=True)
119
120 self._percent_range_check('train_percent_check')
121
122 if not _has_len(self.train_dataloader):
123 self.num_training_batches = float('inf')
124 else:
125 # try getting the length
126 self.num_training_batches = len(self.train_dataloader)
127 self.num_training_batches = int(self.num_training_batches * self.train_percent_check)
128
129 # determine when to check validation
130 # if int passed in, val checks that often
131 # otherwise, it checks in [0, 1.0] % range of a training epoch
132 if isinstance(self.val_check_interval, int):
133 self.val_check_batch = self.val_check_interval
134 if self.val_check_batch > self.num_training_batches:
135 raise ValueError(
136 f'`val_check_interval` ({self.val_check_interval}) must be less than or equal '
137 f'to the number of the training batches ({self.num_training_batches}). '
138 'If you want to disable validation set `val_percent_check` to 0.0 instead.')
139 else:
140 if not _has_len(self.train_dataloader):
141 if self.val_check_interval == 1.0:
142 self.val_check_batch = float('inf')
143 else:
144 raise MisconfigurationException(
145 'When using an infinite DataLoader (e.g. with an IterableDataset or when '
146 'DataLoader does not implement `__len__`) for `train_dataloader`, '
147 '`Trainer(val_check_interval)` must be `1.0` or an int. An int k specifies '
148 'checking validation every k training batches.')
149 else:
150 self._percent_range_check('val_check_interval')
151
152 self.val_check_batch = int(self.num_training_batches * self.val_check_interval)
153 self.val_check_batch = max(1, self.val_check_batch)
154
155 def _reset_eval_dataloader(self, model: LightningModule,
156 mode: str) -> Tuple[int, List[DataLoader]]:
157 """Generic method to reset a dataloader for evaluation.
158
159 Args:
160 model: The current `LightningModule`
161 mode: Either `'val'` or `'test'`
162
163 Returns:
164 Tuple (num_batches, dataloaders)
165 """
166 dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader'))
167
168 if not isinstance(dataloaders, list):
169 dataloaders = [dataloaders]
170
171 # add samplers
172 dataloaders = [self.auto_add_sampler(dl, train=False) for dl in dataloaders if dl]
173
174 num_batches = 0
175
176 # determine number of batches
177 # datasets could be none, 1 or 2+
178 if len(dataloaders) != 0:
179 for dataloader in dataloaders:
180 if not _has_len(dataloader):
181 num_batches = float('inf')
182 break
183
184 percent_check = getattr(self, f'{mode}_percent_check')
185
186 if num_batches != float('inf'):
187 self._percent_range_check(f'{mode}_percent_check')
188
189 num_batches = sum(len(dataloader) for dataloader in dataloaders)
190 num_batches = int(num_batches * percent_check)
191 elif percent_check not in (0.0, 1.0):
192 raise MisconfigurationException(
193 'When using an infinite DataLoader (e.g. with an IterableDataset or when '
194 f'DataLoader does not implement `__len__`) for `{mode}_dataloader`, '
195 f'`Trainer({mode}_percent_check)` must be `0.0` or `1.0`.')
196 return num_batches, dataloaders
197
198 def reset_val_dataloader(self, model: LightningModule) -> None:
199 """Resets the validation dataloader and determines the number of batches.
200
201 Args:
202 model: The current `LightningModule`
203 """
204 if self.is_overriden('validation_step'):
205 self.num_val_batches, self.val_dataloaders =\
206 self._reset_eval_dataloader(model, 'val')
207
208 def reset_test_dataloader(self, model) -> None:
209 """Resets the validation dataloader and determines the number of batches.
210
211 Args:
212 model: The current `LightningModule`
213 """
214 if self.is_overriden('test_step'):
215 self.num_test_batches, self.test_dataloaders =\
216 self._reset_eval_dataloader(model, 'test')
217
218 def request_dataloader(self, dataloader_fx: Callable) -> DataLoader:
219 """Handles downloading data in the GPU or TPU case.
220
221 Args:
222 dataloader_fx: The bound dataloader getter
223
224 Returns:
225 The dataloader
226 """
227 dataloader = dataloader_fx()
228
229 # get the function we'll use to get data
230 if self.use_ddp or self.use_ddp2:
231 # all processes wait until data download has happened
232 torch_distrib.barrier()
233
234 # data download/load on TPU
235 elif self.use_tpu and XLA_AVAILABLE:
236 # all processes wait until data download has happened
237 torch_xla.core.xla_model.rendezvous('pl.TrainerDataLoadingMixin.get_dataloaders')
238
239 return dataloader
240
241 def determine_data_use_amount(self, train_percent_check: float, val_percent_check: float,
242 test_percent_check: float, overfit_pct: float) -> None:
243 """Use less data for debugging purposes
244 """
245 self.train_percent_check = train_percent_check
246 self.val_percent_check = val_percent_check
247 self.test_percent_check = test_percent_check
248 if overfit_pct > 0:
249 if overfit_pct > 1:
250 raise ValueError(
251 f'`overfit_pct` must be not greater than 1.0, but got {overfit_pct:.3f}.')
252
253 self.train_percent_check = overfit_pct
254 self.val_percent_check = overfit_pct
255 self.test_percent_check = overfit_pct
256
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pytorch_lightning/trainer/data_loading.py b/pytorch_lightning/trainer/data_loading.py
--- a/pytorch_lightning/trainer/data_loading.py
+++ b/pytorch_lightning/trainer/data_loading.py
@@ -1,8 +1,9 @@
+import warnings
from abc import ABC, abstractmethod
from typing import Union, List, Tuple, Callable
import torch.distributed as torch_distrib
-from torch.utils.data import SequentialSampler, DataLoader
+from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from pytorch_lightning.core import LightningModule
@@ -73,6 +74,12 @@
if not 0. <= value <= 1.:
raise ValueError(msg)
+ def _worker_check(self, dataloader: DataLoader, name: str) -> None:
+ if isinstance(dataloader, DataLoader) and dataloader.num_workers <= 2:
+ warnings.warn(f'The dataloader, {name}, does not have many workers which may be a bottleneck.'
+ ' Consider increasing the value of the `num_workers` argument`'
+ ' in the `DataLoader` init to improve performance.')
+
def auto_add_sampler(self, dataloader: DataLoader, train: bool) -> DataLoader:
# don't do anything if it's not a dataloader
@@ -112,11 +119,13 @@
model: The current `LightningModule`
"""
self.train_dataloader = self.request_dataloader(model.train_dataloader)
+
self.num_training_batches = 0
# automatically add samplers
self.train_dataloader = self.auto_add_sampler(self.train_dataloader, train=True)
+ self._worker_check(self.train_dataloader, 'train dataloader')
self._percent_range_check('train_percent_check')
if not _has_len(self.train_dataloader):
@@ -176,10 +185,10 @@
# determine number of batches
# datasets could be none, 1 or 2+
if len(dataloaders) != 0:
- for dataloader in dataloaders:
+ for i, dataloader in enumerate(dataloaders):
+ self._worker_check(dataloader, f'{mode} dataloader {i}')
if not _has_len(dataloader):
num_batches = float('inf')
- break
percent_check = getattr(self, f'{mode}_percent_check')
|
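The diff above adds a warning whenever a `DataLoader` is configured with very few workers, since starved workers tend to show up as periodic stalls in the training loop. A small, hedged sketch of the usual remedy on the user side follows; the dataset and argument values are examples, not recommendations from the library:

```python
# Illustrative only: give the DataLoader enough workers (and pinned memory)
# so batch loading overlaps with GPU work instead of blocking it.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1024, 8), torch.randint(0, 2, (1024,)))
loader = DataLoader(
    dataset,
    batch_size=32,
    shuffle=True,
    num_workers=4,    # above the <= 2 threshold that triggers the new warning
    pin_memory=True,  # faster host-to-GPU copies when CUDA is used
)
```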
{"golden_diff": "diff --git a/pytorch_lightning/trainer/data_loading.py b/pytorch_lightning/trainer/data_loading.py\n--- a/pytorch_lightning/trainer/data_loading.py\n+++ b/pytorch_lightning/trainer/data_loading.py\n@@ -1,8 +1,9 @@\n+import warnings\n from abc import ABC, abstractmethod\n from typing import Union, List, Tuple, Callable\n \n import torch.distributed as torch_distrib\n-from torch.utils.data import SequentialSampler, DataLoader\n+from torch.utils.data import DataLoader\n from torch.utils.data.distributed import DistributedSampler\n \n from pytorch_lightning.core import LightningModule\n@@ -73,6 +74,12 @@\n if not 0. <= value <= 1.:\n raise ValueError(msg)\n \n+ def _worker_check(self, dataloader: DataLoader, name: str) -> None:\n+ if isinstance(dataloader, DataLoader) and dataloader.num_workers <= 2:\n+ warnings.warn(f'The dataloader, {name}, does not have many workers which may be a bottleneck.'\n+ ' Consider increasing the value of the `num_workers` argument`'\n+ ' in the `DataLoader` init to improve performance.')\n+\n def auto_add_sampler(self, dataloader: DataLoader, train: bool) -> DataLoader:\n \n # don't do anything if it's not a dataloader\n@@ -112,11 +119,13 @@\n model: The current `LightningModule`\n \"\"\"\n self.train_dataloader = self.request_dataloader(model.train_dataloader)\n+\n self.num_training_batches = 0\n \n # automatically add samplers\n self.train_dataloader = self.auto_add_sampler(self.train_dataloader, train=True)\n \n+ self._worker_check(self.train_dataloader, 'train dataloader')\n self._percent_range_check('train_percent_check')\n \n if not _has_len(self.train_dataloader):\n@@ -176,10 +185,10 @@\n # determine number of batches\n # datasets could be none, 1 or 2+\n if len(dataloaders) != 0:\n- for dataloader in dataloaders:\n+ for i, dataloader in enumerate(dataloaders):\n+ self._worker_check(dataloader, f'{mode} dataloader {i}')\n if not _has_len(dataloader):\n num_batches = float('inf')\n- break\n \n percent_check = getattr(self, f'{mode}_percent_check')\n", "issue": "Training loop temporarily hangs after every 4 steps\nI am porting some of my code to pytorch lightning, and everything seems to work fine. However, for some reason after every 4 training steps I see some temporary hanging (~1 second), which is severely slowing down my overall training time. Am I missing some obvious configuration? 
This is my Trainer configuration:\r\n\r\n```\r\n trainer = pl.Trainer(\r\n gpus=8\r\n num_nodes=1,\r\n distributed_backend='ddp',\r\n checkpoint_callback=False,\r\n max_epochs=50,\r\n max_steps=None,\r\n progress_bar_refresh_rate=1,\r\n check_val_every_n_epoch=1,\r\n val_check_interval=1.0,\r\n gradient_clip_val=0.0,\r\n log_save_interval=0,\r\n num_sanity_val_steps=0,\r\n amp_level='O0',\r\n )\r\n```\r\n\r\n\n", "before_files": [{"content": "from abc import ABC, abstractmethod\nfrom typing import Union, List, Tuple, Callable\n\nimport torch.distributed as torch_distrib\nfrom torch.utils.data import SequentialSampler, DataLoader\nfrom torch.utils.data.distributed import DistributedSampler\n\nfrom pytorch_lightning.core import LightningModule\nfrom pytorch_lightning.utilities.exceptions import MisconfigurationException\n\ntry:\n from apex import amp\nexcept ImportError:\n APEX_AVAILABLE = False\nelse:\n APEX_AVAILABLE = True\n\ntry:\n import torch_xla\n import torch_xla.core.xla_model as xm\n import torch_xla.distributed.xla_multiprocessing as xmp\nexcept ImportError:\n XLA_AVAILABLE = False\nelse:\n XLA_AVAILABLE = True\n\n\ndef _has_len(dataloader: DataLoader) -> bool:\n \"\"\" Checks if a given Dataloader has __len__ method implemented i.e. if\n it is a finite dataloader or infinite dataloader \"\"\"\n try:\n # try getting the length\n if len(dataloader) == 0:\n raise ValueError('Dataloader returned 0 length. Please make sure'\n ' that your Dataloader atleast returns 1 batch')\n return True\n except TypeError:\n return False\n\n\nclass TrainerDataLoadingMixin(ABC):\n\n # this is just a summary on variables used in this abstract class,\n # the proper values/initialisation should be done in child class\n proc_rank: int\n use_ddp: bool\n use_ddp2: bool\n shown_warnings: ...\n val_check_interval: float\n use_tpu: bool\n tpu_local_core_rank: int\n train_dataloader: DataLoader\n num_training_batches: Union[int, float]\n val_check_batch: ...\n val_dataloaders: List[DataLoader]\n num_val_batches: Union[int, float]\n test_dataloaders: List[DataLoader]\n num_test_batches: Union[int, float]\n train_percent_check: float\n val_percent_check: float\n test_percent_check: float\n\n @abstractmethod\n def is_overriden(self, *args):\n \"\"\"Warning: this is just empty shell for code implemented in other class.\"\"\"\n\n def _percent_range_check(self, name: str) -> None:\n value = getattr(self, name)\n msg = f'`{name}` must lie in the range [0.0, 1.0], but got {value:.3f}.'\n if name == 'val_check_interval':\n msg += ' If you want to disable validation set `val_percent_check` to 0.0 instead.'\n\n if not 0. 
<= value <= 1.:\n raise ValueError(msg)\n\n def auto_add_sampler(self, dataloader: DataLoader, train: bool) -> DataLoader:\n\n # don't do anything if it's not a dataloader\n if not isinstance(dataloader, DataLoader):\n return dataloader\n\n need_dist_sampler = self.use_ddp or self.use_ddp2 or self.use_tpu\n no_sampler_added = dataloader.sampler is None\n\n if need_dist_sampler and no_sampler_added:\n\n skip_keys = ['sampler', 'batch_sampler', 'dataset_kind']\n\n dl_args = {\n k: v for k, v in dataloader.__dict__.items() if not k.startswith('_') and k not in skip_keys\n }\n\n if self.use_tpu:\n sampler = DistributedSampler(\n dataloader.dataset,\n num_replicas=xm.xrt_world_size(),\n rank=xm.get_ordinal()\n )\n else:\n sampler = DistributedSampler(dataloader.dataset)\n\n dl_args['sampler'] = sampler\n dataloader = type(dataloader)(**dl_args)\n\n return dataloader\n\n def reset_train_dataloader(self, model: LightningModule) -> None:\n \"\"\"Resets the train dataloader and initialises required variables\n (number of batches, when to validate, etc.).\n\n Args:\n model: The current `LightningModule`\n \"\"\"\n self.train_dataloader = self.request_dataloader(model.train_dataloader)\n self.num_training_batches = 0\n\n # automatically add samplers\n self.train_dataloader = self.auto_add_sampler(self.train_dataloader, train=True)\n\n self._percent_range_check('train_percent_check')\n\n if not _has_len(self.train_dataloader):\n self.num_training_batches = float('inf')\n else:\n # try getting the length\n self.num_training_batches = len(self.train_dataloader)\n self.num_training_batches = int(self.num_training_batches * self.train_percent_check)\n\n # determine when to check validation\n # if int passed in, val checks that often\n # otherwise, it checks in [0, 1.0] % range of a training epoch\n if isinstance(self.val_check_interval, int):\n self.val_check_batch = self.val_check_interval\n if self.val_check_batch > self.num_training_batches:\n raise ValueError(\n f'`val_check_interval` ({self.val_check_interval}) must be less than or equal '\n f'to the number of the training batches ({self.num_training_batches}). '\n 'If you want to disable validation set `val_percent_check` to 0.0 instead.')\n else:\n if not _has_len(self.train_dataloader):\n if self.val_check_interval == 1.0:\n self.val_check_batch = float('inf')\n else:\n raise MisconfigurationException(\n 'When using an infinite DataLoader (e.g. with an IterableDataset or when '\n 'DataLoader does not implement `__len__`) for `train_dataloader`, '\n '`Trainer(val_check_interval)` must be `1.0` or an int. 
An int k specifies '\n 'checking validation every k training batches.')\n else:\n self._percent_range_check('val_check_interval')\n\n self.val_check_batch = int(self.num_training_batches * self.val_check_interval)\n self.val_check_batch = max(1, self.val_check_batch)\n\n def _reset_eval_dataloader(self, model: LightningModule,\n mode: str) -> Tuple[int, List[DataLoader]]:\n \"\"\"Generic method to reset a dataloader for evaluation.\n\n Args:\n model: The current `LightningModule`\n mode: Either `'val'` or `'test'`\n\n Returns:\n Tuple (num_batches, dataloaders)\n \"\"\"\n dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader'))\n\n if not isinstance(dataloaders, list):\n dataloaders = [dataloaders]\n\n # add samplers\n dataloaders = [self.auto_add_sampler(dl, train=False) for dl in dataloaders if dl]\n\n num_batches = 0\n\n # determine number of batches\n # datasets could be none, 1 or 2+\n if len(dataloaders) != 0:\n for dataloader in dataloaders:\n if not _has_len(dataloader):\n num_batches = float('inf')\n break\n\n percent_check = getattr(self, f'{mode}_percent_check')\n\n if num_batches != float('inf'):\n self._percent_range_check(f'{mode}_percent_check')\n\n num_batches = sum(len(dataloader) for dataloader in dataloaders)\n num_batches = int(num_batches * percent_check)\n elif percent_check not in (0.0, 1.0):\n raise MisconfigurationException(\n 'When using an infinite DataLoader (e.g. with an IterableDataset or when '\n f'DataLoader does not implement `__len__`) for `{mode}_dataloader`, '\n f'`Trainer({mode}_percent_check)` must be `0.0` or `1.0`.')\n return num_batches, dataloaders\n\n def reset_val_dataloader(self, model: LightningModule) -> None:\n \"\"\"Resets the validation dataloader and determines the number of batches.\n\n Args:\n model: The current `LightningModule`\n \"\"\"\n if self.is_overriden('validation_step'):\n self.num_val_batches, self.val_dataloaders =\\\n self._reset_eval_dataloader(model, 'val')\n\n def reset_test_dataloader(self, model) -> None:\n \"\"\"Resets the validation dataloader and determines the number of batches.\n\n Args:\n model: The current `LightningModule`\n \"\"\"\n if self.is_overriden('test_step'):\n self.num_test_batches, self.test_dataloaders =\\\n self._reset_eval_dataloader(model, 'test')\n\n def request_dataloader(self, dataloader_fx: Callable) -> DataLoader:\n \"\"\"Handles downloading data in the GPU or TPU case.\n\n Args:\n dataloader_fx: The bound dataloader getter\n\n Returns:\n The dataloader\n \"\"\"\n dataloader = dataloader_fx()\n\n # get the function we'll use to get data\n if self.use_ddp or self.use_ddp2:\n # all processes wait until data download has happened\n torch_distrib.barrier()\n\n # data download/load on TPU\n elif self.use_tpu and XLA_AVAILABLE:\n # all processes wait until data download has happened\n torch_xla.core.xla_model.rendezvous('pl.TrainerDataLoadingMixin.get_dataloaders')\n\n return dataloader\n\n def determine_data_use_amount(self, train_percent_check: float, val_percent_check: float,\n test_percent_check: float, overfit_pct: float) -> None:\n \"\"\"Use less data for debugging purposes\n \"\"\"\n self.train_percent_check = train_percent_check\n self.val_percent_check = val_percent_check\n self.test_percent_check = test_percent_check\n if overfit_pct > 0:\n if overfit_pct > 1:\n raise ValueError(\n f'`overfit_pct` must be not greater than 1.0, but got {overfit_pct:.3f}.')\n\n self.train_percent_check = overfit_pct\n self.val_percent_check = overfit_pct\n self.test_percent_check = 
overfit_pct\n", "path": "pytorch_lightning/trainer/data_loading.py"}], "after_files": [{"content": "import warnings\nfrom abc import ABC, abstractmethod\nfrom typing import Union, List, Tuple, Callable\n\nimport torch.distributed as torch_distrib\nfrom torch.utils.data import DataLoader\nfrom torch.utils.data.distributed import DistributedSampler\n\nfrom pytorch_lightning.core import LightningModule\nfrom pytorch_lightning.utilities.exceptions import MisconfigurationException\n\ntry:\n from apex import amp\nexcept ImportError:\n APEX_AVAILABLE = False\nelse:\n APEX_AVAILABLE = True\n\ntry:\n import torch_xla\n import torch_xla.core.xla_model as xm\n import torch_xla.distributed.xla_multiprocessing as xmp\nexcept ImportError:\n XLA_AVAILABLE = False\nelse:\n XLA_AVAILABLE = True\n\n\ndef _has_len(dataloader: DataLoader) -> bool:\n \"\"\" Checks if a given Dataloader has __len__ method implemented i.e. if\n it is a finite dataloader or infinite dataloader \"\"\"\n try:\n # try getting the length\n if len(dataloader) == 0:\n raise ValueError('Dataloader returned 0 length. Please make sure'\n ' that your Dataloader atleast returns 1 batch')\n return True\n except TypeError:\n return False\n\n\nclass TrainerDataLoadingMixin(ABC):\n\n # this is just a summary on variables used in this abstract class,\n # the proper values/initialisation should be done in child class\n proc_rank: int\n use_ddp: bool\n use_ddp2: bool\n shown_warnings: ...\n val_check_interval: float\n use_tpu: bool\n tpu_local_core_rank: int\n train_dataloader: DataLoader\n num_training_batches: Union[int, float]\n val_check_batch: ...\n val_dataloaders: List[DataLoader]\n num_val_batches: Union[int, float]\n test_dataloaders: List[DataLoader]\n num_test_batches: Union[int, float]\n train_percent_check: float\n val_percent_check: float\n test_percent_check: float\n\n @abstractmethod\n def is_overriden(self, *args):\n \"\"\"Warning: this is just empty shell for code implemented in other class.\"\"\"\n\n def _percent_range_check(self, name: str) -> None:\n value = getattr(self, name)\n msg = f'`{name}` must lie in the range [0.0, 1.0], but got {value:.3f}.'\n if name == 'val_check_interval':\n msg += ' If you want to disable validation set `val_percent_check` to 0.0 instead.'\n\n if not 0. 
<= value <= 1.:\n raise ValueError(msg)\n\n def _worker_check(self, dataloader: DataLoader, name: str) -> None:\n if isinstance(dataloader, DataLoader) and dataloader.num_workers <= 2:\n warnings.warn(f'The dataloader, {name}, does not have many workers which may be a bottleneck.'\n ' Consider increasing the value of the `num_workers` argument`'\n ' in the `DataLoader` init to improve performance.')\n\n def auto_add_sampler(self, dataloader: DataLoader, train: bool) -> DataLoader:\n\n # don't do anything if it's not a dataloader\n if not isinstance(dataloader, DataLoader):\n return dataloader\n\n need_dist_sampler = self.use_ddp or self.use_ddp2 or self.use_tpu\n no_sampler_added = dataloader.sampler is None\n\n if need_dist_sampler and no_sampler_added:\n\n skip_keys = ['sampler', 'batch_sampler', 'dataset_kind']\n\n dl_args = {\n k: v for k, v in dataloader.__dict__.items() if not k.startswith('_') and k not in skip_keys\n }\n\n if self.use_tpu:\n sampler = DistributedSampler(\n dataloader.dataset,\n num_replicas=xm.xrt_world_size(),\n rank=xm.get_ordinal()\n )\n else:\n sampler = DistributedSampler(dataloader.dataset)\n\n dl_args['sampler'] = sampler\n dataloader = type(dataloader)(**dl_args)\n\n return dataloader\n\n def reset_train_dataloader(self, model: LightningModule) -> None:\n \"\"\"Resets the train dataloader and initialises required variables\n (number of batches, when to validate, etc.).\n\n Args:\n model: The current `LightningModule`\n \"\"\"\n self.train_dataloader = self.request_dataloader(model.train_dataloader)\n\n self.num_training_batches = 0\n\n # automatically add samplers\n self.train_dataloader = self.auto_add_sampler(self.train_dataloader, train=True)\n\n self._worker_check(self.train_dataloader, 'train dataloader')\n self._percent_range_check('train_percent_check')\n\n if not _has_len(self.train_dataloader):\n self.num_training_batches = float('inf')\n else:\n # try getting the length\n self.num_training_batches = len(self.train_dataloader)\n self.num_training_batches = int(self.num_training_batches * self.train_percent_check)\n\n # determine when to check validation\n # if int passed in, val checks that often\n # otherwise, it checks in [0, 1.0] % range of a training epoch\n if isinstance(self.val_check_interval, int):\n self.val_check_batch = self.val_check_interval\n if self.val_check_batch > self.num_training_batches:\n raise ValueError(\n f'`val_check_interval` ({self.val_check_interval}) must be less than or equal '\n f'to the number of the training batches ({self.num_training_batches}). '\n 'If you want to disable validation set `val_percent_check` to 0.0 instead.')\n else:\n if not _has_len(self.train_dataloader):\n if self.val_check_interval == 1.0:\n self.val_check_batch = float('inf')\n else:\n raise MisconfigurationException(\n 'When using an infinite DataLoader (e.g. with an IterableDataset or when '\n 'DataLoader does not implement `__len__`) for `train_dataloader`, '\n '`Trainer(val_check_interval)` must be `1.0` or an int. 
An int k specifies '\n 'checking validation every k training batches.')\n else:\n self._percent_range_check('val_check_interval')\n\n self.val_check_batch = int(self.num_training_batches * self.val_check_interval)\n self.val_check_batch = max(1, self.val_check_batch)\n\n def _reset_eval_dataloader(self, model: LightningModule,\n mode: str) -> Tuple[int, List[DataLoader]]:\n \"\"\"Generic method to reset a dataloader for evaluation.\n\n Args:\n model: The current `LightningModule`\n mode: Either `'val'` or `'test'`\n\n Returns:\n Tuple (num_batches, dataloaders)\n \"\"\"\n dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader'))\n\n if not isinstance(dataloaders, list):\n dataloaders = [dataloaders]\n\n # add samplers\n dataloaders = [self.auto_add_sampler(dl, train=False) for dl in dataloaders if dl]\n\n num_batches = 0\n\n # determine number of batches\n # datasets could be none, 1 or 2+\n if len(dataloaders) != 0:\n for i, dataloader in enumerate(dataloaders):\n self._worker_check(dataloader, f'{mode} dataloader {i}')\n if not _has_len(dataloader):\n num_batches = float('inf')\n\n percent_check = getattr(self, f'{mode}_percent_check')\n\n if num_batches != float('inf'):\n self._percent_range_check(f'{mode}_percent_check')\n\n num_batches = sum(len(dataloader) for dataloader in dataloaders)\n num_batches = int(num_batches * percent_check)\n elif percent_check not in (0.0, 1.0):\n raise MisconfigurationException(\n 'When using an infinite DataLoader (e.g. with an IterableDataset or when '\n f'DataLoader does not implement `__len__`) for `{mode}_dataloader`, '\n f'`Trainer({mode}_percent_check)` must be `0.0` or `1.0`.')\n return num_batches, dataloaders\n\n def reset_val_dataloader(self, model: LightningModule) -> None:\n \"\"\"Resets the validation dataloader and determines the number of batches.\n\n Args:\n model: The current `LightningModule`\n \"\"\"\n if self.is_overriden('validation_step'):\n self.num_val_batches, self.val_dataloaders =\\\n self._reset_eval_dataloader(model, 'val')\n\n def reset_test_dataloader(self, model) -> None:\n \"\"\"Resets the validation dataloader and determines the number of batches.\n\n Args:\n model: The current `LightningModule`\n \"\"\"\n if self.is_overriden('test_step'):\n self.num_test_batches, self.test_dataloaders =\\\n self._reset_eval_dataloader(model, 'test')\n\n def request_dataloader(self, dataloader_fx: Callable) -> DataLoader:\n \"\"\"Handles downloading data in the GPU or TPU case.\n\n Args:\n dataloader_fx: The bound dataloader getter\n\n Returns:\n The dataloader\n \"\"\"\n dataloader = dataloader_fx()\n\n # get the function we'll use to get data\n if self.use_ddp or self.use_ddp2:\n # all processes wait until data download has happened\n torch_distrib.barrier()\n\n # data download/load on TPU\n elif self.use_tpu and XLA_AVAILABLE:\n # all processes wait until data download has happened\n torch_xla.core.xla_model.rendezvous('pl.TrainerDataLoadingMixin.get_dataloaders')\n\n return dataloader\n\n def determine_data_use_amount(self, train_percent_check: float, val_percent_check: float,\n test_percent_check: float, overfit_pct: float) -> None:\n \"\"\"Use less data for debugging purposes\n \"\"\"\n self.train_percent_check = train_percent_check\n self.val_percent_check = val_percent_check\n self.test_percent_check = test_percent_check\n if overfit_pct > 0:\n if overfit_pct > 1:\n raise ValueError(\n f'`overfit_pct` must be not greater than 1.0, but got {overfit_pct:.3f}.')\n\n self.train_percent_check = overfit_pct\n 
self.val_percent_check = overfit_pct\n self.test_percent_check = overfit_pct\n", "path": "pytorch_lightning/trainer/data_loading.py"}]}
| 3,310 | 539 |
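Note: the record closing above patches PyTorch Lightning's `trainer/data_loading.py`; the two helpers it revolves around are `_has_len`, which tells finite dataloaders apart from infinite/iterable ones, and the newly added `_worker_check` warning for dataloaders configured with few workers. Below is a minimal standalone sketch of that same pattern using plain `torch.utils.data` objects — the `Stream` dataset is an illustrative stand-in, while the `<= 2` worker threshold mirrors the record's `_worker_check`; it is not the Lightning code itself.

```python
import warnings

import torch
from torch.utils.data import DataLoader, IterableDataset, TensorDataset


def has_len(dataloader: DataLoader) -> bool:
    """True for finite dataloaders; IterableDataset-backed ones raise TypeError on len()."""
    try:
        len(dataloader)
        return True
    except TypeError:
        return False


def worker_check(dataloader: DataLoader, name: str) -> None:
    """Warn when a dataloader uses few workers, mirroring the record's _worker_check."""
    if isinstance(dataloader, DataLoader) and dataloader.num_workers <= 2:
        warnings.warn(f"The dataloader, {name}, does not have many workers; "
                      "consider increasing num_workers.")


class Stream(IterableDataset):
    """Infinite dataset with no __len__, so len(DataLoader(Stream())) raises TypeError."""

    def __iter__(self):
        while True:
            yield torch.zeros(1)


finite = DataLoader(TensorDataset(torch.arange(10.0)), num_workers=0)
infinite = DataLoader(Stream())
print(has_len(finite), has_len(infinite))  # True False
worker_check(finite, "train dataloader")   # emits the warning
```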
gh_patches_debug_26209
|
rasdani/github-patches
|
git_diff
|
Cog-Creators__Red-DiscordBot-3166
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[p]announce fails if bot belongs to team
# Command bugs
#### Command name
`announce`
#### What cog is this command from?
`Admin`
#### What were you expecting to happen?
Send the announcement to all enabled servers; if it fails, send a message to one of the owners or to all owners (like `[p]contact`)
#### What actually happened?
announcement failed almost immediately with error in console
#### How can we reproduce this issue?
1. Set bot with token belonging to team
2. Create an environment where the bot can't send the announcement to a server
3. Announce a message
4. `[p]announce` silently fails with error:
```py
Traceback (most recent call last):
File "/home/fixator/Red-V3/lib/python3.7/site-packages/redbot/cogs/admin/announcer.py", line 67, in announcer
await channel.send(self.message)
File "/home/fixator/Red-V3/lib/python3.7/site-packages/discord/abc.py", line 823, in send
data = await state.http.send_message(channel.id, content, tts=tts, embed=embed, nonce=nonce)
File "/home/fixator/Red-V3/lib/python3.7/site-packages/discord/http.py", line 218, in request
raise Forbidden(r, data)
discord.errors.Forbidden: 403 FORBIDDEN (error code: 50001): Missing Access
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/fixator/Red-V3/lib/python3.7/site-packages/redbot/cogs/admin/announcer.py", line 70, in announcer
_("I could not announce to server: {server.id}").format(server=g)
File "/home/fixator/Red-V3/lib/python3.7/site-packages/discord/abc.py", line 823, in send
data = await state.http.send_message(channel.id, content, tts=tts, embed=embed, nonce=nonce)
File "/home/fixator/Red-V3/lib/python3.7/site-packages/discord/http.py", line 218, in request
raise Forbidden(r, data)
discord.errors.Forbidden: 403 FORBIDDEN (error code: 50007): Cannot send messages to this user
```
Caused by https://github.com/Cog-Creators/Red-DiscordBot/blob/f0836d7182d99239d1fde24cf2231c6ebf206f72/redbot/cogs/admin/announcer.py#L56
*Kinda related to #2781, i guess*
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redbot/cogs/admin/announcer.py`
Content:
```
1 import asyncio
2
3 import discord
4 from redbot.core import commands
5 from redbot.core.i18n import Translator
6
7 _ = Translator("Announcer", __file__)
8
9
10 class Announcer:
11 def __init__(self, ctx: commands.Context, message: str, config=None):
12 """
13 :param ctx:
14 :param message:
15 :param config: Used to determine channel overrides
16 """
17 self.ctx = ctx
18 self.message = message
19 self.config = config
20
21 self.active = None
22
23 def start(self):
24 """
25 Starts an announcement.
26 :return:
27 """
28 if self.active is None:
29 self.active = True
30 self.ctx.bot.loop.create_task(self.announcer())
31
32 def cancel(self):
33 """
34 Cancels a running announcement.
35 :return:
36 """
37 self.active = False
38
39 async def _get_announce_channel(self, guild: discord.Guild) -> discord.TextChannel:
40 channel_id = await self.config.guild(guild).announce_channel()
41 channel = None
42
43 if channel_id is not None:
44 channel = guild.get_channel(channel_id)
45
46 if channel is None:
47 channel = guild.system_channel
48
49 if channel is None:
50 channel = guild.text_channels[0]
51
52 return channel
53
54 async def announcer(self):
55 guild_list = self.ctx.bot.guilds
56 bot_owner = (await self.ctx.bot.application_info()).owner
57 for g in guild_list:
58 if not self.active:
59 return
60
61 if await self.config.guild(g).announce_ignore():
62 continue
63
64 channel = await self._get_announce_channel(g)
65
66 try:
67 await channel.send(self.message)
68 except discord.Forbidden:
69 await bot_owner.send(
70 _("I could not announce to server: {server.id}").format(server=g)
71 )
72 await asyncio.sleep(0.5)
73
74 self.active = False
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/redbot/cogs/admin/announcer.py b/redbot/cogs/admin/announcer.py
--- a/redbot/cogs/admin/announcer.py
+++ b/redbot/cogs/admin/announcer.py
@@ -3,6 +3,7 @@
import discord
from redbot.core import commands
from redbot.core.i18n import Translator
+from redbot.core.utils.chat_formatting import humanize_list, inline
_ = Translator("Announcer", __file__)
@@ -53,7 +54,7 @@
async def announcer(self):
guild_list = self.ctx.bot.guilds
- bot_owner = (await self.ctx.bot.application_info()).owner
+ failed = []
for g in guild_list:
if not self.active:
return
@@ -66,9 +67,14 @@
try:
await channel.send(self.message)
except discord.Forbidden:
- await bot_owner.send(
- _("I could not announce to server: {server.id}").format(server=g)
- )
+ failed.append(str(g.id))
await asyncio.sleep(0.5)
+ msg = (
+ _("I could not announce to the following server: ")
+ if len(failed) == 1
+ else _("I could not announce to the following servers: ")
+ )
+ msg += humanize_list(tuple(map(inline, failed)))
+ await self.ctx.bot.send_to_owners(msg)
self.active = False
|
{"golden_diff": "diff --git a/redbot/cogs/admin/announcer.py b/redbot/cogs/admin/announcer.py\n--- a/redbot/cogs/admin/announcer.py\n+++ b/redbot/cogs/admin/announcer.py\n@@ -3,6 +3,7 @@\n import discord\n from redbot.core import commands\n from redbot.core.i18n import Translator\n+from redbot.core.utils.chat_formatting import humanize_list, inline\n \n _ = Translator(\"Announcer\", __file__)\n \n@@ -53,7 +54,7 @@\n \n async def announcer(self):\n guild_list = self.ctx.bot.guilds\n- bot_owner = (await self.ctx.bot.application_info()).owner\n+ failed = []\n for g in guild_list:\n if not self.active:\n return\n@@ -66,9 +67,14 @@\n try:\n await channel.send(self.message)\n except discord.Forbidden:\n- await bot_owner.send(\n- _(\"I could not announce to server: {server.id}\").format(server=g)\n- )\n+ failed.append(str(g.id))\n await asyncio.sleep(0.5)\n \n+ msg = (\n+ _(\"I could not announce to the following server: \")\n+ if len(failed) == 1\n+ else _(\"I could not announce to the following servers: \")\n+ )\n+ msg += humanize_list(tuple(map(inline, failed)))\n+ await self.ctx.bot.send_to_owners(msg)\n self.active = False\n", "issue": "[p]announce fails if bot belongs to team\n# Command bugs\r\n\r\n#### Command name\r\n\r\n`announce`\r\n\r\n#### What cog is this command from?\r\n\r\n`Admin`\r\n\r\n#### What were you expecting to happen?\r\n\r\nSend announcement to all enabled servers, if failed, send message to the one of owners or all owners (like an `[p]contact`)\r\n\r\n#### What actually happened?\r\n\r\nannouncement failed almost immediately with error in console \r\n\r\n#### How can we reproduce this issue?\r\n\r\n1. Set bot with token belonging to team\r\n2. Create environment, where bot cant send announcement to server\r\n3. Announce an message\r\n4. 
`[p]announce` silently fails with error:\r\n```py\r\nTraceback (most recent call last):\r\n File \"/home/fixator/Red-V3/lib/python3.7/site-packages/redbot/cogs/admin/announcer.py\", line 67, in announcer\r\n await channel.send(self.message)\r\n File \"/home/fixator/Red-V3/lib/python3.7/site-packages/discord/abc.py\", line 823, in send\r\n data = await state.http.send_message(channel.id, content, tts=tts, embed=embed, nonce=nonce)\r\n File \"/home/fixator/Red-V3/lib/python3.7/site-packages/discord/http.py\", line 218, in request\r\n raise Forbidden(r, data)\r\ndiscord.errors.Forbidden: 403 FORBIDDEN (error code: 50001): Missing Access\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"/home/fixator/Red-V3/lib/python3.7/site-packages/redbot/cogs/admin/announcer.py\", line 70, in announcer\r\n _(\"I could not announce to server: {server.id}\").format(server=g)\r\n File \"/home/fixator/Red-V3/lib/python3.7/site-packages/discord/abc.py\", line 823, in send\r\n data = await state.http.send_message(channel.id, content, tts=tts, embed=embed, nonce=nonce)\r\n File \"/home/fixator/Red-V3/lib/python3.7/site-packages/discord/http.py\", line 218, in request\r\n raise Forbidden(r, data)\r\ndiscord.errors.Forbidden: 403 FORBIDDEN (error code: 50007): Cannot send messages to this user\r\n```\r\n\r\nCaused by https://github.com/Cog-Creators/Red-DiscordBot/blob/f0836d7182d99239d1fde24cf2231c6ebf206f72/redbot/cogs/admin/announcer.py#L56\r\n\r\n*Kinda related to #2781, i guess*\n", "before_files": [{"content": "import asyncio\n\nimport discord\nfrom redbot.core import commands\nfrom redbot.core.i18n import Translator\n\n_ = Translator(\"Announcer\", __file__)\n\n\nclass Announcer:\n def __init__(self, ctx: commands.Context, message: str, config=None):\n \"\"\"\n :param ctx:\n :param message:\n :param config: Used to determine channel overrides\n \"\"\"\n self.ctx = ctx\n self.message = message\n self.config = config\n\n self.active = None\n\n def start(self):\n \"\"\"\n Starts an announcement.\n :return:\n \"\"\"\n if self.active is None:\n self.active = True\n self.ctx.bot.loop.create_task(self.announcer())\n\n def cancel(self):\n \"\"\"\n Cancels a running announcement.\n :return:\n \"\"\"\n self.active = False\n\n async def _get_announce_channel(self, guild: discord.Guild) -> discord.TextChannel:\n channel_id = await self.config.guild(guild).announce_channel()\n channel = None\n\n if channel_id is not None:\n channel = guild.get_channel(channel_id)\n\n if channel is None:\n channel = guild.system_channel\n\n if channel is None:\n channel = guild.text_channels[0]\n\n return channel\n\n async def announcer(self):\n guild_list = self.ctx.bot.guilds\n bot_owner = (await self.ctx.bot.application_info()).owner\n for g in guild_list:\n if not self.active:\n return\n\n if await self.config.guild(g).announce_ignore():\n continue\n\n channel = await self._get_announce_channel(g)\n\n try:\n await channel.send(self.message)\n except discord.Forbidden:\n await bot_owner.send(\n _(\"I could not announce to server: {server.id}\").format(server=g)\n )\n await asyncio.sleep(0.5)\n\n self.active = False\n", "path": "redbot/cogs/admin/announcer.py"}], "after_files": [{"content": "import asyncio\n\nimport discord\nfrom redbot.core import commands\nfrom redbot.core.i18n import Translator\nfrom redbot.core.utils.chat_formatting import humanize_list, inline\n\n_ = Translator(\"Announcer\", __file__)\n\n\nclass Announcer:\n def __init__(self, ctx: 
commands.Context, message: str, config=None):\n \"\"\"\n :param ctx:\n :param message:\n :param config: Used to determine channel overrides\n \"\"\"\n self.ctx = ctx\n self.message = message\n self.config = config\n\n self.active = None\n\n def start(self):\n \"\"\"\n Starts an announcement.\n :return:\n \"\"\"\n if self.active is None:\n self.active = True\n self.ctx.bot.loop.create_task(self.announcer())\n\n def cancel(self):\n \"\"\"\n Cancels a running announcement.\n :return:\n \"\"\"\n self.active = False\n\n async def _get_announce_channel(self, guild: discord.Guild) -> discord.TextChannel:\n channel_id = await self.config.guild(guild).announce_channel()\n channel = None\n\n if channel_id is not None:\n channel = guild.get_channel(channel_id)\n\n if channel is None:\n channel = guild.system_channel\n\n if channel is None:\n channel = guild.text_channels[0]\n\n return channel\n\n async def announcer(self):\n guild_list = self.ctx.bot.guilds\n failed = []\n for g in guild_list:\n if not self.active:\n return\n\n if await self.config.guild(g).announce_ignore():\n continue\n\n channel = await self._get_announce_channel(g)\n\n try:\n await channel.send(self.message)\n except discord.Forbidden:\n failed.append(str(g.id))\n await asyncio.sleep(0.5)\n\n msg = (\n _(\"I could not announce to the following server: \")\n if len(failed) == 1\n else _(\"I could not announce to the following servers: \")\n )\n msg += humanize_list(tuple(map(inline, failed)))\n await self.ctx.bot.send_to_owners(msg)\n self.active = False\n", "path": "redbot/cogs/admin/announcer.py"}]}
| 1,420 | 329 |
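Note: the golden diff in the record above stops DMing `(await bot.application_info()).owner` — which is not a messageable user once the application belongs to a team — and instead collects the IDs of guilds the announcement could not reach, reporting them once through `bot.send_to_owners` (formatted with Red's `humanize_list`/`inline`). The sketch below compresses that pattern; the standalone `announce_to_guilds` function and the plain `", ".join` are illustrative simplifications, not Red's actual cog code.

```python
import asyncio

import discord


async def announce_to_guilds(bot, guilds, message: str) -> None:
    failed = []
    for guild in guilds:
        channel = guild.system_channel or guild.text_channels[0]
        try:
            await channel.send(message)
        except discord.Forbidden:
            # remember the guild instead of DMing a single "owner" right away
            failed.append(str(guild.id))
        await asyncio.sleep(0.5)
    if failed:
        # send_to_owners is the diff's replacement for the single-owner DM
        await bot.send_to_owners(
            "I could not announce to the following servers: " + ", ".join(failed)
        )
```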
gh_patches_debug_41
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-3038
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dark theme does not properly adjust markdown tables
### Summary
When I load the latest streamlit in darkmode I cannot see anything in my markdown tables because the text color is changed but not the background color.
### Steps to reproduce
Code snippet:
```
md = """
| Label | Info |
| -------- | --------- |
| Row | Data |
"""
st.markdown(md)
```
**Expected behavior:**
I would expect if the text color get changed to white in the table, the background color should get changed to something dark
**Actual behavior:**
Both the text color and background are white so nothing can be seen.
### Is this a regression?
no, consequence of new theme
### Debug info
- Streamlit version: 0.79.0
- Python version: 3.7.9
- pip
- OS version: MacOS Catalina 10.15.7
- Browser version: Chrome 89.0.4389.90
### Additional information
I'm not sure why markdown tables have different background style but they seem to; perhaps other ui elements would be affected as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `e2e/scripts/st_markdown.py`
Content:
```
1 # Copyright 2018-2021 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16
17 st.markdown("This **markdown** is awesome! :sunglasses:")
18
19 st.markdown("This <b>HTML tag</b> is escaped!")
20
21 st.markdown("This <b>HTML tag</b> is not escaped!", unsafe_allow_html=True)
22
23 st.markdown("[text]")
24
25 st.markdown("[link](href)")
26
27 st.markdown("[][]")
28
29 st.markdown("Inline math with $\KaTeX$")
30
31 st.markdown(
32 """
33 $$
34 ax^2 + bx + c = 0
35 $$
36 """
37 )
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/e2e/scripts/st_markdown.py b/e2e/scripts/st_markdown.py
--- a/e2e/scripts/st_markdown.py
+++ b/e2e/scripts/st_markdown.py
@@ -35,3 +35,11 @@
$$
"""
)
+
+st.markdown(
+ """
+| Col1 | Col2 |
+| --------- | ----------- |
+| Some | Data |
+"""
+)
|
{"golden_diff": "diff --git a/e2e/scripts/st_markdown.py b/e2e/scripts/st_markdown.py\n--- a/e2e/scripts/st_markdown.py\n+++ b/e2e/scripts/st_markdown.py\n@@ -35,3 +35,11 @@\n $$\n \"\"\"\n )\n+\n+st.markdown(\n+ \"\"\"\n+| Col1 | Col2 |\n+| --------- | ----------- |\n+| Some | Data |\n+\"\"\"\n+)\n", "issue": "Dark theme does not properly adjust markdown tables\n### Summary\r\n\r\nWhen I load the latest streamlit in darkmode I cannot see anything in my markdown tables because the text color is changed but not the background color.\r\n\r\n### Steps to reproduce\r\n\r\nCode snippet:\r\n\r\n```\r\nmd = \"\"\"\r\n| Label | Info |\r\n| -------- | --------- |\r\n| Row | Data |\r\n\"\"\"\r\nst.markdown(md)\r\n```\r\n\r\n**Expected behavior:**\r\n\r\nI would expect if the text color get changed to white in the table, the background color should get changed to something dark\r\n\r\n**Actual behavior:**\r\n\r\nBoth the text color and background are white so nothing can be seen.\r\n\r\n### Is this a regression?\r\n\r\nno, consequence of new theme\r\n\r\n### Debug info\r\n\r\n- Streamlit version: 0.79.0\r\n- Python version: 3.7.9\r\n- pip\r\n- OS version: MacOS Catalina 10.15.7\r\n- Browser version: Chrome 89.0.4389.90\r\n\r\n### Additional information\r\n\r\nI'm not sure why markdown tables have different background style but they seem to; perhaps other ui elements would be affected as well.\r\n\n", "before_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nst.markdown(\"This **markdown** is awesome! :sunglasses:\")\n\nst.markdown(\"This <b>HTML tag</b> is escaped!\")\n\nst.markdown(\"This <b>HTML tag</b> is not escaped!\", unsafe_allow_html=True)\n\nst.markdown(\"[text]\")\n\nst.markdown(\"[link](href)\")\n\nst.markdown(\"[][]\")\n\nst.markdown(\"Inline math with $\\KaTeX$\")\n\nst.markdown(\n \"\"\"\n$$\nax^2 + bx + c = 0\n$$\n\"\"\"\n)\n", "path": "e2e/scripts/st_markdown.py"}], "after_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nst.markdown(\"This **markdown** is awesome! 
:sunglasses:\")\n\nst.markdown(\"This <b>HTML tag</b> is escaped!\")\n\nst.markdown(\"This <b>HTML tag</b> is not escaped!\", unsafe_allow_html=True)\n\nst.markdown(\"[text]\")\n\nst.markdown(\"[link](href)\")\n\nst.markdown(\"[][]\")\n\nst.markdown(\"Inline math with $\\KaTeX$\")\n\nst.markdown(\n \"\"\"\n$$\nax^2 + bx + c = 0\n$$\n\"\"\"\n)\n\nst.markdown(\n \"\"\"\n| Col1 | Col2 |\n| --------- | ----------- |\n| Some | Data |\n\"\"\"\n)\n", "path": "e2e/scripts/st_markdown.py"}]}
| 831 | 98 |
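Note: the fix in the record above is test-only — it appends a markdown table to `e2e/scripts/st_markdown.py` so the e2e suite exercises table styling under the dark theme. For reference, the issue's reproduction assembled into one runnable script (save it, launch with `streamlit run`, and switch the app to the dark theme to see the white-on-white table); the exact labels are illustrative.

```python
import streamlit as st

st.markdown(
    """
| Label | Info |
| ----- | ---- |
| Row   | Data |
"""
)
```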
gh_patches_debug_27285
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__torchmetrics-2184
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Validation with torchmetrics extremely slow
### Bug description
Hi all,
I recently tried to implement a DeepLabV3 training pipeline. I wanted to use the build-in `torchmetrics.JaccardIndex` as my evaluation metric. My `LightningModule` looks like this:
```python
import torchmetrics
from pytorch_lightning import LightningModule
from torchvision.models.segmentation.deeplabv3 import deeplabv3_resnet50
class DeepLabV3LightningModule(LightningModule):
def __init__(self):
self.model = deeplabv3_resnet50(
num_classes=38,
aux_loss=False
)
self.loss = nn.CrossEntropyLoss(ignore_index=255, reduction="mean")
self.iou_metric = torchmetrics.JaccardIndex(
task="multiclass",
threshold=0.5,
num_classes=38,
average="macro",
)
def training_step(self, batch, batch_idx):
imgs, masks = batch
out = self.model(imgs)
preds = out["out"]
loss = self.loss(preds, masks)
return loss
def validation_step(self, batch, batch_idx):
imgs, masks = batch
out = self.model(imgs)
preds = out["out"]
loss = self.loss(preds, masks)
preds = torch.softmax(preds, dim=1)
pred_labels = torch.argmax(preds, dim=1)
# measure runtime of metric update
start = timer()
self.iou_metric.update(pred_labels, masks)
elapsed = timer() - start
return elapsed
def validation_epoch_end(self, outputs):
avg_runtime = round(mean(outputs), 4)
print(f"GPU {self.local_rank}: {avg_runtime} seconds")
```
When using this validation procedure, it is extremely slow. On average, the update step of the metric takes 23.4 seconds. However, the first 3 updates are very fast (<1 second), then they become slow.
I tried to reproduce this behavior in a MWE:
```python
from timeit import default_timer as timer
from statistics import mean
import torchmetrics
import torch
num_classes = 38
iou_metric = torchmetrics.JaccardIndex(
task="multiclass",
threshold=0.5,
num_classes=num_classes,
average="macro"
).to("cuda")
# dummy labels in shape [b, h, w]
label_mask = torch.randint(low=0, high=num_classes-1, size=(8, 480, 640), device="cuda")
# dummy predicted labels in shape [b, h, w]
pred_mask = torch.randint(low=0, high=num_classes-1, size=(8, 480, 640), device="cuda")
runtime_hist = []
for i in range(100):
start = timer()
iou_metric.update(label_mask, pred_mask)
elapsed = timer() - start
runtime_hist.append(elapsed)
avg_runtime = round(mean(runtime_hist), 2)
print(avg_runtime)
```
Here I get an average update duration of 0.03 seconds, so I do not encounter the extremely slow update as in my `LightningModule` above. To me this looks like there is something wrong. At this point, I am not sure if thi
Here is some training information for my pytorch-lightning training pipeline:
- OS: Ubuntu 20.04.4
- CUDA 11.3
- DDP training strategy
- GPUs: 4x V100
- batch size: 8
- image size (width x height): 640 x 480
- number of workers in dataloader: 8
My package versions:
- pytorch lightning: 1.8.4.post0 (installed via pip)
- torch: 1.13.0
- torchvision: 0.14.0
- torchmetrics: 0.11.0
- numpy: 11.23.5
Thanks so much!
Lukas
### How to reproduce the bug
_No response_
### Error messages and logs
_No response_
### Environment
_No response_
### More info
_No response_
cc @carmocca @justusschock @awaelchli @borda
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/torchmetrics/utilities/data.py`
Content:
```
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import sys
15 from typing import Any, Dict, List, Optional, Sequence, Tuple, Union
16
17 import torch
18 from lightning_utilities import apply_to_collection
19 from torch import Tensor
20
21 from torchmetrics.utilities.exceptions import TorchMetricsUserWarning
22 from torchmetrics.utilities.imports import _TORCH_GREATER_EQUAL_1_12, _XLA_AVAILABLE
23 from torchmetrics.utilities.prints import rank_zero_warn
24
25 METRIC_EPS = 1e-6
26
27
28 def dim_zero_cat(x: Union[Tensor, List[Tensor]]) -> Tensor:
29 """Concatenation along the zero dimension."""
30 if isinstance(x, torch.Tensor):
31 return x
32 x = [y.unsqueeze(0) if y.numel() == 1 and y.ndim == 0 else y for y in x]
33 if not x: # empty list
34 raise ValueError("No samples to concatenate")
35 return torch.cat(x, dim=0)
36
37
38 def dim_zero_sum(x: Tensor) -> Tensor:
39 """Summation along the zero dimension."""
40 return torch.sum(x, dim=0)
41
42
43 def dim_zero_mean(x: Tensor) -> Tensor:
44 """Average along the zero dimension."""
45 return torch.mean(x, dim=0)
46
47
48 def dim_zero_max(x: Tensor) -> Tensor:
49 """Max along the zero dimension."""
50 return torch.max(x, dim=0).values
51
52
53 def dim_zero_min(x: Tensor) -> Tensor:
54 """Min along the zero dimension."""
55 return torch.min(x, dim=0).values
56
57
58 def _flatten(x: Sequence) -> list:
59 """Flatten list of list into single list."""
60 return [item for sublist in x for item in sublist]
61
62
63 def _flatten_dict(x: Dict) -> Tuple[Dict, bool]:
64 """Flatten dict of dicts into single dict and checking for duplicates in keys along the way."""
65 new_dict = {}
66 duplicates = False
67 for key, value in x.items():
68 if isinstance(value, dict):
69 for k, v in value.items():
70 if k in new_dict:
71 duplicates = True
72 new_dict[k] = v
73 else:
74 if key in new_dict:
75 duplicates = True
76 new_dict[key] = value
77 return new_dict, duplicates
78
79
80 def to_onehot(
81 label_tensor: Tensor,
82 num_classes: Optional[int] = None,
83 ) -> Tensor:
84 """Convert a dense label tensor to one-hot format.
85
86 Args:
87 label_tensor: dense label tensor, with shape [N, d1, d2, ...]
88 num_classes: number of classes C
89
90 Returns:
91 A sparse label tensor with shape [N, C, d1, d2, ...]
92
93 Example:
94 >>> x = torch.tensor([1, 2, 3])
95 >>> to_onehot(x)
96 tensor([[0, 1, 0, 0],
97 [0, 0, 1, 0],
98 [0, 0, 0, 1]])
99
100 """
101 if num_classes is None:
102 num_classes = int(label_tensor.max().detach().item() + 1)
103
104 tensor_onehot = torch.zeros(
105 label_tensor.shape[0],
106 num_classes,
107 *label_tensor.shape[1:],
108 dtype=label_tensor.dtype,
109 device=label_tensor.device,
110 )
111 index = label_tensor.long().unsqueeze(1).expand_as(tensor_onehot)
112 return tensor_onehot.scatter_(1, index, 1.0)
113
114
115 def select_topk(prob_tensor: Tensor, topk: int = 1, dim: int = 1) -> Tensor:
116 """Convert a probability tensor to binary by selecting top-k the highest entries.
117
118 Args:
119 prob_tensor: dense tensor of shape ``[..., C, ...]``, where ``C`` is in the
120 position defined by the ``dim`` argument
121 topk: number of the highest entries to turn into 1s
122 dim: dimension on which to compare entries
123
124 Returns:
125 A binary tensor of the same shape as the input tensor of type ``torch.int32``
126
127 Example:
128 >>> x = torch.tensor([[1.1, 2.0, 3.0], [2.0, 1.0, 0.5]])
129 >>> select_topk(x, topk=2)
130 tensor([[0, 1, 1],
131 [1, 1, 0]], dtype=torch.int32)
132
133 """
134 zeros = torch.zeros_like(prob_tensor)
135 if topk == 1: # argmax has better performance than topk
136 topk_tensor = zeros.scatter(dim, prob_tensor.argmax(dim=dim, keepdim=True), 1.0)
137 else:
138 topk_tensor = zeros.scatter(dim, prob_tensor.topk(k=topk, dim=dim).indices, 1.0)
139 return topk_tensor.int()
140
141
142 def to_categorical(x: Tensor, argmax_dim: int = 1) -> Tensor:
143 """Convert a tensor of probabilities to a dense label tensor.
144
145 Args:
146 x: probabilities to get the categorical label [N, d1, d2, ...]
147 argmax_dim: dimension to apply
148
149 Return:
150 A tensor with categorical labels [N, d2, ...]
151
152 Example:
153 >>> x = torch.tensor([[0.2, 0.5], [0.9, 0.1]])
154 >>> to_categorical(x)
155 tensor([1, 0])
156
157 """
158 return torch.argmax(x, dim=argmax_dim)
159
160
161 def _squeeze_scalar_element_tensor(x: Tensor) -> Tensor:
162 return x.squeeze() if x.numel() == 1 else x
163
164
165 def _squeeze_if_scalar(data: Any) -> Any:
166 return apply_to_collection(data, Tensor, _squeeze_scalar_element_tensor)
167
168
169 def _bincount(x: Tensor, minlength: Optional[int] = None) -> Tensor:
170 """Implement custom bincount.
171
172 PyTorch currently does not support ``torch.bincount`` for:
173
174 - deterministic mode on GPU.
175 - MPS devices
176
177 This implementation fallback to a for-loop counting occurrences in that case.
178
179 Args:
180 x: tensor to count
181 minlength: minimum length to count
182
183 Returns:
184 Number of occurrences for each unique element in x
185
186 Example:
187 >>> x = torch.tensor([0,0,0,1,1,2,2,2,2])
188 >>> _bincount(x, minlength=3)
189 tensor([3, 2, 4])
190
191 """
192 if minlength is None:
193 minlength = len(torch.unique(x))
194 if torch.are_deterministic_algorithms_enabled() or _XLA_AVAILABLE or _TORCH_GREATER_EQUAL_1_12 and x.is_mps:
195 output = torch.zeros(minlength, device=x.device, dtype=torch.long)
196 for i in range(minlength):
197 output[i] = (x == i).sum()
198 return output
199 return torch.bincount(x, minlength=minlength)
200
201
202 def _cumsum(x: Tensor, dim: Optional[int] = 0, dtype: Optional[torch.dtype] = None) -> Tensor:
203 if torch.are_deterministic_algorithms_enabled() and x.is_cuda and x.is_floating_point() and sys.platform != "win32":
204 rank_zero_warn(
205 "You are trying to use a metric in deterministic mode on GPU that uses `torch.cumsum`, which is currently "
206 "not supported. The tensor will be copied to the CPU memory to compute it and then copied back to GPU. "
207 "Expect some slowdowns.",
208 TorchMetricsUserWarning,
209 )
210 return x.cpu().cumsum(dim=dim, dtype=dtype).cuda()
211 return torch.cumsum(x, dim=dim, dtype=dtype)
212
213
214 def _flexible_bincount(x: Tensor) -> Tensor:
215 """Similar to `_bincount`, but works also with tensor that do not contain continuous values.
216
217 Args:
218 x: tensor to count
219
220 Returns:
221 Number of occurrences for each unique element in x
222
223 """
224 # make sure elements in x start from 0
225 x = x - x.min()
226 unique_x = torch.unique(x)
227
228 output = _bincount(x, minlength=torch.max(unique_x) + 1) # type: ignore[arg-type]
229 # remove zeros from output tensor
230 return output[unique_x]
231
232
233 def allclose(tensor1: Tensor, tensor2: Tensor) -> bool:
234 """Wrap torch.allclose to be robust towards dtype difference."""
235 if tensor1.dtype != tensor2.dtype:
236 tensor2 = tensor2.to(dtype=tensor1.dtype)
237 return torch.allclose(tensor1, tensor2)
238
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/torchmetrics/utilities/data.py b/src/torchmetrics/utilities/data.py
--- a/src/torchmetrics/utilities/data.py
+++ b/src/torchmetrics/utilities/data.py
@@ -169,12 +169,10 @@
def _bincount(x: Tensor, minlength: Optional[int] = None) -> Tensor:
"""Implement custom bincount.
- PyTorch currently does not support ``torch.bincount`` for:
-
- - deterministic mode on GPU.
- - MPS devices
-
- This implementation fallback to a for-loop counting occurrences in that case.
+ PyTorch currently does not support ``torch.bincount`` when running in deterministic mode on GPU or when running
+ MPS devices or when running on XLA device. This implementation therefore falls back to using a combination of
+ `torch.arange` and `torch.eq` in these scenarios. A small performance hit can expected and higher memory consumption
+ as `[batch_size, mincount]` tensor needs to be initialized compared to native ``torch.bincount``.
Args:
x: tensor to count
@@ -191,11 +189,11 @@
"""
if minlength is None:
minlength = len(torch.unique(x))
+
if torch.are_deterministic_algorithms_enabled() or _XLA_AVAILABLE or _TORCH_GREATER_EQUAL_1_12 and x.is_mps:
- output = torch.zeros(minlength, device=x.device, dtype=torch.long)
- for i in range(minlength):
- output[i] = (x == i).sum()
- return output
+ mesh = torch.arange(minlength, device=x.device).repeat(len(x), 1)
+ return torch.eq(x.reshape(-1, 1), mesh).sum(dim=0)
+
return torch.bincount(x, minlength=minlength)
|
{"golden_diff": "diff --git a/src/torchmetrics/utilities/data.py b/src/torchmetrics/utilities/data.py\n--- a/src/torchmetrics/utilities/data.py\n+++ b/src/torchmetrics/utilities/data.py\n@@ -169,12 +169,10 @@\n def _bincount(x: Tensor, minlength: Optional[int] = None) -> Tensor:\n \"\"\"Implement custom bincount.\n \n- PyTorch currently does not support ``torch.bincount`` for:\n-\n- - deterministic mode on GPU.\n- - MPS devices\n-\n- This implementation fallback to a for-loop counting occurrences in that case.\n+ PyTorch currently does not support ``torch.bincount`` when running in deterministic mode on GPU or when running\n+ MPS devices or when running on XLA device. This implementation therefore falls back to using a combination of\n+ `torch.arange` and `torch.eq` in these scenarios. A small performance hit can expected and higher memory consumption\n+ as `[batch_size, mincount]` tensor needs to be initialized compared to native ``torch.bincount``.\n \n Args:\n x: tensor to count\n@@ -191,11 +189,11 @@\n \"\"\"\n if minlength is None:\n minlength = len(torch.unique(x))\n+\n if torch.are_deterministic_algorithms_enabled() or _XLA_AVAILABLE or _TORCH_GREATER_EQUAL_1_12 and x.is_mps:\n- output = torch.zeros(minlength, device=x.device, dtype=torch.long)\n- for i in range(minlength):\n- output[i] = (x == i).sum()\n- return output\n+ mesh = torch.arange(minlength, device=x.device).repeat(len(x), 1)\n+ return torch.eq(x.reshape(-1, 1), mesh).sum(dim=0)\n+\n return torch.bincount(x, minlength=minlength)\n", "issue": "Validation with torchmetrics extremely slow\n### Bug description\r\n\r\nHi all,\r\n\r\nI recently tried to implement a DeepLabV3 training pipeline. I wanted to use the build-in `torchmetrics.JaccardIndex` as my evaluation metric. My `LightningModule` looks like this:\r\n\r\n```python\r\nimport torchmetrics \r\nfrom pytorch_lightning import LightningModule\r\nfrom torchvision.models.segmentation.deeplabv3 import deeplabv3_resnet50\r\n\r\n\r\nclass DeepLabV3LightningModule(LightningModule):\r\n def __init__(self):\r\n self.model = deeplabv3_resnet50(\r\n num_classes=38,\r\n aux_loss=False\r\n )\r\n self.loss = nn.CrossEntropyLoss(ignore_index=255, reduction=\"mean\")\r\n self.iou_metric = torchmetrics.JaccardIndex(\r\n task=\"multiclass\", \r\n threshold=0.5, \r\n num_classes=38,\r\n average=\"macro\",\r\n )\r\n\r\n def training_step(self, batch, batch_idx):\r\n imgs, masks = batch\r\n out = self.model(imgs)\r\n preds = out[\"out\"]\r\n loss = self.loss(preds, masks) \r\n return loss\r\n\r\n def validation_step(self, batch, batch_idx):\r\n imgs, masks = batch\r\n out = self.model(imgs)\r\n preds = out[\"out\"]\r\n loss = self.loss(preds, masks)\r\n preds = torch.softmax(preds, dim=1)\r\n pred_labels = torch.argmax(preds, dim=1)\r\n \r\n # measure runtime of metric update\r\n start = timer()\r\n self.iou_metric.update(pred_labels, masks)\r\n elapsed = timer() - start\r\n return elapsed\r\n\r\n def validation_epoch_end(self, outputs):\r\n avg_runtime = round(mean(outputs), 4)\r\n print(f\"GPU {self.local_rank}: {avg_runtime} seconds\")\r\n\r\n```\r\n\r\nWhen using this validation procedure, it is extremely slow. On average, the update step of the metric takes 23.4 seconds. 
However, the first 3 updates are very fast (<1 second), then they become slow.\r\n\r\nI tried to reproduce this behavior in a MWE:\r\n\r\n```python\r\nfrom timeit import default_timer as timer\r\nfrom statistics import mean\r\nimport torchmetrics\r\nimport torch\r\n\r\nnum_classes = 38\r\n\r\niou_metric = torchmetrics.JaccardIndex(\r\n task=\"multiclass\",\r\n threshold=0.5, \r\n num_classes=num_classes,\r\n average=\"macro\"\r\n).to(\"cuda\")\r\n\r\n# dummy labels in shape [b, h, w]\r\nlabel_mask = torch.randint(low=0, high=num_classes-1, size=(8, 480, 640), device=\"cuda\")\r\n\r\n# dummy predicted labels in shape [b, h, w]\r\npred_mask = torch.randint(low=0, high=num_classes-1, size=(8, 480, 640), device=\"cuda\")\r\n\r\n\r\nruntime_hist = []\r\nfor i in range(100):\r\n start = timer()\r\n iou_metric.update(label_mask, pred_mask)\r\n elapsed = timer() - start\r\n runtime_hist.append(elapsed)\r\n\r\n\r\navg_runtime = round(mean(runtime_hist), 2)\r\nprint(avg_runtime)\r\n```\r\n\r\nHere I get an average update duration of 0.03 seconds, so I do not encounter the extremely slow update as in my `LightningModule` above. To me this looks like there is something wrong. At this point, I am not sure if thi\r\n\r\n\r\nHere some training information for my pytorch-lightning training pipeline:\r\n- OS: Ubuntu 20.04.4\r\n- CUDA 11.3\r\n- DDP training strategy\r\n- GPUs: 4x V100\r\n- batch size: 8\r\n- image size (width x height): 640 x 480\r\n- number of workers in dataloader: 8\r\n\r\nMy package versions:\r\n- pytorch lightning: 1.8.4.post0 (installed via pip)\r\n- torch: 1.13.0\r\n- torchvision: 0.14.0\r\n- torchmetrics: 0.11.0\r\n- numpy: 11.23.5\r\n\r\nThanks so much!\r\nLukas\r\n\r\n### How to reproduce the bug\r\n\r\n_No response_\r\n\r\n### Error messages and logs\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\n_No response_\r\n\r\n### More info\r\n\r\n_No response_\n\ncc @carmocca @justusschock @awaelchli @borda\n", "before_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport sys\nfrom typing import Any, Dict, List, Optional, Sequence, Tuple, Union\n\nimport torch\nfrom lightning_utilities import apply_to_collection\nfrom torch import Tensor\n\nfrom torchmetrics.utilities.exceptions import TorchMetricsUserWarning\nfrom torchmetrics.utilities.imports import _TORCH_GREATER_EQUAL_1_12, _XLA_AVAILABLE\nfrom torchmetrics.utilities.prints import rank_zero_warn\n\nMETRIC_EPS = 1e-6\n\n\ndef dim_zero_cat(x: Union[Tensor, List[Tensor]]) -> Tensor:\n \"\"\"Concatenation along the zero dimension.\"\"\"\n if isinstance(x, torch.Tensor):\n return x\n x = [y.unsqueeze(0) if y.numel() == 1 and y.ndim == 0 else y for y in x]\n if not x: # empty list\n raise ValueError(\"No samples to concatenate\")\n return torch.cat(x, dim=0)\n\n\ndef dim_zero_sum(x: Tensor) -> Tensor:\n \"\"\"Summation along the zero dimension.\"\"\"\n return torch.sum(x, dim=0)\n\n\ndef dim_zero_mean(x: Tensor) -> Tensor:\n \"\"\"Average along the zero dimension.\"\"\"\n return 
torch.mean(x, dim=0)\n\n\ndef dim_zero_max(x: Tensor) -> Tensor:\n \"\"\"Max along the zero dimension.\"\"\"\n return torch.max(x, dim=0).values\n\n\ndef dim_zero_min(x: Tensor) -> Tensor:\n \"\"\"Min along the zero dimension.\"\"\"\n return torch.min(x, dim=0).values\n\n\ndef _flatten(x: Sequence) -> list:\n \"\"\"Flatten list of list into single list.\"\"\"\n return [item for sublist in x for item in sublist]\n\n\ndef _flatten_dict(x: Dict) -> Tuple[Dict, bool]:\n \"\"\"Flatten dict of dicts into single dict and checking for duplicates in keys along the way.\"\"\"\n new_dict = {}\n duplicates = False\n for key, value in x.items():\n if isinstance(value, dict):\n for k, v in value.items():\n if k in new_dict:\n duplicates = True\n new_dict[k] = v\n else:\n if key in new_dict:\n duplicates = True\n new_dict[key] = value\n return new_dict, duplicates\n\n\ndef to_onehot(\n label_tensor: Tensor,\n num_classes: Optional[int] = None,\n) -> Tensor:\n \"\"\"Convert a dense label tensor to one-hot format.\n\n Args:\n label_tensor: dense label tensor, with shape [N, d1, d2, ...]\n num_classes: number of classes C\n\n Returns:\n A sparse label tensor with shape [N, C, d1, d2, ...]\n\n Example:\n >>> x = torch.tensor([1, 2, 3])\n >>> to_onehot(x)\n tensor([[0, 1, 0, 0],\n [0, 0, 1, 0],\n [0, 0, 0, 1]])\n\n \"\"\"\n if num_classes is None:\n num_classes = int(label_tensor.max().detach().item() + 1)\n\n tensor_onehot = torch.zeros(\n label_tensor.shape[0],\n num_classes,\n *label_tensor.shape[1:],\n dtype=label_tensor.dtype,\n device=label_tensor.device,\n )\n index = label_tensor.long().unsqueeze(1).expand_as(tensor_onehot)\n return tensor_onehot.scatter_(1, index, 1.0)\n\n\ndef select_topk(prob_tensor: Tensor, topk: int = 1, dim: int = 1) -> Tensor:\n \"\"\"Convert a probability tensor to binary by selecting top-k the highest entries.\n\n Args:\n prob_tensor: dense tensor of shape ``[..., C, ...]``, where ``C`` is in the\n position defined by the ``dim`` argument\n topk: number of the highest entries to turn into 1s\n dim: dimension on which to compare entries\n\n Returns:\n A binary tensor of the same shape as the input tensor of type ``torch.int32``\n\n Example:\n >>> x = torch.tensor([[1.1, 2.0, 3.0], [2.0, 1.0, 0.5]])\n >>> select_topk(x, topk=2)\n tensor([[0, 1, 1],\n [1, 1, 0]], dtype=torch.int32)\n\n \"\"\"\n zeros = torch.zeros_like(prob_tensor)\n if topk == 1: # argmax has better performance than topk\n topk_tensor = zeros.scatter(dim, prob_tensor.argmax(dim=dim, keepdim=True), 1.0)\n else:\n topk_tensor = zeros.scatter(dim, prob_tensor.topk(k=topk, dim=dim).indices, 1.0)\n return topk_tensor.int()\n\n\ndef to_categorical(x: Tensor, argmax_dim: int = 1) -> Tensor:\n \"\"\"Convert a tensor of probabilities to a dense label tensor.\n\n Args:\n x: probabilities to get the categorical label [N, d1, d2, ...]\n argmax_dim: dimension to apply\n\n Return:\n A tensor with categorical labels [N, d2, ...]\n\n Example:\n >>> x = torch.tensor([[0.2, 0.5], [0.9, 0.1]])\n >>> to_categorical(x)\n tensor([1, 0])\n\n \"\"\"\n return torch.argmax(x, dim=argmax_dim)\n\n\ndef _squeeze_scalar_element_tensor(x: Tensor) -> Tensor:\n return x.squeeze() if x.numel() == 1 else x\n\n\ndef _squeeze_if_scalar(data: Any) -> Any:\n return apply_to_collection(data, Tensor, _squeeze_scalar_element_tensor)\n\n\ndef _bincount(x: Tensor, minlength: Optional[int] = None) -> Tensor:\n \"\"\"Implement custom bincount.\n\n PyTorch currently does not support ``torch.bincount`` for:\n\n - deterministic mode on GPU.\n - MPS 
devices\n\n This implementation fallback to a for-loop counting occurrences in that case.\n\n Args:\n x: tensor to count\n minlength: minimum length to count\n\n Returns:\n Number of occurrences for each unique element in x\n\n Example:\n >>> x = torch.tensor([0,0,0,1,1,2,2,2,2])\n >>> _bincount(x, minlength=3)\n tensor([3, 2, 4])\n\n \"\"\"\n if minlength is None:\n minlength = len(torch.unique(x))\n if torch.are_deterministic_algorithms_enabled() or _XLA_AVAILABLE or _TORCH_GREATER_EQUAL_1_12 and x.is_mps:\n output = torch.zeros(minlength, device=x.device, dtype=torch.long)\n for i in range(minlength):\n output[i] = (x == i).sum()\n return output\n return torch.bincount(x, minlength=minlength)\n\n\ndef _cumsum(x: Tensor, dim: Optional[int] = 0, dtype: Optional[torch.dtype] = None) -> Tensor:\n if torch.are_deterministic_algorithms_enabled() and x.is_cuda and x.is_floating_point() and sys.platform != \"win32\":\n rank_zero_warn(\n \"You are trying to use a metric in deterministic mode on GPU that uses `torch.cumsum`, which is currently \"\n \"not supported. The tensor will be copied to the CPU memory to compute it and then copied back to GPU. \"\n \"Expect some slowdowns.\",\n TorchMetricsUserWarning,\n )\n return x.cpu().cumsum(dim=dim, dtype=dtype).cuda()\n return torch.cumsum(x, dim=dim, dtype=dtype)\n\n\ndef _flexible_bincount(x: Tensor) -> Tensor:\n \"\"\"Similar to `_bincount`, but works also with tensor that do not contain continuous values.\n\n Args:\n x: tensor to count\n\n Returns:\n Number of occurrences for each unique element in x\n\n \"\"\"\n # make sure elements in x start from 0\n x = x - x.min()\n unique_x = torch.unique(x)\n\n output = _bincount(x, minlength=torch.max(unique_x) + 1) # type: ignore[arg-type]\n # remove zeros from output tensor\n return output[unique_x]\n\n\ndef allclose(tensor1: Tensor, tensor2: Tensor) -> bool:\n \"\"\"Wrap torch.allclose to be robust towards dtype difference.\"\"\"\n if tensor1.dtype != tensor2.dtype:\n tensor2 = tensor2.to(dtype=tensor1.dtype)\n return torch.allclose(tensor1, tensor2)\n", "path": "src/torchmetrics/utilities/data.py"}], "after_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport sys\nfrom typing import Any, Dict, List, Optional, Sequence, Tuple, Union\n\nimport torch\nfrom lightning_utilities import apply_to_collection\nfrom torch import Tensor\n\nfrom torchmetrics.utilities.exceptions import TorchMetricsUserWarning\nfrom torchmetrics.utilities.imports import _TORCH_GREATER_EQUAL_1_12, _XLA_AVAILABLE\nfrom torchmetrics.utilities.prints import rank_zero_warn\n\nMETRIC_EPS = 1e-6\n\n\ndef dim_zero_cat(x: Union[Tensor, List[Tensor]]) -> Tensor:\n \"\"\"Concatenation along the zero dimension.\"\"\"\n if isinstance(x, torch.Tensor):\n return x\n x = [y.unsqueeze(0) if y.numel() == 1 and y.ndim == 0 else y for y in x]\n if not x: # empty list\n raise ValueError(\"No samples to concatenate\")\n return torch.cat(x, dim=0)\n\n\ndef dim_zero_sum(x: Tensor) -> Tensor:\n 
\"\"\"Summation along the zero dimension.\"\"\"\n return torch.sum(x, dim=0)\n\n\ndef dim_zero_mean(x: Tensor) -> Tensor:\n \"\"\"Average along the zero dimension.\"\"\"\n return torch.mean(x, dim=0)\n\n\ndef dim_zero_max(x: Tensor) -> Tensor:\n \"\"\"Max along the zero dimension.\"\"\"\n return torch.max(x, dim=0).values\n\n\ndef dim_zero_min(x: Tensor) -> Tensor:\n \"\"\"Min along the zero dimension.\"\"\"\n return torch.min(x, dim=0).values\n\n\ndef _flatten(x: Sequence) -> list:\n \"\"\"Flatten list of list into single list.\"\"\"\n return [item for sublist in x for item in sublist]\n\n\ndef _flatten_dict(x: Dict) -> Tuple[Dict, bool]:\n \"\"\"Flatten dict of dicts into single dict and checking for duplicates in keys along the way.\"\"\"\n new_dict = {}\n duplicates = False\n for key, value in x.items():\n if isinstance(value, dict):\n for k, v in value.items():\n if k in new_dict:\n duplicates = True\n new_dict[k] = v\n else:\n if key in new_dict:\n duplicates = True\n new_dict[key] = value\n return new_dict, duplicates\n\n\ndef to_onehot(\n label_tensor: Tensor,\n num_classes: Optional[int] = None,\n) -> Tensor:\n \"\"\"Convert a dense label tensor to one-hot format.\n\n Args:\n label_tensor: dense label tensor, with shape [N, d1, d2, ...]\n num_classes: number of classes C\n\n Returns:\n A sparse label tensor with shape [N, C, d1, d2, ...]\n\n Example:\n >>> x = torch.tensor([1, 2, 3])\n >>> to_onehot(x)\n tensor([[0, 1, 0, 0],\n [0, 0, 1, 0],\n [0, 0, 0, 1]])\n\n \"\"\"\n if num_classes is None:\n num_classes = int(label_tensor.max().detach().item() + 1)\n\n tensor_onehot = torch.zeros(\n label_tensor.shape[0],\n num_classes,\n *label_tensor.shape[1:],\n dtype=label_tensor.dtype,\n device=label_tensor.device,\n )\n index = label_tensor.long().unsqueeze(1).expand_as(tensor_onehot)\n return tensor_onehot.scatter_(1, index, 1.0)\n\n\ndef select_topk(prob_tensor: Tensor, topk: int = 1, dim: int = 1) -> Tensor:\n \"\"\"Convert a probability tensor to binary by selecting top-k the highest entries.\n\n Args:\n prob_tensor: dense tensor of shape ``[..., C, ...]``, where ``C`` is in the\n position defined by the ``dim`` argument\n topk: number of the highest entries to turn into 1s\n dim: dimension on which to compare entries\n\n Returns:\n A binary tensor of the same shape as the input tensor of type ``torch.int32``\n\n Example:\n >>> x = torch.tensor([[1.1, 2.0, 3.0], [2.0, 1.0, 0.5]])\n >>> select_topk(x, topk=2)\n tensor([[0, 1, 1],\n [1, 1, 0]], dtype=torch.int32)\n\n \"\"\"\n zeros = torch.zeros_like(prob_tensor)\n if topk == 1: # argmax has better performance than topk\n topk_tensor = zeros.scatter(dim, prob_tensor.argmax(dim=dim, keepdim=True), 1.0)\n else:\n topk_tensor = zeros.scatter(dim, prob_tensor.topk(k=topk, dim=dim).indices, 1.0)\n return topk_tensor.int()\n\n\ndef to_categorical(x: Tensor, argmax_dim: int = 1) -> Tensor:\n \"\"\"Convert a tensor of probabilities to a dense label tensor.\n\n Args:\n x: probabilities to get the categorical label [N, d1, d2, ...]\n argmax_dim: dimension to apply\n\n Return:\n A tensor with categorical labels [N, d2, ...]\n\n Example:\n >>> x = torch.tensor([[0.2, 0.5], [0.9, 0.1]])\n >>> to_categorical(x)\n tensor([1, 0])\n\n \"\"\"\n return torch.argmax(x, dim=argmax_dim)\n\n\ndef _squeeze_scalar_element_tensor(x: Tensor) -> Tensor:\n return x.squeeze() if x.numel() == 1 else x\n\n\ndef _squeeze_if_scalar(data: Any) -> Any:\n return apply_to_collection(data, Tensor, _squeeze_scalar_element_tensor)\n\n\ndef _bincount(x: Tensor, minlength: 
Optional[int] = None) -> Tensor:\n \"\"\"Implement custom bincount.\n\n PyTorch currently does not support ``torch.bincount`` when running in deterministic mode on GPU or when running\n MPS devices or when running on XLA device. This implementation therefore falls back to using a combination of\n `torch.arange` and `torch.eq` in these scenarios. A small performance hit can expected and higher memory consumption\n as `[batch_size, mincount]` tensor needs to be initialized compared to native ``torch.bincount``.\n\n Args:\n x: tensor to count\n minlength: minimum length to count\n\n Returns:\n Number of occurrences for each unique element in x\n\n Example:\n >>> x = torch.tensor([0,0,0,1,1,2,2,2,2])\n >>> _bincount(x, minlength=3)\n tensor([3, 2, 4])\n\n \"\"\"\n if minlength is None:\n minlength = len(torch.unique(x))\n\n if torch.are_deterministic_algorithms_enabled() or _XLA_AVAILABLE or _TORCH_GREATER_EQUAL_1_12 and x.is_mps:\n mesh = torch.arange(minlength, device=x.device).repeat(len(x), 1)\n return torch.eq(x.reshape(-1, 1), mesh).sum(dim=0)\n\n return torch.bincount(x, minlength=minlength)\n\n\ndef _cumsum(x: Tensor, dim: Optional[int] = 0, dtype: Optional[torch.dtype] = None) -> Tensor:\n if torch.are_deterministic_algorithms_enabled() and x.is_cuda and x.is_floating_point() and sys.platform != \"win32\":\n rank_zero_warn(\n \"You are trying to use a metric in deterministic mode on GPU that uses `torch.cumsum`, which is currently \"\n \"not supported. The tensor will be copied to the CPU memory to compute it and then copied back to GPU. \"\n \"Expect some slowdowns.\",\n TorchMetricsUserWarning,\n )\n return x.cpu().cumsum(dim=dim, dtype=dtype).cuda()\n return torch.cumsum(x, dim=dim, dtype=dtype)\n\n\ndef _flexible_bincount(x: Tensor) -> Tensor:\n \"\"\"Similar to `_bincount`, but works also with tensor that do not contain continuous values.\n\n Args:\n x: tensor to count\n\n Returns:\n Number of occurrences for each unique element in x\n\n \"\"\"\n # make sure elements in x start from 0\n x = x - x.min()\n unique_x = torch.unique(x)\n\n output = _bincount(x, minlength=torch.max(unique_x) + 1) # type: ignore[arg-type]\n # remove zeros from output tensor\n return output[unique_x]\n\n\ndef allclose(tensor1: Tensor, tensor2: Tensor) -> bool:\n \"\"\"Wrap torch.allclose to be robust towards dtype difference.\"\"\"\n if tensor1.dtype != tensor2.dtype:\n tensor2 = tensor2.to(dtype=tensor1.dtype)\n return torch.allclose(tensor1, tensor2)\n", "path": "src/torchmetrics/utilities/data.py"}]}
| 3,826 | 407 |
gh_patches_debug_24888
|
rasdani/github-patches
|
git_diff
|
yt-dlp__yt-dlp-8651
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bfmtv: Unable to extract video block
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I'm running yt-dlp version **2023.10.13** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
France
### Provide a description that is worded well enough to be understood
yt-dlp is unable to extract replay videos from the bfmtv website (domain www.bfmtv.com). It was still working a few days ago.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU https://www.bfmtv.com/alsace/replay-emissions/bonsoir-l-alsace/diabete-le-groupe-lilly-investit-160-millions-d-euros-a-fegersheim_VN-202310230671.html
[debug] Command-line config: ['-vU', 'https://www.bfmtv.com/alsace/replay-emissions/bonsoir-l-alsace/diabete-le-groupe-lilly-investit-160-millions-d-euros-a-fegersheim_VN-202310230671.html']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] [b634ba742] (zip)
[debug] Python 3.11.2 (CPython x86_64 64bit) - Linux-6.1.0-13-amd64-x86_64-with-glibc2.36 (OpenSSL 3.0.11 19 Sep 2023, glibc 2.36)
[debug] exe versions: ffmpeg 5.1.3-1 (setts), ffprobe 5.1.3-1, phantomjs 2.1.1, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.11.0, brotli-1.0.9, certifi-2022.09.24, mutagen-1.46.0, pyxattr-0.8.1, sqlite3-3.40.1, websockets-10.4
[debug] Proxy map: {}
[debug] Loaded 1890 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Available version: [email protected], Current version: [email protected]
Current Build Hash: be5cfb6be8930e1a5f427533ec32f2a481276b3da7b249d0150ce2b740ccf1ce
yt-dlp is up to date ([email protected])
[bfmtv] Extracting URL: https://www.bfmtv.com/alsace/replay-emissions/bonsoir-l-alsace/diabete-le-groupe-lilly-investit-160-millions-d-euros-a-fegersheim_VN-202310230671.html
[bfmtv] 202310230671: Downloading webpage
ERROR: [bfmtv] 202310230671: Unable to extract video block; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 715, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/bfmtv.py", line 43, in _real_extract
video_block = extract_attributes(self._search_regex(
^^^^^^^^^^^^^^^^^^^
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 1263, in _search_regex
raise RegexNotFoundError('Unable to extract %s' % _name)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `yt_dlp/extractor/bfmtv.py`
Content:
```
1 import re
2
3 from .common import InfoExtractor
4 from ..utils import extract_attributes
5
6
7 class BFMTVBaseIE(InfoExtractor):
8 _VALID_URL_BASE = r'https?://(?:www\.|rmc\.)?bfmtv\.com/'
9 _VALID_URL_TMPL = _VALID_URL_BASE + r'(?:[^/]+/)*[^/?&#]+_%s[A-Z]-(?P<id>\d{12})\.html'
10 _VIDEO_BLOCK_REGEX = r'(<div[^>]+class="video_block"[^>]*>)'
11 BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/%s_default/index.html?videoId=%s'
12
13 def _brightcove_url_result(self, video_id, video_block):
14 account_id = video_block.get('accountid') or '876450612001'
15 player_id = video_block.get('playerid') or 'I2qBTln4u'
16 return self.url_result(
17 self.BRIGHTCOVE_URL_TEMPLATE % (account_id, player_id, video_id),
18 'BrightcoveNew', video_id)
19
20
21 class BFMTVIE(BFMTVBaseIE):
22 IE_NAME = 'bfmtv'
23 _VALID_URL = BFMTVBaseIE._VALID_URL_TMPL % 'V'
24 _TESTS = [{
25 'url': 'https://www.bfmtv.com/politique/emmanuel-macron-l-islam-est-une-religion-qui-vit-une-crise-aujourd-hui-partout-dans-le-monde_VN-202010020146.html',
26 'info_dict': {
27 'id': '6196747868001',
28 'ext': 'mp4',
29             'title': 'Emmanuel Macron: "L\'Islam est une religion qui vit une crise aujourd’hui, partout dans le monde"',
30             'description': 'Le Président s\'exprime sur la question du séparatisme depuis les Mureaux, dans les Yvelines.',
31 'uploader_id': '876450610001',
32 'upload_date': '20201002',
33 'timestamp': 1601629620,
34 'duration': 44.757,
35 'tags': ['bfmactu', 'politique'],
36 'thumbnail': 'https://cf-images.eu-west-1.prod.boltdns.net/v1/static/876450610001/5041f4c1-bc48-4af8-a256-1b8300ad8ef0/cf2f9114-e8e2-4494-82b4-ab794ea4bc7d/1920x1080/match/image.jpg',
37 },
38 }]
39
40 def _real_extract(self, url):
41 bfmtv_id = self._match_id(url)
42 webpage = self._download_webpage(url, bfmtv_id)
43 video_block = extract_attributes(self._search_regex(
44 self._VIDEO_BLOCK_REGEX, webpage, 'video block'))
45 return self._brightcove_url_result(video_block['videoid'], video_block)
46
47
48 class BFMTVLiveIE(BFMTVIE): # XXX: Do not subclass from concrete IE
49 IE_NAME = 'bfmtv:live'
50 _VALID_URL = BFMTVBaseIE._VALID_URL_BASE + '(?P<id>(?:[^/]+/)?en-direct)'
51 _TESTS = [{
52 'url': 'https://www.bfmtv.com/en-direct/',
53 'info_dict': {
54 'id': '5615950982001',
55 'ext': 'mp4',
56 'title': r're:^le direct BFMTV WEB \d{4}-\d{2}-\d{2} \d{2}:\d{2}$',
57 'uploader_id': '876450610001',
58 'upload_date': '20171018',
59 'timestamp': 1508329950,
60 },
61 'params': {
62 'skip_download': True,
63 },
64 }, {
65 'url': 'https://www.bfmtv.com/economie/en-direct/',
66 'only_matching': True,
67 }]
68
69
70 class BFMTVArticleIE(BFMTVBaseIE):
71 IE_NAME = 'bfmtv:article'
72 _VALID_URL = BFMTVBaseIE._VALID_URL_TMPL % 'A'
73 _TESTS = [{
74 'url': 'https://www.bfmtv.com/sante/covid-19-un-responsable-de-l-institut-pasteur-se-demande-quand-la-france-va-se-reconfiner_AV-202101060198.html',
75 'info_dict': {
76 'id': '202101060198',
77 'title': 'Covid-19: un responsable de l\'Institut Pasteur se demande "quand la France va se reconfiner"',
78 'description': 'md5:947974089c303d3ac6196670ae262843',
79 },
80 'playlist_count': 2,
81 }, {
82 'url': 'https://www.bfmtv.com/international/pour-bolsonaro-le-bresil-est-en-faillite-mais-il-ne-peut-rien-faire_AD-202101060232.html',
83 'only_matching': True,
84 }, {
85 'url': 'https://www.bfmtv.com/sante/covid-19-oui-le-vaccin-de-pfizer-distribue-en-france-a-bien-ete-teste-sur-des-personnes-agees_AN-202101060275.html',
86 'only_matching': True,
87 }, {
88 'url': 'https://rmc.bfmtv.com/actualites/societe/transports/ce-n-est-plus-tout-rentable-le-bioethanol-e85-depasse-1eu-le-litre-des-automobilistes-regrettent_AV-202301100268.html',
89 'info_dict': {
90 'id': '6318445464112',
91 'ext': 'mp4',
92             'title': 'Le plein de bioéthanol fait de plus en plus mal à la pompe',
93 'description': None,
94 'uploader_id': '876630703001',
95 'upload_date': '20230110',
96 'timestamp': 1673341692,
97 'duration': 109.269,
98 'tags': ['rmc', 'show', 'apolline de malherbe', 'info', 'talk', 'matinale', 'radio'],
99 'thumbnail': 'https://cf-images.eu-west-1.prod.boltdns.net/v1/static/876630703001/5bef74b8-9d5e-4480-a21f-60c2e2480c46/96c88b74-f9db-45e1-8040-e199c5da216c/1920x1080/match/image.jpg'
100 }
101 }]
102
103 def _real_extract(self, url):
104 bfmtv_id = self._match_id(url)
105 webpage = self._download_webpage(url, bfmtv_id)
106
107 entries = []
108 for video_block_el in re.findall(self._VIDEO_BLOCK_REGEX, webpage):
109 video_block = extract_attributes(video_block_el)
110 video_id = video_block.get('videoid')
111 if not video_id:
112 continue
113 entries.append(self._brightcove_url_result(video_id, video_block))
114
115 return self.playlist_result(
116 entries, bfmtv_id, self._og_search_title(webpage, fatal=False),
117 self._html_search_meta(['og:description', 'description'], webpage))
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/yt_dlp/extractor/bfmtv.py b/yt_dlp/extractor/bfmtv.py
--- a/yt_dlp/extractor/bfmtv.py
+++ b/yt_dlp/extractor/bfmtv.py
@@ -7,7 +7,7 @@
class BFMTVBaseIE(InfoExtractor):
_VALID_URL_BASE = r'https?://(?:www\.|rmc\.)?bfmtv\.com/'
_VALID_URL_TMPL = _VALID_URL_BASE + r'(?:[^/]+/)*[^/?&#]+_%s[A-Z]-(?P<id>\d{12})\.html'
- _VIDEO_BLOCK_REGEX = r'(<div[^>]+class="video_block"[^>]*>)'
+ _VIDEO_BLOCK_REGEX = r'(<div[^>]+class="video_block[^"]*"[^>]*>)'
BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/%s_default/index.html?videoId=%s'
def _brightcove_url_result(self, video_id, video_block):
@@ -55,8 +55,11 @@
'ext': 'mp4',
'title': r're:^le direct BFMTV WEB \d{4}-\d{2}-\d{2} \d{2}:\d{2}$',
'uploader_id': '876450610001',
- 'upload_date': '20171018',
- 'timestamp': 1508329950,
+ 'upload_date': '20220926',
+ 'timestamp': 1664207191,
+ 'live_status': 'is_live',
+ 'thumbnail': r're:https://.+/image\.jpg',
+ 'tags': [],
},
'params': {
'skip_download': True,
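Editorial note on the golden diff above: the only functional change is the looser class-attribute match, so extra CSS classes after `video_block` no longer break extraction. A minimal sketch of the difference (the markup below is hypothetical, made up solely to show why the stricter pattern stops matching):

```python
import re

# Hypothetical markup: the class attribute carries more than just "video_block".
html = '<div class="video_block video_block_lazy" videoid="123">'

old_pattern = r'(<div[^>]+class="video_block"[^>]*>)'
new_pattern = r'(<div[^>]+class="video_block[^"]*"[^>]*>)'

print(re.search(old_pattern, html))           # None -> "Unable to extract video block"
print(re.search(new_pattern, html).group(1))  # matches the whole opening <div ...> tag
```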
|
{"golden_diff": "diff --git a/yt_dlp/extractor/bfmtv.py b/yt_dlp/extractor/bfmtv.py\n--- a/yt_dlp/extractor/bfmtv.py\n+++ b/yt_dlp/extractor/bfmtv.py\n@@ -7,7 +7,7 @@\n class BFMTVBaseIE(InfoExtractor):\n _VALID_URL_BASE = r'https?://(?:www\\.|rmc\\.)?bfmtv\\.com/'\n _VALID_URL_TMPL = _VALID_URL_BASE + r'(?:[^/]+/)*[^/?&#]+_%s[A-Z]-(?P<id>\\d{12})\\.html'\n- _VIDEO_BLOCK_REGEX = r'(<div[^>]+class=\"video_block\"[^>]*>)'\n+ _VIDEO_BLOCK_REGEX = r'(<div[^>]+class=\"video_block[^\"]*\"[^>]*>)'\n BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/%s_default/index.html?videoId=%s'\n \n def _brightcove_url_result(self, video_id, video_block):\n@@ -55,8 +55,11 @@\n 'ext': 'mp4',\n 'title': r're:^le direct BFMTV WEB \\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}$',\n 'uploader_id': '876450610001',\n- 'upload_date': '20171018',\n- 'timestamp': 1508329950,\n+ 'upload_date': '20220926',\n+ 'timestamp': 1664207191,\n+ 'live_status': 'is_live',\n+ 'thumbnail': r're:https://.+/image\\.jpg',\n+ 'tags': [],\n },\n 'params': {\n 'skip_download': True,\n", "issue": "bfmtv: Unable to extract video block\n### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting that yt-dlp is broken on a **supported** site\n- [X] I've verified that I'm running yt-dlp version **2023.10.13** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nFrance\n\n### Provide a description that is worded well enough to be understood\n\nyt-dlp is unable to extract replay video from bfmtv websites, from domain www.bfmtv.com. 
It was still working a few days ago.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\nyt-dlp -vU https://www.bfmtv.com/alsace/replay-emissions/bonsoir-l-alsace/diabete-le-groupe-lilly-investit-160-millions-d-euros-a-fegersheim_VN-202310230671.html\r\n[debug] Command-line config: ['-vU', 'https://www.bfmtv.com/alsace/replay-emissions/bonsoir-l-alsace/diabete-le-groupe-lilly-investit-160-millions-d-euros-a-fegersheim_VN-202310230671.html']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version [email protected] [b634ba742] (zip)\r\n[debug] Python 3.11.2 (CPython x86_64 64bit) - Linux-6.1.0-13-amd64-x86_64-with-glibc2.36 (OpenSSL 3.0.11 19 Sep 2023, glibc 2.36)\r\n[debug] exe versions: ffmpeg 5.1.3-1 (setts), ffprobe 5.1.3-1, phantomjs 2.1.1, rtmpdump 2.4\r\n[debug] Optional libraries: Cryptodome-3.11.0, brotli-1.0.9, certifi-2022.09.24, mutagen-1.46.0, pyxattr-0.8.1, sqlite3-3.40.1, websockets-10.4\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1890 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nAvailable version: [email protected], Current version: [email protected]\r\nCurrent Build Hash: be5cfb6be8930e1a5f427533ec32f2a481276b3da7b249d0150ce2b740ccf1ce\r\nyt-dlp is up to date ([email protected])\r\n[bfmtv] Extracting URL: https://www.bfmtv.com/alsace/replay-emissions/bonsoir-l-alsace/diabete-le-groupe-lilly-investit-160-millions-d-euros-a-fegersheim_VN-202310230671.html\r\n[bfmtv] 202310230671: Downloading webpage\r\nERROR: [bfmtv] 202310230671: Unable to extract video block; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U\r\n File \"/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py\", line 715, in extract\r\n ie_result = self._real_extract(url)\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/bin/yt-dlp/yt_dlp/extractor/bfmtv.py\", line 43, in _real_extract\r\n video_block = extract_attributes(self._search_regex(\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py\", line 1263, in _search_regex\r\n raise RegexNotFoundError('Unable to extract %s' % _name)\n```\n\n", "before_files": [{"content": "import re\n\nfrom .common import InfoExtractor\nfrom ..utils import extract_attributes\n\n\nclass BFMTVBaseIE(InfoExtractor):\n _VALID_URL_BASE = r'https?://(?:www\\.|rmc\\.)?bfmtv\\.com/'\n _VALID_URL_TMPL = _VALID_URL_BASE + r'(?:[^/]+/)*[^/?&#]+_%s[A-Z]-(?P<id>\\d{12})\\.html'\n _VIDEO_BLOCK_REGEX = r'(<div[^>]+class=\"video_block\"[^>]*>)'\n BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/%s_default/index.html?videoId=%s'\n\n def _brightcove_url_result(self, video_id, video_block):\n account_id = video_block.get('accountid') or '876450612001'\n player_id = video_block.get('playerid') or 'I2qBTln4u'\n return self.url_result(\n self.BRIGHTCOVE_URL_TEMPLATE % (account_id, player_id, video_id),\n 'BrightcoveNew', video_id)\n\n\nclass BFMTVIE(BFMTVBaseIE):\n IE_NAME = 'bfmtv'\n _VALID_URL = BFMTVBaseIE._VALID_URL_TMPL % 'V'\n _TESTS = [{\n 'url': 'https://www.bfmtv.com/politique/emmanuel-macron-l-islam-est-une-religion-qui-vit-une-crise-aujourd-hui-partout-dans-le-monde_VN-202010020146.html',\n 'info_dict': {\n 'id': '6196747868001',\n 'ext': 'mp4',\n 'title': 'Emmanuel Macron: \"L\\'Islam est une religion qui vit une crise aujourd\u2019hui, partout dans le monde\"',\n 'description': 'Le Pr\u00e9sident s\\'exprime sur la question du s\u00e9paratisme depuis les Mureaux, dans les Yvelines.',\n 'uploader_id': '876450610001',\n 'upload_date': '20201002',\n 'timestamp': 1601629620,\n 'duration': 44.757,\n 'tags': ['bfmactu', 'politique'],\n 'thumbnail': 'https://cf-images.eu-west-1.prod.boltdns.net/v1/static/876450610001/5041f4c1-bc48-4af8-a256-1b8300ad8ef0/cf2f9114-e8e2-4494-82b4-ab794ea4bc7d/1920x1080/match/image.jpg',\n },\n }]\n\n def _real_extract(self, url):\n bfmtv_id = self._match_id(url)\n webpage = self._download_webpage(url, bfmtv_id)\n video_block = extract_attributes(self._search_regex(\n self._VIDEO_BLOCK_REGEX, webpage, 'video block'))\n return self._brightcove_url_result(video_block['videoid'], video_block)\n\n\nclass BFMTVLiveIE(BFMTVIE): # XXX: Do not subclass from concrete IE\n IE_NAME = 'bfmtv:live'\n _VALID_URL = BFMTVBaseIE._VALID_URL_BASE + '(?P<id>(?:[^/]+/)?en-direct)'\n _TESTS = [{\n 'url': 'https://www.bfmtv.com/en-direct/',\n 'info_dict': {\n 'id': '5615950982001',\n 'ext': 'mp4',\n 'title': r're:^le direct BFMTV WEB \\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}$',\n 'uploader_id': '876450610001',\n 'upload_date': '20171018',\n 'timestamp': 1508329950,\n },\n 'params': {\n 'skip_download': True,\n },\n }, {\n 'url': 'https://www.bfmtv.com/economie/en-direct/',\n 'only_matching': True,\n }]\n\n\nclass BFMTVArticleIE(BFMTVBaseIE):\n IE_NAME = 'bfmtv:article'\n _VALID_URL = BFMTVBaseIE._VALID_URL_TMPL % 'A'\n _TESTS = [{\n 'url': 'https://www.bfmtv.com/sante/covid-19-un-responsable-de-l-institut-pasteur-se-demande-quand-la-france-va-se-reconfiner_AV-202101060198.html',\n 'info_dict': {\n 'id': '202101060198',\n 'title': 'Covid-19: un responsable de l\\'Institut Pasteur se demande \"quand la France va se 
reconfiner\"',\n 'description': 'md5:947974089c303d3ac6196670ae262843',\n },\n 'playlist_count': 2,\n }, {\n 'url': 'https://www.bfmtv.com/international/pour-bolsonaro-le-bresil-est-en-faillite-mais-il-ne-peut-rien-faire_AD-202101060232.html',\n 'only_matching': True,\n }, {\n 'url': 'https://www.bfmtv.com/sante/covid-19-oui-le-vaccin-de-pfizer-distribue-en-france-a-bien-ete-teste-sur-des-personnes-agees_AN-202101060275.html',\n 'only_matching': True,\n }, {\n 'url': 'https://rmc.bfmtv.com/actualites/societe/transports/ce-n-est-plus-tout-rentable-le-bioethanol-e85-depasse-1eu-le-litre-des-automobilistes-regrettent_AV-202301100268.html',\n 'info_dict': {\n 'id': '6318445464112',\n 'ext': 'mp4',\n 'title': 'Le plein de bio\u00e9thanol fait de plus en plus mal \u00e0 la pompe',\n 'description': None,\n 'uploader_id': '876630703001',\n 'upload_date': '20230110',\n 'timestamp': 1673341692,\n 'duration': 109.269,\n 'tags': ['rmc', 'show', 'apolline de malherbe', 'info', 'talk', 'matinale', 'radio'],\n 'thumbnail': 'https://cf-images.eu-west-1.prod.boltdns.net/v1/static/876630703001/5bef74b8-9d5e-4480-a21f-60c2e2480c46/96c88b74-f9db-45e1-8040-e199c5da216c/1920x1080/match/image.jpg'\n }\n }]\n\n def _real_extract(self, url):\n bfmtv_id = self._match_id(url)\n webpage = self._download_webpage(url, bfmtv_id)\n\n entries = []\n for video_block_el in re.findall(self._VIDEO_BLOCK_REGEX, webpage):\n video_block = extract_attributes(video_block_el)\n video_id = video_block.get('videoid')\n if not video_id:\n continue\n entries.append(self._brightcove_url_result(video_id, video_block))\n\n return self.playlist_result(\n entries, bfmtv_id, self._og_search_title(webpage, fatal=False),\n self._html_search_meta(['og:description', 'description'], webpage))\n", "path": "yt_dlp/extractor/bfmtv.py"}], "after_files": [{"content": "import re\n\nfrom .common import InfoExtractor\nfrom ..utils import extract_attributes\n\n\nclass BFMTVBaseIE(InfoExtractor):\n _VALID_URL_BASE = r'https?://(?:www\\.|rmc\\.)?bfmtv\\.com/'\n _VALID_URL_TMPL = _VALID_URL_BASE + r'(?:[^/]+/)*[^/?&#]+_%s[A-Z]-(?P<id>\\d{12})\\.html'\n _VIDEO_BLOCK_REGEX = r'(<div[^>]+class=\"video_block[^\"]*\"[^>]*>)'\n BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/%s_default/index.html?videoId=%s'\n\n def _brightcove_url_result(self, video_id, video_block):\n account_id = video_block.get('accountid') or '876450612001'\n player_id = video_block.get('playerid') or 'I2qBTln4u'\n return self.url_result(\n self.BRIGHTCOVE_URL_TEMPLATE % (account_id, player_id, video_id),\n 'BrightcoveNew', video_id)\n\n\nclass BFMTVIE(BFMTVBaseIE):\n IE_NAME = 'bfmtv'\n _VALID_URL = BFMTVBaseIE._VALID_URL_TMPL % 'V'\n _TESTS = [{\n 'url': 'https://www.bfmtv.com/politique/emmanuel-macron-l-islam-est-une-religion-qui-vit-une-crise-aujourd-hui-partout-dans-le-monde_VN-202010020146.html',\n 'info_dict': {\n 'id': '6196747868001',\n 'ext': 'mp4',\n 'title': 'Emmanuel Macron: \"L\\'Islam est une religion qui vit une crise aujourd\u2019hui, partout dans le monde\"',\n 'description': 'Le Pr\u00e9sident s\\'exprime sur la question du s\u00e9paratisme depuis les Mureaux, dans les Yvelines.',\n 'uploader_id': '876450610001',\n 'upload_date': '20201002',\n 'timestamp': 1601629620,\n 'duration': 44.757,\n 'tags': ['bfmactu', 'politique'],\n 'thumbnail': 'https://cf-images.eu-west-1.prod.boltdns.net/v1/static/876450610001/5041f4c1-bc48-4af8-a256-1b8300ad8ef0/cf2f9114-e8e2-4494-82b4-ab794ea4bc7d/1920x1080/match/image.jpg',\n },\n }]\n\n def _real_extract(self, url):\n bfmtv_id 
= self._match_id(url)\n webpage = self._download_webpage(url, bfmtv_id)\n video_block = extract_attributes(self._search_regex(\n self._VIDEO_BLOCK_REGEX, webpage, 'video block'))\n return self._brightcove_url_result(video_block['videoid'], video_block)\n\n\nclass BFMTVLiveIE(BFMTVIE): # XXX: Do not subclass from concrete IE\n IE_NAME = 'bfmtv:live'\n _VALID_URL = BFMTVBaseIE._VALID_URL_BASE + '(?P<id>(?:[^/]+/)?en-direct)'\n _TESTS = [{\n 'url': 'https://www.bfmtv.com/en-direct/',\n 'info_dict': {\n 'id': '5615950982001',\n 'ext': 'mp4',\n 'title': r're:^le direct BFMTV WEB \\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}$',\n 'uploader_id': '876450610001',\n 'upload_date': '20220926',\n 'timestamp': 1664207191,\n 'live_status': 'is_live',\n 'thumbnail': r're:https://.+/image\\.jpg',\n 'tags': [],\n },\n 'params': {\n 'skip_download': True,\n },\n }, {\n 'url': 'https://www.bfmtv.com/economie/en-direct/',\n 'only_matching': True,\n }]\n\n\nclass BFMTVArticleIE(BFMTVBaseIE):\n IE_NAME = 'bfmtv:article'\n _VALID_URL = BFMTVBaseIE._VALID_URL_TMPL % 'A'\n _TESTS = [{\n 'url': 'https://www.bfmtv.com/sante/covid-19-un-responsable-de-l-institut-pasteur-se-demande-quand-la-france-va-se-reconfiner_AV-202101060198.html',\n 'info_dict': {\n 'id': '202101060198',\n 'title': 'Covid-19: un responsable de l\\'Institut Pasteur se demande \"quand la France va se reconfiner\"',\n 'description': 'md5:947974089c303d3ac6196670ae262843',\n },\n 'playlist_count': 2,\n }, {\n 'url': 'https://www.bfmtv.com/international/pour-bolsonaro-le-bresil-est-en-faillite-mais-il-ne-peut-rien-faire_AD-202101060232.html',\n 'only_matching': True,\n }, {\n 'url': 'https://www.bfmtv.com/sante/covid-19-oui-le-vaccin-de-pfizer-distribue-en-france-a-bien-ete-teste-sur-des-personnes-agees_AN-202101060275.html',\n 'only_matching': True,\n }, {\n 'url': 'https://rmc.bfmtv.com/actualites/societe/transports/ce-n-est-plus-tout-rentable-le-bioethanol-e85-depasse-1eu-le-litre-des-automobilistes-regrettent_AV-202301100268.html',\n 'info_dict': {\n 'id': '6318445464112',\n 'ext': 'mp4',\n 'title': 'Le plein de bio\u00e9thanol fait de plus en plus mal \u00e0 la pompe',\n 'description': None,\n 'uploader_id': '876630703001',\n 'upload_date': '20230110',\n 'timestamp': 1673341692,\n 'duration': 109.269,\n 'tags': ['rmc', 'show', 'apolline de malherbe', 'info', 'talk', 'matinale', 'radio'],\n 'thumbnail': 'https://cf-images.eu-west-1.prod.boltdns.net/v1/static/876630703001/5bef74b8-9d5e-4480-a21f-60c2e2480c46/96c88b74-f9db-45e1-8040-e199c5da216c/1920x1080/match/image.jpg'\n }\n }]\n\n def _real_extract(self, url):\n bfmtv_id = self._match_id(url)\n webpage = self._download_webpage(url, bfmtv_id)\n\n entries = []\n for video_block_el in re.findall(self._VIDEO_BLOCK_REGEX, webpage):\n video_block = extract_attributes(video_block_el)\n video_id = video_block.get('videoid')\n if not video_id:\n continue\n entries.append(self._brightcove_url_result(video_id, video_block))\n\n return self.playlist_result(\n entries, bfmtv_id, self._og_search_title(webpage, fatal=False),\n self._html_search_meta(['og:description', 'description'], webpage))\n", "path": "yt_dlp/extractor/bfmtv.py"}]}
| 3,787 | 433 |
gh_patches_debug_15461
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-6508
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Python 3.9 compatibility (resolve DeprecationWarning from collections)
### Issue Summary
Wagtail hits the warning `DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working` in various places.
- [x] Jinja2>2.10.1
- [x] html5lib>1.0.1 (see https://github.com/html5lib/html5lib-python/issues/419)
- [x] beautifulsoup4>=4.8.0 (or maybe earlier)
- [x] fix wagtail/utils/l18n/translation.py:5 (#5485)
### Steps to Reproduce
1. run `tox -e py37-dj22-sqlite-noelasticsearch -- --deprecation all`
2. note deprecation warnings:
```
site-packages/jinja2/utils.py:485: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
site-packages/jinja2/runtime.py:318: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
```
resolved by https://github.com/pallets/jinja/pull/867 - the fix is on master but not yet released (as of Jinja2==2.10.1)
```
site-packages/html5lib/_trie/_base.py:3: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from collections import Mapping
```
Fixed in https://github.com/html5lib/html5lib-python/issues/402 , but not released yet as of 1.0.1 (see https://github.com/html5lib/html5lib-python/issues/419 )
```
site-packages/bs4/element.py:1134: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
if not isinstance(formatter, collections.Callable):
```
https://bugs.launchpad.net/beautifulsoup/+bug/1778909 - resolved by beautifulsoup4>=4.8.0 (and earlier I think)
I'm also seeing one in Wagtail from my own project's tests; the above tox run didn't seem to hit it:
```
wagtail/utils/l18n/translation.py:5: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
```
See #5485
* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project:
yes (on master at bb4e2fe2dfe69fc3143f38c6e34dbe6f2f2f01e0 )
### Technical details
* Python version: Run `python --version`.
Python 3.7.3
* Django version: Look in your requirements.txt, or run `pip show django | grep Version`.
Django 2.2
* Wagtail version: Look at the bottom of the Settings menu in the Wagtail admin, or run `pip show wagtail | grep Version:`.
2.7.0.alpha.0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 from wagtail import __version__
4 from wagtail.utils.setup import assets, check_bdist_egg, sdist
5
6
7 try:
8 from setuptools import find_packages, setup
9 except ImportError:
10 from distutils.core import setup
11
12
13 # Hack to prevent "TypeError: 'NoneType' object is not callable" error
14 # in multiprocessing/util.py _exit_function when setup.py exits
15 # (see http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)
16 try:
17 import multiprocessing # noqa
18 except ImportError:
19 pass
20
21
22 install_requires = [
23 "Django>=2.2,<3.2",
24 "django-modelcluster>=5.1,<6.0",
25 "django-taggit>=1.0,<2.0",
26 "django-treebeard>=4.2.0,<5.0",
27 "djangorestframework>=3.11.1,<4.0",
28 "django-filter>=2.2,<3.0",
29 "draftjs_exporter>=2.1.5,<3.0",
30 "Pillow>=4.0.0,<9.0.0",
31 "beautifulsoup4>=4.8,<4.9",
32 "html5lib>=0.999,<2",
33 "Willow>=1.4,<1.5",
34 "requests>=2.11.1,<3.0",
35 "l18n>=2018.5",
36 "xlsxwriter>=1.2.8,<2.0",
37 "tablib[xls,xlsx]>=0.14.0",
38 "anyascii>=0.1.5",
39 ]
40
41 # Testing dependencies
42 testing_extras = [
43 # Required for running the tests
44 'python-dateutil>=2.2',
45 'pytz>=2014.7',
46 'elasticsearch>=1.0.0,<3.0',
47 'Jinja2>=2.8,<3.0',
48 'boto3>=1.4,<1.5',
49 'freezegun>=0.3.8',
50 'openpyxl>=2.6.4',
51 'Unidecode>=0.04.14,<2.0',
52
53 # For coverage and PEP8 linting
54 'coverage>=3.7.0',
55 'flake8>=3.6.0',
56 'isort==5.6.4', # leave this pinned - it tends to change rules between patch releases
57 'flake8-blind-except==0.1.1',
58 'flake8-print==2.0.2',
59 'doc8==0.8.1',
60
61 # For templates linting
62 'jinjalint>=0.5',
63
64 # Pipenv hack to fix broken dependency causing CircleCI failures
65 'docutils==0.15',
66
67 # django-taggit 1.3.0 made changes to verbose_name which affect migrations;
68 # the test suite migrations correspond to >=1.3.0
69 'django-taggit>=1.3.0,<2.0',
70 ]
71
72 # Documentation dependencies
73 documentation_extras = [
74 'pyenchant>=3.1.1,<4',
75 'sphinxcontrib-spelling>=5.4.0,<6',
76 'Sphinx>=1.5.2',
77 'sphinx-autobuild>=0.6.0',
78 'sphinx_rtd_theme>=0.1.9',
79 ]
80
81 setup(
82 name='wagtail',
83 version=__version__,
84 description='A Django content management system.',
85 author='Wagtail core team + contributors',
86 author_email='[email protected]', # For support queries, please see https://docs.wagtail.io/en/stable/support.html
87 url='https://wagtail.io/',
88 packages=find_packages(),
89 include_package_data=True,
90 license='BSD',
91 long_description="Wagtail is an open source content management \
92 system built on Django, with a strong community and commercial support. \
93 It’s focused on user experience, and offers precise control for \
94 designers and developers.\n\n\
95 For more details, see https://wagtail.io, https://docs.wagtail.io and \
96 https://github.com/wagtail/wagtail/.",
97 classifiers=[
98 'Development Status :: 5 - Production/Stable',
99 'Environment :: Web Environment',
100 'Intended Audience :: Developers',
101 'License :: OSI Approved :: BSD License',
102 'Operating System :: OS Independent',
103 'Programming Language :: Python',
104 'Programming Language :: Python :: 3',
105 'Programming Language :: Python :: 3.6',
106 'Programming Language :: Python :: 3.7',
107 'Programming Language :: Python :: 3.8',
108 'Framework :: Django',
109 'Framework :: Django :: 2.2',
110 'Framework :: Django :: 3.0',
111 'Framework :: Django :: 3.1',
112 'Framework :: Wagtail',
113 'Topic :: Internet :: WWW/HTTP :: Site Management',
114 ],
115 python_requires='>=3.6',
116 install_requires=install_requires,
117 extras_require={
118 'testing': testing_extras,
119 'docs': documentation_extras
120 },
121 entry_points="""
122 [console_scripts]
123 wagtail=wagtail.bin.wagtail:main
124 """,
125 zip_safe=False,
126 cmdclass={
127 'sdist': sdist,
128 'bdist_egg': check_bdist_egg,
129 'assets': assets,
130 },
131 )
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -45,7 +45,7 @@
'pytz>=2014.7',
'elasticsearch>=1.0.0,<3.0',
'Jinja2>=2.8,<3.0',
- 'boto3>=1.4,<1.5',
+ 'boto3>=1.16,<1.17',
'freezegun>=0.3.8',
'openpyxl>=2.6.4',
'Unidecode>=0.04.14,<2.0',
@@ -105,6 +105,7 @@
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
+ 'Programming Language :: Python :: 3.9',
'Framework :: Django',
'Framework :: Django :: 2.2',
'Framework :: Django :: 3.0',
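Editorial note: the diff above only bumps boto3 in the testing extras and advertises Python 3.9 in the classifiers; the `DeprecationWarning`s quoted in the issue come from dependencies importing ABCs from `collections` instead of `collections.abc` (deprecated since Python 3.3, removed in Python 3.10), which is why the underlying fixes are the dependency upgrades listed in the issue checklist. A rough sketch of the import change those libraries needed, with the usual compatibility fallback:

```python
# Forward-compatible import: the ABCs live in collections.abc.
try:
    from collections.abc import Mapping
except ImportError:  # pragma: no cover - only very old Pythons lack collections.abc
    from collections import Mapping

print(isinstance({}, Mapping))  # True
```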
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -45,7 +45,7 @@\n 'pytz>=2014.7',\n 'elasticsearch>=1.0.0,<3.0',\n 'Jinja2>=2.8,<3.0',\n- 'boto3>=1.4,<1.5',\n+ 'boto3>=1.16,<1.17',\n 'freezegun>=0.3.8',\n 'openpyxl>=2.6.4',\n 'Unidecode>=0.04.14,<2.0',\n@@ -105,6 +105,7 @@\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n+ 'Programming Language :: Python :: 3.9',\n 'Framework :: Django',\n 'Framework :: Django :: 2.2',\n 'Framework :: Django :: 3.0',\n", "issue": "Python 3.9 compatibility (resolve DeprecationWarning from collections)\n### Issue Summary\r\n\r\nWagtail hits the warning `DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working` in various places.\r\n\r\n- [x] Jinja2>2.10.1 \r\n- [x] html5lib>1.0.1 (see https://github.com/html5lib/html5lib-python/issues/419)\r\n- [x] beautifulsoup4>=4.8.0 (or maybe earlier)\r\n- [x] fix wagtail/utils/l18n/translation.py:5 (#5485)\r\n\r\n### Steps to Reproduce\r\n\r\n1. run `tox -e py37-dj22-sqlite-noelasticsearch -- --deprecation all`\r\n2. note deprecation warnings:\r\n\r\n\r\n```\r\nsite-packages/jinja2/utils.py:485: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\r\nsite-packages/jinja2/runtime.py:318: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\r\n```\r\nresolved by https://github.com/pallets/jinja/pull/867 - the fix is on master not released yet (as of Jinja2==2.10.1)\r\n\r\n```\r\nsite-packages/html5lib/_trie/_base.py:3: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\r\n from collections import Mapping\r\n```\r\n\r\nFixed in https://github.com/html5lib/html5lib-python/issues/402 , but not released yet as of 1.0.1 (see https://github.com/html5lib/html5lib-python/issues/419 )\r\n\r\n```\r\nsite-packages/bs4/element.py:1134: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\r\n if not isinstance(formatter, collections.Callable):\r\n```\r\nhttps://bugs.launchpad.net/beautifulsoup/+bug/1778909 - resolved by beautifulsoup4>=4.8.0 (and earlier I think)\r\n\r\nI'm also seeing one in Wagtail from my own project tests here, the above tox run didn't seem to hit it:\r\n\r\n```\r\nwagtail/utils/l18n/translation.py:5: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\r\n```\r\n\r\nSee #5485 \r\n\r\n* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: \r\n\r\nyes (on master at bb4e2fe2dfe69fc3143f38c6e34dbe6f2f2f01e0 )\r\n\r\n### Technical details\r\n\r\n* Python version: Run `python --version`.\r\n\r\nPython 3.7.3\r\n\r\n* Django version: Look in your requirements.txt, or run `pip show django | grep Version`.\r\n\r\nDjango 2.2\r\n\r\n* Wagtail version: Look at the bottom of the Settings menu in the Wagtail admin, or run `pip show wagtail | grep Version:`.\r\n\r\n2.7.0.alpha.0\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom wagtail import __version__\nfrom wagtail.utils.setup import assets, 
check_bdist_egg, sdist\n\n\ntry:\n from setuptools import find_packages, setup\nexcept ImportError:\n from distutils.core import setup\n\n\n# Hack to prevent \"TypeError: 'NoneType' object is not callable\" error\n# in multiprocessing/util.py _exit_function when setup.py exits\n# (see http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)\ntry:\n import multiprocessing # noqa\nexcept ImportError:\n pass\n\n\ninstall_requires = [\n \"Django>=2.2,<3.2\",\n \"django-modelcluster>=5.1,<6.0\",\n \"django-taggit>=1.0,<2.0\",\n \"django-treebeard>=4.2.0,<5.0\",\n \"djangorestframework>=3.11.1,<4.0\",\n \"django-filter>=2.2,<3.0\",\n \"draftjs_exporter>=2.1.5,<3.0\",\n \"Pillow>=4.0.0,<9.0.0\",\n \"beautifulsoup4>=4.8,<4.9\",\n \"html5lib>=0.999,<2\",\n \"Willow>=1.4,<1.5\",\n \"requests>=2.11.1,<3.0\",\n \"l18n>=2018.5\",\n \"xlsxwriter>=1.2.8,<2.0\",\n \"tablib[xls,xlsx]>=0.14.0\",\n \"anyascii>=0.1.5\",\n]\n\n# Testing dependencies\ntesting_extras = [\n # Required for running the tests\n 'python-dateutil>=2.2',\n 'pytz>=2014.7',\n 'elasticsearch>=1.0.0,<3.0',\n 'Jinja2>=2.8,<3.0',\n 'boto3>=1.4,<1.5',\n 'freezegun>=0.3.8',\n 'openpyxl>=2.6.4',\n 'Unidecode>=0.04.14,<2.0',\n\n # For coverage and PEP8 linting\n 'coverage>=3.7.0',\n 'flake8>=3.6.0',\n 'isort==5.6.4', # leave this pinned - it tends to change rules between patch releases\n 'flake8-blind-except==0.1.1',\n 'flake8-print==2.0.2',\n 'doc8==0.8.1',\n\n # For templates linting\n 'jinjalint>=0.5',\n\n # Pipenv hack to fix broken dependency causing CircleCI failures\n 'docutils==0.15',\n\n # django-taggit 1.3.0 made changes to verbose_name which affect migrations;\n # the test suite migrations correspond to >=1.3.0\n 'django-taggit>=1.3.0,<2.0',\n]\n\n# Documentation dependencies\ndocumentation_extras = [\n 'pyenchant>=3.1.1,<4',\n 'sphinxcontrib-spelling>=5.4.0,<6',\n 'Sphinx>=1.5.2',\n 'sphinx-autobuild>=0.6.0',\n 'sphinx_rtd_theme>=0.1.9',\n]\n\nsetup(\n name='wagtail',\n version=__version__,\n description='A Django content management system.',\n author='Wagtail core team + contributors',\n author_email='[email protected]', # For support queries, please see https://docs.wagtail.io/en/stable/support.html\n url='https://wagtail.io/',\n packages=find_packages(),\n include_package_data=True,\n license='BSD',\n long_description=\"Wagtail is an open source content management \\\nsystem built on Django, with a strong community and commercial support. 
\\\nIt\u2019s focused on user experience, and offers precise control for \\\ndesigners and developers.\\n\\n\\\nFor more details, see https://wagtail.io, https://docs.wagtail.io and \\\nhttps://github.com/wagtail/wagtail/.\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Framework :: Django',\n 'Framework :: Django :: 2.2',\n 'Framework :: Django :: 3.0',\n 'Framework :: Django :: 3.1',\n 'Framework :: Wagtail',\n 'Topic :: Internet :: WWW/HTTP :: Site Management',\n ],\n python_requires='>=3.6',\n install_requires=install_requires,\n extras_require={\n 'testing': testing_extras,\n 'docs': documentation_extras\n },\n entry_points=\"\"\"\n [console_scripts]\n wagtail=wagtail.bin.wagtail:main\n \"\"\",\n zip_safe=False,\n cmdclass={\n 'sdist': sdist,\n 'bdist_egg': check_bdist_egg,\n 'assets': assets,\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nfrom wagtail import __version__\nfrom wagtail.utils.setup import assets, check_bdist_egg, sdist\n\n\ntry:\n from setuptools import find_packages, setup\nexcept ImportError:\n from distutils.core import setup\n\n\n# Hack to prevent \"TypeError: 'NoneType' object is not callable\" error\n# in multiprocessing/util.py _exit_function when setup.py exits\n# (see http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)\ntry:\n import multiprocessing # noqa\nexcept ImportError:\n pass\n\n\ninstall_requires = [\n \"Django>=2.2,<3.2\",\n \"django-modelcluster>=5.1,<6.0\",\n \"django-taggit>=1.0,<2.0\",\n \"django-treebeard>=4.2.0,<5.0\",\n \"djangorestframework>=3.11.1,<4.0\",\n \"django-filter>=2.2,<3.0\",\n \"draftjs_exporter>=2.1.5,<3.0\",\n \"Pillow>=4.0.0,<9.0.0\",\n \"beautifulsoup4>=4.8,<4.9\",\n \"html5lib>=0.999,<2\",\n \"Willow>=1.4,<1.5\",\n \"requests>=2.11.1,<3.0\",\n \"l18n>=2018.5\",\n \"xlsxwriter>=1.2.8,<2.0\",\n \"tablib[xls,xlsx]>=0.14.0\",\n \"anyascii>=0.1.5\",\n]\n\n# Testing dependencies\ntesting_extras = [\n # Required for running the tests\n 'python-dateutil>=2.2',\n 'pytz>=2014.7',\n 'elasticsearch>=1.0.0,<3.0',\n 'Jinja2>=2.8,<3.0',\n 'boto3>=1.16,<1.17',\n 'freezegun>=0.3.8',\n 'openpyxl>=2.6.4',\n 'Unidecode>=0.04.14,<2.0',\n\n # For coverage and PEP8 linting\n 'coverage>=3.7.0',\n 'flake8>=3.6.0',\n 'isort==5.6.4', # leave this pinned - it tends to change rules between patch releases\n 'flake8-blind-except==0.1.1',\n 'flake8-print==2.0.2',\n 'doc8==0.8.1',\n\n # For templates linting\n 'jinjalint>=0.5',\n\n # Pipenv hack to fix broken dependency causing CircleCI failures\n 'docutils==0.15',\n\n # django-taggit 1.3.0 made changes to verbose_name which affect migrations;\n # the test suite migrations correspond to >=1.3.0\n 'django-taggit>=1.3.0,<2.0',\n]\n\n# Documentation dependencies\ndocumentation_extras = [\n 'pyenchant>=3.1.1,<4',\n 'sphinxcontrib-spelling>=5.4.0,<6',\n 'Sphinx>=1.5.2',\n 'sphinx-autobuild>=0.6.0',\n 'sphinx_rtd_theme>=0.1.9',\n]\n\nsetup(\n name='wagtail',\n version=__version__,\n description='A Django content management system.',\n author='Wagtail core team + contributors',\n author_email='[email protected]', # For support queries, please see 
https://docs.wagtail.io/en/stable/support.html\n url='https://wagtail.io/',\n packages=find_packages(),\n include_package_data=True,\n license='BSD',\n long_description=\"Wagtail is an open source content management \\\nsystem built on Django, with a strong community and commercial support. \\\nIt\u2019s focused on user experience, and offers precise control for \\\ndesigners and developers.\\n\\n\\\nFor more details, see https://wagtail.io, https://docs.wagtail.io and \\\nhttps://github.com/wagtail/wagtail/.\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Framework :: Django',\n 'Framework :: Django :: 2.2',\n 'Framework :: Django :: 3.0',\n 'Framework :: Django :: 3.1',\n 'Framework :: Wagtail',\n 'Topic :: Internet :: WWW/HTTP :: Site Management',\n ],\n python_requires='>=3.6',\n install_requires=install_requires,\n extras_require={\n 'testing': testing_extras,\n 'docs': documentation_extras\n },\n entry_points=\"\"\"\n [console_scripts]\n wagtail=wagtail.bin.wagtail:main\n \"\"\",\n zip_safe=False,\n cmdclass={\n 'sdist': sdist,\n 'bdist_egg': check_bdist_egg,\n 'assets': assets,\n },\n)\n", "path": "setup.py"}]}
| 2,548 | 239 |
gh_patches_debug_43494
|
rasdani/github-patches
|
git_diff
|
microsoft__botbuilder-python-299
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
date_resolver_dialog - test of "definite"
## Version SDK v4.5.0b3
in botbuilder-python/samples/13.core-bot/dialogs/date_resolver_dialog.py
## Describe the bug
Line : _if "definite" in Timex(timex).types:_
The goal of this line is to handle ambiguous dates such as the timex value XXXX-05-17,
so the test must be "not in" instead of "in" ("definite" is the type for an unambiguous date).
[bug]
--- END ISSUE ---
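To restate the reported logic error before the file listings: the dialog should reprompt only when the timex is ambiguous, i.e. when `"definite"` is *not* among its types. A small standalone sketch of that check (expected output follows the behaviour described in the issue):

```python
from datatypes_date_time.timex import Timex

# "XXXX-05-17" has no year, so it is ambiguous; "2019-05-17" is fully specified.
for value in ("XXXX-05-17", "2019-05-17"):
    needs_reprompt = "definite" not in Timex(value).types
    print(value, "-> reprompt" if needs_reprompt else "-> accept")
```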
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `samples/13.core-bot/dialogs/date_resolver_dialog.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from botbuilder.core import MessageFactory
5 from botbuilder.dialogs import WaterfallDialog, DialogTurnResult, WaterfallStepContext
6 from botbuilder.dialogs.prompts import (
7 DateTimePrompt,
8 PromptValidatorContext,
9 PromptOptions,
10 DateTimeResolution,
11 )
12 from botbuilder.schema import InputHints
13 from .cancel_and_help_dialog import CancelAndHelpDialog
14
15 from datatypes_date_time.timex import Timex
16
17
18 class DateResolverDialog(CancelAndHelpDialog):
19 def __init__(self, dialog_id: str = None):
20 super(DateResolverDialog, self).__init__(
21 dialog_id or DateResolverDialog.__name__
22 )
23
24 self.add_dialog(
25 DateTimePrompt(
26 DateTimePrompt.__name__, DateResolverDialog.datetime_prompt_validator
27 )
28 )
29 self.add_dialog(
30 WaterfallDialog(
31 WaterfallDialog.__name__ + "2", [self.initial_step, self.final_step]
32 )
33 )
34
35 self.initial_dialog_id = WaterfallDialog.__name__ + "2"
36
37 async def initial_step(
38 self, step_context: WaterfallStepContext
39 ) -> DialogTurnResult:
40 timex = step_context.options
41
42 prompt_msg_text = "On what date would you like to travel?"
43 prompt_msg = MessageFactory.text(
44 prompt_msg_text, prompt_msg_text, InputHints.expecting_input
45 )
46
47 reprompt_msg_text = "I'm sorry, for best results, please enter your travel date including the month, day and year."
48 reprompt_msg = MessageFactory.text(
49 reprompt_msg_text, reprompt_msg_text, InputHints.expecting_input
50 )
51
52 if timex is None:
53 # We were not given any date at all so prompt the user.
54 return await step_context.prompt(
55 DateTimePrompt.__name__,
56 PromptOptions(prompt=prompt_msg, retry_prompt=reprompt_msg),
57 )
58 # We have a Date we just need to check it is unambiguous.
59 if "definite" in Timex(timex).types:
60 # This is essentially a "reprompt" of the data we were given up front.
61 return await step_context.prompt(
62 DateTimePrompt.__name__, PromptOptions(prompt=reprompt_msg)
63 )
64
65 return await step_context.next(DateTimeResolution(timex=timex))
66
67 async def final_step(self, step_context: WaterfallStepContext):
68 timex = step_context.result[0].timex
69 return await step_context.end_dialog(timex)
70
71 @staticmethod
72 async def datetime_prompt_validator(prompt_context: PromptValidatorContext) -> bool:
73 if prompt_context.recognized.succeeded:
74 timex = prompt_context.recognized.value[0].timex.split("T")[0]
75
76 # TODO: Needs TimexProperty
77 return "definite" in Timex(timex).types
78
79 return False
80
```
Path: `samples/13.core-bot/dialogs/main_dialog.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from datetime import datetime
5 from typing import Dict
6 from botbuilder.dialogs import (
7 ComponentDialog,
8 DialogSet,
9 DialogTurnStatus,
10 WaterfallDialog,
11 WaterfallStepContext,
12 DialogTurnResult,
13 )
14 from botbuilder.dialogs.prompts import TextPrompt, ConfirmPrompt, PromptOptions
15 from botbuilder.core import MessageFactory, TurnContext
16 from botbuilder.schema import InputHints
17
18 from .booking_dialog import BookingDialog
19 from booking_details import BookingDetails
20 from flight_booking_recognizer import FlightBookingRecognizer
21 from helpers.luis_helper import LuisHelper, Intent
22
23
24 class MainDialog(ComponentDialog):
25 def __init__(
26 self, luis_recognizer: FlightBookingRecognizer, booking_dialog: BookingDialog
27 ):
28 super(MainDialog, self).__init__(MainDialog.__name__)
29
30 self._luis_recognizer = luis_recognizer
31 self._booking_dialog_id = booking_dialog.id
32
33 self.add_dialog(TextPrompt(TextPrompt.__name__))
34 self.add_dialog(booking_dialog)
35 self.add_dialog(
36 WaterfallDialog(
37 "WFDialog", [self.intro_step, self.act_step, self.final_step]
38 )
39 )
40
41 self.initial_dialog_id = "WFDialog"
42
43 async def intro_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:
44 if not self._luis_recognizer.is_configured:
45 await step_context.context.send_activity(
46 MessageFactory.text(
47 "NOTE: LUIS is not configured. To enable all capabilities, add 'LuisAppId', 'LuisAPIKey' and "
48 "'LuisAPIHostName' to the appsettings.json file.",
49 input_hint=InputHints.ignoring_input,
50 )
51 )
52
53 return await step_context.next(None)
54 message_text = (
55 str(step_context.options)
56 if step_context.options
57 else "What can I help you with today?"
58 )
59 prompt_message = MessageFactory.text(
60 message_text, message_text, InputHints.expecting_input
61 )
62
63 return await step_context.prompt(
64 TextPrompt.__name__, PromptOptions(prompt=prompt_message)
65 )
66
67 async def act_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:
68 if not self._luis_recognizer.is_configured:
69 # LUIS is not configured, we just run the BookingDialog path with an empty BookingDetailsInstance.
70 return await step_context.begin_dialog(
71 self._booking_dialog_id, BookingDetails()
72 )
73
74 # Call LUIS and gather any potential booking details. (Note the TurnContext has the response to the prompt.)
75 intent, luis_result = await LuisHelper.execute_luis_query(
76 self._luis_recognizer, step_context.context
77 )
78
79 # top_intent = cognitive_models_helper.top_intent(luis_result['intents'])
80
81 if intent == Intent.BOOK_FLIGHT.value and luis_result:
82 await MainDialog._show_warning_for_unsupported_cities(
83 step_context.context, luis_result
84 )
85
86 # Run the BookingDialog giving it whatever details we have from the LUIS call.
87 return await step_context.begin_dialog(self._booking_dialog_id, luis_result)
88
89 elif intent == Intent.GET_WEATHER.value:
90 get_weather_text = "TODO: get weather flow here"
91 get_weather_message = MessageFactory.text(
92 get_weather_text, get_weather_text, InputHints.ignoring_input
93 )
94 await step_context.context.send_activity(get_weather_message)
95
96 else:
97 didnt_understand_text = (
98 "Sorry, I didn't get that. Please try asking in a different way"
99 )
100 didnt_understand_message = MessageFactory.text(
101 didnt_understand_text, didnt_understand_text, InputHints.ignoring_input
102 )
103 await step_context.context.send_activity(didnt_understand_message)
104
105 return await step_context.next(None)
106
107 async def final_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:
108 # If the child dialog ("BookingDialog") was cancelled or the user failed to confirm,
109 # the Result here will be null.
110 if step_context.result is not None:
111 result = step_context.result
112
113 # Now we have all the booking details call the booking service.
114
115 # If the call to the booking service was successful tell the user.
116 # time_property = Timex(result.travel_date)
117 # travel_date_msg = time_property.to_natural_language(datetime.now())
118 msg_txt = f"I have you booked to {result.destination} from {result.origin} on {result.travel_date}"
119 message = MessageFactory.text(msg_txt, msg_txt, InputHints.ignoring_input)
120 await step_context.context.send_activity(message)
121
122 prompt_message = "What else can I do for you?"
123 return await step_context.replace_dialog(self.id, prompt_message)
124
125 @staticmethod
126 async def _show_warning_for_unsupported_cities(
127 context: TurnContext, luis_result: BookingDetails
128 ) -> None:
129 if luis_result.unsupported_airports:
130 message_text = (
131 f"Sorry but the following airports are not supported:"
132 f" {', '.join(luis_result.unsupported_airports)}"
133 )
134 message = MessageFactory.text(
135 message_text, message_text, InputHints.ignoring_input
136 )
137 await context.send_activity(message)
138
```
Path: `samples/13.core-bot/helpers/luis_helper.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3 from enum import Enum
4 from typing import Dict
5 from botbuilder.ai.luis import LuisRecognizer, LuisApplication
6 from botbuilder.core import IntentScore, TopIntent, TurnContext
7
8 from booking_details import BookingDetails
9
10
11 class Intent(Enum):
12 BOOK_FLIGHT = "BookFlight"
13 CANCEL = "Cancel"
14 GET_WEATHER = "GetWeather"
15 NONE_INTENT = "NoneIntent"
16
17
18 def top_intent(intents: Dict[Intent, dict]) -> TopIntent:
19 max_intent = Intent.NONE_INTENT
20 max_value = 0.0
21
22 for intent, value in intents:
23 intent_score = IntentScore(value)
24 if intent_score.score > max_value:
25 max_intent, max_value = intent, intent_score.score
26
27 return TopIntent(max_intent, max_value)
28
29
30 class LuisHelper:
31 @staticmethod
32 async def execute_luis_query(
33 luis_recognizer: LuisRecognizer, turn_context: TurnContext
34 ) -> (Intent, object):
35 result = None
36 intent = None
37
38 try:
39 recognizer_result = await luis_recognizer.recognize(turn_context)
40
41 intent = (
42 sorted(
43 recognizer_result.intents,
44 key=recognizer_result.intents.get,
45 reverse=True,
46 )[:1][0]
47 if recognizer_result.intents
48 else None
49 )
50
51 if intent == Intent.BOOK_FLIGHT.value:
52 result = BookingDetails()
53
54 # We need to get the result from the LUIS JSON which at every level returns an array.
55 to_entities = recognizer_result.entities.get("$instance", {}).get(
56 "To", []
57 )
58 if len(to_entities) > 0:
59 if recognizer_result.entities.get("To", [{"$instance": {}}])[0][
60 "$instance"
61 ]:
62 result.destination = to_entities[0]["text"].capitalize()
63 else:
64 result.unsupported_airports.append(
65 to_entities[0]["text"].capitalize()
66 )
67
68 from_entities = recognizer_result.entities.get("$instance", {}).get(
69 "From", []
70 )
71 if len(from_entities) > 0:
72 if recognizer_result.entities.get("From", [{"$instance": {}}])[0][
73 "$instance"
74 ]:
75 result.origin = from_entities[0]["text"].capitalize()
76 else:
77 result.unsupported_airports.append(
78 from_entities[0]["text"].capitalize()
79 )
80
81 # TODO: This value will be a TIMEX. And we are only interested in a Date so grab the first result and drop the Time part.
82 # TIMEX is a format that represents DateTime expressions that include some ambiguity. e.g. missing a Year.
83 date_entities = recognizer_result.entities.get("$instance", {}).get(
84 "datetime", []
85 )
86 if len(date_entities) > 0:
87 result.travel_date = None # TODO: Set when we get a timex format
88 except Exception as e:
89 print(e)
90
91 return intent, result
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/samples/13.core-bot/dialogs/date_resolver_dialog.py b/samples/13.core-bot/dialogs/date_resolver_dialog.py
--- a/samples/13.core-bot/dialogs/date_resolver_dialog.py
+++ b/samples/13.core-bot/dialogs/date_resolver_dialog.py
@@ -56,7 +56,7 @@
PromptOptions(prompt=prompt_msg, retry_prompt=reprompt_msg),
)
# We have a Date we just need to check it is unambiguous.
- if "definite" in Timex(timex).types:
+ if "definite" not in Timex(timex).types:
# This is essentially a "reprompt" of the data we were given up front.
return await step_context.prompt(
DateTimePrompt.__name__, PromptOptions(prompt=reprompt_msg)
diff --git a/samples/13.core-bot/dialogs/main_dialog.py b/samples/13.core-bot/dialogs/main_dialog.py
--- a/samples/13.core-bot/dialogs/main_dialog.py
+++ b/samples/13.core-bot/dialogs/main_dialog.py
@@ -1,17 +1,13 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
-from datetime import datetime
-from typing import Dict
from botbuilder.dialogs import (
ComponentDialog,
- DialogSet,
- DialogTurnStatus,
WaterfallDialog,
WaterfallStepContext,
DialogTurnResult,
)
-from botbuilder.dialogs.prompts import TextPrompt, ConfirmPrompt, PromptOptions
+from botbuilder.dialogs.prompts import TextPrompt, PromptOptions
from botbuilder.core import MessageFactory, TurnContext
from botbuilder.schema import InputHints
@@ -76,9 +72,8 @@
self._luis_recognizer, step_context.context
)
- # top_intent = cognitive_models_helper.top_intent(luis_result['intents'])
-
if intent == Intent.BOOK_FLIGHT.value and luis_result:
+ # Show a warning for Origin and Destination if we can't resolve them.
await MainDialog._show_warning_for_unsupported_cities(
step_context.context, luis_result
)
diff --git a/samples/13.core-bot/helpers/luis_helper.py b/samples/13.core-bot/helpers/luis_helper.py
--- a/samples/13.core-bot/helpers/luis_helper.py
+++ b/samples/13.core-bot/helpers/luis_helper.py
@@ -2,7 +2,7 @@
# Licensed under the MIT License.
from enum import Enum
from typing import Dict
-from botbuilder.ai.luis import LuisRecognizer, LuisApplication
+from botbuilder.ai.luis import LuisRecognizer
from botbuilder.core import IntentScore, TopIntent, TurnContext
from booking_details import BookingDetails
@@ -32,6 +32,9 @@
async def execute_luis_query(
luis_recognizer: LuisRecognizer, turn_context: TurnContext
) -> (Intent, object):
+ """
+ Returns an object with preformatted LUIS results for the bot's dialogs to consume.
+ """
result = None
intent = None
@@ -78,13 +81,20 @@
from_entities[0]["text"].capitalize()
)
- # TODO: This value will be a TIMEX. And we are only interested in a Date so grab the first result and drop the Time part.
+ # This value will be a TIMEX. And we are only interested in a Date so grab the first result and drop the Time part.
# TIMEX is a format that represents DateTime expressions that include some ambiguity. e.g. missing a Year.
- date_entities = recognizer_result.entities.get("$instance", {}).get(
- "datetime", []
- )
- if len(date_entities) > 0:
- result.travel_date = None # TODO: Set when we get a timex format
+ date_entities = recognizer_result.entities.get("datetime", [])
+ if date_entities:
+ timex = date_entities[0]["timex"]
+
+ if timex:
+ datetime = timex[0].split("T")[0]
+
+ result.travel_date = datetime
+
+ else:
+ result.travel_date = None
+
except Exception as e:
print(e)
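A minimal sketch of the flipped ambiguity check in the diff above, assuming the `datatypes-date-time` package behaves as the quoted sample uses it (`Timex(...).types` contains `"definite"` only when year, month and day are all specified):

```python
from datatypes_date_time.timex import Timex

for timex_str in ("XXXX-05-17", "2019-05-17"):
    # An ambiguous date (missing year) lacks the "definite" type, so the
    # dialog should reprompt; a fully specified date should pass through.
    needs_reprompt = "definite" not in Timex(timex_str).types
    print(timex_str, "-> reprompt:", needs_reprompt)
```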
|
{"golden_diff": "diff --git a/samples/13.core-bot/dialogs/date_resolver_dialog.py b/samples/13.core-bot/dialogs/date_resolver_dialog.py\n--- a/samples/13.core-bot/dialogs/date_resolver_dialog.py\n+++ b/samples/13.core-bot/dialogs/date_resolver_dialog.py\n@@ -56,7 +56,7 @@\n PromptOptions(prompt=prompt_msg, retry_prompt=reprompt_msg),\n )\n # We have a Date we just need to check it is unambiguous.\n- if \"definite\" in Timex(timex).types:\n+ if \"definite\" not in Timex(timex).types:\n # This is essentially a \"reprompt\" of the data we were given up front.\n return await step_context.prompt(\n DateTimePrompt.__name__, PromptOptions(prompt=reprompt_msg)\ndiff --git a/samples/13.core-bot/dialogs/main_dialog.py b/samples/13.core-bot/dialogs/main_dialog.py\n--- a/samples/13.core-bot/dialogs/main_dialog.py\n+++ b/samples/13.core-bot/dialogs/main_dialog.py\n@@ -1,17 +1,13 @@\n # Copyright (c) Microsoft Corporation. All rights reserved.\n # Licensed under the MIT License.\n \n-from datetime import datetime\n-from typing import Dict\n from botbuilder.dialogs import (\n ComponentDialog,\n- DialogSet,\n- DialogTurnStatus,\n WaterfallDialog,\n WaterfallStepContext,\n DialogTurnResult,\n )\n-from botbuilder.dialogs.prompts import TextPrompt, ConfirmPrompt, PromptOptions\n+from botbuilder.dialogs.prompts import TextPrompt, PromptOptions\n from botbuilder.core import MessageFactory, TurnContext\n from botbuilder.schema import InputHints\n \n@@ -76,9 +72,8 @@\n self._luis_recognizer, step_context.context\n )\n \n- # top_intent = cognitive_models_helper.top_intent(luis_result['intents'])\n-\n if intent == Intent.BOOK_FLIGHT.value and luis_result:\n+ # Show a warning for Origin and Destination if we can't resolve them.\n await MainDialog._show_warning_for_unsupported_cities(\n step_context.context, luis_result\n )\ndiff --git a/samples/13.core-bot/helpers/luis_helper.py b/samples/13.core-bot/helpers/luis_helper.py\n--- a/samples/13.core-bot/helpers/luis_helper.py\n+++ b/samples/13.core-bot/helpers/luis_helper.py\n@@ -2,7 +2,7 @@\n # Licensed under the MIT License.\n from enum import Enum\n from typing import Dict\n-from botbuilder.ai.luis import LuisRecognizer, LuisApplication\n+from botbuilder.ai.luis import LuisRecognizer\n from botbuilder.core import IntentScore, TopIntent, TurnContext\n \n from booking_details import BookingDetails\n@@ -32,6 +32,9 @@\n async def execute_luis_query(\n luis_recognizer: LuisRecognizer, turn_context: TurnContext\n ) -> (Intent, object):\n+ \"\"\"\n+ Returns an object with preformatted LUIS results for the bot's dialogs to consume.\n+ \"\"\"\n result = None\n intent = None\n \n@@ -78,13 +81,20 @@\n from_entities[0][\"text\"].capitalize()\n )\n \n- # TODO: This value will be a TIMEX. And we are only interested in a Date so grab the first result and drop the Time part.\n+ # This value will be a TIMEX. And we are only interested in a Date so grab the first result and drop the Time part.\n # TIMEX is a format that represents DateTime expressions that include some ambiguity. e.g. 
missing a Year.\n- date_entities = recognizer_result.entities.get(\"$instance\", {}).get(\n- \"datetime\", []\n- )\n- if len(date_entities) > 0:\n- result.travel_date = None # TODO: Set when we get a timex format\n+ date_entities = recognizer_result.entities.get(\"datetime\", [])\n+ if date_entities:\n+ timex = date_entities[0][\"timex\"]\n+\n+ if timex:\n+ datetime = timex[0].split(\"T\")[0]\n+\n+ result.travel_date = datetime\n+\n+ else:\n+ result.travel_date = None\n+\n except Exception as e:\n print(e)\n", "issue": "date_resolver_dialog - test of \"definite\"\n## Version SDK v4.5.0b3\r\nin botbuilder-python/samples/13.core-bot/dialogs/date_resolver_dialog.py\r\n\r\n## Describe the bug\r\nLine : _if \"definite\" in Timex(timex).types:_\r\nThe goal of this line is to treat ambiguous date such as timex date = XXXX-05-17\r\nso the test must be \"not in\" instead of \"in\" (\"definite\" = type for an unambiguous date)\r\n\r\n[bug]\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom botbuilder.core import MessageFactory\nfrom botbuilder.dialogs import WaterfallDialog, DialogTurnResult, WaterfallStepContext\nfrom botbuilder.dialogs.prompts import (\n DateTimePrompt,\n PromptValidatorContext,\n PromptOptions,\n DateTimeResolution,\n)\nfrom botbuilder.schema import InputHints\nfrom .cancel_and_help_dialog import CancelAndHelpDialog\n\nfrom datatypes_date_time.timex import Timex\n\n\nclass DateResolverDialog(CancelAndHelpDialog):\n def __init__(self, dialog_id: str = None):\n super(DateResolverDialog, self).__init__(\n dialog_id or DateResolverDialog.__name__\n )\n\n self.add_dialog(\n DateTimePrompt(\n DateTimePrompt.__name__, DateResolverDialog.datetime_prompt_validator\n )\n )\n self.add_dialog(\n WaterfallDialog(\n WaterfallDialog.__name__ + \"2\", [self.initial_step, self.final_step]\n )\n )\n\n self.initial_dialog_id = WaterfallDialog.__name__ + \"2\"\n\n async def initial_step(\n self, step_context: WaterfallStepContext\n ) -> DialogTurnResult:\n timex = step_context.options\n\n prompt_msg_text = \"On what date would you like to travel?\"\n prompt_msg = MessageFactory.text(\n prompt_msg_text, prompt_msg_text, InputHints.expecting_input\n )\n\n reprompt_msg_text = \"I'm sorry, for best results, please enter your travel date including the month, day and year.\"\n reprompt_msg = MessageFactory.text(\n reprompt_msg_text, reprompt_msg_text, InputHints.expecting_input\n )\n\n if timex is None:\n # We were not given any date at all so prompt the user.\n return await step_context.prompt(\n DateTimePrompt.__name__,\n PromptOptions(prompt=prompt_msg, retry_prompt=reprompt_msg),\n )\n # We have a Date we just need to check it is unambiguous.\n if \"definite\" in Timex(timex).types:\n # This is essentially a \"reprompt\" of the data we were given up front.\n return await step_context.prompt(\n DateTimePrompt.__name__, PromptOptions(prompt=reprompt_msg)\n )\n\n return await step_context.next(DateTimeResolution(timex=timex))\n\n async def final_step(self, step_context: WaterfallStepContext):\n timex = step_context.result[0].timex\n return await step_context.end_dialog(timex)\n\n @staticmethod\n async def datetime_prompt_validator(prompt_context: PromptValidatorContext) -> bool:\n if prompt_context.recognized.succeeded:\n timex = prompt_context.recognized.value[0].timex.split(\"T\")[0]\n\n # TODO: Needs TimexProperty\n return \"definite\" in Timex(timex).types\n\n return False\n", "path": 
"samples/13.core-bot/dialogs/date_resolver_dialog.py"}, {"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom datetime import datetime\nfrom typing import Dict\nfrom botbuilder.dialogs import (\n ComponentDialog,\n DialogSet,\n DialogTurnStatus,\n WaterfallDialog,\n WaterfallStepContext,\n DialogTurnResult,\n)\nfrom botbuilder.dialogs.prompts import TextPrompt, ConfirmPrompt, PromptOptions\nfrom botbuilder.core import MessageFactory, TurnContext\nfrom botbuilder.schema import InputHints\n\nfrom .booking_dialog import BookingDialog\nfrom booking_details import BookingDetails\nfrom flight_booking_recognizer import FlightBookingRecognizer\nfrom helpers.luis_helper import LuisHelper, Intent\n\n\nclass MainDialog(ComponentDialog):\n def __init__(\n self, luis_recognizer: FlightBookingRecognizer, booking_dialog: BookingDialog\n ):\n super(MainDialog, self).__init__(MainDialog.__name__)\n\n self._luis_recognizer = luis_recognizer\n self._booking_dialog_id = booking_dialog.id\n\n self.add_dialog(TextPrompt(TextPrompt.__name__))\n self.add_dialog(booking_dialog)\n self.add_dialog(\n WaterfallDialog(\n \"WFDialog\", [self.intro_step, self.act_step, self.final_step]\n )\n )\n\n self.initial_dialog_id = \"WFDialog\"\n\n async def intro_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:\n if not self._luis_recognizer.is_configured:\n await step_context.context.send_activity(\n MessageFactory.text(\n \"NOTE: LUIS is not configured. To enable all capabilities, add 'LuisAppId', 'LuisAPIKey' and \"\n \"'LuisAPIHostName' to the appsettings.json file.\",\n input_hint=InputHints.ignoring_input,\n )\n )\n\n return await step_context.next(None)\n message_text = (\n str(step_context.options)\n if step_context.options\n else \"What can I help you with today?\"\n )\n prompt_message = MessageFactory.text(\n message_text, message_text, InputHints.expecting_input\n )\n\n return await step_context.prompt(\n TextPrompt.__name__, PromptOptions(prompt=prompt_message)\n )\n\n async def act_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:\n if not self._luis_recognizer.is_configured:\n # LUIS is not configured, we just run the BookingDialog path with an empty BookingDetailsInstance.\n return await step_context.begin_dialog(\n self._booking_dialog_id, BookingDetails()\n )\n\n # Call LUIS and gather any potential booking details. (Note the TurnContext has the response to the prompt.)\n intent, luis_result = await LuisHelper.execute_luis_query(\n self._luis_recognizer, step_context.context\n )\n\n # top_intent = cognitive_models_helper.top_intent(luis_result['intents'])\n\n if intent == Intent.BOOK_FLIGHT.value and luis_result:\n await MainDialog._show_warning_for_unsupported_cities(\n step_context.context, luis_result\n )\n\n # Run the BookingDialog giving it whatever details we have from the LUIS call.\n return await step_context.begin_dialog(self._booking_dialog_id, luis_result)\n\n elif intent == Intent.GET_WEATHER.value:\n get_weather_text = \"TODO: get weather flow here\"\n get_weather_message = MessageFactory.text(\n get_weather_text, get_weather_text, InputHints.ignoring_input\n )\n await step_context.context.send_activity(get_weather_message)\n\n else:\n didnt_understand_text = (\n \"Sorry, I didn't get that. 
Please try asking in a different way\"\n )\n didnt_understand_message = MessageFactory.text(\n didnt_understand_text, didnt_understand_text, InputHints.ignoring_input\n )\n await step_context.context.send_activity(didnt_understand_message)\n\n return await step_context.next(None)\n\n async def final_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:\n # If the child dialog (\"BookingDialog\") was cancelled or the user failed to confirm,\n # the Result here will be null.\n if step_context.result is not None:\n result = step_context.result\n\n # Now we have all the booking details call the booking service.\n\n # If the call to the booking service was successful tell the user.\n # time_property = Timex(result.travel_date)\n # travel_date_msg = time_property.to_natural_language(datetime.now())\n msg_txt = f\"I have you booked to {result.destination} from {result.origin} on {result.travel_date}\"\n message = MessageFactory.text(msg_txt, msg_txt, InputHints.ignoring_input)\n await step_context.context.send_activity(message)\n\n prompt_message = \"What else can I do for you?\"\n return await step_context.replace_dialog(self.id, prompt_message)\n\n @staticmethod\n async def _show_warning_for_unsupported_cities(\n context: TurnContext, luis_result: BookingDetails\n ) -> None:\n if luis_result.unsupported_airports:\n message_text = (\n f\"Sorry but the following airports are not supported:\"\n f\" {', '.join(luis_result.unsupported_airports)}\"\n )\n message = MessageFactory.text(\n message_text, message_text, InputHints.ignoring_input\n )\n await context.send_activity(message)\n", "path": "samples/13.core-bot/dialogs/main_dialog.py"}, {"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\nfrom enum import Enum\nfrom typing import Dict\nfrom botbuilder.ai.luis import LuisRecognizer, LuisApplication\nfrom botbuilder.core import IntentScore, TopIntent, TurnContext\n\nfrom booking_details import BookingDetails\n\n\nclass Intent(Enum):\n BOOK_FLIGHT = \"BookFlight\"\n CANCEL = \"Cancel\"\n GET_WEATHER = \"GetWeather\"\n NONE_INTENT = \"NoneIntent\"\n\n\ndef top_intent(intents: Dict[Intent, dict]) -> TopIntent:\n max_intent = Intent.NONE_INTENT\n max_value = 0.0\n\n for intent, value in intents:\n intent_score = IntentScore(value)\n if intent_score.score > max_value:\n max_intent, max_value = intent, intent_score.score\n\n return TopIntent(max_intent, max_value)\n\n\nclass LuisHelper:\n @staticmethod\n async def execute_luis_query(\n luis_recognizer: LuisRecognizer, turn_context: TurnContext\n ) -> (Intent, object):\n result = None\n intent = None\n\n try:\n recognizer_result = await luis_recognizer.recognize(turn_context)\n\n intent = (\n sorted(\n recognizer_result.intents,\n key=recognizer_result.intents.get,\n reverse=True,\n )[:1][0]\n if recognizer_result.intents\n else None\n )\n\n if intent == Intent.BOOK_FLIGHT.value:\n result = BookingDetails()\n\n # We need to get the result from the LUIS JSON which at every level returns an array.\n to_entities = recognizer_result.entities.get(\"$instance\", {}).get(\n \"To\", []\n )\n if len(to_entities) > 0:\n if recognizer_result.entities.get(\"To\", [{\"$instance\": {}}])[0][\n \"$instance\"\n ]:\n result.destination = to_entities[0][\"text\"].capitalize()\n else:\n result.unsupported_airports.append(\n to_entities[0][\"text\"].capitalize()\n )\n\n from_entities = recognizer_result.entities.get(\"$instance\", {}).get(\n \"From\", []\n )\n if len(from_entities) > 0:\n if 
recognizer_result.entities.get(\"From\", [{\"$instance\": {}}])[0][\n \"$instance\"\n ]:\n result.origin = from_entities[0][\"text\"].capitalize()\n else:\n result.unsupported_airports.append(\n from_entities[0][\"text\"].capitalize()\n )\n\n # TODO: This value will be a TIMEX. And we are only interested in a Date so grab the first result and drop the Time part.\n # TIMEX is a format that represents DateTime expressions that include some ambiguity. e.g. missing a Year.\n date_entities = recognizer_result.entities.get(\"$instance\", {}).get(\n \"datetime\", []\n )\n if len(date_entities) > 0:\n result.travel_date = None # TODO: Set when we get a timex format\n except Exception as e:\n print(e)\n\n return intent, result\n", "path": "samples/13.core-bot/helpers/luis_helper.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom botbuilder.core import MessageFactory\nfrom botbuilder.dialogs import WaterfallDialog, DialogTurnResult, WaterfallStepContext\nfrom botbuilder.dialogs.prompts import (\n DateTimePrompt,\n PromptValidatorContext,\n PromptOptions,\n DateTimeResolution,\n)\nfrom botbuilder.schema import InputHints\nfrom .cancel_and_help_dialog import CancelAndHelpDialog\n\nfrom datatypes_date_time.timex import Timex\n\n\nclass DateResolverDialog(CancelAndHelpDialog):\n def __init__(self, dialog_id: str = None):\n super(DateResolverDialog, self).__init__(\n dialog_id or DateResolverDialog.__name__\n )\n\n self.add_dialog(\n DateTimePrompt(\n DateTimePrompt.__name__, DateResolverDialog.datetime_prompt_validator\n )\n )\n self.add_dialog(\n WaterfallDialog(\n WaterfallDialog.__name__ + \"2\", [self.initial_step, self.final_step]\n )\n )\n\n self.initial_dialog_id = WaterfallDialog.__name__ + \"2\"\n\n async def initial_step(\n self, step_context: WaterfallStepContext\n ) -> DialogTurnResult:\n timex = step_context.options\n\n prompt_msg_text = \"On what date would you like to travel?\"\n prompt_msg = MessageFactory.text(\n prompt_msg_text, prompt_msg_text, InputHints.expecting_input\n )\n\n reprompt_msg_text = \"I'm sorry, for best results, please enter your travel date including the month, day and year.\"\n reprompt_msg = MessageFactory.text(\n reprompt_msg_text, reprompt_msg_text, InputHints.expecting_input\n )\n\n if timex is None:\n # We were not given any date at all so prompt the user.\n return await step_context.prompt(\n DateTimePrompt.__name__,\n PromptOptions(prompt=prompt_msg, retry_prompt=reprompt_msg),\n )\n # We have a Date we just need to check it is unambiguous.\n if \"definite\" not in Timex(timex).types:\n # This is essentially a \"reprompt\" of the data we were given up front.\n return await step_context.prompt(\n DateTimePrompt.__name__, PromptOptions(prompt=reprompt_msg)\n )\n\n return await step_context.next(DateTimeResolution(timex=timex))\n\n async def final_step(self, step_context: WaterfallStepContext):\n timex = step_context.result[0].timex\n return await step_context.end_dialog(timex)\n\n @staticmethod\n async def datetime_prompt_validator(prompt_context: PromptValidatorContext) -> bool:\n if prompt_context.recognized.succeeded:\n timex = prompt_context.recognized.value[0].timex.split(\"T\")[0]\n\n # TODO: Needs TimexProperty\n return \"definite\" in Timex(timex).types\n\n return False\n", "path": "samples/13.core-bot/dialogs/date_resolver_dialog.py"}, {"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\nfrom botbuilder.dialogs import (\n ComponentDialog,\n WaterfallDialog,\n WaterfallStepContext,\n DialogTurnResult,\n)\nfrom botbuilder.dialogs.prompts import TextPrompt, PromptOptions\nfrom botbuilder.core import MessageFactory, TurnContext\nfrom botbuilder.schema import InputHints\n\nfrom .booking_dialog import BookingDialog\nfrom booking_details import BookingDetails\nfrom flight_booking_recognizer import FlightBookingRecognizer\nfrom helpers.luis_helper import LuisHelper, Intent\n\n\nclass MainDialog(ComponentDialog):\n def __init__(\n self, luis_recognizer: FlightBookingRecognizer, booking_dialog: BookingDialog\n ):\n super(MainDialog, self).__init__(MainDialog.__name__)\n\n self._luis_recognizer = luis_recognizer\n self._booking_dialog_id = booking_dialog.id\n\n self.add_dialog(TextPrompt(TextPrompt.__name__))\n self.add_dialog(booking_dialog)\n self.add_dialog(\n WaterfallDialog(\n \"WFDialog\", [self.intro_step, self.act_step, self.final_step]\n )\n )\n\n self.initial_dialog_id = \"WFDialog\"\n\n async def intro_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:\n if not self._luis_recognizer.is_configured:\n await step_context.context.send_activity(\n MessageFactory.text(\n \"NOTE: LUIS is not configured. To enable all capabilities, add 'LuisAppId', 'LuisAPIKey' and \"\n \"'LuisAPIHostName' to the appsettings.json file.\",\n input_hint=InputHints.ignoring_input,\n )\n )\n\n return await step_context.next(None)\n message_text = (\n str(step_context.options)\n if step_context.options\n else \"What can I help you with today?\"\n )\n prompt_message = MessageFactory.text(\n message_text, message_text, InputHints.expecting_input\n )\n\n return await step_context.prompt(\n TextPrompt.__name__, PromptOptions(prompt=prompt_message)\n )\n\n async def act_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:\n if not self._luis_recognizer.is_configured:\n # LUIS is not configured, we just run the BookingDialog path with an empty BookingDetailsInstance.\n return await step_context.begin_dialog(\n self._booking_dialog_id, BookingDetails()\n )\n\n # Call LUIS and gather any potential booking details. (Note the TurnContext has the response to the prompt.)\n intent, luis_result = await LuisHelper.execute_luis_query(\n self._luis_recognizer, step_context.context\n )\n\n if intent == Intent.BOOK_FLIGHT.value and luis_result:\n # Show a warning for Origin and Destination if we can't resolve them.\n await MainDialog._show_warning_for_unsupported_cities(\n step_context.context, luis_result\n )\n\n # Run the BookingDialog giving it whatever details we have from the LUIS call.\n return await step_context.begin_dialog(self._booking_dialog_id, luis_result)\n\n elif intent == Intent.GET_WEATHER.value:\n get_weather_text = \"TODO: get weather flow here\"\n get_weather_message = MessageFactory.text(\n get_weather_text, get_weather_text, InputHints.ignoring_input\n )\n await step_context.context.send_activity(get_weather_message)\n\n else:\n didnt_understand_text = (\n \"Sorry, I didn't get that. 
Please try asking in a different way\"\n )\n didnt_understand_message = MessageFactory.text(\n didnt_understand_text, didnt_understand_text, InputHints.ignoring_input\n )\n await step_context.context.send_activity(didnt_understand_message)\n\n return await step_context.next(None)\n\n async def final_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:\n # If the child dialog (\"BookingDialog\") was cancelled or the user failed to confirm,\n # the Result here will be null.\n if step_context.result is not None:\n result = step_context.result\n\n # Now we have all the booking details call the booking service.\n\n # If the call to the booking service was successful tell the user.\n # time_property = Timex(result.travel_date)\n # travel_date_msg = time_property.to_natural_language(datetime.now())\n msg_txt = f\"I have you booked to {result.destination} from {result.origin} on {result.travel_date}\"\n message = MessageFactory.text(msg_txt, msg_txt, InputHints.ignoring_input)\n await step_context.context.send_activity(message)\n\n prompt_message = \"What else can I do for you?\"\n return await step_context.replace_dialog(self.id, prompt_message)\n\n @staticmethod\n async def _show_warning_for_unsupported_cities(\n context: TurnContext, luis_result: BookingDetails\n ) -> None:\n if luis_result.unsupported_airports:\n message_text = (\n f\"Sorry but the following airports are not supported:\"\n f\" {', '.join(luis_result.unsupported_airports)}\"\n )\n message = MessageFactory.text(\n message_text, message_text, InputHints.ignoring_input\n )\n await context.send_activity(message)\n", "path": "samples/13.core-bot/dialogs/main_dialog.py"}, {"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\nfrom enum import Enum\nfrom typing import Dict\nfrom botbuilder.ai.luis import LuisRecognizer\nfrom botbuilder.core import IntentScore, TopIntent, TurnContext\n\nfrom booking_details import BookingDetails\n\n\nclass Intent(Enum):\n BOOK_FLIGHT = \"BookFlight\"\n CANCEL = \"Cancel\"\n GET_WEATHER = \"GetWeather\"\n NONE_INTENT = \"NoneIntent\"\n\n\ndef top_intent(intents: Dict[Intent, dict]) -> TopIntent:\n max_intent = Intent.NONE_INTENT\n max_value = 0.0\n\n for intent, value in intents:\n intent_score = IntentScore(value)\n if intent_score.score > max_value:\n max_intent, max_value = intent, intent_score.score\n\n return TopIntent(max_intent, max_value)\n\n\nclass LuisHelper:\n @staticmethod\n async def execute_luis_query(\n luis_recognizer: LuisRecognizer, turn_context: TurnContext\n ) -> (Intent, object):\n \"\"\"\n Returns an object with preformatted LUIS results for the bot's dialogs to consume.\n \"\"\"\n result = None\n intent = None\n\n try:\n recognizer_result = await luis_recognizer.recognize(turn_context)\n\n intent = (\n sorted(\n recognizer_result.intents,\n key=recognizer_result.intents.get,\n reverse=True,\n )[:1][0]\n if recognizer_result.intents\n else None\n )\n\n if intent == Intent.BOOK_FLIGHT.value:\n result = BookingDetails()\n\n # We need to get the result from the LUIS JSON which at every level returns an array.\n to_entities = recognizer_result.entities.get(\"$instance\", {}).get(\n \"To\", []\n )\n if len(to_entities) > 0:\n if recognizer_result.entities.get(\"To\", [{\"$instance\": {}}])[0][\n \"$instance\"\n ]:\n result.destination = to_entities[0][\"text\"].capitalize()\n else:\n result.unsupported_airports.append(\n to_entities[0][\"text\"].capitalize()\n )\n\n from_entities = 
recognizer_result.entities.get(\"$instance\", {}).get(\n \"From\", []\n )\n if len(from_entities) > 0:\n if recognizer_result.entities.get(\"From\", [{\"$instance\": {}}])[0][\n \"$instance\"\n ]:\n result.origin = from_entities[0][\"text\"].capitalize()\n else:\n result.unsupported_airports.append(\n from_entities[0][\"text\"].capitalize()\n )\n\n # This value will be a TIMEX. And we are only interested in a Date so grab the first result and drop the Time part.\n # TIMEX is a format that represents DateTime expressions that include some ambiguity. e.g. missing a Year.\n date_entities = recognizer_result.entities.get(\"datetime\", [])\n if date_entities:\n timex = date_entities[0][\"timex\"]\n\n if timex:\n datetime = timex[0].split(\"T\")[0]\n\n result.travel_date = datetime\n\n else:\n result.travel_date = None\n\n except Exception as e:\n print(e)\n\n return intent, result\n", "path": "samples/13.core-bot/helpers/luis_helper.py"}]}
| 3,501 | 942 |
gh_patches_debug_6519
|
rasdani/github-patches
|
git_diff
|
mlflow__mlflow-6213
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Upgrade `pylint` to `2.14.4`
We currently use pylint 2.11.1. This is a bit old so we should upgrade it to the latest version (2.14.4):
https://github.com/mlflow/mlflow/blob/d40780be361f4bd2741c2e8fcbd428c1d693edcf/requirements/lint-requirements.txt#L1
- https://pypi.org/project/pylint/2.11.1/
- https://pypi.org/project/pylint/2.14.4/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mlflow/utils/autologging_utils/logging_and_warnings.py`
Content:
```
1 import os
2 import warnings
3 from contextlib import contextmanager
4 from pathlib import Path
5 from threading import RLock, get_ident as get_current_thread_id
6
7 import mlflow
8 import mlflow.utils.logging_utils as logging_utils
9
10
11 class _WarningsController:
12 """
13 Provides threadsafe utilities to modify warning behavior for MLflow autologging, including:
14
15 - Global disablement of MLflow warnings across all threads
16 - Global rerouting of MLflow warnings to an MLflow event logger (i.e. `logger.warn()`)
17 across all threads
18 - Disablement of non-MLflow warnings for the current thread
19 - Rerouting of non-MLflow warnings to an MLflow event logger for the current thread
20 """
21
22 def __init__(self):
23 self._mlflow_root_path = Path(os.path.dirname(mlflow.__file__)).resolve()
24 self._state_lock = RLock()
25
26 self._did_patch_showwarning = False
27 self._original_showwarning = None
28
29 self._disabled_threads = set()
30 self._rerouted_threads = set()
31 self._mlflow_warnings_disabled_globally = False
32 self._mlflow_warnings_rerouted_to_event_logs = False
33
34 def _patched_showwarning(self, message, category, filename, lineno, *args, **kwargs):
35 """
36 A patched implementation of `warnings.showwarning` that enforces the warning configuration
37 options configured on the controller (e.g. rerouting or disablement of MLflow warnings,
38 disablement of all warnings for the current thread).
39
40 Note that reassigning `warnings.showwarning` is the standard / recommended approach for
41 modifying warning message display behaviors. For reference, see
42 https://docs.python.org/3/library/warnings.html#warnings.showwarning
43 """
44 # NB: We explicitly avoid blocking on the `self._state_lock` lock during `showwarning`
45 # to so that threads don't have to execute serially whenever they emit warnings with
46 # `warnings.warn()`. We only lock during configuration changes to ensure that
47 # `warnings.showwarning` is patched or unpatched at the correct times.
48
49 from mlflow.utils.autologging_utils import _logger
50
51 # If the warning's source file is contained within the MLflow package's base
52 # directory, it is an MLflow warning and should be emitted via `logger.warning`
53 warning_source_path = Path(filename).resolve()
54 is_mlflow_warning = self._mlflow_root_path in warning_source_path.parents
55 curr_thread = get_current_thread_id()
56
57 if (curr_thread in self._disabled_threads) or (
58 is_mlflow_warning and self._mlflow_warnings_disabled_globally
59 ):
60 return
61 elif (curr_thread in self._rerouted_threads and not is_mlflow_warning) or (
62 is_mlflow_warning and self._mlflow_warnings_rerouted_to_event_logs
63 ):
64 _logger.warning(
65 "MLflow autologging encountered a warning:" ' "%s:%d: %s: %s"',
66 filename,
67 lineno,
68 category.__name__,
69 message,
70 )
71 else:
72 self._original_showwarning(message, category, filename, lineno, *args, **kwargs)
73
74 def _should_patch_showwarning(self):
75 return (
76 (len(self._disabled_threads) > 0)
77 or (len(self._rerouted_threads) > 0)
78 or self._mlflow_warnings_disabled_globally
79 or self._mlflow_warnings_rerouted_to_event_logs
80 )
81
82 def _modify_patch_state_if_necessary(self):
83 """
84 Patches or unpatches `warnings.showwarning` if necessary, as determined by:
85 - Whether or not `warnings.showwarning` is already patched
86 - Whether or not any custom warning state has been configured on the warnings
87 controller (i.e. disablement or rerouting of certain warnings globally or for a
88 particular thread)
89
90 Note that reassigning `warnings.showwarning` is the standard / recommended approach for
91 modifying warning message display behaviors. For reference, see
92 https://docs.python.org/3/library/warnings.html#warnings.showwarning
93 """
94 with self._state_lock:
95 if self._should_patch_showwarning() and not self._did_patch_showwarning:
96 self._original_showwarning = warnings.showwarning
97 warnings.showwarning = self._patched_showwarning
98 self._did_patch_showwarning = True
99 elif not self._should_patch_showwarning() and self._did_patch_showwarning:
100 warnings.showwarning = self._original_showwarning
101 self._did_patch_showwarning = False
102
103 def set_mlflow_warnings_disablement_state_globally(self, disabled=True):
104 """
105 Disables (or re-enables) MLflow warnings globally across all threads.
106
107 :param disabled: If `True`, disables MLflow warnings globally across all threads.
108 If `False`, enables MLflow warnings globally across all threads.
109 """
110 with self._state_lock:
111 self._mlflow_warnings_disabled_globally = disabled
112 self._modify_patch_state_if_necessary()
113
114 def set_mlflow_warnings_rerouting_state_globally(self, rerouted=True):
115 """
116 Enables (or disables) rerouting of MLflow warnings to an MLflow event logger with level
117 WARNING (e.g. `logger.warning()`) globally across all threads.
118
119 :param rerouted: If `True`, enables MLflow warning rerouting globally across all threads.
120 If `False`, disables MLflow warning rerouting globally across all threads.
121 """
122 with self._state_lock:
123 self._mlflow_warnings_rerouted_to_event_logs = rerouted
124 self._modify_patch_state_if_necessary()
125
126 def set_non_mlflow_warnings_disablement_state_for_current_thread(self, disabled=True):
127 """
128 Disables (or re-enables) non-MLflow warnings for the current thread.
129
130 :param disabled: If `True`, disables non-MLflow warnings for the current thread. If `False`,
131 enables non-MLflow warnings for the current thread. non-MLflow warning
132 behavior in other threads is unaffected.
133 """
134 with self._state_lock:
135 if disabled:
136 self._disabled_threads.add(get_current_thread_id())
137 else:
138 self._disabled_threads.discard(get_current_thread_id())
139 self._modify_patch_state_if_necessary()
140
141 def set_non_mlflow_warnings_rerouting_state_for_current_thread(self, rerouted=True):
142 """
143 Enables (or disables) rerouting of non-MLflow warnings to an MLflow event logger with level
144 WARNING (e.g. `logger.warning()`) for the current thread.
145
146 :param rerouted: If `True`, enables non-MLflow warning rerouting for the current thread.
147 If `False`, disables non-MLflow warning rerouting for the current thread.
148 non-MLflow warning behavior in other threads is unaffected.
149 """
150 with self._state_lock:
151 if rerouted:
152 self._rerouted_threads.add(get_current_thread_id())
153 else:
154 self._rerouted_threads.discard(get_current_thread_id())
155 self._modify_patch_state_if_necessary()
156
157 def get_warnings_disablement_state_for_current_thread(self):
158 """
159 :return: `True` if non-MLflow warnings are disabled for the current thread.
160 `False` otherwise.
161 """
162 return get_current_thread_id() in self._disabled_threads
163
164 def get_warnings_rerouting_state_for_current_thread(self):
165 """
166 :return: `True` if non-MLflow warnings are rerouted to an MLflow event logger with level
167 WARNING for the current thread. `False` otherwise.
168 """
169 return get_current_thread_id() in self._rerouted_threads
170
171
172 _WARNINGS_CONTROLLER = _WarningsController()
173
174
175 @contextmanager
176 def set_non_mlflow_warnings_behavior_for_current_thread(disable_warnings, reroute_warnings):
177 """
178 Context manager that modifies the behavior of non-MLflow warnings upon entry, according to the
179 specified parameters.
180
181 :param disable_warnings: If `True`, disable (mutate & discard) non-MLflow warnings. If `False`,
182 do not disable non-MLflow warnings.
183 :param reroute_warnings: If `True`, reroute non-MLflow warnings to an MLflow event logger with
184 level WARNING. If `False`, do not reroute non-MLflow warnings.
185 """
186 prev_disablement_state = (
187 _WARNINGS_CONTROLLER.get_warnings_disablement_state_for_current_thread()
188 )
189 prev_rerouting_state = _WARNINGS_CONTROLLER.get_warnings_rerouting_state_for_current_thread()
190 try:
191 _WARNINGS_CONTROLLER.set_non_mlflow_warnings_disablement_state_for_current_thread(
192 disabled=disable_warnings
193 )
194 _WARNINGS_CONTROLLER.set_non_mlflow_warnings_rerouting_state_for_current_thread(
195 rerouted=reroute_warnings
196 )
197 yield
198 finally:
199 _WARNINGS_CONTROLLER.set_non_mlflow_warnings_disablement_state_for_current_thread(
200 disabled=prev_disablement_state
201 )
202 _WARNINGS_CONTROLLER.set_non_mlflow_warnings_rerouting_state_for_current_thread(
203 rerouted=prev_rerouting_state
204 )
205
206
207 @contextmanager
208 def set_mlflow_events_and_warnings_behavior_globally(
209 disable_event_logs, disable_warnings, reroute_warnings
210 ):
211 """
212 Threadsafe context manager that modifies the behavior of MLflow event logging statements
213 and MLflow warnings upon entry, according to the specified parameters. Modifications are
214 applied globally across all threads and are not reverted until all threads that have made
215 a particular modification have exited the context.
216
217 :param disable_event_logs: If `True`, disable (mute & discard) MLflow event logging statements.
218 If `False`, do not disable MLflow event logging statements.
219 :param disable_warnings: If `True`, disable (mutate & discard) MLflow warnings. If `False`,
220 do not disable MLflow warnings.
221 :param reroute_warnings: If `True`, reroute MLflow warnings to an MLflow event logger with
222 level WARNING. If `False`, do not reroute MLflow warnings.
223 """
224
225 with _SetMLflowEventsAndWarningsBehaviorGlobally(
226 disable_event_logs, disable_warnings, reroute_warnings
227 ):
228 yield
229
230
231 class _SetMLflowEventsAndWarningsBehaviorGlobally:
232 _lock = RLock()
233 _disable_event_logs_count = 0
234 _disable_warnings_count = 0
235 _reroute_warnings_count = 0
236
237 def __init__(self, disable_event_logs, disable_warnings, reroute_warnings):
238 self._disable_event_logs = disable_event_logs
239 self._disable_warnings = disable_warnings
240 self._reroute_warnings = reroute_warnings
241
242 def __enter__(self):
243 try:
244 with _SetMLflowEventsAndWarningsBehaviorGlobally._lock:
245 if self._disable_event_logs:
246 if _SetMLflowEventsAndWarningsBehaviorGlobally._disable_event_logs_count <= 0:
247 logging_utils.disable_logging()
248 _SetMLflowEventsAndWarningsBehaviorGlobally._disable_event_logs_count += 1
249
250 if self._disable_warnings:
251 if _SetMLflowEventsAndWarningsBehaviorGlobally._disable_warnings_count <= 0:
252 _WARNINGS_CONTROLLER.set_mlflow_warnings_disablement_state_globally(
253 disabled=True
254 )
255 _SetMLflowEventsAndWarningsBehaviorGlobally._disable_warnings_count += 1
256
257 if self._reroute_warnings:
258 if _SetMLflowEventsAndWarningsBehaviorGlobally._reroute_warnings_count <= 0:
259 _WARNINGS_CONTROLLER.set_mlflow_warnings_rerouting_state_globally(
260 rerouted=True
261 )
262 _SetMLflowEventsAndWarningsBehaviorGlobally._reroute_warnings_count += 1
263 except Exception:
264 pass
265
266 def __exit__(self, *args, **kwargs):
267 try:
268 with _SetMLflowEventsAndWarningsBehaviorGlobally._lock:
269 if self._disable_event_logs:
270 _SetMLflowEventsAndWarningsBehaviorGlobally._disable_event_logs_count -= 1
271 if self._disable_warnings:
272 _SetMLflowEventsAndWarningsBehaviorGlobally._disable_warnings_count -= 1
273 if self._reroute_warnings:
274 _SetMLflowEventsAndWarningsBehaviorGlobally._reroute_warnings_count -= 1
275
276 if _SetMLflowEventsAndWarningsBehaviorGlobally._disable_event_logs_count <= 0:
277 logging_utils.enable_logging()
278 if _SetMLflowEventsAndWarningsBehaviorGlobally._disable_warnings_count <= 0:
279 _WARNINGS_CONTROLLER.set_mlflow_warnings_disablement_state_globally(
280 disabled=False
281 )
282 if _SetMLflowEventsAndWarningsBehaviorGlobally._reroute_warnings_count <= 0:
283 _WARNINGS_CONTROLLER.set_mlflow_warnings_rerouting_state_globally(
284 rerouted=False
285 )
286 except Exception:
287 pass
288
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mlflow/utils/autologging_utils/logging_and_warnings.py b/mlflow/utils/autologging_utils/logging_and_warnings.py
--- a/mlflow/utils/autologging_utils/logging_and_warnings.py
+++ b/mlflow/utils/autologging_utils/logging_and_warnings.py
@@ -62,7 +62,7 @@
is_mlflow_warning and self._mlflow_warnings_rerouted_to_event_logs
):
_logger.warning(
- "MLflow autologging encountered a warning:" ' "%s:%d: %s: %s"',
+ 'MLflow autologging encountered a warning: "%s:%d: %s: %s"',
filename,
lineno,
category.__name__,
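The merged literal above produces exactly the same message string as the original implicit concatenation; presumably the rewrite is there to satisfy newer pylint releases, which flag implicit string concatenation (`implicit-str-concat`) inside call arguments. A quick equivalence check, with that lint rationale stated as an assumption rather than taken from the issue:

```python
# The two spellings are identical at runtime; only the single literal
# avoids implicit string concatenation inside the logger call.
old = "MLflow autologging encountered a warning:" ' "%s:%d: %s: %s"'
new = 'MLflow autologging encountered a warning: "%s:%d: %s: %s"'
assert old == new
```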
|
{"golden_diff": "diff --git a/mlflow/utils/autologging_utils/logging_and_warnings.py b/mlflow/utils/autologging_utils/logging_and_warnings.py\n--- a/mlflow/utils/autologging_utils/logging_and_warnings.py\n+++ b/mlflow/utils/autologging_utils/logging_and_warnings.py\n@@ -62,7 +62,7 @@\n is_mlflow_warning and self._mlflow_warnings_rerouted_to_event_logs\n ):\n _logger.warning(\n- \"MLflow autologging encountered a warning:\" ' \"%s:%d: %s: %s\"',\n+ 'MLflow autologging encountered a warning: \"%s:%d: %s: %s\"',\n filename,\n lineno,\n category.__name__,\n", "issue": "Upgrade `pylint` to `2.14.4`\nWe currently use pylint 2.11.1. This is a bit old so we should upgrade it to the latest version (2.14.4):\r\n\r\nhttps://github.com/mlflow/mlflow/blob/d40780be361f4bd2741c2e8fcbd428c1d693edcf/requirements/lint-requirements.txt#L1\r\n\r\n- https://pypi.org/project/pylint/2.11.1/\r\n- https://pypi.org/project/pylint/2.14.4/\n", "before_files": [{"content": "import os\nimport warnings\nfrom contextlib import contextmanager\nfrom pathlib import Path\nfrom threading import RLock, get_ident as get_current_thread_id\n\nimport mlflow\nimport mlflow.utils.logging_utils as logging_utils\n\n\nclass _WarningsController:\n \"\"\"\n Provides threadsafe utilities to modify warning behavior for MLflow autologging, including:\n\n - Global disablement of MLflow warnings across all threads\n - Global rerouting of MLflow warnings to an MLflow event logger (i.e. `logger.warn()`)\n across all threads\n - Disablement of non-MLflow warnings for the current thread\n - Rerouting of non-MLflow warnings to an MLflow event logger for the current thread\n \"\"\"\n\n def __init__(self):\n self._mlflow_root_path = Path(os.path.dirname(mlflow.__file__)).resolve()\n self._state_lock = RLock()\n\n self._did_patch_showwarning = False\n self._original_showwarning = None\n\n self._disabled_threads = set()\n self._rerouted_threads = set()\n self._mlflow_warnings_disabled_globally = False\n self._mlflow_warnings_rerouted_to_event_logs = False\n\n def _patched_showwarning(self, message, category, filename, lineno, *args, **kwargs):\n \"\"\"\n A patched implementation of `warnings.showwarning` that enforces the warning configuration\n options configured on the controller (e.g. rerouting or disablement of MLflow warnings,\n disablement of all warnings for the current thread).\n\n Note that reassigning `warnings.showwarning` is the standard / recommended approach for\n modifying warning message display behaviors. For reference, see\n https://docs.python.org/3/library/warnings.html#warnings.showwarning\n \"\"\"\n # NB: We explicitly avoid blocking on the `self._state_lock` lock during `showwarning`\n # to so that threads don't have to execute serially whenever they emit warnings with\n # `warnings.warn()`. 
We only lock during configuration changes to ensure that\n # `warnings.showwarning` is patched or unpatched at the correct times.\n\n from mlflow.utils.autologging_utils import _logger\n\n # If the warning's source file is contained within the MLflow package's base\n # directory, it is an MLflow warning and should be emitted via `logger.warning`\n warning_source_path = Path(filename).resolve()\n is_mlflow_warning = self._mlflow_root_path in warning_source_path.parents\n curr_thread = get_current_thread_id()\n\n if (curr_thread in self._disabled_threads) or (\n is_mlflow_warning and self._mlflow_warnings_disabled_globally\n ):\n return\n elif (curr_thread in self._rerouted_threads and not is_mlflow_warning) or (\n is_mlflow_warning and self._mlflow_warnings_rerouted_to_event_logs\n ):\n _logger.warning(\n \"MLflow autologging encountered a warning:\" ' \"%s:%d: %s: %s\"',\n filename,\n lineno,\n category.__name__,\n message,\n )\n else:\n self._original_showwarning(message, category, filename, lineno, *args, **kwargs)\n\n def _should_patch_showwarning(self):\n return (\n (len(self._disabled_threads) > 0)\n or (len(self._rerouted_threads) > 0)\n or self._mlflow_warnings_disabled_globally\n or self._mlflow_warnings_rerouted_to_event_logs\n )\n\n def _modify_patch_state_if_necessary(self):\n \"\"\"\n Patches or unpatches `warnings.showwarning` if necessary, as determined by:\n - Whether or not `warnings.showwarning` is already patched\n - Whether or not any custom warning state has been configured on the warnings\n controller (i.e. disablement or rerouting of certain warnings globally or for a\n particular thread)\n\n Note that reassigning `warnings.showwarning` is the standard / recommended approach for\n modifying warning message display behaviors. For reference, see\n https://docs.python.org/3/library/warnings.html#warnings.showwarning\n \"\"\"\n with self._state_lock:\n if self._should_patch_showwarning() and not self._did_patch_showwarning:\n self._original_showwarning = warnings.showwarning\n warnings.showwarning = self._patched_showwarning\n self._did_patch_showwarning = True\n elif not self._should_patch_showwarning() and self._did_patch_showwarning:\n warnings.showwarning = self._original_showwarning\n self._did_patch_showwarning = False\n\n def set_mlflow_warnings_disablement_state_globally(self, disabled=True):\n \"\"\"\n Disables (or re-enables) MLflow warnings globally across all threads.\n\n :param disabled: If `True`, disables MLflow warnings globally across all threads.\n If `False`, enables MLflow warnings globally across all threads.\n \"\"\"\n with self._state_lock:\n self._mlflow_warnings_disabled_globally = disabled\n self._modify_patch_state_if_necessary()\n\n def set_mlflow_warnings_rerouting_state_globally(self, rerouted=True):\n \"\"\"\n Enables (or disables) rerouting of MLflow warnings to an MLflow event logger with level\n WARNING (e.g. `logger.warning()`) globally across all threads.\n\n :param rerouted: If `True`, enables MLflow warning rerouting globally across all threads.\n If `False`, disables MLflow warning rerouting globally across all threads.\n \"\"\"\n with self._state_lock:\n self._mlflow_warnings_rerouted_to_event_logs = rerouted\n self._modify_patch_state_if_necessary()\n\n def set_non_mlflow_warnings_disablement_state_for_current_thread(self, disabled=True):\n \"\"\"\n Disables (or re-enables) non-MLflow warnings for the current thread.\n\n :param disabled: If `True`, disables non-MLflow warnings for the current thread. 
If `False`,\n enables non-MLflow warnings for the current thread. non-MLflow warning\n behavior in other threads is unaffected.\n \"\"\"\n with self._state_lock:\n if disabled:\n self._disabled_threads.add(get_current_thread_id())\n else:\n self._disabled_threads.discard(get_current_thread_id())\n self._modify_patch_state_if_necessary()\n\n def set_non_mlflow_warnings_rerouting_state_for_current_thread(self, rerouted=True):\n \"\"\"\n Enables (or disables) rerouting of non-MLflow warnings to an MLflow event logger with level\n WARNING (e.g. `logger.warning()`) for the current thread.\n\n :param rerouted: If `True`, enables non-MLflow warning rerouting for the current thread.\n If `False`, disables non-MLflow warning rerouting for the current thread.\n non-MLflow warning behavior in other threads is unaffected.\n \"\"\"\n with self._state_lock:\n if rerouted:\n self._rerouted_threads.add(get_current_thread_id())\n else:\n self._rerouted_threads.discard(get_current_thread_id())\n self._modify_patch_state_if_necessary()\n\n def get_warnings_disablement_state_for_current_thread(self):\n \"\"\"\n :return: `True` if non-MLflow warnings are disabled for the current thread.\n `False` otherwise.\n \"\"\"\n return get_current_thread_id() in self._disabled_threads\n\n def get_warnings_rerouting_state_for_current_thread(self):\n \"\"\"\n :return: `True` if non-MLflow warnings are rerouted to an MLflow event logger with level\n WARNING for the current thread. `False` otherwise.\n \"\"\"\n return get_current_thread_id() in self._rerouted_threads\n\n\n_WARNINGS_CONTROLLER = _WarningsController()\n\n\n@contextmanager\ndef set_non_mlflow_warnings_behavior_for_current_thread(disable_warnings, reroute_warnings):\n \"\"\"\n Context manager that modifies the behavior of non-MLflow warnings upon entry, according to the\n specified parameters.\n\n :param disable_warnings: If `True`, disable (mutate & discard) non-MLflow warnings. If `False`,\n do not disable non-MLflow warnings.\n :param reroute_warnings: If `True`, reroute non-MLflow warnings to an MLflow event logger with\n level WARNING. If `False`, do not reroute non-MLflow warnings.\n \"\"\"\n prev_disablement_state = (\n _WARNINGS_CONTROLLER.get_warnings_disablement_state_for_current_thread()\n )\n prev_rerouting_state = _WARNINGS_CONTROLLER.get_warnings_rerouting_state_for_current_thread()\n try:\n _WARNINGS_CONTROLLER.set_non_mlflow_warnings_disablement_state_for_current_thread(\n disabled=disable_warnings\n )\n _WARNINGS_CONTROLLER.set_non_mlflow_warnings_rerouting_state_for_current_thread(\n rerouted=reroute_warnings\n )\n yield\n finally:\n _WARNINGS_CONTROLLER.set_non_mlflow_warnings_disablement_state_for_current_thread(\n disabled=prev_disablement_state\n )\n _WARNINGS_CONTROLLER.set_non_mlflow_warnings_rerouting_state_for_current_thread(\n rerouted=prev_rerouting_state\n )\n\n\n@contextmanager\ndef set_mlflow_events_and_warnings_behavior_globally(\n disable_event_logs, disable_warnings, reroute_warnings\n):\n \"\"\"\n Threadsafe context manager that modifies the behavior of MLflow event logging statements\n and MLflow warnings upon entry, according to the specified parameters. 
Modifications are\n applied globally across all threads and are not reverted until all threads that have made\n a particular modification have exited the context.\n\n :param disable_event_logs: If `True`, disable (mute & discard) MLflow event logging statements.\n If `False`, do not disable MLflow event logging statements.\n :param disable_warnings: If `True`, disable (mutate & discard) MLflow warnings. If `False`,\n do not disable MLflow warnings.\n :param reroute_warnings: If `True`, reroute MLflow warnings to an MLflow event logger with\n level WARNING. If `False`, do not reroute MLflow warnings.\n \"\"\"\n\n with _SetMLflowEventsAndWarningsBehaviorGlobally(\n disable_event_logs, disable_warnings, reroute_warnings\n ):\n yield\n\n\nclass _SetMLflowEventsAndWarningsBehaviorGlobally:\n _lock = RLock()\n _disable_event_logs_count = 0\n _disable_warnings_count = 0\n _reroute_warnings_count = 0\n\n def __init__(self, disable_event_logs, disable_warnings, reroute_warnings):\n self._disable_event_logs = disable_event_logs\n self._disable_warnings = disable_warnings\n self._reroute_warnings = reroute_warnings\n\n def __enter__(self):\n try:\n with _SetMLflowEventsAndWarningsBehaviorGlobally._lock:\n if self._disable_event_logs:\n if _SetMLflowEventsAndWarningsBehaviorGlobally._disable_event_logs_count <= 0:\n logging_utils.disable_logging()\n _SetMLflowEventsAndWarningsBehaviorGlobally._disable_event_logs_count += 1\n\n if self._disable_warnings:\n if _SetMLflowEventsAndWarningsBehaviorGlobally._disable_warnings_count <= 0:\n _WARNINGS_CONTROLLER.set_mlflow_warnings_disablement_state_globally(\n disabled=True\n )\n _SetMLflowEventsAndWarningsBehaviorGlobally._disable_warnings_count += 1\n\n if self._reroute_warnings:\n if _SetMLflowEventsAndWarningsBehaviorGlobally._reroute_warnings_count <= 0:\n _WARNINGS_CONTROLLER.set_mlflow_warnings_rerouting_state_globally(\n rerouted=True\n )\n _SetMLflowEventsAndWarningsBehaviorGlobally._reroute_warnings_count += 1\n except Exception:\n pass\n\n def __exit__(self, *args, **kwargs):\n try:\n with _SetMLflowEventsAndWarningsBehaviorGlobally._lock:\n if self._disable_event_logs:\n _SetMLflowEventsAndWarningsBehaviorGlobally._disable_event_logs_count -= 1\n if self._disable_warnings:\n _SetMLflowEventsAndWarningsBehaviorGlobally._disable_warnings_count -= 1\n if self._reroute_warnings:\n _SetMLflowEventsAndWarningsBehaviorGlobally._reroute_warnings_count -= 1\n\n if _SetMLflowEventsAndWarningsBehaviorGlobally._disable_event_logs_count <= 0:\n logging_utils.enable_logging()\n if _SetMLflowEventsAndWarningsBehaviorGlobally._disable_warnings_count <= 0:\n _WARNINGS_CONTROLLER.set_mlflow_warnings_disablement_state_globally(\n disabled=False\n )\n if _SetMLflowEventsAndWarningsBehaviorGlobally._reroute_warnings_count <= 0:\n _WARNINGS_CONTROLLER.set_mlflow_warnings_rerouting_state_globally(\n rerouted=False\n )\n except Exception:\n pass\n", "path": "mlflow/utils/autologging_utils/logging_and_warnings.py"}], "after_files": [{"content": "import os\nimport warnings\nfrom contextlib import contextmanager\nfrom pathlib import Path\nfrom threading import RLock, get_ident as get_current_thread_id\n\nimport mlflow\nimport mlflow.utils.logging_utils as logging_utils\n\n\nclass _WarningsController:\n \"\"\"\n Provides threadsafe utilities to modify warning behavior for MLflow autologging, including:\n\n - Global disablement of MLflow warnings across all threads\n - Global rerouting of MLflow warnings to an MLflow event logger (i.e. 
`logger.warn()`)\n across all threads\n - Disablement of non-MLflow warnings for the current thread\n - Rerouting of non-MLflow warnings to an MLflow event logger for the current thread\n \"\"\"\n\n def __init__(self):\n self._mlflow_root_path = Path(os.path.dirname(mlflow.__file__)).resolve()\n self._state_lock = RLock()\n\n self._did_patch_showwarning = False\n self._original_showwarning = None\n\n self._disabled_threads = set()\n self._rerouted_threads = set()\n self._mlflow_warnings_disabled_globally = False\n self._mlflow_warnings_rerouted_to_event_logs = False\n\n def _patched_showwarning(self, message, category, filename, lineno, *args, **kwargs):\n \"\"\"\n A patched implementation of `warnings.showwarning` that enforces the warning configuration\n options configured on the controller (e.g. rerouting or disablement of MLflow warnings,\n disablement of all warnings for the current thread).\n\n Note that reassigning `warnings.showwarning` is the standard / recommended approach for\n modifying warning message display behaviors. For reference, see\n https://docs.python.org/3/library/warnings.html#warnings.showwarning\n \"\"\"\n # NB: We explicitly avoid blocking on the `self._state_lock` lock during `showwarning`\n # to so that threads don't have to execute serially whenever they emit warnings with\n # `warnings.warn()`. We only lock during configuration changes to ensure that\n # `warnings.showwarning` is patched or unpatched at the correct times.\n\n from mlflow.utils.autologging_utils import _logger\n\n # If the warning's source file is contained within the MLflow package's base\n # directory, it is an MLflow warning and should be emitted via `logger.warning`\n warning_source_path = Path(filename).resolve()\n is_mlflow_warning = self._mlflow_root_path in warning_source_path.parents\n curr_thread = get_current_thread_id()\n\n if (curr_thread in self._disabled_threads) or (\n is_mlflow_warning and self._mlflow_warnings_disabled_globally\n ):\n return\n elif (curr_thread in self._rerouted_threads and not is_mlflow_warning) or (\n is_mlflow_warning and self._mlflow_warnings_rerouted_to_event_logs\n ):\n _logger.warning(\n 'MLflow autologging encountered a warning: \"%s:%d: %s: %s\"',\n filename,\n lineno,\n category.__name__,\n message,\n )\n else:\n self._original_showwarning(message, category, filename, lineno, *args, **kwargs)\n\n def _should_patch_showwarning(self):\n return (\n (len(self._disabled_threads) > 0)\n or (len(self._rerouted_threads) > 0)\n or self._mlflow_warnings_disabled_globally\n or self._mlflow_warnings_rerouted_to_event_logs\n )\n\n def _modify_patch_state_if_necessary(self):\n \"\"\"\n Patches or unpatches `warnings.showwarning` if necessary, as determined by:\n - Whether or not `warnings.showwarning` is already patched\n - Whether or not any custom warning state has been configured on the warnings\n controller (i.e. disablement or rerouting of certain warnings globally or for a\n particular thread)\n\n Note that reassigning `warnings.showwarning` is the standard / recommended approach for\n modifying warning message display behaviors. 
For reference, see\n https://docs.python.org/3/library/warnings.html#warnings.showwarning\n \"\"\"\n with self._state_lock:\n if self._should_patch_showwarning() and not self._did_patch_showwarning:\n self._original_showwarning = warnings.showwarning\n warnings.showwarning = self._patched_showwarning\n self._did_patch_showwarning = True\n elif not self._should_patch_showwarning() and self._did_patch_showwarning:\n warnings.showwarning = self._original_showwarning\n self._did_patch_showwarning = False\n\n def set_mlflow_warnings_disablement_state_globally(self, disabled=True):\n \"\"\"\n Disables (or re-enables) MLflow warnings globally across all threads.\n\n :param disabled: If `True`, disables MLflow warnings globally across all threads.\n If `False`, enables MLflow warnings globally across all threads.\n \"\"\"\n with self._state_lock:\n self._mlflow_warnings_disabled_globally = disabled\n self._modify_patch_state_if_necessary()\n\n def set_mlflow_warnings_rerouting_state_globally(self, rerouted=True):\n \"\"\"\n Enables (or disables) rerouting of MLflow warnings to an MLflow event logger with level\n WARNING (e.g. `logger.warning()`) globally across all threads.\n\n :param rerouted: If `True`, enables MLflow warning rerouting globally across all threads.\n If `False`, disables MLflow warning rerouting globally across all threads.\n \"\"\"\n with self._state_lock:\n self._mlflow_warnings_rerouted_to_event_logs = rerouted\n self._modify_patch_state_if_necessary()\n\n def set_non_mlflow_warnings_disablement_state_for_current_thread(self, disabled=True):\n \"\"\"\n Disables (or re-enables) non-MLflow warnings for the current thread.\n\n :param disabled: If `True`, disables non-MLflow warnings for the current thread. If `False`,\n enables non-MLflow warnings for the current thread. non-MLflow warning\n behavior in other threads is unaffected.\n \"\"\"\n with self._state_lock:\n if disabled:\n self._disabled_threads.add(get_current_thread_id())\n else:\n self._disabled_threads.discard(get_current_thread_id())\n self._modify_patch_state_if_necessary()\n\n def set_non_mlflow_warnings_rerouting_state_for_current_thread(self, rerouted=True):\n \"\"\"\n Enables (or disables) rerouting of non-MLflow warnings to an MLflow event logger with level\n WARNING (e.g. `logger.warning()`) for the current thread.\n\n :param rerouted: If `True`, enables non-MLflow warning rerouting for the current thread.\n If `False`, disables non-MLflow warning rerouting for the current thread.\n non-MLflow warning behavior in other threads is unaffected.\n \"\"\"\n with self._state_lock:\n if rerouted:\n self._rerouted_threads.add(get_current_thread_id())\n else:\n self._rerouted_threads.discard(get_current_thread_id())\n self._modify_patch_state_if_necessary()\n\n def get_warnings_disablement_state_for_current_thread(self):\n \"\"\"\n :return: `True` if non-MLflow warnings are disabled for the current thread.\n `False` otherwise.\n \"\"\"\n return get_current_thread_id() in self._disabled_threads\n\n def get_warnings_rerouting_state_for_current_thread(self):\n \"\"\"\n :return: `True` if non-MLflow warnings are rerouted to an MLflow event logger with level\n WARNING for the current thread. 
`False` otherwise.\n \"\"\"\n return get_current_thread_id() in self._rerouted_threads\n\n\n_WARNINGS_CONTROLLER = _WarningsController()\n\n\n@contextmanager\ndef set_non_mlflow_warnings_behavior_for_current_thread(disable_warnings, reroute_warnings):\n \"\"\"\n Context manager that modifies the behavior of non-MLflow warnings upon entry, according to the\n specified parameters.\n\n :param disable_warnings: If `True`, disable (mutate & discard) non-MLflow warnings. If `False`,\n do not disable non-MLflow warnings.\n :param reroute_warnings: If `True`, reroute non-MLflow warnings to an MLflow event logger with\n level WARNING. If `False`, do not reroute non-MLflow warnings.\n \"\"\"\n prev_disablement_state = (\n _WARNINGS_CONTROLLER.get_warnings_disablement_state_for_current_thread()\n )\n prev_rerouting_state = _WARNINGS_CONTROLLER.get_warnings_rerouting_state_for_current_thread()\n try:\n _WARNINGS_CONTROLLER.set_non_mlflow_warnings_disablement_state_for_current_thread(\n disabled=disable_warnings\n )\n _WARNINGS_CONTROLLER.set_non_mlflow_warnings_rerouting_state_for_current_thread(\n rerouted=reroute_warnings\n )\n yield\n finally:\n _WARNINGS_CONTROLLER.set_non_mlflow_warnings_disablement_state_for_current_thread(\n disabled=prev_disablement_state\n )\n _WARNINGS_CONTROLLER.set_non_mlflow_warnings_rerouting_state_for_current_thread(\n rerouted=prev_rerouting_state\n )\n\n\n@contextmanager\ndef set_mlflow_events_and_warnings_behavior_globally(\n disable_event_logs, disable_warnings, reroute_warnings\n):\n \"\"\"\n Threadsafe context manager that modifies the behavior of MLflow event logging statements\n and MLflow warnings upon entry, according to the specified parameters. Modifications are\n applied globally across all threads and are not reverted until all threads that have made\n a particular modification have exited the context.\n\n :param disable_event_logs: If `True`, disable (mute & discard) MLflow event logging statements.\n If `False`, do not disable MLflow event logging statements.\n :param disable_warnings: If `True`, disable (mutate & discard) MLflow warnings. If `False`,\n do not disable MLflow warnings.\n :param reroute_warnings: If `True`, reroute MLflow warnings to an MLflow event logger with\n level WARNING. 
If `False`, do not reroute MLflow warnings.\n \"\"\"\n\n with _SetMLflowEventsAndWarningsBehaviorGlobally(\n disable_event_logs, disable_warnings, reroute_warnings\n ):\n yield\n\n\nclass _SetMLflowEventsAndWarningsBehaviorGlobally:\n _lock = RLock()\n _disable_event_logs_count = 0\n _disable_warnings_count = 0\n _reroute_warnings_count = 0\n\n def __init__(self, disable_event_logs, disable_warnings, reroute_warnings):\n self._disable_event_logs = disable_event_logs\n self._disable_warnings = disable_warnings\n self._reroute_warnings = reroute_warnings\n\n def __enter__(self):\n try:\n with _SetMLflowEventsAndWarningsBehaviorGlobally._lock:\n if self._disable_event_logs:\n if _SetMLflowEventsAndWarningsBehaviorGlobally._disable_event_logs_count <= 0:\n logging_utils.disable_logging()\n _SetMLflowEventsAndWarningsBehaviorGlobally._disable_event_logs_count += 1\n\n if self._disable_warnings:\n if _SetMLflowEventsAndWarningsBehaviorGlobally._disable_warnings_count <= 0:\n _WARNINGS_CONTROLLER.set_mlflow_warnings_disablement_state_globally(\n disabled=True\n )\n _SetMLflowEventsAndWarningsBehaviorGlobally._disable_warnings_count += 1\n\n if self._reroute_warnings:\n if _SetMLflowEventsAndWarningsBehaviorGlobally._reroute_warnings_count <= 0:\n _WARNINGS_CONTROLLER.set_mlflow_warnings_rerouting_state_globally(\n rerouted=True\n )\n _SetMLflowEventsAndWarningsBehaviorGlobally._reroute_warnings_count += 1\n except Exception:\n pass\n\n def __exit__(self, *args, **kwargs):\n try:\n with _SetMLflowEventsAndWarningsBehaviorGlobally._lock:\n if self._disable_event_logs:\n _SetMLflowEventsAndWarningsBehaviorGlobally._disable_event_logs_count -= 1\n if self._disable_warnings:\n _SetMLflowEventsAndWarningsBehaviorGlobally._disable_warnings_count -= 1\n if self._reroute_warnings:\n _SetMLflowEventsAndWarningsBehaviorGlobally._reroute_warnings_count -= 1\n\n if _SetMLflowEventsAndWarningsBehaviorGlobally._disable_event_logs_count <= 0:\n logging_utils.enable_logging()\n if _SetMLflowEventsAndWarningsBehaviorGlobally._disable_warnings_count <= 0:\n _WARNINGS_CONTROLLER.set_mlflow_warnings_disablement_state_globally(\n disabled=False\n )\n if _SetMLflowEventsAndWarningsBehaviorGlobally._reroute_warnings_count <= 0:\n _WARNINGS_CONTROLLER.set_mlflow_warnings_rerouting_state_globally(\n rerouted=False\n )\n except Exception:\n pass\n", "path": "mlflow/utils/autologging_utils/logging_and_warnings.py"}]}
| 3,936 | 155 |
gh_patches_debug_30202 | rasdani/github-patches | git_diff | MycroftAI__mycroft-core-2831 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
gTTS needs to be upgraded to 2.2.2 on dev branch
**Describe the bug**
When using Google TTS, the audio service returns an error and never returns audio.
```
2021-02-06 02:39:24.895 | ERROR | 1 | mycroft.audio.speech:mute_and_speak:134 | TTS execution failed (gTTSError('200 (OK) from TTS API. Probable cause: Unknown'))
```
I had to upgrade `gTTS` from `2.2.0` to `2.2.2` to fix the issue.
```
pip install gTTS -U
Collecting gTTS
Downloading https://files.pythonhosted.org/packages/5f/b9/94e59337107be134b21ce395a29fc0715b707b560108d6797de2d93e1178/gTTS-2.2.2-py3-none-any.whl
Requirement already satisfied, skipping upgrade: click in /opt/mycroft-venv/lib/python3.7/site-packages (from gTTS) (7.1.2)
Requirement already satisfied, skipping upgrade: six in /opt/mycroft-venv/lib/python3.7/site-packages (from gTTS) (1.15.0)
Requirement already satisfied, skipping upgrade: requests in /opt/mycroft-venv/lib/python3.7/site-packages (from gTTS) (2.20.0)
Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /opt/mycroft-venv/lib/python3.7/site-packages (from requests->gTTS) (1.24.3)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /opt/mycroft-venv/lib/python3.7/site-packages (from requests->gTTS) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /opt/mycroft-venv/lib/python3.7/site-packages (from requests->gTTS) (2020.12.5)
Requirement already satisfied, skipping upgrade: idna<2.8,>=2.5 in /opt/mycroft-venv/lib/python3.7/site-packages (from requests->gTTS) (2.7)
Installing collected packages: gTTS
Found existing installation: gTTS 2.2.0
Uninstalling gTTS-2.2.0:
Successfully uninstalled gTTS-2.2.0
Successfully installed gTTS-2.2.2
```
**Environment:**
- Device type: Raspberry Pi 4
- OS: Raspberry OS 64-bit
- Mycroft-core version: dev
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mycroft/tts/google_tts.py`
Content:
```
1 # Copyright 2017 Mycroft AI Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 from gtts import gTTS
16 from gtts.lang import tts_langs
17
18 from .tts import TTS, TTSValidator
19
20 from mycroft.util.log import LOG
21
22 # Live list of languages
23 # Cached list of supported languages (2020-05-27)
24 _default_langs = {'af': 'Afrikaans', 'sq': 'Albanian', 'ar': 'Arabic',
25 'hy': 'Armenian', 'bn': 'Bengali', 'bs': 'Bosnian',
26 'ca': 'Catalan', 'hr': 'Croatian', 'cs': 'Czech',
27 'da': 'Danish', 'nl': 'Dutch', 'en': 'English',
28 'eo': 'Esperanto', 'et': 'Estonian', 'tl': 'Filipino',
29 'fi': 'Finnish', 'fr': 'French', 'de': 'German',
30 'el': 'Greek', 'gu': 'Gujarati', 'hi': 'Hindi',
31 'hu': 'Hungarian', 'is': 'Icelandic', 'id': 'Indonesian',
32 'it': 'Italian', 'ja': 'Japanese', 'jw': 'Javanese',
33 'kn': 'Kannada', 'km': 'Khmer', 'ko': 'Korean',
34 'la': 'Latin', 'lv': 'Latvian', 'mk': 'Macedonian',
35 'ml': 'Malayalam', 'mr': 'Marathi',
36 'my': 'Myanmar (Burmese)', 'ne': 'Nepali',
37 'no': 'Norwegian', 'pl': 'Polish', 'pt': 'Portuguese',
38 'ro': 'Romanian', 'ru': 'Russian', 'sr': 'Serbian',
39 'si': 'Sinhala', 'sk': 'Slovak', 'es': 'Spanish',
40 'su': 'Sundanese', 'sw': 'Swahili', 'sv': 'Swedish',
41 'ta': 'Tamil', 'te': 'Telugu', 'th': 'Thai', 'tr': 'Turkish',
42 'uk': 'Ukrainian', 'ur': 'Urdu', 'vi': 'Vietnamese',
43 'cy': 'Welsh', 'zh-cn': 'Chinese (Mandarin/China)',
44 'zh-tw': 'Chinese (Mandarin/Taiwan)',
45 'en-us': 'English (US)', 'en-ca': 'English (Canada)',
46 'en-uk': 'English (UK)', 'en-gb': 'English (UK)',
47 'en-au': 'English (Australia)', 'en-gh': 'English (Ghana)',
48 'en-in': 'English (India)', 'en-ie': 'English (Ireland)',
49 'en-nz': 'English (New Zealand)',
50 'en-ng': 'English (Nigeria)',
51 'en-ph': 'English (Philippines)',
52 'en-za': 'English (South Africa)',
53 'en-tz': 'English (Tanzania)', 'fr-ca': 'French (Canada)',
54 'fr-fr': 'French (France)', 'pt-br': 'Portuguese (Brazil)',
55 'pt-pt': 'Portuguese (Portugal)', 'es-es': 'Spanish (Spain)',
56 'es-us': 'Spanish (United States)'
57 }
58
59
60 _supported_langs = None
61
62
63 def get_supported_langs():
64 """Get dict of supported languages.
65
66 Tries to fetch remote list, if that fails a local cache will be used.
67
68 Returns:
69 (dict): Lang code to lang name map.
70 """
71 global _supported_langs
72 if not _supported_langs:
73 try:
74 _supported_langs = tts_langs()
75 except Exception:
76 LOG.warning('Couldn\'t fetch upto date language codes')
77 return _supported_langs or _default_langs
78
79
80 class GoogleTTS(TTS):
81 """Interface to google TTS."""
82 def __init__(self, lang, config):
83 self._google_lang = None
84 super(GoogleTTS, self).__init__(lang, config, GoogleTTSValidator(
85 self), 'mp3')
86
87 @property
88 def google_lang(self):
89 """Property containing a converted language code suitable for gTTS."""
90 supported_langs = get_supported_langs()
91 if not self._google_lang:
92 if self.lang.lower() in supported_langs:
93 self._google_lang = self.lang.lower()
94 elif self.lang[:2].lower() in supported_langs:
95 self._google_lang = self.lang[:2]
96 return self._google_lang or self.lang.lower()
97
98 def get_tts(self, sentence, wav_file):
99 """Fetch tts audio using gTTS.
100
101 Arguments:
102 sentence (str): Sentence to generate audio for
103 wav_file (str): output file path
104 Returns:
105 Tuple ((str) written file, None)
106 """
107 tts = gTTS(text=sentence, lang=self.google_lang)
108 tts.save(wav_file)
109 return (wav_file, None) # No phonemes
110
111
112 class GoogleTTSValidator(TTSValidator):
113 def __init__(self, tts):
114 super(GoogleTTSValidator, self).__init__(tts)
115
116 def validate_lang(self):
117 lang = self.tts.google_lang
118 if lang.lower() not in get_supported_langs():
119 raise ValueError("Language not supported by gTTS: {}".format(lang))
120
121 def validate_connection(self):
122 try:
123 gTTS(text='Hi').save(self.tts.filename)
124 except Exception:
125 raise Exception(
126 'GoogleTTS server could not be verified. Please check your '
127 'internet connection.')
128
129 def get_tts_class(self):
130 return GoogleTTS
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mycroft/tts/google_tts.py b/mycroft/tts/google_tts.py
--- a/mycroft/tts/google_tts.py
+++ b/mycroft/tts/google_tts.py
@@ -20,7 +20,7 @@
from mycroft.util.log import LOG
# Live list of languages
-# Cached list of supported languages (2020-05-27)
+# Cached list of supported languages (2021-02-09)
_default_langs = {'af': 'Afrikaans', 'sq': 'Albanian', 'ar': 'Arabic',
'hy': 'Armenian', 'bn': 'Bengali', 'bs': 'Bosnian',
'ca': 'Catalan', 'hr': 'Croatian', 'cs': 'Czech',
@@ -40,20 +40,7 @@
'su': 'Sundanese', 'sw': 'Swahili', 'sv': 'Swedish',
'ta': 'Tamil', 'te': 'Telugu', 'th': 'Thai', 'tr': 'Turkish',
'uk': 'Ukrainian', 'ur': 'Urdu', 'vi': 'Vietnamese',
- 'cy': 'Welsh', 'zh-cn': 'Chinese (Mandarin/China)',
- 'zh-tw': 'Chinese (Mandarin/Taiwan)',
- 'en-us': 'English (US)', 'en-ca': 'English (Canada)',
- 'en-uk': 'English (UK)', 'en-gb': 'English (UK)',
- 'en-au': 'English (Australia)', 'en-gh': 'English (Ghana)',
- 'en-in': 'English (India)', 'en-ie': 'English (Ireland)',
- 'en-nz': 'English (New Zealand)',
- 'en-ng': 'English (Nigeria)',
- 'en-ph': 'English (Philippines)',
- 'en-za': 'English (South Africa)',
- 'en-tz': 'English (Tanzania)', 'fr-ca': 'French (Canada)',
- 'fr-fr': 'French (France)', 'pt-br': 'Portuguese (Brazil)',
- 'pt-pt': 'Portuguese (Portugal)', 'es-es': 'Spanish (Spain)',
- 'es-us': 'Spanish (United States)'
+ 'cy': 'Welsh', 'zh': 'Chinese (Mandarin/China)'
}
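
Why this diff resolves the issue: gTTS 2.2.x reports a slimmer set of language codes from `gtts.lang.tts_langs()` than older releases did, so the cached `_default_langs` fallback is trimmed to base codes the newer library accepts. A minimal sketch for inspecting what an installed gTTS actually supports — it assumes gTTS >= 2.2.2 is installed and a network lookup is possible; `tts_langs()` is the same helper `google_tts.py` already imports:

```python
# Sketch: list the language codes the installed gTTS reports (assumes gTTS >= 2.2.2).
from gtts.lang import tts_langs

langs = tts_langs()  # dict mapping language code -> display name
print(len(langs), "languages reported")
for code in ("en", "zh", "zh-cn", "en-us"):
    print(code, "->", langs.get(code, "not listed"))
```

Any code missing from this output is one the trimmed fallback dictionary no longer needs to advertise.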
|
{"golden_diff": "diff --git a/mycroft/tts/google_tts.py b/mycroft/tts/google_tts.py\n--- a/mycroft/tts/google_tts.py\n+++ b/mycroft/tts/google_tts.py\n@@ -20,7 +20,7 @@\n from mycroft.util.log import LOG\n \n # Live list of languages\n-# Cached list of supported languages (2020-05-27)\n+# Cached list of supported languages (2021-02-09)\n _default_langs = {'af': 'Afrikaans', 'sq': 'Albanian', 'ar': 'Arabic',\n 'hy': 'Armenian', 'bn': 'Bengali', 'bs': 'Bosnian',\n 'ca': 'Catalan', 'hr': 'Croatian', 'cs': 'Czech',\n@@ -40,20 +40,7 @@\n 'su': 'Sundanese', 'sw': 'Swahili', 'sv': 'Swedish',\n 'ta': 'Tamil', 'te': 'Telugu', 'th': 'Thai', 'tr': 'Turkish',\n 'uk': 'Ukrainian', 'ur': 'Urdu', 'vi': 'Vietnamese',\n- 'cy': 'Welsh', 'zh-cn': 'Chinese (Mandarin/China)',\n- 'zh-tw': 'Chinese (Mandarin/Taiwan)',\n- 'en-us': 'English (US)', 'en-ca': 'English (Canada)',\n- 'en-uk': 'English (UK)', 'en-gb': 'English (UK)',\n- 'en-au': 'English (Australia)', 'en-gh': 'English (Ghana)',\n- 'en-in': 'English (India)', 'en-ie': 'English (Ireland)',\n- 'en-nz': 'English (New Zealand)',\n- 'en-ng': 'English (Nigeria)',\n- 'en-ph': 'English (Philippines)',\n- 'en-za': 'English (South Africa)',\n- 'en-tz': 'English (Tanzania)', 'fr-ca': 'French (Canada)',\n- 'fr-fr': 'French (France)', 'pt-br': 'Portuguese (Brazil)',\n- 'pt-pt': 'Portuguese (Portugal)', 'es-es': 'Spanish (Spain)',\n- 'es-us': 'Spanish (United States)'\n+ 'cy': 'Welsh', 'zh': 'Chinese (Mandarin/China)'\n }\n", "issue": "gTTS needs to be upgraded to 2.2.2 on dev branch\n**Describe the bug**\r\nWhen using Google TTS, the audio service returns an error and never returns audio.\r\n```\r\n2021-02-06 02:39:24.895 | ERROR | 1 | mycroft.audio.speech:mute_and_speak:134 | TTS execution failed (gTTSError('200 (OK) from TTS API. 
Probable cause: Unknown'))\r\n```\r\n\r\nI had to upgrade `gTTS` from `2.2.0` to `2.2.2` to fix the issue.\r\n\r\n```\r\npip install gTTS -U\r\nCollecting gTTS\r\n Downloading https://files.pythonhosted.org/packages/5f/b9/94e59337107be134b21ce395a29fc0715b707b560108d6797de2d93e1178/gTTS-2.2.2-py3-none-any.whl\r\nRequirement already satisfied, skipping upgrade: click in /opt/mycroft-venv/lib/python3.7/site-packages (from gTTS) (7.1.2)\r\nRequirement already satisfied, skipping upgrade: six in /opt/mycroft-venv/lib/python3.7/site-packages (from gTTS) (1.15.0)\r\nRequirement already satisfied, skipping upgrade: requests in /opt/mycroft-venv/lib/python3.7/site-packages (from gTTS) (2.20.0)\r\nRequirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /opt/mycroft-venv/lib/python3.7/site-packages (from requests->gTTS) (1.24.3)\r\nRequirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /opt/mycroft-venv/lib/python3.7/site-packages (from requests->gTTS) (3.0.4)\r\nRequirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /opt/mycroft-venv/lib/python3.7/site-packages (from requests->gTTS) (2020.12.5)\r\nRequirement already satisfied, skipping upgrade: idna<2.8,>=2.5 in /opt/mycroft-venv/lib/python3.7/site-packages (from requests->gTTS) (2.7)\r\nInstalling collected packages: gTTS\r\n Found existing installation: gTTS 2.2.0\r\n Uninstalling gTTS-2.2.0:\r\n Successfully uninstalled gTTS-2.2.0\r\nSuccessfully installed gTTS-2.2.2\r\n```\r\n\r\n**Environment:**\r\n - Device type: Raspberry Pi 4\r\n - OS: Raspberry OS 64-bit\r\n - Mycroft-core version: dev\r\n\r\n\n", "before_files": [{"content": "# Copyright 2017 Mycroft AI Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom gtts import gTTS\nfrom gtts.lang import tts_langs\n\nfrom .tts import TTS, TTSValidator\n\nfrom mycroft.util.log import LOG\n\n# Live list of languages\n# Cached list of supported languages (2020-05-27)\n_default_langs = {'af': 'Afrikaans', 'sq': 'Albanian', 'ar': 'Arabic',\n 'hy': 'Armenian', 'bn': 'Bengali', 'bs': 'Bosnian',\n 'ca': 'Catalan', 'hr': 'Croatian', 'cs': 'Czech',\n 'da': 'Danish', 'nl': 'Dutch', 'en': 'English',\n 'eo': 'Esperanto', 'et': 'Estonian', 'tl': 'Filipino',\n 'fi': 'Finnish', 'fr': 'French', 'de': 'German',\n 'el': 'Greek', 'gu': 'Gujarati', 'hi': 'Hindi',\n 'hu': 'Hungarian', 'is': 'Icelandic', 'id': 'Indonesian',\n 'it': 'Italian', 'ja': 'Japanese', 'jw': 'Javanese',\n 'kn': 'Kannada', 'km': 'Khmer', 'ko': 'Korean',\n 'la': 'Latin', 'lv': 'Latvian', 'mk': 'Macedonian',\n 'ml': 'Malayalam', 'mr': 'Marathi',\n 'my': 'Myanmar (Burmese)', 'ne': 'Nepali',\n 'no': 'Norwegian', 'pl': 'Polish', 'pt': 'Portuguese',\n 'ro': 'Romanian', 'ru': 'Russian', 'sr': 'Serbian',\n 'si': 'Sinhala', 'sk': 'Slovak', 'es': 'Spanish',\n 'su': 'Sundanese', 'sw': 'Swahili', 'sv': 'Swedish',\n 'ta': 'Tamil', 'te': 'Telugu', 'th': 'Thai', 'tr': 'Turkish',\n 'uk': 'Ukrainian', 'ur': 'Urdu', 'vi': 'Vietnamese',\n 'cy': 'Welsh', 'zh-cn': 'Chinese (Mandarin/China)',\n 'zh-tw': 
'Chinese (Mandarin/Taiwan)',\n 'en-us': 'English (US)', 'en-ca': 'English (Canada)',\n 'en-uk': 'English (UK)', 'en-gb': 'English (UK)',\n 'en-au': 'English (Australia)', 'en-gh': 'English (Ghana)',\n 'en-in': 'English (India)', 'en-ie': 'English (Ireland)',\n 'en-nz': 'English (New Zealand)',\n 'en-ng': 'English (Nigeria)',\n 'en-ph': 'English (Philippines)',\n 'en-za': 'English (South Africa)',\n 'en-tz': 'English (Tanzania)', 'fr-ca': 'French (Canada)',\n 'fr-fr': 'French (France)', 'pt-br': 'Portuguese (Brazil)',\n 'pt-pt': 'Portuguese (Portugal)', 'es-es': 'Spanish (Spain)',\n 'es-us': 'Spanish (United States)'\n }\n\n\n_supported_langs = None\n\n\ndef get_supported_langs():\n \"\"\"Get dict of supported languages.\n\n Tries to fetch remote list, if that fails a local cache will be used.\n\n Returns:\n (dict): Lang code to lang name map.\n \"\"\"\n global _supported_langs\n if not _supported_langs:\n try:\n _supported_langs = tts_langs()\n except Exception:\n LOG.warning('Couldn\\'t fetch upto date language codes')\n return _supported_langs or _default_langs\n\n\nclass GoogleTTS(TTS):\n \"\"\"Interface to google TTS.\"\"\"\n def __init__(self, lang, config):\n self._google_lang = None\n super(GoogleTTS, self).__init__(lang, config, GoogleTTSValidator(\n self), 'mp3')\n\n @property\n def google_lang(self):\n \"\"\"Property containing a converted language code suitable for gTTS.\"\"\"\n supported_langs = get_supported_langs()\n if not self._google_lang:\n if self.lang.lower() in supported_langs:\n self._google_lang = self.lang.lower()\n elif self.lang[:2].lower() in supported_langs:\n self._google_lang = self.lang[:2]\n return self._google_lang or self.lang.lower()\n\n def get_tts(self, sentence, wav_file):\n \"\"\"Fetch tts audio using gTTS.\n\n Arguments:\n sentence (str): Sentence to generate audio for\n wav_file (str): output file path\n Returns:\n Tuple ((str) written file, None)\n \"\"\"\n tts = gTTS(text=sentence, lang=self.google_lang)\n tts.save(wav_file)\n return (wav_file, None) # No phonemes\n\n\nclass GoogleTTSValidator(TTSValidator):\n def __init__(self, tts):\n super(GoogleTTSValidator, self).__init__(tts)\n\n def validate_lang(self):\n lang = self.tts.google_lang\n if lang.lower() not in get_supported_langs():\n raise ValueError(\"Language not supported by gTTS: {}\".format(lang))\n\n def validate_connection(self):\n try:\n gTTS(text='Hi').save(self.tts.filename)\n except Exception:\n raise Exception(\n 'GoogleTTS server could not be verified. 
Please check your '\n 'internet connection.')\n\n def get_tts_class(self):\n return GoogleTTS\n", "path": "mycroft/tts/google_tts.py"}], "after_files": [{"content": "# Copyright 2017 Mycroft AI Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom gtts import gTTS\nfrom gtts.lang import tts_langs\n\nfrom .tts import TTS, TTSValidator\n\nfrom mycroft.util.log import LOG\n\n# Live list of languages\n# Cached list of supported languages (2021-02-09)\n_default_langs = {'af': 'Afrikaans', 'sq': 'Albanian', 'ar': 'Arabic',\n 'hy': 'Armenian', 'bn': 'Bengali', 'bs': 'Bosnian',\n 'ca': 'Catalan', 'hr': 'Croatian', 'cs': 'Czech',\n 'da': 'Danish', 'nl': 'Dutch', 'en': 'English',\n 'eo': 'Esperanto', 'et': 'Estonian', 'tl': 'Filipino',\n 'fi': 'Finnish', 'fr': 'French', 'de': 'German',\n 'el': 'Greek', 'gu': 'Gujarati', 'hi': 'Hindi',\n 'hu': 'Hungarian', 'is': 'Icelandic', 'id': 'Indonesian',\n 'it': 'Italian', 'ja': 'Japanese', 'jw': 'Javanese',\n 'kn': 'Kannada', 'km': 'Khmer', 'ko': 'Korean',\n 'la': 'Latin', 'lv': 'Latvian', 'mk': 'Macedonian',\n 'ml': 'Malayalam', 'mr': 'Marathi',\n 'my': 'Myanmar (Burmese)', 'ne': 'Nepali',\n 'no': 'Norwegian', 'pl': 'Polish', 'pt': 'Portuguese',\n 'ro': 'Romanian', 'ru': 'Russian', 'sr': 'Serbian',\n 'si': 'Sinhala', 'sk': 'Slovak', 'es': 'Spanish',\n 'su': 'Sundanese', 'sw': 'Swahili', 'sv': 'Swedish',\n 'ta': 'Tamil', 'te': 'Telugu', 'th': 'Thai', 'tr': 'Turkish',\n 'uk': 'Ukrainian', 'ur': 'Urdu', 'vi': 'Vietnamese',\n 'cy': 'Welsh', 'zh': 'Chinese (Mandarin/China)'\n }\n\n\n_supported_langs = None\n\n\ndef get_supported_langs():\n \"\"\"Get dict of supported languages.\n\n Tries to fetch remote list, if that fails a local cache will be used.\n\n Returns:\n (dict): Lang code to lang name map.\n \"\"\"\n global _supported_langs\n if not _supported_langs:\n try:\n _supported_langs = tts_langs()\n except Exception:\n LOG.warning('Couldn\\'t fetch upto date language codes')\n return _supported_langs or _default_langs\n\n\nclass GoogleTTS(TTS):\n \"\"\"Interface to google TTS.\"\"\"\n def __init__(self, lang, config):\n self._google_lang = None\n super(GoogleTTS, self).__init__(lang, config, GoogleTTSValidator(\n self), 'mp3')\n\n @property\n def google_lang(self):\n \"\"\"Property containing a converted language code suitable for gTTS.\"\"\"\n supported_langs = get_supported_langs()\n if not self._google_lang:\n if self.lang.lower() in supported_langs:\n self._google_lang = self.lang.lower()\n elif self.lang[:2].lower() in supported_langs:\n self._google_lang = self.lang[:2]\n return self._google_lang or self.lang.lower()\n\n def get_tts(self, sentence, wav_file):\n \"\"\"Fetch tts audio using gTTS.\n\n Arguments:\n sentence (str): Sentence to generate audio for\n wav_file (str): output file path\n Returns:\n Tuple ((str) written file, None)\n \"\"\"\n tts = gTTS(text=sentence, lang=self.google_lang)\n tts.save(wav_file)\n return (wav_file, None) # No phonemes\n\n\nclass GoogleTTSValidator(TTSValidator):\n def __init__(self, tts):\n 
super(GoogleTTSValidator, self).__init__(tts)\n\n def validate_lang(self):\n lang = self.tts.google_lang\n if lang.lower() not in get_supported_langs():\n raise ValueError(\"Language not supported by gTTS: {}\".format(lang))\n\n def validate_connection(self):\n try:\n gTTS(text='Hi').save(self.tts.filename)\n except Exception:\n raise Exception(\n 'GoogleTTS server could not be verified. Please check your '\n 'internet connection.')\n\n def get_tts_class(self):\n return GoogleTTS\n", "path": "mycroft/tts/google_tts.py"}]}
| 2,615 | 551 |
gh_patches_debug_830 | rasdani/github-patches | git_diff | internetarchive__openlibrary-4591 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Adding to lists broken
Adding an item to a list no longer works as of 12-02-2021.
### Evidence / Screenshot (if possible)
### Relevant url?
<!-- `https://openlibrary.org/...` -->
### Steps to Reproduce
<!-- What steps caused you to find the bug? -->
1. Go to ...an edition, etc.
2. Do ...add item to list.
<!-- What actually happened after these steps? What did you expect to happen? -->
* Actual: List link loads list page.
* Expected: Item should be added to list.
### Details
- **Logged in (Y/N)?** Y
- **Browser type/version?** Chrome Version 88.0.4324.150 (Official Build) (x86_64)
- **Operating system?** Mac Big Sur
- **Environment (prod/dev/local)?** prod
<!-- If not sure, put prod -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
@cclauss
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openlibrary/core/helpers.py`
Content:
```
1 """Generic helper functions to use in the templates and the webapp.
2 """
3 import web
4 from datetime import datetime
5 import re
6
7 import six
8 from six.moves.urllib.parse import urlsplit
9
10 if six.PY2: # See #4525 json.dump(indent) MUST be an int on PY2
11 import simplejson as json
12 else:
13 import json
14
15 import babel
16 import babel.core
17 import babel.dates
18 import babel.numbers
19
20 try:
21 import genshi
22 import genshi.filters
23 except ImportError:
24 genshi = None
25
26 try:
27 from bs4 import BeautifulSoup
28 except ImportError:
29 BeautifulSoup = None
30
31 from infogami import config
32
33 # handy utility to parse ISO date strings
34 from infogami.infobase.utils import parse_datetime
35 from infogami.utils.view import safeint
36
37 # TODO: i18n should be moved to core or infogami
38 from openlibrary.i18n import gettext as _ # noqa: F401
39
40 __all__ = [
41 "sanitize",
42 "json_encode",
43 "safesort",
44 "days_since", "datestr", "format_date",
45 "sprintf", "cond", "commify", "truncate", "datetimestr_utc",
46 "urlsafe", "texsafe",
47 "percentage", "affiliate_id", "bookreader_host",
48 "private_collections", "private_collection_in",
49
50 # functions imported from elsewhere
51 "parse_datetime", "safeint"
52 ]
53 __docformat__ = "restructuredtext en"
54
55 def sanitize(html, encoding='utf8'):
56 """Removes unsafe tags and attributes from html and adds
57 ``rel="nofollow"`` attribute to all external links.
58 Using encoding=None if passing unicode strings e.g. for Python 3.
59 encoding="utf8" matches default format for earlier versions of Genshi
60 https://genshi.readthedocs.io/en/latest/upgrade/#upgrading-from-genshi-0-6-x-to-the-development-version
61 """
62
63 # Can't sanitize unless genshi module is available
64 if genshi is None:
65 return html
66
67 def get_nofollow(name, event):
68 attrs = event[1][1]
69 href = attrs.get('href', '')
70
71 if href:
72 # add rel=nofollow to all absolute links
73 _, host, _, _, _ = urlsplit(href)
74 if host:
75 return 'nofollow'
76
77 try:
78 html = genshi.HTML(html, encoding=encoding)
79
80 # except (genshi.ParseError, UnicodeDecodeError, UnicodeError) as e:
81 # don't catch Unicode errors so we can tell if we're getting bytes
82 except genshi.ParseError:
83 if BeautifulSoup:
84 # Bad html. Tidy it up using BeautifulSoup
85 html = str(BeautifulSoup(html, "lxml"))
86 try:
87 html = genshi.HTML(html)
88 except Exception:
89 # Failed to sanitize.
90 # We can't do any better than returning the original HTML, without sanitizing.
91 return html
92 else:
93 raise
94
95 stream = html \
96 | genshi.filters.HTMLSanitizer() \
97 | genshi.filters.Transformer("//a").attr("rel", get_nofollow)
98 return stream.render()
99
100
101 def json_encode(d, **kw):
102 """Same as json.dumps.
103 """
104 return json.dumps(d or {}, **kw)
105
106
107 def safesort(iterable, key=None, reverse=False):
108 """Sorts heterogeneous of objects without raising errors.
109
110 Sorting heterogeneous objects sometimes causes error. For example,
111 datetime and Nones don't go well together. This function takes special
112 care to make that work.
113 """
114 key = key or (lambda x: x)
115 def safekey(x):
116 k = key(x)
117 return (k.__class__.__name__, k)
118 return sorted(iterable, key=safekey, reverse=reverse)
119
120
121 def days_since(then, now=None):
122 delta = then - (now or datetime.now())
123 return abs(delta.days)
124
125
126 def datestr(then, now=None, lang=None, relative=True):
127 """Internationalized version of web.datestr."""
128 lang = lang or web.ctx.get('lang') or "en"
129 if relative:
130 if now is None:
131 now = datetime.now()
132 delta = then - now
133 if abs(delta.days) < 4: # Threshold from web.py
134 return babel.dates.format_timedelta(delta,
135 add_direction=True,
136 locale=_get_babel_locale(lang))
137 return format_date(then, lang=lang)
138
139
140 def datetimestr_utc(then):
141 return then.strftime("%Y-%m-%dT%H:%M:%SZ")
142
143 def format_date(date, lang=None):
144 lang = lang or web.ctx.get('lang') or "en"
145 locale = _get_babel_locale(lang)
146 return babel.dates.format_date(date, format="long", locale=locale)
147
148 def _get_babel_locale(lang):
149 try:
150 return babel.Locale(lang)
151 except babel.core.UnknownLocaleError:
152 return babel.Locale("en")
153
154
155 def sprintf(s, *a, **kw):
156 """Handy utility for string replacements.
157
158 >>> sprintf('hello %s', 'python')
159 'hello python'
160 >>> sprintf('hello %(name)s', name='python')
161 'hello python'
162 """
163 args = kw or a
164 if args:
165 return s % args
166 else:
167 return s
168
169
170 def cond(pred, true_value, false_value=""):
171 """Lisp style cond function.
172
173 Hanly to use instead of if-else expression.
174 """
175 if pred:
176 return true_value
177 else:
178 return false_value
179
180
181 def commify(number, lang=None):
182 """localized version of web.commify"""
183 try:
184 lang = lang or web.ctx.get("lang") or "en"
185 return babel.numbers.format_number(int(number), lang)
186 except:
187 return six.text_type(number)
188
189
190 def truncate(text, limit):
191 """Truncate text and add ellipses if it longer than specified limit."""
192 if not text:
193 return ''
194 if len(text) <= limit:
195 return text
196 return text[:limit] + "..."
197
198
199 def urlsafe(path):
200 """Replaces the unsafe chars from path with underscores.
201 """
202 return _get_safepath_re().sub('_', path).strip('_')[:100]
203
204 @web.memoize
205 def _get_safepath_re():
206 """Make regular expression that matches all unsafe chars."""
207 # unsafe chars according to RFC 2396
208 reserved = ";/?:@&=+$,"
209 delims = '<>#%"'
210 unwise = "{}|\\^[]`"
211 space = ' \n\r'
212
213 unsafe = reserved + delims + unwise + space
214 pattern = '[%s]+' % "".join(re.escape(c) for c in unsafe)
215 return re.compile(pattern)
216
217
218 def get_coverstore_url():
219 """Returns the base url of coverstore by looking at the config."""
220 return config.get('coverstore_url', 'https://covers.openlibrary.org').rstrip('/')
221
222
223 _texsafe_map = {
224 '"': r'\textquotedbl{}',
225 '#': r'\#',
226 '$': r'\$',
227 '%': r'\%',
228 '&': r'\&',
229 '<': r'\textless{}',
230 '>': r'\textgreater{}',
231 '\\': r'\textbackslash{}',
232 '^': r'\^{}',
233 '_': r'\_{}',
234 '{': r'\{',
235 '}': r'\}',
236 '|': r'\textbar{}',
237 '~': r'\~{}',
238 }
239
240 _texsafe_re = None
241
242 def texsafe(text):
243 """Escapes the special characters in the given text for using it in tex type setting.
244
245 Tex (or Latex) uses some characters in the ascii character range for
246 special notations. These characters must be escaped when occur in the
247 regular text. This function escapes those special characters.
248
249 The list of special characters and the latex command to typeset them can
250 be found in `The Comprehensive LaTeX Symbol List`_.
251
252 .. _The Comprehensive LaTeX Symbol List: http://www.ctan.org/tex-archive/info/symbols/comprehensive/symbols-a4.pdf
253 """
254 global _texsafe_re
255 if _texsafe_re is None:
256 pattern = "[%s]" % re.escape("".join(list(_texsafe_map)))
257 _texsafe_re = re.compile(pattern)
258
259 return _texsafe_re.sub(lambda m: _texsafe_map[m.group(0)], text)
260
261 def percentage(value, total):
262 """Computes percentage.
263
264 >>> percentage(1, 10)
265 10.0
266 >>> percentage(0, 0)
267 0.0
268 """
269 return (value * 100.0) / total if total else 0.0
270
271 def uniq(values, key=None):
272 """Returns the unique entries from the given values in the original order.
273
274 The value of the optional `key` parameter should be a function that takes
275 a single argument and returns a key to test the uniqueness.
276 """
277 key = key or (lambda x: x)
278 s = set()
279 result = []
280 for v in values:
281 k = key(v)
282 if k not in s:
283 s.add(k)
284 result.append(v)
285 return result
286
287 def affiliate_id(affiliate):
288 return config.get('affiliate_ids', {}).get(affiliate, '')
289
290 def bookreader_host():
291 return config.get('bookreader_host', '')
292
293 def private_collections():
294 """Collections which are lendable but should not be linked from OL
295 TODO: Remove when we can handle institutional books"""
296 return ['georgetown-university-law-library-rr']
297
298 def private_collection_in(collections):
299 return any(x in private_collections() for x in collections)
300
301 def _get_helpers():
302 _globals = globals()
303 return web.storage((k, _globals[k]) for k in __all__)
304
305
306 ## This must be at the end of this module
307 helpers = _get_helpers()
308
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/openlibrary/core/helpers.py b/openlibrary/core/helpers.py
--- a/openlibrary/core/helpers.py
+++ b/openlibrary/core/helpers.py
@@ -101,7 +101,7 @@
def json_encode(d, **kw):
"""Same as json.dumps.
"""
- return json.dumps(d or {}, **kw)
+ return json.dumps(d, **kw)
def safesort(iterable, key=None, reverse=False):
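
The one-line change above matters because `d or {}` silently replaces every falsy payload with an empty dict before serialization: an empty list, `0`, or `False` all reach the client as `{}`, which is consistent with the add-to-list breakage described in the issue. A minimal sketch of the behavioral difference — the sample values are illustrative, not Open Library's actual list payloads:

```python
import json

def json_encode_old(d, **kw):
    # previous behaviour: any falsy input collapses to an empty dict
    return json.dumps(d or {}, **kw)

def json_encode_new(d, **kw):
    # fixed behaviour: serialize exactly what the caller passed
    return json.dumps(d, **kw)

print(json_encode_old([]), json_encode_new([]))        # {} vs []
print(json_encode_old(0), json_encode_new(0))          # {} vs 0
print(json_encode_old(False), json_encode_new(False))  # {} vs false
print(json_encode_old(None), json_encode_new(None))    # {} vs null
```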
|
{"golden_diff": "diff --git a/openlibrary/core/helpers.py b/openlibrary/core/helpers.py\n--- a/openlibrary/core/helpers.py\n+++ b/openlibrary/core/helpers.py\n@@ -101,7 +101,7 @@\n def json_encode(d, **kw):\n \"\"\"Same as json.dumps.\n \"\"\"\n- return json.dumps(d or {}, **kw)\n+ return json.dumps(d, **kw)\n \n \n def safesort(iterable, key=None, reverse=False):\n", "issue": "Adding to lists broken\nAdding an item to a list no longer works as of 12-02-2021.\r\n\r\n### Evidence / Screenshot (if possible)\r\n\r\n### Relevant url?\r\n<!-- `https://openlibrary.org/...` -->\r\n\r\n### Steps to Reproduce\r\n<!-- What steps caused you to find the bug? -->\r\n1. Go to ...an edition, etc.\r\n2. Do ...add item to list.\r\n\r\n<!-- What actually happened after these steps? What did you expect to happen? -->\r\n* Actual: List link loads list page.\r\n* Expected: Item should be added to list.\r\n\r\n### Details\r\n\r\n- **Logged in (Y/N)?** Y\r\n- **Browser type/version?** Chrome Version 88.0.4324.150 (Official Build) (x86_64)\r\n- **Operating system?** Mac Big Sur\r\n- **Environment (prod/dev/local)?** prod\r\n<!-- If not sure, put prod -->\r\n\r\n### Proposal & Constraints\r\n<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->\r\n\r\n### Related files\r\n<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->\r\n\r\n### Stakeholders\r\n<!-- @ tag stakeholders of this bug -->\r\n@cclauss \n", "before_files": [{"content": "\"\"\"Generic helper functions to use in the templates and the webapp.\n\"\"\"\nimport web\nfrom datetime import datetime\nimport re\n\nimport six\nfrom six.moves.urllib.parse import urlsplit\n\nif six.PY2: # See #4525 json.dump(indent) MUST be an int on PY2\n import simplejson as json\nelse:\n import json\n\nimport babel\nimport babel.core\nimport babel.dates\nimport babel.numbers\n\ntry:\n import genshi\n import genshi.filters\nexcept ImportError:\n genshi = None\n\ntry:\n from bs4 import BeautifulSoup\nexcept ImportError:\n BeautifulSoup = None\n\nfrom infogami import config\n\n# handy utility to parse ISO date strings\nfrom infogami.infobase.utils import parse_datetime\nfrom infogami.utils.view import safeint\n\n# TODO: i18n should be moved to core or infogami\nfrom openlibrary.i18n import gettext as _ # noqa: F401\n\n__all__ = [\n \"sanitize\",\n \"json_encode\",\n \"safesort\",\n \"days_since\", \"datestr\", \"format_date\",\n \"sprintf\", \"cond\", \"commify\", \"truncate\", \"datetimestr_utc\",\n \"urlsafe\", \"texsafe\",\n \"percentage\", \"affiliate_id\", \"bookreader_host\",\n \"private_collections\", \"private_collection_in\",\n\n # functions imported from elsewhere\n \"parse_datetime\", \"safeint\"\n]\n__docformat__ = \"restructuredtext en\"\n\ndef sanitize(html, encoding='utf8'):\n \"\"\"Removes unsafe tags and attributes from html and adds\n ``rel=\"nofollow\"`` attribute to all external links.\n Using encoding=None if passing unicode strings e.g. 
for Python 3.\n encoding=\"utf8\" matches default format for earlier versions of Genshi\n https://genshi.readthedocs.io/en/latest/upgrade/#upgrading-from-genshi-0-6-x-to-the-development-version\n \"\"\"\n\n # Can't sanitize unless genshi module is available\n if genshi is None:\n return html\n\n def get_nofollow(name, event):\n attrs = event[1][1]\n href = attrs.get('href', '')\n\n if href:\n # add rel=nofollow to all absolute links\n _, host, _, _, _ = urlsplit(href)\n if host:\n return 'nofollow'\n\n try:\n html = genshi.HTML(html, encoding=encoding)\n\n # except (genshi.ParseError, UnicodeDecodeError, UnicodeError) as e:\n # don't catch Unicode errors so we can tell if we're getting bytes\n except genshi.ParseError:\n if BeautifulSoup:\n # Bad html. Tidy it up using BeautifulSoup\n html = str(BeautifulSoup(html, \"lxml\"))\n try:\n html = genshi.HTML(html)\n except Exception:\n # Failed to sanitize.\n # We can't do any better than returning the original HTML, without sanitizing.\n return html\n else:\n raise\n\n stream = html \\\n | genshi.filters.HTMLSanitizer() \\\n | genshi.filters.Transformer(\"//a\").attr(\"rel\", get_nofollow)\n return stream.render()\n\n\ndef json_encode(d, **kw):\n \"\"\"Same as json.dumps.\n \"\"\"\n return json.dumps(d or {}, **kw)\n\n\ndef safesort(iterable, key=None, reverse=False):\n \"\"\"Sorts heterogeneous of objects without raising errors.\n\n Sorting heterogeneous objects sometimes causes error. For example,\n datetime and Nones don't go well together. This function takes special\n care to make that work.\n \"\"\"\n key = key or (lambda x: x)\n def safekey(x):\n k = key(x)\n return (k.__class__.__name__, k)\n return sorted(iterable, key=safekey, reverse=reverse)\n\n\ndef days_since(then, now=None):\n delta = then - (now or datetime.now())\n return abs(delta.days)\n\n\ndef datestr(then, now=None, lang=None, relative=True):\n \"\"\"Internationalized version of web.datestr.\"\"\"\n lang = lang or web.ctx.get('lang') or \"en\"\n if relative:\n if now is None:\n now = datetime.now()\n delta = then - now\n if abs(delta.days) < 4: # Threshold from web.py\n return babel.dates.format_timedelta(delta,\n add_direction=True,\n locale=_get_babel_locale(lang))\n return format_date(then, lang=lang)\n\n\ndef datetimestr_utc(then):\n return then.strftime(\"%Y-%m-%dT%H:%M:%SZ\")\n\ndef format_date(date, lang=None):\n lang = lang or web.ctx.get('lang') or \"en\"\n locale = _get_babel_locale(lang)\n return babel.dates.format_date(date, format=\"long\", locale=locale)\n\ndef _get_babel_locale(lang):\n try:\n return babel.Locale(lang)\n except babel.core.UnknownLocaleError:\n return babel.Locale(\"en\")\n\n\ndef sprintf(s, *a, **kw):\n \"\"\"Handy utility for string replacements.\n\n >>> sprintf('hello %s', 'python')\n 'hello python'\n >>> sprintf('hello %(name)s', name='python')\n 'hello python'\n \"\"\"\n args = kw or a\n if args:\n return s % args\n else:\n return s\n\n\ndef cond(pred, true_value, false_value=\"\"):\n \"\"\"Lisp style cond function.\n\n Hanly to use instead of if-else expression.\n \"\"\"\n if pred:\n return true_value\n else:\n return false_value\n\n\ndef commify(number, lang=None):\n \"\"\"localized version of web.commify\"\"\"\n try:\n lang = lang or web.ctx.get(\"lang\") or \"en\"\n return babel.numbers.format_number(int(number), lang)\n except:\n return six.text_type(number)\n\n\ndef truncate(text, limit):\n \"\"\"Truncate text and add ellipses if it longer than specified limit.\"\"\"\n if not text:\n return ''\n if len(text) <= limit:\n return 
text\n return text[:limit] + \"...\"\n\n\ndef urlsafe(path):\n \"\"\"Replaces the unsafe chars from path with underscores.\n \"\"\"\n return _get_safepath_re().sub('_', path).strip('_')[:100]\n\[email protected]\ndef _get_safepath_re():\n \"\"\"Make regular expression that matches all unsafe chars.\"\"\"\n # unsafe chars according to RFC 2396\n reserved = \";/?:@&=+$,\"\n delims = '<>#%\"'\n unwise = \"{}|\\\\^[]`\"\n space = ' \\n\\r'\n\n unsafe = reserved + delims + unwise + space\n pattern = '[%s]+' % \"\".join(re.escape(c) for c in unsafe)\n return re.compile(pattern)\n\n\ndef get_coverstore_url():\n \"\"\"Returns the base url of coverstore by looking at the config.\"\"\"\n return config.get('coverstore_url', 'https://covers.openlibrary.org').rstrip('/')\n\n\n_texsafe_map = {\n '\"': r'\\textquotedbl{}',\n '#': r'\\#',\n '$': r'\\$',\n '%': r'\\%',\n '&': r'\\&',\n '<': r'\\textless{}',\n '>': r'\\textgreater{}',\n '\\\\': r'\\textbackslash{}',\n '^': r'\\^{}',\n '_': r'\\_{}',\n '{': r'\\{',\n '}': r'\\}',\n '|': r'\\textbar{}',\n '~': r'\\~{}',\n}\n\n_texsafe_re = None\n\ndef texsafe(text):\n \"\"\"Escapes the special characters in the given text for using it in tex type setting.\n\n Tex (or Latex) uses some characters in the ascii character range for\n special notations. These characters must be escaped when occur in the\n regular text. This function escapes those special characters.\n\n The list of special characters and the latex command to typeset them can\n be found in `The Comprehensive LaTeX Symbol List`_.\n\n .. _The Comprehensive LaTeX Symbol List: http://www.ctan.org/tex-archive/info/symbols/comprehensive/symbols-a4.pdf\n \"\"\"\n global _texsafe_re\n if _texsafe_re is None:\n pattern = \"[%s]\" % re.escape(\"\".join(list(_texsafe_map)))\n _texsafe_re = re.compile(pattern)\n\n return _texsafe_re.sub(lambda m: _texsafe_map[m.group(0)], text)\n\ndef percentage(value, total):\n \"\"\"Computes percentage.\n\n >>> percentage(1, 10)\n 10.0\n >>> percentage(0, 0)\n 0.0\n \"\"\"\n return (value * 100.0) / total if total else 0.0\n\ndef uniq(values, key=None):\n \"\"\"Returns the unique entries from the given values in the original order.\n\n The value of the optional `key` parameter should be a function that takes\n a single argument and returns a key to test the uniqueness.\n \"\"\"\n key = key or (lambda x: x)\n s = set()\n result = []\n for v in values:\n k = key(v)\n if k not in s:\n s.add(k)\n result.append(v)\n return result\n\ndef affiliate_id(affiliate):\n return config.get('affiliate_ids', {}).get(affiliate, '')\n\ndef bookreader_host():\n return config.get('bookreader_host', '')\n\ndef private_collections():\n \"\"\"Collections which are lendable but should not be linked from OL\n TODO: Remove when we can handle institutional books\"\"\"\n return ['georgetown-university-law-library-rr']\n\ndef private_collection_in(collections):\n return any(x in private_collections() for x in collections)\n\ndef _get_helpers():\n _globals = globals()\n return web.storage((k, _globals[k]) for k in __all__)\n\n\n## This must be at the end of this module\nhelpers = _get_helpers()\n", "path": "openlibrary/core/helpers.py"}], "after_files": [{"content": "\"\"\"Generic helper functions to use in the templates and the webapp.\n\"\"\"\nimport web\nfrom datetime import datetime\nimport re\n\nimport six\nfrom six.moves.urllib.parse import urlsplit\n\nif six.PY2: # See #4525 json.dump(indent) MUST be an int on PY2\n import simplejson as json\nelse:\n import json\n\nimport babel\nimport 
babel.core\nimport babel.dates\nimport babel.numbers\n\ntry:\n import genshi\n import genshi.filters\nexcept ImportError:\n genshi = None\n\ntry:\n from bs4 import BeautifulSoup\nexcept ImportError:\n BeautifulSoup = None\n\nfrom infogami import config\n\n# handy utility to parse ISO date strings\nfrom infogami.infobase.utils import parse_datetime\nfrom infogami.utils.view import safeint\n\n# TODO: i18n should be moved to core or infogami\nfrom openlibrary.i18n import gettext as _ # noqa: F401\n\n__all__ = [\n \"sanitize\",\n \"json_encode\",\n \"safesort\",\n \"days_since\", \"datestr\", \"format_date\",\n \"sprintf\", \"cond\", \"commify\", \"truncate\", \"datetimestr_utc\",\n \"urlsafe\", \"texsafe\",\n \"percentage\", \"affiliate_id\", \"bookreader_host\",\n \"private_collections\", \"private_collection_in\",\n\n # functions imported from elsewhere\n \"parse_datetime\", \"safeint\"\n]\n__docformat__ = \"restructuredtext en\"\n\ndef sanitize(html, encoding='utf8'):\n \"\"\"Removes unsafe tags and attributes from html and adds\n ``rel=\"nofollow\"`` attribute to all external links.\n Using encoding=None if passing unicode strings e.g. for Python 3.\n encoding=\"utf8\" matches default format for earlier versions of Genshi\n https://genshi.readthedocs.io/en/latest/upgrade/#upgrading-from-genshi-0-6-x-to-the-development-version\n \"\"\"\n\n # Can't sanitize unless genshi module is available\n if genshi is None:\n return html\n\n def get_nofollow(name, event):\n attrs = event[1][1]\n href = attrs.get('href', '')\n\n if href:\n # add rel=nofollow to all absolute links\n _, host, _, _, _ = urlsplit(href)\n if host:\n return 'nofollow'\n\n try:\n html = genshi.HTML(html, encoding=encoding)\n\n # except (genshi.ParseError, UnicodeDecodeError, UnicodeError) as e:\n # don't catch Unicode errors so we can tell if we're getting bytes\n except genshi.ParseError:\n if BeautifulSoup:\n # Bad html. Tidy it up using BeautifulSoup\n html = str(BeautifulSoup(html, \"lxml\"))\n try:\n html = genshi.HTML(html)\n except Exception:\n # Failed to sanitize.\n # We can't do any better than returning the original HTML, without sanitizing.\n return html\n else:\n raise\n\n stream = html \\\n | genshi.filters.HTMLSanitizer() \\\n | genshi.filters.Transformer(\"//a\").attr(\"rel\", get_nofollow)\n return stream.render()\n\n\ndef json_encode(d, **kw):\n \"\"\"Same as json.dumps.\n \"\"\"\n return json.dumps(d, **kw)\n\n\ndef safesort(iterable, key=None, reverse=False):\n \"\"\"Sorts heterogeneous of objects without raising errors.\n\n Sorting heterogeneous objects sometimes causes error. For example,\n datetime and Nones don't go well together. 
This function takes special\n care to make that work.\n \"\"\"\n key = key or (lambda x: x)\n def safekey(x):\n k = key(x)\n return (k.__class__.__name__, k)\n return sorted(iterable, key=safekey, reverse=reverse)\n\n\ndef days_since(then, now=None):\n delta = then - (now or datetime.now())\n return abs(delta.days)\n\n\ndef datestr(then, now=None, lang=None, relative=True):\n \"\"\"Internationalized version of web.datestr.\"\"\"\n lang = lang or web.ctx.get('lang') or \"en\"\n if relative:\n if now is None:\n now = datetime.now()\n delta = then - now\n if abs(delta.days) < 4: # Threshold from web.py\n return babel.dates.format_timedelta(delta,\n add_direction=True,\n locale=_get_babel_locale(lang))\n return format_date(then, lang=lang)\n\n\ndef datetimestr_utc(then):\n return then.strftime(\"%Y-%m-%dT%H:%M:%SZ\")\n\ndef format_date(date, lang=None):\n lang = lang or web.ctx.get('lang') or \"en\"\n locale = _get_babel_locale(lang)\n return babel.dates.format_date(date, format=\"long\", locale=locale)\n\ndef _get_babel_locale(lang):\n try:\n return babel.Locale(lang)\n except babel.core.UnknownLocaleError:\n return babel.Locale(\"en\")\n\n\ndef sprintf(s, *a, **kw):\n \"\"\"Handy utility for string replacements.\n\n >>> sprintf('hello %s', 'python')\n 'hello python'\n >>> sprintf('hello %(name)s', name='python')\n 'hello python'\n \"\"\"\n args = kw or a\n if args:\n return s % args\n else:\n return s\n\n\ndef cond(pred, true_value, false_value=\"\"):\n \"\"\"Lisp style cond function.\n\n Hanly to use instead of if-else expression.\n \"\"\"\n if pred:\n return true_value\n else:\n return false_value\n\n\ndef commify(number, lang=None):\n \"\"\"localized version of web.commify\"\"\"\n try:\n lang = lang or web.ctx.get(\"lang\") or \"en\"\n return babel.numbers.format_number(int(number), lang)\n except:\n return six.text_type(number)\n\n\ndef truncate(text, limit):\n \"\"\"Truncate text and add ellipses if it longer than specified limit.\"\"\"\n if not text:\n return ''\n if len(text) <= limit:\n return text\n return text[:limit] + \"...\"\n\n\ndef urlsafe(path):\n \"\"\"Replaces the unsafe chars from path with underscores.\n \"\"\"\n return _get_safepath_re().sub('_', path).strip('_')[:100]\n\[email protected]\ndef _get_safepath_re():\n \"\"\"Make regular expression that matches all unsafe chars.\"\"\"\n # unsafe chars according to RFC 2396\n reserved = \";/?:@&=+$,\"\n delims = '<>#%\"'\n unwise = \"{}|\\\\^[]`\"\n space = ' \\n\\r'\n\n unsafe = reserved + delims + unwise + space\n pattern = '[%s]+' % \"\".join(re.escape(c) for c in unsafe)\n return re.compile(pattern)\n\n\ndef get_coverstore_url():\n \"\"\"Returns the base url of coverstore by looking at the config.\"\"\"\n return config.get('coverstore_url', 'https://covers.openlibrary.org').rstrip('/')\n\n\n_texsafe_map = {\n '\"': r'\\textquotedbl{}',\n '#': r'\\#',\n '$': r'\\$',\n '%': r'\\%',\n '&': r'\\&',\n '<': r'\\textless{}',\n '>': r'\\textgreater{}',\n '\\\\': r'\\textbackslash{}',\n '^': r'\\^{}',\n '_': r'\\_{}',\n '{': r'\\{',\n '}': r'\\}',\n '|': r'\\textbar{}',\n '~': r'\\~{}',\n}\n\n_texsafe_re = None\n\ndef texsafe(text):\n \"\"\"Escapes the special characters in the given text for using it in tex type setting.\n\n Tex (or Latex) uses some characters in the ascii character range for\n special notations. These characters must be escaped when occur in the\n regular text. 
This function escapes those special characters.\n\n The list of special characters and the latex command to typeset them can\n be found in `The Comprehensive LaTeX Symbol List`_.\n\n .. _The Comprehensive LaTeX Symbol List: http://www.ctan.org/tex-archive/info/symbols/comprehensive/symbols-a4.pdf\n \"\"\"\n global _texsafe_re\n if _texsafe_re is None:\n pattern = \"[%s]\" % re.escape(\"\".join(list(_texsafe_map)))\n _texsafe_re = re.compile(pattern)\n\n return _texsafe_re.sub(lambda m: _texsafe_map[m.group(0)], text)\n\ndef percentage(value, total):\n \"\"\"Computes percentage.\n\n >>> percentage(1, 10)\n 10.0\n >>> percentage(0, 0)\n 0.0\n \"\"\"\n return (value * 100.0) / total if total else 0.0\n\ndef uniq(values, key=None):\n \"\"\"Returns the unique entries from the given values in the original order.\n\n The value of the optional `key` parameter should be a function that takes\n a single argument and returns a key to test the uniqueness.\n \"\"\"\n key = key or (lambda x: x)\n s = set()\n result = []\n for v in values:\n k = key(v)\n if k not in s:\n s.add(k)\n result.append(v)\n return result\n\ndef affiliate_id(affiliate):\n return config.get('affiliate_ids', {}).get(affiliate, '')\n\ndef bookreader_host():\n return config.get('bookreader_host', '')\n\ndef private_collections():\n \"\"\"Collections which are lendable but should not be linked from OL\n TODO: Remove when we can handle institutional books\"\"\"\n return ['georgetown-university-law-library-rr']\n\ndef private_collection_in(collections):\n return any(x in private_collections() for x in collections)\n\ndef _get_helpers():\n _globals = globals()\n return web.storage((k, _globals[k]) for k in __all__)\n\n\n## This must be at the end of this module\nhelpers = _get_helpers()\n", "path": "openlibrary/core/helpers.py"}]}
| 3,561 | 98 |
gh_patches_debug_8173
|
rasdani/github-patches
|
git_diff
|
google__personfinder-407
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
sms_number_to_repo should have a default value
Should be initialized here:
https://github.com/google/personfinder/blob/546f238fab407145292cc81c5e5682ad952f92f6/app/setup_pf.py#L62
Otherwise it shows an error on save of the global admin page unless the user fills it in manually.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/setup_pf.py`
Content:
```
1 # Copyright 2009-2010 by Ka-Ping Yee
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from datetime import datetime
16
17 import const
18 from model import *
19 from utils import *
20
21 def setup_datastore():
22 """Sets up the subject types and translations in a datastore. (Existing
23 subject types and messages will be updated; existing Subject or Report
24 information will not be changed or deleted.)"""
25 setup_repos()
26 setup_configs()
27
28 def wipe_datastore(delete=None, keep=None):
29 """Deletes everything in the datastore. If 'delete' is given (a list of
30 kind names), deletes only those kinds of entities. If 'keep' is given,
31 skips deleting those kinds of entities."""
32 query = db.Query(keys_only=True)
33 keys = query.fetch(1000)
34 while keys:
35 db.delete([key for key in keys
36 if delete is None or key.kind() in delete
37 if keep is None or key.kind() not in keep])
38 keys = query.with_cursor(query.cursor()).fetch(1000)
39
40 def reset_datastore():
41 """Wipes everything in the datastore except Accounts,
42 then sets up the datastore for new data."""
43 wipe_datastore(keep=['Account'])
44 setup_datastore()
45
46 def setup_repos():
47 db.put([Repo(key_name='haiti'),
48 Repo(key_name='japan'),
49 Repo(key_name='pakistan')])
50 # Set some repositories active so they show on the main page.
51 config.set_for_repo('japan', launched=True)
52 config.set_for_repo('haiti', launched=True)
53
54 def setup_configs():
55 """Installs configuration settings used for testing by server_tests."""
56 COMMON_KEYWORDS = ['person', 'people', 'finder', 'person finder',
57 'people finder', 'crisis', 'survivor', 'family']
58
59 # NOTE: the following two CAPTCHA keys are dummy keys for testing only.
60 # (https://developers.google.com/recaptcha/docs/faq)
61 # They should be replaced with real keys upon launch.
62 config.set(captcha_site_key='6LeIxAcTAAAAAJcZVRqyHh71UMIEGNQ_MXjiZKhI',
63 captcha_secret_key='6LeIxAcTAAAAAGG-vFI1TnRWxMZNFuojJ4WifJWe',
64 # A Google Translate API key with a very low quota, just for testing.
65 translate_api_key='AIzaSyCXdz9x7LDL3BvieEP8Wcze64CC_iqslSE',
66 repo_aliases={},
67 referrer_whitelist=[],
68 initialized=True,
69 notification_email=const.DEFAULT_NOTIFICATION_EMAIL,
70 unreviewed_notes_threshold=(
71 const.DEFAULT_UNREVIEWED_NOTES_THRESHOLD),
72 )
73
74 config.set_for_repo(
75 'haiti',
76 # Appended to "Google Person Finder" in page titles.
77 repo_titles={
78 'en': 'Haiti Earthquake',
79 'fr': u'S\xe9isme en Ha\xefti',
80 'ht': u'Tranbleman T\xe8 an Ayiti',
81 'es': u'Terremoto en Hait\xed'
82 },
83 # List of language codes that appear in the language menu.
84 language_menu_options=['en', 'ht', 'fr', 'es'],
85 # Content for the <meta name="keywords"> tag.
86 keywords=', '.join([
87 'haiti', 'earthquake', 'haiti earthquake', 'haitian',
88 u'ha\xefti', u's\xe9isme', 'tremblement', 'tremblement de terre',
89 'famille', 'recherche de personnes', 'terremoto'
90 ] + COMMON_KEYWORDS),
91 # If false, hide the family_name field and use only given_name.
92 use_family_name=True,
93 # Presentation order for the given name and family name.
94 family_name_first=False,
95 # If true, show extra fields for alternate names.
96 use_alternate_names=True,
97 # If false, hide the home_zip field.
98 use_postal_code=True,
99 # Require at least this many letters in each word of a text query.
100 min_query_word_length=2,
101 # Show input fields for profile URLs in create page.
102 show_profile_entry=True,
103 # Default list of profile websites to show in create page.
104 profile_websites=const.DEFAULT_PROFILE_WEBSITES,
105 # Default map viewport for the location field in the note form.
106 map_default_zoom=7,
107 map_default_center=[18.968637, -72.284546],
108 map_size_pixels=[400, 280],
109 # If true, the feeds and read API require an authorization key.
110 read_auth_key_required=False,
111 # If true, the search API requires an authorization key.
112 search_auth_key_required=False,
113 # If true, show "believed dead" option in the note status dropdown
114 allow_believed_dead_via_ui=True,
115 # Custom html messages to show on main page, results page, view page,
116 # and query form, keyed by language codes.
117 start_page_custom_htmls={'en': '', 'fr': ''},
118 results_page_custom_htmls={'en': '', 'fr': ''},
119 view_page_custom_htmls={'en': '', 'fr': ''},
120 seek_query_form_custom_htmls={'en': '', 'fr': ''},
121 time_zone_offset=0,
122 time_zone_abbreviation='UTC',
123 published_date=get_timestamp(datetime(2010, 1, 12)),
124 updated_date=get_timestamp(datetime(2010, 1, 12)),
125 )
126
127 config.set_for_repo(
128 'japan',
129 language_menu_options=['ja', 'en', 'ko', 'zh-CN', 'zh-TW', 'pt-BR', 'es'],
130 repo_titles={
131 'en': '2011 Japan Earthquake',
132 'zh-TW': u'2011 \u65e5\u672c\u5730\u9707',
133 'zh-CN': u'2011 \u65e5\u672c\u5730\u9707',
134 'pt-BR': u'2011 Terremoto no Jap\xe3o',
135 'ja': u'2011 \u65e5\u672c\u5730\u9707',
136 'es': u'2011 Terremoto en Jap\xf3n'
137 },
138 keywords=', '.join(COMMON_KEYWORDS),
139 use_family_name=True,
140 family_name_first=True,
141 use_alternate_names=True,
142 use_postal_code=True,
143 min_query_word_length=1,
144 show_profile_entry=True,
145 profile_websites=const.DEFAULT_PROFILE_WEBSITES,
146 map_default_zoom=7,
147 map_default_center=[38, 140.7],
148 map_size_pixels=[400, 400],
149 search_auth_key_required=True,
150 read_auth_key_required=True,
151 allow_believed_dead_via_ui=True,
152 start_page_custom_htmls={'en': 'Custom message', 'fr': 'French'},
153 results_page_custom_htmls={'en': 'Custom message', 'fr': 'French'},
154 view_page_custom_htmls={'en': 'Custom message', 'fr': 'French'},
155 seek_query_form_custom_htmls={'en': '', 'fr': ''},
156 # NOTE(kpy): These two configuration settings only work for locations
157 # with a single, fixed time zone offset and no Daylight Saving Time.
158 time_zone_offset=9, # UTC+9
159 time_zone_abbreviation='JST',
160 jp_mobile_carrier_redirect=True,
161 published_date=get_timestamp(datetime(2011, 3, 11)),
162 updated_date=get_timestamp(datetime(2011, 3, 11)),
163 )
164
165 config.set_for_repo(
166 'pakistan',
167 repo_titles={
168 'en': 'Pakistan Floods',
169 'ur': u'\u067e\u0627\u06a9\u0633\u062a\u0627\u0646\u06cc \u0633\u06cc\u0644\u0627\u0628'
170 },
171 language_menu_options=['en', 'ur'],
172 keywords=', '.join([
173 'pakistan', 'flood', 'pakistan flood', 'pakistani'
174 ] + COMMON_KEYWORDS),
175 use_family_name=False,
176 family_name_first=False,
177 use_alternate_names=False,
178 use_postal_code=False,
179 min_query_word_length=1,
180 map_default_zoom=6,
181 map_default_center=[33.36, 73.26], # near Rawalpindi, Pakistan
182 map_size_pixels=[400, 500],
183 read_auth_key_required=False,
184 search_auth_key_required=False,
185 allow_believed_dead_via_ui=True,
186 start_page_custom_htmls={'en': '', 'fr': ''},
187 results_page_custom_htmls={'en': '', 'fr': ''},
188 view_page_custom_htmls={'en': '', 'fr': ''},
189 seek_query_form_custom_htmls={'en': '', 'fr': ''},
190 time_zone_offset=0,
191 time_zone_abbreviation='UTC',
192 published_date=get_timestamp(datetime(2010, 8, 6)),
193 updated_date=get_timestamp(datetime(2010, 8, 6)),
194 )
195
196
197 def setup_lang_test_config():
198 config.set_for_repo(
199 'lang-test',
200 # We set short titles to avoid exceeding the field's 500-char limit.
201 repo_titles=dict((lang, lang) for lang in const.LANGUAGE_ENDONYMS),
202 language_menu_options=list(const.LANGUAGE_ENDONYMS.keys()),
203 keywords=', '.join(COMMON_KEYWORDS),
204 use_family_name=True,
205 family_name_first=True,
206 use_alternate_names=True,
207 use_postal_code=True,
208 min_query_word_length=1,
209 map_default_zoom=6,
210 map_default_center=[0 ,0],
211 map_size_pixels=[400, 500],
212 read_auth_key_required=False,
213 search_auth_key_required=False,
214 allow_believed_dead_via_ui=True,
215 start_page_custom_htmls={'en': '', 'fr': ''},
216 results_page_custom_htmls={'en': '', 'fr': ''},
217 view_page_custom_htmls={'en': '', 'fr': ''},
218 seek_query_form_custom_htmls={'en': '', 'fr': ''},
219 )
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/setup_pf.py b/app/setup_pf.py
--- a/app/setup_pf.py
+++ b/app/setup_pf.py
@@ -64,6 +64,7 @@
# A Google Translate API key with a very low quota, just for testing.
translate_api_key='AIzaSyCXdz9x7LDL3BvieEP8Wcze64CC_iqslSE',
repo_aliases={},
+ sms_number_to_repo={},
referrer_whitelist=[],
initialized=True,
notification_email=const.DEFAULT_NOTIFICATION_EMAIL,
|
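The one-line fix above seeds `sms_number_to_repo` with an empty dict, matching how `repo_aliases={}` is already initialized, so the global admin page never has to cope with a missing value. A minimal sketch of the pattern follows; it assumes a `config.get`-style accessor and uses a hypothetical lookup helper, neither of which is taken from personfinder's real admin code.

```python
# Sketch only: assumes the config module exposes a get() accessor;
# sms_repo_for() is a hypothetical helper, not a real personfinder function.
def sms_repo_for(config, number):
    # With the default seeded at setup time this is always a dict, so the
    # admin page can save without the user filling the field manually.
    mapping = config.get('sms_number_to_repo') or {}
    return mapping.get(str(number))
```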
{"golden_diff": "diff --git a/app/setup_pf.py b/app/setup_pf.py\n--- a/app/setup_pf.py\n+++ b/app/setup_pf.py\n@@ -64,6 +64,7 @@\n # A Google Translate API key with a very low quota, just for testing.\n translate_api_key='AIzaSyCXdz9x7LDL3BvieEP8Wcze64CC_iqslSE',\n repo_aliases={},\n+ sms_number_to_repo={},\n referrer_whitelist=[],\n initialized=True,\n notification_email=const.DEFAULT_NOTIFICATION_EMAIL,\n", "issue": "sms_number_to_repo should have default value\nShould be initialized here:\r\nhttps://github.com/google/personfinder/blob/546f238fab407145292cc81c5e5682ad952f92f6/app/setup_pf.py#L62\r\n\r\nOtherwise it shows error on save of global admin page unless the user fills it manually.\n", "before_files": [{"content": "# Copyright 2009-2010 by Ka-Ping Yee\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom datetime import datetime\n\nimport const\nfrom model import *\nfrom utils import *\n\ndef setup_datastore():\n \"\"\"Sets up the subject types and translations in a datastore. (Existing\n subject types and messages will be updated; existing Subject or Report\n information will not be changed or deleted.)\"\"\"\n setup_repos()\n setup_configs()\n\ndef wipe_datastore(delete=None, keep=None):\n \"\"\"Deletes everything in the datastore. If 'delete' is given (a list of\n kind names), deletes only those kinds of entities. 
If 'keep' is given,\n skips deleting those kinds of entities.\"\"\"\n query = db.Query(keys_only=True)\n keys = query.fetch(1000)\n while keys:\n db.delete([key for key in keys\n if delete is None or key.kind() in delete\n if keep is None or key.kind() not in keep])\n keys = query.with_cursor(query.cursor()).fetch(1000)\n\ndef reset_datastore():\n \"\"\"Wipes everything in the datastore except Accounts,\n then sets up the datastore for new data.\"\"\"\n wipe_datastore(keep=['Account'])\n setup_datastore()\n\ndef setup_repos():\n db.put([Repo(key_name='haiti'),\n Repo(key_name='japan'),\n Repo(key_name='pakistan')])\n # Set some repositories active so they show on the main page.\n config.set_for_repo('japan', launched=True)\n config.set_for_repo('haiti', launched=True)\n\ndef setup_configs():\n \"\"\"Installs configuration settings used for testing by server_tests.\"\"\"\n COMMON_KEYWORDS = ['person', 'people', 'finder', 'person finder',\n 'people finder', 'crisis', 'survivor', 'family']\n\n # NOTE: the following two CAPTCHA keys are dummy keys for testing only.\n # (https://developers.google.com/recaptcha/docs/faq)\n # They should be replaced with real keys upon launch.\n config.set(captcha_site_key='6LeIxAcTAAAAAJcZVRqyHh71UMIEGNQ_MXjiZKhI',\n captcha_secret_key='6LeIxAcTAAAAAGG-vFI1TnRWxMZNFuojJ4WifJWe',\n # A Google Translate API key with a very low quota, just for testing.\n translate_api_key='AIzaSyCXdz9x7LDL3BvieEP8Wcze64CC_iqslSE',\n repo_aliases={},\n referrer_whitelist=[],\n initialized=True,\n notification_email=const.DEFAULT_NOTIFICATION_EMAIL,\n unreviewed_notes_threshold=(\n const.DEFAULT_UNREVIEWED_NOTES_THRESHOLD),\n )\n\n config.set_for_repo(\n 'haiti',\n # Appended to \"Google Person Finder\" in page titles.\n repo_titles={\n 'en': 'Haiti Earthquake',\n 'fr': u'S\\xe9isme en Ha\\xefti',\n 'ht': u'Tranbleman T\\xe8 an Ayiti',\n 'es': u'Terremoto en Hait\\xed'\n },\n # List of language codes that appear in the language menu.\n language_menu_options=['en', 'ht', 'fr', 'es'],\n # Content for the <meta name=\"keywords\"> tag.\n keywords=', '.join([\n 'haiti', 'earthquake', 'haiti earthquake', 'haitian',\n u'ha\\xefti', u's\\xe9isme', 'tremblement', 'tremblement de terre',\n 'famille', 'recherche de personnes', 'terremoto'\n ] + COMMON_KEYWORDS),\n # If false, hide the family_name field and use only given_name.\n use_family_name=True,\n # Presentation order for the given name and family name.\n family_name_first=False,\n # If true, show extra fields for alternate names.\n use_alternate_names=True,\n # If false, hide the home_zip field.\n use_postal_code=True,\n # Require at least this many letters in each word of a text query.\n min_query_word_length=2,\n # Show input fields for profile URLs in create page.\n show_profile_entry=True,\n # Default list of profile websites to show in create page.\n profile_websites=const.DEFAULT_PROFILE_WEBSITES,\n # Default map viewport for the location field in the note form.\n map_default_zoom=7,\n map_default_center=[18.968637, -72.284546],\n map_size_pixels=[400, 280],\n # If true, the feeds and read API require an authorization key.\n read_auth_key_required=False,\n # If true, the search API requires an authorization key.\n search_auth_key_required=False,\n # If true, show \"believed dead\" option in the note status dropdown\n allow_believed_dead_via_ui=True,\n # Custom html messages to show on main page, results page, view page,\n # and query form, keyed by language codes.\n start_page_custom_htmls={'en': '', 'fr': ''},\n 
results_page_custom_htmls={'en': '', 'fr': ''},\n view_page_custom_htmls={'en': '', 'fr': ''},\n seek_query_form_custom_htmls={'en': '', 'fr': ''},\n time_zone_offset=0,\n time_zone_abbreviation='UTC',\n published_date=get_timestamp(datetime(2010, 1, 12)),\n updated_date=get_timestamp(datetime(2010, 1, 12)),\n )\n\n config.set_for_repo(\n 'japan',\n language_menu_options=['ja', 'en', 'ko', 'zh-CN', 'zh-TW', 'pt-BR', 'es'],\n repo_titles={\n 'en': '2011 Japan Earthquake',\n 'zh-TW': u'2011 \\u65e5\\u672c\\u5730\\u9707',\n 'zh-CN': u'2011 \\u65e5\\u672c\\u5730\\u9707',\n 'pt-BR': u'2011 Terremoto no Jap\\xe3o',\n 'ja': u'2011 \\u65e5\\u672c\\u5730\\u9707',\n 'es': u'2011 Terremoto en Jap\\xf3n'\n },\n keywords=', '.join(COMMON_KEYWORDS),\n use_family_name=True,\n family_name_first=True,\n use_alternate_names=True,\n use_postal_code=True,\n min_query_word_length=1,\n show_profile_entry=True,\n profile_websites=const.DEFAULT_PROFILE_WEBSITES,\n map_default_zoom=7,\n map_default_center=[38, 140.7],\n map_size_pixels=[400, 400],\n search_auth_key_required=True,\n read_auth_key_required=True,\n allow_believed_dead_via_ui=True,\n start_page_custom_htmls={'en': 'Custom message', 'fr': 'French'},\n results_page_custom_htmls={'en': 'Custom message', 'fr': 'French'},\n view_page_custom_htmls={'en': 'Custom message', 'fr': 'French'},\n seek_query_form_custom_htmls={'en': '', 'fr': ''},\n # NOTE(kpy): These two configuration settings only work for locations\n # with a single, fixed time zone offset and no Daylight Saving Time.\n time_zone_offset=9, # UTC+9\n time_zone_abbreviation='JST',\n jp_mobile_carrier_redirect=True,\n published_date=get_timestamp(datetime(2011, 3, 11)),\n updated_date=get_timestamp(datetime(2011, 3, 11)),\n )\n\n config.set_for_repo(\n 'pakistan',\n repo_titles={\n 'en': 'Pakistan Floods',\n 'ur': u'\\u067e\\u0627\\u06a9\\u0633\\u062a\\u0627\\u0646\\u06cc \\u0633\\u06cc\\u0644\\u0627\\u0628'\n },\n language_menu_options=['en', 'ur'],\n keywords=', '.join([\n 'pakistan', 'flood', 'pakistan flood', 'pakistani'\n ] + COMMON_KEYWORDS),\n use_family_name=False,\n family_name_first=False,\n use_alternate_names=False,\n use_postal_code=False,\n min_query_word_length=1,\n map_default_zoom=6,\n map_default_center=[33.36, 73.26], # near Rawalpindi, Pakistan\n map_size_pixels=[400, 500],\n read_auth_key_required=False,\n search_auth_key_required=False,\n allow_believed_dead_via_ui=True,\n start_page_custom_htmls={'en': '', 'fr': ''},\n results_page_custom_htmls={'en': '', 'fr': ''},\n view_page_custom_htmls={'en': '', 'fr': ''},\n seek_query_form_custom_htmls={'en': '', 'fr': ''},\n time_zone_offset=0,\n time_zone_abbreviation='UTC',\n published_date=get_timestamp(datetime(2010, 8, 6)),\n updated_date=get_timestamp(datetime(2010, 8, 6)),\n )\n\n\ndef setup_lang_test_config():\n config.set_for_repo(\n 'lang-test',\n # We set short titles to avoid exceeding the field's 500-char limit.\n repo_titles=dict((lang, lang) for lang in const.LANGUAGE_ENDONYMS),\n language_menu_options=list(const.LANGUAGE_ENDONYMS.keys()),\n keywords=', '.join(COMMON_KEYWORDS),\n use_family_name=True,\n family_name_first=True,\n use_alternate_names=True,\n use_postal_code=True,\n min_query_word_length=1,\n map_default_zoom=6,\n map_default_center=[0 ,0],\n map_size_pixels=[400, 500],\n read_auth_key_required=False,\n search_auth_key_required=False,\n allow_believed_dead_via_ui=True,\n start_page_custom_htmls={'en': '', 'fr': ''},\n results_page_custom_htmls={'en': '', 'fr': ''},\n view_page_custom_htmls={'en': '', 
'fr': ''},\n seek_query_form_custom_htmls={'en': '', 'fr': ''},\n )", "path": "app/setup_pf.py"}], "after_files": [{"content": "# Copyright 2009-2010 by Ka-Ping Yee\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom datetime import datetime\n\nimport const\nfrom model import *\nfrom utils import *\n\ndef setup_datastore():\n \"\"\"Sets up the subject types and translations in a datastore. (Existing\n subject types and messages will be updated; existing Subject or Report\n information will not be changed or deleted.)\"\"\"\n setup_repos()\n setup_configs()\n\ndef wipe_datastore(delete=None, keep=None):\n \"\"\"Deletes everything in the datastore. If 'delete' is given (a list of\n kind names), deletes only those kinds of entities. If 'keep' is given,\n skips deleting those kinds of entities.\"\"\"\n query = db.Query(keys_only=True)\n keys = query.fetch(1000)\n while keys:\n db.delete([key for key in keys\n if delete is None or key.kind() in delete\n if keep is None or key.kind() not in keep])\n keys = query.with_cursor(query.cursor()).fetch(1000)\n\ndef reset_datastore():\n \"\"\"Wipes everything in the datastore except Accounts,\n then sets up the datastore for new data.\"\"\"\n wipe_datastore(keep=['Account'])\n setup_datastore()\n\ndef setup_repos():\n db.put([Repo(key_name='haiti'),\n Repo(key_name='japan'),\n Repo(key_name='pakistan')])\n # Set some repositories active so they show on the main page.\n config.set_for_repo('japan', launched=True)\n config.set_for_repo('haiti', launched=True)\n\ndef setup_configs():\n \"\"\"Installs configuration settings used for testing by server_tests.\"\"\"\n COMMON_KEYWORDS = ['person', 'people', 'finder', 'person finder',\n 'people finder', 'crisis', 'survivor', 'family']\n\n # NOTE: the following two CAPTCHA keys are dummy keys for testing only.\n # (https://developers.google.com/recaptcha/docs/faq)\n # They should be replaced with real keys upon launch.\n config.set(captcha_site_key='6LeIxAcTAAAAAJcZVRqyHh71UMIEGNQ_MXjiZKhI',\n captcha_secret_key='6LeIxAcTAAAAAGG-vFI1TnRWxMZNFuojJ4WifJWe',\n # A Google Translate API key with a very low quota, just for testing.\n translate_api_key='AIzaSyCXdz9x7LDL3BvieEP8Wcze64CC_iqslSE',\n repo_aliases={},\n sms_number_to_repo={},\n referrer_whitelist=[],\n initialized=True,\n notification_email=const.DEFAULT_NOTIFICATION_EMAIL,\n unreviewed_notes_threshold=(\n const.DEFAULT_UNREVIEWED_NOTES_THRESHOLD),\n )\n\n config.set_for_repo(\n 'haiti',\n # Appended to \"Google Person Finder\" in page titles.\n repo_titles={\n 'en': 'Haiti Earthquake',\n 'fr': u'S\\xe9isme en Ha\\xefti',\n 'ht': u'Tranbleman T\\xe8 an Ayiti',\n 'es': u'Terremoto en Hait\\xed'\n },\n # List of language codes that appear in the language menu.\n language_menu_options=['en', 'ht', 'fr', 'es'],\n # Content for the <meta name=\"keywords\"> tag.\n keywords=', '.join([\n 'haiti', 'earthquake', 'haiti earthquake', 'haitian',\n u'ha\\xefti', u's\\xe9isme', 'tremblement', 'tremblement de terre',\n 'famille', 'recherche de personnes', 'terremoto'\n 
] + COMMON_KEYWORDS),\n # If false, hide the family_name field and use only given_name.\n use_family_name=True,\n # Presentation order for the given name and family name.\n family_name_first=False,\n # If true, show extra fields for alternate names.\n use_alternate_names=True,\n # If false, hide the home_zip field.\n use_postal_code=True,\n # Require at least this many letters in each word of a text query.\n min_query_word_length=2,\n # Show input fields for profile URLs in create page.\n show_profile_entry=True,\n # Default list of profile websites to show in create page.\n profile_websites=const.DEFAULT_PROFILE_WEBSITES,\n # Default map viewport for the location field in the note form.\n map_default_zoom=7,\n map_default_center=[18.968637, -72.284546],\n map_size_pixels=[400, 280],\n # If true, the feeds and read API require an authorization key.\n read_auth_key_required=False,\n # If true, the search API requires an authorization key.\n search_auth_key_required=False,\n # If true, show \"believed dead\" option in the note status dropdown\n allow_believed_dead_via_ui=True,\n # Custom html messages to show on main page, results page, view page,\n # and query form, keyed by language codes.\n start_page_custom_htmls={'en': '', 'fr': ''},\n results_page_custom_htmls={'en': '', 'fr': ''},\n view_page_custom_htmls={'en': '', 'fr': ''},\n seek_query_form_custom_htmls={'en': '', 'fr': ''},\n time_zone_offset=0,\n time_zone_abbreviation='UTC',\n published_date=get_timestamp(datetime(2010, 1, 12)),\n updated_date=get_timestamp(datetime(2010, 1, 12)),\n )\n\n config.set_for_repo(\n 'japan',\n language_menu_options=['ja', 'en', 'ko', 'zh-CN', 'zh-TW', 'pt-BR', 'es'],\n repo_titles={\n 'en': '2011 Japan Earthquake',\n 'zh-TW': u'2011 \\u65e5\\u672c\\u5730\\u9707',\n 'zh-CN': u'2011 \\u65e5\\u672c\\u5730\\u9707',\n 'pt-BR': u'2011 Terremoto no Jap\\xe3o',\n 'ja': u'2011 \\u65e5\\u672c\\u5730\\u9707',\n 'es': u'2011 Terremoto en Jap\\xf3n'\n },\n keywords=', '.join(COMMON_KEYWORDS),\n use_family_name=True,\n family_name_first=True,\n use_alternate_names=True,\n use_postal_code=True,\n min_query_word_length=1,\n show_profile_entry=True,\n profile_websites=const.DEFAULT_PROFILE_WEBSITES,\n map_default_zoom=7,\n map_default_center=[38, 140.7],\n map_size_pixels=[400, 400],\n search_auth_key_required=True,\n read_auth_key_required=True,\n allow_believed_dead_via_ui=True,\n start_page_custom_htmls={'en': 'Custom message', 'fr': 'French'},\n results_page_custom_htmls={'en': 'Custom message', 'fr': 'French'},\n view_page_custom_htmls={'en': 'Custom message', 'fr': 'French'},\n seek_query_form_custom_htmls={'en': '', 'fr': ''},\n # NOTE(kpy): These two configuration settings only work for locations\n # with a single, fixed time zone offset and no Daylight Saving Time.\n time_zone_offset=9, # UTC+9\n time_zone_abbreviation='JST',\n jp_mobile_carrier_redirect=True,\n published_date=get_timestamp(datetime(2011, 3, 11)),\n updated_date=get_timestamp(datetime(2011, 3, 11)),\n )\n\n config.set_for_repo(\n 'pakistan',\n repo_titles={\n 'en': 'Pakistan Floods',\n 'ur': u'\\u067e\\u0627\\u06a9\\u0633\\u062a\\u0627\\u0646\\u06cc \\u0633\\u06cc\\u0644\\u0627\\u0628'\n },\n language_menu_options=['en', 'ur'],\n keywords=', '.join([\n 'pakistan', 'flood', 'pakistan flood', 'pakistani'\n ] + COMMON_KEYWORDS),\n use_family_name=False,\n family_name_first=False,\n use_alternate_names=False,\n use_postal_code=False,\n min_query_word_length=1,\n map_default_zoom=6,\n map_default_center=[33.36, 73.26], # near Rawalpindi, 
Pakistan\n map_size_pixels=[400, 500],\n read_auth_key_required=False,\n search_auth_key_required=False,\n allow_believed_dead_via_ui=True,\n start_page_custom_htmls={'en': '', 'fr': ''},\n results_page_custom_htmls={'en': '', 'fr': ''},\n view_page_custom_htmls={'en': '', 'fr': ''},\n seek_query_form_custom_htmls={'en': '', 'fr': ''},\n time_zone_offset=0,\n time_zone_abbreviation='UTC',\n published_date=get_timestamp(datetime(2010, 8, 6)),\n updated_date=get_timestamp(datetime(2010, 8, 6)),\n )\n\n\ndef setup_lang_test_config():\n config.set_for_repo(\n 'lang-test',\n # We set short titles to avoid exceeding the field's 500-char limit.\n repo_titles=dict((lang, lang) for lang in const.LANGUAGE_ENDONYMS),\n language_menu_options=list(const.LANGUAGE_ENDONYMS.keys()),\n keywords=', '.join(COMMON_KEYWORDS),\n use_family_name=True,\n family_name_first=True,\n use_alternate_names=True,\n use_postal_code=True,\n min_query_word_length=1,\n map_default_zoom=6,\n map_default_center=[0 ,0],\n map_size_pixels=[400, 500],\n read_auth_key_required=False,\n search_auth_key_required=False,\n allow_believed_dead_via_ui=True,\n start_page_custom_htmls={'en': '', 'fr': ''},\n results_page_custom_htmls={'en': '', 'fr': ''},\n view_page_custom_htmls={'en': '', 'fr': ''},\n seek_query_form_custom_htmls={'en': '', 'fr': ''},\n )", "path": "app/setup_pf.py"}]}
| 3,318 | 122 |
gh_patches_debug_6084
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-107
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Checkov fails to start in Windows environments
**Describe the bug**
After you install Checkov on Windows, running Checkov does nothing.
**To Reproduce**
Steps to reproduce the behavior:
1. Open Powershell/cmd
2. Run the CLI command 'checkov'
3. Does nothing
**Expected behavior**
The tool running. Magic.
**Screenshots**
I'm not sure showing nothing would help.
**Desktop (please complete the following information):**
- OS: Windows 10
- Checkov Version 1.0.173
**Additional context**
I know Windows! Like who cares, and tbh I've got WSL2 and it works a dream, but customers, customers and their awful locked down... anyway.
I'm using Python37, where I've installed it.
If you look in your c:/Python37/scripts folder there is a "checkov" bash script. This is the nub of it: this doesn't run! However, if you add a batch file "checkov-scan.bat" [or call it whatever] with this content:
```cmd
C:\Python37\python C:\Python37\Lib\site-packages\checkov\main.py %1 %2
```
Then when you run "checkov-scan" at your shell, it works! So is there any way you could package up something similar in a release? Please?
Also, I made a Python-based pre-commit for checkov called checkov-scan - here: <https://github.com/JamesWoolfenden/pre-commit>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import logging
3 import os
4 from importlib import util
5 from os import path
6
7 import setuptools
8 from setuptools import setup
9
10 # read the contents of your README file
11 this_directory = path.abspath(path.dirname(__file__))
12 with open(path.join(this_directory, "README.md"), encoding="utf-8") as f:
13 long_description = f.read()
14
15 logger = logging.getLogger(__name__)
16 spec = util.spec_from_file_location(
17 "checkov.version", os.path.join("checkov", "version.py")
18 )
19 # noinspection PyUnresolvedReferences
20 mod = util.module_from_spec(spec)
21 spec.loader.exec_module(mod) # type: ignore
22 version = mod.version # type: ignore
23
24 setup(
25 extras_require={
26 "dev": [
27 "alabaster==0.7.12",
28 "attrs==19.3.0",
29 "babel==2.7.0",
30 "certifi==2019.11.28",
31 "chardet==3.0.4",
32 "coverage==4.5.4",
33 "coverage-badge==1.0.1",
34 "detect-secrets==0.13.0",
35 "docopt==0.6.2",
36 "docutils==0.15.2",
37 "idna==2.8",
38 "imagesize==1.1.0",
39 "importlib-metadata==1.1.0; python_version < '3.8'",
40 "jinja2==2.10.3",
41 "lark-parser==0.7.8",
42 "markupsafe==1.1.1",
43 "more-itertools==8.0.0",
44 "packaging==19.2",
45 "pluggy==0.13.1",
46 "py==1.8.0",
47 "pygments==2.5.2",
48 "pyparsing==2.4.5",
49 "pytest==5.3.1",
50 "python-hcl2==0.2.0",
51 "pytz==2019.3",
52 "pyyaml==5.1.2",
53 "requests==2.22.0",
54 "six==1.13.0",
55 "snowballstemmer==2.0.0",
56 "sphinx==2.2.1",
57 "sphinxcontrib-applehelp==1.0.1",
58 "sphinxcontrib-devhelp==1.0.1",
59 "sphinxcontrib-htmlhelp==1.0.2",
60 "sphinxcontrib-jsmath==1.0.1",
61 "sphinxcontrib-qthelp==1.0.2",
62 "sphinxcontrib-serializinghtml==1.1.3",
63 "urllib3==1.25.7",
64 "wcwidth==0.1.7",
65 "zipp==0.6.0",
66 ]
67 },
68 install_requires=[
69 "chardet==3.0.4",
70 "colorama==0.4.3",
71 "docopt==0.6.2",
72 "idna==2.8",
73 "junit-xml==1.8",
74 "lark-parser==0.7.8",
75 "python-hcl2==0.2.0",
76 "pyyaml==5.2",
77 "requests==2.22.0",
78 "six==1.13.0",
79 "tabulate==0.8.6",
80 "termcolor==1.1.0",
81 "urllib3==1.25.7",
82 "dpath==1.5.0"
83 ],
84 license="Apache License 2.0",
85 name="checkov",
86 version=version,
87 description="Infrastructure as code static analysis",
88 author="bridgecrew",
89 author_email="[email protected]",
90 url="https://github.com/bridgecrewio/checkov",
91 packages=setuptools.find_packages(exclude=["tests*"]),
92 scripts=["bin/checkov"],
93 long_description=long_description,
94 long_description_content_type="text/markdown",
95 classifiers=[
96 'Environment :: Console',
97 'Intended Audience :: Developers',
98 'Intended Audience :: System Administrators',
99 'Programming Language :: Python :: 3.6',
100 'Programming Language :: Python :: 3.7',
101 'Topic :: Security',
102 'Topic :: Software Development :: Build Tools'
103 ]
104 )
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -89,7 +89,7 @@
author_email="[email protected]",
url="https://github.com/bridgecrewio/checkov",
packages=setuptools.find_packages(exclude=["tests*"]),
- scripts=["bin/checkov"],
+ scripts=["bin/checkov","bin/checkov.bat"],
long_description=long_description,
long_description_content_type="text/markdown",
classifiers=[
|
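The merged change ships a hand-written `bin/checkov.bat` next to the existing bash wrapper so Windows shells have something they can execute. A commonly used alternative, sketched below rather than taken from checkov itself, is a setuptools `console_scripts` entry point, which pip turns into a native `checkov.exe` launcher on Windows; the sketch assumes `checkov/main.py` exposes a `main()` callable, which is an assumption made purely for illustration.

```python
# Alternative packaging sketch, not the fix that was merged. Assumes a
# main() function in checkov/main.py (hypothetical here).
from setuptools import setup, find_packages

setup(
    name="checkov",
    packages=find_packages(exclude=["tests*"]),
    entry_points={
        "console_scripts": [
            # pip generates a checkov.exe launcher on Windows for this
            # entry point, so no hand-written .bat wrapper is needed.
            "checkov = checkov.main:main",
        ],
    },
)
```

Either route removes the need for users to create their own checkov-scan.bat workaround.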
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -89,7 +89,7 @@\n author_email=\"[email protected]\",\n url=\"https://github.com/bridgecrewio/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\"]),\n- scripts=[\"bin/checkov\"],\n+ scripts=[\"bin/checkov\",\"bin/checkov.bat\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n", "issue": "Checkov fails to start in Windows environments \n**Describe the bug**\r\nAfter you install Checkov on Windows, running Checkov does nothing.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Open Powershell/cmd\r\n2. Run cli command 'checkov'\r\n3. Does nothing\r\n\r\n**Expected behavior**\r\nThe tool running. Magic.\r\n\r\n**Screenshots**\r\nI'm not sure showing nothing would help.\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Windows 10\r\n - Checkov Version 1.0.173\r\n\r\n**Additional context**\r\nI know Windows! Like who cares and tbh ive got WSL2 and it works a dream but customers, customers and their awful locked down... anyway.\r\nI'm using Python37 where i've installed .\r\nIf you look in your c:/Python37/scripts folder there is a \"checkov\" bash script. This is the nub of it this doesn't run! However if you add a batch file \"checkov-scan.bat\" [or call whatever} with this content:\r\n```cmd\r\nC:\\Python37\\python C:\\Python37\\Lib\\site-packages\\checkov\\main.py %1 %2\r\n```\r\nThen when you run \"checkov-scan\" at your shell, it works! So is there anyway you could package up something similar in a release? please? \r\nAlso I made a python based pre-commit for checkov called checkov-scan - here <https://github.com/JamesWoolfenden/pre-commit>\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport logging\nimport os\nfrom importlib import util\nfrom os import path\n\nimport setuptools\nfrom setuptools import setup\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: ignore\n\nsetup(\n extras_require={\n \"dev\": [\n \"alabaster==0.7.12\",\n \"attrs==19.3.0\",\n \"babel==2.7.0\",\n \"certifi==2019.11.28\",\n \"chardet==3.0.4\",\n \"coverage==4.5.4\",\n \"coverage-badge==1.0.1\",\n \"detect-secrets==0.13.0\",\n \"docopt==0.6.2\",\n \"docutils==0.15.2\",\n \"idna==2.8\",\n \"imagesize==1.1.0\",\n \"importlib-metadata==1.1.0; python_version < '3.8'\",\n \"jinja2==2.10.3\",\n \"lark-parser==0.7.8\",\n \"markupsafe==1.1.1\",\n \"more-itertools==8.0.0\",\n \"packaging==19.2\",\n \"pluggy==0.13.1\",\n \"py==1.8.0\",\n \"pygments==2.5.2\",\n \"pyparsing==2.4.5\",\n \"pytest==5.3.1\",\n \"python-hcl2==0.2.0\",\n \"pytz==2019.3\",\n \"pyyaml==5.1.2\",\n \"requests==2.22.0\",\n \"six==1.13.0\",\n \"snowballstemmer==2.0.0\",\n \"sphinx==2.2.1\",\n \"sphinxcontrib-applehelp==1.0.1\",\n \"sphinxcontrib-devhelp==1.0.1\",\n \"sphinxcontrib-htmlhelp==1.0.2\",\n \"sphinxcontrib-jsmath==1.0.1\",\n \"sphinxcontrib-qthelp==1.0.2\",\n \"sphinxcontrib-serializinghtml==1.1.3\",\n \"urllib3==1.25.7\",\n \"wcwidth==0.1.7\",\n \"zipp==0.6.0\",\n ]\n },\n 
install_requires=[\n \"chardet==3.0.4\",\n \"colorama==0.4.3\",\n \"docopt==0.6.2\",\n \"idna==2.8\",\n \"junit-xml==1.8\",\n \"lark-parser==0.7.8\",\n \"python-hcl2==0.2.0\",\n \"pyyaml==5.2\",\n \"requests==2.22.0\",\n \"six==1.13.0\",\n \"tabulate==0.8.6\",\n \"termcolor==1.1.0\",\n \"urllib3==1.25.7\",\n \"dpath==1.5.0\"\n ],\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/bridgecrewio/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\"]),\n scripts=[\"bin/checkov\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Security',\n 'Topic :: Software Development :: Build Tools'\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport logging\nimport os\nfrom importlib import util\nfrom os import path\n\nimport setuptools\nfrom setuptools import setup\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: ignore\n\nsetup(\n extras_require={\n \"dev\": [\n \"alabaster==0.7.12\",\n \"attrs==19.3.0\",\n \"babel==2.7.0\",\n \"certifi==2019.11.28\",\n \"chardet==3.0.4\",\n \"coverage==4.5.4\",\n \"coverage-badge==1.0.1\",\n \"detect-secrets==0.13.0\",\n \"docopt==0.6.2\",\n \"docutils==0.15.2\",\n \"idna==2.8\",\n \"imagesize==1.1.0\",\n \"importlib-metadata==1.1.0; python_version < '3.8'\",\n \"jinja2==2.10.3\",\n \"lark-parser==0.7.8\",\n \"markupsafe==1.1.1\",\n \"more-itertools==8.0.0\",\n \"packaging==19.2\",\n \"pluggy==0.13.1\",\n \"py==1.8.0\",\n \"pygments==2.5.2\",\n \"pyparsing==2.4.5\",\n \"pytest==5.3.1\",\n \"python-hcl2==0.2.0\",\n \"pytz==2019.3\",\n \"pyyaml==5.1.2\",\n \"requests==2.22.0\",\n \"six==1.13.0\",\n \"snowballstemmer==2.0.0\",\n \"sphinx==2.2.1\",\n \"sphinxcontrib-applehelp==1.0.1\",\n \"sphinxcontrib-devhelp==1.0.1\",\n \"sphinxcontrib-htmlhelp==1.0.2\",\n \"sphinxcontrib-jsmath==1.0.1\",\n \"sphinxcontrib-qthelp==1.0.2\",\n \"sphinxcontrib-serializinghtml==1.1.3\",\n \"urllib3==1.25.7\",\n \"wcwidth==0.1.7\",\n \"zipp==0.6.0\",\n ]\n },\n install_requires=[\n \"chardet==3.0.4\",\n \"colorama==0.4.3\",\n \"docopt==0.6.2\",\n \"idna==2.8\",\n \"junit-xml==1.8\",\n \"lark-parser==0.7.8\",\n \"python-hcl2==0.2.0\",\n \"pyyaml==5.2\",\n \"requests==2.22.0\",\n \"six==1.13.0\",\n \"tabulate==0.8.6\",\n \"termcolor==1.1.0\",\n \"urllib3==1.25.7\",\n \"dpath==1.5.0\"\n ],\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/bridgecrewio/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\"]),\n scripts=[\"bin/checkov\",\"bin/checkov.bat\"],\n 
long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Security',\n 'Topic :: Software Development :: Build Tools'\n ]\n)\n", "path": "setup.py"}]}
| 1,756 | 109 |
gh_patches_debug_37157
|
rasdani/github-patches
|
git_diff
|
MycroftAI__mycroft-core-230
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Listener waits too long and too often for more sound
Often, when you say a short query in which the listener hasn't detected enough total noise, it sits there waiting for more when in fact you are already done speaking. To resolve this we should decrease the minimum seconds of noise and put a limit on how long it will wait (perhaps 3-4 seconds).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mycroft/client/speech/mic.py`
Content:
```
1 # Copyright 2016 Mycroft AI, Inc.
2 #
3 # This file is part of Mycroft Core.
4 #
5 # Mycroft Core is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Mycroft Core is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Mycroft Core. If not, see <http://www.gnu.org/licenses/>.
17
18
19 import collections
20 import audioop
21 from time import sleep
22
23 import pyaudio
24 from speech_recognition import (
25 Microphone,
26 AudioSource,
27 WaitTimeoutError,
28 AudioData
29 )
30 import speech_recognition
31 from mycroft.util.log import getLogger
32
33 logger = getLogger(__name__)
34 __author__ = 'seanfitz'
35
36
37 class MutableStream(object):
38 def __init__(self, wrapped_stream, format, muted=False):
39 assert wrapped_stream is not None
40 self.wrapped_stream = wrapped_stream
41 self.muted = muted
42 self.SAMPLE_WIDTH = pyaudio.get_sample_size(format)
43 self.muted_buffer = b''.join([b'\x00' * self.SAMPLE_WIDTH])
44
45 def mute(self):
46 self.muted = True
47
48 def unmute(self):
49 self.muted = False
50
51 def read(self, size):
52 frames = collections.deque()
53 remaining = size
54 while remaining > 0:
55 to_read = min(self.wrapped_stream.get_read_available(), remaining)
56 if to_read == 0:
57 sleep(.01)
58 continue
59 result = self.wrapped_stream.read(to_read)
60 frames.append(result)
61 remaining -= to_read
62
63 if self.muted:
64 return self.muted_buffer
65 input_latency = self.wrapped_stream.get_input_latency()
66 if input_latency > 0.2:
67 logger.warn("High input latency: %f" % input_latency)
68 audio = b"".join(list(frames))
69 return audio
70
71 def close(self):
72 self.wrapped_stream.close()
73 self.wrapped_stream = None
74
75 def is_stopped(self):
76 return self.wrapped_stream.is_stopped()
77
78 def stop_stream(self):
79 return self.wrapped_stream.stop_stream()
80
81
82 class MutableMicrophone(Microphone):
83 def __init__(self, device_index=None, sample_rate=16000, chunk_size=1024):
84 Microphone.__init__(
85 self, device_index=device_index, sample_rate=sample_rate,
86 chunk_size=chunk_size)
87 self.muted = False
88
89 def __enter__(self):
90 assert self.stream is None, \
91 "This audio source is already inside a context manager"
92 self.audio = pyaudio.PyAudio()
93 self.stream = MutableStream(self.audio.open(
94 input_device_index=self.device_index, channels=1,
95 format=self.format, rate=self.SAMPLE_RATE,
96 frames_per_buffer=self.CHUNK,
97 input=True, # stream is an input stream
98 ), self.format, self.muted)
99 return self
100
101 def __exit__(self, exc_type, exc_value, traceback):
102 if not self.stream.is_stopped():
103 self.stream.stop_stream()
104 self.stream.close()
105 self.stream = None
106 self.audio.terminate()
107
108 def mute(self):
109 self.muted = True
110 if self.stream:
111 self.stream.mute()
112
113 def unmute(self):
114 self.muted = False
115 if self.stream:
116 self.stream.unmute()
117
118
119 class ResponsiveRecognizer(speech_recognition.Recognizer):
120 # The maximum audio in seconds to keep for transcribing a phrase
121 # The wake word must fit in this time
122 SAVED_WW_SEC = 1.0
123
124 # Padding of silence when feeding to pocketsphinx
125 SILENCE_SEC = 0.01
126
127 # The minimum seconds of noise before a
128 # phrase can be considered complete
129 MIN_LOUD_SEC_PER_PHRASE = 0.2
130
131 # The maximum length a phrase can be recorded,
132 # provided there is noise the entire time
133 RECORDING_TIMEOUT = 30.0
134
135 # Time between pocketsphinx checks for the wake word
136 SEC_BETWEEN_WW_CHECKS = 0.2
137
138 def __init__(self, wake_word_recognizer):
139 speech_recognition.Recognizer.__init__(self)
140 self.wake_word_recognizer = wake_word_recognizer
141 self.audio = pyaudio.PyAudio()
142
143 @staticmethod
144 def record_sound_chunk(source):
145 return source.stream.read(source.CHUNK)
146
147 @staticmethod
148 def calc_energy(sound_chunk, sample_width):
149 return audioop.rms(sound_chunk, sample_width)
150
151 def wake_word_in_audio(self, frame_data):
152 hyp = self.wake_word_recognizer.transcribe(frame_data)
153 return self.wake_word_recognizer.found_wake_word(hyp)
154
155 def record_phrase(self, source, sec_per_buffer):
156 """
157 This attempts to record an entire spoken phrase. Essentially,
158 this waits for a period of silence and then returns the audio
159
160 :rtype: bytearray
161 :param source: AudioSource
162 :param sec_per_buffer: Based on source.SAMPLE_RATE
163 :return: bytearray representing the frame_data of the recorded phrase
164 """
165 num_loud_chunks = 0
166 noise = 0
167
168 max_noise = 20
169 min_noise = 0
170
171 def increase_noise(level):
172 if level < max_noise:
173 return level + 2
174 return level
175
176 def decrease_noise(level):
177 if level > min_noise:
178 return level - 1
179 return level
180
181 # Smallest number of loud chunks required to return
182 min_loud_chunks = int(self.MIN_LOUD_SEC_PER_PHRASE / sec_per_buffer)
183
184 # Maximum number of chunks to record before timing out
185 max_chunks = int(self.RECORDING_TIMEOUT / sec_per_buffer)
186 num_chunks = 0
187
188 # bytearray to store audio in
189 byte_data = '\0' * source.SAMPLE_WIDTH
190
191 phrase_complete = False
192 while num_chunks < max_chunks and not phrase_complete:
193 chunk = self.record_sound_chunk(source)
194 byte_data += chunk
195 num_chunks += 1
196
197 energy = self.calc_energy(chunk, source.SAMPLE_WIDTH)
198 is_loud = energy > self.energy_threshold
199 if is_loud:
200 noise = increase_noise(noise)
201 num_loud_chunks += 1
202 else:
203 noise = decrease_noise(noise)
204 self.adjust_threshold(energy, sec_per_buffer)
205
206 if noise <= min_noise and num_loud_chunks > min_loud_chunks:
207 phrase_complete = True
208
209 return byte_data
210
211 @staticmethod
212 def sec_to_bytes(sec, source):
213 return sec * source.SAMPLE_RATE * source.SAMPLE_WIDTH
214
215 def wait_until_wake_word(self, source, sec_per_buffer):
216 num_silent_bytes = int(self.SILENCE_SEC * source.SAMPLE_RATE *
217 source.SAMPLE_WIDTH)
218
219 silence = '\0' * num_silent_bytes
220
221 # bytearray to store audio in
222 byte_data = silence
223
224 buffers_per_check = self.SEC_BETWEEN_WW_CHECKS / sec_per_buffer
225 buffers_since_check = 0.0
226
227 # Max bytes for byte_data before audio is removed from the front
228 max_size = self.sec_to_bytes(self.SAVED_WW_SEC, source)
229
230 said_wake_word = False
231 while not said_wake_word:
232 chunk = self.record_sound_chunk(source)
233
234 energy = self.calc_energy(chunk, source.SAMPLE_WIDTH)
235 if energy < self.energy_threshold:
236 self.adjust_threshold(energy, sec_per_buffer)
237
238 needs_to_grow = len(byte_data) < max_size
239 if needs_to_grow:
240 byte_data += chunk
241 else: # Remove beginning of audio and add new chunk to end
242 byte_data = byte_data[len(chunk):] + chunk
243
244 buffers_since_check += 1.0
245 if buffers_since_check < buffers_per_check:
246 buffers_since_check -= buffers_per_check
247 said_wake_word = self.wake_word_in_audio(byte_data + silence)
248
249 @staticmethod
250 def create_audio_data(raw_data, source):
251 """
252 Constructs an AudioData instance with the same parameters
253 as the source and the specified frame_data
254 """
255 return AudioData(raw_data, source.SAMPLE_RATE, source.SAMPLE_WIDTH)
256
257 def listen(self, source, emitter):
258 """
259 Listens for audio that Mycroft should respond to
260
261 :param source: an ``AudioSource`` instance for reading from
262 :param emitter: a pyee EventEmitter for sending when the wakeword
263 has been found
264 """
265 assert isinstance(source, AudioSource), "Source must be an AudioSource"
266
267 bytes_per_sec = source.SAMPLE_RATE * source.SAMPLE_WIDTH
268 sec_per_buffer = float(source.CHUNK) / bytes_per_sec
269
270 logger.debug("Waiting for wake word...")
271 self.wait_until_wake_word(source, sec_per_buffer)
272
273 logger.debug("Recording...")
274 emitter.emit("recognizer_loop:record_begin")
275 frame_data = self.record_phrase(source, sec_per_buffer)
276 audio_data = self.create_audio_data(frame_data, source)
277 emitter.emit("recognizer_loop:record_end")
278 logger.debug("Thinking...")
279
280 return audio_data
281
282 def adjust_threshold(self, energy, seconds_per_buffer):
283 if self.dynamic_energy_threshold and energy > 0:
284 # account for different chunk sizes and rates
285 damping = (
286 self.dynamic_energy_adjustment_damping ** seconds_per_buffer)
287 target_energy = energy * self.dynamic_energy_ratio
288 self.energy_threshold = (
289 self.energy_threshold * damping +
290 target_energy * (1 - damping))
291
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mycroft/client/speech/mic.py b/mycroft/client/speech/mic.py
--- a/mycroft/client/speech/mic.py
+++ b/mycroft/client/speech/mic.py
@@ -126,12 +126,16 @@
# The minimum seconds of noise before a
# phrase can be considered complete
- MIN_LOUD_SEC_PER_PHRASE = 0.2
+ MIN_LOUD_SEC_PER_PHRASE = 0.1
# The maximum length a phrase can be recorded,
# provided there is noise the entire time
RECORDING_TIMEOUT = 30.0
+ # The maximum time it will continue to record silence
+ # when not enough noise has been detected
+ RECORDING_TIMEOUT_WITH_SILENCE = 3.0
+
# Time between pocketsphinx checks for the wake word
SEC_BETWEEN_WW_CHECKS = 0.2
@@ -167,17 +171,17 @@
num_loud_chunks = 0
noise = 0
- max_noise = 20
+ max_noise = 25
min_noise = 0
def increase_noise(level):
if level < max_noise:
- return level + 2
+ return level + 200 * sec_per_buffer
return level
def decrease_noise(level):
if level > min_noise:
- return level - 1
+ return level - 100 * sec_per_buffer
return level
# Smallest number of loud chunks required to return
@@ -187,6 +191,10 @@
max_chunks = int(self.RECORDING_TIMEOUT / sec_per_buffer)
num_chunks = 0
+ # Will return if exceeded this even if there's not enough loud chunks
+ max_chunks_of_silence = int(self.RECORDING_TIMEOUT_WITH_SILENCE /
+ sec_per_buffer)
+
# bytearray to store audio in
byte_data = '\0' * source.SAMPLE_WIDTH
@@ -205,7 +213,10 @@
noise = decrease_noise(noise)
self.adjust_threshold(energy, sec_per_buffer)
- if noise <= min_noise and num_loud_chunks > min_loud_chunks:
+ was_loud_enough = num_loud_chunks > min_loud_chunks
+ quiet_enough = noise <= min_noise
+ recorded_too_much_silence = num_chunks > max_chunks_of_silence
+ if quiet_enough and (was_loud_enough or recorded_too_much_silence):
phrase_complete = True
return byte_data
|
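The fix above boils down to a new stopping rule for `record_phrase`: the noise counters now scale with `sec_per_buffer`, and a pure-silence timeout can end the recording even when the minimum amount of loud audio was never heard. The standalone sketch below mirrors that rule with illustrative numbers (the 0.032 s buffer length assumes 1024-byte chunks at 16 kHz, 16-bit mono); it is a sketch for clarity, not the Mycroft module itself.
```python
# Standalone sketch of the phrase-completion rule from the diff above.
# Constants mirror the patched defaults; the 0.032 s buffer length is an
# assumed example value (1024-byte chunks at 16 kHz, 16-bit mono).
MIN_LOUD_SEC_PER_PHRASE = 0.1
RECORDING_TIMEOUT_WITH_SILENCE = 3.0
MIN_NOISE = 0


def phrase_complete(noise, num_loud_chunks, num_chunks, sec_per_buffer):
    min_loud_chunks = int(MIN_LOUD_SEC_PER_PHRASE / sec_per_buffer)
    max_chunks_of_silence = int(RECORDING_TIMEOUT_WITH_SILENCE / sec_per_buffer)
    was_loud_enough = num_loud_chunks > min_loud_chunks
    quiet_enough = noise <= MIN_NOISE
    recorded_too_much_silence = num_chunks > max_chunks_of_silence
    return quiet_enough and (was_loud_enough or recorded_too_much_silence)


# A silent 4-second stretch (125 chunks of 0.032 s) now ends the recording
# even though the minimum amount of loud audio was never reached.
print(phrase_complete(noise=0, num_loud_chunks=0, num_chunks=125,
                      sec_per_buffer=0.032))  # True
```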
{"golden_diff": "diff --git a/mycroft/client/speech/mic.py b/mycroft/client/speech/mic.py\n--- a/mycroft/client/speech/mic.py\n+++ b/mycroft/client/speech/mic.py\n@@ -126,12 +126,16 @@\n \n # The minimum seconds of noise before a\n # phrase can be considered complete\n- MIN_LOUD_SEC_PER_PHRASE = 0.2\n+ MIN_LOUD_SEC_PER_PHRASE = 0.1\n \n # The maximum length a phrase can be recorded,\n # provided there is noise the entire time\n RECORDING_TIMEOUT = 30.0\n \n+ # The maximum time it will continue to record silence\n+ # when not enough noise has been detected\n+ RECORDING_TIMEOUT_WITH_SILENCE = 3.0\n+\n # Time between pocketsphinx checks for the wake word\n SEC_BETWEEN_WW_CHECKS = 0.2\n \n@@ -167,17 +171,17 @@\n num_loud_chunks = 0\n noise = 0\n \n- max_noise = 20\n+ max_noise = 25\n min_noise = 0\n \n def increase_noise(level):\n if level < max_noise:\n- return level + 2\n+ return level + 200 * sec_per_buffer\n return level\n \n def decrease_noise(level):\n if level > min_noise:\n- return level - 1\n+ return level - 100 * sec_per_buffer\n return level\n \n # Smallest number of loud chunks required to return\n@@ -187,6 +191,10 @@\n max_chunks = int(self.RECORDING_TIMEOUT / sec_per_buffer)\n num_chunks = 0\n \n+ # Will return if exceeded this even if there's not enough loud chunks\n+ max_chunks_of_silence = int(self.RECORDING_TIMEOUT_WITH_SILENCE /\n+ sec_per_buffer)\n+\n # bytearray to store audio in\n byte_data = '\\0' * source.SAMPLE_WIDTH\n \n@@ -205,7 +213,10 @@\n noise = decrease_noise(noise)\n self.adjust_threshold(energy, sec_per_buffer)\n \n- if noise <= min_noise and num_loud_chunks > min_loud_chunks:\n+ was_loud_enough = num_loud_chunks > min_loud_chunks\n+ quiet_enough = noise <= min_noise\n+ recorded_too_much_silence = num_chunks > max_chunks_of_silence\n+ if quiet_enough and (was_loud_enough or recorded_too_much_silence):\n phrase_complete = True\n \n return byte_data\n", "issue": "Listener waits too long and too often for more sound\nOften when saying short queries in which the listener hasn't detected enough total noise it sits there waiting for more when in fact you are actually done speaking. To resolve this we should decrease the minimum seconds of noise and put a limit to how long it will wait (perhaps 3-4 seconds).\n\n", "before_files": [{"content": "# Copyright 2016 Mycroft AI, Inc.\n#\n# This file is part of Mycroft Core.\n#\n# Mycroft Core is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Mycroft Core is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Mycroft Core. 
If not, see <http://www.gnu.org/licenses/>.\n\n\nimport collections\nimport audioop\nfrom time import sleep\n\nimport pyaudio\nfrom speech_recognition import (\n Microphone,\n AudioSource,\n WaitTimeoutError,\n AudioData\n)\nimport speech_recognition\nfrom mycroft.util.log import getLogger\n\nlogger = getLogger(__name__)\n__author__ = 'seanfitz'\n\n\nclass MutableStream(object):\n def __init__(self, wrapped_stream, format, muted=False):\n assert wrapped_stream is not None\n self.wrapped_stream = wrapped_stream\n self.muted = muted\n self.SAMPLE_WIDTH = pyaudio.get_sample_size(format)\n self.muted_buffer = b''.join([b'\\x00' * self.SAMPLE_WIDTH])\n\n def mute(self):\n self.muted = True\n\n def unmute(self):\n self.muted = False\n\n def read(self, size):\n frames = collections.deque()\n remaining = size\n while remaining > 0:\n to_read = min(self.wrapped_stream.get_read_available(), remaining)\n if to_read == 0:\n sleep(.01)\n continue\n result = self.wrapped_stream.read(to_read)\n frames.append(result)\n remaining -= to_read\n\n if self.muted:\n return self.muted_buffer\n input_latency = self.wrapped_stream.get_input_latency()\n if input_latency > 0.2:\n logger.warn(\"High input latency: %f\" % input_latency)\n audio = b\"\".join(list(frames))\n return audio\n\n def close(self):\n self.wrapped_stream.close()\n self.wrapped_stream = None\n\n def is_stopped(self):\n return self.wrapped_stream.is_stopped()\n\n def stop_stream(self):\n return self.wrapped_stream.stop_stream()\n\n\nclass MutableMicrophone(Microphone):\n def __init__(self, device_index=None, sample_rate=16000, chunk_size=1024):\n Microphone.__init__(\n self, device_index=device_index, sample_rate=sample_rate,\n chunk_size=chunk_size)\n self.muted = False\n\n def __enter__(self):\n assert self.stream is None, \\\n \"This audio source is already inside a context manager\"\n self.audio = pyaudio.PyAudio()\n self.stream = MutableStream(self.audio.open(\n input_device_index=self.device_index, channels=1,\n format=self.format, rate=self.SAMPLE_RATE,\n frames_per_buffer=self.CHUNK,\n input=True, # stream is an input stream\n ), self.format, self.muted)\n return self\n\n def __exit__(self, exc_type, exc_value, traceback):\n if not self.stream.is_stopped():\n self.stream.stop_stream()\n self.stream.close()\n self.stream = None\n self.audio.terminate()\n\n def mute(self):\n self.muted = True\n if self.stream:\n self.stream.mute()\n\n def unmute(self):\n self.muted = False\n if self.stream:\n self.stream.unmute()\n\n\nclass ResponsiveRecognizer(speech_recognition.Recognizer):\n # The maximum audio in seconds to keep for transcribing a phrase\n # The wake word must fit in this time\n SAVED_WW_SEC = 1.0\n\n # Padding of silence when feeding to pocketsphinx\n SILENCE_SEC = 0.01\n\n # The minimum seconds of noise before a\n # phrase can be considered complete\n MIN_LOUD_SEC_PER_PHRASE = 0.2\n\n # The maximum length a phrase can be recorded,\n # provided there is noise the entire time\n RECORDING_TIMEOUT = 30.0\n\n # Time between pocketsphinx checks for the wake word\n SEC_BETWEEN_WW_CHECKS = 0.2\n\n def __init__(self, wake_word_recognizer):\n speech_recognition.Recognizer.__init__(self)\n self.wake_word_recognizer = wake_word_recognizer\n self.audio = pyaudio.PyAudio()\n\n @staticmethod\n def record_sound_chunk(source):\n return source.stream.read(source.CHUNK)\n\n @staticmethod\n def calc_energy(sound_chunk, sample_width):\n return audioop.rms(sound_chunk, sample_width)\n\n def wake_word_in_audio(self, frame_data):\n hyp = 
self.wake_word_recognizer.transcribe(frame_data)\n return self.wake_word_recognizer.found_wake_word(hyp)\n\n def record_phrase(self, source, sec_per_buffer):\n \"\"\"\n This attempts to record an entire spoken phrase. Essentially,\n this waits for a period of silence and then returns the audio\n\n :rtype: bytearray\n :param source: AudioSource\n :param sec_per_buffer: Based on source.SAMPLE_RATE\n :return: bytearray representing the frame_data of the recorded phrase\n \"\"\"\n num_loud_chunks = 0\n noise = 0\n\n max_noise = 20\n min_noise = 0\n\n def increase_noise(level):\n if level < max_noise:\n return level + 2\n return level\n\n def decrease_noise(level):\n if level > min_noise:\n return level - 1\n return level\n\n # Smallest number of loud chunks required to return\n min_loud_chunks = int(self.MIN_LOUD_SEC_PER_PHRASE / sec_per_buffer)\n\n # Maximum number of chunks to record before timing out\n max_chunks = int(self.RECORDING_TIMEOUT / sec_per_buffer)\n num_chunks = 0\n\n # bytearray to store audio in\n byte_data = '\\0' * source.SAMPLE_WIDTH\n\n phrase_complete = False\n while num_chunks < max_chunks and not phrase_complete:\n chunk = self.record_sound_chunk(source)\n byte_data += chunk\n num_chunks += 1\n\n energy = self.calc_energy(chunk, source.SAMPLE_WIDTH)\n is_loud = energy > self.energy_threshold\n if is_loud:\n noise = increase_noise(noise)\n num_loud_chunks += 1\n else:\n noise = decrease_noise(noise)\n self.adjust_threshold(energy, sec_per_buffer)\n\n if noise <= min_noise and num_loud_chunks > min_loud_chunks:\n phrase_complete = True\n\n return byte_data\n\n @staticmethod\n def sec_to_bytes(sec, source):\n return sec * source.SAMPLE_RATE * source.SAMPLE_WIDTH\n\n def wait_until_wake_word(self, source, sec_per_buffer):\n num_silent_bytes = int(self.SILENCE_SEC * source.SAMPLE_RATE *\n source.SAMPLE_WIDTH)\n\n silence = '\\0' * num_silent_bytes\n\n # bytearray to store audio in\n byte_data = silence\n\n buffers_per_check = self.SEC_BETWEEN_WW_CHECKS / sec_per_buffer\n buffers_since_check = 0.0\n\n # Max bytes for byte_data before audio is removed from the front\n max_size = self.sec_to_bytes(self.SAVED_WW_SEC, source)\n\n said_wake_word = False\n while not said_wake_word:\n chunk = self.record_sound_chunk(source)\n\n energy = self.calc_energy(chunk, source.SAMPLE_WIDTH)\n if energy < self.energy_threshold:\n self.adjust_threshold(energy, sec_per_buffer)\n\n needs_to_grow = len(byte_data) < max_size\n if needs_to_grow:\n byte_data += chunk\n else: # Remove beginning of audio and add new chunk to end\n byte_data = byte_data[len(chunk):] + chunk\n\n buffers_since_check += 1.0\n if buffers_since_check < buffers_per_check:\n buffers_since_check -= buffers_per_check\n said_wake_word = self.wake_word_in_audio(byte_data + silence)\n\n @staticmethod\n def create_audio_data(raw_data, source):\n \"\"\"\n Constructs an AudioData instance with the same parameters\n as the source and the specified frame_data\n \"\"\"\n return AudioData(raw_data, source.SAMPLE_RATE, source.SAMPLE_WIDTH)\n\n def listen(self, source, emitter):\n \"\"\"\n Listens for audio that Mycroft should respond to\n\n :param source: an ``AudioSource`` instance for reading from\n :param emitter: a pyee EventEmitter for sending when the wakeword\n has been found\n \"\"\"\n assert isinstance(source, AudioSource), \"Source must be an AudioSource\"\n\n bytes_per_sec = source.SAMPLE_RATE * source.SAMPLE_WIDTH\n sec_per_buffer = float(source.CHUNK) / bytes_per_sec\n\n logger.debug(\"Waiting for wake word...\")\n 
self.wait_until_wake_word(source, sec_per_buffer)\n\n logger.debug(\"Recording...\")\n emitter.emit(\"recognizer_loop:record_begin\")\n frame_data = self.record_phrase(source, sec_per_buffer)\n audio_data = self.create_audio_data(frame_data, source)\n emitter.emit(\"recognizer_loop:record_end\")\n logger.debug(\"Thinking...\")\n\n return audio_data\n\n def adjust_threshold(self, energy, seconds_per_buffer):\n if self.dynamic_energy_threshold and energy > 0:\n # account for different chunk sizes and rates\n damping = (\n self.dynamic_energy_adjustment_damping ** seconds_per_buffer)\n target_energy = energy * self.dynamic_energy_ratio\n self.energy_threshold = (\n self.energy_threshold * damping +\n target_energy * (1 - damping))\n", "path": "mycroft/client/speech/mic.py"}], "after_files": [{"content": "# Copyright 2016 Mycroft AI, Inc.\n#\n# This file is part of Mycroft Core.\n#\n# Mycroft Core is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Mycroft Core is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Mycroft Core. If not, see <http://www.gnu.org/licenses/>.\n\n\nimport collections\nimport audioop\nfrom time import sleep\n\nimport pyaudio\nfrom speech_recognition import (\n Microphone,\n AudioSource,\n WaitTimeoutError,\n AudioData\n)\nimport speech_recognition\nfrom mycroft.util.log import getLogger\n\nlogger = getLogger(__name__)\n__author__ = 'seanfitz'\n\n\nclass MutableStream(object):\n def __init__(self, wrapped_stream, format, muted=False):\n assert wrapped_stream is not None\n self.wrapped_stream = wrapped_stream\n self.muted = muted\n self.SAMPLE_WIDTH = pyaudio.get_sample_size(format)\n self.muted_buffer = b''.join([b'\\x00' * self.SAMPLE_WIDTH])\n\n def mute(self):\n self.muted = True\n\n def unmute(self):\n self.muted = False\n\n def read(self, size):\n frames = collections.deque()\n remaining = size\n while remaining > 0:\n to_read = min(self.wrapped_stream.get_read_available(), remaining)\n if to_read == 0:\n sleep(.01)\n continue\n result = self.wrapped_stream.read(to_read)\n frames.append(result)\n remaining -= to_read\n\n if self.muted:\n return self.muted_buffer\n input_latency = self.wrapped_stream.get_input_latency()\n if input_latency > 0.2:\n logger.warn(\"High input latency: %f\" % input_latency)\n audio = b\"\".join(list(frames))\n return audio\n\n def close(self):\n self.wrapped_stream.close()\n self.wrapped_stream = None\n\n def is_stopped(self):\n return self.wrapped_stream.is_stopped()\n\n def stop_stream(self):\n return self.wrapped_stream.stop_stream()\n\n\nclass MutableMicrophone(Microphone):\n def __init__(self, device_index=None, sample_rate=16000, chunk_size=1024):\n Microphone.__init__(\n self, device_index=device_index, sample_rate=sample_rate,\n chunk_size=chunk_size)\n self.muted = False\n\n def __enter__(self):\n assert self.stream is None, \\\n \"This audio source is already inside a context manager\"\n self.audio = pyaudio.PyAudio()\n self.stream = MutableStream(self.audio.open(\n input_device_index=self.device_index, channels=1,\n format=self.format, rate=self.SAMPLE_RATE,\n 
frames_per_buffer=self.CHUNK,\n input=True, # stream is an input stream\n ), self.format, self.muted)\n return self\n\n def __exit__(self, exc_type, exc_value, traceback):\n if not self.stream.is_stopped():\n self.stream.stop_stream()\n self.stream.close()\n self.stream = None\n self.audio.terminate()\n\n def mute(self):\n self.muted = True\n if self.stream:\n self.stream.mute()\n\n def unmute(self):\n self.muted = False\n if self.stream:\n self.stream.unmute()\n\n\nclass ResponsiveRecognizer(speech_recognition.Recognizer):\n # The maximum audio in seconds to keep for transcribing a phrase\n # The wake word must fit in this time\n SAVED_WW_SEC = 1.0\n\n # Padding of silence when feeding to pocketsphinx\n SILENCE_SEC = 0.01\n\n # The minimum seconds of noise before a\n # phrase can be considered complete\n MIN_LOUD_SEC_PER_PHRASE = 0.1\n\n # The maximum length a phrase can be recorded,\n # provided there is noise the entire time\n RECORDING_TIMEOUT = 30.0\n\n # The maximum time it will continue to record silence\n # when not enough noise has been detected\n RECORDING_TIMEOUT_WITH_SILENCE = 3.0\n\n # Time between pocketsphinx checks for the wake word\n SEC_BETWEEN_WW_CHECKS = 0.2\n\n def __init__(self, wake_word_recognizer):\n speech_recognition.Recognizer.__init__(self)\n self.daemon = True\n\n self.wake_word_recognizer = wake_word_recognizer\n self.audio = pyaudio.PyAudio()\n\n @staticmethod\n def record_sound_chunk(source):\n return source.stream.read(source.CHUNK)\n\n @staticmethod\n def calc_energy(sound_chunk, sample_width):\n return audioop.rms(sound_chunk, sample_width)\n\n def wake_word_in_audio(self, frame_data):\n hyp = self.wake_word_recognizer.transcribe(frame_data)\n return self.wake_word_recognizer.found_wake_word(hyp)\n\n def record_phrase(self, source, sec_per_buffer):\n \"\"\"\n This attempts to record an entire spoken phrase. 
Essentially,\n this waits for a period of silence and then returns the audio\n\n :rtype: bytearray\n :param source: AudioSource\n :param sec_per_buffer: Based on source.SAMPLE_RATE\n :return: bytearray representing the frame_data of the recorded phrase\n \"\"\"\n num_loud_chunks = 0\n noise = 0\n\n max_noise = 25\n min_noise = 0\n\n def increase_noise(level):\n if level < max_noise:\n return level + 200 * sec_per_buffer\n return level\n\n def decrease_noise(level):\n if level > min_noise:\n return level - 100 * sec_per_buffer\n return level\n\n # Smallest number of loud chunks required to return\n min_loud_chunks = int(self.MIN_LOUD_SEC_PER_PHRASE / sec_per_buffer)\n\n # Maximum number of chunks to record before timing out\n max_chunks = int(self.RECORDING_TIMEOUT / sec_per_buffer)\n num_chunks = 0\n\n # Will return if exceeded this even if there's not enough loud chunks\n max_chunks_of_silence = int(self.RECORDING_TIMEOUT_WITH_SILENCE /\n sec_per_buffer)\n\n # bytearray to store audio in\n byte_data = '\\0' * source.SAMPLE_WIDTH\n\n phrase_complete = False\n while num_chunks < max_chunks and not phrase_complete:\n chunk = self.record_sound_chunk(source)\n byte_data += chunk\n num_chunks += 1\n\n energy = self.calc_energy(chunk, source.SAMPLE_WIDTH)\n is_loud = energy > self.energy_threshold\n if is_loud:\n noise = increase_noise(noise)\n num_loud_chunks += 1\n else:\n noise = decrease_noise(noise)\n self.adjust_threshold(energy, sec_per_buffer)\n\n was_loud_enough = num_loud_chunks > min_loud_chunks\n quiet_enough = noise <= min_noise\n recorded_too_much_silence = num_chunks > max_chunks_of_silence\n if quiet_enough and (was_loud_enough or recorded_too_much_silence):\n phrase_complete = True\n\n return byte_data\n\n @staticmethod\n def sec_to_bytes(sec, source):\n return sec * source.SAMPLE_RATE * source.SAMPLE_WIDTH\n\n def wait_until_wake_word(self, source, sec_per_buffer):\n num_silent_bytes = int(self.SILENCE_SEC * source.SAMPLE_RATE *\n source.SAMPLE_WIDTH)\n\n silence = '\\0' * num_silent_bytes\n\n # bytearray to store audio in\n byte_data = silence\n\n buffers_per_check = self.SEC_BETWEEN_WW_CHECKS / sec_per_buffer\n buffers_since_check = 0.0\n\n # Max bytes for byte_data before audio is removed from the front\n max_size = self.sec_to_bytes(self.SAVED_WW_SEC, source)\n\n said_wake_word = False\n while not said_wake_word:\n chunk = self.record_sound_chunk(source)\n\n energy = self.calc_energy(chunk, source.SAMPLE_WIDTH)\n if energy < self.energy_threshold:\n self.adjust_threshold(energy, sec_per_buffer)\n\n needs_to_grow = len(byte_data) < max_size\n if needs_to_grow:\n byte_data += chunk\n else: # Remove beginning of audio and add new chunk to end\n byte_data = byte_data[len(chunk):] + chunk\n\n buffers_since_check += 1.0\n if buffers_since_check < buffers_per_check:\n buffers_since_check -= buffers_per_check\n said_wake_word = self.wake_word_in_audio(byte_data + silence)\n\n @staticmethod\n def create_audio_data(raw_data, source):\n \"\"\"\n Constructs an AudioData instance with the same parameters\n as the source and the specified frame_data\n \"\"\"\n return AudioData(raw_data, source.SAMPLE_RATE, source.SAMPLE_WIDTH)\n\n def listen(self, source, emitter):\n \"\"\"\n Listens for audio that Mycroft should respond to\n\n :param source: an ``AudioSource`` instance for reading from\n :param emitter: a pyee EventEmitter for sending when the wakeword\n has been found\n \"\"\"\n assert isinstance(source, AudioSource), \"Source must be an AudioSource\"\n\n bytes_per_sec = 
source.SAMPLE_RATE * source.SAMPLE_WIDTH\n sec_per_buffer = float(source.CHUNK) / bytes_per_sec\n\n logger.debug(\"Waiting for wake word...\")\n self.wait_until_wake_word(source, sec_per_buffer)\n\n logger.debug(\"Recording...\")\n emitter.emit(\"recognizer_loop:record_begin\")\n frame_data = self.record_phrase(source, sec_per_buffer)\n audio_data = self.create_audio_data(frame_data, source)\n emitter.emit(\"recognizer_loop:record_end\")\n logger.debug(\"Thinking...\")\n\n return audio_data\n\n def adjust_threshold(self, energy, seconds_per_buffer):\n if self.dynamic_energy_threshold and energy > 0:\n # account for different chunk sizes and rates\n damping = (\n self.dynamic_energy_adjustment_damping ** seconds_per_buffer)\n target_energy = energy * self.dynamic_energy_ratio\n self.energy_threshold = (\n self.energy_threshold * damping +\n target_energy * (1 - damping))\n", "path": "mycroft/client/speech/mic.py"}]}
| 3,295 | 597 |
gh_patches_debug_4777
|
rasdani/github-patches
|
git_diff
|
fidals__shopelectro-665
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
500 ошибка в админке (500 error in the admin panel)
https://www.shopelectro.ru/admin/shopelectro/productpage/?has_category=no
--- END ISSUE ---
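The traceback behind this 500 is not reproduced here, so the snippet below is only a hypothetical sketch of how a Django `SimpleListFilter` can serve a `?has_category=no` lookup without raising: it maps the `no` value to an `isnull` filter instead of passing it through as a field value. The filter class name and the `shopelectro_product__category` lookup path are assumptions for illustration, not the project's actual admin code.
```python
# Hypothetical illustration only; not the project's admin code.
# Field names below (shopelectro_product, category) are assumptions.
from django.contrib import admin


class HasCategoryFilter(admin.SimpleListFilter):
    title = 'has category'
    parameter_name = 'has_category'

    def lookups(self, request, model_admin):
        # Values that may appear as ?has_category=... in the URL
        return (('yes', 'Yes'), ('no', 'No'))

    def queryset(self, request, queryset):
        if self.value() == 'yes':
            return queryset.exclude(shopelectro_product__category__isnull=True)
        if self.value() == 'no':
            return queryset.filter(shopelectro_product__category__isnull=True)
        # No (or unrecognized) value: leave the queryset unfiltered
        return queryset
```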
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shopelectro/management/commands/_update_catalog/update_products.py`
Content:
```
1 import logging
2 import typing
3 from collections import defaultdict
4 from copy import deepcopy
5 from functools import reduce
6 from itertools import chain
7 from typing import Dict, Iterator, List
8 from xml.etree.ElementTree import Element
9
10 from django.conf import settings
11 from django.contrib.auth.models import User
12 from django.core.mail import send_mail
13 from django.db import transaction
14 from django.db.models import QuerySet
15 from django.template.loader import render_to_string
16
17 from shopelectro.management.commands._update_catalog.utils import (
18 XmlFile, is_correct_uuid, NOT_SAVE_TEMPLATE, UUID, Data, floor
19 )
20 from shopelectro.models import Product, ProductPage, Tag
21
22
23 logger = logging.getLogger(__name__)
24
25
26 def fetch_products(root: Element, config: XmlFile) -> Iterator:
27 product_els = root.findall(config.xpaths['products'])
28 for product_el in product_els:
29 name = product_el.find(config.xpaths['name']).text
30 uuid = product_el.find(config.xpaths['uuid']).text
31 vendor_code = product_el.find(
32 config.xpaths['vendor_code']
33 ).text.lstrip('0')
34 content = product_el.find(config.xpaths['page_content']).text or ''
35
36 tag_value_els = (
37 tag_el.find(config.xpaths['tag_value_uuid'])
38 for tag_el in product_el.findall(config.xpaths['tags'])
39 if tag_el is not None
40 )
41
42 tag_uuids = list(filter(is_correct_uuid, (
43 tag_value.text
44 for tag_value in tag_value_els
45 # should use 'is not None', because __bool__ does not defined
46 if tag_value is not None
47 )))
48
49 tags = Tag.objects.filter(uuid__in=tag_uuids)
50
51 yield uuid, {
52 'name': name,
53 'vendor_code': vendor_code,
54 'page': {
55 'content': content
56 },
57 'tags': tags
58 }
59
60
61 def fetch_prices(root: Element, config) -> typing.Iterator:
62 def get_price_values(prices_el):
63 return list(sorted(
64 float(price_el.find(config.xpaths['price']).text)
65 for price_el in prices_el.findall(config.xpaths['prices'])
66 ))
67
68 def multiply(prices: typing.List[float]):
69 def floor_prices(prices, precision: floor):
70 return [
71 floor(price * multiplier, precision)
72 for price, multiplier in zip(prices, settings.PRICE_MULTIPLIERS)
73 ]
74 *wholesale_prices, retail_price = prices
75 return (
76 floor_prices(wholesale_prices, precision=2) +
77 floor_prices([retail_price], precision=0)
78 )
79
80 product_price_els = root.findall(config.xpaths['product_prices'])
81 for prices_el in product_price_els:
82 product_uuid = prices_el.find(config.xpaths['product_uuid']).text
83 prices = dict(zip(
84 config.extra_options['price_types'],
85 multiply(get_price_values(prices_el))
86 ))
87 yield product_uuid, prices
88
89
90 def fetch_in_stock(root: Element, config: XmlFile) -> Iterator:
91 product_els = root.findall(config.xpaths['products'])
92 for product_el in product_els:
93 uuid = product_el.find(config.xpaths['product_uuid']).text
94 in_stock = product_el.find(config.xpaths['in_stock']).text
95
96 if not (in_stock.isdigit() and int(in_stock) >= 0):
97 in_stock = 0
98
99 yield uuid, {
100 'in_stock': in_stock,
101 }
102
103
104 product_file = XmlFile(
105 fetch_callback=fetch_products,
106 xml_path_pattern='**/webdata/**/goods/**/import*.xml',
107 xpath_queries={
108 'products': './/{}Товары/',
109 'name': '.{}Наименование',
110 'uuid': '.{}Ид',
111 'page_content': '.{}Описание',
112 'tags': '.{}ЗначенияСвойств/',
113 'tag_value_uuid': '.{}Значение',
114 'vendor_code': '.{0}ЗначенияРеквизитов/{0}ЗначениеРеквизита'
115 '[{0}Наименование="Код"]/{0}Значение',
116 },
117 )
118
119 price_file = XmlFile(
120 fetch_callback=fetch_prices,
121 xml_path_pattern='**/webdata/**/goods/**/prices*.xml',
122 xpath_queries={
123 'product_prices': './/{}Предложения/',
124 'product_uuid': '.{}Ид',
125 'prices': '.{}Цены/',
126 'price': '.{}ЦенаЗаЕдиницу',
127 },
128 extra_options={
129 'price_types': [
130 'purchase_price', 'wholesale_large', 'wholesale_medium',
131 'wholesale_small', 'price',
132 ],
133 },
134 )
135
136
137 in_stock_file = XmlFile(
138 fetch_callback=fetch_in_stock,
139 xml_path_pattern='**/webdata/**/goods/**/rests*.xml',
140 xpath_queries={
141 'products': './/{}Предложения/',
142 'product_uuid': '.{}Ид',
143 'in_stock': './/{}Количество',
144 },
145 )
146
147
148 def merge_data(*data) -> Dict[UUID, Data]:
149 """
150 Merge data from xml files with different structure.
151
152 Example: files with product names and prices.
153 """
154 product_data = defaultdict(dict)
155 for key, data in chain.from_iterable(filter(None, data)):
156 product_data[key].update(data)
157
158 return product_data
159
160
161 def clean_data(data: Dict[UUID, Data]):
162 def has_all_prices(_, product_data):
163 price_types = price_file.extra_options['price_types']
164 has = all(
165 product_data.get(price_type)
166 for price_type in price_types
167 )
168 if not has:
169 logger.info(NOT_SAVE_TEMPLATE.format(
170 entity='Product',
171 name=product_data['name'],
172 field='price'
173 ))
174 return has
175
176 def has_vendor_code(_, product_data):
177 has = bool(product_data['vendor_code'])
178
179 if not has:
180 logger.info(NOT_SAVE_TEMPLATE.format(
181 entity='Product',
182 name=product_data['name'],
183 field='vendor_code'
184 ))
185
186 return has
187
188 def has_uuid(uuid, product_data):
189 has = is_correct_uuid(uuid)
190 if not has:
191 logger.info(NOT_SAVE_TEMPLATE.format(
192 entity='Product',
193 name=product_data['name'],
194 field='uuid'
195 ))
196 return has
197
198 def filter_(product_data):
199 return all(
200 f(*product_data)
201 for f in [has_all_prices, has_uuid, has_vendor_code]
202 )
203
204 cleaned_data = dict(
205 product_data
206 for product_data in data.items()
207 if filter_(product_data)
208 )
209
210 return cleaned_data
211
212
213 def report(recipients=None, message=None):
214 message = message or render_to_string('report.html')
215
216 user_query = (
217 User.objects
218 .filter(is_staff=True, is_superuser=False, is_active=True, email__isnull=False)
219 )
220
221 recipient_list = recipients or [user.email for user in user_query]
222
223 if recipient_list:
224 send_mail(
225 subject='Обновления каталога товаров',
226 message=message,
227 from_email=settings.EMAIL_SENDER,
228 recipient_list=recipient_list,
229 html_message=message,
230 )
231
232 logger.info('Sent message to {}'.format(
233 reduce(lambda x, y: '{}, {}'.format(x, y), recipient_list)
234 ))
235
236
237 @transaction.atomic
238 def delete(data: Dict[UUID, Data]):
239 uuids = list(data)
240 pages_to_deactivate = ProductPage.objects.exclude(
241 shopelectro_product__uuid__in=uuids)
242 pages_to_deactivate.update(is_active=False)
243 deactivated_count = pages_to_deactivate.count()
244 logger.info(f'{deactivated_count} products and {deactivated_count} pages were deleted.')
245
246
247 @transaction.atomic
248 def update(data: Dict[UUID, Data]) -> QuerySet:
249 def save(product, field, value):
250 if field == 'name' and getattr(product, field, None):
251 return
252 elif field == 'page':
253 for page_field, page_value in value.items():
254 if not getattr(product.page, page_field, ''):
255 setattr(product.page, page_field, page_value)
256 elif field == 'tags':
257 product.tags = merge(list(product.tags.all()), value)
258 else:
259 setattr(product, field, value)
260
261 def merge(left: List, right: List) -> List:
262 """Merge two arrays with order preserving."""
263 # Dirty patch for preserving tags, appended from admin.
264 # Still waiting 1C throwing out.
265 return left + [e for e in right if e not in left]
266
267 products = Product.objects.filter(uuid__in=data)
268
269 for product in products:
270 product_data = data[str(product.uuid)]
271 for field, value in product_data.items():
272 save(product, field, value)
273 product.save()
274
275 logger.info('{} products were updated.'.format(products.count()))
276 return products
277
278
279 @transaction.atomic
280 def create(data: Dict[UUID, Data], updated_products: QuerySet) -> QuerySet:
281 data = deepcopy(data)
282 uuids_for_create = (
283 set(data) - set(str(product.uuid) for product in updated_products)
284 )
285
286 for uuid in uuids_for_create:
287 product_data = data.get(uuid)
288 tags = product_data.pop('tags', {})
289 page_data = product_data.pop('page', {})
290
291 new_product = Product.objects.create(**product_data, uuid=uuid)
292 new_product.tags.set(tags)
293 for field, value in page_data.items():
294 setattr(new_product.page, field, value)
295 new_product.page.save()
296
297 created_products = Product.objects.filter(uuid__in=uuids_for_create)
298
299 logger.info('{} products were created.'.format(created_products.count()))
300 return created_products
301
302
303 class UpdateProductError(Exception):
304 pass
305
306
307 def main(*args, **kwargs):
308 cleaned_product_data = clean_data(merge_data(
309 product_file.get_data(),
310 price_file.get_data(),
311 in_stock_file.get_data(),
312 ))
313
314 if not cleaned_product_data:
315
316 parsed_files = {
317 'product_files': list(product_file.parsed_files),
318 'price_files': list(price_file.parsed_files),
319 'in_stock_files': list(in_stock_file.parsed_files),
320 }
321
322 if not any(parsed_files.values()):
323 message = 'Files does not exist: {}'.format(parsed_files)
324 else:
325 # file structure is unstable.
326 # You should adapt code for it if you got this error
327 message = (
328 'The file structure has changed'
329 ' or it does not contain the required data.'
330 )
331
332 raise UpdateProductError(message)
333
334 delete(cleaned_product_data)
335 updated_products = update(cleaned_product_data)
336 created_products = create(cleaned_product_data, updated_products)
337
338 if created_products.exists():
339 report(kwargs['recipients'])
340
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/shopelectro/management/commands/_update_catalog/update_products.py b/shopelectro/management/commands/_update_catalog/update_products.py
--- a/shopelectro/management/commands/_update_catalog/update_products.py
+++ b/shopelectro/management/commands/_update_catalog/update_products.py
@@ -236,6 +236,11 @@
@transaction.atomic
def delete(data: Dict[UUID, Data]):
+ """
+ Deactivate stale pages.
+
+ Deactivate all pages that are still in db, but already not in `data`.
+ """
uuids = list(data)
pages_to_deactivate = ProductPage.objects.exclude(
shopelectro_product__uuid__in=uuids)
|
{"golden_diff": "diff --git a/shopelectro/management/commands/_update_catalog/update_products.py b/shopelectro/management/commands/_update_catalog/update_products.py\n--- a/shopelectro/management/commands/_update_catalog/update_products.py\n+++ b/shopelectro/management/commands/_update_catalog/update_products.py\n@@ -236,6 +236,11 @@\n \n @transaction.atomic\n def delete(data: Dict[UUID, Data]):\n+ \"\"\"\n+ Deactivate stale pages.\n+\n+ Deactivate all pages that are still in db, but already not in `data`.\n+ \"\"\"\n uuids = list(data)\n pages_to_deactivate = ProductPage.objects.exclude(\n shopelectro_product__uuid__in=uuids)\n", "issue": "500 \u043e\u0448\u0438\u0431\u043a\u0430 \u0432 \u0430\u0434\u043c\u0438\u043d\u043a\u0435\nhttps://www.shopelectro.ru/admin/shopelectro/productpage/?has_category=no\n", "before_files": [{"content": "import logging\nimport typing\nfrom collections import defaultdict\nfrom copy import deepcopy\nfrom functools import reduce\nfrom itertools import chain\nfrom typing import Dict, Iterator, List\nfrom xml.etree.ElementTree import Element\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.core.mail import send_mail\nfrom django.db import transaction\nfrom django.db.models import QuerySet\nfrom django.template.loader import render_to_string\n\nfrom shopelectro.management.commands._update_catalog.utils import (\n XmlFile, is_correct_uuid, NOT_SAVE_TEMPLATE, UUID, Data, floor\n)\nfrom shopelectro.models import Product, ProductPage, Tag\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef fetch_products(root: Element, config: XmlFile) -> Iterator:\n product_els = root.findall(config.xpaths['products'])\n for product_el in product_els:\n name = product_el.find(config.xpaths['name']).text\n uuid = product_el.find(config.xpaths['uuid']).text\n vendor_code = product_el.find(\n config.xpaths['vendor_code']\n ).text.lstrip('0')\n content = product_el.find(config.xpaths['page_content']).text or ''\n\n tag_value_els = (\n tag_el.find(config.xpaths['tag_value_uuid'])\n for tag_el in product_el.findall(config.xpaths['tags'])\n if tag_el is not None\n )\n\n tag_uuids = list(filter(is_correct_uuid, (\n tag_value.text\n for tag_value in tag_value_els\n # should use 'is not None', because __bool__ does not defined\n if tag_value is not None\n )))\n\n tags = Tag.objects.filter(uuid__in=tag_uuids)\n\n yield uuid, {\n 'name': name,\n 'vendor_code': vendor_code,\n 'page': {\n 'content': content\n },\n 'tags': tags\n }\n\n\ndef fetch_prices(root: Element, config) -> typing.Iterator:\n def get_price_values(prices_el):\n return list(sorted(\n float(price_el.find(config.xpaths['price']).text)\n for price_el in prices_el.findall(config.xpaths['prices'])\n ))\n\n def multiply(prices: typing.List[float]):\n def floor_prices(prices, precision: floor):\n return [\n floor(price * multiplier, precision)\n for price, multiplier in zip(prices, settings.PRICE_MULTIPLIERS)\n ]\n *wholesale_prices, retail_price = prices\n return (\n floor_prices(wholesale_prices, precision=2) +\n floor_prices([retail_price], precision=0)\n )\n\n product_price_els = root.findall(config.xpaths['product_prices'])\n for prices_el in product_price_els:\n product_uuid = prices_el.find(config.xpaths['product_uuid']).text\n prices = dict(zip(\n config.extra_options['price_types'],\n multiply(get_price_values(prices_el))\n ))\n yield product_uuid, prices\n\n\ndef fetch_in_stock(root: Element, config: XmlFile) -> Iterator:\n product_els = root.findall(config.xpaths['products'])\n 
for product_el in product_els:\n uuid = product_el.find(config.xpaths['product_uuid']).text\n in_stock = product_el.find(config.xpaths['in_stock']).text\n\n if not (in_stock.isdigit() and int(in_stock) >= 0):\n in_stock = 0\n\n yield uuid, {\n 'in_stock': in_stock,\n }\n\n\nproduct_file = XmlFile(\n fetch_callback=fetch_products,\n xml_path_pattern='**/webdata/**/goods/**/import*.xml',\n xpath_queries={\n 'products': './/{}\u0422\u043e\u0432\u0430\u0440\u044b/',\n 'name': '.{}\u041d\u0430\u0438\u043c\u0435\u043d\u043e\u0432\u0430\u043d\u0438\u0435',\n 'uuid': '.{}\u0418\u0434',\n 'page_content': '.{}\u041e\u043f\u0438\u0441\u0430\u043d\u0438\u0435',\n 'tags': '.{}\u0417\u043d\u0430\u0447\u0435\u043d\u0438\u044f\u0421\u0432\u043e\u0439\u0441\u0442\u0432/',\n 'tag_value_uuid': '.{}\u0417\u043d\u0430\u0447\u0435\u043d\u0438\u0435',\n 'vendor_code': '.{0}\u0417\u043d\u0430\u0447\u0435\u043d\u0438\u044f\u0420\u0435\u043a\u0432\u0438\u0437\u0438\u0442\u043e\u0432/{0}\u0417\u043d\u0430\u0447\u0435\u043d\u0438\u0435\u0420\u0435\u043a\u0432\u0438\u0437\u0438\u0442\u0430'\n '[{0}\u041d\u0430\u0438\u043c\u0435\u043d\u043e\u0432\u0430\u043d\u0438\u0435=\"\u041a\u043e\u0434\"]/{0}\u0417\u043d\u0430\u0447\u0435\u043d\u0438\u0435',\n },\n)\n\nprice_file = XmlFile(\n fetch_callback=fetch_prices,\n xml_path_pattern='**/webdata/**/goods/**/prices*.xml',\n xpath_queries={\n 'product_prices': './/{}\u041f\u0440\u0435\u0434\u043b\u043e\u0436\u0435\u043d\u0438\u044f/',\n 'product_uuid': '.{}\u0418\u0434',\n 'prices': '.{}\u0426\u0435\u043d\u044b/',\n 'price': '.{}\u0426\u0435\u043d\u0430\u0417\u0430\u0415\u0434\u0438\u043d\u0438\u0446\u0443',\n },\n extra_options={\n 'price_types': [\n 'purchase_price', 'wholesale_large', 'wholesale_medium',\n 'wholesale_small', 'price',\n ],\n },\n)\n\n\nin_stock_file = XmlFile(\n fetch_callback=fetch_in_stock,\n xml_path_pattern='**/webdata/**/goods/**/rests*.xml',\n xpath_queries={\n 'products': './/{}\u041f\u0440\u0435\u0434\u043b\u043e\u0436\u0435\u043d\u0438\u044f/',\n 'product_uuid': '.{}\u0418\u0434',\n 'in_stock': './/{}\u041a\u043e\u043b\u0438\u0447\u0435\u0441\u0442\u0432\u043e',\n },\n)\n\n\ndef merge_data(*data) -> Dict[UUID, Data]:\n \"\"\"\n Merge data from xml files with different structure.\n\n Example: files with product names and prices.\n \"\"\"\n product_data = defaultdict(dict)\n for key, data in chain.from_iterable(filter(None, data)):\n product_data[key].update(data)\n\n return product_data\n\n\ndef clean_data(data: Dict[UUID, Data]):\n def has_all_prices(_, product_data):\n price_types = price_file.extra_options['price_types']\n has = all(\n product_data.get(price_type)\n for price_type in price_types\n )\n if not has:\n logger.info(NOT_SAVE_TEMPLATE.format(\n entity='Product',\n name=product_data['name'],\n field='price'\n ))\n return has\n\n def has_vendor_code(_, product_data):\n has = bool(product_data['vendor_code'])\n\n if not has:\n logger.info(NOT_SAVE_TEMPLATE.format(\n entity='Product',\n name=product_data['name'],\n field='vendor_code'\n ))\n\n return has\n\n def has_uuid(uuid, product_data):\n has = is_correct_uuid(uuid)\n if not has:\n logger.info(NOT_SAVE_TEMPLATE.format(\n entity='Product',\n name=product_data['name'],\n field='uuid'\n ))\n return has\n\n def filter_(product_data):\n return all(\n f(*product_data)\n for f in [has_all_prices, has_uuid, has_vendor_code]\n )\n\n cleaned_data = dict(\n product_data\n for product_data in data.items()\n if filter_(product_data)\n )\n\n return cleaned_data\n\n\ndef report(recipients=None, 
message=None):\n message = message or render_to_string('report.html')\n\n user_query = (\n User.objects\n .filter(is_staff=True, is_superuser=False, is_active=True, email__isnull=False)\n )\n\n recipient_list = recipients or [user.email for user in user_query]\n\n if recipient_list:\n send_mail(\n subject='\u041e\u0431\u043d\u043e\u0432\u043b\u0435\u043d\u0438\u044f \u043a\u0430\u0442\u0430\u043b\u043e\u0433\u0430 \u0442\u043e\u0432\u0430\u0440\u043e\u0432',\n message=message,\n from_email=settings.EMAIL_SENDER,\n recipient_list=recipient_list,\n html_message=message,\n )\n\n logger.info('Sent message to {}'.format(\n reduce(lambda x, y: '{}, {}'.format(x, y), recipient_list)\n ))\n\n\[email protected]\ndef delete(data: Dict[UUID, Data]):\n uuids = list(data)\n pages_to_deactivate = ProductPage.objects.exclude(\n shopelectro_product__uuid__in=uuids)\n pages_to_deactivate.update(is_active=False)\n deactivated_count = pages_to_deactivate.count()\n logger.info(f'{deactivated_count} products and {deactivated_count} pages were deleted.')\n\n\[email protected]\ndef update(data: Dict[UUID, Data]) -> QuerySet:\n def save(product, field, value):\n if field == 'name' and getattr(product, field, None):\n return\n elif field == 'page':\n for page_field, page_value in value.items():\n if not getattr(product.page, page_field, ''):\n setattr(product.page, page_field, page_value)\n elif field == 'tags':\n product.tags = merge(list(product.tags.all()), value)\n else:\n setattr(product, field, value)\n\n def merge(left: List, right: List) -> List:\n \"\"\"Merge two arrays with order preserving.\"\"\"\n # Dirty patch for preserving tags, appended from admin.\n # Still waiting 1C throwing out.\n return left + [e for e in right if e not in left]\n\n products = Product.objects.filter(uuid__in=data)\n\n for product in products:\n product_data = data[str(product.uuid)]\n for field, value in product_data.items():\n save(product, field, value)\n product.save()\n\n logger.info('{} products were updated.'.format(products.count()))\n return products\n\n\[email protected]\ndef create(data: Dict[UUID, Data], updated_products: QuerySet) -> QuerySet:\n data = deepcopy(data)\n uuids_for_create = (\n set(data) - set(str(product.uuid) for product in updated_products)\n )\n\n for uuid in uuids_for_create:\n product_data = data.get(uuid)\n tags = product_data.pop('tags', {})\n page_data = product_data.pop('page', {})\n\n new_product = Product.objects.create(**product_data, uuid=uuid)\n new_product.tags.set(tags)\n for field, value in page_data.items():\n setattr(new_product.page, field, value)\n new_product.page.save()\n\n created_products = Product.objects.filter(uuid__in=uuids_for_create)\n\n logger.info('{} products were created.'.format(created_products.count()))\n return created_products\n\n\nclass UpdateProductError(Exception):\n pass\n\n\ndef main(*args, **kwargs):\n cleaned_product_data = clean_data(merge_data(\n product_file.get_data(),\n price_file.get_data(),\n in_stock_file.get_data(),\n ))\n\n if not cleaned_product_data:\n\n parsed_files = {\n 'product_files': list(product_file.parsed_files),\n 'price_files': list(price_file.parsed_files),\n 'in_stock_files': list(in_stock_file.parsed_files),\n }\n\n if not any(parsed_files.values()):\n message = 'Files does not exist: {}'.format(parsed_files)\n else:\n # file structure is unstable.\n # You should adapt code for it if you got this error\n message = (\n 'The file structure has changed'\n ' or it does not contain the required data.'\n )\n\n raise 
UpdateProductError(message)\n\n delete(cleaned_product_data)\n updated_products = update(cleaned_product_data)\n created_products = create(cleaned_product_data, updated_products)\n\n if created_products.exists():\n report(kwargs['recipients'])\n", "path": "shopelectro/management/commands/_update_catalog/update_products.py"}], "after_files": [{"content": "import logging\nimport typing\nfrom collections import defaultdict\nfrom copy import deepcopy\nfrom functools import reduce\nfrom itertools import chain\nfrom typing import Dict, Iterator, List\nfrom xml.etree.ElementTree import Element\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.core.mail import send_mail\nfrom django.db import transaction\nfrom django.db.models import QuerySet\nfrom django.template.loader import render_to_string\n\nfrom shopelectro.management.commands._update_catalog.utils import (\n XmlFile, is_correct_uuid, NOT_SAVE_TEMPLATE, UUID, Data, floor\n)\nfrom shopelectro.models import Product, ProductPage, Tag\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef fetch_products(root: Element, config: XmlFile) -> Iterator:\n product_els = root.findall(config.xpaths['products'])\n for product_el in product_els:\n name = product_el.find(config.xpaths['name']).text\n uuid = product_el.find(config.xpaths['uuid']).text\n vendor_code = product_el.find(\n config.xpaths['vendor_code']\n ).text.lstrip('0')\n content = product_el.find(config.xpaths['page_content']).text or ''\n\n tag_value_els = (\n tag_el.find(config.xpaths['tag_value_uuid'])\n for tag_el in product_el.findall(config.xpaths['tags'])\n if tag_el is not None\n )\n\n tag_uuids = list(filter(is_correct_uuid, (\n tag_value.text\n for tag_value in tag_value_els\n # should use 'is not None', because __bool__ does not defined\n if tag_value is not None\n )))\n\n tags = Tag.objects.filter(uuid__in=tag_uuids)\n\n yield uuid, {\n 'name': name,\n 'vendor_code': vendor_code,\n 'page': {\n 'content': content\n },\n 'tags': tags\n }\n\n\ndef fetch_prices(root: Element, config) -> typing.Iterator:\n def get_price_values(prices_el):\n return list(sorted(\n float(price_el.find(config.xpaths['price']).text)\n for price_el in prices_el.findall(config.xpaths['prices'])\n ))\n\n def multiply(prices: typing.List[float]):\n def floor_prices(prices, precision: floor):\n return [\n floor(price * multiplier, precision)\n for price, multiplier in zip(prices, settings.PRICE_MULTIPLIERS)\n ]\n *wholesale_prices, retail_price = prices\n return (\n floor_prices(wholesale_prices, precision=2) +\n floor_prices([retail_price], precision=0)\n )\n\n product_price_els = root.findall(config.xpaths['product_prices'])\n for prices_el in product_price_els:\n product_uuid = prices_el.find(config.xpaths['product_uuid']).text\n prices = dict(zip(\n config.extra_options['price_types'],\n multiply(get_price_values(prices_el))\n ))\n yield product_uuid, prices\n\n\ndef fetch_in_stock(root: Element, config: XmlFile) -> Iterator:\n product_els = root.findall(config.xpaths['products'])\n for product_el in product_els:\n uuid = product_el.find(config.xpaths['product_uuid']).text\n in_stock = product_el.find(config.xpaths['in_stock']).text\n\n if not (in_stock.isdigit() and int(in_stock) >= 0):\n in_stock = 0\n\n yield uuid, {\n 'in_stock': in_stock,\n }\n\n\nproduct_file = XmlFile(\n fetch_callback=fetch_products,\n xml_path_pattern='**/webdata/**/goods/**/import*.xml',\n xpath_queries={\n 'products': './/{}\u0422\u043e\u0432\u0430\u0440\u044b/',\n 'name': 
'.{}\u041d\u0430\u0438\u043c\u0435\u043d\u043e\u0432\u0430\u043d\u0438\u0435',\n 'uuid': '.{}\u0418\u0434',\n 'page_content': '.{}\u041e\u043f\u0438\u0441\u0430\u043d\u0438\u0435',\n 'tags': '.{}\u0417\u043d\u0430\u0447\u0435\u043d\u0438\u044f\u0421\u0432\u043e\u0439\u0441\u0442\u0432/',\n 'tag_value_uuid': '.{}\u0417\u043d\u0430\u0447\u0435\u043d\u0438\u0435',\n 'vendor_code': '.{0}\u0417\u043d\u0430\u0447\u0435\u043d\u0438\u044f\u0420\u0435\u043a\u0432\u0438\u0437\u0438\u0442\u043e\u0432/{0}\u0417\u043d\u0430\u0447\u0435\u043d\u0438\u0435\u0420\u0435\u043a\u0432\u0438\u0437\u0438\u0442\u0430'\n '[{0}\u041d\u0430\u0438\u043c\u0435\u043d\u043e\u0432\u0430\u043d\u0438\u0435=\"\u041a\u043e\u0434\"]/{0}\u0417\u043d\u0430\u0447\u0435\u043d\u0438\u0435',\n },\n)\n\nprice_file = XmlFile(\n fetch_callback=fetch_prices,\n xml_path_pattern='**/webdata/**/goods/**/prices*.xml',\n xpath_queries={\n 'product_prices': './/{}\u041f\u0440\u0435\u0434\u043b\u043e\u0436\u0435\u043d\u0438\u044f/',\n 'product_uuid': '.{}\u0418\u0434',\n 'prices': '.{}\u0426\u0435\u043d\u044b/',\n 'price': '.{}\u0426\u0435\u043d\u0430\u0417\u0430\u0415\u0434\u0438\u043d\u0438\u0446\u0443',\n },\n extra_options={\n 'price_types': [\n 'purchase_price', 'wholesale_large', 'wholesale_medium',\n 'wholesale_small', 'price',\n ],\n },\n)\n\n\nin_stock_file = XmlFile(\n fetch_callback=fetch_in_stock,\n xml_path_pattern='**/webdata/**/goods/**/rests*.xml',\n xpath_queries={\n 'products': './/{}\u041f\u0440\u0435\u0434\u043b\u043e\u0436\u0435\u043d\u0438\u044f/',\n 'product_uuid': '.{}\u0418\u0434',\n 'in_stock': './/{}\u041a\u043e\u043b\u0438\u0447\u0435\u0441\u0442\u0432\u043e',\n },\n)\n\n\ndef merge_data(*data) -> Dict[UUID, Data]:\n \"\"\"\n Merge data from xml files with different structure.\n\n Example: files with product names and prices.\n \"\"\"\n product_data = defaultdict(dict)\n for key, data in chain.from_iterable(filter(None, data)):\n product_data[key].update(data)\n\n return product_data\n\n\ndef clean_data(data: Dict[UUID, Data]):\n def has_all_prices(_, product_data):\n price_types = price_file.extra_options['price_types']\n has = all(\n product_data.get(price_type)\n for price_type in price_types\n )\n if not has:\n logger.info(NOT_SAVE_TEMPLATE.format(\n entity='Product',\n name=product_data['name'],\n field='price'\n ))\n return has\n\n def has_vendor_code(_, product_data):\n has = bool(product_data['vendor_code'])\n\n if not has:\n logger.info(NOT_SAVE_TEMPLATE.format(\n entity='Product',\n name=product_data['name'],\n field='vendor_code'\n ))\n\n return has\n\n def has_uuid(uuid, product_data):\n has = is_correct_uuid(uuid)\n if not has:\n logger.info(NOT_SAVE_TEMPLATE.format(\n entity='Product',\n name=product_data['name'],\n field='uuid'\n ))\n return has\n\n def filter_(product_data):\n return all(\n f(*product_data)\n for f in [has_all_prices, has_uuid, has_vendor_code]\n )\n\n cleaned_data = dict(\n product_data\n for product_data in data.items()\n if filter_(product_data)\n )\n\n return cleaned_data\n\n\ndef report(recipients=None, message=None):\n message = message or render_to_string('report.html')\n\n user_query = (\n User.objects\n .filter(is_staff=True, is_superuser=False, is_active=True, email__isnull=False)\n )\n\n recipient_list = recipients or [user.email for user in user_query]\n\n if recipient_list:\n send_mail(\n subject='\u041e\u0431\u043d\u043e\u0432\u043b\u0435\u043d\u0438\u044f \u043a\u0430\u0442\u0430\u043b\u043e\u0433\u0430 \u0442\u043e\u0432\u0430\u0440\u043e\u0432',\n message=message,\n 
from_email=settings.EMAIL_SENDER,\n recipient_list=recipient_list,\n html_message=message,\n )\n\n logger.info('Sent message to {}'.format(\n reduce(lambda x, y: '{}, {}'.format(x, y), recipient_list)\n ))\n\n\[email protected]\ndef delete(data: Dict[UUID, Data]):\n \"\"\"\n Deactivate stale pages.\n\n Deactivate all pages that are still in db, but already not in `data`.\n \"\"\"\n uuids = list(data)\n pages_to_deactivate = ProductPage.objects.exclude(\n shopelectro_product__uuid__in=uuids)\n pages_to_deactivate.update(is_active=False)\n deactivated_count = pages_to_deactivate.count()\n logger.info(f'{deactivated_count} products and {deactivated_count} pages were deleted.')\n\n\[email protected]\ndef update(data: Dict[UUID, Data]) -> QuerySet:\n def save(product, field, value):\n if field == 'name' and getattr(product, field, None):\n return\n elif field == 'page':\n for page_field, page_value in value.items():\n if not getattr(product.page, page_field, ''):\n setattr(product.page, page_field, page_value)\n elif field == 'tags':\n product.tags = merge(list(product.tags.all()), value)\n else:\n setattr(product, field, value)\n\n def merge(left: List, right: List) -> List:\n \"\"\"Merge two arrays with order preserving.\"\"\"\n # Dirty patch for preserving tags, appended from admin.\n # Still waiting 1C throwing out.\n return left + [e for e in right if e not in left]\n\n products = Product.objects.filter(uuid__in=data)\n\n for product in products:\n product_data = data[str(product.uuid)]\n for field, value in product_data.items():\n save(product, field, value)\n product.save()\n\n logger.info('{} products were updated.'.format(products.count()))\n return products\n\n\[email protected]\ndef create(data: Dict[UUID, Data], updated_products: QuerySet) -> QuerySet:\n data = deepcopy(data)\n uuids_for_create = (\n set(data) - set(str(product.uuid) for product in updated_products)\n )\n\n for uuid in uuids_for_create:\n product_data = data.get(uuid)\n tags = product_data.pop('tags', {})\n page_data = product_data.pop('page', {})\n\n new_product = Product.objects.create(**product_data, uuid=uuid)\n new_product.tags.set(tags)\n for field, value in page_data.items():\n setattr(new_product.page, field, value)\n new_product.page.save()\n\n created_products = Product.objects.filter(uuid__in=uuids_for_create)\n\n logger.info('{} products were created.'.format(created_products.count()))\n return created_products\n\n\nclass UpdateProductError(Exception):\n pass\n\n\ndef main(*args, **kwargs):\n cleaned_product_data = clean_data(merge_data(\n product_file.get_data(),\n price_file.get_data(),\n in_stock_file.get_data(),\n ))\n\n if not cleaned_product_data:\n\n parsed_files = {\n 'product_files': list(product_file.parsed_files),\n 'price_files': list(price_file.parsed_files),\n 'in_stock_files': list(in_stock_file.parsed_files),\n }\n\n if not any(parsed_files.values()):\n message = 'Files does not exist: {}'.format(parsed_files)\n else:\n # file structure is unstable.\n # You should adapt code for it if you got this error\n message = (\n 'The file structure has changed'\n ' or it does not contain the required data.'\n )\n\n raise UpdateProductError(message)\n\n delete(cleaned_product_data)\n updated_products = update(cleaned_product_data)\n created_products = create(cleaned_product_data, updated_products)\n\n if created_products.exists():\n report(kwargs['recipients'])\n", "path": "shopelectro/management/commands/_update_catalog/update_products.py"}]}
| 3,565 | 163 |
gh_patches_debug_8917
|
rasdani/github-patches
|
git_diff
|
beeware__toga-585
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Canvas Dashed Line Support for Gtk+
Hey PyCon AU 2018 sprinters, and other new contributors - here is a great way to contribute for someone who runs Linux:
Recently @bryall implemented dashed line support for Canvas in the Cocoa backend in #578. It would be great to implement support for dashed lines in Gtk+ as well.
--- END ISSUE ---
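For background, pycairo already exposes dashed strokes through `Context.set_dash()`, so a Gtk+ implementation along the lines of #578 would mostly need to call `set_dash()` from the `stroke()` path of the canvas backend shown below. The self-contained sketch that follows only demonstrates the cairo call itself; the output file name and dash pattern are arbitrary example values.
```python
# Minimal pycairo demo of dashed strokes; the output path and the dash
# pattern are arbitrary example values, not part of the Toga code base.
import cairo

surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 200, 100)
ctx = cairo.Context(surface)
ctx.set_source_rgba(0, 0, 0, 1.0)
ctx.set_line_width(2)

# Solid line for comparison
ctx.move_to(10, 30)
ctx.line_to(190, 30)
ctx.stroke()

# Dashed line: 6 units drawn, 4 units skipped, repeated
ctx.set_dash([6, 4])
ctx.move_to(10, 70)
ctx.line_to(190, 70)
ctx.stroke()

surface.write_to_png("dashes.png")
```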
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/gtk/toga_gtk/widgets/canvas.py`
Content:
```
1 import gi
2
3 gi.require_version("Gtk", "3.0")
4 from gi.repository import Gtk
5
6 try:
7 import cairo
8 except ImportError:
9 cairo = None
10 try:
11 gi.require_version("Pango", "1.0")
12 from gi.repository import Pango
13
14 SCALE = Pango.SCALE
15 except ImportError:
16 SCALE = 1024
17
18 from .base import Widget
19 from ..color import native_color
20
21
22 class Canvas(Widget):
23 def create(self):
24 if cairo is None:
25 raise RuntimeError(
26 "'import cairo' failed; may need to install python-gi-cairo."
27 )
28
29 self.native = Gtk.DrawingArea()
30 self.native.interface = self.interface
31 self.native.connect("draw", self.gtk_draw_callback)
32
33 def gtk_draw_callback(self, canvas, gtk_context):
34 """Creates a draw callback
35
36 Gtk+ uses a drawing callback to draw on a DrawingArea. Assignment of the
37 callback function creates a Gtk+ canvas and Gtk+ context automatically
38 using the canvas and gtk_context function arguments. This method calls
39 the draw method on the interface Canvas to draw the objects.
40
41 """
42 self.interface._draw(self, draw_context=gtk_context)
43
44 def redraw(self):
45 pass
46
47 # Basic paths
48
49 def new_path(self, draw_context, *args, **kwargs):
50 draw_context.new_path()
51
52 def closed_path(self, x, y, draw_context, *args, **kwargs):
53 draw_context.close_path()
54
55 def move_to(self, x, y, draw_context, *args, **kwargs):
56 draw_context.move_to(x, y)
57
58 def line_to(self, x, y, draw_context, *args, **kwargs):
59 draw_context.line_to(x, y)
60
61 # Basic shapes
62
63 def bezier_curve_to(
64 self, cp1x, cp1y, cp2x, cp2y, x, y, draw_context, *args, **kwargs
65 ):
66 draw_context.curve_to(cp1x, cp1y, cp2x, cp2y, x, y)
67
68 def quadratic_curve_to(self, cpx, cpy, x, y, draw_context, *args, **kwargs):
69 draw_context.curve_to(cpx, cpy, cpx, cpy, x, y)
70
71 def arc(
72 self,
73 x,
74 y,
75 radius,
76 startangle,
77 endangle,
78 anticlockwise,
79 draw_context,
80 *args,
81 **kwargs
82 ):
83 if anticlockwise:
84 draw_context.arc_negative(x, y, radius, startangle, endangle)
85 else:
86 draw_context.arc(x, y, radius, startangle, endangle)
87
88 def ellipse(
89 self,
90 x,
91 y,
92 radiusx,
93 radiusy,
94 rotation,
95 startangle,
96 endangle,
97 anticlockwise,
98 draw_context,
99 *args,
100 **kwargs
101 ):
102 draw_context.save()
103 draw_context.translate(x, y)
104 if radiusx >= radiusy:
105 draw_context.scale(1, radiusy / radiusx)
106 self.arc(0, 0, radiusx, startangle, endangle, anticlockwise, draw_context)
107 else:
108 draw_context.scale(radiusx / radiusy, 1)
109 self.arc(0, 0, radiusy, startangle, endangle, anticlockwise, draw_context)
110 draw_context.rotate(rotation)
111 draw_context.identity_matrix()
112 draw_context.restore()
113
114 def rect(self, x, y, width, height, draw_context, *args, **kwargs):
115 draw_context.rectangle(x, y, width, height)
116
117 # Drawing Paths
118
119 def apply_color(self, color, draw_context, *args, **kwargs):
120 if color is not None:
121 draw_context.set_source_rgba(*native_color(color))
122 else:
123 # set color to black
124 draw_context.set_source_rgba(0, 0, 0, 1.0)
125
126 def fill(self, color, fill_rule, preserve, draw_context, *args, **kwargs):
127 self.apply_color(color, draw_context)
128 if fill_rule is "evenodd":
129 draw_context.set_fill_rule(cairo.FILL_RULE_EVEN_ODD)
130 else:
131 draw_context.set_fill_rule(cairo.FILL_RULE_WINDING)
132 if preserve:
133 draw_context.fill_preserve()
134 else:
135 draw_context.fill()
136
137 def stroke(self, color, line_width, draw_context, *args, **kwargs):
138 self.apply_color(color, draw_context)
139 draw_context.set_line_width(line_width)
140 draw_context.stroke()
141
142 # Transformations
143
144 def rotate(self, radians, draw_context, *args, **kwargs):
145 draw_context.rotate(radians)
146
147 def scale(self, sx, sy, draw_context, *args, **kwargs):
148 draw_context.scale(sx, sy)
149
150 def translate(self, tx, ty, draw_context, *args, **kwargs):
151 draw_context.translate(tx, ty)
152
153 def reset_transform(self, draw_context, *args, **kwargs):
154 draw_context.identity_matrix()
155
156 # Text
157
158 def write_text(self, text, x, y, font, draw_context, *args, **kwargs):
159 # Set font family and size
160 if font:
161 write_font = font
162 elif self.native.font:
163 write_font = self.native.font
164 write_font.family = self.native.font.get_family()
165 write_font.size = self.native.font.get_size() / SCALE
166 draw_context.select_font_face(write_font.family)
167 draw_context.set_font_size(write_font.size)
168
169 # Support writing multiline text
170 for line in text.splitlines():
171 width, height = write_font.measure(line)
172 draw_context.move_to(x, y)
173 draw_context.text_path(line)
174 y += height
175
176 def measure_text(self, text, font, draw_context, *args, **kwargs):
177 # Set font family and size
178 if font:
179 draw_context.select_font_face(font.family)
180 draw_context.set_font_size(font.size)
181 elif self.native.font:
182 draw_context.select_font_face(self.native.font.get_family())
183 draw_context.set_font_size(self.native.font.get_size() / SCALE)
184
185 x_bearing, y_bearing, width, height, x_advance, y_advance = draw_context.text_extents(
186 text
187 )
188 return width, height
189
190 # Rehint
191
192 def rehint(self):
193 # print("REHINT", self, self.native.get_preferred_width(), self.native.get_preferred_height())
194 width = self.native.get_preferred_width()
195 height = self.native.get_preferred_height()
196
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/gtk/toga_gtk/widgets/canvas.py b/src/gtk/toga_gtk/widgets/canvas.py
--- a/src/gtk/toga_gtk/widgets/canvas.py
+++ b/src/gtk/toga_gtk/widgets/canvas.py
@@ -134,10 +134,13 @@
else:
draw_context.fill()
- def stroke(self, color, line_width, draw_context, *args, **kwargs):
+ def stroke(self, color, line_width, line_dash, draw_context, *args, **kwargs):
self.apply_color(color, draw_context)
draw_context.set_line_width(line_width)
+ if line_dash is not None:
+ draw_context.set_dash(line_dash)
draw_context.stroke()
+ draw_context.set_dash([])
# Transformations
|
{"golden_diff": "diff --git a/src/gtk/toga_gtk/widgets/canvas.py b/src/gtk/toga_gtk/widgets/canvas.py\n--- a/src/gtk/toga_gtk/widgets/canvas.py\n+++ b/src/gtk/toga_gtk/widgets/canvas.py\n@@ -134,10 +134,13 @@\n else:\n draw_context.fill()\n \n- def stroke(self, color, line_width, draw_context, *args, **kwargs):\n+ def stroke(self, color, line_width, line_dash, draw_context, *args, **kwargs):\n self.apply_color(color, draw_context)\n draw_context.set_line_width(line_width)\n+ if line_dash is not None:\n+ draw_context.set_dash(line_dash)\n draw_context.stroke()\n+ draw_context.set_dash([])\n \n # Transformations\n", "issue": "Add Canvas Dashed Line Support for Gtk+\nHey PyCon AU 2018 sprinters, and other new contributors - here is a great way to contribute for someone who runs Linux:\r\n\r\nRecently @bryall implemented dashed line support for Canvas in the Cocoa backend in #578. It would be great to implement support for dashed lines in Gtk+ as well.\n", "before_files": [{"content": "import gi\n\ngi.require_version(\"Gtk\", \"3.0\")\nfrom gi.repository import Gtk\n\ntry:\n import cairo\nexcept ImportError:\n cairo = None\ntry:\n gi.require_version(\"Pango\", \"1.0\")\n from gi.repository import Pango\n\n SCALE = Pango.SCALE\nexcept ImportError:\n SCALE = 1024\n\nfrom .base import Widget\nfrom ..color import native_color\n\n\nclass Canvas(Widget):\n def create(self):\n if cairo is None:\n raise RuntimeError(\n \"'import cairo' failed; may need to install python-gi-cairo.\"\n )\n\n self.native = Gtk.DrawingArea()\n self.native.interface = self.interface\n self.native.connect(\"draw\", self.gtk_draw_callback)\n\n def gtk_draw_callback(self, canvas, gtk_context):\n \"\"\"Creates a draw callback\n\n Gtk+ uses a drawing callback to draw on a DrawingArea. Assignment of the\n callback function creates a Gtk+ canvas and Gtk+ context automatically\n using the canvas and gtk_context function arguments. 
This method calls\n the draw method on the interface Canvas to draw the objects.\n\n \"\"\"\n self.interface._draw(self, draw_context=gtk_context)\n\n def redraw(self):\n pass\n\n # Basic paths\n\n def new_path(self, draw_context, *args, **kwargs):\n draw_context.new_path()\n\n def closed_path(self, x, y, draw_context, *args, **kwargs):\n draw_context.close_path()\n\n def move_to(self, x, y, draw_context, *args, **kwargs):\n draw_context.move_to(x, y)\n\n def line_to(self, x, y, draw_context, *args, **kwargs):\n draw_context.line_to(x, y)\n\n # Basic shapes\n\n def bezier_curve_to(\n self, cp1x, cp1y, cp2x, cp2y, x, y, draw_context, *args, **kwargs\n ):\n draw_context.curve_to(cp1x, cp1y, cp2x, cp2y, x, y)\n\n def quadratic_curve_to(self, cpx, cpy, x, y, draw_context, *args, **kwargs):\n draw_context.curve_to(cpx, cpy, cpx, cpy, x, y)\n\n def arc(\n self,\n x,\n y,\n radius,\n startangle,\n endangle,\n anticlockwise,\n draw_context,\n *args,\n **kwargs\n ):\n if anticlockwise:\n draw_context.arc_negative(x, y, radius, startangle, endangle)\n else:\n draw_context.arc(x, y, radius, startangle, endangle)\n\n def ellipse(\n self,\n x,\n y,\n radiusx,\n radiusy,\n rotation,\n startangle,\n endangle,\n anticlockwise,\n draw_context,\n *args,\n **kwargs\n ):\n draw_context.save()\n draw_context.translate(x, y)\n if radiusx >= radiusy:\n draw_context.scale(1, radiusy / radiusx)\n self.arc(0, 0, radiusx, startangle, endangle, anticlockwise, draw_context)\n else:\n draw_context.scale(radiusx / radiusy, 1)\n self.arc(0, 0, radiusy, startangle, endangle, anticlockwise, draw_context)\n draw_context.rotate(rotation)\n draw_context.identity_matrix()\n draw_context.restore()\n\n def rect(self, x, y, width, height, draw_context, *args, **kwargs):\n draw_context.rectangle(x, y, width, height)\n\n # Drawing Paths\n\n def apply_color(self, color, draw_context, *args, **kwargs):\n if color is not None:\n draw_context.set_source_rgba(*native_color(color))\n else:\n # set color to black\n draw_context.set_source_rgba(0, 0, 0, 1.0)\n\n def fill(self, color, fill_rule, preserve, draw_context, *args, **kwargs):\n self.apply_color(color, draw_context)\n if fill_rule is \"evenodd\":\n draw_context.set_fill_rule(cairo.FILL_RULE_EVEN_ODD)\n else:\n draw_context.set_fill_rule(cairo.FILL_RULE_WINDING)\n if preserve:\n draw_context.fill_preserve()\n else:\n draw_context.fill()\n\n def stroke(self, color, line_width, draw_context, *args, **kwargs):\n self.apply_color(color, draw_context)\n draw_context.set_line_width(line_width)\n draw_context.stroke()\n\n # Transformations\n\n def rotate(self, radians, draw_context, *args, **kwargs):\n draw_context.rotate(radians)\n\n def scale(self, sx, sy, draw_context, *args, **kwargs):\n draw_context.scale(sx, sy)\n\n def translate(self, tx, ty, draw_context, *args, **kwargs):\n draw_context.translate(tx, ty)\n\n def reset_transform(self, draw_context, *args, **kwargs):\n draw_context.identity_matrix()\n\n # Text\n\n def write_text(self, text, x, y, font, draw_context, *args, **kwargs):\n # Set font family and size\n if font:\n write_font = font\n elif self.native.font:\n write_font = self.native.font\n write_font.family = self.native.font.get_family()\n write_font.size = self.native.font.get_size() / SCALE\n draw_context.select_font_face(write_font.family)\n draw_context.set_font_size(write_font.size)\n\n # Support writing multiline text\n for line in text.splitlines():\n width, height = write_font.measure(line)\n draw_context.move_to(x, y)\n draw_context.text_path(line)\n y += 
height\n\n def measure_text(self, text, font, draw_context, *args, **kwargs):\n # Set font family and size\n if font:\n draw_context.select_font_face(font.family)\n draw_context.set_font_size(font.size)\n elif self.native.font:\n draw_context.select_font_face(self.native.font.get_family())\n draw_context.set_font_size(self.native.font.get_size() / SCALE)\n\n x_bearing, y_bearing, width, height, x_advance, y_advance = draw_context.text_extents(\n text\n )\n return width, height\n\n # Rehint\n\n def rehint(self):\n # print(\"REHINT\", self, self.native.get_preferred_width(), self.native.get_preferred_height())\n width = self.native.get_preferred_width()\n height = self.native.get_preferred_height()\n", "path": "src/gtk/toga_gtk/widgets/canvas.py"}], "after_files": [{"content": "import gi\n\ngi.require_version(\"Gtk\", \"3.0\")\nfrom gi.repository import Gtk\n\ntry:\n import cairo\nexcept ImportError:\n cairo = None\ntry:\n gi.require_version(\"Pango\", \"1.0\")\n from gi.repository import Pango\n\n SCALE = Pango.SCALE\nexcept ImportError:\n SCALE = 1024\n\nfrom .base import Widget\nfrom ..color import native_color\n\n\nclass Canvas(Widget):\n def create(self):\n if cairo is None:\n raise RuntimeError(\n \"'import cairo' failed; may need to install python-gi-cairo.\"\n )\n\n self.native = Gtk.DrawingArea()\n self.native.interface = self.interface\n self.native.connect(\"draw\", self.gtk_draw_callback)\n\n def gtk_draw_callback(self, canvas, gtk_context):\n \"\"\"Creates a draw callback\n\n Gtk+ uses a drawing callback to draw on a DrawingArea. Assignment of the\n callback function creates a Gtk+ canvas and Gtk+ context automatically\n using the canvas and gtk_context function arguments. This method calls\n the draw method on the interface Canvas to draw the objects.\n\n \"\"\"\n self.interface._draw(self, draw_context=gtk_context)\n\n def redraw(self):\n pass\n\n # Basic paths\n\n def new_path(self, draw_context, *args, **kwargs):\n draw_context.new_path()\n\n def closed_path(self, x, y, draw_context, *args, **kwargs):\n draw_context.close_path()\n\n def move_to(self, x, y, draw_context, *args, **kwargs):\n draw_context.move_to(x, y)\n\n def line_to(self, x, y, draw_context, *args, **kwargs):\n draw_context.line_to(x, y)\n\n # Basic shapes\n\n def bezier_curve_to(\n self, cp1x, cp1y, cp2x, cp2y, x, y, draw_context, *args, **kwargs\n ):\n draw_context.curve_to(cp1x, cp1y, cp2x, cp2y, x, y)\n\n def quadratic_curve_to(self, cpx, cpy, x, y, draw_context, *args, **kwargs):\n draw_context.curve_to(cpx, cpy, cpx, cpy, x, y)\n\n def arc(\n self,\n x,\n y,\n radius,\n startangle,\n endangle,\n anticlockwise,\n draw_context,\n *args,\n **kwargs\n ):\n if anticlockwise:\n draw_context.arc_negative(x, y, radius, startangle, endangle)\n else:\n draw_context.arc(x, y, radius, startangle, endangle)\n\n def ellipse(\n self,\n x,\n y,\n radiusx,\n radiusy,\n rotation,\n startangle,\n endangle,\n anticlockwise,\n draw_context,\n *args,\n **kwargs\n ):\n draw_context.save()\n draw_context.translate(x, y)\n if radiusx >= radiusy:\n draw_context.scale(1, radiusy / radiusx)\n self.arc(0, 0, radiusx, startangle, endangle, anticlockwise, draw_context)\n else:\n draw_context.scale(radiusx / radiusy, 1)\n self.arc(0, 0, radiusy, startangle, endangle, anticlockwise, draw_context)\n draw_context.rotate(rotation)\n draw_context.identity_matrix()\n draw_context.restore()\n\n def rect(self, x, y, width, height, draw_context, *args, **kwargs):\n draw_context.rectangle(x, y, width, height)\n\n # Drawing Paths\n\n def 
apply_color(self, color, draw_context, *args, **kwargs):\n if color is not None:\n draw_context.set_source_rgba(*native_color(color))\n else:\n # set color to black\n draw_context.set_source_rgba(0, 0, 0, 1.0)\n\n def fill(self, color, fill_rule, preserve, draw_context, *args, **kwargs):\n self.apply_color(color, draw_context)\n if fill_rule is \"evenodd\":\n draw_context.set_fill_rule(cairo.FILL_RULE_EVEN_ODD)\n else:\n draw_context.set_fill_rule(cairo.FILL_RULE_WINDING)\n if preserve:\n draw_context.fill_preserve()\n else:\n draw_context.fill()\n\n def stroke(self, color, line_width, line_dash, draw_context, *args, **kwargs):\n self.apply_color(color, draw_context)\n draw_context.set_line_width(line_width)\n if line_dash is not None:\n draw_context.set_dash(line_dash)\n draw_context.stroke()\n draw_context.set_dash([])\n\n # Transformations\n\n def rotate(self, radians, draw_context, *args, **kwargs):\n draw_context.rotate(radians)\n\n def scale(self, sx, sy, draw_context, *args, **kwargs):\n draw_context.scale(sx, sy)\n\n def translate(self, tx, ty, draw_context, *args, **kwargs):\n draw_context.translate(tx, ty)\n\n def reset_transform(self, draw_context, *args, **kwargs):\n draw_context.identity_matrix()\n\n # Text\n\n def write_text(self, text, x, y, font, draw_context, *args, **kwargs):\n # Set font family and size\n if font:\n write_font = font\n elif self.native.font:\n write_font = self.native.font\n write_font.family = self.native.font.get_family()\n write_font.size = self.native.font.get_size() / SCALE\n draw_context.select_font_face(write_font.family)\n draw_context.set_font_size(write_font.size)\n\n # Support writing multiline text\n for line in text.splitlines():\n width, height = write_font.measure(line)\n draw_context.move_to(x, y)\n draw_context.text_path(line)\n y += height\n\n def measure_text(self, text, font, draw_context, *args, **kwargs):\n # Set font family and size\n if font:\n draw_context.select_font_face(font.family)\n draw_context.set_font_size(font.size)\n elif self.native.font:\n draw_context.select_font_face(self.native.font.get_family())\n draw_context.set_font_size(self.native.font.get_size() / SCALE)\n\n x_bearing, y_bearing, width, height, x_advance, y_advance = draw_context.text_extents(\n text\n )\n return width, height\n\n # Rehint\n\n def rehint(self):\n # print(\"REHINT\", self, self.native.get_preferred_width(), self.native.get_preferred_height())\n width = self.native.get_preferred_width()\n height = self.native.get_preferred_height()\n", "path": "src/gtk/toga_gtk/widgets/canvas.py"}]}
| 2,255 | 178 |
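The Gtk+ patch in the record above follows the usual cairo idiom for dashed strokes: set a dash pattern before `stroke()`, then clear it so later strokes stay solid. The sketch below is a minimal, self-contained illustration of that pattern in plain pycairo (it assumes pycairo is installed; the dash lengths, surface size, and output file name are arbitrary), not toga's actual drawing pipeline:

```python
# Minimal pycairo sketch of the set_dash()/stroke()/set_dash([]) pattern from the
# patch above; surface size, dash lengths, and file name are arbitrary choices.
import cairo

surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 200, 100)
ctx = cairo.Context(surface)

ctx.set_source_rgba(0, 0, 0, 1.0)   # black, like apply_color's fallback
ctx.set_line_width(2.0)

line_dash = [10.0, 4.0]             # on/off lengths in user-space units
ctx.move_to(10, 50)
ctx.line_to(190, 50)
if line_dash is not None:
    ctx.set_dash(line_dash)
ctx.stroke()
ctx.set_dash([])                    # reset so any later stroke is solid again

surface.write_to_png("dashed_line.png")
```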
gh_patches_debug_21881
|
rasdani/github-patches
|
git_diff
|
google__TensorNetwork-263
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ncon_interface tests fail
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conftest.py`
Content:
```
1 # Copyright 2019 The TensorNetwork Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16 from __future__ import division
17 from __future__ import print_function
18 import pytest
19
20
21 @pytest.fixture(name="backend", params=["numpy", "tensorflow",
22 "jax", "pytorch"])
23 def backend_fixure(request):
24 return request.param
25
```
Path: `tensornetwork/__init__.py`
Content:
```
1 from __future__ import absolute_import
2 from tensornetwork.network import TensorNetwork
3 from tensornetwork.network_components import Node, Edge, CopyNode
4 from tensornetwork.ncon_interface import ncon, ncon_network
5 from tensornetwork.version import __version__
6 from tensornetwork.visualization.graphviz import to_graphviz
7 from tensornetwork import contractors
8 from tensornetwork import config
9 from typing import Text, Optional, Type
10 from tensornetwork.utils import load
11
12
13 def set_default_backend(backend: Text, dtype: Optional[Type] = None) -> None:
14 config.default_backend = backend
15 config.default_dype = dtype
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -16,9 +16,33 @@
from __future__ import division
from __future__ import print_function
import pytest
+import jax
+import tensornetwork
+import tensorflow as tf
@pytest.fixture(name="backend", params=["numpy", "tensorflow",
"jax", "pytorch"])
def backend_fixure(request):
return request.param
+
+
[email protected](autouse=True)
+def reset_default_backend():
+ tensornetwork.set_default_backend("numpy")
+ yield
+ tensornetwork.set_default_backend("numpy")
+
+
[email protected](autouse=True)
+def enable_jax_64():
+ jax.config.update("jax_enable_x64", True)
+ yield
+ jax.config.update("jax_enable_x64", True)
+
+
[email protected](autouse=True)
+def tf_enable_v2_behaviour():
+ tf.compat.v1.enable_v2_behavior()
+ yield
+ tf.compat.v1.enable_v2_behavior()
diff --git a/tensornetwork/__init__.py b/tensornetwork/__init__.py
--- a/tensornetwork/__init__.py
+++ b/tensornetwork/__init__.py
@@ -12,4 +12,4 @@
def set_default_backend(backend: Text, dtype: Optional[Type] = None) -> None:
config.default_backend = backend
- config.default_dype = dtype
+ config.default_dtype = dtype
|
{"golden_diff": "diff --git a/conftest.py b/conftest.py\n--- a/conftest.py\n+++ b/conftest.py\n@@ -16,9 +16,33 @@\n from __future__ import division\n from __future__ import print_function\n import pytest\n+import jax\n+import tensornetwork\n+import tensorflow as tf\n \n \n @pytest.fixture(name=\"backend\", params=[\"numpy\", \"tensorflow\",\n \"jax\", \"pytorch\"])\n def backend_fixure(request):\n return request.param\n+\n+\[email protected](autouse=True)\n+def reset_default_backend():\n+ tensornetwork.set_default_backend(\"numpy\")\n+ yield\n+ tensornetwork.set_default_backend(\"numpy\")\n+\n+\[email protected](autouse=True)\n+def enable_jax_64():\n+ jax.config.update(\"jax_enable_x64\", True)\n+ yield\n+ jax.config.update(\"jax_enable_x64\", True)\n+\n+\[email protected](autouse=True)\n+def tf_enable_v2_behaviour():\n+ tf.compat.v1.enable_v2_behavior()\n+ yield\n+ tf.compat.v1.enable_v2_behavior()\ndiff --git a/tensornetwork/__init__.py b/tensornetwork/__init__.py\n--- a/tensornetwork/__init__.py\n+++ b/tensornetwork/__init__.py\n@@ -12,4 +12,4 @@\n \n def set_default_backend(backend: Text, dtype: Optional[Type] = None) -> None:\n config.default_backend = backend\n- config.default_dype = dtype\n+ config.default_dtype = dtype\n", "issue": "ncon_interface tests fail \n\n", "before_files": [{"content": "# Copyright 2019 The TensorNetwork Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nimport pytest\n\n\[email protected](name=\"backend\", params=[\"numpy\", \"tensorflow\",\n \"jax\", \"pytorch\"])\ndef backend_fixure(request):\n return request.param\n", "path": "conftest.py"}, {"content": "from __future__ import absolute_import\nfrom tensornetwork.network import TensorNetwork\nfrom tensornetwork.network_components import Node, Edge, CopyNode\nfrom tensornetwork.ncon_interface import ncon, ncon_network\nfrom tensornetwork.version import __version__\nfrom tensornetwork.visualization.graphviz import to_graphviz\nfrom tensornetwork import contractors\nfrom tensornetwork import config\nfrom typing import Text, Optional, Type\nfrom tensornetwork.utils import load\n\n\ndef set_default_backend(backend: Text, dtype: Optional[Type] = None) -> None:\n config.default_backend = backend\n config.default_dype = dtype\n", "path": "tensornetwork/__init__.py"}], "after_files": [{"content": "# Copyright 2019 The TensorNetwork Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom 
__future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nimport pytest\nimport jax\nimport tensornetwork\nimport tensorflow as tf\n\n\[email protected](name=\"backend\", params=[\"numpy\", \"tensorflow\",\n \"jax\", \"pytorch\"])\ndef backend_fixure(request):\n return request.param\n\n\[email protected](autouse=True)\ndef reset_default_backend():\n tensornetwork.set_default_backend(\"numpy\")\n yield\n tensornetwork.set_default_backend(\"numpy\")\n\n\[email protected](autouse=True)\ndef enable_jax_64():\n jax.config.update(\"jax_enable_x64\", True)\n yield\n jax.config.update(\"jax_enable_x64\", True)\n\n\[email protected](autouse=True)\ndef tf_enable_v2_behaviour():\n tf.compat.v1.enable_v2_behavior()\n yield\n tf.compat.v1.enable_v2_behavior()\n", "path": "conftest.py"}, {"content": "from __future__ import absolute_import\nfrom tensornetwork.network import TensorNetwork\nfrom tensornetwork.network_components import Node, Edge, CopyNode\nfrom tensornetwork.ncon_interface import ncon, ncon_network\nfrom tensornetwork.version import __version__\nfrom tensornetwork.visualization.graphviz import to_graphviz\nfrom tensornetwork import contractors\nfrom tensornetwork import config\nfrom typing import Text, Optional, Type\nfrom tensornetwork.utils import load\n\n\ndef set_default_backend(backend: Text, dtype: Optional[Type] = None) -> None:\n config.default_backend = backend\n config.default_dtype = dtype\n", "path": "tensornetwork/__init__.py"}]}
| 677 | 355 |
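The conftest change in the record above leans on pytest's `autouse` fixtures to put global state back to a known value around every test. The sketch below shows the same pattern on a tiny stand-in module instead of the real tensornetwork/jax/tensorflow configuration, so it stays runnable with nothing but pytest installed:

```python
# Self-contained illustration of the autouse-fixture pattern added to conftest.py;
# `_default_backend` stands in for the library's real module-level default.
import pytest

_default_backend = "numpy"


def set_default_backend(name):
    global _default_backend
    _default_backend = name


@pytest.fixture(autouse=True)
def reset_default_backend():
    set_default_backend("numpy")   # known state before each test
    yield
    set_default_backend("numpy")   # and restore it afterwards


def test_backend_can_be_switched_inside_a_test():
    set_default_backend("jax")
    assert _default_backend == "jax"


def test_previous_test_did_not_leak_its_backend():
    assert _default_backend == "numpy"
```

Run with `pytest -q`; the second test only passes because the fixture resets the default between tests, which is the leakage the patch guards against.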
gh_patches_debug_5526
|
rasdani/github-patches
|
git_diff
|
readthedocs__readthedocs.org-11354
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Project version filter returns 500
It seems the `ProjectVersionListFilterSet.get_visibility()` is receiving the wrong number of arguments.
To reproduce, just hit https://beta.readthedocs.org/projects/bigo-live-hack/?privacy=&sort=&visibility=hidden
Sentry issue: https://read-the-docs.sentry.io/issues/4721614191/?project=148442&query=is%3Aunresolved&referrer=issue-stream&statsPeriod=7d&stream_index=7
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `readthedocs/projects/filters.py`
Content:
```
1 """Filters used in project dashboard."""
2
3 import structlog
4 from django.db.models import Count, F, Max
5 from django.utils.translation import gettext_lazy as _
6 from django_filters import ChoiceFilter, OrderingFilter
7
8 from readthedocs.core.filters import FilteredModelChoiceFilter, ModelFilterSet
9 from readthedocs.projects.models import Project
10
11 log = structlog.get_logger(__name__)
12
13
14 class VersionSortOrderingFilter(OrderingFilter):
15
16 """
17 Version list sort ordering django_filters filter.
18
19 Django-filter is highly opionated, and the default model filters do not work
20 well with empty/null values in the filter choices. In our case, empty/null
21 values are used for a default query. So, to make this work, we will use a
22 custom filter, instead of an automated model filter.
23
24 The empty/None value is used to provide both a default value to the filter
25 (when there is no ``sort`` query param), but also provide an option that is
26 manually selectable (``?sort=relevance``). We can't do this with the default
27 filter, because result would be params like ``?sort=None``.
28 """
29
30 SORT_BUILD_COUNT = "build_count"
31 SORT_BUILD_DATE = "build_date"
32 SORT_NAME = "name"
33
34 def __init__(self, *args, **kwargs):
35 # The default filtering operation will be `-recent`, so we omit it
36 # from choices to avoid showing it on the list twice.
37 kwargs.setdefault("empty_label", _("Recently built"))
38 kwargs.setdefault(
39 "choices",
40 (
41 ("-" + self.SORT_BUILD_DATE, _("Least recently built")),
42 ("-" + self.SORT_BUILD_COUNT, _("Frequently built")),
43 (self.SORT_BUILD_COUNT, _("Least frequently built")),
44 (self.SORT_NAME, _("Name")),
45 ("-" + self.SORT_NAME, _("Name (descending)")),
46 ),
47 )
48 super().__init__(*args, **kwargs)
49
50 def filter(self, qs, value):
51 # This is where we use the None value for this custom filter. This
52 # doesn't work with a standard model filter. Note: ``value`` is always
53 # an iterable, but can be empty.
54
55 if not value:
56 value = [self.SORT_BUILD_DATE]
57
58 annotations = {}
59 order_bys = []
60 for field_ordered in value:
61 field = field_ordered.lstrip("-")
62
63 if field == self.SORT_BUILD_DATE:
64 annotations[self.SORT_BUILD_DATE] = Max("builds__date")
65 elif field == self.SORT_BUILD_COUNT:
66 annotations[self.SORT_BUILD_COUNT] = Count("builds")
67 elif field == self.SORT_NAME:
68 # Alias field name here, as ``OrderingFilter`` was having trouble
69 # doing this with it's native field mapping
70 annotations[self.SORT_NAME] = F("verbose_name")
71
72 if field_ordered == self.SORT_BUILD_DATE:
73 order_bys.append(F(field).desc(nulls_last=True))
74 elif field_ordered == "-" + self.SORT_BUILD_DATE:
75 order_bys.append(F(field).asc(nulls_first=True))
76 else:
77 order_bys.append(field_ordered)
78
79 return qs.annotate(**annotations).order_by(*order_bys)
80
81
82 class ProjectSortOrderingFilter(OrderingFilter):
83
84 """
85 Project list sort ordering django_filters filter.
86
87 Django-filter is highly opionated, and the default model filters do not work
88 well with empty/null values in the filter choices. In our case, empty/null
89 values are used for a default query. So, to make this work, we will use a
90 custom filter, instead of an automated model filter.
91 """
92
93 SORT_NAME = "name"
94 SORT_MODIFIED_DATE = "modified_date"
95 SORT_BUILD_DATE = "build_date"
96 SORT_BUILD_COUNT = "build_count"
97
98 def __init__(self, *args, **kwargs):
99 # The default filtering operation will be `name`, so we omit it
100 # from choices to avoid showing it on the list twice.
101 kwargs.setdefault("empty_label", _("Recently built"))
102 kwargs.setdefault(
103 "choices",
104 (
105 ("-" + self.SORT_BUILD_DATE, _("Least recently built")),
106 ("-" + self.SORT_BUILD_COUNT, _("Frequently built")),
107 (self.SORT_BUILD_COUNT, _("Least frequently built")),
108 ("-" + self.SORT_MODIFIED_DATE, _("Recently modified")),
109 (self.SORT_MODIFIED_DATE, _("Least recently modified")),
110 (self.SORT_NAME, _("Name")),
111 ("-" + self.SORT_NAME, _("Name (descending)")),
112 ),
113 )
114 super().__init__(*args, **kwargs)
115
116 def filter(self, qs, value):
117 # This is where we use the None value from the custom filter
118 if not value:
119 value = [self.SORT_BUILD_DATE]
120
121 annotations = {}
122 order_bys = []
123 for field_ordered in value:
124 field = field_ordered.lstrip("-")
125
126 if field == self.SORT_BUILD_DATE:
127 annotations[self.SORT_BUILD_DATE] = Max("builds__date")
128 elif field == self.SORT_BUILD_COUNT:
129 annotations[self.SORT_BUILD_COUNT] = Count("builds")
130
131 if field_ordered == self.SORT_BUILD_DATE:
132 order_bys.append(F(field).desc(nulls_last=True))
133 elif field_ordered == "-" + self.SORT_BUILD_DATE:
134 order_bys.append(F(field).asc(nulls_first=True))
135 else:
136 order_bys.append(field_ordered)
137
138 return qs.annotate(**annotations).order_by(*order_bys)
139
140
141 class ProjectListFilterSet(ModelFilterSet):
142
143 """
144 Project list filter set for project list view.
145
146 This filter set enables list view sorting using a custom filter, and
147 provides search-as-you-type lookup filter as well.
148 """
149
150 slug = FilteredModelChoiceFilter(
151 label=_("Project"),
152 empty_label=_("All projects"),
153 to_field_name="slug",
154 queryset_method="get_project_queryset",
155 method="get_project",
156 label_attribute="name",
157 )
158
159 sort = ProjectSortOrderingFilter(
160 field_name="sort",
161 label=_("Sort by"),
162 )
163
164 def get_project_queryset(self):
165 return Project.objects.for_user(user=self.request.user)
166
167 def get_project(self, queryset, field_name, project):
168 return queryset.filter(slug=project.slug)
169
170
171 class ProjectVersionListFilterSet(ModelFilterSet):
172
173 """
174 Filter and sorting for project version listing page.
175
176 This is used from the project versions list view page to provide filtering
177 and sorting to the version list and search UI. It is normally instantiated
178 with an included queryset, which provides user project authorization.
179 """
180
181 VISIBILITY_HIDDEN = "hidden"
182 VISIBILITY_VISIBLE = "visible"
183
184 VISIBILITY_CHOICES = (
185 ("hidden", _("Hidden versions")),
186 ("visible", _("Visible versions")),
187 )
188
189 PRIVACY_CHOICES = (
190 ("public", _("Public versions")),
191 ("private", _("Private versions")),
192 )
193
194 # Attribute filter fields
195 slug = FilteredModelChoiceFilter(
196 label=_("Version"),
197 empty_label=_("All versions"),
198 to_field_name="slug",
199 queryset_method="get_version_queryset",
200 method="get_version",
201 label_attribute="verbose_name",
202 )
203
204 privacy = ChoiceFilter(
205 field_name="privacy_level",
206 label=_("Privacy"),
207 choices=PRIVACY_CHOICES,
208 empty_label=_("Any"),
209 )
210 # This field looks better as ``visibility=hidden`` than it does
211 # ``hidden=true``, otherwise we could use a BooleanFilter instance here
212 # instead
213 visibility = ChoiceFilter(
214 field_name="hidden",
215 label=_("Visibility"),
216 choices=VISIBILITY_CHOICES,
217 method="get_visibility",
218 empty_label=_("Any"),
219 )
220
221 sort = VersionSortOrderingFilter(
222 field_name="sort",
223 label=_("Sort by"),
224 )
225
226 def __init__(self, *args, project=None, **kwargs):
227 self.project = project
228 super().__init__(*args, **kwargs)
229
230 def get_version(self, queryset, field_name, version):
231 return queryset.filter(slug=version.slug)
232
233 def get_version_queryset(self):
234 # This query is passed in at instantiation
235 return self.queryset
236
237 def get_visibility(self, queryset, *, value):
238 if value == self.VISIBILITY_HIDDEN:
239 return queryset.filter(hidden=True)
240 if value == self.VISIBILITY_VISIBLE:
241 return queryset.filter(hidden=False)
242 return queryset
243
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/readthedocs/projects/filters.py b/readthedocs/projects/filters.py
--- a/readthedocs/projects/filters.py
+++ b/readthedocs/projects/filters.py
@@ -234,7 +234,7 @@
# This query is passed in at instantiation
return self.queryset
- def get_visibility(self, queryset, *, value):
+ def get_visibility(self, queryset, field_name, value):
if value == self.VISIBILITY_HIDDEN:
return queryset.filter(hidden=True)
if value == self.VISIBILITY_VISIBLE:
|
{"golden_diff": "diff --git a/readthedocs/projects/filters.py b/readthedocs/projects/filters.py\n--- a/readthedocs/projects/filters.py\n+++ b/readthedocs/projects/filters.py\n@@ -234,7 +234,7 @@\n # This query is passed in at instantiation\n return self.queryset\n \n- def get_visibility(self, queryset, *, value):\n+ def get_visibility(self, queryset, field_name, value):\n if value == self.VISIBILITY_HIDDEN:\n return queryset.filter(hidden=True)\n if value == self.VISIBILITY_VISIBLE:\n", "issue": "Project version filter returns 500\nIt seems the `ProjectVersionListFilterSet.get_visibility()` is receiving the wrong amount of arguments.\r\n\r\nTo reproduce, just hit https://beta.readthedocs.org/projects/bigo-live-hack/?privacy=&sort=&visibility=hidden\r\n\r\nSentry issue: https://read-the-docs.sentry.io/issues/4721614191/?project=148442&query=is%3Aunresolved&referrer=issue-stream&statsPeriod=7d&stream_index=7\n", "before_files": [{"content": "\"\"\"Filters used in project dashboard.\"\"\"\n\nimport structlog\nfrom django.db.models import Count, F, Max\nfrom django.utils.translation import gettext_lazy as _\nfrom django_filters import ChoiceFilter, OrderingFilter\n\nfrom readthedocs.core.filters import FilteredModelChoiceFilter, ModelFilterSet\nfrom readthedocs.projects.models import Project\n\nlog = structlog.get_logger(__name__)\n\n\nclass VersionSortOrderingFilter(OrderingFilter):\n\n \"\"\"\n Version list sort ordering django_filters filter.\n\n Django-filter is highly opionated, and the default model filters do not work\n well with empty/null values in the filter choices. In our case, empty/null\n values are used for a default query. So, to make this work, we will use a\n custom filter, instead of an automated model filter.\n\n The empty/None value is used to provide both a default value to the filter\n (when there is no ``sort`` query param), but also provide an option that is\n manually selectable (``?sort=relevance``). We can't do this with the default\n filter, because result would be params like ``?sort=None``.\n \"\"\"\n\n SORT_BUILD_COUNT = \"build_count\"\n SORT_BUILD_DATE = \"build_date\"\n SORT_NAME = \"name\"\n\n def __init__(self, *args, **kwargs):\n # The default filtering operation will be `-recent`, so we omit it\n # from choices to avoid showing it on the list twice.\n kwargs.setdefault(\"empty_label\", _(\"Recently built\"))\n kwargs.setdefault(\n \"choices\",\n (\n (\"-\" + self.SORT_BUILD_DATE, _(\"Least recently built\")),\n (\"-\" + self.SORT_BUILD_COUNT, _(\"Frequently built\")),\n (self.SORT_BUILD_COUNT, _(\"Least frequently built\")),\n (self.SORT_NAME, _(\"Name\")),\n (\"-\" + self.SORT_NAME, _(\"Name (descending)\")),\n ),\n )\n super().__init__(*args, **kwargs)\n\n def filter(self, qs, value):\n # This is where we use the None value for this custom filter. This\n # doesn't work with a standard model filter. 
Note: ``value`` is always\n # an iterable, but can be empty.\n\n if not value:\n value = [self.SORT_BUILD_DATE]\n\n annotations = {}\n order_bys = []\n for field_ordered in value:\n field = field_ordered.lstrip(\"-\")\n\n if field == self.SORT_BUILD_DATE:\n annotations[self.SORT_BUILD_DATE] = Max(\"builds__date\")\n elif field == self.SORT_BUILD_COUNT:\n annotations[self.SORT_BUILD_COUNT] = Count(\"builds\")\n elif field == self.SORT_NAME:\n # Alias field name here, as ``OrderingFilter`` was having trouble\n # doing this with it's native field mapping\n annotations[self.SORT_NAME] = F(\"verbose_name\")\n\n if field_ordered == self.SORT_BUILD_DATE:\n order_bys.append(F(field).desc(nulls_last=True))\n elif field_ordered == \"-\" + self.SORT_BUILD_DATE:\n order_bys.append(F(field).asc(nulls_first=True))\n else:\n order_bys.append(field_ordered)\n\n return qs.annotate(**annotations).order_by(*order_bys)\n\n\nclass ProjectSortOrderingFilter(OrderingFilter):\n\n \"\"\"\n Project list sort ordering django_filters filter.\n\n Django-filter is highly opionated, and the default model filters do not work\n well with empty/null values in the filter choices. In our case, empty/null\n values are used for a default query. So, to make this work, we will use a\n custom filter, instead of an automated model filter.\n \"\"\"\n\n SORT_NAME = \"name\"\n SORT_MODIFIED_DATE = \"modified_date\"\n SORT_BUILD_DATE = \"build_date\"\n SORT_BUILD_COUNT = \"build_count\"\n\n def __init__(self, *args, **kwargs):\n # The default filtering operation will be `name`, so we omit it\n # from choices to avoid showing it on the list twice.\n kwargs.setdefault(\"empty_label\", _(\"Recently built\"))\n kwargs.setdefault(\n \"choices\",\n (\n (\"-\" + self.SORT_BUILD_DATE, _(\"Least recently built\")),\n (\"-\" + self.SORT_BUILD_COUNT, _(\"Frequently built\")),\n (self.SORT_BUILD_COUNT, _(\"Least frequently built\")),\n (\"-\" + self.SORT_MODIFIED_DATE, _(\"Recently modified\")),\n (self.SORT_MODIFIED_DATE, _(\"Least recently modified\")),\n (self.SORT_NAME, _(\"Name\")),\n (\"-\" + self.SORT_NAME, _(\"Name (descending)\")),\n ),\n )\n super().__init__(*args, **kwargs)\n\n def filter(self, qs, value):\n # This is where we use the None value from the custom filter\n if not value:\n value = [self.SORT_BUILD_DATE]\n\n annotations = {}\n order_bys = []\n for field_ordered in value:\n field = field_ordered.lstrip(\"-\")\n\n if field == self.SORT_BUILD_DATE:\n annotations[self.SORT_BUILD_DATE] = Max(\"builds__date\")\n elif field == self.SORT_BUILD_COUNT:\n annotations[self.SORT_BUILD_COUNT] = Count(\"builds\")\n\n if field_ordered == self.SORT_BUILD_DATE:\n order_bys.append(F(field).desc(nulls_last=True))\n elif field_ordered == \"-\" + self.SORT_BUILD_DATE:\n order_bys.append(F(field).asc(nulls_first=True))\n else:\n order_bys.append(field_ordered)\n\n return qs.annotate(**annotations).order_by(*order_bys)\n\n\nclass ProjectListFilterSet(ModelFilterSet):\n\n \"\"\"\n Project list filter set for project list view.\n\n This filter set enables list view sorting using a custom filter, and\n provides search-as-you-type lookup filter as well.\n \"\"\"\n\n slug = FilteredModelChoiceFilter(\n label=_(\"Project\"),\n empty_label=_(\"All projects\"),\n to_field_name=\"slug\",\n queryset_method=\"get_project_queryset\",\n method=\"get_project\",\n label_attribute=\"name\",\n )\n\n sort = ProjectSortOrderingFilter(\n field_name=\"sort\",\n label=_(\"Sort by\"),\n )\n\n def get_project_queryset(self):\n return 
Project.objects.for_user(user=self.request.user)\n\n def get_project(self, queryset, field_name, project):\n return queryset.filter(slug=project.slug)\n\n\nclass ProjectVersionListFilterSet(ModelFilterSet):\n\n \"\"\"\n Filter and sorting for project version listing page.\n\n This is used from the project versions list view page to provide filtering\n and sorting to the version list and search UI. It is normally instantiated\n with an included queryset, which provides user project authorization.\n \"\"\"\n\n VISIBILITY_HIDDEN = \"hidden\"\n VISIBILITY_VISIBLE = \"visible\"\n\n VISIBILITY_CHOICES = (\n (\"hidden\", _(\"Hidden versions\")),\n (\"visible\", _(\"Visible versions\")),\n )\n\n PRIVACY_CHOICES = (\n (\"public\", _(\"Public versions\")),\n (\"private\", _(\"Private versions\")),\n )\n\n # Attribute filter fields\n slug = FilteredModelChoiceFilter(\n label=_(\"Version\"),\n empty_label=_(\"All versions\"),\n to_field_name=\"slug\",\n queryset_method=\"get_version_queryset\",\n method=\"get_version\",\n label_attribute=\"verbose_name\",\n )\n\n privacy = ChoiceFilter(\n field_name=\"privacy_level\",\n label=_(\"Privacy\"),\n choices=PRIVACY_CHOICES,\n empty_label=_(\"Any\"),\n )\n # This field looks better as ``visibility=hidden`` than it does\n # ``hidden=true``, otherwise we could use a BooleanFilter instance here\n # instead\n visibility = ChoiceFilter(\n field_name=\"hidden\",\n label=_(\"Visibility\"),\n choices=VISIBILITY_CHOICES,\n method=\"get_visibility\",\n empty_label=_(\"Any\"),\n )\n\n sort = VersionSortOrderingFilter(\n field_name=\"sort\",\n label=_(\"Sort by\"),\n )\n\n def __init__(self, *args, project=None, **kwargs):\n self.project = project\n super().__init__(*args, **kwargs)\n\n def get_version(self, queryset, field_name, version):\n return queryset.filter(slug=version.slug)\n\n def get_version_queryset(self):\n # This query is passed in at instantiation\n return self.queryset\n\n def get_visibility(self, queryset, *, value):\n if value == self.VISIBILITY_HIDDEN:\n return queryset.filter(hidden=True)\n if value == self.VISIBILITY_VISIBLE:\n return queryset.filter(hidden=False)\n return queryset\n", "path": "readthedocs/projects/filters.py"}], "after_files": [{"content": "\"\"\"Filters used in project dashboard.\"\"\"\n\nimport structlog\nfrom django.db.models import Count, F, Max\nfrom django.utils.translation import gettext_lazy as _\nfrom django_filters import ChoiceFilter, OrderingFilter\n\nfrom readthedocs.core.filters import FilteredModelChoiceFilter, ModelFilterSet\nfrom readthedocs.projects.models import Project\n\nlog = structlog.get_logger(__name__)\n\n\nclass VersionSortOrderingFilter(OrderingFilter):\n\n \"\"\"\n Version list sort ordering django_filters filter.\n\n Django-filter is highly opionated, and the default model filters do not work\n well with empty/null values in the filter choices. In our case, empty/null\n values are used for a default query. So, to make this work, we will use a\n custom filter, instead of an automated model filter.\n\n The empty/None value is used to provide both a default value to the filter\n (when there is no ``sort`` query param), but also provide an option that is\n manually selectable (``?sort=relevance``). 
We can't do this with the default\n filter, because result would be params like ``?sort=None``.\n \"\"\"\n\n SORT_BUILD_COUNT = \"build_count\"\n SORT_BUILD_DATE = \"build_date\"\n SORT_NAME = \"name\"\n\n def __init__(self, *args, **kwargs):\n # The default filtering operation will be `-recent`, so we omit it\n # from choices to avoid showing it on the list twice.\n kwargs.setdefault(\"empty_label\", _(\"Recently built\"))\n kwargs.setdefault(\n \"choices\",\n (\n (\"-\" + self.SORT_BUILD_DATE, _(\"Least recently built\")),\n (\"-\" + self.SORT_BUILD_COUNT, _(\"Frequently built\")),\n (self.SORT_BUILD_COUNT, _(\"Least frequently built\")),\n (self.SORT_NAME, _(\"Name\")),\n (\"-\" + self.SORT_NAME, _(\"Name (descending)\")),\n ),\n )\n super().__init__(*args, **kwargs)\n\n def filter(self, qs, value):\n # This is where we use the None value for this custom filter. This\n # doesn't work with a standard model filter. Note: ``value`` is always\n # an iterable, but can be empty.\n\n if not value:\n value = [self.SORT_BUILD_DATE]\n\n annotations = {}\n order_bys = []\n for field_ordered in value:\n field = field_ordered.lstrip(\"-\")\n\n if field == self.SORT_BUILD_DATE:\n annotations[self.SORT_BUILD_DATE] = Max(\"builds__date\")\n elif field == self.SORT_BUILD_COUNT:\n annotations[self.SORT_BUILD_COUNT] = Count(\"builds\")\n elif field == self.SORT_NAME:\n # Alias field name here, as ``OrderingFilter`` was having trouble\n # doing this with it's native field mapping\n annotations[self.SORT_NAME] = F(\"verbose_name\")\n\n if field_ordered == self.SORT_BUILD_DATE:\n order_bys.append(F(field).desc(nulls_last=True))\n elif field_ordered == \"-\" + self.SORT_BUILD_DATE:\n order_bys.append(F(field).asc(nulls_first=True))\n else:\n order_bys.append(field_ordered)\n\n return qs.annotate(**annotations).order_by(*order_bys)\n\n\nclass ProjectSortOrderingFilter(OrderingFilter):\n\n \"\"\"\n Project list sort ordering django_filters filter.\n\n Django-filter is highly opionated, and the default model filters do not work\n well with empty/null values in the filter choices. In our case, empty/null\n values are used for a default query. 
So, to make this work, we will use a\n custom filter, instead of an automated model filter.\n \"\"\"\n\n SORT_NAME = \"name\"\n SORT_MODIFIED_DATE = \"modified_date\"\n SORT_BUILD_DATE = \"build_date\"\n SORT_BUILD_COUNT = \"build_count\"\n\n def __init__(self, *args, **kwargs):\n # The default filtering operation will be `name`, so we omit it\n # from choices to avoid showing it on the list twice.\n kwargs.setdefault(\"empty_label\", _(\"Recently built\"))\n kwargs.setdefault(\n \"choices\",\n (\n (\"-\" + self.SORT_BUILD_DATE, _(\"Least recently built\")),\n (\"-\" + self.SORT_BUILD_COUNT, _(\"Frequently built\")),\n (self.SORT_BUILD_COUNT, _(\"Least frequently built\")),\n (\"-\" + self.SORT_MODIFIED_DATE, _(\"Recently modified\")),\n (self.SORT_MODIFIED_DATE, _(\"Least recently modified\")),\n (self.SORT_NAME, _(\"Name\")),\n (\"-\" + self.SORT_NAME, _(\"Name (descending)\")),\n ),\n )\n super().__init__(*args, **kwargs)\n\n def filter(self, qs, value):\n # This is where we use the None value from the custom filter\n if not value:\n value = [self.SORT_BUILD_DATE]\n\n annotations = {}\n order_bys = []\n for field_ordered in value:\n field = field_ordered.lstrip(\"-\")\n\n if field == self.SORT_BUILD_DATE:\n annotations[self.SORT_BUILD_DATE] = Max(\"builds__date\")\n elif field == self.SORT_BUILD_COUNT:\n annotations[self.SORT_BUILD_COUNT] = Count(\"builds\")\n\n if field_ordered == self.SORT_BUILD_DATE:\n order_bys.append(F(field).desc(nulls_last=True))\n elif field_ordered == \"-\" + self.SORT_BUILD_DATE:\n order_bys.append(F(field).asc(nulls_first=True))\n else:\n order_bys.append(field_ordered)\n\n return qs.annotate(**annotations).order_by(*order_bys)\n\n\nclass ProjectListFilterSet(ModelFilterSet):\n\n \"\"\"\n Project list filter set for project list view.\n\n This filter set enables list view sorting using a custom filter, and\n provides search-as-you-type lookup filter as well.\n \"\"\"\n\n slug = FilteredModelChoiceFilter(\n label=_(\"Project\"),\n empty_label=_(\"All projects\"),\n to_field_name=\"slug\",\n queryset_method=\"get_project_queryset\",\n method=\"get_project\",\n label_attribute=\"name\",\n )\n\n sort = ProjectSortOrderingFilter(\n field_name=\"sort\",\n label=_(\"Sort by\"),\n )\n\n def get_project_queryset(self):\n return Project.objects.for_user(user=self.request.user)\n\n def get_project(self, queryset, field_name, project):\n return queryset.filter(slug=project.slug)\n\n\nclass ProjectVersionListFilterSet(ModelFilterSet):\n\n \"\"\"\n Filter and sorting for project version listing page.\n\n This is used from the project versions list view page to provide filtering\n and sorting to the version list and search UI. 
It is normally instantiated\n with an included queryset, which provides user project authorization.\n \"\"\"\n\n VISIBILITY_HIDDEN = \"hidden\"\n VISIBILITY_VISIBLE = \"visible\"\n\n VISIBILITY_CHOICES = (\n (\"hidden\", _(\"Hidden versions\")),\n (\"visible\", _(\"Visible versions\")),\n )\n\n PRIVACY_CHOICES = (\n (\"public\", _(\"Public versions\")),\n (\"private\", _(\"Private versions\")),\n )\n\n # Attribute filter fields\n slug = FilteredModelChoiceFilter(\n label=_(\"Version\"),\n empty_label=_(\"All versions\"),\n to_field_name=\"slug\",\n queryset_method=\"get_version_queryset\",\n method=\"get_version\",\n label_attribute=\"verbose_name\",\n )\n\n privacy = ChoiceFilter(\n field_name=\"privacy_level\",\n label=_(\"Privacy\"),\n choices=PRIVACY_CHOICES,\n empty_label=_(\"Any\"),\n )\n # This field looks better as ``visibility=hidden`` than it does\n # ``hidden=true``, otherwise we could use a BooleanFilter instance here\n # instead\n visibility = ChoiceFilter(\n field_name=\"hidden\",\n label=_(\"Visibility\"),\n choices=VISIBILITY_CHOICES,\n method=\"get_visibility\",\n empty_label=_(\"Any\"),\n )\n\n sort = VersionSortOrderingFilter(\n field_name=\"sort\",\n label=_(\"Sort by\"),\n )\n\n def __init__(self, *args, project=None, **kwargs):\n self.project = project\n super().__init__(*args, **kwargs)\n\n def get_version(self, queryset, field_name, version):\n return queryset.filter(slug=version.slug)\n\n def get_version_queryset(self):\n # This query is passed in at instantiation\n return self.queryset\n\n def get_visibility(self, queryset, field_name, value):\n if value == self.VISIBILITY_HIDDEN:\n return queryset.filter(hidden=True)\n if value == self.VISIBILITY_VISIBLE:\n return queryset.filter(hidden=False)\n return queryset\n", "path": "readthedocs/projects/filters.py"}]}
| 2,825 | 123 |
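The one-line signature change in the record above matters because django-filter invokes a filter's `method` positionally, roughly as `method(queryset, field_name, value)`; a keyword-only `value` therefore raises a `TypeError`, which surfaced as the 500. The stand-in below imitates that dispatch with plain lists instead of Django querysets (names and data are invented) to show why the patched signature works:

```python
# Rough stand-in for django-filter's dispatch: the configured method is called
# positionally as method(queryset, field_name, value). Data below is invented.
def dispatch_filter(method, queryset, field_name, value):
    return method(queryset, field_name, value)


def get_visibility_broken(queryset, *, value):          # pre-patch signature
    return [v for v in queryset if v["hidden"] == (value == "hidden")]


def get_visibility_fixed(queryset, field_name, value):  # patched signature
    return [v for v in queryset if v["hidden"] == (value == "hidden")]


versions = [
    {"slug": "latest", "hidden": False},
    {"slug": "old-release", "hidden": True},
]

print(dispatch_filter(get_visibility_fixed, versions, "hidden", "hidden"))
# [{'slug': 'old-release', 'hidden': True}]

try:
    dispatch_filter(get_visibility_broken, versions, "hidden", "hidden")
except TypeError as exc:
    print("broken signature:", exc)   # the failure behind the original 500
```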
gh_patches_debug_14998
|
rasdani/github-patches
|
git_diff
|
ManageIQ__integration_tests-8406
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Catalog exists property throws "CandidateNotFound" Exception
When we call <catalog_obj>.exists, it throws a "CandidateNotFound" Exception, whereas in our test cases we expect a Boolean value "False"
>> https://github.com/ManageIQ/integration_tests/blob/master/cfme/services/catalogs/catalog.py#L119
Steps to Reproduce: <catalog_obj>.exists
Actual Result: Trace-back of "CandidateNotFound" Exception
Expected Result: False
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cfme/services/catalogs/catalog.py`
Content:
```
1 import attr
2
3 from widgetastic.utils import Parameter
4 from widgetastic.widget import Text
5 from widgetastic_manageiq import MultiBoxSelect
6 from widgetastic_patternfly import Button, Input
7 from navmazing import NavigateToAttribute, NavigateToSibling
8
9 from cfme.common import Taggable
10 from cfme.modeling.base import BaseCollection, BaseEntity
11 from cfme.utils.appliance.implementations.ui import navigator, CFMENavigateStep, navigate_to
12 from cfme.utils.pretty import Pretty
13 from cfme.utils.update import Updateable
14 from cfme.utils.wait import wait_for
15
16 from . import ServicesCatalogView
17
18
19 class CatalogsMultiBoxSelect(MultiBoxSelect):
20 move_into_button = Button(title=Parameter("@move_into"))
21 move_from_button = Button(title=Parameter("@move_from"))
22
23
24 class CatalogForm(ServicesCatalogView):
25 title = Text('#explorer_title_text')
26
27 name = Input(name='name')
28 description = Input(name="description")
29 assign_catalog_items = CatalogsMultiBoxSelect(
30 move_into="Move Selected buttons right",
31 move_from="Move Selected buttons left",
32 available_items="available_fields",
33 chosen_items="selected_fields"
34 )
35
36 save_button = Button('Save')
37 cancel_button = Button('Cancel')
38
39
40 class CatalogsView(ServicesCatalogView):
41 title = Text("#explorer_title_text")
42
43 @property
44 def is_displayed(self):
45 return (
46 self.in_explorer and
47 self.catalogs.is_opened and
48 self.catalogs.tree.currently_selected == ["All Catalogs"])
49
50
51 class DetailsCatalogView(ServicesCatalogView):
52 title = Text("#explorer_title_text")
53
54 @property
55 def is_displayed(self):
56 return (
57 self.in_explorer and self.catalogs.is_opened and
58 self.title.text == 'Catalog "{}"'.format(self.context["object"].name)
59 )
60
61
62 class AddCatalogView(CatalogForm):
63
64 add_button = Button("Add")
65
66 @property
67 def is_displayed(self):
68 return (
69 self.in_explorer and self.catalogs.is_opened and
70 self.title.text == 'Adding a new Catalog'
71 )
72
73
74 class EditCatalogView(CatalogForm):
75
76 save_button = Button('Save')
77 reset_button = Button('Reset')
78
79 @property
80 def is_displayed(self):
81 return (
82 self.in_explorer and self.catalogs.is_opened and
83 self.title.text == 'Editing Catalog "{}"'.format(self.context["object"].name)
84 )
85
86
87 @attr.s
88 class Catalog(BaseEntity, Updateable, Pretty, Taggable):
89
90 name = attr.ib()
91 description = attr.ib()
92 items = attr.ib(default=None)
93
94 def update(self, updates):
95 view = navigate_to(self, 'Edit')
96 changed = view.fill(updates)
97 if changed:
98 view.save_button.click()
99 else:
100 view.cancel_button.click()
101 view = self.create_view(DetailsCatalogView, override=updates, wait='10s')
102 view.flash.assert_no_error()
103 if changed:
104 view.flash.assert_message(
105 'Catalog "{}" was saved'.format(updates.get('name', self.name)))
106 else:
107 view.flash.assert_message(
108 'Edit of Catalog "{}" was cancelled by the user'.format(self.name))
109
110 def delete(self):
111 view = navigate_to(self, "Details")
112 view.configuration.item_select('Remove Catalog', handle_alert=True)
113 view = self.create_view(CatalogsView, wait='10s')
114 view.flash.assert_no_error()
115 view.flash.assert_success_message(
116 'Catalog "{}": Delete successful'.format(self.description or self.name))
117
118 @property
119 def exists(self):
120 try:
121 navigate_to(self, 'Details')
122 return True
123 except NameError:
124 return False
125
126
127 @attr.s
128 class CatalogCollection(BaseCollection):
129 """A collection for the :py:class:`cfme.services.catalogs.catalog.Catalog`"""
130 ENTITY = Catalog
131
132 def create(self, name, description, items=None):
133 """Create a catalog.
134
135 Args:
136 name: The name of the catalog
137 description: The description of the catalog
138 items: Items in the catalog
139 """
140 view = navigate_to(self, 'Add')
141 view.fill({
142 'name': name,
143 'description': description,
144 'assign_catalog_items': items
145 })
146 view.add_button.click()
147 catalog = self.instantiate(name=name, description=description, items=items)
148 view = self.create_view(CatalogsView)
149 assert view.is_displayed
150 view.flash.assert_no_error()
151 return catalog
152
153
154 @navigator.register(CatalogCollection)
155 class All(CFMENavigateStep):
156 VIEW = CatalogsView
157 prerequisite = NavigateToAttribute('appliance.server', 'LoggedIn')
158
159 def step(self):
160 self.prerequisite_view.navigation.select('Services', 'Catalogs')
161 self.view.catalogs.tree.click_path("All Catalogs")
162
163
164 @navigator.register(CatalogCollection)
165 class Add(CFMENavigateStep):
166 VIEW = AddCatalogView
167 prerequisite = NavigateToSibling('All')
168
169 def step(self):
170 self.prerequisite_view.configuration.item_select('Add a New Catalog')
171
172
173 @navigator.register(Catalog)
174 class Details(CFMENavigateStep):
175 VIEW = DetailsCatalogView
176 prerequisite = NavigateToAttribute('parent', 'All')
177
178 def step(self):
179 self.prerequisite_view.catalogs.tree.click_path("All Catalogs", self.obj.name)
180
181
182 @navigator.register(Catalog)
183 class Edit(CFMENavigateStep):
184 VIEW = EditCatalogView
185 prerequisite = NavigateToSibling('Details')
186
187 def step(self):
188 self.prerequisite_view.configuration.item_select('Edit this Item')
189
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cfme/services/catalogs/catalog.py b/cfme/services/catalogs/catalog.py
--- a/cfme/services/catalogs/catalog.py
+++ b/cfme/services/catalogs/catalog.py
@@ -3,7 +3,7 @@
from widgetastic.utils import Parameter
from widgetastic.widget import Text
from widgetastic_manageiq import MultiBoxSelect
-from widgetastic_patternfly import Button, Input
+from widgetastic_patternfly import Button, CandidateNotFound, Input
from navmazing import NavigateToAttribute, NavigateToSibling
from cfme.common import Taggable
@@ -120,7 +120,7 @@
try:
navigate_to(self, 'Details')
return True
- except NameError:
+ except (NameError, CandidateNotFound):
return False
|
{"golden_diff": "diff --git a/cfme/services/catalogs/catalog.py b/cfme/services/catalogs/catalog.py\n--- a/cfme/services/catalogs/catalog.py\n+++ b/cfme/services/catalogs/catalog.py\n@@ -3,7 +3,7 @@\n from widgetastic.utils import Parameter\n from widgetastic.widget import Text\n from widgetastic_manageiq import MultiBoxSelect\n-from widgetastic_patternfly import Button, Input\n+from widgetastic_patternfly import Button, CandidateNotFound, Input\n from navmazing import NavigateToAttribute, NavigateToSibling\n \n from cfme.common import Taggable\n@@ -120,7 +120,7 @@\n try:\n navigate_to(self, 'Details')\n return True\n- except NameError:\n+ except (NameError, CandidateNotFound):\n return False\n", "issue": "Catalog exists property throws \"CandidateNotFound\" Exception\nWhen we call <catalog_obj>.exists , it throws \"CandidateNotFound\" Exception, where as in our test cases we expect a Boolean value \"False\"\r\n>> https://github.com/ManageIQ/integration_tests/blob/master/cfme/services/catalogs/catalog.py#L119 \r\n\r\nSteps to Reproduce: <catalog_obj>.exists\r\nActual Result: Trace-back of \"CandidateNotFound\" Exception\r\nExpected Result: False \n", "before_files": [{"content": "import attr\n\nfrom widgetastic.utils import Parameter\nfrom widgetastic.widget import Text\nfrom widgetastic_manageiq import MultiBoxSelect\nfrom widgetastic_patternfly import Button, Input\nfrom navmazing import NavigateToAttribute, NavigateToSibling\n\nfrom cfme.common import Taggable\nfrom cfme.modeling.base import BaseCollection, BaseEntity\nfrom cfme.utils.appliance.implementations.ui import navigator, CFMENavigateStep, navigate_to\nfrom cfme.utils.pretty import Pretty\nfrom cfme.utils.update import Updateable\nfrom cfme.utils.wait import wait_for\n\nfrom . import ServicesCatalogView\n\n\nclass CatalogsMultiBoxSelect(MultiBoxSelect):\n move_into_button = Button(title=Parameter(\"@move_into\"))\n move_from_button = Button(title=Parameter(\"@move_from\"))\n\n\nclass CatalogForm(ServicesCatalogView):\n title = Text('#explorer_title_text')\n\n name = Input(name='name')\n description = Input(name=\"description\")\n assign_catalog_items = CatalogsMultiBoxSelect(\n move_into=\"Move Selected buttons right\",\n move_from=\"Move Selected buttons left\",\n available_items=\"available_fields\",\n chosen_items=\"selected_fields\"\n )\n\n save_button = Button('Save')\n cancel_button = Button('Cancel')\n\n\nclass CatalogsView(ServicesCatalogView):\n title = Text(\"#explorer_title_text\")\n\n @property\n def is_displayed(self):\n return (\n self.in_explorer and\n self.catalogs.is_opened and\n self.catalogs.tree.currently_selected == [\"All Catalogs\"])\n\n\nclass DetailsCatalogView(ServicesCatalogView):\n title = Text(\"#explorer_title_text\")\n\n @property\n def is_displayed(self):\n return (\n self.in_explorer and self.catalogs.is_opened and\n self.title.text == 'Catalog \"{}\"'.format(self.context[\"object\"].name)\n )\n\n\nclass AddCatalogView(CatalogForm):\n\n add_button = Button(\"Add\")\n\n @property\n def is_displayed(self):\n return (\n self.in_explorer and self.catalogs.is_opened and\n self.title.text == 'Adding a new Catalog'\n )\n\n\nclass EditCatalogView(CatalogForm):\n\n save_button = Button('Save')\n reset_button = Button('Reset')\n\n @property\n def is_displayed(self):\n return (\n self.in_explorer and self.catalogs.is_opened and\n self.title.text == 'Editing Catalog \"{}\"'.format(self.context[\"object\"].name)\n )\n\n\[email protected]\nclass Catalog(BaseEntity, Updateable, Pretty, Taggable):\n\n name = 
attr.ib()\n description = attr.ib()\n items = attr.ib(default=None)\n\n def update(self, updates):\n view = navigate_to(self, 'Edit')\n changed = view.fill(updates)\n if changed:\n view.save_button.click()\n else:\n view.cancel_button.click()\n view = self.create_view(DetailsCatalogView, override=updates, wait='10s')\n view.flash.assert_no_error()\n if changed:\n view.flash.assert_message(\n 'Catalog \"{}\" was saved'.format(updates.get('name', self.name)))\n else:\n view.flash.assert_message(\n 'Edit of Catalog \"{}\" was cancelled by the user'.format(self.name))\n\n def delete(self):\n view = navigate_to(self, \"Details\")\n view.configuration.item_select('Remove Catalog', handle_alert=True)\n view = self.create_view(CatalogsView, wait='10s')\n view.flash.assert_no_error()\n view.flash.assert_success_message(\n 'Catalog \"{}\": Delete successful'.format(self.description or self.name))\n\n @property\n def exists(self):\n try:\n navigate_to(self, 'Details')\n return True\n except NameError:\n return False\n\n\[email protected]\nclass CatalogCollection(BaseCollection):\n \"\"\"A collection for the :py:class:`cfme.services.catalogs.catalog.Catalog`\"\"\"\n ENTITY = Catalog\n\n def create(self, name, description, items=None):\n \"\"\"Create a catalog.\n\n Args:\n name: The name of the catalog\n description: The description of the catalog\n items: Items in the catalog\n \"\"\"\n view = navigate_to(self, 'Add')\n view.fill({\n 'name': name,\n 'description': description,\n 'assign_catalog_items': items\n })\n view.add_button.click()\n catalog = self.instantiate(name=name, description=description, items=items)\n view = self.create_view(CatalogsView)\n assert view.is_displayed\n view.flash.assert_no_error()\n return catalog\n\n\[email protected](CatalogCollection)\nclass All(CFMENavigateStep):\n VIEW = CatalogsView\n prerequisite = NavigateToAttribute('appliance.server', 'LoggedIn')\n\n def step(self):\n self.prerequisite_view.navigation.select('Services', 'Catalogs')\n self.view.catalogs.tree.click_path(\"All Catalogs\")\n\n\[email protected](CatalogCollection)\nclass Add(CFMENavigateStep):\n VIEW = AddCatalogView\n prerequisite = NavigateToSibling('All')\n\n def step(self):\n self.prerequisite_view.configuration.item_select('Add a New Catalog')\n\n\[email protected](Catalog)\nclass Details(CFMENavigateStep):\n VIEW = DetailsCatalogView\n prerequisite = NavigateToAttribute('parent', 'All')\n\n def step(self):\n self.prerequisite_view.catalogs.tree.click_path(\"All Catalogs\", self.obj.name)\n\n\[email protected](Catalog)\nclass Edit(CFMENavigateStep):\n VIEW = EditCatalogView\n prerequisite = NavigateToSibling('Details')\n\n def step(self):\n self.prerequisite_view.configuration.item_select('Edit this Item')\n", "path": "cfme/services/catalogs/catalog.py"}], "after_files": [{"content": "import attr\n\nfrom widgetastic.utils import Parameter\nfrom widgetastic.widget import Text\nfrom widgetastic_manageiq import MultiBoxSelect\nfrom widgetastic_patternfly import Button, CandidateNotFound, Input\nfrom navmazing import NavigateToAttribute, NavigateToSibling\n\nfrom cfme.common import Taggable\nfrom cfme.modeling.base import BaseCollection, BaseEntity\nfrom cfme.utils.appliance.implementations.ui import navigator, CFMENavigateStep, navigate_to\nfrom cfme.utils.pretty import Pretty\nfrom cfme.utils.update import Updateable\nfrom cfme.utils.wait import wait_for\n\nfrom . 
import ServicesCatalogView\n\n\nclass CatalogsMultiBoxSelect(MultiBoxSelect):\n move_into_button = Button(title=Parameter(\"@move_into\"))\n move_from_button = Button(title=Parameter(\"@move_from\"))\n\n\nclass CatalogForm(ServicesCatalogView):\n title = Text('#explorer_title_text')\n\n name = Input(name='name')\n description = Input(name=\"description\")\n assign_catalog_items = CatalogsMultiBoxSelect(\n move_into=\"Move Selected buttons right\",\n move_from=\"Move Selected buttons left\",\n available_items=\"available_fields\",\n chosen_items=\"selected_fields\"\n )\n\n save_button = Button('Save')\n cancel_button = Button('Cancel')\n\n\nclass CatalogsView(ServicesCatalogView):\n title = Text(\"#explorer_title_text\")\n\n @property\n def is_displayed(self):\n return (\n self.in_explorer and\n self.catalogs.is_opened and\n self.catalogs.tree.currently_selected == [\"All Catalogs\"])\n\n\nclass DetailsCatalogView(ServicesCatalogView):\n title = Text(\"#explorer_title_text\")\n\n @property\n def is_displayed(self):\n return (\n self.in_explorer and self.catalogs.is_opened and\n self.title.text == 'Catalog \"{}\"'.format(self.context[\"object\"].name)\n )\n\n\nclass AddCatalogView(CatalogForm):\n\n add_button = Button(\"Add\")\n\n @property\n def is_displayed(self):\n return (\n self.in_explorer and self.catalogs.is_opened and\n self.title.text == 'Adding a new Catalog'\n )\n\n\nclass EditCatalogView(CatalogForm):\n\n save_button = Button('Save')\n reset_button = Button('Reset')\n\n @property\n def is_displayed(self):\n return (\n self.in_explorer and self.catalogs.is_opened and\n self.title.text == 'Editing Catalog \"{}\"'.format(self.context[\"object\"].name)\n )\n\n\[email protected]\nclass Catalog(BaseEntity, Updateable, Pretty, Taggable):\n\n name = attr.ib()\n description = attr.ib()\n items = attr.ib(default=None)\n\n def update(self, updates):\n view = navigate_to(self, 'Edit')\n changed = view.fill(updates)\n if changed:\n view.save_button.click()\n else:\n view.cancel_button.click()\n view = self.create_view(DetailsCatalogView, override=updates, wait='10s')\n view.flash.assert_no_error()\n if changed:\n view.flash.assert_message(\n 'Catalog \"{}\" was saved'.format(updates.get('name', self.name)))\n else:\n view.flash.assert_message(\n 'Edit of Catalog \"{}\" was cancelled by the user'.format(self.name))\n\n def delete(self):\n view = navigate_to(self, \"Details\")\n view.configuration.item_select('Remove Catalog', handle_alert=True)\n view = self.create_view(CatalogsView, wait='10s')\n view.flash.assert_no_error()\n view.flash.assert_success_message(\n 'Catalog \"{}\": Delete successful'.format(self.description or self.name))\n\n @property\n def exists(self):\n try:\n navigate_to(self, 'Details')\n return True\n except (NameError, CandidateNotFound):\n return False\n\n\[email protected]\nclass CatalogCollection(BaseCollection):\n \"\"\"A collection for the :py:class:`cfme.services.catalogs.catalog.Catalog`\"\"\"\n ENTITY = Catalog\n\n def create(self, name, description, items=None):\n \"\"\"Create a catalog.\n\n Args:\n name: The name of the catalog\n description: The description of the catalog\n items: Items in the catalog\n \"\"\"\n view = navigate_to(self, 'Add')\n view.fill({\n 'name': name,\n 'description': description,\n 'assign_catalog_items': items\n })\n view.add_button.click()\n catalog = self.instantiate(name=name, description=description, items=items)\n view = self.create_view(CatalogsView)\n assert view.is_displayed\n view.flash.assert_no_error()\n return 
catalog\n\n\[email protected](CatalogCollection)\nclass All(CFMENavigateStep):\n VIEW = CatalogsView\n prerequisite = NavigateToAttribute('appliance.server', 'LoggedIn')\n\n def step(self):\n self.prerequisite_view.navigation.select('Services', 'Catalogs')\n self.view.catalogs.tree.click_path(\"All Catalogs\")\n\n\[email protected](CatalogCollection)\nclass Add(CFMENavigateStep):\n VIEW = AddCatalogView\n prerequisite = NavigateToSibling('All')\n\n def step(self):\n self.prerequisite_view.configuration.item_select('Add a New Catalog')\n\n\[email protected](Catalog)\nclass Details(CFMENavigateStep):\n VIEW = DetailsCatalogView\n prerequisite = NavigateToAttribute('parent', 'All')\n\n def step(self):\n self.prerequisite_view.catalogs.tree.click_path(\"All Catalogs\", self.obj.name)\n\n\[email protected](Catalog)\nclass Edit(CFMENavigateStep):\n VIEW = EditCatalogView\n prerequisite = NavigateToSibling('Details')\n\n def step(self):\n self.prerequisite_view.configuration.item_select('Edit this Item')\n", "path": "cfme/services/catalogs/catalog.py"}]}
| 2,047 | 172 |
gh_patches_debug_33591 | rasdani/github-patches | git_diff | litestar-org__litestar-2777 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
refactor: improve typing in CSRF middleware
https://github.com/litestar-org/litestar/blob/7414f7fd7d4782223502895e6a23b77ed635cd2d/litestar/middleware/csrf.py#L87-L127
At line 105, we use `dict.get()` to set the value of `existing_csrf_token` so it can be `None` if the header doesn't exist.
At line 123, that block is guarded by `self._csrf_tokens_match()` which will return `False` if it is `None`, so actually `existing_csrf_token` cannot be falsy in this block, its just that its value is not narrowed appropriately.
Fixing this would probably be as simple as using `request.cookies.get(..., "")` and `request.headers.get(..., "")` on lines 104 and 105 respectively, and re-type downstream methods to only accept `str` instead of `str | None`.
_Originally posted by @peterschutt in https://github.com/litestar-org/litestar/pull/2751#discussion_r1405515256_
<!-- POLAR PLEDGE BADGE START -->
---
> [!NOTE]
> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and
> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.
>
> Check out all issues funded or available for funding [on our Polar.sh Litestar dashboard](https://polar.sh/litestar-org)
> * If you would like to see an issue prioritized, make a pledge towards it!
> * We receive the pledge once the issue is completed & verified
> * This, along with engagement in the community, helps us know which features are a priority to our users.
<a href="https://polar.sh/litestar-org/litestar/issues/2770">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/litestar-org/litestar/issues/2770/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/litestar-org/litestar/issues/2770/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/middleware/csrf.py`
Content:
```
1 from __future__ import annotations
2
3 import hashlib
4 import hmac
5 import secrets
6 from secrets import compare_digest
7 from typing import TYPE_CHECKING, Any
8
9 from litestar.datastructures import MutableScopeHeaders
10 from litestar.datastructures.cookie import Cookie
11 from litestar.enums import RequestEncodingType, ScopeType
12 from litestar.exceptions import PermissionDeniedException
13 from litestar.middleware._utils import (
14 build_exclude_path_pattern,
15 should_bypass_middleware,
16 )
17 from litestar.middleware.base import MiddlewareProtocol
18 from litestar.utils.scope.state import ScopeState
19
20 if TYPE_CHECKING:
21 from litestar.config.csrf import CSRFConfig
22 from litestar.connection import Request
23 from litestar.types import (
24 ASGIApp,
25 HTTPSendMessage,
26 Message,
27 Receive,
28 Scope,
29 Scopes,
30 Send,
31 )
32
33 __all__ = ("CSRFMiddleware",)
34
35
36 CSRF_SECRET_BYTES = 32
37 CSRF_SECRET_LENGTH = CSRF_SECRET_BYTES * 2
38
39
40 def generate_csrf_hash(token: str, secret: str) -> str:
41 """Generate an HMAC that signs the CSRF token.
42
43 Args:
44 token: A hashed token.
45 secret: A secret value.
46
47 Returns:
48 A CSRF hash.
49 """
50 return hmac.new(secret.encode(), token.encode(), hashlib.sha256).hexdigest()
51
52
53 def generate_csrf_token(secret: str) -> str:
54 """Generate a CSRF token that includes a randomly generated string signed by an HMAC.
55
56 Args:
57 secret: A secret string.
58
59 Returns:
60 A unique CSRF token.
61 """
62 token = secrets.token_hex(CSRF_SECRET_BYTES)
63 token_hash = generate_csrf_hash(token=token, secret=secret)
64 return token + token_hash
65
66
67 class CSRFMiddleware(MiddlewareProtocol):
68 """CSRF Middleware class.
69
70 This Middleware protects against attacks by setting a CSRF cookie with a token and verifying it in request headers.
71 """
72
73 scopes: Scopes = {ScopeType.HTTP}
74
75 def __init__(self, app: ASGIApp, config: CSRFConfig) -> None:
76 """Initialize ``CSRFMiddleware``.
77
78 Args:
79 app: The ``next`` ASGI app to call.
80 config: The CSRFConfig instance.
81 """
82 self.app = app
83 self.config = config
84 self.exclude = build_exclude_path_pattern(exclude=config.exclude)
85
86 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
87 """ASGI callable.
88
89 Args:
90 scope: The ASGI connection scope.
91 receive: The ASGI receive function.
92 send: The ASGI send function.
93
94 Returns:
95 None
96 """
97 if scope["type"] != ScopeType.HTTP:
98 await self.app(scope, receive, send)
99 return
100
101 request: Request[Any, Any, Any] = scope["app"].request_class(scope=scope, receive=receive)
102 content_type, _ = request.content_type
103 csrf_cookie = request.cookies.get(self.config.cookie_name)
104 existing_csrf_token = request.headers.get(self.config.header_name)
105
106 if not existing_csrf_token and content_type in {
107 RequestEncodingType.URL_ENCODED,
108 RequestEncodingType.MULTI_PART,
109 }:
110 form = await request.form()
111 existing_csrf_token = form.get("_csrf_token", None)
112
113 connection_state = ScopeState.from_scope(scope)
114 if request.method in self.config.safe_methods or should_bypass_middleware(
115 scope=scope,
116 scopes=self.scopes,
117 exclude_opt_key=self.config.exclude_from_csrf_key,
118 exclude_path_pattern=self.exclude,
119 ):
120 token = connection_state.csrf_token = csrf_cookie or generate_csrf_token(secret=self.config.secret)
121 await self.app(scope, receive, self.create_send_wrapper(send=send, csrf_cookie=csrf_cookie, token=token))
122 elif self._csrf_tokens_match(existing_csrf_token, csrf_cookie):
123 # we haven't properly narrowed the type of `existing_csrf_token` to be non-None, but we know it is
124 connection_state.csrf_token = existing_csrf_token # type: ignore[assignment]
125 await self.app(scope, receive, send)
126 else:
127 raise PermissionDeniedException("CSRF token verification failed")
128
129 def create_send_wrapper(self, send: Send, token: str, csrf_cookie: str | None) -> Send:
130 """Wrap ``send`` to handle CSRF validation.
131
132 Args:
133 token: The CSRF token.
134 send: The ASGI send function.
135 csrf_cookie: CSRF cookie.
136
137 Returns:
138 An ASGI send function.
139 """
140
141 async def send_wrapper(message: Message) -> None:
142 """Send function that wraps the original send to inject a cookie.
143
144 Args:
145 message: An ASGI ``Message``
146
147 Returns:
148 None
149 """
150 if csrf_cookie is None and message["type"] == "http.response.start":
151 message.setdefault("headers", [])
152 self._set_cookie_if_needed(message=message, token=token)
153 await send(message)
154
155 return send_wrapper
156
157 def _set_cookie_if_needed(self, message: HTTPSendMessage, token: str) -> None:
158 headers = MutableScopeHeaders.from_message(message)
159 cookie = Cookie(
160 key=self.config.cookie_name,
161 value=token,
162 path=self.config.cookie_path,
163 secure=self.config.cookie_secure,
164 httponly=self.config.cookie_httponly,
165 samesite=self.config.cookie_samesite,
166 domain=self.config.cookie_domain,
167 )
168 headers.add("set-cookie", cookie.to_header(header=""))
169
170 def _decode_csrf_token(self, token: str) -> str | None:
171 """Decode a CSRF token and validate its HMAC."""
172 if len(token) < CSRF_SECRET_LENGTH + 1:
173 return None
174
175 token_secret = token[:CSRF_SECRET_LENGTH]
176 existing_hash = token[CSRF_SECRET_LENGTH:]
177 expected_hash = generate_csrf_hash(token=token_secret, secret=self.config.secret)
178 return token_secret if compare_digest(existing_hash, expected_hash) else None
179
180 def _csrf_tokens_match(self, request_csrf_token: str | None, cookie_csrf_token: str | None) -> bool:
181 """Take the CSRF tokens from the request and the cookie and verify both are valid and identical."""
182 if not (request_csrf_token and cookie_csrf_token):
183 return False
184
185 decoded_request_token = self._decode_csrf_token(request_csrf_token)
186 decoded_cookie_token = self._decode_csrf_token(cookie_csrf_token)
187 if decoded_request_token is None or decoded_cookie_token is None:
188 return False
189
190 return compare_digest(decoded_request_token, decoded_cookie_token)
191
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/litestar/middleware/csrf.py b/litestar/middleware/csrf.py
--- a/litestar/middleware/csrf.py
+++ b/litestar/middleware/csrf.py
@@ -119,9 +119,12 @@
):
token = connection_state.csrf_token = csrf_cookie or generate_csrf_token(secret=self.config.secret)
await self.app(scope, receive, self.create_send_wrapper(send=send, csrf_cookie=csrf_cookie, token=token))
- elif self._csrf_tokens_match(existing_csrf_token, csrf_cookie):
- # we haven't properly narrowed the type of `existing_csrf_token` to be non-None, but we know it is
- connection_state.csrf_token = existing_csrf_token # type: ignore[assignment]
+ elif (
+ existing_csrf_token is not None
+ and csrf_cookie is not None
+ and self._csrf_tokens_match(existing_csrf_token, csrf_cookie)
+ ):
+ connection_state.csrf_token = existing_csrf_token
await self.app(scope, receive, send)
else:
raise PermissionDeniedException("CSRF token verification failed")
@@ -177,11 +180,8 @@
expected_hash = generate_csrf_hash(token=token_secret, secret=self.config.secret)
return token_secret if compare_digest(existing_hash, expected_hash) else None
- def _csrf_tokens_match(self, request_csrf_token: str | None, cookie_csrf_token: str | None) -> bool:
+ def _csrf_tokens_match(self, request_csrf_token: str, cookie_csrf_token: str) -> bool:
"""Take the CSRF tokens from the request and the cookie and verify both are valid and identical."""
- if not (request_csrf_token and cookie_csrf_token):
- return False
-
decoded_request_token = self._decode_csrf_token(request_csrf_token)
decoded_cookie_token = self._decode_csrf_token(cookie_csrf_token)
if decoded_request_token is None or decoded_cookie_token is None:
|
{"golden_diff": "diff --git a/litestar/middleware/csrf.py b/litestar/middleware/csrf.py\n--- a/litestar/middleware/csrf.py\n+++ b/litestar/middleware/csrf.py\n@@ -119,9 +119,12 @@\n ):\n token = connection_state.csrf_token = csrf_cookie or generate_csrf_token(secret=self.config.secret)\n await self.app(scope, receive, self.create_send_wrapper(send=send, csrf_cookie=csrf_cookie, token=token))\n- elif self._csrf_tokens_match(existing_csrf_token, csrf_cookie):\n- # we haven't properly narrowed the type of `existing_csrf_token` to be non-None, but we know it is\n- connection_state.csrf_token = existing_csrf_token # type: ignore[assignment]\n+ elif (\n+ existing_csrf_token is not None\n+ and csrf_cookie is not None\n+ and self._csrf_tokens_match(existing_csrf_token, csrf_cookie)\n+ ):\n+ connection_state.csrf_token = existing_csrf_token\n await self.app(scope, receive, send)\n else:\n raise PermissionDeniedException(\"CSRF token verification failed\")\n@@ -177,11 +180,8 @@\n expected_hash = generate_csrf_hash(token=token_secret, secret=self.config.secret)\n return token_secret if compare_digest(existing_hash, expected_hash) else None\n \n- def _csrf_tokens_match(self, request_csrf_token: str | None, cookie_csrf_token: str | None) -> bool:\n+ def _csrf_tokens_match(self, request_csrf_token: str, cookie_csrf_token: str) -> bool:\n \"\"\"Take the CSRF tokens from the request and the cookie and verify both are valid and identical.\"\"\"\n- if not (request_csrf_token and cookie_csrf_token):\n- return False\n-\n decoded_request_token = self._decode_csrf_token(request_csrf_token)\n decoded_cookie_token = self._decode_csrf_token(cookie_csrf_token)\n if decoded_request_token is None or decoded_cookie_token is None:\n", "issue": "refactor: improve typing in CSRF middleware\nhttps://github.com/litestar-org/litestar/blob/7414f7fd7d4782223502895e6a23b77ed635cd2d/litestar/middleware/csrf.py#L87-L127\r\n\r\nAt line 105, we use `dict.get()` to set the value of `existing_csrf_token` so it can be `None` if the header doesn't exist.\r\n\r\nAt line 123, that block is guarded by `self._csrf_tokens_match()` which will return `False` if it is `None`, so actually `existing_csrf_token` cannot be falsy in this block, its just that its value is not narrowed appropriately.\r\n\r\nFixing this would probably be as simple as using `request.cookies.get(..., \"\")` and `request.headers.get(..., \"\")` on lines 104 and 105 respectively, and re-type downstream methods to only accept `str` instead of `str | None`.\r\n\r\n_Originally posted by @peterschutt in https://github.com/litestar-org/litestar/pull/2751#discussion_r1405515256_\r\n \n\n<!-- POLAR PLEDGE BADGE START -->\n---\n> [!NOTE] \n> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and \n> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.\n>\n> Check out all issues funded or available for funding [on our Polar.sh Litestar dashboard](https://polar.sh/litestar-org)\n> * If you would like to see an issue prioritized, make a pledge towards it!\n> * We receive the pledge once the issue is completed & verified\n> * This, along with engagement in the community, helps us know which features are a priority to our users.\n\n<a href=\"https://polar.sh/litestar-org/litestar/issues/2770\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/2770/pledge.svg?darkmode=1\">\n 
<img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/2770/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport hashlib\nimport hmac\nimport secrets\nfrom secrets import compare_digest\nfrom typing import TYPE_CHECKING, Any\n\nfrom litestar.datastructures import MutableScopeHeaders\nfrom litestar.datastructures.cookie import Cookie\nfrom litestar.enums import RequestEncodingType, ScopeType\nfrom litestar.exceptions import PermissionDeniedException\nfrom litestar.middleware._utils import (\n build_exclude_path_pattern,\n should_bypass_middleware,\n)\nfrom litestar.middleware.base import MiddlewareProtocol\nfrom litestar.utils.scope.state import ScopeState\n\nif TYPE_CHECKING:\n from litestar.config.csrf import CSRFConfig\n from litestar.connection import Request\n from litestar.types import (\n ASGIApp,\n HTTPSendMessage,\n Message,\n Receive,\n Scope,\n Scopes,\n Send,\n )\n\n__all__ = (\"CSRFMiddleware\",)\n\n\nCSRF_SECRET_BYTES = 32\nCSRF_SECRET_LENGTH = CSRF_SECRET_BYTES * 2\n\n\ndef generate_csrf_hash(token: str, secret: str) -> str:\n \"\"\"Generate an HMAC that signs the CSRF token.\n\n Args:\n token: A hashed token.\n secret: A secret value.\n\n Returns:\n A CSRF hash.\n \"\"\"\n return hmac.new(secret.encode(), token.encode(), hashlib.sha256).hexdigest()\n\n\ndef generate_csrf_token(secret: str) -> str:\n \"\"\"Generate a CSRF token that includes a randomly generated string signed by an HMAC.\n\n Args:\n secret: A secret string.\n\n Returns:\n A unique CSRF token.\n \"\"\"\n token = secrets.token_hex(CSRF_SECRET_BYTES)\n token_hash = generate_csrf_hash(token=token, secret=secret)\n return token + token_hash\n\n\nclass CSRFMiddleware(MiddlewareProtocol):\n \"\"\"CSRF Middleware class.\n\n This Middleware protects against attacks by setting a CSRF cookie with a token and verifying it in request headers.\n \"\"\"\n\n scopes: Scopes = {ScopeType.HTTP}\n\n def __init__(self, app: ASGIApp, config: CSRFConfig) -> None:\n \"\"\"Initialize ``CSRFMiddleware``.\n\n Args:\n app: The ``next`` ASGI app to call.\n config: The CSRFConfig instance.\n \"\"\"\n self.app = app\n self.config = config\n self.exclude = build_exclude_path_pattern(exclude=config.exclude)\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n \"\"\"ASGI callable.\n\n Args:\n scope: The ASGI connection scope.\n receive: The ASGI receive function.\n send: The ASGI send function.\n\n Returns:\n None\n \"\"\"\n if scope[\"type\"] != ScopeType.HTTP:\n await self.app(scope, receive, send)\n return\n\n request: Request[Any, Any, Any] = scope[\"app\"].request_class(scope=scope, receive=receive)\n content_type, _ = request.content_type\n csrf_cookie = request.cookies.get(self.config.cookie_name)\n existing_csrf_token = request.headers.get(self.config.header_name)\n\n if not existing_csrf_token and content_type in {\n RequestEncodingType.URL_ENCODED,\n RequestEncodingType.MULTI_PART,\n }:\n form = await request.form()\n existing_csrf_token = form.get(\"_csrf_token\", None)\n\n connection_state = ScopeState.from_scope(scope)\n if request.method in self.config.safe_methods or should_bypass_middleware(\n scope=scope,\n scopes=self.scopes,\n exclude_opt_key=self.config.exclude_from_csrf_key,\n exclude_path_pattern=self.exclude,\n ):\n token = connection_state.csrf_token = csrf_cookie or generate_csrf_token(secret=self.config.secret)\n await self.app(scope, receive, 
self.create_send_wrapper(send=send, csrf_cookie=csrf_cookie, token=token))\n elif self._csrf_tokens_match(existing_csrf_token, csrf_cookie):\n # we haven't properly narrowed the type of `existing_csrf_token` to be non-None, but we know it is\n connection_state.csrf_token = existing_csrf_token # type: ignore[assignment]\n await self.app(scope, receive, send)\n else:\n raise PermissionDeniedException(\"CSRF token verification failed\")\n\n def create_send_wrapper(self, send: Send, token: str, csrf_cookie: str | None) -> Send:\n \"\"\"Wrap ``send`` to handle CSRF validation.\n\n Args:\n token: The CSRF token.\n send: The ASGI send function.\n csrf_cookie: CSRF cookie.\n\n Returns:\n An ASGI send function.\n \"\"\"\n\n async def send_wrapper(message: Message) -> None:\n \"\"\"Send function that wraps the original send to inject a cookie.\n\n Args:\n message: An ASGI ``Message``\n\n Returns:\n None\n \"\"\"\n if csrf_cookie is None and message[\"type\"] == \"http.response.start\":\n message.setdefault(\"headers\", [])\n self._set_cookie_if_needed(message=message, token=token)\n await send(message)\n\n return send_wrapper\n\n def _set_cookie_if_needed(self, message: HTTPSendMessage, token: str) -> None:\n headers = MutableScopeHeaders.from_message(message)\n cookie = Cookie(\n key=self.config.cookie_name,\n value=token,\n path=self.config.cookie_path,\n secure=self.config.cookie_secure,\n httponly=self.config.cookie_httponly,\n samesite=self.config.cookie_samesite,\n domain=self.config.cookie_domain,\n )\n headers.add(\"set-cookie\", cookie.to_header(header=\"\"))\n\n def _decode_csrf_token(self, token: str) -> str | None:\n \"\"\"Decode a CSRF token and validate its HMAC.\"\"\"\n if len(token) < CSRF_SECRET_LENGTH + 1:\n return None\n\n token_secret = token[:CSRF_SECRET_LENGTH]\n existing_hash = token[CSRF_SECRET_LENGTH:]\n expected_hash = generate_csrf_hash(token=token_secret, secret=self.config.secret)\n return token_secret if compare_digest(existing_hash, expected_hash) else None\n\n def _csrf_tokens_match(self, request_csrf_token: str | None, cookie_csrf_token: str | None) -> bool:\n \"\"\"Take the CSRF tokens from the request and the cookie and verify both are valid and identical.\"\"\"\n if not (request_csrf_token and cookie_csrf_token):\n return False\n\n decoded_request_token = self._decode_csrf_token(request_csrf_token)\n decoded_cookie_token = self._decode_csrf_token(cookie_csrf_token)\n if decoded_request_token is None or decoded_cookie_token is None:\n return False\n\n return compare_digest(decoded_request_token, decoded_cookie_token)\n", "path": "litestar/middleware/csrf.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport hashlib\nimport hmac\nimport secrets\nfrom secrets import compare_digest\nfrom typing import TYPE_CHECKING, Any\n\nfrom litestar.datastructures import MutableScopeHeaders\nfrom litestar.datastructures.cookie import Cookie\nfrom litestar.enums import RequestEncodingType, ScopeType\nfrom litestar.exceptions import PermissionDeniedException\nfrom litestar.middleware._utils import (\n build_exclude_path_pattern,\n should_bypass_middleware,\n)\nfrom litestar.middleware.base import MiddlewareProtocol\nfrom litestar.utils.scope.state import ScopeState\n\nif TYPE_CHECKING:\n from litestar.config.csrf import CSRFConfig\n from litestar.connection import Request\n from litestar.types import (\n ASGIApp,\n HTTPSendMessage,\n Message,\n Receive,\n Scope,\n Scopes,\n Send,\n )\n\n__all__ = (\"CSRFMiddleware\",)\n\n\nCSRF_SECRET_BYTES = 
32\nCSRF_SECRET_LENGTH = CSRF_SECRET_BYTES * 2\n\n\ndef generate_csrf_hash(token: str, secret: str) -> str:\n \"\"\"Generate an HMAC that signs the CSRF token.\n\n Args:\n token: A hashed token.\n secret: A secret value.\n\n Returns:\n A CSRF hash.\n \"\"\"\n return hmac.new(secret.encode(), token.encode(), hashlib.sha256).hexdigest()\n\n\ndef generate_csrf_token(secret: str) -> str:\n \"\"\"Generate a CSRF token that includes a randomly generated string signed by an HMAC.\n\n Args:\n secret: A secret string.\n\n Returns:\n A unique CSRF token.\n \"\"\"\n token = secrets.token_hex(CSRF_SECRET_BYTES)\n token_hash = generate_csrf_hash(token=token, secret=secret)\n return token + token_hash\n\n\nclass CSRFMiddleware(MiddlewareProtocol):\n \"\"\"CSRF Middleware class.\n\n This Middleware protects against attacks by setting a CSRF cookie with a token and verifying it in request headers.\n \"\"\"\n\n scopes: Scopes = {ScopeType.HTTP}\n\n def __init__(self, app: ASGIApp, config: CSRFConfig) -> None:\n \"\"\"Initialize ``CSRFMiddleware``.\n\n Args:\n app: The ``next`` ASGI app to call.\n config: The CSRFConfig instance.\n \"\"\"\n self.app = app\n self.config = config\n self.exclude = build_exclude_path_pattern(exclude=config.exclude)\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n \"\"\"ASGI callable.\n\n Args:\n scope: The ASGI connection scope.\n receive: The ASGI receive function.\n send: The ASGI send function.\n\n Returns:\n None\n \"\"\"\n if scope[\"type\"] != ScopeType.HTTP:\n await self.app(scope, receive, send)\n return\n\n request: Request[Any, Any, Any] = scope[\"app\"].request_class(scope=scope, receive=receive)\n content_type, _ = request.content_type\n csrf_cookie = request.cookies.get(self.config.cookie_name)\n existing_csrf_token = request.headers.get(self.config.header_name)\n\n if not existing_csrf_token and content_type in {\n RequestEncodingType.URL_ENCODED,\n RequestEncodingType.MULTI_PART,\n }:\n form = await request.form()\n existing_csrf_token = form.get(\"_csrf_token\", None)\n\n connection_state = ScopeState.from_scope(scope)\n if request.method in self.config.safe_methods or should_bypass_middleware(\n scope=scope,\n scopes=self.scopes,\n exclude_opt_key=self.config.exclude_from_csrf_key,\n exclude_path_pattern=self.exclude,\n ):\n token = connection_state.csrf_token = csrf_cookie or generate_csrf_token(secret=self.config.secret)\n await self.app(scope, receive, self.create_send_wrapper(send=send, csrf_cookie=csrf_cookie, token=token))\n elif (\n existing_csrf_token is not None\n and csrf_cookie is not None\n and self._csrf_tokens_match(existing_csrf_token, csrf_cookie)\n ):\n connection_state.csrf_token = existing_csrf_token\n await self.app(scope, receive, send)\n else:\n raise PermissionDeniedException(\"CSRF token verification failed\")\n\n def create_send_wrapper(self, send: Send, token: str, csrf_cookie: str | None) -> Send:\n \"\"\"Wrap ``send`` to handle CSRF validation.\n\n Args:\n token: The CSRF token.\n send: The ASGI send function.\n csrf_cookie: CSRF cookie.\n\n Returns:\n An ASGI send function.\n \"\"\"\n\n async def send_wrapper(message: Message) -> None:\n \"\"\"Send function that wraps the original send to inject a cookie.\n\n Args:\n message: An ASGI ``Message``\n\n Returns:\n None\n \"\"\"\n if csrf_cookie is None and message[\"type\"] == \"http.response.start\":\n message.setdefault(\"headers\", [])\n self._set_cookie_if_needed(message=message, token=token)\n await send(message)\n\n return send_wrapper\n\n def 
_set_cookie_if_needed(self, message: HTTPSendMessage, token: str) -> None:\n headers = MutableScopeHeaders.from_message(message)\n cookie = Cookie(\n key=self.config.cookie_name,\n value=token,\n path=self.config.cookie_path,\n secure=self.config.cookie_secure,\n httponly=self.config.cookie_httponly,\n samesite=self.config.cookie_samesite,\n domain=self.config.cookie_domain,\n )\n headers.add(\"set-cookie\", cookie.to_header(header=\"\"))\n\n def _decode_csrf_token(self, token: str) -> str | None:\n \"\"\"Decode a CSRF token and validate its HMAC.\"\"\"\n if len(token) < CSRF_SECRET_LENGTH + 1:\n return None\n\n token_secret = token[:CSRF_SECRET_LENGTH]\n existing_hash = token[CSRF_SECRET_LENGTH:]\n expected_hash = generate_csrf_hash(token=token_secret, secret=self.config.secret)\n return token_secret if compare_digest(existing_hash, expected_hash) else None\n\n def _csrf_tokens_match(self, request_csrf_token: str, cookie_csrf_token: str) -> bool:\n \"\"\"Take the CSRF tokens from the request and the cookie and verify both are valid and identical.\"\"\"\n decoded_request_token = self._decode_csrf_token(request_csrf_token)\n decoded_cookie_token = self._decode_csrf_token(cookie_csrf_token)\n if decoded_request_token is None or decoded_cookie_token is None:\n return False\n\n return compare_digest(decoded_request_token, decoded_cookie_token)\n", "path": "litestar/middleware/csrf.py"}]}
| 2,712 | 442 |
gh_patches_debug_57271 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-984 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'async for' requires an object with __aiter__ method, got AIOTracedCursor
## Problem
Using ddtrace and aiopg, if I do:
```python
await cur.execute(query)
async for value in cur:
yield value
```
If my connection is not patched, I get:
```
TypeError: 'async for' requires an object with __aiter__ method, got AIOTracedCursor
(...)
File "path/to/my/file.py", line 241, in get_many
async for value in cur:
```
(if my connection is not patched, it works)
## Analysis
The cursor class is replaced with `AIOTracedCursor` which inherits `wrapt.ObjectProxy`.
Problem is, while thanks to `ObjectProxy`, `AIOTracedCursor().__aiter__()` would most probably work and return whatever the real proxy would return, this is not enough for Python to accept that the cursor is an iterator.
A small example with simple objects:
```python
class A():
def iter(self):
return iter([])
async def aiter(self):
return iter([])
def __getattr__(self, attr):
if attr.endswith("iter__"):
return getattr(self, attr.strip("_"))
a = A()
```
We implement `a.__iter__()` and `a.__aiter__()` but Python doesn't see it:
```
In [6]: a.__iter__()
Out[6]: <list_iterator at 0x7fdff00de860>
In [7]: a.__aiter__()
Out[7]: <coroutine object A.aiter at 0x7fdff00ddba0>
In [8]: async for e in a: print(e)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
cell_name in async-def-wrapper()
TypeError: 'async for' requires an object with __aiter__ method, got A
In [9]: iter(a)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-9-2b64cb055077> in <module>
----> 1 iter(a)
TypeError: 'A' object is not iterable
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/aiopg/connection.py`
Content:
```
1 import asyncio
2 from ddtrace.vendor import wrapt
3
4 from aiopg.utils import _ContextManager
5
6 from .. import dbapi
7 from ...constants import ANALYTICS_SAMPLE_RATE_KEY
8 from ...ext import sql, AppTypes
9 from ...pin import Pin
10 from ...settings import config
11
12
13 class AIOTracedCursor(wrapt.ObjectProxy):
14 """ TracedCursor wraps a psql cursor and traces its queries. """
15
16 def __init__(self, cursor, pin):
17 super(AIOTracedCursor, self).__init__(cursor)
18 pin.onto(self)
19 name = pin.app or 'sql'
20 self._datadog_name = '%s.query' % name
21
22 @asyncio.coroutine
23 def _trace_method(self, method, resource, extra_tags, *args, **kwargs):
24 pin = Pin.get_from(self)
25 if not pin or not pin.enabled():
26 result = yield from method(*args, **kwargs)
27 return result
28 service = pin.service
29
30 with pin.tracer.trace(self._datadog_name, service=service,
31 resource=resource) as s:
32 s.span_type = sql.TYPE
33 s.set_tag(sql.QUERY, resource)
34 s.set_tags(pin.tags)
35 s.set_tags(extra_tags)
36
37 # set analytics sample rate
38 s.set_tag(
39 ANALYTICS_SAMPLE_RATE_KEY,
40 config.aiopg.get_analytics_sample_rate()
41 )
42
43 try:
44 result = yield from method(*args, **kwargs)
45 return result
46 finally:
47 s.set_metric('db.rowcount', self.rowcount)
48
49 @asyncio.coroutine
50 def executemany(self, query, *args, **kwargs):
51 # FIXME[matt] properly handle kwargs here. arg names can be different
52 # with different libs.
53 result = yield from self._trace_method(
54 self.__wrapped__.executemany, query, {'sql.executemany': 'true'},
55 query, *args, **kwargs)
56 return result
57
58 @asyncio.coroutine
59 def execute(self, query, *args, **kwargs):
60 result = yield from self._trace_method(
61 self.__wrapped__.execute, query, {}, query, *args, **kwargs)
62 return result
63
64 @asyncio.coroutine
65 def callproc(self, proc, args):
66 result = yield from self._trace_method(
67 self.__wrapped__.callproc, proc, {}, proc, args)
68 return result
69
70
71 class AIOTracedConnection(wrapt.ObjectProxy):
72 """ TracedConnection wraps a Connection with tracing code. """
73
74 def __init__(self, conn, pin=None, cursor_cls=AIOTracedCursor):
75 super(AIOTracedConnection, self).__init__(conn)
76 name = dbapi._get_vendor(conn)
77 db_pin = pin or Pin(service=name, app=name, app_type=AppTypes.db)
78 db_pin.onto(self)
79 # wrapt requires prefix of `_self` for attributes that are only in the
80 # proxy (since some of our source objects will use `__slots__`)
81 self._self_cursor_cls = cursor_cls
82
83 def cursor(self, *args, **kwargs):
84 # unfortunately we also need to patch this method as otherwise "self"
85 # ends up being the aiopg connection object
86 coro = self._cursor(*args, **kwargs)
87 return _ContextManager(coro)
88
89 @asyncio.coroutine
90 def _cursor(self, *args, **kwargs):
91 cursor = yield from self.__wrapped__._cursor(*args, **kwargs)
92 pin = Pin.get_from(self)
93 if not pin:
94 return cursor
95 return self._self_cursor_cls(cursor, pin)
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ddtrace/contrib/aiopg/connection.py b/ddtrace/contrib/aiopg/connection.py
--- a/ddtrace/contrib/aiopg/connection.py
+++ b/ddtrace/contrib/aiopg/connection.py
@@ -67,6 +67,9 @@
self.__wrapped__.callproc, proc, {}, proc, args)
return result
+ def __aiter__(self):
+ return self.__wrapped__.__aiter__()
+
class AIOTracedConnection(wrapt.ObjectProxy):
""" TracedConnection wraps a Connection with tracing code. """
|
{"golden_diff": "diff --git a/ddtrace/contrib/aiopg/connection.py b/ddtrace/contrib/aiopg/connection.py\n--- a/ddtrace/contrib/aiopg/connection.py\n+++ b/ddtrace/contrib/aiopg/connection.py\n@@ -67,6 +67,9 @@\n self.__wrapped__.callproc, proc, {}, proc, args)\n return result\n \n+ def __aiter__(self):\n+ return self.__wrapped__.__aiter__()\n+\n \n class AIOTracedConnection(wrapt.ObjectProxy):\n \"\"\" TracedConnection wraps a Connection with tracing code. \"\"\"\n", "issue": "'async for' requires an object with __aiter__ method, got AIOTracedCursor\n## Problem\r\nUsing ddtrace and aiopg, if I do:\r\n\r\n```python\r\nawait cur.execute(query)\r\nasync for value in cur:\r\n yield value\r\n```\r\nIf my connection is not patched, I get:\r\n```\r\nTypeError: 'async for' requires an object with __aiter__ method, got AIOTracedCursor\r\n(...)\r\n File \"path/to/my/file.py\", line 241, in get_many\r\n async for value in cur:\r\n```\r\n(if my connection is not patched, it works)\r\n\r\n## Analysis\r\n\r\nThe cursor class is replaced with `AIOTracedCursor` which inherits `wrapt.ObjectProxy`.\r\n\r\nProblem is, while thanks to `ObjectProxy`, `AIOTracedCursor().__aiter__()` would most probably work and return whatever the real proxy would return, this is not enough for Python to accept that the cursor is an iterator.\r\n\r\nA small example with simple objects:\r\n```python\r\nclass A():\r\n def iter(self):\r\n return iter([])\r\n\r\n async def aiter(self):\r\n return iter([])\r\n\r\n def __getattr__(self, attr):\r\n if attr.endswith(\"iter__\"):\r\n return getattr(self, attr.strip(\"_\"))\r\na = A()\r\n```\r\nWe implement `a.__iter__()` and `a.__aiter__()` but Python doesn't see it:\r\n```\r\nIn [6]: a.__iter__() \r\nOut[6]: <list_iterator at 0x7fdff00de860>\r\n\r\nIn [7]: a.__aiter__() \r\nOut[7]: <coroutine object A.aiter at 0x7fdff00ddba0>\r\n\r\nIn [8]: async for e in a: print(e) \r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\ncell_name in async-def-wrapper()\r\n\r\nTypeError: 'async for' requires an object with __aiter__ method, got A \r\n\r\nIn [9]: iter(a) \r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-9-2b64cb055077> in <module>\r\n----> 1 iter(a)\r\n\r\nTypeError: 'A' object is not iterable\r\n\r\n```\n", "before_files": [{"content": "import asyncio\nfrom ddtrace.vendor import wrapt\n\nfrom aiopg.utils import _ContextManager\n\nfrom .. import dbapi\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ...ext import sql, AppTypes\nfrom ...pin import Pin\nfrom ...settings import config\n\n\nclass AIOTracedCursor(wrapt.ObjectProxy):\n \"\"\" TracedCursor wraps a psql cursor and traces its queries. 
\"\"\"\n\n def __init__(self, cursor, pin):\n super(AIOTracedCursor, self).__init__(cursor)\n pin.onto(self)\n name = pin.app or 'sql'\n self._datadog_name = '%s.query' % name\n\n @asyncio.coroutine\n def _trace_method(self, method, resource, extra_tags, *args, **kwargs):\n pin = Pin.get_from(self)\n if not pin or not pin.enabled():\n result = yield from method(*args, **kwargs)\n return result\n service = pin.service\n\n with pin.tracer.trace(self._datadog_name, service=service,\n resource=resource) as s:\n s.span_type = sql.TYPE\n s.set_tag(sql.QUERY, resource)\n s.set_tags(pin.tags)\n s.set_tags(extra_tags)\n\n # set analytics sample rate\n s.set_tag(\n ANALYTICS_SAMPLE_RATE_KEY,\n config.aiopg.get_analytics_sample_rate()\n )\n\n try:\n result = yield from method(*args, **kwargs)\n return result\n finally:\n s.set_metric('db.rowcount', self.rowcount)\n\n @asyncio.coroutine\n def executemany(self, query, *args, **kwargs):\n # FIXME[matt] properly handle kwargs here. arg names can be different\n # with different libs.\n result = yield from self._trace_method(\n self.__wrapped__.executemany, query, {'sql.executemany': 'true'},\n query, *args, **kwargs)\n return result\n\n @asyncio.coroutine\n def execute(self, query, *args, **kwargs):\n result = yield from self._trace_method(\n self.__wrapped__.execute, query, {}, query, *args, **kwargs)\n return result\n\n @asyncio.coroutine\n def callproc(self, proc, args):\n result = yield from self._trace_method(\n self.__wrapped__.callproc, proc, {}, proc, args)\n return result\n\n\nclass AIOTracedConnection(wrapt.ObjectProxy):\n \"\"\" TracedConnection wraps a Connection with tracing code. \"\"\"\n\n def __init__(self, conn, pin=None, cursor_cls=AIOTracedCursor):\n super(AIOTracedConnection, self).__init__(conn)\n name = dbapi._get_vendor(conn)\n db_pin = pin or Pin(service=name, app=name, app_type=AppTypes.db)\n db_pin.onto(self)\n # wrapt requires prefix of `_self` for attributes that are only in the\n # proxy (since some of our source objects will use `__slots__`)\n self._self_cursor_cls = cursor_cls\n\n def cursor(self, *args, **kwargs):\n # unfortunately we also need to patch this method as otherwise \"self\"\n # ends up being the aiopg connection object\n coro = self._cursor(*args, **kwargs)\n return _ContextManager(coro)\n\n @asyncio.coroutine\n def _cursor(self, *args, **kwargs):\n cursor = yield from self.__wrapped__._cursor(*args, **kwargs)\n pin = Pin.get_from(self)\n if not pin:\n return cursor\n return self._self_cursor_cls(cursor, pin)\n", "path": "ddtrace/contrib/aiopg/connection.py"}], "after_files": [{"content": "import asyncio\nfrom ddtrace.vendor import wrapt\n\nfrom aiopg.utils import _ContextManager\n\nfrom .. import dbapi\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ...ext import sql, AppTypes\nfrom ...pin import Pin\nfrom ...settings import config\n\n\nclass AIOTracedCursor(wrapt.ObjectProxy):\n \"\"\" TracedCursor wraps a psql cursor and traces its queries. 
\"\"\"\n\n def __init__(self, cursor, pin):\n super(AIOTracedCursor, self).__init__(cursor)\n pin.onto(self)\n name = pin.app or 'sql'\n self._datadog_name = '%s.query' % name\n\n @asyncio.coroutine\n def _trace_method(self, method, resource, extra_tags, *args, **kwargs):\n pin = Pin.get_from(self)\n if not pin or not pin.enabled():\n result = yield from method(*args, **kwargs)\n return result\n service = pin.service\n\n with pin.tracer.trace(self._datadog_name, service=service,\n resource=resource) as s:\n s.span_type = sql.TYPE\n s.set_tag(sql.QUERY, resource)\n s.set_tags(pin.tags)\n s.set_tags(extra_tags)\n\n # set analytics sample rate\n s.set_tag(\n ANALYTICS_SAMPLE_RATE_KEY,\n config.aiopg.get_analytics_sample_rate()\n )\n\n try:\n result = yield from method(*args, **kwargs)\n return result\n finally:\n s.set_metric('db.rowcount', self.rowcount)\n\n @asyncio.coroutine\n def executemany(self, query, *args, **kwargs):\n # FIXME[matt] properly handle kwargs here. arg names can be different\n # with different libs.\n result = yield from self._trace_method(\n self.__wrapped__.executemany, query, {'sql.executemany': 'true'},\n query, *args, **kwargs)\n return result\n\n @asyncio.coroutine\n def execute(self, query, *args, **kwargs):\n result = yield from self._trace_method(\n self.__wrapped__.execute, query, {}, query, *args, **kwargs)\n return result\n\n @asyncio.coroutine\n def callproc(self, proc, args):\n result = yield from self._trace_method(\n self.__wrapped__.callproc, proc, {}, proc, args)\n return result\n\n def __aiter__(self):\n return self.__wrapped__.__aiter__()\n\n\nclass AIOTracedConnection(wrapt.ObjectProxy):\n \"\"\" TracedConnection wraps a Connection with tracing code. \"\"\"\n\n def __init__(self, conn, pin=None, cursor_cls=AIOTracedCursor):\n super(AIOTracedConnection, self).__init__(conn)\n name = dbapi._get_vendor(conn)\n db_pin = pin or Pin(service=name, app=name, app_type=AppTypes.db)\n db_pin.onto(self)\n # wrapt requires prefix of `_self` for attributes that are only in the\n # proxy (since some of our source objects will use `__slots__`)\n self._self_cursor_cls = cursor_cls\n\n def cursor(self, *args, **kwargs):\n # unfortunately we also need to patch this method as otherwise \"self\"\n # ends up being the aiopg connection object\n coro = self._cursor(*args, **kwargs)\n return _ContextManager(coro)\n\n @asyncio.coroutine\n def _cursor(self, *args, **kwargs):\n cursor = yield from self.__wrapped__._cursor(*args, **kwargs)\n pin = Pin.get_from(self)\n if not pin:\n return cursor\n return self._self_cursor_cls(cursor, pin)\n", "path": "ddtrace/contrib/aiopg/connection.py"}]}
| 1,735 | 129 |
gh_patches_debug_29199 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-1361 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IN-AP has changed its data url
The new link is https://core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx (same page layout I think). Old link returns 404.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/IN_AP.py`
Content:
```
1 #!/usr/bin/env python3
2
3 from requests import Session
4 from .lib import zonekey, IN, web
5
6
7 def fetch_production(zone_key='IN-AP', session=None, target_datetime=None, logger=None):
8 """Fetch Andhra Pradesh production"""
9 if target_datetime:
10 raise NotImplementedError('This parser is not yet able to parse past dates')
11
12 zonekey.assert_zone_key(zone_key, 'IN-AP')
13
14 html = web.get_response_soup(zone_key,
15 'http://www.core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx', session)
16 india_date = IN.read_datetime_from_span_id(html, 'lblPowerStatusDate', 'DD-MM-YYYY HH:mm')
17
18 hydro_value = IN.read_value_from_span_id(html, 'lblHydel')
19 gas_value = IN.read_value_from_span_id(html, 'lblGas')
20 wind_value = IN.read_value_from_span_id(html, 'lblWind')
21 solar_value = IN.read_value_from_span_id(html, 'lblSolar')
22
23 # All thermal centrals are considered coal based production
24 # https://en.wikipedia.org/wiki/Power_sector_of_Andhra_Pradesh
25 thermal_value = IN.read_value_from_span_id(html, 'lblThermal')
26
27 cgs_value = IN.read_value_from_span_id(html, 'lblCGS')
28 ipp_value = IN.read_value_from_span_id(html, 'lblIPPS')
29
30 data = {
31 'zoneKey': zone_key,
32 'datetime': india_date.datetime,
33 'production': {
34 'biomass': 0.0,
35 'coal': thermal_value,
36 'gas': gas_value,
37 'hydro': hydro_value,
38 'nuclear': 0.0,
39 'oil': 0.0,
40 'solar': solar_value,
41 'wind': wind_value,
42 'geothermal': 0.0,
43 'unknown': round(cgs_value + ipp_value, 2)
44 },
45 'storage': {
46 'hydro': 0.0
47 },
48 'source': 'core.ap.gov.in',
49 }
50
51 return data
52
53
54 def fetch_consumption(zone_key='IN-AP', session=None, target_datetime=None, logger=None):
55 """Fetch Andhra Pradesh consumption"""
56 if target_datetime:
57 raise NotImplementedError('This parser is not yet able to parse past dates')
58
59 zonekey.assert_zone_key(zone_key, 'IN-AP')
60
61 html = web.get_response_soup(zone_key,
62 'http://www.core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx', session)
63 india_date = IN.read_datetime_from_span_id(html, 'lblPowerStatusDate', 'DD-MM-YYYY HH:mm')
64
65 demand_value = IN.read_value_from_span_id(html, 'lblGridDemand')
66
67 data = {
68 'zoneKey': zone_key,
69 'datetime': india_date.datetime,
70 'consumption': demand_value,
71 'source': 'core.ap.gov.in'
72 }
73
74 return data
75
76
77 if __name__ == '__main__':
78 session = Session()
79 print(fetch_production('IN-AP', session))
80 print(fetch_consumption('IN-AP', session))
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsers/IN_AP.py b/parsers/IN_AP.py
--- a/parsers/IN_AP.py
+++ b/parsers/IN_AP.py
@@ -3,7 +3,6 @@
from requests import Session
from .lib import zonekey, IN, web
-
def fetch_production(zone_key='IN-AP', session=None, target_datetime=None, logger=None):
"""Fetch Andhra Pradesh production"""
if target_datetime:
@@ -12,7 +11,7 @@
zonekey.assert_zone_key(zone_key, 'IN-AP')
html = web.get_response_soup(zone_key,
- 'http://www.core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx', session)
+ 'https://core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx', session)
india_date = IN.read_datetime_from_span_id(html, 'lblPowerStatusDate', 'DD-MM-YYYY HH:mm')
hydro_value = IN.read_value_from_span_id(html, 'lblHydel')
@@ -59,7 +58,7 @@
zonekey.assert_zone_key(zone_key, 'IN-AP')
html = web.get_response_soup(zone_key,
- 'http://www.core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx', session)
+ 'https://core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx', session)
india_date = IN.read_datetime_from_span_id(html, 'lblPowerStatusDate', 'DD-MM-YYYY HH:mm')
demand_value = IN.read_value_from_span_id(html, 'lblGridDemand')
|
{"golden_diff": "diff --git a/parsers/IN_AP.py b/parsers/IN_AP.py\n--- a/parsers/IN_AP.py\n+++ b/parsers/IN_AP.py\n@@ -3,7 +3,6 @@\n from requests import Session\n from .lib import zonekey, IN, web\n \n-\n def fetch_production(zone_key='IN-AP', session=None, target_datetime=None, logger=None):\n \"\"\"Fetch Andhra Pradesh production\"\"\"\n if target_datetime:\n@@ -12,7 +11,7 @@\n zonekey.assert_zone_key(zone_key, 'IN-AP')\n \n html = web.get_response_soup(zone_key,\n- 'http://www.core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx', session)\n+ 'https://core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx', session)\n india_date = IN.read_datetime_from_span_id(html, 'lblPowerStatusDate', 'DD-MM-YYYY HH:mm')\n \n hydro_value = IN.read_value_from_span_id(html, 'lblHydel')\n@@ -59,7 +58,7 @@\n zonekey.assert_zone_key(zone_key, 'IN-AP')\n \n html = web.get_response_soup(zone_key,\n- 'http://www.core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx', session)\n+ 'https://core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx', session)\n india_date = IN.read_datetime_from_span_id(html, 'lblPowerStatusDate', 'DD-MM-YYYY HH:mm')\n \n demand_value = IN.read_value_from_span_id(html, 'lblGridDemand')\n", "issue": "IN-AP has changed its data url\nThe new link is https://core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx (same page layout I think). Old link returns 404.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom requests import Session\nfrom .lib import zonekey, IN, web\n\n\ndef fetch_production(zone_key='IN-AP', session=None, target_datetime=None, logger=None):\n \"\"\"Fetch Andhra Pradesh production\"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n zonekey.assert_zone_key(zone_key, 'IN-AP')\n\n html = web.get_response_soup(zone_key,\n 'http://www.core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx', session)\n india_date = IN.read_datetime_from_span_id(html, 'lblPowerStatusDate', 'DD-MM-YYYY HH:mm')\n\n hydro_value = IN.read_value_from_span_id(html, 'lblHydel')\n gas_value = IN.read_value_from_span_id(html, 'lblGas')\n wind_value = IN.read_value_from_span_id(html, 'lblWind')\n solar_value = IN.read_value_from_span_id(html, 'lblSolar')\n\n # All thermal centrals are considered coal based production\n # https://en.wikipedia.org/wiki/Power_sector_of_Andhra_Pradesh\n thermal_value = IN.read_value_from_span_id(html, 'lblThermal')\n\n cgs_value = IN.read_value_from_span_id(html, 'lblCGS')\n ipp_value = IN.read_value_from_span_id(html, 'lblIPPS')\n\n data = {\n 'zoneKey': zone_key,\n 'datetime': india_date.datetime,\n 'production': {\n 'biomass': 0.0,\n 'coal': thermal_value,\n 'gas': gas_value,\n 'hydro': hydro_value,\n 'nuclear': 0.0,\n 'oil': 0.0,\n 'solar': solar_value,\n 'wind': wind_value,\n 'geothermal': 0.0,\n 'unknown': round(cgs_value + ipp_value, 2)\n },\n 'storage': {\n 'hydro': 0.0\n },\n 'source': 'core.ap.gov.in',\n }\n\n return data\n\n\ndef fetch_consumption(zone_key='IN-AP', session=None, target_datetime=None, logger=None):\n \"\"\"Fetch Andhra Pradesh consumption\"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n zonekey.assert_zone_key(zone_key, 'IN-AP')\n\n html = web.get_response_soup(zone_key,\n 'http://www.core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx', session)\n india_date = IN.read_datetime_from_span_id(html, 'lblPowerStatusDate', 'DD-MM-YYYY HH:mm')\n\n 
demand_value = IN.read_value_from_span_id(html, 'lblGridDemand')\n\n data = {\n 'zoneKey': zone_key,\n 'datetime': india_date.datetime,\n 'consumption': demand_value,\n 'source': 'core.ap.gov.in'\n }\n\n return data\n\n\nif __name__ == '__main__':\n session = Session()\n print(fetch_production('IN-AP', session))\n print(fetch_consumption('IN-AP', session))\n", "path": "parsers/IN_AP.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nfrom requests import Session\nfrom .lib import zonekey, IN, web\n\ndef fetch_production(zone_key='IN-AP', session=None, target_datetime=None, logger=None):\n \"\"\"Fetch Andhra Pradesh production\"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n zonekey.assert_zone_key(zone_key, 'IN-AP')\n\n html = web.get_response_soup(zone_key,\n 'https://core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx', session)\n india_date = IN.read_datetime_from_span_id(html, 'lblPowerStatusDate', 'DD-MM-YYYY HH:mm')\n\n hydro_value = IN.read_value_from_span_id(html, 'lblHydel')\n gas_value = IN.read_value_from_span_id(html, 'lblGas')\n wind_value = IN.read_value_from_span_id(html, 'lblWind')\n solar_value = IN.read_value_from_span_id(html, 'lblSolar')\n\n # All thermal centrals are considered coal based production\n # https://en.wikipedia.org/wiki/Power_sector_of_Andhra_Pradesh\n thermal_value = IN.read_value_from_span_id(html, 'lblThermal')\n\n cgs_value = IN.read_value_from_span_id(html, 'lblCGS')\n ipp_value = IN.read_value_from_span_id(html, 'lblIPPS')\n\n data = {\n 'zoneKey': zone_key,\n 'datetime': india_date.datetime,\n 'production': {\n 'biomass': 0.0,\n 'coal': thermal_value,\n 'gas': gas_value,\n 'hydro': hydro_value,\n 'nuclear': 0.0,\n 'oil': 0.0,\n 'solar': solar_value,\n 'wind': wind_value,\n 'geothermal': 0.0,\n 'unknown': round(cgs_value + ipp_value, 2)\n },\n 'storage': {\n 'hydro': 0.0\n },\n 'source': 'core.ap.gov.in',\n }\n\n return data\n\n\ndef fetch_consumption(zone_key='IN-AP', session=None, target_datetime=None, logger=None):\n \"\"\"Fetch Andhra Pradesh consumption\"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n zonekey.assert_zone_key(zone_key, 'IN-AP')\n\n html = web.get_response_soup(zone_key,\n 'https://core.ap.gov.in/CMDashBoard/UserInterface/Power/PowerReport.aspx', session)\n india_date = IN.read_datetime_from_span_id(html, 'lblPowerStatusDate', 'DD-MM-YYYY HH:mm')\n\n demand_value = IN.read_value_from_span_id(html, 'lblGridDemand')\n\n data = {\n 'zoneKey': zone_key,\n 'datetime': india_date.datetime,\n 'consumption': demand_value,\n 'source': 'core.ap.gov.in'\n }\n\n return data\n\n\nif __name__ == '__main__':\n session = Session()\n print(fetch_production('IN-AP', session))\n print(fetch_consumption('IN-AP', session))\n", "path": "parsers/IN_AP.py"}]}
| 1,142 | 357 |
gh_patches_debug_9781
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-1711
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
_check_api_features should support pure dict params
https://github.com/docker/docker-py/blob/2.4.2-release/docker/api/service.py#L41
Originally reported: https://github.com/moby/moby/issues/34116#issuecomment-315484688
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/api/service.py`
Content:
```
1 import warnings
2 from .. import auth, errors, utils
3 from ..types import ServiceMode
4
5
6 def _check_api_features(version, task_template, update_config):
7 if update_config is not None:
8 if utils.version_lt(version, '1.25'):
9 if 'MaxFailureRatio' in update_config:
10 raise errors.InvalidVersion(
11 'UpdateConfig.max_failure_ratio is not supported in'
12 ' API version < 1.25'
13 )
14 if 'Monitor' in update_config:
15 raise errors.InvalidVersion(
16 'UpdateConfig.monitor is not supported in'
17 ' API version < 1.25'
18 )
19
20 if task_template is not None:
21 if 'ForceUpdate' in task_template and utils.version_lt(
22 version, '1.25'):
23 raise errors.InvalidVersion(
24 'force_update is not supported in API version < 1.25'
25 )
26
27 if task_template.get('Placement'):
28 if utils.version_lt(version, '1.30'):
29 if task_template['Placement'].get('Platforms'):
30 raise errors.InvalidVersion(
31 'Placement.platforms is not supported in'
32 ' API version < 1.30'
33 )
34
35 if utils.version_lt(version, '1.27'):
36 if task_template['Placement'].get('Preferences'):
37 raise errors.InvalidVersion(
38 'Placement.preferences is not supported in'
39 ' API version < 1.27'
40 )
41 if task_template.container_spec.get('TTY'):
42 if utils.version_lt(version, '1.25'):
43 raise errors.InvalidVersion(
44 'ContainerSpec.TTY is not supported in API version < 1.25'
45 )
46
47
48 class ServiceApiMixin(object):
49 @utils.minimum_version('1.24')
50 def create_service(
51 self, task_template, name=None, labels=None, mode=None,
52 update_config=None, networks=None, endpoint_config=None,
53 endpoint_spec=None
54 ):
55 """
56 Create a service.
57
58 Args:
59 task_template (TaskTemplate): Specification of the task to start as
60 part of the new service.
61 name (string): User-defined name for the service. Optional.
62 labels (dict): A map of labels to associate with the service.
63 Optional.
64 mode (ServiceMode): Scheduling mode for the service (replicated
65 or global). Defaults to replicated.
66 update_config (UpdateConfig): Specification for the update strategy
67 of the service. Default: ``None``
68 networks (:py:class:`list`): List of network names or IDs to attach
69 the service to. Default: ``None``.
70 endpoint_spec (EndpointSpec): Properties that can be configured to
71 access and load balance a service. Default: ``None``.
72
73 Returns:
74 A dictionary containing an ``ID`` key for the newly created
75 service.
76
77 Raises:
78 :py:class:`docker.errors.APIError`
79 If the server returns an error.
80 """
81 if endpoint_config is not None:
82 warnings.warn(
83 'endpoint_config has been renamed to endpoint_spec.',
84 DeprecationWarning
85 )
86 endpoint_spec = endpoint_config
87
88 _check_api_features(self._version, task_template, update_config)
89
90 url = self._url('/services/create')
91 headers = {}
92 image = task_template.get('ContainerSpec', {}).get('Image', None)
93 if image is None:
94 raise errors.DockerException(
95 'Missing mandatory Image key in ContainerSpec'
96 )
97 if mode and not isinstance(mode, dict):
98 mode = ServiceMode(mode)
99
100 registry, repo_name = auth.resolve_repository_name(image)
101 auth_header = auth.get_config_header(self, registry)
102 if auth_header:
103 headers['X-Registry-Auth'] = auth_header
104 data = {
105 'Name': name,
106 'Labels': labels,
107 'TaskTemplate': task_template,
108 'Mode': mode,
109 'Networks': utils.convert_service_networks(networks),
110 'EndpointSpec': endpoint_spec
111 }
112
113 if update_config is not None:
114 data['UpdateConfig'] = update_config
115
116 return self._result(
117 self._post_json(url, data=data, headers=headers), True
118 )
119
120 @utils.minimum_version('1.24')
121 @utils.check_resource('service')
122 def inspect_service(self, service):
123 """
124 Return information about a service.
125
126 Args:
127 service (str): Service name or ID
128
129 Returns:
130 ``True`` if successful.
131
132 Raises:
133 :py:class:`docker.errors.APIError`
134 If the server returns an error.
135 """
136 url = self._url('/services/{0}', service)
137 return self._result(self._get(url), True)
138
139 @utils.minimum_version('1.24')
140 @utils.check_resource('task')
141 def inspect_task(self, task):
142 """
143 Retrieve information about a task.
144
145 Args:
146 task (str): Task ID
147
148 Returns:
149 (dict): Information about the task.
150
151 Raises:
152 :py:class:`docker.errors.APIError`
153 If the server returns an error.
154 """
155 url = self._url('/tasks/{0}', task)
156 return self._result(self._get(url), True)
157
158 @utils.minimum_version('1.24')
159 @utils.check_resource('service')
160 def remove_service(self, service):
161 """
162 Stop and remove a service.
163
164 Args:
165 service (str): Service name or ID
166
167 Returns:
168 ``True`` if successful.
169
170 Raises:
171 :py:class:`docker.errors.APIError`
172 If the server returns an error.
173 """
174
175 url = self._url('/services/{0}', service)
176 resp = self._delete(url)
177 self._raise_for_status(resp)
178 return True
179
180 @utils.minimum_version('1.24')
181 def services(self, filters=None):
182 """
183 List services.
184
185 Args:
186 filters (dict): Filters to process on the nodes list. Valid
187 filters: ``id`` and ``name``. Default: ``None``.
188
189 Returns:
190 A list of dictionaries containing data about each service.
191
192 Raises:
193 :py:class:`docker.errors.APIError`
194 If the server returns an error.
195 """
196 params = {
197 'filters': utils.convert_filters(filters) if filters else None
198 }
199 url = self._url('/services')
200 return self._result(self._get(url, params=params), True)
201
202 @utils.minimum_version('1.25')
203 @utils.check_resource('service')
204 def service_logs(self, service, details=False, follow=False, stdout=False,
205 stderr=False, since=0, timestamps=False, tail='all',
206 is_tty=None):
207 """
208 Get log stream for a service.
209 Note: This endpoint works only for services with the ``json-file``
210 or ``journald`` logging drivers.
211
212 Args:
213 service (str): ID or name of the service
214 details (bool): Show extra details provided to logs.
215 Default: ``False``
216 follow (bool): Keep connection open to read logs as they are
217 sent by the Engine. Default: ``False``
218 stdout (bool): Return logs from ``stdout``. Default: ``False``
219 stderr (bool): Return logs from ``stderr``. Default: ``False``
220 since (int): UNIX timestamp for the logs staring point.
221 Default: 0
222 timestamps (bool): Add timestamps to every log line.
223 tail (string or int): Number of log lines to be returned,
224 counting from the current end of the logs. Specify an
225 integer or ``'all'`` to output all log lines.
226 Default: ``all``
227 is_tty (bool): Whether the service's :py:class:`ContainerSpec`
228 enables the TTY option. If omitted, the method will query
229 the Engine for the information, causing an additional
230 roundtrip.
231
232 Returns (generator): Logs for the service.
233 """
234 params = {
235 'details': details,
236 'follow': follow,
237 'stdout': stdout,
238 'stderr': stderr,
239 'since': since,
240 'timestamps': timestamps,
241 'tail': tail
242 }
243
244 url = self._url('/services/{0}/logs', service)
245 res = self._get(url, params=params, stream=True)
246 if is_tty is None:
247 is_tty = self.inspect_service(
248 service
249 )['Spec']['TaskTemplate']['ContainerSpec'].get('TTY', False)
250 return self._get_result_tty(True, res, is_tty)
251
252 @utils.minimum_version('1.24')
253 def tasks(self, filters=None):
254 """
255 Retrieve a list of tasks.
256
257 Args:
258 filters (dict): A map of filters to process on the tasks list.
259 Valid filters: ``id``, ``name``, ``service``, ``node``,
260 ``label`` and ``desired-state``.
261
262 Returns:
263 (:py:class:`list`): List of task dictionaries.
264
265 Raises:
266 :py:class:`docker.errors.APIError`
267 If the server returns an error.
268 """
269
270 params = {
271 'filters': utils.convert_filters(filters) if filters else None
272 }
273 url = self._url('/tasks')
274 return self._result(self._get(url, params=params), True)
275
276 @utils.minimum_version('1.24')
277 @utils.check_resource('service')
278 def update_service(self, service, version, task_template=None, name=None,
279 labels=None, mode=None, update_config=None,
280 networks=None, endpoint_config=None,
281 endpoint_spec=None):
282 """
283 Update a service.
284
285 Args:
286 service (string): A service identifier (either its name or service
287 ID).
288 version (int): The version number of the service object being
289 updated. This is required to avoid conflicting writes.
290 task_template (TaskTemplate): Specification of the updated task to
291 start as part of the service.
292 name (string): New name for the service. Optional.
293 labels (dict): A map of labels to associate with the service.
294 Optional.
295 mode (ServiceMode): Scheduling mode for the service (replicated
296 or global). Defaults to replicated.
297 update_config (UpdateConfig): Specification for the update strategy
298 of the service. Default: ``None``.
299 networks (:py:class:`list`): List of network names or IDs to attach
300 the service to. Default: ``None``.
301 endpoint_spec (EndpointSpec): Properties that can be configured to
302 access and load balance a service. Default: ``None``.
303
304 Returns:
305 ``True`` if successful.
306
307 Raises:
308 :py:class:`docker.errors.APIError`
309 If the server returns an error.
310 """
311 if endpoint_config is not None:
312 warnings.warn(
313 'endpoint_config has been renamed to endpoint_spec.',
314 DeprecationWarning
315 )
316 endpoint_spec = endpoint_config
317
318 _check_api_features(self._version, task_template, update_config)
319
320 url = self._url('/services/{0}/update', service)
321 data = {}
322 headers = {}
323 if name is not None:
324 data['Name'] = name
325 if labels is not None:
326 data['Labels'] = labels
327 if mode is not None:
328 if not isinstance(mode, dict):
329 mode = ServiceMode(mode)
330 data['Mode'] = mode
331 if task_template is not None:
332 image = task_template.get('ContainerSpec', {}).get('Image', None)
333 if image is not None:
334 registry, repo_name = auth.resolve_repository_name(image)
335 auth_header = auth.get_config_header(self, registry)
336 if auth_header:
337 headers['X-Registry-Auth'] = auth_header
338 data['TaskTemplate'] = task_template
339 if update_config is not None:
340 data['UpdateConfig'] = update_config
341
342 if networks is not None:
343 data['Networks'] = utils.convert_service_networks(networks)
344 if endpoint_spec is not None:
345 data['EndpointSpec'] = endpoint_spec
346
347 resp = self._post_json(
348 url, data=data, params={'version': version}, headers=headers
349 )
350 self._raise_for_status(resp)
351 return True
352
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docker/api/service.py b/docker/api/service.py
--- a/docker/api/service.py
+++ b/docker/api/service.py
@@ -38,7 +38,7 @@
'Placement.preferences is not supported in'
' API version < 1.27'
)
- if task_template.container_spec.get('TTY'):
+ if task_template.get('ContainerSpec', {}).get('TTY'):
if utils.version_lt(version, '1.25'):
raise errors.InvalidVersion(
'ContainerSpec.TTY is not supported in API version < 1.25'
|
{"golden_diff": "diff --git a/docker/api/service.py b/docker/api/service.py\n--- a/docker/api/service.py\n+++ b/docker/api/service.py\n@@ -38,7 +38,7 @@\n 'Placement.preferences is not supported in'\n ' API version < 1.27'\n )\n- if task_template.container_spec.get('TTY'):\n+ if task_template.get('ContainerSpec', {}).get('TTY'):\n if utils.version_lt(version, '1.25'):\n raise errors.InvalidVersion(\n 'ContainerSpec.TTY is not supported in API version < 1.25'\n", "issue": "_check_api_features should support pure dict params\nhttps://github.com/docker/docker-py/blob/2.4.2-release/docker/api/service.py#L41\r\n\r\nOriginally reported: https://github.com/moby/moby/issues/34116#issuecomment-315484688\n", "before_files": [{"content": "import warnings\nfrom .. import auth, errors, utils\nfrom ..types import ServiceMode\n\n\ndef _check_api_features(version, task_template, update_config):\n if update_config is not None:\n if utils.version_lt(version, '1.25'):\n if 'MaxFailureRatio' in update_config:\n raise errors.InvalidVersion(\n 'UpdateConfig.max_failure_ratio is not supported in'\n ' API version < 1.25'\n )\n if 'Monitor' in update_config:\n raise errors.InvalidVersion(\n 'UpdateConfig.monitor is not supported in'\n ' API version < 1.25'\n )\n\n if task_template is not None:\n if 'ForceUpdate' in task_template and utils.version_lt(\n version, '1.25'):\n raise errors.InvalidVersion(\n 'force_update is not supported in API version < 1.25'\n )\n\n if task_template.get('Placement'):\n if utils.version_lt(version, '1.30'):\n if task_template['Placement'].get('Platforms'):\n raise errors.InvalidVersion(\n 'Placement.platforms is not supported in'\n ' API version < 1.30'\n )\n\n if utils.version_lt(version, '1.27'):\n if task_template['Placement'].get('Preferences'):\n raise errors.InvalidVersion(\n 'Placement.preferences is not supported in'\n ' API version < 1.27'\n )\n if task_template.container_spec.get('TTY'):\n if utils.version_lt(version, '1.25'):\n raise errors.InvalidVersion(\n 'ContainerSpec.TTY is not supported in API version < 1.25'\n )\n\n\nclass ServiceApiMixin(object):\n @utils.minimum_version('1.24')\n def create_service(\n self, task_template, name=None, labels=None, mode=None,\n update_config=None, networks=None, endpoint_config=None,\n endpoint_spec=None\n ):\n \"\"\"\n Create a service.\n\n Args:\n task_template (TaskTemplate): Specification of the task to start as\n part of the new service.\n name (string): User-defined name for the service. Optional.\n labels (dict): A map of labels to associate with the service.\n Optional.\n mode (ServiceMode): Scheduling mode for the service (replicated\n or global). Defaults to replicated.\n update_config (UpdateConfig): Specification for the update strategy\n of the service. Default: ``None``\n networks (:py:class:`list`): List of network names or IDs to attach\n the service to. Default: ``None``.\n endpoint_spec (EndpointSpec): Properties that can be configured to\n access and load balance a service. 
Default: ``None``.\n\n Returns:\n A dictionary containing an ``ID`` key for the newly created\n service.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n if endpoint_config is not None:\n warnings.warn(\n 'endpoint_config has been renamed to endpoint_spec.',\n DeprecationWarning\n )\n endpoint_spec = endpoint_config\n\n _check_api_features(self._version, task_template, update_config)\n\n url = self._url('/services/create')\n headers = {}\n image = task_template.get('ContainerSpec', {}).get('Image', None)\n if image is None:\n raise errors.DockerException(\n 'Missing mandatory Image key in ContainerSpec'\n )\n if mode and not isinstance(mode, dict):\n mode = ServiceMode(mode)\n\n registry, repo_name = auth.resolve_repository_name(image)\n auth_header = auth.get_config_header(self, registry)\n if auth_header:\n headers['X-Registry-Auth'] = auth_header\n data = {\n 'Name': name,\n 'Labels': labels,\n 'TaskTemplate': task_template,\n 'Mode': mode,\n 'Networks': utils.convert_service_networks(networks),\n 'EndpointSpec': endpoint_spec\n }\n\n if update_config is not None:\n data['UpdateConfig'] = update_config\n\n return self._result(\n self._post_json(url, data=data, headers=headers), True\n )\n\n @utils.minimum_version('1.24')\n @utils.check_resource('service')\n def inspect_service(self, service):\n \"\"\"\n Return information about a service.\n\n Args:\n service (str): Service name or ID\n\n Returns:\n ``True`` if successful.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n url = self._url('/services/{0}', service)\n return self._result(self._get(url), True)\n\n @utils.minimum_version('1.24')\n @utils.check_resource('task')\n def inspect_task(self, task):\n \"\"\"\n Retrieve information about a task.\n\n Args:\n task (str): Task ID\n\n Returns:\n (dict): Information about the task.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n url = self._url('/tasks/{0}', task)\n return self._result(self._get(url), True)\n\n @utils.minimum_version('1.24')\n @utils.check_resource('service')\n def remove_service(self, service):\n \"\"\"\n Stop and remove a service.\n\n Args:\n service (str): Service name or ID\n\n Returns:\n ``True`` if successful.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n\n url = self._url('/services/{0}', service)\n resp = self._delete(url)\n self._raise_for_status(resp)\n return True\n\n @utils.minimum_version('1.24')\n def services(self, filters=None):\n \"\"\"\n List services.\n\n Args:\n filters (dict): Filters to process on the nodes list. Valid\n filters: ``id`` and ``name``. 
Default: ``None``.\n\n Returns:\n A list of dictionaries containing data about each service.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n params = {\n 'filters': utils.convert_filters(filters) if filters else None\n }\n url = self._url('/services')\n return self._result(self._get(url, params=params), True)\n\n @utils.minimum_version('1.25')\n @utils.check_resource('service')\n def service_logs(self, service, details=False, follow=False, stdout=False,\n stderr=False, since=0, timestamps=False, tail='all',\n is_tty=None):\n \"\"\"\n Get log stream for a service.\n Note: This endpoint works only for services with the ``json-file``\n or ``journald`` logging drivers.\n\n Args:\n service (str): ID or name of the service\n details (bool): Show extra details provided to logs.\n Default: ``False``\n follow (bool): Keep connection open to read logs as they are\n sent by the Engine. Default: ``False``\n stdout (bool): Return logs from ``stdout``. Default: ``False``\n stderr (bool): Return logs from ``stderr``. Default: ``False``\n since (int): UNIX timestamp for the logs staring point.\n Default: 0\n timestamps (bool): Add timestamps to every log line.\n tail (string or int): Number of log lines to be returned,\n counting from the current end of the logs. Specify an\n integer or ``'all'`` to output all log lines.\n Default: ``all``\n is_tty (bool): Whether the service's :py:class:`ContainerSpec`\n enables the TTY option. If omitted, the method will query\n the Engine for the information, causing an additional\n roundtrip.\n\n Returns (generator): Logs for the service.\n \"\"\"\n params = {\n 'details': details,\n 'follow': follow,\n 'stdout': stdout,\n 'stderr': stderr,\n 'since': since,\n 'timestamps': timestamps,\n 'tail': tail\n }\n\n url = self._url('/services/{0}/logs', service)\n res = self._get(url, params=params, stream=True)\n if is_tty is None:\n is_tty = self.inspect_service(\n service\n )['Spec']['TaskTemplate']['ContainerSpec'].get('TTY', False)\n return self._get_result_tty(True, res, is_tty)\n\n @utils.minimum_version('1.24')\n def tasks(self, filters=None):\n \"\"\"\n Retrieve a list of tasks.\n\n Args:\n filters (dict): A map of filters to process on the tasks list.\n Valid filters: ``id``, ``name``, ``service``, ``node``,\n ``label`` and ``desired-state``.\n\n Returns:\n (:py:class:`list`): List of task dictionaries.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n\n params = {\n 'filters': utils.convert_filters(filters) if filters else None\n }\n url = self._url('/tasks')\n return self._result(self._get(url, params=params), True)\n\n @utils.minimum_version('1.24')\n @utils.check_resource('service')\n def update_service(self, service, version, task_template=None, name=None,\n labels=None, mode=None, update_config=None,\n networks=None, endpoint_config=None,\n endpoint_spec=None):\n \"\"\"\n Update a service.\n\n Args:\n service (string): A service identifier (either its name or service\n ID).\n version (int): The version number of the service object being\n updated. This is required to avoid conflicting writes.\n task_template (TaskTemplate): Specification of the updated task to\n start as part of the service.\n name (string): New name for the service. Optional.\n labels (dict): A map of labels to associate with the service.\n Optional.\n mode (ServiceMode): Scheduling mode for the service (replicated\n or global). 
Defaults to replicated.\n update_config (UpdateConfig): Specification for the update strategy\n of the service. Default: ``None``.\n networks (:py:class:`list`): List of network names or IDs to attach\n the service to. Default: ``None``.\n endpoint_spec (EndpointSpec): Properties that can be configured to\n access and load balance a service. Default: ``None``.\n\n Returns:\n ``True`` if successful.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n if endpoint_config is not None:\n warnings.warn(\n 'endpoint_config has been renamed to endpoint_spec.',\n DeprecationWarning\n )\n endpoint_spec = endpoint_config\n\n _check_api_features(self._version, task_template, update_config)\n\n url = self._url('/services/{0}/update', service)\n data = {}\n headers = {}\n if name is not None:\n data['Name'] = name\n if labels is not None:\n data['Labels'] = labels\n if mode is not None:\n if not isinstance(mode, dict):\n mode = ServiceMode(mode)\n data['Mode'] = mode\n if task_template is not None:\n image = task_template.get('ContainerSpec', {}).get('Image', None)\n if image is not None:\n registry, repo_name = auth.resolve_repository_name(image)\n auth_header = auth.get_config_header(self, registry)\n if auth_header:\n headers['X-Registry-Auth'] = auth_header\n data['TaskTemplate'] = task_template\n if update_config is not None:\n data['UpdateConfig'] = update_config\n\n if networks is not None:\n data['Networks'] = utils.convert_service_networks(networks)\n if endpoint_spec is not None:\n data['EndpointSpec'] = endpoint_spec\n\n resp = self._post_json(\n url, data=data, params={'version': version}, headers=headers\n )\n self._raise_for_status(resp)\n return True\n", "path": "docker/api/service.py"}], "after_files": [{"content": "import warnings\nfrom .. 
import auth, errors, utils\nfrom ..types import ServiceMode\n\n\ndef _check_api_features(version, task_template, update_config):\n if update_config is not None:\n if utils.version_lt(version, '1.25'):\n if 'MaxFailureRatio' in update_config:\n raise errors.InvalidVersion(\n 'UpdateConfig.max_failure_ratio is not supported in'\n ' API version < 1.25'\n )\n if 'Monitor' in update_config:\n raise errors.InvalidVersion(\n 'UpdateConfig.monitor is not supported in'\n ' API version < 1.25'\n )\n\n if task_template is not None:\n if 'ForceUpdate' in task_template and utils.version_lt(\n version, '1.25'):\n raise errors.InvalidVersion(\n 'force_update is not supported in API version < 1.25'\n )\n\n if task_template.get('Placement'):\n if utils.version_lt(version, '1.30'):\n if task_template['Placement'].get('Platforms'):\n raise errors.InvalidVersion(\n 'Placement.platforms is not supported in'\n ' API version < 1.30'\n )\n\n if utils.version_lt(version, '1.27'):\n if task_template['Placement'].get('Preferences'):\n raise errors.InvalidVersion(\n 'Placement.preferences is not supported in'\n ' API version < 1.27'\n )\n if task_template.get('ContainerSpec', {}).get('TTY'):\n if utils.version_lt(version, '1.25'):\n raise errors.InvalidVersion(\n 'ContainerSpec.TTY is not supported in API version < 1.25'\n )\n\n\nclass ServiceApiMixin(object):\n @utils.minimum_version('1.24')\n def create_service(\n self, task_template, name=None, labels=None, mode=None,\n update_config=None, networks=None, endpoint_config=None,\n endpoint_spec=None\n ):\n \"\"\"\n Create a service.\n\n Args:\n task_template (TaskTemplate): Specification of the task to start as\n part of the new service.\n name (string): User-defined name for the service. Optional.\n labels (dict): A map of labels to associate with the service.\n Optional.\n mode (ServiceMode): Scheduling mode for the service (replicated\n or global). Defaults to replicated.\n update_config (UpdateConfig): Specification for the update strategy\n of the service. Default: ``None``\n networks (:py:class:`list`): List of network names or IDs to attach\n the service to. Default: ``None``.\n endpoint_spec (EndpointSpec): Properties that can be configured to\n access and load balance a service. 
Default: ``None``.\n\n Returns:\n A dictionary containing an ``ID`` key for the newly created\n service.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n if endpoint_config is not None:\n warnings.warn(\n 'endpoint_config has been renamed to endpoint_spec.',\n DeprecationWarning\n )\n endpoint_spec = endpoint_config\n\n _check_api_features(self._version, task_template, update_config)\n\n url = self._url('/services/create')\n headers = {}\n image = task_template.get('ContainerSpec', {}).get('Image', None)\n if image is None:\n raise errors.DockerException(\n 'Missing mandatory Image key in ContainerSpec'\n )\n if mode and not isinstance(mode, dict):\n mode = ServiceMode(mode)\n\n registry, repo_name = auth.resolve_repository_name(image)\n auth_header = auth.get_config_header(self, registry)\n if auth_header:\n headers['X-Registry-Auth'] = auth_header\n data = {\n 'Name': name,\n 'Labels': labels,\n 'TaskTemplate': task_template,\n 'Mode': mode,\n 'Networks': utils.convert_service_networks(networks),\n 'EndpointSpec': endpoint_spec\n }\n\n if update_config is not None:\n data['UpdateConfig'] = update_config\n\n return self._result(\n self._post_json(url, data=data, headers=headers), True\n )\n\n @utils.minimum_version('1.24')\n @utils.check_resource('service')\n def inspect_service(self, service):\n \"\"\"\n Return information about a service.\n\n Args:\n service (str): Service name or ID\n\n Returns:\n ``True`` if successful.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n url = self._url('/services/{0}', service)\n return self._result(self._get(url), True)\n\n @utils.minimum_version('1.24')\n @utils.check_resource('task')\n def inspect_task(self, task):\n \"\"\"\n Retrieve information about a task.\n\n Args:\n task (str): Task ID\n\n Returns:\n (dict): Information about the task.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n url = self._url('/tasks/{0}', task)\n return self._result(self._get(url), True)\n\n @utils.minimum_version('1.24')\n @utils.check_resource('service')\n def remove_service(self, service):\n \"\"\"\n Stop and remove a service.\n\n Args:\n service (str): Service name or ID\n\n Returns:\n ``True`` if successful.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n\n url = self._url('/services/{0}', service)\n resp = self._delete(url)\n self._raise_for_status(resp)\n return True\n\n @utils.minimum_version('1.24')\n def services(self, filters=None):\n \"\"\"\n List services.\n\n Args:\n filters (dict): Filters to process on the nodes list. Valid\n filters: ``id`` and ``name``. 
Default: ``None``.\n\n Returns:\n A list of dictionaries containing data about each service.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n params = {\n 'filters': utils.convert_filters(filters) if filters else None\n }\n url = self._url('/services')\n return self._result(self._get(url, params=params), True)\n\n @utils.minimum_version('1.25')\n @utils.check_resource('service')\n def service_logs(self, service, details=False, follow=False, stdout=False,\n stderr=False, since=0, timestamps=False, tail='all',\n is_tty=None):\n \"\"\"\n Get log stream for a service.\n Note: This endpoint works only for services with the ``json-file``\n or ``journald`` logging drivers.\n\n Args:\n service (str): ID or name of the service\n details (bool): Show extra details provided to logs.\n Default: ``False``\n follow (bool): Keep connection open to read logs as they are\n sent by the Engine. Default: ``False``\n stdout (bool): Return logs from ``stdout``. Default: ``False``\n stderr (bool): Return logs from ``stderr``. Default: ``False``\n since (int): UNIX timestamp for the logs staring point.\n Default: 0\n timestamps (bool): Add timestamps to every log line.\n tail (string or int): Number of log lines to be returned,\n counting from the current end of the logs. Specify an\n integer or ``'all'`` to output all log lines.\n Default: ``all``\n is_tty (bool): Whether the service's :py:class:`ContainerSpec`\n enables the TTY option. If omitted, the method will query\n the Engine for the information, causing an additional\n roundtrip.\n\n Returns (generator): Logs for the service.\n \"\"\"\n params = {\n 'details': details,\n 'follow': follow,\n 'stdout': stdout,\n 'stderr': stderr,\n 'since': since,\n 'timestamps': timestamps,\n 'tail': tail\n }\n\n url = self._url('/services/{0}/logs', service)\n res = self._get(url, params=params, stream=True)\n if is_tty is None:\n is_tty = self.inspect_service(\n service\n )['Spec']['TaskTemplate']['ContainerSpec'].get('TTY', False)\n return self._get_result_tty(True, res, is_tty)\n\n @utils.minimum_version('1.24')\n def tasks(self, filters=None):\n \"\"\"\n Retrieve a list of tasks.\n\n Args:\n filters (dict): A map of filters to process on the tasks list.\n Valid filters: ``id``, ``name``, ``service``, ``node``,\n ``label`` and ``desired-state``.\n\n Returns:\n (:py:class:`list`): List of task dictionaries.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n\n params = {\n 'filters': utils.convert_filters(filters) if filters else None\n }\n url = self._url('/tasks')\n return self._result(self._get(url, params=params), True)\n\n @utils.minimum_version('1.24')\n @utils.check_resource('service')\n def update_service(self, service, version, task_template=None, name=None,\n labels=None, mode=None, update_config=None,\n networks=None, endpoint_config=None,\n endpoint_spec=None):\n \"\"\"\n Update a service.\n\n Args:\n service (string): A service identifier (either its name or service\n ID).\n version (int): The version number of the service object being\n updated. This is required to avoid conflicting writes.\n task_template (TaskTemplate): Specification of the updated task to\n start as part of the service.\n name (string): New name for the service. Optional.\n labels (dict): A map of labels to associate with the service.\n Optional.\n mode (ServiceMode): Scheduling mode for the service (replicated\n or global). 
Defaults to replicated.\n update_config (UpdateConfig): Specification for the update strategy\n of the service. Default: ``None``.\n networks (:py:class:`list`): List of network names or IDs to attach\n the service to. Default: ``None``.\n endpoint_spec (EndpointSpec): Properties that can be configured to\n access and load balance a service. Default: ``None``.\n\n Returns:\n ``True`` if successful.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n if endpoint_config is not None:\n warnings.warn(\n 'endpoint_config has been renamed to endpoint_spec.',\n DeprecationWarning\n )\n endpoint_spec = endpoint_config\n\n _check_api_features(self._version, task_template, update_config)\n\n url = self._url('/services/{0}/update', service)\n data = {}\n headers = {}\n if name is not None:\n data['Name'] = name\n if labels is not None:\n data['Labels'] = labels\n if mode is not None:\n if not isinstance(mode, dict):\n mode = ServiceMode(mode)\n data['Mode'] = mode\n if task_template is not None:\n image = task_template.get('ContainerSpec', {}).get('Image', None)\n if image is not None:\n registry, repo_name = auth.resolve_repository_name(image)\n auth_header = auth.get_config_header(self, registry)\n if auth_header:\n headers['X-Registry-Auth'] = auth_header\n data['TaskTemplate'] = task_template\n if update_config is not None:\n data['UpdateConfig'] = update_config\n\n if networks is not None:\n data['Networks'] = utils.convert_service_networks(networks)\n if endpoint_spec is not None:\n data['EndpointSpec'] = endpoint_spec\n\n resp = self._post_json(\n url, data=data, params={'version': version}, headers=headers\n )\n self._raise_for_status(resp)\n return True\n", "path": "docker/api/service.py"}]}
| 3,902 | 126 |
gh_patches_debug_16923
|
rasdani/github-patches
|
git_diff
|
Mailu__Mailu-1130
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unnecessary assignment on `HOST_WEBMAIL`
We came across another piece of garbage:
https://github.com/Mailu/Mailu/blob/f3f0c3190be9ab9b53a29c5b0326fc9a4602df46/core/nginx/config.py#L19
https://github.com/Mailu/Mailu/blob/f3f0c3190be9ab9b53a29c5b0326fc9a4602df46/core/nginx/config.py#L22
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/nginx/config.py`
Content:
```
1 #!/usr/bin/python3
2
3 import os
4 import logging as log
5 import sys
6 from socrate import system, conf
7
8 args = os.environ.copy()
9
10 log.basicConfig(stream=sys.stderr, level=args.get("LOG_LEVEL", "WARNING"))
11
12 # Get the first DNS server
13 with open("/etc/resolv.conf") as handle:
14 content = handle.read().split()
15 args["RESOLVER"] = content[content.index("nameserver") + 1]
16
17 args["ADMIN_ADDRESS"] = system.resolve_address(args.get("HOST_ADMIN", "admin"))
18 args["ANTISPAM_ADDRESS"] = system.resolve_address(args.get("HOST_ANTISPAM", "antispam:11334"))
19 args["HOST_WEBMAIL"] = args.get("HOST_WEBMAIL", "webmail")
20 if args["WEBMAIL"] != "none":
21 args["WEBMAIL_ADDRESS"] = system.resolve_address(args.get("HOST_WEBMAIL"))
22 args["HOST_WEBDAV"] = args.get("HOST_WEBDAV", "webdav:5232")
23 if args["WEBDAV"] != "none":
24 args["WEBDAV_ADDRESS"] = system.resolve_address(args.get("HOST_WEBDAV"))
25
26 # TLS configuration
27 cert_name = os.getenv("TLS_CERT_FILENAME", default="cert.pem")
28 keypair_name = os.getenv("TLS_KEYPAIR_FILENAME", default="key.pem")
29 args["TLS"] = {
30 "cert": ("/certs/%s" % cert_name, "/certs/%s" % keypair_name),
31 "letsencrypt": ("/certs/letsencrypt/live/mailu/fullchain.pem",
32 "/certs/letsencrypt/live/mailu/privkey.pem"),
33 "mail": ("/certs/%s" % cert_name, "/certs/%s" % keypair_name),
34 "mail-letsencrypt": ("/certs/letsencrypt/live/mailu/fullchain.pem",
35 "/certs/letsencrypt/live/mailu/privkey.pem"),
36 "notls": None
37 }[args["TLS_FLAVOR"]]
38
39 if args["TLS"] and not all(os.path.exists(file_path) for file_path in args["TLS"]):
40 print("Missing cert or key file, disabling TLS")
41 args["TLS_ERROR"] = "yes"
42
43 # Build final configuration paths
44 conf.jinja("/conf/tls.conf", args, "/etc/nginx/tls.conf")
45 conf.jinja("/conf/proxy.conf", args, "/etc/nginx/proxy.conf")
46 conf.jinja("/conf/nginx.conf", args, "/etc/nginx/nginx.conf")
47 if os.path.exists("/var/run/nginx.pid"):
48 os.system("nginx -s reload")
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/nginx/config.py b/core/nginx/config.py
--- a/core/nginx/config.py
+++ b/core/nginx/config.py
@@ -16,12 +16,10 @@
args["ADMIN_ADDRESS"] = system.resolve_address(args.get("HOST_ADMIN", "admin"))
args["ANTISPAM_ADDRESS"] = system.resolve_address(args.get("HOST_ANTISPAM", "antispam:11334"))
-args["HOST_WEBMAIL"] = args.get("HOST_WEBMAIL", "webmail")
if args["WEBMAIL"] != "none":
- args["WEBMAIL_ADDRESS"] = system.resolve_address(args.get("HOST_WEBMAIL"))
-args["HOST_WEBDAV"] = args.get("HOST_WEBDAV", "webdav:5232")
+ args["WEBMAIL_ADDRESS"] = system.resolve_address(args.get("HOST_WEBMAIL", "webmail"))
if args["WEBDAV"] != "none":
- args["WEBDAV_ADDRESS"] = system.resolve_address(args.get("HOST_WEBDAV"))
+ args["WEBDAV_ADDRESS"] = system.resolve_address(args.get("HOST_WEBDAV", "webdav:5232"))
# TLS configuration
cert_name = os.getenv("TLS_CERT_FILENAME", default="cert.pem")
|
{"golden_diff": "diff --git a/core/nginx/config.py b/core/nginx/config.py\n--- a/core/nginx/config.py\n+++ b/core/nginx/config.py\n@@ -16,12 +16,10 @@\n \n args[\"ADMIN_ADDRESS\"] = system.resolve_address(args.get(\"HOST_ADMIN\", \"admin\"))\n args[\"ANTISPAM_ADDRESS\"] = system.resolve_address(args.get(\"HOST_ANTISPAM\", \"antispam:11334\"))\n-args[\"HOST_WEBMAIL\"] = args.get(\"HOST_WEBMAIL\", \"webmail\")\n if args[\"WEBMAIL\"] != \"none\":\n- args[\"WEBMAIL_ADDRESS\"] = system.resolve_address(args.get(\"HOST_WEBMAIL\"))\n-args[\"HOST_WEBDAV\"] = args.get(\"HOST_WEBDAV\", \"webdav:5232\")\n+ args[\"WEBMAIL_ADDRESS\"] = system.resolve_address(args.get(\"HOST_WEBMAIL\", \"webmail\"))\n if args[\"WEBDAV\"] != \"none\":\n- args[\"WEBDAV_ADDRESS\"] = system.resolve_address(args.get(\"HOST_WEBDAV\"))\n+ args[\"WEBDAV_ADDRESS\"] = system.resolve_address(args.get(\"HOST_WEBDAV\", \"webdav:5232\"))\n \n # TLS configuration\n cert_name = os.getenv(\"TLS_CERT_FILENAME\", default=\"cert.pem\")\n", "issue": "Unnecessary assignment on `HOST_WEBMAIL`\nWe came across another piece of garbage:\r\n\r\nhttps://github.com/Mailu/Mailu/blob/f3f0c3190be9ab9b53a29c5b0326fc9a4602df46/core/nginx/config.py#L19\r\n\r\nhttps://github.com/Mailu/Mailu/blob/f3f0c3190be9ab9b53a29c5b0326fc9a4602df46/core/nginx/config.py#L22\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport logging as log\nimport sys\nfrom socrate import system, conf\n\nargs = os.environ.copy()\n\nlog.basicConfig(stream=sys.stderr, level=args.get(\"LOG_LEVEL\", \"WARNING\"))\n\n# Get the first DNS server\nwith open(\"/etc/resolv.conf\") as handle:\n content = handle.read().split()\n args[\"RESOLVER\"] = content[content.index(\"nameserver\") + 1]\n\nargs[\"ADMIN_ADDRESS\"] = system.resolve_address(args.get(\"HOST_ADMIN\", \"admin\"))\nargs[\"ANTISPAM_ADDRESS\"] = system.resolve_address(args.get(\"HOST_ANTISPAM\", \"antispam:11334\"))\nargs[\"HOST_WEBMAIL\"] = args.get(\"HOST_WEBMAIL\", \"webmail\")\nif args[\"WEBMAIL\"] != \"none\":\n args[\"WEBMAIL_ADDRESS\"] = system.resolve_address(args.get(\"HOST_WEBMAIL\"))\nargs[\"HOST_WEBDAV\"] = args.get(\"HOST_WEBDAV\", \"webdav:5232\")\nif args[\"WEBDAV\"] != \"none\":\n args[\"WEBDAV_ADDRESS\"] = system.resolve_address(args.get(\"HOST_WEBDAV\"))\n\n# TLS configuration\ncert_name = os.getenv(\"TLS_CERT_FILENAME\", default=\"cert.pem\")\nkeypair_name = os.getenv(\"TLS_KEYPAIR_FILENAME\", default=\"key.pem\")\nargs[\"TLS\"] = {\n \"cert\": (\"/certs/%s\" % cert_name, \"/certs/%s\" % keypair_name),\n \"letsencrypt\": (\"/certs/letsencrypt/live/mailu/fullchain.pem\",\n \"/certs/letsencrypt/live/mailu/privkey.pem\"),\n \"mail\": (\"/certs/%s\" % cert_name, \"/certs/%s\" % keypair_name),\n \"mail-letsencrypt\": (\"/certs/letsencrypt/live/mailu/fullchain.pem\",\n \"/certs/letsencrypt/live/mailu/privkey.pem\"),\n \"notls\": None\n}[args[\"TLS_FLAVOR\"]]\n\nif args[\"TLS\"] and not all(os.path.exists(file_path) for file_path in args[\"TLS\"]):\n print(\"Missing cert or key file, disabling TLS\")\n args[\"TLS_ERROR\"] = \"yes\"\n\n# Build final configuration paths\nconf.jinja(\"/conf/tls.conf\", args, \"/etc/nginx/tls.conf\")\nconf.jinja(\"/conf/proxy.conf\", args, \"/etc/nginx/proxy.conf\")\nconf.jinja(\"/conf/nginx.conf\", args, \"/etc/nginx/nginx.conf\")\nif os.path.exists(\"/var/run/nginx.pid\"):\n os.system(\"nginx -s reload\")\n", "path": "core/nginx/config.py"}], "after_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport logging as log\nimport sys\nfrom socrate import system, 
conf\n\nargs = os.environ.copy()\n\nlog.basicConfig(stream=sys.stderr, level=args.get(\"LOG_LEVEL\", \"WARNING\"))\n\n# Get the first DNS server\nwith open(\"/etc/resolv.conf\") as handle:\n content = handle.read().split()\n args[\"RESOLVER\"] = content[content.index(\"nameserver\") + 1]\n\nargs[\"ADMIN_ADDRESS\"] = system.resolve_address(args.get(\"HOST_ADMIN\", \"admin\"))\nargs[\"ANTISPAM_ADDRESS\"] = system.resolve_address(args.get(\"HOST_ANTISPAM\", \"antispam:11334\"))\nif args[\"WEBMAIL\"] != \"none\":\n args[\"WEBMAIL_ADDRESS\"] = system.resolve_address(args.get(\"HOST_WEBMAIL\", \"webmail\"))\nif args[\"WEBDAV\"] != \"none\":\n args[\"WEBDAV_ADDRESS\"] = system.resolve_address(args.get(\"HOST_WEBDAV\", \"webdav:5232\"))\n\n# TLS configuration\ncert_name = os.getenv(\"TLS_CERT_FILENAME\", default=\"cert.pem\")\nkeypair_name = os.getenv(\"TLS_KEYPAIR_FILENAME\", default=\"key.pem\")\nargs[\"TLS\"] = {\n \"cert\": (\"/certs/%s\" % cert_name, \"/certs/%s\" % keypair_name),\n \"letsencrypt\": (\"/certs/letsencrypt/live/mailu/fullchain.pem\",\n \"/certs/letsencrypt/live/mailu/privkey.pem\"),\n \"mail\": (\"/certs/%s\" % cert_name, \"/certs/%s\" % keypair_name),\n \"mail-letsencrypt\": (\"/certs/letsencrypt/live/mailu/fullchain.pem\",\n \"/certs/letsencrypt/live/mailu/privkey.pem\"),\n \"notls\": None\n}[args[\"TLS_FLAVOR\"]]\n\nif args[\"TLS\"] and not all(os.path.exists(file_path) for file_path in args[\"TLS\"]):\n print(\"Missing cert or key file, disabling TLS\")\n args[\"TLS_ERROR\"] = \"yes\"\n\n# Build final configuration paths\nconf.jinja(\"/conf/tls.conf\", args, \"/etc/nginx/tls.conf\")\nconf.jinja(\"/conf/proxy.conf\", args, \"/etc/nginx/proxy.conf\")\nconf.jinja(\"/conf/nginx.conf\", args, \"/etc/nginx/nginx.conf\")\nif os.path.exists(\"/var/run/nginx.pid\"):\n os.system(\"nginx -s reload\")\n", "path": "core/nginx/config.py"}]}
| 1,027 | 272 |
gh_patches_debug_15711
|
rasdani/github-patches
|
git_diff
|
translate__pootle-6087
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Delete a TP from an old style project and the Project page stays cached
1. Create a new TP
2. TP is available
3. Delete TP
4. Project page still shows project listed - though it should be gone
5. Going to supposedly deleted TP and we get 404
We're not expiring cache when a TP is deleted.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_revision/receivers.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django.db.models.signals import post_save, pre_delete
10 from django.dispatch import receiver
11
12 from pootle.core.delegate import revision_updater
13 from pootle_app.models import Directory
14 from pootle_data.models import StoreData
15 from pootle_store.models import Store
16
17
18 @receiver(post_save, sender=StoreData)
19 def handle_storedata_save(**kwargs):
20 revision_updater.get(Store)(
21 context=kwargs["instance"].store).update(keys=["stats", "checks"])
22
23
24 @receiver(post_save, sender=Directory)
25 def handle_directory_save(**kwargs):
26 context = (
27 kwargs["instance"].parent
28 if kwargs.get("created")
29 else kwargs["instance"])
30 revision_updater.get(Directory)(
31 context=context).update(keys=["stats", "checks"])
32
33
34 @receiver(pre_delete, sender=Directory)
35 def handle_directory_delete(**kwargs):
36 revision_updater.get(Directory)(
37 context=kwargs["instance"].parent).update(keys=["stats", "checks"])
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pootle/apps/pootle_revision/receivers.py b/pootle/apps/pootle_revision/receivers.py
--- a/pootle/apps/pootle_revision/receivers.py
+++ b/pootle/apps/pootle_revision/receivers.py
@@ -13,6 +13,7 @@
from pootle_app.models import Directory
from pootle_data.models import StoreData
from pootle_store.models import Store
+from pootle_translationproject.models import TranslationProject
@receiver(post_save, sender=StoreData)
@@ -35,3 +36,9 @@
def handle_directory_delete(**kwargs):
revision_updater.get(Directory)(
context=kwargs["instance"].parent).update(keys=["stats", "checks"])
+
+
+@receiver(pre_delete, sender=TranslationProject)
+def handle_tp_delete(**kwargs):
+ revision_updater.get(Directory)(
+ context=kwargs["instance"].directory).update(keys=["stats", "checks"])
|
{"golden_diff": "diff --git a/pootle/apps/pootle_revision/receivers.py b/pootle/apps/pootle_revision/receivers.py\n--- a/pootle/apps/pootle_revision/receivers.py\n+++ b/pootle/apps/pootle_revision/receivers.py\n@@ -13,6 +13,7 @@\n from pootle_app.models import Directory\n from pootle_data.models import StoreData\n from pootle_store.models import Store\n+from pootle_translationproject.models import TranslationProject\n \n \n @receiver(post_save, sender=StoreData)\n@@ -35,3 +36,9 @@\n def handle_directory_delete(**kwargs):\n revision_updater.get(Directory)(\n context=kwargs[\"instance\"].parent).update(keys=[\"stats\", \"checks\"])\n+\n+\n+@receiver(pre_delete, sender=TranslationProject)\n+def handle_tp_delete(**kwargs):\n+ revision_updater.get(Directory)(\n+ context=kwargs[\"instance\"].directory).update(keys=[\"stats\", \"checks\"])\n", "issue": "Delete a TP from an old style project and the Project page stays cached\n1. Create a new TP\r\n2. TP is available\r\n3. Delete TP\r\n4. Project page still shows project listed - though it should be gone\r\n5. Going to supposedly deleted TP and we get 404\r\n\r\nWe're not expiring cache when a TP is deleted.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.db.models.signals import post_save, pre_delete\nfrom django.dispatch import receiver\n\nfrom pootle.core.delegate import revision_updater\nfrom pootle_app.models import Directory\nfrom pootle_data.models import StoreData\nfrom pootle_store.models import Store\n\n\n@receiver(post_save, sender=StoreData)\ndef handle_storedata_save(**kwargs):\n revision_updater.get(Store)(\n context=kwargs[\"instance\"].store).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(post_save, sender=Directory)\ndef handle_directory_save(**kwargs):\n context = (\n kwargs[\"instance\"].parent\n if kwargs.get(\"created\")\n else kwargs[\"instance\"])\n revision_updater.get(Directory)(\n context=context).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(pre_delete, sender=Directory)\ndef handle_directory_delete(**kwargs):\n revision_updater.get(Directory)(\n context=kwargs[\"instance\"].parent).update(keys=[\"stats\", \"checks\"])\n", "path": "pootle/apps/pootle_revision/receivers.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.db.models.signals import post_save, pre_delete\nfrom django.dispatch import receiver\n\nfrom pootle.core.delegate import revision_updater\nfrom pootle_app.models import Directory\nfrom pootle_data.models import StoreData\nfrom pootle_store.models import Store\nfrom pootle_translationproject.models import TranslationProject\n\n\n@receiver(post_save, sender=StoreData)\ndef handle_storedata_save(**kwargs):\n revision_updater.get(Store)(\n context=kwargs[\"instance\"].store).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(post_save, sender=Directory)\ndef handle_directory_save(**kwargs):\n context = (\n kwargs[\"instance\"].parent\n if kwargs.get(\"created\")\n else kwargs[\"instance\"])\n revision_updater.get(Directory)(\n context=context).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(pre_delete, sender=Directory)\ndef handle_directory_delete(**kwargs):\n revision_updater.get(Directory)(\n context=kwargs[\"instance\"].parent).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(pre_delete, sender=TranslationProject)\ndef handle_tp_delete(**kwargs):\n revision_updater.get(Directory)(\n context=kwargs[\"instance\"].directory).update(keys=[\"stats\", \"checks\"])\n", "path": "pootle/apps/pootle_revision/receivers.py"}]}
| 686 | 215 |
gh_patches_debug_41037
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-3716
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Yet another building on PyPy(5.8) issue
Hi,
I'm trying to build cryptography (tried both 1.9 and master) against PyPy 5.8beta0.
I'm running on osx sierra.
I've looked at several issues here, and have tried several suggested solutions, but none worked for me 😞
I'm trying to link against brew openssl.
`brew list --versions openssl` -> `openssl 1.0.2j 1.0.2k 1.0.2l`
I've built pypy's openssl module without any special problem
```
env PYTHONPATH=/Users/omerba/Workspace/pypy \
DYLD_LIBRARY_PATH="/Users/omerba/anaconda/lib" \
CFLAGS="-I/usr/local/opt/openssl/include" \
LDFLAGS="-L/usr/local/opt/openssl/lib" \
/Users/omerba/Workspace/pypy/pypy3-c pypy/tool/build_cffi_imports.py
```
But when i try to build cryptography (in cryptography's dir):
```
env PYTHONPATH=/Users/omerba/Workspace/pypy \
DYLD_LIBRARY_PATH="/Users/omerba/anaconda/lib" \
CFLAGS="-I/usr/local/opt/openssl/include" \
LDFLAGS="-L/usr/local/opt/openssl/lib" \
/Users/omerba/Workspace/pypy/pypy3-c setup.py install
```
I get:
```
running install
running bdist_egg
running egg_info
writing src/cryptography.egg-info/PKG-INFO
writing dependency_links to src/cryptography.egg-info/dependency_links.txt
writing entry points to src/cryptography.egg-info/entry_points.txt
writing requirements to src/cryptography.egg-info/requires.txt
writing top-level names to src/cryptography.egg-info/top_level.txt
reading manifest file 'src/cryptography.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
no previously-included directories found matching 'docs/_build'
warning: no previously-included files matching '*' found under directory 'vectors'
writing manifest file 'src/cryptography.egg-info/SOURCES.txt'
installing library code to build/bdist.macosx-10.12-x86_64/egg
running install_lib
running build_py
running build_ext
generating cffi module 'build/temp.macosx-10.12-x86_64-3.5/_padding.c'
already up-to-date
generating cffi module 'build/temp.macosx-10.12-x86_64-3.5/_constant_time.c'
already up-to-date
generating cffi module 'build/temp.macosx-10.12-x86_64-3.5/_openssl.c'
building '_openssl' extension
cc -pthread -DNDEBUG -O2 -I/usr/local/opt/openssl/include -fPIC -I/Users/omerba/Workspace/pypy/include -c build/temp.macosx-10.12-x86_64-3.5/_openssl.c -o build/temp.macosx-10.12-x86_64-3.5/build/temp.macosx-10.12-x86_64-3.5/_openssl.o
build/temp.macosx-10.12-x86_64-3.5/_openssl.c:2503:9: warning: comparison of function 'getentropy' not equal to a null pointer is always true [-Wtautological-pointer-compare]
if (getentropy != NULL) {
^~~~~~~~~~ ~~~~
build/temp.macosx-10.12-x86_64-3.5/_openssl.c:2503:9: note: prefix with the address-of operator to silence this warning
if (getentropy != NULL) {
^
&
build/temp.macosx-10.12-x86_64-3.5/_openssl.c:3454:22: warning: comparison of constant 1152921504606846975 with expression of type 'unsigned int' is always false [-Wtautological-constant-out-of-range-compare]
_ssl_locks = PyMem_New(PyThread_type_lock, _ssl_locks_count);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/omerba/Workspace/pypy/include/pymem.h:38:10: note: expanded from macro 'PyMem_New'
( ((n) > PY_SSIZE_T_MAX / sizeof(type)) ? NULL : \
~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
build/temp.macosx-10.12-x86_64-3.5/_openssl.c:74852:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
3 warnings generated.
cc -pthread -shared -L/usr/local/opt/openssl/lib -I/usr/local/opt/openssl/include build/temp.macosx-10.12-x86_64-3.5/build/temp.macosx-10.12-x86_64-3.5/_openssl.o -lssl -lcrypto -o build/lib.macosx-10.12-x86_64-3.5/cryptography/hazmat/bindings/_openssl.pypy3-58-x86_64-darwin.so
clang: warning: argument unused during compilation: '-pthread' [-Wunused-command-line-argument]
Undefined symbols for architecture x86_64:
"_PyPyErr_NoMemory", referenced from:
__setup_ssl_threads in _openssl.o
"_PyPyMem_Free", referenced from:
__setup_ssl_threads in _openssl.o
"_PyPyMem_Malloc", referenced from:
__setup_ssl_threads in _openssl.o
"_PyPyThread_acquire_lock", referenced from:
__ssl_thread_locking_function in _openssl.o
"_PyPyThread_allocate_lock", referenced from:
__setup_ssl_threads in _openssl.o
"_PyPyThread_free_lock", referenced from:
__setup_ssl_threads in _openssl.o
"_PyPyThread_release_lock", referenced from:
__ssl_thread_locking_function in _openssl.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command 'cc' failed with exit status 1
```
Googling the issue only leads to [this](https://bitbucket.org/pypy/pypy/issues/2538/_ssl_buildpy-fails-on-macos-with-the-py35) exact same issue on the PyPy repo.
But the PyPy team seemed to have resolved it by making some changes to their cffi backend.
I'll admit that I tried to shamelessly copy these changes to cryptography's _cffi_src dir - which made the package install successfully, but then when I actually tried to use it:
`from cryptography.hazmat.backends import default_backend, openssl`
it blows up..
```
AttributeErrorTraceback (most recent call last)
<ipython-input-5-cc19c86b2edb> in <module>()
1 from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
----> 2 from cryptography.hazmat.backends import default_backend, openssl
~/Workspace/pypy/site-packages/cryptography-1.9-py3.5-macosx-10.12-x86_64.egg/cryptography/hazmat/backends/openssl/__init__.py in <module>()
5 from __future__ import absolute_import, division, print_function
6
----> 7 from cryptography.hazmat.backends.openssl.backend import backend
8
9
~/Workspace/pypy/site-packages/cryptography-1.9-py3.5-macosx-10.12-x86_64.egg/cryptography/hazmat/backends/openssl/backend.py in <module>()
47 _CertificateSigningRequest, _RevokedCertificate
48 )
---> 49 from cryptography.hazmat.bindings.openssl import binding
50 from cryptography.hazmat.primitives import hashes, serialization
51 from cryptography.hazmat.primitives.asymmetric import dsa, ec, rsa
~/Workspace/pypy/site-packages/cryptography-1.9-py3.5-macosx-10.12-x86_64.egg/cryptography/hazmat/bindings/openssl/binding.py in <module>()
154 # condition registering the OpenSSL locks. On Python 3.4+ the import lock
155 # is per module so this approach will not work.
--> 156 Binding.init_static_locks()
~/Workspace/pypy/site-packages/cryptography-1.9-py3.5-macosx-10.12-x86_64.egg/cryptography/hazmat/bindings/openssl/binding.py in init_static_locks(cls)
135 def init_static_locks(cls):
136 with cls._lock_init_lock:
--> 137 cls._ensure_ffi_initialized()
138 # Use Python's implementation if available, importing _ssl triggers
139 # the setup for this.
~/Workspace/pypy/site-packages/cryptography-1.9-py3.5-macosx-10.12-x86_64.egg/cryptography/hazmat/bindings/openssl/binding.py in _ensure_ffi_initialized(cls)
122 with cls._init_lock:
123 if not cls._lib_loaded:
--> 124 cls.lib = build_conditional_library(lib, CONDITIONAL_NAMES)
125 cls._lib_loaded = True
126 # initialize the SSL library
~/Workspace/pypy/site-packages/cryptography-1.9-py3.5-macosx-10.12-x86_64.egg/cryptography/hazmat/bindings/openssl/binding.py in build_conditional_library(lib, conditional_names)
82 excluded_names = set()
83 for condition, names in conditional_names.items():
---> 84 if not getattr(lib, condition):
85 excluded_names |= set(names)
86
AttributeError: cffi library 'cryptography.hazmat.bindings._openssl' has no function, constant or global variable named 'Cryptography_HAS_DTLS'
```
Thanks for your work guys!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/_cffi_src/openssl/callbacks.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 INCLUDES = """
8 #include <openssl/ssl.h>
9 #include <openssl/x509.h>
10 #include <openssl/x509_vfy.h>
11 #include <openssl/crypto.h>
12
13 #include <pythread.h>
14 """
15
16 TYPES = """
17 typedef struct {
18 char *password;
19 int length;
20 int called;
21 int error;
22 int maxsize;
23 } CRYPTOGRAPHY_PASSWORD_DATA;
24 """
25
26 FUNCTIONS = """
27 int _setup_ssl_threads(void);
28 int Cryptography_pem_password_cb(char *, int, int, void *);
29 """
30
31 MACROS = """
32 """
33
34 CUSTOMIZATIONS = """
35 /* This code is derived from the locking code found in the Python _ssl module's
36 locking callback for OpenSSL.
37
38 Copyright 2001-2016 Python Software Foundation; All Rights Reserved.
39 */
40
41 static unsigned int _ssl_locks_count = 0;
42 static PyThread_type_lock *_ssl_locks = NULL;
43
44 static void _ssl_thread_locking_function(int mode, int n, const char *file,
45 int line) {
46 /* this function is needed to perform locking on shared data
47 structures. (Note that OpenSSL uses a number of global data
48 structures that will be implicitly shared whenever multiple
49 threads use OpenSSL.) Multi-threaded applications will
50 crash at random if it is not set.
51
52 locking_function() must be able to handle up to
53 CRYPTO_num_locks() different mutex locks. It sets the n-th
54 lock if mode & CRYPTO_LOCK, and releases it otherwise.
55
56 file and line are the file number of the function setting the
57 lock. They can be useful for debugging.
58 */
59
60 if ((_ssl_locks == NULL) ||
61 (n < 0) || ((unsigned)n >= _ssl_locks_count)) {
62 return;
63 }
64
65 if (mode & CRYPTO_LOCK) {
66 PyThread_acquire_lock(_ssl_locks[n], 1);
67 } else {
68 PyThread_release_lock(_ssl_locks[n]);
69 }
70 }
71
72 int _setup_ssl_threads(void) {
73 unsigned int i;
74
75 if (_ssl_locks == NULL) {
76 _ssl_locks_count = CRYPTO_num_locks();
77 _ssl_locks = PyMem_New(PyThread_type_lock, _ssl_locks_count);
78 if (_ssl_locks == NULL) {
79 PyErr_NoMemory();
80 return 0;
81 }
82 memset(_ssl_locks, 0, sizeof(PyThread_type_lock) * _ssl_locks_count);
83 for (i = 0; i < _ssl_locks_count; i++) {
84 _ssl_locks[i] = PyThread_allocate_lock();
85 if (_ssl_locks[i] == NULL) {
86 unsigned int j;
87 for (j = 0; j < i; j++) {
88 PyThread_free_lock(_ssl_locks[j]);
89 }
90 PyMem_Free(_ssl_locks);
91 return 0;
92 }
93 }
94 CRYPTO_set_locking_callback(_ssl_thread_locking_function);
95 }
96 return 1;
97 }
98
99 typedef struct {
100 char *password;
101 int length;
102 int called;
103 int error;
104 int maxsize;
105 } CRYPTOGRAPHY_PASSWORD_DATA;
106
107 int Cryptography_pem_password_cb(char *buf, int size,
108 int rwflag, void *userdata) {
109 /* The password cb is only invoked if OpenSSL decides the private
110 key is encrypted. So this path only occurs if it needs a password */
111 CRYPTOGRAPHY_PASSWORD_DATA *st = (CRYPTOGRAPHY_PASSWORD_DATA *)userdata;
112 st->called += 1;
113 st->maxsize = size;
114 if (st->length == 0) {
115 st->error = -1;
116 return 0;
117 } else if (st->length < size) {
118 memcpy(buf, st->password, st->length);
119 return st->length;
120 } else {
121 st->error = -2;
122 return 0;
123 }
124 }
125 """
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/_cffi_src/openssl/callbacks.py b/src/_cffi_src/openssl/callbacks.py
--- a/src/_cffi_src/openssl/callbacks.py
+++ b/src/_cffi_src/openssl/callbacks.py
@@ -10,7 +10,13 @@
#include <openssl/x509_vfy.h>
#include <openssl/crypto.h>
-#include <pythread.h>
+#ifdef _WIN32
+#include <Windows.h>
+#else
+#include <stdio.h>
+#include <stdlib.h>
+#include <pthread.h>
+#endif
"""
TYPES = """
@@ -36,10 +42,47 @@
locking callback for OpenSSL.
Copyright 2001-2016 Python Software Foundation; All Rights Reserved.
+
+ It has been subsequently modified to use cross platform locking without
+ using CPython APIs by Armin Rigo of the PyPy project.
*/
+#ifdef _WIN32
+typedef CRITICAL_SECTION Cryptography_mutex;
+static __inline void cryptography_mutex_init(Cryptography_mutex *mutex) {
+ InitializeCriticalSection(mutex);
+}
+static __inline void cryptography_mutex_lock(Cryptography_mutex *mutex) {
+ EnterCriticalSection(mutex);
+}
+static __inline void cryptography_mutex_unlock(Cryptography_mutex *mutex) {
+ LeaveCriticalSection(mutex);
+}
+#else
+typedef pthread_mutex_t Cryptography_mutex;
+#define ASSERT_STATUS(call) \
+ if ((call) != 0) { \
+ perror("Fatal error in callback initialization: " #call); \
+ abort(); \
+ }
+static inline void cryptography_mutex_init(Cryptography_mutex *mutex) {
+#if !defined(pthread_mutexattr_default)
+# define pthread_mutexattr_default ((pthread_mutexattr_t *)NULL)
+#endif
+ ASSERT_STATUS(pthread_mutex_init(mutex, pthread_mutexattr_default));
+}
+static inline void cryptography_mutex_lock(Cryptography_mutex *mutex) {
+ ASSERT_STATUS(pthread_mutex_lock(mutex));
+}
+static inline void cryptography_mutex_unlock(Cryptography_mutex *mutex) {
+ ASSERT_STATUS(pthread_mutex_unlock(mutex));
+}
+#endif
+
+
+
static unsigned int _ssl_locks_count = 0;
-static PyThread_type_lock *_ssl_locks = NULL;
+static Cryptography_mutex *_ssl_locks = NULL;
static void _ssl_thread_locking_function(int mode, int n, const char *file,
int line) {
@@ -63,35 +106,32 @@
}
if (mode & CRYPTO_LOCK) {
- PyThread_acquire_lock(_ssl_locks[n], 1);
+ cryptography_mutex_lock(_ssl_locks + n);
} else {
- PyThread_release_lock(_ssl_locks[n]);
+ cryptography_mutex_unlock(_ssl_locks + n);
+ }
+}
+
+static void init_mutexes(void) {
+ int i;
+ for (i = 0; i < _ssl_locks_count; i++) {
+ cryptography_mutex_init(_ssl_locks + i);
}
}
-int _setup_ssl_threads(void) {
- unsigned int i;
+int _setup_ssl_threads(void) {
if (_ssl_locks == NULL) {
_ssl_locks_count = CRYPTO_num_locks();
- _ssl_locks = PyMem_New(PyThread_type_lock, _ssl_locks_count);
+ _ssl_locks = calloc(_ssl_locks_count, sizeof(Cryptography_mutex));
if (_ssl_locks == NULL) {
- PyErr_NoMemory();
return 0;
}
- memset(_ssl_locks, 0, sizeof(PyThread_type_lock) * _ssl_locks_count);
- for (i = 0; i < _ssl_locks_count; i++) {
- _ssl_locks[i] = PyThread_allocate_lock();
- if (_ssl_locks[i] == NULL) {
- unsigned int j;
- for (j = 0; j < i; j++) {
- PyThread_free_lock(_ssl_locks[j]);
- }
- PyMem_Free(_ssl_locks);
- return 0;
- }
- }
+ init_mutexes();
CRYPTO_set_locking_callback(_ssl_thread_locking_function);
+#ifndef _WIN32
+ pthread_atfork(NULL, NULL, &init_mutexes);
+#endif
}
return 1;
}
|
{"golden_diff": "diff --git a/src/_cffi_src/openssl/callbacks.py b/src/_cffi_src/openssl/callbacks.py\n--- a/src/_cffi_src/openssl/callbacks.py\n+++ b/src/_cffi_src/openssl/callbacks.py\n@@ -10,7 +10,13 @@\n #include <openssl/x509_vfy.h>\n #include <openssl/crypto.h>\n \n-#include <pythread.h>\n+#ifdef _WIN32\n+#include <Windows.h>\n+#else\n+#include <stdio.h>\n+#include <stdlib.h>\n+#include <pthread.h>\n+#endif\n \"\"\"\n \n TYPES = \"\"\"\n@@ -36,10 +42,47 @@\n locking callback for OpenSSL.\n \n Copyright 2001-2016 Python Software Foundation; All Rights Reserved.\n+\n+ It has been subsequently modified to use cross platform locking without\n+ using CPython APIs by Armin Rigo of the PyPy project.\n */\n \n+#ifdef _WIN32\n+typedef CRITICAL_SECTION Cryptography_mutex;\n+static __inline void cryptography_mutex_init(Cryptography_mutex *mutex) {\n+ InitializeCriticalSection(mutex);\n+}\n+static __inline void cryptography_mutex_lock(Cryptography_mutex *mutex) {\n+ EnterCriticalSection(mutex);\n+}\n+static __inline void cryptography_mutex_unlock(Cryptography_mutex *mutex) {\n+ LeaveCriticalSection(mutex);\n+}\n+#else\n+typedef pthread_mutex_t Cryptography_mutex;\n+#define ASSERT_STATUS(call) \\\n+ if ((call) != 0) { \\\n+ perror(\"Fatal error in callback initialization: \" #call); \\\n+ abort(); \\\n+ }\n+static inline void cryptography_mutex_init(Cryptography_mutex *mutex) {\n+#if !defined(pthread_mutexattr_default)\n+# define pthread_mutexattr_default ((pthread_mutexattr_t *)NULL)\n+#endif\n+ ASSERT_STATUS(pthread_mutex_init(mutex, pthread_mutexattr_default));\n+}\n+static inline void cryptography_mutex_lock(Cryptography_mutex *mutex) {\n+ ASSERT_STATUS(pthread_mutex_lock(mutex));\n+}\n+static inline void cryptography_mutex_unlock(Cryptography_mutex *mutex) {\n+ ASSERT_STATUS(pthread_mutex_unlock(mutex));\n+}\n+#endif\n+\n+\n+\n static unsigned int _ssl_locks_count = 0;\n-static PyThread_type_lock *_ssl_locks = NULL;\n+static Cryptography_mutex *_ssl_locks = NULL;\n \n static void _ssl_thread_locking_function(int mode, int n, const char *file,\n int line) {\n@@ -63,35 +106,32 @@\n }\n \n if (mode & CRYPTO_LOCK) {\n- PyThread_acquire_lock(_ssl_locks[n], 1);\n+ cryptography_mutex_lock(_ssl_locks + n);\n } else {\n- PyThread_release_lock(_ssl_locks[n]);\n+ cryptography_mutex_unlock(_ssl_locks + n);\n+ }\n+}\n+\n+static void init_mutexes(void) {\n+ int i;\n+ for (i = 0; i < _ssl_locks_count; i++) {\n+ cryptography_mutex_init(_ssl_locks + i);\n }\n }\n \n-int _setup_ssl_threads(void) {\n- unsigned int i;\n \n+int _setup_ssl_threads(void) {\n if (_ssl_locks == NULL) {\n _ssl_locks_count = CRYPTO_num_locks();\n- _ssl_locks = PyMem_New(PyThread_type_lock, _ssl_locks_count);\n+ _ssl_locks = calloc(_ssl_locks_count, sizeof(Cryptography_mutex));\n if (_ssl_locks == NULL) {\n- PyErr_NoMemory();\n return 0;\n }\n- memset(_ssl_locks, 0, sizeof(PyThread_type_lock) * _ssl_locks_count);\n- for (i = 0; i < _ssl_locks_count; i++) {\n- _ssl_locks[i] = PyThread_allocate_lock();\n- if (_ssl_locks[i] == NULL) {\n- unsigned int j;\n- for (j = 0; j < i; j++) {\n- PyThread_free_lock(_ssl_locks[j]);\n- }\n- PyMem_Free(_ssl_locks);\n- return 0;\n- }\n- }\n+ init_mutexes();\n CRYPTO_set_locking_callback(_ssl_thread_locking_function);\n+#ifndef _WIN32\n+ pthread_atfork(NULL, NULL, &init_mutexes);\n+#endif\n }\n return 1;\n }\n", "issue": "Yet another building on PyPy(5.8) issue\nHi,\r\n\r\nI'm trying to build cryptography (tried both 1.9 and master) against PyPy 5.8beta0.\r\nI'm running on osx sierra.\r\n\r\nI've looked 
at several issues here, and have tried several suggested solutions, but none worked for me \ud83d\ude1e \r\n\r\nI'm trying to link against brew openssl.\r\n`brew list --versions openssl` -> `openssl 1.0.2j 1.0.2k 1.0.2l`\r\n\r\nI've built pypy's openssl module without any special problem\r\n\r\n```\r\nenv PYTHONPATH=/Users/omerba/Workspace/pypy \\ \r\n DYLD_LIBRARY_PATH=\"/Users/omerba/anaconda/lib\" \\\r\n CFLAGS=\"-I/usr/local/opt/openssl/include\" \\\r\n LDFLAGS=\"-L/usr/local/opt/openssl/lib\" \\\r\n /Users/omerba/Workspace/pypy/pypy3-c pypy/tool/build_cffi_imports.py\r\n```\r\nBut when i try to build cryptography (in cryptography's dir):\r\n```\r\n env PYTHONPATH=/Users/omerba/Workspace/pypy \\ \r\n DYLD_LIBRARY_PATH=\"/Users/omerba/anaconda/lib\" \\\r\n CFLAGS=\"-I/usr/local/opt/openssl/include\" \\\r\n LDFLAGS=\"-L/usr/local/opt/openssl/lib\" \\\r\n /Users/omerba/Workspace/pypy/pypy3-c setup.py install\r\n```\r\nI get:\r\n```\r\nrunning install\r\nrunning bdist_egg\r\nrunning egg_info\r\nwriting src/cryptography.egg-info/PKG-INFO\r\nwriting dependency_links to src/cryptography.egg-info/dependency_links.txt\r\nwriting entry points to src/cryptography.egg-info/entry_points.txt\r\nwriting requirements to src/cryptography.egg-info/requires.txt\r\nwriting top-level names to src/cryptography.egg-info/top_level.txt\r\nreading manifest file 'src/cryptography.egg-info/SOURCES.txt'\r\nreading manifest template 'MANIFEST.in'\r\nno previously-included directories found matching 'docs/_build'\r\nwarning: no previously-included files matching '*' found under directory 'vectors'\r\nwriting manifest file 'src/cryptography.egg-info/SOURCES.txt'\r\ninstalling library code to build/bdist.macosx-10.12-x86_64/egg\r\nrunning install_lib\r\nrunning build_py\r\nrunning build_ext\r\ngenerating cffi module 'build/temp.macosx-10.12-x86_64-3.5/_padding.c'\r\nalready up-to-date\r\ngenerating cffi module 'build/temp.macosx-10.12-x86_64-3.5/_constant_time.c'\r\nalready up-to-date\r\ngenerating cffi module 'build/temp.macosx-10.12-x86_64-3.5/_openssl.c'\r\nbuilding '_openssl' extension\r\ncc -pthread -DNDEBUG -O2 -I/usr/local/opt/openssl/include -fPIC -I/Users/omerba/Workspace/pypy/include -c build/temp.macosx-10.12-x86_64-3.5/_openssl.c -o build/temp.macosx-10.12-x86_64-3.5/build/temp.macosx-10.12-x86_64-3.5/_openssl.o\r\nbuild/temp.macosx-10.12-x86_64-3.5/_openssl.c:2503:9: warning: comparison of function 'getentropy' not equal to a null pointer is always true [-Wtautological-pointer-compare]\r\n if (getentropy != NULL) {\r\n ^~~~~~~~~~ ~~~~\r\nbuild/temp.macosx-10.12-x86_64-3.5/_openssl.c:2503:9: note: prefix with the address-of operator to silence this warning\r\n if (getentropy != NULL) {\r\n ^\r\n &\r\nbuild/temp.macosx-10.12-x86_64-3.5/_openssl.c:3454:22: warning: comparison of constant 1152921504606846975 with expression of type 'unsigned int' is always false [-Wtautological-constant-out-of-range-compare]\r\n _ssl_locks = PyMem_New(PyThread_type_lock, _ssl_locks_count);\r\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n/Users/omerba/Workspace/pypy/include/pymem.h:38:10: note: expanded from macro 'PyMem_New'\r\n ( ((n) > PY_SSIZE_T_MAX / sizeof(type)) ? 
NULL : \\\r\n ~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\nbuild/temp.macosx-10.12-x86_64-3.5/_openssl.c:74852:1: warning: control reaches end of non-void function [-Wreturn-type]\r\n}\r\n^\r\n3 warnings generated.\r\ncc -pthread -shared -L/usr/local/opt/openssl/lib -I/usr/local/opt/openssl/include build/temp.macosx-10.12-x86_64-3.5/build/temp.macosx-10.12-x86_64-3.5/_openssl.o -lssl -lcrypto -o build/lib.macosx-10.12-x86_64-3.5/cryptography/hazmat/bindings/_openssl.pypy3-58-x86_64-darwin.so\r\nclang: warning: argument unused during compilation: '-pthread' [-Wunused-command-line-argument]\r\nUndefined symbols for architecture x86_64:\r\n \"_PyPyErr_NoMemory\", referenced from:\r\n __setup_ssl_threads in _openssl.o\r\n \"_PyPyMem_Free\", referenced from:\r\n __setup_ssl_threads in _openssl.o\r\n \"_PyPyMem_Malloc\", referenced from:\r\n __setup_ssl_threads in _openssl.o\r\n \"_PyPyThread_acquire_lock\", referenced from:\r\n __ssl_thread_locking_function in _openssl.o\r\n \"_PyPyThread_allocate_lock\", referenced from:\r\n __setup_ssl_threads in _openssl.o\r\n \"_PyPyThread_free_lock\", referenced from:\r\n __setup_ssl_threads in _openssl.o\r\n \"_PyPyThread_release_lock\", referenced from:\r\n __ssl_thread_locking_function in _openssl.o\r\nld: symbol(s) not found for architecture x86_64\r\nclang: error: linker command failed with exit code 1 (use -v to see invocation)\r\nerror: command 'cc' failed with exit status 1\r\n```\r\n\r\nGoogling the issue only leads to [this](https://bitbucket.org/pypy/pypy/issues/2538/_ssl_buildpy-fails-on-macos-with-the-py35) exact same issue on the PyPy repo.\r\n\r\nBut the PyPy team seemed to have resolved it by making some changes to their cffi backend.\r\nI'll admit that I tried to shamelessly copy these changes to cryptography's _cffi_src dir - which made the package install successfully, but then when I actually tried to use it:\r\n\r\n`from cryptography.hazmat.backends import default_backend, openssl`\r\n\r\nit blows up.. \r\n\r\n```\r\nAttributeErrorTraceback (most recent call last)\r\n<ipython-input-5-cc19c86b2edb> in <module>()\r\n 1 from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes\r\n----> 2 from cryptography.hazmat.backends import default_backend, openssl\r\n\r\n~/Workspace/pypy/site-packages/cryptography-1.9-py3.5-macosx-10.12-x86_64.egg/cryptography/hazmat/backends/openssl/__init__.py in <module>()\r\n 5 from __future__ import absolute_import, division, print_function\r\n 6 \r\n----> 7 from cryptography.hazmat.backends.openssl.backend import backend\r\n 8 \r\n 9 \r\n\r\n~/Workspace/pypy/site-packages/cryptography-1.9-py3.5-macosx-10.12-x86_64.egg/cryptography/hazmat/backends/openssl/backend.py in <module>()\r\n 47 _CertificateSigningRequest, _RevokedCertificate\r\n 48 )\r\n---> 49 from cryptography.hazmat.bindings.openssl import binding\r\n 50 from cryptography.hazmat.primitives import hashes, serialization\r\n 51 from cryptography.hazmat.primitives.asymmetric import dsa, ec, rsa\r\n\r\n~/Workspace/pypy/site-packages/cryptography-1.9-py3.5-macosx-10.12-x86_64.egg/cryptography/hazmat/bindings/openssl/binding.py in <module>()\r\n 154 # condition registering the OpenSSL locks. 
On Python 3.4+ the import lock\r\n 155 # is per module so this approach will not work.\r\n--> 156 Binding.init_static_locks()\r\n\r\n~/Workspace/pypy/site-packages/cryptography-1.9-py3.5-macosx-10.12-x86_64.egg/cryptography/hazmat/bindings/openssl/binding.py in init_static_locks(cls)\r\n 135 def init_static_locks(cls):\r\n 136 with cls._lock_init_lock:\r\n--> 137 cls._ensure_ffi_initialized()\r\n 138 # Use Python's implementation if available, importing _ssl triggers\r\n 139 # the setup for this.\r\n\r\n~/Workspace/pypy/site-packages/cryptography-1.9-py3.5-macosx-10.12-x86_64.egg/cryptography/hazmat/bindings/openssl/binding.py in _ensure_ffi_initialized(cls)\r\n 122 with cls._init_lock:\r\n 123 if not cls._lib_loaded:\r\n--> 124 cls.lib = build_conditional_library(lib, CONDITIONAL_NAMES)\r\n 125 cls._lib_loaded = True\r\n 126 # initialize the SSL library\r\n\r\n~/Workspace/pypy/site-packages/cryptography-1.9-py3.5-macosx-10.12-x86_64.egg/cryptography/hazmat/bindings/openssl/binding.py in build_conditional_library(lib, conditional_names)\r\n 82 excluded_names = set()\r\n 83 for condition, names in conditional_names.items():\r\n---> 84 if not getattr(lib, condition):\r\n 85 excluded_names |= set(names)\r\n 86 \r\n\r\nAttributeError: cffi library 'cryptography.hazmat.bindings._openssl' has no function, constant or global variable named 'Cryptography_HAS_DTLS'\r\n```\r\n\r\nThanks for your work guys!\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nINCLUDES = \"\"\"\n#include <openssl/ssl.h>\n#include <openssl/x509.h>\n#include <openssl/x509_vfy.h>\n#include <openssl/crypto.h>\n\n#include <pythread.h>\n\"\"\"\n\nTYPES = \"\"\"\ntypedef struct {\n char *password;\n int length;\n int called;\n int error;\n int maxsize;\n} CRYPTOGRAPHY_PASSWORD_DATA;\n\"\"\"\n\nFUNCTIONS = \"\"\"\nint _setup_ssl_threads(void);\nint Cryptography_pem_password_cb(char *, int, int, void *);\n\"\"\"\n\nMACROS = \"\"\"\n\"\"\"\n\nCUSTOMIZATIONS = \"\"\"\n/* This code is derived from the locking code found in the Python _ssl module's\n locking callback for OpenSSL.\n\n Copyright 2001-2016 Python Software Foundation; All Rights Reserved.\n*/\n\nstatic unsigned int _ssl_locks_count = 0;\nstatic PyThread_type_lock *_ssl_locks = NULL;\n\nstatic void _ssl_thread_locking_function(int mode, int n, const char *file,\n int line) {\n /* this function is needed to perform locking on shared data\n structures. (Note that OpenSSL uses a number of global data\n structures that will be implicitly shared whenever multiple\n threads use OpenSSL.) Multi-threaded applications will\n crash at random if it is not set.\n\n locking_function() must be able to handle up to\n CRYPTO_num_locks() different mutex locks. It sets the n-th\n lock if mode & CRYPTO_LOCK, and releases it otherwise.\n\n file and line are the file number of the function setting the\n lock. 
They can be useful for debugging.\n */\n\n if ((_ssl_locks == NULL) ||\n (n < 0) || ((unsigned)n >= _ssl_locks_count)) {\n return;\n }\n\n if (mode & CRYPTO_LOCK) {\n PyThread_acquire_lock(_ssl_locks[n], 1);\n } else {\n PyThread_release_lock(_ssl_locks[n]);\n }\n}\n\nint _setup_ssl_threads(void) {\n unsigned int i;\n\n if (_ssl_locks == NULL) {\n _ssl_locks_count = CRYPTO_num_locks();\n _ssl_locks = PyMem_New(PyThread_type_lock, _ssl_locks_count);\n if (_ssl_locks == NULL) {\n PyErr_NoMemory();\n return 0;\n }\n memset(_ssl_locks, 0, sizeof(PyThread_type_lock) * _ssl_locks_count);\n for (i = 0; i < _ssl_locks_count; i++) {\n _ssl_locks[i] = PyThread_allocate_lock();\n if (_ssl_locks[i] == NULL) {\n unsigned int j;\n for (j = 0; j < i; j++) {\n PyThread_free_lock(_ssl_locks[j]);\n }\n PyMem_Free(_ssl_locks);\n return 0;\n }\n }\n CRYPTO_set_locking_callback(_ssl_thread_locking_function);\n }\n return 1;\n}\n\ntypedef struct {\n char *password;\n int length;\n int called;\n int error;\n int maxsize;\n} CRYPTOGRAPHY_PASSWORD_DATA;\n\nint Cryptography_pem_password_cb(char *buf, int size,\n int rwflag, void *userdata) {\n /* The password cb is only invoked if OpenSSL decides the private\n key is encrypted. So this path only occurs if it needs a password */\n CRYPTOGRAPHY_PASSWORD_DATA *st = (CRYPTOGRAPHY_PASSWORD_DATA *)userdata;\n st->called += 1;\n st->maxsize = size;\n if (st->length == 0) {\n st->error = -1;\n return 0;\n } else if (st->length < size) {\n memcpy(buf, st->password, st->length);\n return st->length;\n } else {\n st->error = -2;\n return 0;\n }\n}\n\"\"\"\n", "path": "src/_cffi_src/openssl/callbacks.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nINCLUDES = \"\"\"\n#include <openssl/ssl.h>\n#include <openssl/x509.h>\n#include <openssl/x509_vfy.h>\n#include <openssl/crypto.h>\n\n#ifdef _WIN32\n#include <Windows.h>\n#else\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n#endif\n\"\"\"\n\nTYPES = \"\"\"\ntypedef struct {\n char *password;\n int length;\n int called;\n int error;\n int maxsize;\n} CRYPTOGRAPHY_PASSWORD_DATA;\n\"\"\"\n\nFUNCTIONS = \"\"\"\nint _setup_ssl_threads(void);\nint Cryptography_pem_password_cb(char *, int, int, void *);\n\"\"\"\n\nMACROS = \"\"\"\n\"\"\"\n\nCUSTOMIZATIONS = \"\"\"\n/* This code is derived from the locking code found in the Python _ssl module's\n locking callback for OpenSSL.\n\n Copyright 2001-2016 Python Software Foundation; All Rights Reserved.\n\n It has been subsequently modified to use cross platform locking without\n using CPython APIs by Armin Rigo of the PyPy project.\n*/\n\n#ifdef _WIN32\ntypedef CRITICAL_SECTION Cryptography_mutex;\nstatic __inline void cryptography_mutex_init(Cryptography_mutex *mutex) {\n InitializeCriticalSection(mutex);\n}\nstatic __inline void cryptography_mutex_lock(Cryptography_mutex *mutex) {\n EnterCriticalSection(mutex);\n}\nstatic __inline void cryptography_mutex_unlock(Cryptography_mutex *mutex) {\n LeaveCriticalSection(mutex);\n}\n#else\ntypedef pthread_mutex_t Cryptography_mutex;\n#define ASSERT_STATUS(call) \\\n if ((call) != 0) { \\\n perror(\"Fatal error in callback initialization: \" #call); \\\n abort(); \\\n }\nstatic inline void cryptography_mutex_init(Cryptography_mutex *mutex) {\n#if !defined(pthread_mutexattr_default)\n# define pthread_mutexattr_default ((pthread_mutexattr_t *)NULL)\n#endif\n ASSERT_STATUS(pthread_mutex_init(mutex, pthread_mutexattr_default));\n}\nstatic inline void cryptography_mutex_lock(Cryptography_mutex *mutex) {\n ASSERT_STATUS(pthread_mutex_lock(mutex));\n}\nstatic inline void cryptography_mutex_unlock(Cryptography_mutex *mutex) {\n ASSERT_STATUS(pthread_mutex_unlock(mutex));\n}\n#endif\n\n\n\nstatic unsigned int _ssl_locks_count = 0;\nstatic Cryptography_mutex *_ssl_locks = NULL;\n\nstatic void _ssl_thread_locking_function(int mode, int n, const char *file,\n int line) {\n /* this function is needed to perform locking on shared data\n structures. (Note that OpenSSL uses a number of global data\n structures that will be implicitly shared whenever multiple\n threads use OpenSSL.) Multi-threaded applications will\n crash at random if it is not set.\n\n locking_function() must be able to handle up to\n CRYPTO_num_locks() different mutex locks. It sets the n-th\n lock if mode & CRYPTO_LOCK, and releases it otherwise.\n\n file and line are the file number of the function setting the\n lock. 
They can be useful for debugging.\n */\n\n if ((_ssl_locks == NULL) ||\n (n < 0) || ((unsigned)n >= _ssl_locks_count)) {\n return;\n }\n\n if (mode & CRYPTO_LOCK) {\n cryptography_mutex_lock(_ssl_locks + n);\n } else {\n cryptography_mutex_unlock(_ssl_locks + n);\n }\n}\n\nstatic void init_mutexes(void) {\n int i;\n for (i = 0; i < _ssl_locks_count; i++) {\n cryptography_mutex_init(_ssl_locks + i);\n }\n}\n\n\nint _setup_ssl_threads(void) {\n if (_ssl_locks == NULL) {\n _ssl_locks_count = CRYPTO_num_locks();\n _ssl_locks = calloc(_ssl_locks_count, sizeof(Cryptography_mutex));\n if (_ssl_locks == NULL) {\n return 0;\n }\n init_mutexes();\n CRYPTO_set_locking_callback(_ssl_thread_locking_function);\n#ifndef _WIN32\n pthread_atfork(NULL, NULL, &init_mutexes);\n#endif\n }\n return 1;\n}\n\ntypedef struct {\n char *password;\n int length;\n int called;\n int error;\n int maxsize;\n} CRYPTOGRAPHY_PASSWORD_DATA;\n\nint Cryptography_pem_password_cb(char *buf, int size,\n int rwflag, void *userdata) {\n /* The password cb is only invoked if OpenSSL decides the private\n key is encrypted. So this path only occurs if it needs a password */\n CRYPTOGRAPHY_PASSWORD_DATA *st = (CRYPTOGRAPHY_PASSWORD_DATA *)userdata;\n st->called += 1;\n st->maxsize = size;\n if (st->length == 0) {\n st->error = -1;\n return 0;\n } else if (st->length < size) {\n memcpy(buf, st->password, st->length);\n return st->length;\n } else {\n st->error = -2;\n return 0;\n }\n}\n\"\"\"\n", "path": "src/_cffi_src/openssl/callbacks.py"}]}
| 3,787 | 962 |
gh_patches_debug_54708
|
rasdani/github-patches
|
git_diff
|
qutebrowser__qutebrowser-4743
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Launching keyhint widget causes 100% usage of one CPU core
That's how it was for as long as I can remember, reproducible with all of my hardware (pressing _g_ or _;_ is enough). I don't think that's an intended behavior.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qutebrowser/misc/keyhintwidget.py`
Content:
```
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2016-2019 Ryan Roden-Corrent (rcorre) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """Small window that pops up to show hints for possible keystrings.
21
22 When a user inputs a key that forms a partial match, this shows a small window
23 with each possible completion of that keystring and the corresponding command.
24 It is intended to help discoverability of keybindings.
25 """
26
27 import html
28 import fnmatch
29 import re
30
31 from PyQt5.QtWidgets import QLabel, QSizePolicy
32 from PyQt5.QtCore import pyqtSlot, pyqtSignal, Qt
33
34 from qutebrowser.config import config
35 from qutebrowser.utils import utils, usertypes
36 from qutebrowser.misc import objects
37 from qutebrowser.keyinput import keyutils
38
39
40 class KeyHintView(QLabel):
41
42 """The view showing hints for key bindings based on the current key string.
43
44 Attributes:
45 _win_id: Window ID of parent.
46
47 Signals:
48 update_geometry: Emitted when this widget should be resized/positioned.
49 """
50
51 STYLESHEET = """
52 QLabel {
53 font: {{ conf.fonts.keyhint }};
54 color: {{ conf.colors.keyhint.fg }};
55 background-color: {{ conf.colors.keyhint.bg }};
56 padding: 6px;
57 {% if conf.statusbar.position == 'top' %}
58 border-bottom-right-radius: {{ conf.keyhint.radius }}px;
59 {% else %}
60 border-top-right-radius: {{ conf.keyhint.radius }}px;
61 {% endif %}
62 }
63 """
64 update_geometry = pyqtSignal()
65
66 def __init__(self, win_id, parent=None):
67 super().__init__(parent)
68 self.setTextFormat(Qt.RichText)
69 self._win_id = win_id
70 self.setSizePolicy(QSizePolicy.Fixed, QSizePolicy.Minimum)
71 self.hide()
72 self._show_timer = usertypes.Timer(self, 'keyhint_show')
73 self._show_timer.timeout.connect(self.show)
74 config.set_register_stylesheet(self)
75
76 def __repr__(self):
77 return utils.get_repr(self, win_id=self._win_id)
78
79 def showEvent(self, e):
80 """Adjust the keyhint size when it's freshly shown."""
81 self.update_geometry.emit()
82 super().showEvent(e)
83
84 @pyqtSlot(str)
85 def update_keyhint(self, modename, prefix):
86 """Show hints for the given prefix (or hide if prefix is empty).
87
88 Args:
89 prefix: The current partial keystring.
90 """
91 countstr, prefix = re.fullmatch(r'(\d*)(.*)', prefix).groups()
92 if not prefix:
93 self._show_timer.stop()
94 self.hide()
95 return
96
97 def blacklisted(keychain):
98 return any(fnmatch.fnmatchcase(keychain, glob)
99 for glob in config.val.keyhint.blacklist)
100
101 def takes_count(cmdstr):
102 """Return true iff this command can take a count argument."""
103 cmdname = cmdstr.split(' ')[0]
104 cmd = objects.commands.get(cmdname)
105 return cmd and cmd.takes_count()
106
107 bindings_dict = config.key_instance.get_bindings_for(modename)
108 bindings = [(k, v) for (k, v) in sorted(bindings_dict.items())
109 if keyutils.KeySequence.parse(prefix).matches(k) and
110 not blacklisted(str(k)) and
111 (takes_count(v) or not countstr)]
112
113 if not bindings:
114 self._show_timer.stop()
115 return
116
117 # delay so a quickly typed keychain doesn't display hints
118 self._show_timer.setInterval(config.val.keyhint.delay)
119 self._show_timer.start()
120 suffix_color = html.escape(config.val.colors.keyhint.suffix.fg)
121
122 text = ''
123 for seq, cmd in bindings:
124 text += (
125 "<tr>"
126 "<td>{}</td>"
127 "<td style='color: {}'>{}</td>"
128 "<td style='padding-left: 2ex'>{}</td>"
129 "</tr>"
130 ).format(
131 html.escape(prefix),
132 suffix_color,
133 html.escape(str(seq)[len(prefix):]),
134 html.escape(cmd)
135 )
136 text = '<table>{}</table>'.format(text)
137
138 self.setText(text)
139 self.adjustSize()
140 self.update_geometry.emit()
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qutebrowser/misc/keyhintwidget.py b/qutebrowser/misc/keyhintwidget.py
--- a/qutebrowser/misc/keyhintwidget.py
+++ b/qutebrowser/misc/keyhintwidget.py
@@ -71,6 +71,7 @@
self.hide()
self._show_timer = usertypes.Timer(self, 'keyhint_show')
self._show_timer.timeout.connect(self.show)
+ self._show_timer.setSingleShot(True)
config.set_register_stylesheet(self)
def __repr__(self):
|
{"golden_diff": "diff --git a/qutebrowser/misc/keyhintwidget.py b/qutebrowser/misc/keyhintwidget.py\n--- a/qutebrowser/misc/keyhintwidget.py\n+++ b/qutebrowser/misc/keyhintwidget.py\n@@ -71,6 +71,7 @@\n self.hide()\n self._show_timer = usertypes.Timer(self, 'keyhint_show')\n self._show_timer.timeout.connect(self.show)\n+ self._show_timer.setSingleShot(True)\n config.set_register_stylesheet(self)\n \n def __repr__(self):\n", "issue": "Launching keyhint widget causes 100% usage of one CPU core\nThat's how it was for as long as I can remember, reproducible with all of my hardware (pressing _g_ or _;_ is enough). I don't think that's an intended behavior.\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2016-2019 Ryan Roden-Corrent (rcorre) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Small window that pops up to show hints for possible keystrings.\n\nWhen a user inputs a key that forms a partial match, this shows a small window\nwith each possible completion of that keystring and the corresponding command.\nIt is intended to help discoverability of keybindings.\n\"\"\"\n\nimport html\nimport fnmatch\nimport re\n\nfrom PyQt5.QtWidgets import QLabel, QSizePolicy\nfrom PyQt5.QtCore import pyqtSlot, pyqtSignal, Qt\n\nfrom qutebrowser.config import config\nfrom qutebrowser.utils import utils, usertypes\nfrom qutebrowser.misc import objects\nfrom qutebrowser.keyinput import keyutils\n\n\nclass KeyHintView(QLabel):\n\n \"\"\"The view showing hints for key bindings based on the current key string.\n\n Attributes:\n _win_id: Window ID of parent.\n\n Signals:\n update_geometry: Emitted when this widget should be resized/positioned.\n \"\"\"\n\n STYLESHEET = \"\"\"\n QLabel {\n font: {{ conf.fonts.keyhint }};\n color: {{ conf.colors.keyhint.fg }};\n background-color: {{ conf.colors.keyhint.bg }};\n padding: 6px;\n {% if conf.statusbar.position == 'top' %}\n border-bottom-right-radius: {{ conf.keyhint.radius }}px;\n {% else %}\n border-top-right-radius: {{ conf.keyhint.radius }}px;\n {% endif %}\n }\n \"\"\"\n update_geometry = pyqtSignal()\n\n def __init__(self, win_id, parent=None):\n super().__init__(parent)\n self.setTextFormat(Qt.RichText)\n self._win_id = win_id\n self.setSizePolicy(QSizePolicy.Fixed, QSizePolicy.Minimum)\n self.hide()\n self._show_timer = usertypes.Timer(self, 'keyhint_show')\n self._show_timer.timeout.connect(self.show)\n config.set_register_stylesheet(self)\n\n def __repr__(self):\n return utils.get_repr(self, win_id=self._win_id)\n\n def showEvent(self, e):\n \"\"\"Adjust the keyhint size when it's freshly shown.\"\"\"\n self.update_geometry.emit()\n super().showEvent(e)\n\n @pyqtSlot(str)\n def update_keyhint(self, modename, prefix):\n \"\"\"Show hints for the given prefix (or hide if prefix is empty).\n\n Args:\n prefix: The current partial keystring.\n \"\"\"\n 
countstr, prefix = re.fullmatch(r'(\\d*)(.*)', prefix).groups()\n if not prefix:\n self._show_timer.stop()\n self.hide()\n return\n\n def blacklisted(keychain):\n return any(fnmatch.fnmatchcase(keychain, glob)\n for glob in config.val.keyhint.blacklist)\n\n def takes_count(cmdstr):\n \"\"\"Return true iff this command can take a count argument.\"\"\"\n cmdname = cmdstr.split(' ')[0]\n cmd = objects.commands.get(cmdname)\n return cmd and cmd.takes_count()\n\n bindings_dict = config.key_instance.get_bindings_for(modename)\n bindings = [(k, v) for (k, v) in sorted(bindings_dict.items())\n if keyutils.KeySequence.parse(prefix).matches(k) and\n not blacklisted(str(k)) and\n (takes_count(v) or not countstr)]\n\n if not bindings:\n self._show_timer.stop()\n return\n\n # delay so a quickly typed keychain doesn't display hints\n self._show_timer.setInterval(config.val.keyhint.delay)\n self._show_timer.start()\n suffix_color = html.escape(config.val.colors.keyhint.suffix.fg)\n\n text = ''\n for seq, cmd in bindings:\n text += (\n \"<tr>\"\n \"<td>{}</td>\"\n \"<td style='color: {}'>{}</td>\"\n \"<td style='padding-left: 2ex'>{}</td>\"\n \"</tr>\"\n ).format(\n html.escape(prefix),\n suffix_color,\n html.escape(str(seq)[len(prefix):]),\n html.escape(cmd)\n )\n text = '<table>{}</table>'.format(text)\n\n self.setText(text)\n self.adjustSize()\n self.update_geometry.emit()\n", "path": "qutebrowser/misc/keyhintwidget.py"}], "after_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2016-2019 Ryan Roden-Corrent (rcorre) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Small window that pops up to show hints for possible keystrings.\n\nWhen a user inputs a key that forms a partial match, this shows a small window\nwith each possible completion of that keystring and the corresponding command.\nIt is intended to help discoverability of keybindings.\n\"\"\"\n\nimport html\nimport fnmatch\nimport re\n\nfrom PyQt5.QtWidgets import QLabel, QSizePolicy\nfrom PyQt5.QtCore import pyqtSlot, pyqtSignal, Qt\n\nfrom qutebrowser.config import config\nfrom qutebrowser.utils import utils, usertypes\nfrom qutebrowser.misc import objects\nfrom qutebrowser.keyinput import keyutils\n\n\nclass KeyHintView(QLabel):\n\n \"\"\"The view showing hints for key bindings based on the current key string.\n\n Attributes:\n _win_id: Window ID of parent.\n\n Signals:\n update_geometry: Emitted when this widget should be resized/positioned.\n \"\"\"\n\n STYLESHEET = \"\"\"\n QLabel {\n font: {{ conf.fonts.keyhint }};\n color: {{ conf.colors.keyhint.fg }};\n background-color: {{ conf.colors.keyhint.bg }};\n padding: 6px;\n {% if conf.statusbar.position == 'top' %}\n border-bottom-right-radius: {{ conf.keyhint.radius }}px;\n {% else %}\n border-top-right-radius: {{ conf.keyhint.radius }}px;\n {% endif %}\n }\n \"\"\"\n update_geometry = pyqtSignal()\n\n def __init__(self, win_id, parent=None):\n super().__init__(parent)\n self.setTextFormat(Qt.RichText)\n self._win_id = win_id\n self.setSizePolicy(QSizePolicy.Fixed, QSizePolicy.Minimum)\n self.hide()\n self._show_timer = usertypes.Timer(self, 'keyhint_show')\n self._show_timer.timeout.connect(self.show)\n self._show_timer.setSingleShot(True)\n config.set_register_stylesheet(self)\n\n def __repr__(self):\n return utils.get_repr(self, win_id=self._win_id)\n\n def showEvent(self, e):\n \"\"\"Adjust the keyhint size when it's freshly shown.\"\"\"\n self.update_geometry.emit()\n super().showEvent(e)\n\n @pyqtSlot(str)\n def update_keyhint(self, modename, prefix):\n \"\"\"Show hints for the given prefix (or hide if prefix is empty).\n\n Args:\n prefix: The current partial keystring.\n \"\"\"\n countstr, prefix = re.fullmatch(r'(\\d*)(.*)', prefix).groups()\n if not prefix:\n self._show_timer.stop()\n self.hide()\n return\n\n def blacklisted(keychain):\n return any(fnmatch.fnmatchcase(keychain, glob)\n for glob in config.val.keyhint.blacklist)\n\n def takes_count(cmdstr):\n \"\"\"Return true iff this command can take a count argument.\"\"\"\n cmdname = cmdstr.split(' ')[0]\n cmd = objects.commands.get(cmdname)\n return cmd and cmd.takes_count()\n\n bindings_dict = config.key_instance.get_bindings_for(modename)\n bindings = [(k, v) for (k, v) in sorted(bindings_dict.items())\n if keyutils.KeySequence.parse(prefix).matches(k) and\n not blacklisted(str(k)) and\n (takes_count(v) or not countstr)]\n\n if not bindings:\n self._show_timer.stop()\n return\n\n # delay so a quickly typed keychain doesn't display hints\n self._show_timer.setInterval(config.val.keyhint.delay)\n self._show_timer.start()\n suffix_color = html.escape(config.val.colors.keyhint.suffix.fg)\n\n text = ''\n for seq, cmd in bindings:\n text += (\n \"<tr>\"\n \"<td>{}</td>\"\n \"<td style='color: {}'>{}</td>\"\n \"<td style='padding-left: 2ex'>{}</td>\"\n \"</tr>\"\n ).format(\n html.escape(prefix),\n suffix_color,\n html.escape(str(seq)[len(prefix):]),\n html.escape(cmd)\n )\n text = '<table>{}</table>'.format(text)\n\n self.setText(text)\n self.adjustSize()\n self.update_geometry.emit()\n", "path": 
"qutebrowser/misc/keyhintwidget.py"}]}
| 1,719 | 113 |
gh_patches_debug_64526
|
rasdani/github-patches
|
git_diff
|
kartoza__prj.app-342
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Order sponsors in their groups
From @andreasneumann:
```For the sponsors listing - is there a clear order within the same level at http://changelog.qgis.org/en/qgis/version/2.16.0/ ?
In my opinion, it should either be ordered alphabetically or by date. Neither seems to be the case. I would prefer alphabetic ordering with in each sponsorship level.```
I think it is actually better to order them with most recently added sponsors first to oldest sponsors last. That we they get the most visibility when they are new, degrading over time to the bottom of the list. What do you think @andreasneumann ?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django_project/changes/models/version.py`
Content:
```
1 # coding=utf-8
2 from django.core.urlresolvers import reverse
3 # from django.utils.text import slugify
4 from common.utilities import version_slugify
5 import os
6 import logging
7 from core.settings.contrib import STOP_WORDS
8 from django.conf.global_settings import MEDIA_ROOT
9 from django.db import models
10 from .entry import Entry
11 from .sponsorship_period import SponsorshipPeriod
12 from django.contrib.auth.models import User
13 from django.utils.translation import ugettext_lazy as _
14
15 logger = logging.getLogger(__name__)
16
17
18 class ApprovedVersionManager(models.Manager):
19 """Custom version manager that shows only approved records."""
20
21 def get_queryset(self):
22 """Query set generator"""
23 return super(
24 ApprovedVersionManager, self).get_queryset().filter(
25 approved=True)
26
27
28 class UnapprovedVersionManager(models.Manager):
29 """Custom version manager that shows only unapproved records."""
30
31 def get_queryset(self):
32 """Query set generator"""
33 return super(
34 UnapprovedVersionManager, self).get_queryset().filter(
35 approved=False)
36
37
38 # noinspection PyUnresolvedReferences
39 class Version(models.Model):
40 """A version model that the changelog is associated with.."""
41
42 name = models.CharField(
43 help_text='Name of this release e.g. 1.0.1.',
44 max_length=255,
45 null=False,
46 blank=False,
47 unique=False)
48
49 padded_version = models.CharField(
50 help_text=(
51 'Numeric version for this release e.g. 001000001 for 1.0.1 '
52 'calculated by zero padding each component of maj/minor/bugfix '
53 'elements from name.'),
54 max_length=9,
55 null=False,
56 blank=True,
57 unique=False)
58
59 approved = models.BooleanField(
60 help_text=(
61 'Whether this version has been approved for use by the '
62 'project owner.'),
63 default=False)
64
65 image_file = models.ImageField(
66 help_text=(
67 'An optional image for this version e.g. a splashscreen. '
68 'Most browsers support dragging the image directly on to the '
69 '"Choose File" button above.'),
70 upload_to=os.path.join(MEDIA_ROOT, 'images/projects'),
71 blank=True)
72
73 description = models.TextField(
74 null=True,
75 blank=True,
76 help_text='Describe the new version. Markdown is supported.')
77
78 release_date = models.DateField(
79 _('Release date (yyyy-mm-dd)'),
80 help_text='Date of official release',
81 null=True,
82 blank=True)
83
84 author = models.ForeignKey(User)
85 slug = models.SlugField()
86 project = models.ForeignKey('base.Project')
87 objects = models.Manager()
88 approved_objects = ApprovedVersionManager()
89 unapproved_objects = UnapprovedVersionManager()
90
91 # noinspection PyClassicStyleClass
92 class Meta:
93 """Meta options for the version class."""
94 unique_together = (
95 ('name', 'project'),
96 ('slug', 'project'),
97 )
98 app_label = 'changes'
99 # ordering = ['-datetime_created']
100
101 def save(self, *args, **kwargs):
102 if not self.pk:
103 words = self.name.split()
104 filtered_words = [t for t in words if t.lower() not in STOP_WORDS]
105 new_list = ' '.join(filtered_words)
106 self.slug = version_slugify(new_list)[:50]
107 self.padded_version = self.pad_name(self.name)
108 super(Version, self).save(*args, **kwargs)
109
110 def pad_name(self, version):
111 """Create a 0 padded version of the version name.
112
113 e.g. input: 2.10.1
114 e.g. output: 002010100
115
116 This will ensure we have sortable version names.
117
118 :param version: A text version in the form 0.0.0 - if the version is
119 not in this form, we return the version unaltered.
120 :type version: str
121
122 :returns: Zero padded representation of the version e.g. 001010100
123 :rtype: str
124
125 """
126 tokens = version.split('.')
127 if len(tokens) != 3:
128 return version
129 result = ''
130 for token in tokens:
131 result += token.zfill(3)
132 return result
133
134 def __unicode__(self):
135 return u'%s : %s' % (self.project.name, self.name)
136
137 def get_absolute_url(self):
138 return reverse('version-detail', kwargs={
139 'slug': self.slug,
140 'project_slug': self.project.slug
141 })
142
143 def entries(self):
144 """Get the entries for this version."""
145 qs = Entry.objects.filter(version=self).order_by('category__sort_number')
146 return qs
147
148 def _entries_for_category(self, category):
149 """All entries for this version and filtered by the given category.
150
151 :param category: Category to filter by.
152 :type category: Category
153
154 .. note:: only approved entries returned.
155 """
156 qs = Entry.objects.filter(
157 version=self,
158 category=category,
159 approved=True)
160 return qs
161
162 def categories(self):
163 """Get a list of categories where there are one or more entries.
164
165 Example use in template::
166 {% for row in version.categories %}
167 <h2 class="text-muted">{{ row.category.name }}</h2>
168 <ul>
169 {% for entry in row.entries %}
170 <li>{{ entry.name }}</li>
171 {% endfor %}
172 </ul>
173 {% endfor %}
174 """
175 qs = self.entries()
176 used = []
177 categories = []
178 for entry in qs:
179 category = entry.category
180 if category not in used:
181 row = {
182 'category': category,
183 'entries': self._entries_for_category(category)
184 }
185 categories.append(row)
186 used.append(category)
187 return categories
188
189 def sponsors(self):
190 """Return a list of sponsors current at time of this version release.
191
192 :returns: A list of SponsorPeriod objects for current project
193 whose release date coincides with the version release date.
194 Only approved sponsors are returned.
195 Returns None if the release date (which is optional) is not set.
196 :rtype: Queryset, None
197 """
198 if self.release_date is None:
199 return None
200 sponsors = SponsorshipPeriod.approved_objects.filter(
201 end_date__gte=self.release_date).filter(
202 start_date__lte=self.release_date).filter(
203 project=self.project).order_by(
204 'start_date').order_by(
205 '-sponsorship_level__value')
206 return sponsors
207
208 def formatted_release_date(self):
209 """"Return a long formatted released date e.g. 24 June 2016.
210
211 :returns: A string containing the long formatted date, or an empty
212 string if the date is not set.
213 :rtype: str
214 """
215 long_date = None
216 if self.release_date:
217 # %-d Day of the month as a decimal number. (Platform specific)
218 # %B Month as locale’s full name.
219 # %Y Year e.g. 2016
220 long_date = self.release_date.strftime('%-d %B, %Y')
221 return long_date
222
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django_project/changes/models/version.py b/django_project/changes/models/version.py
--- a/django_project/changes/models/version.py
+++ b/django_project/changes/models/version.py
@@ -202,7 +202,7 @@
start_date__lte=self.release_date).filter(
project=self.project).order_by(
'start_date').order_by(
- '-sponsorship_level__value')
+ '-sponsorship_level__value', 'sponsor__name')
return sponsors
def formatted_release_date(self):
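Why both keys sit in a single `order_by()` call: Django replaces any previous ordering each time `.order_by()` is invoked, so chaining a second call would discard the first key instead of refining it. Below is a minimal, dependency-free sketch of the resulting ordering semantics; the sponsor names and dict layout are illustrative assumptions, not data from the project:

```python
# Plain-Python stand-in for .order_by('-sponsorship_level__value', 'sponsor__name'):
# sort by sponsorship level descending, then sponsor name ascending as tie-breaker.
rows = [
    {"level": 3, "sponsor": "Beta Corp"},
    {"level": 3, "sponsor": "Acme Ltd"},
    {"level": 1, "sponsor": "Zeta Inc"},
]
rows.sort(key=lambda r: (-r["level"], r["sponsor"]))
print([r["sponsor"] for r in rows])  # ['Acme Ltd', 'Beta Corp', 'Zeta Inc']
```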
|
{"golden_diff": "diff --git a/django_project/changes/models/version.py b/django_project/changes/models/version.py\n--- a/django_project/changes/models/version.py\n+++ b/django_project/changes/models/version.py\n@@ -202,7 +202,7 @@\n start_date__lte=self.release_date).filter(\n project=self.project).order_by(\n 'start_date').order_by(\n- '-sponsorship_level__value')\n+ '-sponsorship_level__value', 'sponsor__name')\n return sponsors\n \n def formatted_release_date(self):\n", "issue": "Order sponsors in their groups\nFrom @andreasneumann: \n\n```For the sponsors listing - is there a clear order within the same level at http://changelog.qgis.org/en/qgis/version/2.16.0/ ?\n\nIn my opinion, it should either be ordered alphabetically or by date. Neither seems to be the case. I would prefer alphabetic ordering with in each sponsorship level.```\n\nI think it is actually better to order them with most recently added sponsors first to oldest sponsors last. That we they get the most visibility when they are new, degrading over time to the bottom of the list. What do you think @andreasneumann ?\n\n", "before_files": [{"content": "# coding=utf-8\nfrom django.core.urlresolvers import reverse\n# from django.utils.text import slugify\nfrom common.utilities import version_slugify\nimport os\nimport logging\nfrom core.settings.contrib import STOP_WORDS\nfrom django.conf.global_settings import MEDIA_ROOT\nfrom django.db import models\nfrom .entry import Entry\nfrom .sponsorship_period import SponsorshipPeriod\nfrom django.contrib.auth.models import User\nfrom django.utils.translation import ugettext_lazy as _\n\nlogger = logging.getLogger(__name__)\n\n\nclass ApprovedVersionManager(models.Manager):\n \"\"\"Custom version manager that shows only approved records.\"\"\"\n\n def get_queryset(self):\n \"\"\"Query set generator\"\"\"\n return super(\n ApprovedVersionManager, self).get_queryset().filter(\n approved=True)\n\n\nclass UnapprovedVersionManager(models.Manager):\n \"\"\"Custom version manager that shows only unapproved records.\"\"\"\n\n def get_queryset(self):\n \"\"\"Query set generator\"\"\"\n return super(\n UnapprovedVersionManager, self).get_queryset().filter(\n approved=False)\n\n\n# noinspection PyUnresolvedReferences\nclass Version(models.Model):\n \"\"\"A version model that the changelog is associated with..\"\"\"\n\n name = models.CharField(\n help_text='Name of this release e.g. 1.0.1.',\n max_length=255,\n null=False,\n blank=False,\n unique=False)\n\n padded_version = models.CharField(\n help_text=(\n 'Numeric version for this release e.g. 001000001 for 1.0.1 '\n 'calculated by zero padding each component of maj/minor/bugfix '\n 'elements from name.'),\n max_length=9,\n null=False,\n blank=True,\n unique=False)\n\n approved = models.BooleanField(\n help_text=(\n 'Whether this version has been approved for use by the '\n 'project owner.'),\n default=False)\n\n image_file = models.ImageField(\n help_text=(\n 'An optional image for this version e.g. a splashscreen. '\n 'Most browsers support dragging the image directly on to the '\n '\"Choose File\" button above.'),\n upload_to=os.path.join(MEDIA_ROOT, 'images/projects'),\n blank=True)\n\n description = models.TextField(\n null=True,\n blank=True,\n help_text='Describe the new version. 
Markdown is supported.')\n\n release_date = models.DateField(\n _('Release date (yyyy-mm-dd)'),\n help_text='Date of official release',\n null=True,\n blank=True)\n\n author = models.ForeignKey(User)\n slug = models.SlugField()\n project = models.ForeignKey('base.Project')\n objects = models.Manager()\n approved_objects = ApprovedVersionManager()\n unapproved_objects = UnapprovedVersionManager()\n\n # noinspection PyClassicStyleClass\n class Meta:\n \"\"\"Meta options for the version class.\"\"\"\n unique_together = (\n ('name', 'project'),\n ('slug', 'project'),\n )\n app_label = 'changes'\n # ordering = ['-datetime_created']\n\n def save(self, *args, **kwargs):\n if not self.pk:\n words = self.name.split()\n filtered_words = [t for t in words if t.lower() not in STOP_WORDS]\n new_list = ' '.join(filtered_words)\n self.slug = version_slugify(new_list)[:50]\n self.padded_version = self.pad_name(self.name)\n super(Version, self).save(*args, **kwargs)\n\n def pad_name(self, version):\n \"\"\"Create a 0 padded version of the version name.\n\n e.g. input: 2.10.1\n e.g. output: 002010100\n\n This will ensure we have sortable version names.\n\n :param version: A text version in the form 0.0.0 - if the version is\n not in this form, we return the version unaltered.\n :type version: str\n\n :returns: Zero padded representation of the version e.g. 001010100\n :rtype: str\n\n \"\"\"\n tokens = version.split('.')\n if len(tokens) != 3:\n return version\n result = ''\n for token in tokens:\n result += token.zfill(3)\n return result\n\n def __unicode__(self):\n return u'%s : %s' % (self.project.name, self.name)\n\n def get_absolute_url(self):\n return reverse('version-detail', kwargs={\n 'slug': self.slug,\n 'project_slug': self.project.slug\n })\n\n def entries(self):\n \"\"\"Get the entries for this version.\"\"\"\n qs = Entry.objects.filter(version=self).order_by('category__sort_number')\n return qs\n\n def _entries_for_category(self, category):\n \"\"\"All entries for this version and filtered by the given category.\n\n :param category: Category to filter by.\n :type category: Category\n\n .. 
note:: only approved entries returned.\n \"\"\"\n qs = Entry.objects.filter(\n version=self,\n category=category,\n approved=True)\n return qs\n\n def categories(self):\n \"\"\"Get a list of categories where there are one or more entries.\n\n Example use in template::\n {% for row in version.categories %}\n <h2 class=\"text-muted\">{{ row.category.name }}</h2>\n <ul>\n {% for entry in row.entries %}\n <li>{{ entry.name }}</li>\n {% endfor %}\n </ul>\n {% endfor %}\n \"\"\"\n qs = self.entries()\n used = []\n categories = []\n for entry in qs:\n category = entry.category\n if category not in used:\n row = {\n 'category': category,\n 'entries': self._entries_for_category(category)\n }\n categories.append(row)\n used.append(category)\n return categories\n\n def sponsors(self):\n \"\"\"Return a list of sponsors current at time of this version release.\n\n :returns: A list of SponsorPeriod objects for current project\n whose release date coincides with the version release date.\n Only approved sponsors are returned.\n Returns None if the release date (which is optional) is not set.\n :rtype: Queryset, None\n \"\"\"\n if self.release_date is None:\n return None\n sponsors = SponsorshipPeriod.approved_objects.filter(\n end_date__gte=self.release_date).filter(\n start_date__lte=self.release_date).filter(\n project=self.project).order_by(\n 'start_date').order_by(\n '-sponsorship_level__value')\n return sponsors\n\n def formatted_release_date(self):\n \"\"\"\"Return a long formatted released date e.g. 24 June 2016.\n\n :returns: A string containing the long formatted date, or an empty\n string if the date is not set.\n :rtype: str\n \"\"\"\n long_date = None\n if self.release_date:\n # %-d Day of the month as a decimal number. (Platform specific)\n # %B Month as locale\u2019s full name.\n # %Y Year e.g. 2016\n long_date = self.release_date.strftime('%-d %B, %Y')\n return long_date\n", "path": "django_project/changes/models/version.py"}], "after_files": [{"content": "# coding=utf-8\nfrom django.core.urlresolvers import reverse\n# from django.utils.text import slugify\nfrom common.utilities import version_slugify\nimport os\nimport logging\nfrom core.settings.contrib import STOP_WORDS\nfrom django.conf.global_settings import MEDIA_ROOT\nfrom django.db import models\nfrom .entry import Entry\nfrom .sponsorship_period import SponsorshipPeriod\nfrom django.contrib.auth.models import User\nfrom django.utils.translation import ugettext_lazy as _\n\nlogger = logging.getLogger(__name__)\n\n\nclass ApprovedVersionManager(models.Manager):\n \"\"\"Custom version manager that shows only approved records.\"\"\"\n\n def get_queryset(self):\n \"\"\"Query set generator\"\"\"\n return super(\n ApprovedVersionManager, self).get_queryset().filter(\n approved=True)\n\n\nclass UnapprovedVersionManager(models.Manager):\n \"\"\"Custom version manager that shows only unapproved records.\"\"\"\n\n def get_queryset(self):\n \"\"\"Query set generator\"\"\"\n return super(\n UnapprovedVersionManager, self).get_queryset().filter(\n approved=False)\n\n\n# noinspection PyUnresolvedReferences\nclass Version(models.Model):\n \"\"\"A version model that the changelog is associated with..\"\"\"\n\n name = models.CharField(\n help_text='Name of this release e.g. 1.0.1.',\n max_length=255,\n null=False,\n blank=False,\n unique=False)\n\n padded_version = models.CharField(\n help_text=(\n 'Numeric version for this release e.g. 
001000001 for 1.0.1 '\n 'calculated by zero padding each component of maj/minor/bugfix '\n 'elements from name.'),\n max_length=9,\n null=False,\n blank=True,\n unique=False)\n\n approved = models.BooleanField(\n help_text=(\n 'Whether this version has been approved for use by the '\n 'project owner.'),\n default=False)\n\n image_file = models.ImageField(\n help_text=(\n 'An optional image for this version e.g. a splashscreen. '\n 'Most browsers support dragging the image directly on to the '\n '\"Choose File\" button above.'),\n upload_to=os.path.join(MEDIA_ROOT, 'images/projects'),\n blank=True)\n\n description = models.TextField(\n null=True,\n blank=True,\n help_text='Describe the new version. Markdown is supported.')\n\n release_date = models.DateField(\n _('Release date (yyyy-mm-dd)'),\n help_text='Date of official release',\n null=True,\n blank=True)\n\n author = models.ForeignKey(User)\n slug = models.SlugField()\n project = models.ForeignKey('base.Project')\n objects = models.Manager()\n approved_objects = ApprovedVersionManager()\n unapproved_objects = UnapprovedVersionManager()\n\n # noinspection PyClassicStyleClass\n class Meta:\n \"\"\"Meta options for the version class.\"\"\"\n unique_together = (\n ('name', 'project'),\n ('slug', 'project'),\n )\n app_label = 'changes'\n # ordering = ['-datetime_created']\n\n def save(self, *args, **kwargs):\n if not self.pk:\n words = self.name.split()\n filtered_words = [t for t in words if t.lower() not in STOP_WORDS]\n new_list = ' '.join(filtered_words)\n self.slug = version_slugify(new_list)[:50]\n self.padded_version = self.pad_name(self.name)\n super(Version, self).save(*args, **kwargs)\n\n def pad_name(self, version):\n \"\"\"Create a 0 padded version of the version name.\n\n e.g. input: 2.10.1\n e.g. output: 002010100\n\n This will ensure we have sortable version names.\n\n :param version: A text version in the form 0.0.0 - if the version is\n not in this form, we return the version unaltered.\n :type version: str\n\n :returns: Zero padded representation of the version e.g. 001010100\n :rtype: str\n\n \"\"\"\n tokens = version.split('.')\n if len(tokens) != 3:\n return version\n result = ''\n for token in tokens:\n result += token.zfill(3)\n return result\n\n def __unicode__(self):\n return u'%s : %s' % (self.project.name, self.name)\n\n def get_absolute_url(self):\n return reverse('version-detail', kwargs={\n 'slug': self.slug,\n 'project_slug': self.project.slug\n })\n\n def entries(self):\n \"\"\"Get the entries for this version.\"\"\"\n qs = Entry.objects.filter(version=self).order_by('category__sort_number')\n return qs\n\n def _entries_for_category(self, category):\n \"\"\"All entries for this version and filtered by the given category.\n\n :param category: Category to filter by.\n :type category: Category\n\n .. 
note:: only approved entries returned.\n \"\"\"\n qs = Entry.objects.filter(\n version=self,\n category=category,\n approved=True)\n return qs\n\n def categories(self):\n \"\"\"Get a list of categories where there are one or more entries.\n\n Example use in template::\n {% for row in version.categories %}\n <h2 class=\"text-muted\">{{ row.category.name }}</h2>\n <ul>\n {% for entry in row.entries %}\n <li>{{ entry.name }}</li>\n {% endfor %}\n </ul>\n {% endfor %}\n \"\"\"\n qs = self.entries()\n used = []\n categories = []\n for entry in qs:\n category = entry.category\n if category not in used:\n row = {\n 'category': category,\n 'entries': self._entries_for_category(category)\n }\n categories.append(row)\n used.append(category)\n return categories\n\n def sponsors(self):\n \"\"\"Return a list of sponsors current at time of this version release.\n\n :returns: A list of SponsorPeriod objects for current project\n whose release date coincides with the version release date.\n Only approved sponsors are returned.\n Returns None if the release date (which is optional) is not set.\n :rtype: Queryset, None\n \"\"\"\n if self.release_date is None:\n return None\n sponsors = SponsorshipPeriod.approved_objects.filter(\n end_date__gte=self.release_date).filter(\n start_date__lte=self.release_date).filter(\n project=self.project).order_by(\n 'start_date').order_by(\n '-sponsorship_level__value', 'sponsor__name')\n return sponsors\n\n def formatted_release_date(self):\n \"\"\"\"Return a long formatted released date e.g. 24 June 2016.\n\n :returns: A string containing the long formatted date, or an empty\n string if the date is not set.\n :rtype: str\n \"\"\"\n long_date = None\n if self.release_date:\n # %-d Day of the month as a decimal number. (Platform specific)\n # %B Month as locale\u2019s full name.\n # %Y Year e.g. 2016\n long_date = self.release_date.strftime('%-d %B, %Y')\n return long_date\n", "path": "django_project/changes/models/version.py"}]}
| 2,510 | 119 |
gh_patches_debug_34428
|
rasdani/github-patches
|
git_diff
|
koxudaxi__datamodel-code-generator-1829
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Black 24.1.0 breaks code formatting if wrap-string-literal is set
**Describe the bug**
Black [24.1.0](https://github.com/psf/black/releases/tag/24.1.0) was just released and removes support for the deprecated `--experimental-string-processing` flag (psf/black#4096). This breaks the code in [`format.py`](https://github.com/koxudaxi/datamodel-code-generator/blob/acc6bf604b13626f22fc123d72ae08ff0a114155/datamodel_code_generator/format.py#L146) that uses this option:
```
Traceback (most recent call last):
File ".../python3.11/site-packages/datamodel_code_generator/__main__.py", line 429, in main
generate(
File ".../python3.11/site-packages/datamodel_code_generator/__init__.py", line 463, in generate
results = parser.parse()
^^^^^^^^^^^^^^
File ".../python3.11/site-packages/datamodel_code_generator/parser/base.py", line 1156, in parse
code_formatter: Optional[CodeFormatter] = CodeFormatter(
^^^^^^^^^^^^^^
File ".../python3.11/site-packages/datamodel_code_generator/format.py", line 152, in __init__
self.black_mode = black.FileMode(
^^^^^^^^^^^^^^^
TypeError: Mode.__init__() got an unexpected keyword argument 'experimental_string_processing'
```
**Expected behavior**
No crash.
**Version:**
- OS: Linux
- Python version: 3.11
- datamodel-code-generator version: 0.25.2
- black version: 0.24.1
**Additional context**
Possible mitigation:
- add a temporary upper bound to the `black` version spec in [pyproject.toml](https://github.com/koxudaxi/datamodel-code-generator/blob/acc6bf604b13626f22fc123d72ae08ff0a114155/pyproject.toml#L54)
- same, but in user environment definitions
- use `--preview --enable-unstable-feature string_processing` instead (as suggested by the black release notes).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `datamodel_code_generator/format.py`
Content:
```
1 from __future__ import annotations
2
3 from enum import Enum
4 from importlib import import_module
5 from pathlib import Path
6 from typing import TYPE_CHECKING, Any, Dict, List, Optional, Sequence
7 from warnings import warn
8
9 import black
10 import isort
11
12 from datamodel_code_generator.util import cached_property, load_toml
13
14
15 class PythonVersion(Enum):
16 PY_36 = '3.6'
17 PY_37 = '3.7'
18 PY_38 = '3.8'
19 PY_39 = '3.9'
20 PY_310 = '3.10'
21 PY_311 = '3.11'
22 PY_312 = '3.12'
23
24 @cached_property
25 def _is_py_38_or_later(self) -> bool: # pragma: no cover
26 return self.value not in {self.PY_36.value, self.PY_37.value} # type: ignore
27
28 @cached_property
29 def _is_py_39_or_later(self) -> bool: # pragma: no cover
30 return self.value not in {self.PY_36.value, self.PY_37.value, self.PY_38.value} # type: ignore
31
32 @cached_property
33 def _is_py_310_or_later(self) -> bool: # pragma: no cover
34 return self.value not in {
35 self.PY_36.value,
36 self.PY_37.value,
37 self.PY_38.value,
38 self.PY_39.value,
39 } # type: ignore
40
41 @cached_property
42 def _is_py_311_or_later(self) -> bool: # pragma: no cover
43 return self.value not in {
44 self.PY_36.value,
45 self.PY_37.value,
46 self.PY_38.value,
47 self.PY_39.value,
48 self.PY_310.value,
49 } # type: ignore
50
51 @property
52 def has_literal_type(self) -> bool:
53 return self._is_py_38_or_later
54
55 @property
56 def has_union_operator(self) -> bool: # pragma: no cover
57 return self._is_py_310_or_later
58
59 @property
60 def has_annotated_type(self) -> bool:
61 return self._is_py_39_or_later
62
63 @property
64 def has_typed_dict(self) -> bool:
65 return self._is_py_38_or_later
66
67 @property
68 def has_typed_dict_non_required(self) -> bool:
69 return self._is_py_311_or_later
70
71
72 if TYPE_CHECKING:
73
74 class _TargetVersion(Enum):
75 ...
76
77 BLACK_PYTHON_VERSION: Dict[PythonVersion, _TargetVersion]
78 else:
79 BLACK_PYTHON_VERSION: Dict[PythonVersion, black.TargetVersion] = {
80 v: getattr(black.TargetVersion, f'PY{v.name.split("_")[-1]}')
81 for v in PythonVersion
82 if hasattr(black.TargetVersion, f'PY{v.name.split("_")[-1]}')
83 }
84
85
86 def is_supported_in_black(python_version: PythonVersion) -> bool: # pragma: no cover
87 return python_version in BLACK_PYTHON_VERSION
88
89
90 def black_find_project_root(sources: Sequence[Path]) -> Path:
91 if TYPE_CHECKING:
92 from typing import Iterable, Tuple, Union
93
94 def _find_project_root(
95 srcs: Union[Sequence[str], Iterable[str]],
96 ) -> Union[Tuple[Path, str], Path]:
97 ...
98
99 else:
100 from black import find_project_root as _find_project_root
101 project_root = _find_project_root(tuple(str(s) for s in sources))
102 if isinstance(project_root, tuple):
103 return project_root[0]
104 else: # pragma: no cover
105 return project_root
106
107
108 class CodeFormatter:
109 def __init__(
110 self,
111 python_version: PythonVersion,
112 settings_path: Optional[Path] = None,
113 wrap_string_literal: Optional[bool] = None,
114 skip_string_normalization: bool = True,
115 known_third_party: Optional[List[str]] = None,
116 custom_formatters: Optional[List[str]] = None,
117 custom_formatters_kwargs: Optional[Dict[str, Any]] = None,
118 ) -> None:
119 if not settings_path:
120 settings_path = Path().resolve()
121
122 root = black_find_project_root((settings_path,))
123 path = root / 'pyproject.toml'
124 if path.is_file():
125 pyproject_toml = load_toml(path)
126 config = pyproject_toml.get('tool', {}).get('black', {})
127 else:
128 config = {}
129
130 black_kwargs: Dict[str, Any] = {}
131 if wrap_string_literal is not None:
132 experimental_string_processing = wrap_string_literal
133 else:
134 experimental_string_processing = config.get(
135 'experimental-string-processing'
136 )
137
138 if experimental_string_processing is not None: # pragma: no cover
139 if black.__version__.startswith('19.'): # type: ignore
140 warn(
141 f"black doesn't support `experimental-string-processing` option" # type: ignore
142 f' for wrapping string literal in {black.__version__}'
143 )
144 else:
145 black_kwargs[
146 'experimental_string_processing'
147 ] = experimental_string_processing
148
149 if TYPE_CHECKING:
150 self.black_mode: black.FileMode
151 else:
152 self.black_mode = black.FileMode(
153 target_versions={BLACK_PYTHON_VERSION[python_version]},
154 line_length=config.get('line-length', black.DEFAULT_LINE_LENGTH),
155 string_normalization=not skip_string_normalization
156 or not config.get('skip-string-normalization', True),
157 **black_kwargs,
158 )
159
160 self.settings_path: str = str(settings_path)
161
162 self.isort_config_kwargs: Dict[str, Any] = {}
163 if known_third_party:
164 self.isort_config_kwargs['known_third_party'] = known_third_party
165
166 if isort.__version__.startswith('4.'):
167 self.isort_config = None
168 else:
169 self.isort_config = isort.Config(
170 settings_path=self.settings_path, **self.isort_config_kwargs
171 )
172
173 self.custom_formatters_kwargs = custom_formatters_kwargs or {}
174 self.custom_formatters = self._check_custom_formatters(custom_formatters)
175
176 def _load_custom_formatter(
177 self, custom_formatter_import: str
178 ) -> CustomCodeFormatter:
179 import_ = import_module(custom_formatter_import)
180
181 if not hasattr(import_, 'CodeFormatter'):
182 raise NameError(
183 f'Custom formatter module `{import_.__name__}` must contains object with name Formatter'
184 )
185
186 formatter_class = import_.__getattribute__('CodeFormatter')
187
188 if not issubclass(formatter_class, CustomCodeFormatter):
189 raise TypeError(
190 f'The custom module {custom_formatter_import} must inherit from `datamodel-code-generator`'
191 )
192
193 return formatter_class(formatter_kwargs=self.custom_formatters_kwargs)
194
195 def _check_custom_formatters(
196 self, custom_formatters: Optional[List[str]]
197 ) -> List[CustomCodeFormatter]:
198 if custom_formatters is None:
199 return []
200
201 return [
202 self._load_custom_formatter(custom_formatter_import)
203 for custom_formatter_import in custom_formatters
204 ]
205
206 def format_code(
207 self,
208 code: str,
209 ) -> str:
210 code = self.apply_isort(code)
211 code = self.apply_black(code)
212
213 for formatter in self.custom_formatters:
214 code = formatter.apply(code)
215
216 return code
217
218 def apply_black(self, code: str) -> str:
219 return black.format_str(
220 code,
221 mode=self.black_mode,
222 )
223
224 if TYPE_CHECKING:
225
226 def apply_isort(self, code: str) -> str:
227 ...
228
229 else:
230 if isort.__version__.startswith('4.'):
231
232 def apply_isort(self, code: str) -> str:
233 return isort.SortImports(
234 file_contents=code,
235 settings_path=self.settings_path,
236 **self.isort_config_kwargs,
237 ).output
238
239 else:
240
241 def apply_isort(self, code: str) -> str:
242 return isort.code(code, config=self.isort_config)
243
244
245 class CustomCodeFormatter:
246 def __init__(self, formatter_kwargs: Dict[str, Any]) -> None:
247 self.formatter_kwargs = formatter_kwargs
248
249 def apply(self, code: str) -> str:
250 raise NotImplementedError
251
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/datamodel_code_generator/format.py b/datamodel_code_generator/format.py
--- a/datamodel_code_generator/format.py
+++ b/datamodel_code_generator/format.py
@@ -7,6 +7,7 @@
from warnings import warn
import black
+import black.mode
import isort
from datamodel_code_generator.util import cached_property, load_toml
@@ -131,9 +132,15 @@
if wrap_string_literal is not None:
experimental_string_processing = wrap_string_literal
else:
- experimental_string_processing = config.get(
- 'experimental-string-processing'
- )
+ if black.__version__ < '24.1.0': # type: ignore
+ experimental_string_processing = config.get(
+ 'experimental-string-processing'
+ )
+ else:
+ experimental_string_processing = config.get('preview', False) and (
+ config.get('unstable', False)
+ or 'string_processing' in config.get('enable-unstable-feature', [])
+ )
if experimental_string_processing is not None: # pragma: no cover
if black.__version__.startswith('19.'): # type: ignore
@@ -141,10 +148,16 @@
f"black doesn't support `experimental-string-processing` option" # type: ignore
f' for wrapping string literal in {black.__version__}'
)
- else:
+ elif black.__version__ < '24.1.0': # type: ignore
black_kwargs[
'experimental_string_processing'
] = experimental_string_processing
+ elif experimental_string_processing:
+ black_kwargs['preview'] = True
+ black_kwargs['unstable'] = config.get('unstable', False)
+ black_kwargs['enabled_features'] = {
+ black.mode.Preview.string_processing
+ }
if TYPE_CHECKING:
self.black_mode: black.FileMode
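For black 24.1.0 and later, the replacement for the removed `experimental_string_processing` switch is the preview/unstable-feature machinery the patch wires up. The following is a hedged sketch of constructing such a mode directly; the sample source string and `line_length=60` are arbitrary illustrative choices, and it assumes a black 24.x installation where `black.mode.Preview.string_processing` and the `preview`/`enabled_features` fields exist as used in the diff above:

```python
# Sketch: enable black's string_processing preview feature on black >= 24.1.0,
# mirroring the preview/enabled_features kwargs built in the patch above.
import black
import black.mode

mode = black.FileMode(
    line_length=60,
    preview=True,
    enabled_features={black.mode.Preview.string_processing},
)

code = (
    'x = "a very long string literal that black may choose to wrap '
    'once string processing is enabled in preview mode"\n'
)
print(black.format_str(code, mode=mode))  # may split the literal across lines
```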
|
{"golden_diff": "diff --git a/datamodel_code_generator/format.py b/datamodel_code_generator/format.py\n--- a/datamodel_code_generator/format.py\n+++ b/datamodel_code_generator/format.py\n@@ -7,6 +7,7 @@\n from warnings import warn\n \n import black\n+import black.mode\n import isort\n \n from datamodel_code_generator.util import cached_property, load_toml\n@@ -131,9 +132,15 @@\n if wrap_string_literal is not None:\n experimental_string_processing = wrap_string_literal\n else:\n- experimental_string_processing = config.get(\n- 'experimental-string-processing'\n- )\n+ if black.__version__ < '24.1.0': # type: ignore\n+ experimental_string_processing = config.get(\n+ 'experimental-string-processing'\n+ )\n+ else:\n+ experimental_string_processing = config.get('preview', False) and (\n+ config.get('unstable', False)\n+ or 'string_processing' in config.get('enable-unstable-feature', [])\n+ )\n \n if experimental_string_processing is not None: # pragma: no cover\n if black.__version__.startswith('19.'): # type: ignore\n@@ -141,10 +148,16 @@\n f\"black doesn't support `experimental-string-processing` option\" # type: ignore\n f' for wrapping string literal in {black.__version__}'\n )\n- else:\n+ elif black.__version__ < '24.1.0': # type: ignore\n black_kwargs[\n 'experimental_string_processing'\n ] = experimental_string_processing\n+ elif experimental_string_processing:\n+ black_kwargs['preview'] = True\n+ black_kwargs['unstable'] = config.get('unstable', False)\n+ black_kwargs['enabled_features'] = {\n+ black.mode.Preview.string_processing\n+ }\n \n if TYPE_CHECKING:\n self.black_mode: black.FileMode\n", "issue": "Black 24.1.0 breaks code formatting if wrap-string-literal is set\n**Describe the bug**\r\n\r\nBlack [24.1.0](https://github.com/psf/black/releases/tag/24.1.0) was just released and removes support for the deprecated `--experimental-string-processing` flag (psf/black#4096). 
This breaks the code in [`format.py`](https://github.com/koxudaxi/datamodel-code-generator/blob/acc6bf604b13626f22fc123d72ae08ff0a114155/datamodel_code_generator/format.py#L146) that uses this option:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \".../python3.11/site-packages/datamodel_code_generator/__main__.py\", line 429, in main\r\n generate(\r\n File \".../python3.11/site-packages/datamodel_code_generator/__init__.py\", line 463, in generate\r\n results = parser.parse()\r\n ^^^^^^^^^^^^^^\r\n File \".../python3.11/site-packages/datamodel_code_generator/parser/base.py\", line 1156, in parse\r\n code_formatter: Optional[CodeFormatter] = CodeFormatter(\r\n ^^^^^^^^^^^^^^\r\n File \".../python3.11/site-packages/datamodel_code_generator/format.py\", line 152, in __init__\r\n self.black_mode = black.FileMode(\r\n ^^^^^^^^^^^^^^^\r\nTypeError: Mode.__init__() got an unexpected keyword argument 'experimental_string_processing'\r\n```\r\n\r\n**Expected behavior**\r\n\r\nNo crash.\r\n\r\n**Version:**\r\n - OS: Linux\r\n - Python version: 3.11\r\n - datamodel-code-generator version: 0.25.2\r\n - black version: 0.24.1\r\n\r\n**Additional context**\r\n\r\nPossible mitigation:\r\n- add a temporary upper bound to the `black` version spec in [pyproject.toml](https://github.com/koxudaxi/datamodel-code-generator/blob/acc6bf604b13626f22fc123d72ae08ff0a114155/pyproject.toml#L54)\r\n- same, but in user environment definitions\r\n- use `--preview --enable-unstable-feature string_processing` instead (as suggested by the black release notes).\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom enum import Enum\nfrom importlib import import_module\nfrom pathlib import Path\nfrom typing import TYPE_CHECKING, Any, Dict, List, Optional, Sequence\nfrom warnings import warn\n\nimport black\nimport isort\n\nfrom datamodel_code_generator.util import cached_property, load_toml\n\n\nclass PythonVersion(Enum):\n PY_36 = '3.6'\n PY_37 = '3.7'\n PY_38 = '3.8'\n PY_39 = '3.9'\n PY_310 = '3.10'\n PY_311 = '3.11'\n PY_312 = '3.12'\n\n @cached_property\n def _is_py_38_or_later(self) -> bool: # pragma: no cover\n return self.value not in {self.PY_36.value, self.PY_37.value} # type: ignore\n\n @cached_property\n def _is_py_39_or_later(self) -> bool: # pragma: no cover\n return self.value not in {self.PY_36.value, self.PY_37.value, self.PY_38.value} # type: ignore\n\n @cached_property\n def _is_py_310_or_later(self) -> bool: # pragma: no cover\n return self.value not in {\n self.PY_36.value,\n self.PY_37.value,\n self.PY_38.value,\n self.PY_39.value,\n } # type: ignore\n\n @cached_property\n def _is_py_311_or_later(self) -> bool: # pragma: no cover\n return self.value not in {\n self.PY_36.value,\n self.PY_37.value,\n self.PY_38.value,\n self.PY_39.value,\n self.PY_310.value,\n } # type: ignore\n\n @property\n def has_literal_type(self) -> bool:\n return self._is_py_38_or_later\n\n @property\n def has_union_operator(self) -> bool: # pragma: no cover\n return self._is_py_310_or_later\n\n @property\n def has_annotated_type(self) -> bool:\n return self._is_py_39_or_later\n\n @property\n def has_typed_dict(self) -> bool:\n return self._is_py_38_or_later\n\n @property\n def has_typed_dict_non_required(self) -> bool:\n return self._is_py_311_or_later\n\n\nif TYPE_CHECKING:\n\n class _TargetVersion(Enum):\n ...\n\n BLACK_PYTHON_VERSION: Dict[PythonVersion, _TargetVersion]\nelse:\n BLACK_PYTHON_VERSION: Dict[PythonVersion, black.TargetVersion] = {\n v: getattr(black.TargetVersion, 
f'PY{v.name.split(\"_\")[-1]}')\n for v in PythonVersion\n if hasattr(black.TargetVersion, f'PY{v.name.split(\"_\")[-1]}')\n }\n\n\ndef is_supported_in_black(python_version: PythonVersion) -> bool: # pragma: no cover\n return python_version in BLACK_PYTHON_VERSION\n\n\ndef black_find_project_root(sources: Sequence[Path]) -> Path:\n if TYPE_CHECKING:\n from typing import Iterable, Tuple, Union\n\n def _find_project_root(\n srcs: Union[Sequence[str], Iterable[str]],\n ) -> Union[Tuple[Path, str], Path]:\n ...\n\n else:\n from black import find_project_root as _find_project_root\n project_root = _find_project_root(tuple(str(s) for s in sources))\n if isinstance(project_root, tuple):\n return project_root[0]\n else: # pragma: no cover\n return project_root\n\n\nclass CodeFormatter:\n def __init__(\n self,\n python_version: PythonVersion,\n settings_path: Optional[Path] = None,\n wrap_string_literal: Optional[bool] = None,\n skip_string_normalization: bool = True,\n known_third_party: Optional[List[str]] = None,\n custom_formatters: Optional[List[str]] = None,\n custom_formatters_kwargs: Optional[Dict[str, Any]] = None,\n ) -> None:\n if not settings_path:\n settings_path = Path().resolve()\n\n root = black_find_project_root((settings_path,))\n path = root / 'pyproject.toml'\n if path.is_file():\n pyproject_toml = load_toml(path)\n config = pyproject_toml.get('tool', {}).get('black', {})\n else:\n config = {}\n\n black_kwargs: Dict[str, Any] = {}\n if wrap_string_literal is not None:\n experimental_string_processing = wrap_string_literal\n else:\n experimental_string_processing = config.get(\n 'experimental-string-processing'\n )\n\n if experimental_string_processing is not None: # pragma: no cover\n if black.__version__.startswith('19.'): # type: ignore\n warn(\n f\"black doesn't support `experimental-string-processing` option\" # type: ignore\n f' for wrapping string literal in {black.__version__}'\n )\n else:\n black_kwargs[\n 'experimental_string_processing'\n ] = experimental_string_processing\n\n if TYPE_CHECKING:\n self.black_mode: black.FileMode\n else:\n self.black_mode = black.FileMode(\n target_versions={BLACK_PYTHON_VERSION[python_version]},\n line_length=config.get('line-length', black.DEFAULT_LINE_LENGTH),\n string_normalization=not skip_string_normalization\n or not config.get('skip-string-normalization', True),\n **black_kwargs,\n )\n\n self.settings_path: str = str(settings_path)\n\n self.isort_config_kwargs: Dict[str, Any] = {}\n if known_third_party:\n self.isort_config_kwargs['known_third_party'] = known_third_party\n\n if isort.__version__.startswith('4.'):\n self.isort_config = None\n else:\n self.isort_config = isort.Config(\n settings_path=self.settings_path, **self.isort_config_kwargs\n )\n\n self.custom_formatters_kwargs = custom_formatters_kwargs or {}\n self.custom_formatters = self._check_custom_formatters(custom_formatters)\n\n def _load_custom_formatter(\n self, custom_formatter_import: str\n ) -> CustomCodeFormatter:\n import_ = import_module(custom_formatter_import)\n\n if not hasattr(import_, 'CodeFormatter'):\n raise NameError(\n f'Custom formatter module `{import_.__name__}` must contains object with name Formatter'\n )\n\n formatter_class = import_.__getattribute__('CodeFormatter')\n\n if not issubclass(formatter_class, CustomCodeFormatter):\n raise TypeError(\n f'The custom module {custom_formatter_import} must inherit from `datamodel-code-generator`'\n )\n\n return formatter_class(formatter_kwargs=self.custom_formatters_kwargs)\n\n def 
_check_custom_formatters(\n self, custom_formatters: Optional[List[str]]\n ) -> List[CustomCodeFormatter]:\n if custom_formatters is None:\n return []\n\n return [\n self._load_custom_formatter(custom_formatter_import)\n for custom_formatter_import in custom_formatters\n ]\n\n def format_code(\n self,\n code: str,\n ) -> str:\n code = self.apply_isort(code)\n code = self.apply_black(code)\n\n for formatter in self.custom_formatters:\n code = formatter.apply(code)\n\n return code\n\n def apply_black(self, code: str) -> str:\n return black.format_str(\n code,\n mode=self.black_mode,\n )\n\n if TYPE_CHECKING:\n\n def apply_isort(self, code: str) -> str:\n ...\n\n else:\n if isort.__version__.startswith('4.'):\n\n def apply_isort(self, code: str) -> str:\n return isort.SortImports(\n file_contents=code,\n settings_path=self.settings_path,\n **self.isort_config_kwargs,\n ).output\n\n else:\n\n def apply_isort(self, code: str) -> str:\n return isort.code(code, config=self.isort_config)\n\n\nclass CustomCodeFormatter:\n def __init__(self, formatter_kwargs: Dict[str, Any]) -> None:\n self.formatter_kwargs = formatter_kwargs\n\n def apply(self, code: str) -> str:\n raise NotImplementedError\n", "path": "datamodel_code_generator/format.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom enum import Enum\nfrom importlib import import_module\nfrom pathlib import Path\nfrom typing import TYPE_CHECKING, Any, Dict, List, Optional, Sequence\nfrom warnings import warn\n\nimport black\nimport black.mode\nimport isort\n\nfrom datamodel_code_generator.util import cached_property, load_toml\n\n\nclass PythonVersion(Enum):\n PY_36 = '3.6'\n PY_37 = '3.7'\n PY_38 = '3.8'\n PY_39 = '3.9'\n PY_310 = '3.10'\n PY_311 = '3.11'\n PY_312 = '3.12'\n\n @cached_property\n def _is_py_38_or_later(self) -> bool: # pragma: no cover\n return self.value not in {self.PY_36.value, self.PY_37.value} # type: ignore\n\n @cached_property\n def _is_py_39_or_later(self) -> bool: # pragma: no cover\n return self.value not in {self.PY_36.value, self.PY_37.value, self.PY_38.value} # type: ignore\n\n @cached_property\n def _is_py_310_or_later(self) -> bool: # pragma: no cover\n return self.value not in {\n self.PY_36.value,\n self.PY_37.value,\n self.PY_38.value,\n self.PY_39.value,\n } # type: ignore\n\n @cached_property\n def _is_py_311_or_later(self) -> bool: # pragma: no cover\n return self.value not in {\n self.PY_36.value,\n self.PY_37.value,\n self.PY_38.value,\n self.PY_39.value,\n self.PY_310.value,\n } # type: ignore\n\n @property\n def has_literal_type(self) -> bool:\n return self._is_py_38_or_later\n\n @property\n def has_union_operator(self) -> bool: # pragma: no cover\n return self._is_py_310_or_later\n\n @property\n def has_annotated_type(self) -> bool:\n return self._is_py_39_or_later\n\n @property\n def has_typed_dict(self) -> bool:\n return self._is_py_38_or_later\n\n @property\n def has_typed_dict_non_required(self) -> bool:\n return self._is_py_311_or_later\n\n\nif TYPE_CHECKING:\n\n class _TargetVersion(Enum):\n ...\n\n BLACK_PYTHON_VERSION: Dict[PythonVersion, _TargetVersion]\nelse:\n BLACK_PYTHON_VERSION: Dict[PythonVersion, black.TargetVersion] = {\n v: getattr(black.TargetVersion, f'PY{v.name.split(\"_\")[-1]}')\n for v in PythonVersion\n if hasattr(black.TargetVersion, f'PY{v.name.split(\"_\")[-1]}')\n }\n\n\ndef is_supported_in_black(python_version: PythonVersion) -> bool: # pragma: no cover\n return python_version in BLACK_PYTHON_VERSION\n\n\ndef black_find_project_root(sources: 
Sequence[Path]) -> Path:\n if TYPE_CHECKING:\n from typing import Iterable, Tuple, Union\n\n def _find_project_root(\n srcs: Union[Sequence[str], Iterable[str]],\n ) -> Union[Tuple[Path, str], Path]:\n ...\n\n else:\n from black import find_project_root as _find_project_root\n project_root = _find_project_root(tuple(str(s) for s in sources))\n if isinstance(project_root, tuple):\n return project_root[0]\n else: # pragma: no cover\n return project_root\n\n\nclass CodeFormatter:\n def __init__(\n self,\n python_version: PythonVersion,\n settings_path: Optional[Path] = None,\n wrap_string_literal: Optional[bool] = None,\n skip_string_normalization: bool = True,\n known_third_party: Optional[List[str]] = None,\n custom_formatters: Optional[List[str]] = None,\n custom_formatters_kwargs: Optional[Dict[str, Any]] = None,\n ) -> None:\n if not settings_path:\n settings_path = Path().resolve()\n\n root = black_find_project_root((settings_path,))\n path = root / 'pyproject.toml'\n if path.is_file():\n pyproject_toml = load_toml(path)\n config = pyproject_toml.get('tool', {}).get('black', {})\n else:\n config = {}\n\n black_kwargs: Dict[str, Any] = {}\n if wrap_string_literal is not None:\n experimental_string_processing = wrap_string_literal\n else:\n if black.__version__ < '24.1.0': # type: ignore\n experimental_string_processing = config.get(\n 'experimental-string-processing'\n )\n else:\n experimental_string_processing = config.get('preview', False) and (\n config.get('unstable', False)\n or 'string_processing' in config.get('enable-unstable-feature', [])\n )\n\n if experimental_string_processing is not None: # pragma: no cover\n if black.__version__.startswith('19.'): # type: ignore\n warn(\n f\"black doesn't support `experimental-string-processing` option\" # type: ignore\n f' for wrapping string literal in {black.__version__}'\n )\n elif black.__version__ < '24.1.0': # type: ignore\n black_kwargs[\n 'experimental_string_processing'\n ] = experimental_string_processing\n elif experimental_string_processing:\n black_kwargs['preview'] = True\n black_kwargs['unstable'] = config.get('unstable', False)\n black_kwargs['enabled_features'] = {\n black.mode.Preview.string_processing\n }\n\n if TYPE_CHECKING:\n self.black_mode: black.FileMode\n else:\n self.black_mode = black.FileMode(\n target_versions={BLACK_PYTHON_VERSION[python_version]},\n line_length=config.get('line-length', black.DEFAULT_LINE_LENGTH),\n string_normalization=not skip_string_normalization\n or not config.get('skip-string-normalization', True),\n **black_kwargs,\n )\n\n self.settings_path: str = str(settings_path)\n\n self.isort_config_kwargs: Dict[str, Any] = {}\n if known_third_party:\n self.isort_config_kwargs['known_third_party'] = known_third_party\n\n if isort.__version__.startswith('4.'):\n self.isort_config = None\n else:\n self.isort_config = isort.Config(\n settings_path=self.settings_path, **self.isort_config_kwargs\n )\n\n self.custom_formatters_kwargs = custom_formatters_kwargs or {}\n self.custom_formatters = self._check_custom_formatters(custom_formatters)\n\n def _load_custom_formatter(\n self, custom_formatter_import: str\n ) -> CustomCodeFormatter:\n import_ = import_module(custom_formatter_import)\n\n if not hasattr(import_, 'CodeFormatter'):\n raise NameError(\n f'Custom formatter module `{import_.__name__}` must contains object with name Formatter'\n )\n\n formatter_class = import_.__getattribute__('CodeFormatter')\n\n if not issubclass(formatter_class, CustomCodeFormatter):\n raise TypeError(\n f'The custom 
module {custom_formatter_import} must inherit from `datamodel-code-generator`'\n )\n\n return formatter_class(formatter_kwargs=self.custom_formatters_kwargs)\n\n def _check_custom_formatters(\n self, custom_formatters: Optional[List[str]]\n ) -> List[CustomCodeFormatter]:\n if custom_formatters is None:\n return []\n\n return [\n self._load_custom_formatter(custom_formatter_import)\n for custom_formatter_import in custom_formatters\n ]\n\n def format_code(\n self,\n code: str,\n ) -> str:\n code = self.apply_isort(code)\n code = self.apply_black(code)\n\n for formatter in self.custom_formatters:\n code = formatter.apply(code)\n\n return code\n\n def apply_black(self, code: str) -> str:\n return black.format_str(\n code,\n mode=self.black_mode,\n )\n\n if TYPE_CHECKING:\n\n def apply_isort(self, code: str) -> str:\n ...\n\n else:\n if isort.__version__.startswith('4.'):\n\n def apply_isort(self, code: str) -> str:\n return isort.SortImports(\n file_contents=code,\n settings_path=self.settings_path,\n **self.isort_config_kwargs,\n ).output\n\n else:\n\n def apply_isort(self, code: str) -> str:\n return isort.code(code, config=self.isort_config)\n\n\nclass CustomCodeFormatter:\n def __init__(self, formatter_kwargs: Dict[str, Any]) -> None:\n self.formatter_kwargs = formatter_kwargs\n\n def apply(self, code: str) -> str:\n raise NotImplementedError\n", "path": "datamodel_code_generator/format.py"}]}
| 3,310 | 425 |
gh_patches_debug_15512
|
rasdani/github-patches
|
git_diff
|
ResonantGeoData__ResonantGeoData-411
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unassigned permissions not working
Non-admin accounts are seeing an incorrect number of spatial entries in the search results. Here are two results: 1) from my `@kitware` account, which is an admin, and 2) from my `@gmail` account that has no permissions. Using the changes from #401: 
https://github.com/ResonantGeoData/ResonantGeoData/blob/014ce2693a0a3e899d6af0a9d7822a5f1327268c/rgd/geodata/permissions.py#L108
You can see 475 results with the admin account (which is the correct amount) and 4949 results with the nonadmin account which hits that new code (this number is wildly incorrect):
| admin | nonadmin |
| --- | --- |
|  |  |
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rgd/geodata/permissions.py`
Content:
```
1 from typing import Optional
2
3 from django.conf import settings
4 from django.contrib.auth.backends import BaseBackend
5 from django.core.exceptions import PermissionDenied
6 from django.db.models.functions import Coalesce
7
8 from rgd.geodata import models
9
10
11 def annotate_queryset(queryset):
12 """Annotate the queryset to include a path to a collection.
13
14 Some models don't have a direct path to `collection`
15 and must be annotated to include it.
16 """
17 model = queryset.model
18 if model == models.SpatialEntry:
19 return queryset.annotate(
20 _collection_permissions__user=Coalesce(
21 'fmventry__fmv_file__file__collection__collection_permissions__user',
22 'geometryentry__geometry_archive__file__collection__collection_permissions__user',
23 'rastermetaentry__parent_raster__image_set__images__image_file__file__collection__collection_permissions__user',
24 ),
25 _collection_permissions__role=Coalesce(
26 'fmventry__fmv_file__file__collection__collection_permissions__role',
27 'geometryentry__geometry_archive__file__collection__collection_permissions__role',
28 'rastermetaentry__parent_raster__image_set__images__image_file__file__collection__collection_permissions__role',
29 ),
30 )
31 return queryset
32
33
34 def get_collection_membership_path(model) -> Optional[str]:
35 """Get the path to the 'CollectionPermission' model.
36
37 Relationships are represented as 'dunder's ('__'). Returning `None`
38 means the model is explicitly unprotected.
39 """
40 # Collection
41 if issubclass(model, models.CollectionPermission):
42 return ''
43 if issubclass(model, models.Collection):
44 return 'collection_permissions'
45 # Common
46 if issubclass(model, models.ChecksumFile):
47 return 'collection__collection_permissions'
48 # Imagery
49 if issubclass(model, models.ImageEntry):
50 return 'image_file__file__collection__collection_permissions'
51 if issubclass(model, models.ImageSet):
52 return 'images__image_file__file__collection__collection_permissions'
53 if issubclass(model, models.RasterEntry):
54 return 'image_set__images__image_file__file__collection__collection_permissions'
55 if issubclass(model, models.RasterMetaEntry):
56 return (
57 'parent_raster__image_set__images__image_file__file__collection__collection_permissions'
58 )
59 if issubclass(model, models.BandMetaEntry):
60 return 'parent_image__image_file__file__collection__collection_permissions'
61 if issubclass(model, models.ConvertedImageFile):
62 return 'source_image__image_file__file__collection__collection_permissions'
63 if issubclass(model, models.SubsampledImage):
64 return 'source_image__image_file__file__collection__collection_permissions'
65 if issubclass(model, models.KWCOCOArchive):
66 return 'spec_file__collection__collection_permissions'
67 # Annotation
68 if issubclass(model, models.Annotation):
69 return 'image__image_file__collection__collection_permissions'
70 if issubclass(model, models.Segmentation):
71 return 'annotation__image__image_file__collection__collection_permissions'
72 # Geometry
73 if issubclass(model, models.GeometryEntry):
74 return 'geometry_archive__file__collection__collection_permissions'
75 # FMV
76 if issubclass(model, models.FMVEntry):
77 return 'fmv_file__file__collection__collection_permissions'
78 # SpatialEntry
79 if model == models.SpatialEntry:
80 return '_collection_permissions'
81
82 raise NotImplementedError
83
84
85 def filter_perm(user, queryset, role):
86 """Filter a queryset."""
87 # Called outside of view
88 if user is None:
89 return queryset
90 # Must be logged in
91 if not user.is_active or user.is_anonymous:
92 return queryset.none()
93 # Superusers can see all (not staff users)
94 if user.is_active and user.is_superuser:
95 return queryset
96 # No relationship to collection
97 path = get_collection_membership_path(queryset.model)
98 if path is None:
99 return queryset
100 # Check permissions
101 # `path` can be an empty string (meaning queryset is `CollectionPermission`)
102 user_path = (path + '__' if path != '' else path) + 'user'
103 role_path = (path + '__' if path != '' else path) + 'role'
104 queryset = annotate_queryset(queryset)
105 filtered = queryset.filter(**{user_path: user.pk}).exclude(**{role_path + '__lt': role})
106 # Check setting for unassigned permissions
107 if settings.RGD_GLOBAL_READ_ACCESS:
108 unassigned = queryset.filter(**{user_path + '__isnull': True})
109 return unassigned | filtered
110 return filtered
111
112
113 def filter_read_perm(user, queryset):
114 """Filter a queryset to what the user may read."""
115 return filter_perm(user, queryset, models.CollectionPermission.READER)
116
117
118 def filter_write_perm(user, queryset):
119 """Filter a queryset to what the user may edit."""
120 return filter_perm(user, queryset, models.CollectionPermission.OWNER)
121
122
123 def check_read_perm(user, obj):
124 """Raise 'PermissionDenied' error if user does not have read permissions."""
125 model = type(obj)
126 if not filter_read_perm(user, model.objects.filter(pk=obj.pk)).exists():
127 raise PermissionDenied
128
129
130 def check_write_perm(user, obj):
131 """Raise 'PermissionDenied' error if user does not have write permissions."""
132 # Called outside of view
133 model = type(obj)
134 if not filter_write_perm(user, model.objects.filter(pk=obj.pk)).exists():
135 raise PermissionDenied
136
137
138 class CollectionAuthorizationBackend(BaseBackend):
139 def has_perm(self, user, perm, obj=None):
140 """Supplement default Django permission backend.
141
142 Returns `True` if the user has the specified permission, where perm is in the format
143 `"<app label>.<permission codename>"`. If the user is
144 inactive, this method will always return False. For an active superuser, this method
145 will always return `True`.
146
147 https://docs.djangoproject.com/en/3.1/ref/contrib/auth/#django.contrib.auth.models.User.has_perm
148 """
149 app_label, codename = perm.split('.')
150 if app_label == 'geodata':
151 if codename.startswith('view'):
152 check_read_perm(user, obj)
153 if (
154 codename.startswith('add')
155 or codename.startswith('delete')
156 or codename.startswith('change')
157 ):
158 check_write_perm(user, obj)
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rgd/geodata/permissions.py b/rgd/geodata/permissions.py
--- a/rgd/geodata/permissions.py
+++ b/rgd/geodata/permissions.py
@@ -102,10 +102,12 @@
user_path = (path + '__' if path != '' else path) + 'user'
role_path = (path + '__' if path != '' else path) + 'role'
queryset = annotate_queryset(queryset)
- filtered = queryset.filter(**{user_path: user.pk}).exclude(**{role_path + '__lt': role})
+ filtered = (
+ queryset.filter(**{user_path: user.pk}).exclude(**{role_path + '__lt': role}).distinct()
+ )
# Check setting for unassigned permissions
if settings.RGD_GLOBAL_READ_ACCESS:
- unassigned = queryset.filter(**{user_path + '__isnull': True})
+ unassigned = queryset.filter(**{user_path + '__isnull': True}).distinct()
return unassigned | filtered
return filtered
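The duplication fixed here comes from the reverse-relation joins behind the `Coalesce` annotation: filtering yields one row per matching permission/image path, so a single `SpatialEntry` can be counted several times (hence 4949 results instead of 475), and `.distinct()` collapses those repeats. Below is a database-free analogy of that join behaviour; the ids, names and hard-coded user id are illustrative assumptions only:

```python
# Toy model of a reverse-relation join: each matching related row re-emits its
# parent, so parents with several matches show up more than once until deduped.
parents = {1: "entry-a", 2: "entry-b"}
permission_rows = [(1, 7), (1, 7), (2, 7)]  # (parent_id, user_id); entry-a matches twice

joined = [parents[pid] for pid, uid in permission_rows if uid == 7]
print(joined)               # ['entry-a', 'entry-a', 'entry-b']  (duplicates, like the raw filter)
print(sorted(set(joined)))  # ['entry-a', 'entry-b']             (what .distinct() yields)
```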
|
{"golden_diff": "diff --git a/rgd/geodata/permissions.py b/rgd/geodata/permissions.py\n--- a/rgd/geodata/permissions.py\n+++ b/rgd/geodata/permissions.py\n@@ -102,10 +102,12 @@\n user_path = (path + '__' if path != '' else path) + 'user'\n role_path = (path + '__' if path != '' else path) + 'role'\n queryset = annotate_queryset(queryset)\n- filtered = queryset.filter(**{user_path: user.pk}).exclude(**{role_path + '__lt': role})\n+ filtered = (\n+ queryset.filter(**{user_path: user.pk}).exclude(**{role_path + '__lt': role}).distinct()\n+ )\n # Check setting for unassigned permissions\n if settings.RGD_GLOBAL_READ_ACCESS:\n- unassigned = queryset.filter(**{user_path + '__isnull': True})\n+ unassigned = queryset.filter(**{user_path + '__isnull': True}).distinct()\n return unassigned | filtered\n return filtered\n", "issue": "Unassigned permissions not working\nNon-admin accounts are seeing an incorrect amount of spatial entries in the search results. Here are two results: 1) from my `@kitwar`e account which is an admin and one from my `@gmail` account that has no permissions. Using the changes from #401: \r\n\r\nhttps://github.com/ResonantGeoData/ResonantGeoData/blob/014ce2693a0a3e899d6af0a9d7822a5f1327268c/rgd/geodata/permissions.py#L108\r\n\r\nYou can see 475 results with the admin account (which is the correct amount) and 4949 results with the nonadmin account which hits that new code (this number is wildly incorrect): \r\n\r\n| admin | nonadmin |\r\n| --- | --- |\r\n|  |  |\r\n\r\n\n", "before_files": [{"content": "from typing import Optional\n\nfrom django.conf import settings\nfrom django.contrib.auth.backends import BaseBackend\nfrom django.core.exceptions import PermissionDenied\nfrom django.db.models.functions import Coalesce\n\nfrom rgd.geodata import models\n\n\ndef annotate_queryset(queryset):\n \"\"\"Annotate the queryset to include a path to a collection.\n\n Some models don't have a direct path to `collection`\n and must be annotated to include it.\n \"\"\"\n model = queryset.model\n if model == models.SpatialEntry:\n return queryset.annotate(\n _collection_permissions__user=Coalesce(\n 'fmventry__fmv_file__file__collection__collection_permissions__user',\n 'geometryentry__geometry_archive__file__collection__collection_permissions__user',\n 'rastermetaentry__parent_raster__image_set__images__image_file__file__collection__collection_permissions__user',\n ),\n _collection_permissions__role=Coalesce(\n 'fmventry__fmv_file__file__collection__collection_permissions__role',\n 'geometryentry__geometry_archive__file__collection__collection_permissions__role',\n 'rastermetaentry__parent_raster__image_set__images__image_file__file__collection__collection_permissions__role',\n ),\n )\n return queryset\n\n\ndef get_collection_membership_path(model) -> Optional[str]:\n \"\"\"Get the path to the 'CollectionPermission' model.\n\n Relationships are represented as 'dunder's ('__'). 
Returning `None`\n means the model is explicitly unprotected.\n \"\"\"\n # Collection\n if issubclass(model, models.CollectionPermission):\n return ''\n if issubclass(model, models.Collection):\n return 'collection_permissions'\n # Common\n if issubclass(model, models.ChecksumFile):\n return 'collection__collection_permissions'\n # Imagery\n if issubclass(model, models.ImageEntry):\n return 'image_file__file__collection__collection_permissions'\n if issubclass(model, models.ImageSet):\n return 'images__image_file__file__collection__collection_permissions'\n if issubclass(model, models.RasterEntry):\n return 'image_set__images__image_file__file__collection__collection_permissions'\n if issubclass(model, models.RasterMetaEntry):\n return (\n 'parent_raster__image_set__images__image_file__file__collection__collection_permissions'\n )\n if issubclass(model, models.BandMetaEntry):\n return 'parent_image__image_file__file__collection__collection_permissions'\n if issubclass(model, models.ConvertedImageFile):\n return 'source_image__image_file__file__collection__collection_permissions'\n if issubclass(model, models.SubsampledImage):\n return 'source_image__image_file__file__collection__collection_permissions'\n if issubclass(model, models.KWCOCOArchive):\n return 'spec_file__collection__collection_permissions'\n # Annotation\n if issubclass(model, models.Annotation):\n return 'image__image_file__collection__collection_permissions'\n if issubclass(model, models.Segmentation):\n return 'annotation__image__image_file__collection__collection_permissions'\n # Geometry\n if issubclass(model, models.GeometryEntry):\n return 'geometry_archive__file__collection__collection_permissions'\n # FMV\n if issubclass(model, models.FMVEntry):\n return 'fmv_file__file__collection__collection_permissions'\n # SpatialEntry\n if model == models.SpatialEntry:\n return '_collection_permissions'\n\n raise NotImplementedError\n\n\ndef filter_perm(user, queryset, role):\n \"\"\"Filter a queryset.\"\"\"\n # Called outside of view\n if user is None:\n return queryset\n # Must be logged in\n if not user.is_active or user.is_anonymous:\n return queryset.none()\n # Superusers can see all (not staff users)\n if user.is_active and user.is_superuser:\n return queryset\n # No relationship to collection\n path = get_collection_membership_path(queryset.model)\n if path is None:\n return queryset\n # Check permissions\n # `path` can be an empty string (meaning queryset is `CollectionPermission`)\n user_path = (path + '__' if path != '' else path) + 'user'\n role_path = (path + '__' if path != '' else path) + 'role'\n queryset = annotate_queryset(queryset)\n filtered = queryset.filter(**{user_path: user.pk}).exclude(**{role_path + '__lt': role})\n # Check setting for unassigned permissions\n if settings.RGD_GLOBAL_READ_ACCESS:\n unassigned = queryset.filter(**{user_path + '__isnull': True})\n return unassigned | filtered\n return filtered\n\n\ndef filter_read_perm(user, queryset):\n \"\"\"Filter a queryset to what the user may read.\"\"\"\n return filter_perm(user, queryset, models.CollectionPermission.READER)\n\n\ndef filter_write_perm(user, queryset):\n \"\"\"Filter a queryset to what the user may edit.\"\"\"\n return filter_perm(user, queryset, models.CollectionPermission.OWNER)\n\n\ndef check_read_perm(user, obj):\n \"\"\"Raise 'PermissionDenied' error if user does not have read permissions.\"\"\"\n model = type(obj)\n if not filter_read_perm(user, model.objects.filter(pk=obj.pk)).exists():\n raise PermissionDenied\n\n\ndef 
check_write_perm(user, obj):\n \"\"\"Raise 'PermissionDenied' error if user does not have write permissions.\"\"\"\n # Called outside of view\n model = type(obj)\n if not filter_write_perm(user, model.objects.filter(pk=obj.pk)).exists():\n raise PermissionDenied\n\n\nclass CollectionAuthorizationBackend(BaseBackend):\n def has_perm(self, user, perm, obj=None):\n \"\"\"Supplement default Django permission backend.\n\n Returns `True` if the user has the specified permission, where perm is in the format\n `\"<app label>.<permission codename>\"`. If the user is\n inactive, this method will always return False. For an active superuser, this method\n will always return `True`.\n\n https://docs.djangoproject.com/en/3.1/ref/contrib/auth/#django.contrib.auth.models.User.has_perm\n \"\"\"\n app_label, codename = perm.split('.')\n if app_label == 'geodata':\n if codename.startswith('view'):\n check_read_perm(user, obj)\n if (\n codename.startswith('add')\n or codename.startswith('delete')\n or codename.startswith('change')\n ):\n check_write_perm(user, obj)\n", "path": "rgd/geodata/permissions.py"}], "after_files": [{"content": "from typing import Optional\n\nfrom django.conf import settings\nfrom django.contrib.auth.backends import BaseBackend\nfrom django.core.exceptions import PermissionDenied\nfrom django.db.models.functions import Coalesce\n\nfrom rgd.geodata import models\n\n\ndef annotate_queryset(queryset):\n \"\"\"Annotate the queryset to include a path to a collection.\n\n Some models don't have a direct path to `collection`\n and must be annotated to include it.\n \"\"\"\n model = queryset.model\n if model == models.SpatialEntry:\n return queryset.annotate(\n _collection_permissions__user=Coalesce(\n 'fmventry__fmv_file__file__collection__collection_permissions__user',\n 'geometryentry__geometry_archive__file__collection__collection_permissions__user',\n 'rastermetaentry__parent_raster__image_set__images__image_file__file__collection__collection_permissions__user',\n ),\n _collection_permissions__role=Coalesce(\n 'fmventry__fmv_file__file__collection__collection_permissions__role',\n 'geometryentry__geometry_archive__file__collection__collection_permissions__role',\n 'rastermetaentry__parent_raster__image_set__images__image_file__file__collection__collection_permissions__role',\n ),\n )\n return queryset\n\n\ndef get_collection_membership_path(model) -> Optional[str]:\n \"\"\"Get the path to the 'CollectionPermission' model.\n\n Relationships are represented as 'dunder's ('__'). 
Returning `None`\n means the model is explicitly unprotected.\n \"\"\"\n # Collection\n if issubclass(model, models.CollectionPermission):\n return ''\n if issubclass(model, models.Collection):\n return 'collection_permissions'\n # Common\n if issubclass(model, models.ChecksumFile):\n return 'collection__collection_permissions'\n # Imagery\n if issubclass(model, models.ImageEntry):\n return 'image_file__file__collection__collection_permissions'\n if issubclass(model, models.ImageSet):\n return 'images__image_file__file__collection__collection_permissions'\n if issubclass(model, models.RasterEntry):\n return 'image_set__images__image_file__file__collection__collection_permissions'\n if issubclass(model, models.RasterMetaEntry):\n return (\n 'parent_raster__image_set__images__image_file__file__collection__collection_permissions'\n )\n if issubclass(model, models.BandMetaEntry):\n return 'parent_image__image_file__file__collection__collection_permissions'\n if issubclass(model, models.ConvertedImageFile):\n return 'source_image__image_file__file__collection__collection_permissions'\n if issubclass(model, models.SubsampledImage):\n return 'source_image__image_file__file__collection__collection_permissions'\n if issubclass(model, models.KWCOCOArchive):\n return 'spec_file__collection__collection_permissions'\n # Annotation\n if issubclass(model, models.Annotation):\n return 'image__image_file__collection__collection_permissions'\n if issubclass(model, models.Segmentation):\n return 'annotation__image__image_file__collection__collection_permissions'\n # Geometry\n if issubclass(model, models.GeometryEntry):\n return 'geometry_archive__file__collection__collection_permissions'\n # FMV\n if issubclass(model, models.FMVEntry):\n return 'fmv_file__file__collection__collection_permissions'\n # SpatialEntry\n if model == models.SpatialEntry:\n return '_collection_permissions'\n\n raise NotImplementedError\n\n\ndef filter_perm(user, queryset, role):\n \"\"\"Filter a queryset.\"\"\"\n # Called outside of view\n if user is None:\n return queryset\n # Must be logged in\n if not user.is_active or user.is_anonymous:\n return queryset.none()\n # Superusers can see all (not staff users)\n if user.is_active and user.is_superuser:\n return queryset\n # No relationship to collection\n path = get_collection_membership_path(queryset.model)\n if path is None:\n return queryset\n # Check permissions\n # `path` can be an empty string (meaning queryset is `CollectionPermission`)\n user_path = (path + '__' if path != '' else path) + 'user'\n role_path = (path + '__' if path != '' else path) + 'role'\n queryset = annotate_queryset(queryset)\n filtered = (\n queryset.filter(**{user_path: user.pk}).exclude(**{role_path + '__lt': role}).distinct()\n )\n # Check setting for unassigned permissions\n if settings.RGD_GLOBAL_READ_ACCESS:\n unassigned = queryset.filter(**{user_path + '__isnull': True}).distinct()\n return unassigned | filtered\n return filtered\n\n\ndef filter_read_perm(user, queryset):\n \"\"\"Filter a queryset to what the user may read.\"\"\"\n return filter_perm(user, queryset, models.CollectionPermission.READER)\n\n\ndef filter_write_perm(user, queryset):\n \"\"\"Filter a queryset to what the user may edit.\"\"\"\n return filter_perm(user, queryset, models.CollectionPermission.OWNER)\n\n\ndef check_read_perm(user, obj):\n \"\"\"Raise 'PermissionDenied' error if user does not have read permissions.\"\"\"\n model = type(obj)\n if not filter_read_perm(user, model.objects.filter(pk=obj.pk)).exists():\n raise 
PermissionDenied\n\n\ndef check_write_perm(user, obj):\n \"\"\"Raise 'PermissionDenied' error if user does not have write permissions.\"\"\"\n # Called outside of view\n model = type(obj)\n if not filter_write_perm(user, model.objects.filter(pk=obj.pk)).exists():\n raise PermissionDenied\n\n\nclass CollectionAuthorizationBackend(BaseBackend):\n def has_perm(self, user, perm, obj=None):\n \"\"\"Supplement default Django permission backend.\n\n Returns `True` if the user has the specified permission, where perm is in the format\n `\"<app label>.<permission codename>\"`. If the user is\n inactive, this method will always return False. For an active superuser, this method\n will always return `True`.\n\n https://docs.djangoproject.com/en/3.1/ref/contrib/auth/#django.contrib.auth.models.User.has_perm\n \"\"\"\n app_label, codename = perm.split('.')\n if app_label == 'geodata':\n if codename.startswith('view'):\n check_read_perm(user, obj)\n if (\n codename.startswith('add')\n or codename.startswith('delete')\n or codename.startswith('change')\n ):\n check_write_perm(user, obj)\n", "path": "rgd/geodata/permissions.py"}]}
| 2,372 | 233 |
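The fix in the row above hinges on one Django ORM behaviour: filtering through a to-many relation (here the reverse foreign key to the permission table) returns one row per matching related record, and OR-ing two such querysets with `|` keeps the duplicates, which is how a few hundred visible objects can inflate into thousands of apparent results. The sketch below is a hedged illustration of that pattern using made-up models rather than the real ResonantGeoData schema; it assumes it lives inside an installed app of a configured Django project.

```python
from django.db import models


class Collection(models.Model):
    """Stand-in for a protected resource."""

    name = models.CharField(max_length=100)

    class Meta:
        app_label = "example"  # hypothetical app


class CollectionPermission(models.Model):
    """One row per (collection, user) grant, i.e. the to-many side."""

    collection = models.ForeignKey(
        Collection, related_name="collection_permissions", on_delete=models.CASCADE
    )
    user_id = models.IntegerField()
    role = models.IntegerField()

    class Meta:
        app_label = "example"


def readable_collections(user_id: int, min_role: int):
    # The join through collection_permissions can repeat each Collection once
    # per matching permission row; .distinct() collapses those repeats, which
    # mirrors the fix applied to filter_perm() in the diff above.
    granted = (
        Collection.objects.filter(collection_permissions__user_id=user_id)
        .exclude(collection_permissions__role__lt=min_role)
        .distinct()
    )
    unassigned = Collection.objects.filter(
        collection_permissions__isnull=True
    ).distinct()
    return unassigned | granted
```

Applying `.distinct()` to both branches before the union, exactly as the patch does, keeps the result count aligned with the number of distinct objects.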
gh_patches_debug_28759
|
rasdani/github-patches
|
git_diff
|
numba__numba-1992
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
jitclass __doc__ passthrough to instance
Jitclass is not exposing the docstring of the class or its methods.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numba/jitclass/boxing.py`
Content:
```
1 """
2 Implement logic relating to wrapping (box) and unwrapping (unbox) instances
3 of jitclasses for use inside the python interpreter.
4 """
5 from __future__ import print_function, absolute_import
6
7 from functools import wraps, partial
8
9 from llvmlite import ir
10
11 from numba import types, cgutils
12 from numba.pythonapi import box, unbox, NativeValue
13 from numba import njit
14 from numba.six import exec_
15 from . import _box
16
17
18 _getter_code_template = """
19 def accessor(__numba_self_):
20 return __numba_self_.{0}
21 """
22
23 _setter_code_template = """
24 def mutator(__numba_self_, __numba_val):
25 __numba_self_.{0} = __numba_val
26 """
27
28 _method_code_template = """
29 def method(__numba_self_, *args):
30 return __numba_self_.{method}(*args)
31 """
32
33
34 def _generate_property(field, template, fname):
35 """
36 Generate simple function that get/set a field of the instance
37 """
38 source = template.format(field)
39 glbls = {}
40 exec_(source, glbls)
41 return njit(glbls[fname])
42
43
44 _generate_getter = partial(_generate_property, template=_getter_code_template,
45 fname='accessor')
46 _generate_setter = partial(_generate_property, template=_setter_code_template,
47 fname='mutator')
48
49
50 def _generate_method(name, func):
51 """
52 Generate a wrapper for calling a method. Note the wrapper will only
53 accept positional arguments.
54 """
55 source = _method_code_template.format(method=name)
56 glbls = {}
57 exec_(source, glbls)
58 method = njit(glbls['method'])
59
60 @wraps(func)
61 def wrapper(*args, **kwargs):
62 return method(*args, **kwargs)
63
64 return wrapper
65
66
67 _cache_specialized_box = {}
68
69
70 def _specialize_box(typ):
71 """
72 Create a subclass of Box that is specialized to the jitclass.
73
74 This function caches the result to avoid code bloat.
75 """
76 # Check cache
77 if typ in _cache_specialized_box:
78 return _cache_specialized_box[typ]
79 dct = {'__slots__': (),
80 '_numba_type_': typ}
81 # Inject attributes as class properties
82 for field in typ.struct:
83 getter = _generate_getter(field)
84 setter = _generate_setter(field)
85 dct[field] = property(getter, setter)
86 # Inject properties as class properties
87 for field, impdct in typ.jitprops.items():
88 getter = None
89 setter = None
90 if 'get' in impdct:
91 getter = _generate_getter(field)
92 if 'set' in impdct:
93 setter = _generate_setter(field)
94 dct[field] = property(getter, setter)
95 # Inject methods as class members
96 for name, func in typ.methods.items():
97 if not (name.startswith('__') and name.endswith('__')):
98 dct[name] = _generate_method(name, func)
99 # Create subclass
100 subcls = type(typ.classname, (_box.Box,), dct)
101 # Store to cache
102 _cache_specialized_box[typ] = subcls
103
104 # Pre-compile attribute getter.
105 # Note: This must be done after the "box" class is created because
106 # compiling the getter requires the "box" class to be defined.
107 for k, v in dct.items():
108 if isinstance(v, property):
109 prop = getattr(subcls, k)
110 if prop.fget is not None:
111 fget = prop.fget
112 fast_fget = fget.compile((typ,))
113 fget.disable_compile()
114 setattr(subcls, k,
115 property(fast_fget, prop.fset, prop.fdel))
116
117 return subcls
118
119
120 ###############################################################################
121 # Implement box/unbox for call wrapper
122
123 @box(types.ClassInstanceType)
124 def _box_class_instance(typ, val, c):
125 meminfo, dataptr = cgutils.unpack_tuple(c.builder, val)
126
127 # Create Box instance
128 box_subclassed = _specialize_box(typ)
129 # Note: the ``box_subclassed`` is kept alive by the cache
130 int_addr_boxcls = c.context.get_constant(types.uintp, id(box_subclassed))
131
132 box_cls = c.builder.inttoptr(int_addr_boxcls, c.pyapi.pyobj)
133 box = c.pyapi.call_function_objargs(box_cls, ())
134
135 # Initialize Box instance
136 llvoidptr = ir.IntType(8).as_pointer()
137 addr_meminfo = c.builder.bitcast(meminfo, llvoidptr)
138 addr_data = c.builder.bitcast(dataptr, llvoidptr)
139
140 def set_member(member_offset, value):
141 # Access member by byte offset
142 offset = c.context.get_constant(types.uintp, member_offset)
143 ptr = cgutils.pointer_add(c.builder, box, offset)
144 casted = c.builder.bitcast(ptr, llvoidptr.as_pointer())
145 c.builder.store(value, casted)
146
147 set_member(_box.box_meminfoptr_offset, addr_meminfo)
148 set_member(_box.box_dataptr_offset, addr_data)
149 return box
150
151
152 @unbox(types.ClassInstanceType)
153 def _unbox_class_instance(typ, val, c):
154 def access_member(member_offset):
155 # Access member by byte offset
156 offset = c.context.get_constant(types.uintp, member_offset)
157 llvoidptr = ir.IntType(8).as_pointer()
158 ptr = cgutils.pointer_add(c.builder, val, offset)
159 casted = c.builder.bitcast(ptr, llvoidptr.as_pointer())
160 return c.builder.load(casted)
161
162 struct_cls = cgutils.create_struct_proxy(typ)
163 inst = struct_cls(c.context, c.builder)
164
165 # load from Python object
166 ptr_meminfo = access_member(_box.box_meminfoptr_offset)
167 ptr_dataptr = access_member(_box.box_dataptr_offset)
168
169 # store to native structure
170 inst.meminfo = c.builder.bitcast(ptr_meminfo, inst.meminfo.type)
171 inst.data = c.builder.bitcast(ptr_dataptr, inst.data.type)
172
173 ret = inst._getvalue()
174
175 c.context.nrt.incref(c.builder, typ, ret)
176
177 return NativeValue(ret, is_error=c.pyapi.c_api_error())
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/numba/jitclass/boxing.py b/numba/jitclass/boxing.py
--- a/numba/jitclass/boxing.py
+++ b/numba/jitclass/boxing.py
@@ -77,7 +77,9 @@
if typ in _cache_specialized_box:
return _cache_specialized_box[typ]
dct = {'__slots__': (),
- '_numba_type_': typ}
+ '_numba_type_': typ,
+ '__doc__': typ.class_type.class_def.__doc__,
+ }
# Inject attributes as class properties
for field in typ.struct:
getter = _generate_getter(field)
@@ -91,7 +93,10 @@
getter = _generate_getter(field)
if 'set' in impdct:
setter = _generate_setter(field)
- dct[field] = property(getter, setter)
+ # get docstring from either the fget or fset
+ imp = impdct.get('get') or impdct.get('set') or None
+ doc = getattr(imp, '__doc__', None)
+ dct[field] = property(getter, setter, doc=doc)
# Inject methods as class members
for name, func in typ.methods.items():
if not (name.startswith('__') and name.endswith('__')):
@@ -112,7 +117,8 @@
fast_fget = fget.compile((typ,))
fget.disable_compile()
setattr(subcls, k,
- property(fast_fget, prop.fset, prop.fdel))
+ property(fast_fget, prop.fset, prop.fdel,
+ doc=prop.__doc__))
return subcls
|
{"golden_diff": "diff --git a/numba/jitclass/boxing.py b/numba/jitclass/boxing.py\n--- a/numba/jitclass/boxing.py\n+++ b/numba/jitclass/boxing.py\n@@ -77,7 +77,9 @@\n if typ in _cache_specialized_box:\n return _cache_specialized_box[typ]\n dct = {'__slots__': (),\n- '_numba_type_': typ}\n+ '_numba_type_': typ,\n+ '__doc__': typ.class_type.class_def.__doc__,\n+ }\n # Inject attributes as class properties\n for field in typ.struct:\n getter = _generate_getter(field)\n@@ -91,7 +93,10 @@\n getter = _generate_getter(field)\n if 'set' in impdct:\n setter = _generate_setter(field)\n- dct[field] = property(getter, setter)\n+ # get docstring from either the fget or fset\n+ imp = impdct.get('get') or impdct.get('set') or None\n+ doc = getattr(imp, '__doc__', None)\n+ dct[field] = property(getter, setter, doc=doc)\n # Inject methods as class members\n for name, func in typ.methods.items():\n if not (name.startswith('__') and name.endswith('__')):\n@@ -112,7 +117,8 @@\n fast_fget = fget.compile((typ,))\n fget.disable_compile()\n setattr(subcls, k,\n- property(fast_fget, prop.fset, prop.fdel))\n+ property(fast_fget, prop.fset, prop.fdel,\n+ doc=prop.__doc__))\n \n return subcls\n", "issue": "jitclass __doc__ passthrough to instance\nJitclass is not exposing the docstring of the class nor the methods.\n\n", "before_files": [{"content": "\"\"\"\nImplement logic relating to wrapping (box) and unwrapping (unbox) instances\nof jitclasses for use inside the python interpreter.\n\"\"\"\nfrom __future__ import print_function, absolute_import\n\nfrom functools import wraps, partial\n\nfrom llvmlite import ir\n\nfrom numba import types, cgutils\nfrom numba.pythonapi import box, unbox, NativeValue\nfrom numba import njit\nfrom numba.six import exec_\nfrom . import _box\n\n\n_getter_code_template = \"\"\"\ndef accessor(__numba_self_):\n return __numba_self_.{0}\n\"\"\"\n\n_setter_code_template = \"\"\"\ndef mutator(__numba_self_, __numba_val):\n __numba_self_.{0} = __numba_val\n\"\"\"\n\n_method_code_template = \"\"\"\ndef method(__numba_self_, *args):\n return __numba_self_.{method}(*args)\n\"\"\"\n\n\ndef _generate_property(field, template, fname):\n \"\"\"\n Generate simple function that get/set a field of the instance\n \"\"\"\n source = template.format(field)\n glbls = {}\n exec_(source, glbls)\n return njit(glbls[fname])\n\n\n_generate_getter = partial(_generate_property, template=_getter_code_template,\n fname='accessor')\n_generate_setter = partial(_generate_property, template=_setter_code_template,\n fname='mutator')\n\n\ndef _generate_method(name, func):\n \"\"\"\n Generate a wrapper for calling a method. 
Note the wrapper will only\n accept positional arguments.\n \"\"\"\n source = _method_code_template.format(method=name)\n glbls = {}\n exec_(source, glbls)\n method = njit(glbls['method'])\n\n @wraps(func)\n def wrapper(*args, **kwargs):\n return method(*args, **kwargs)\n\n return wrapper\n\n\n_cache_specialized_box = {}\n\n\ndef _specialize_box(typ):\n \"\"\"\n Create a subclass of Box that is specialized to the jitclass.\n\n This function caches the result to avoid code bloat.\n \"\"\"\n # Check cache\n if typ in _cache_specialized_box:\n return _cache_specialized_box[typ]\n dct = {'__slots__': (),\n '_numba_type_': typ}\n # Inject attributes as class properties\n for field in typ.struct:\n getter = _generate_getter(field)\n setter = _generate_setter(field)\n dct[field] = property(getter, setter)\n # Inject properties as class properties\n for field, impdct in typ.jitprops.items():\n getter = None\n setter = None\n if 'get' in impdct:\n getter = _generate_getter(field)\n if 'set' in impdct:\n setter = _generate_setter(field)\n dct[field] = property(getter, setter)\n # Inject methods as class members\n for name, func in typ.methods.items():\n if not (name.startswith('__') and name.endswith('__')):\n dct[name] = _generate_method(name, func)\n # Create subclass\n subcls = type(typ.classname, (_box.Box,), dct)\n # Store to cache\n _cache_specialized_box[typ] = subcls\n\n # Pre-compile attribute getter.\n # Note: This must be done after the \"box\" class is created because\n # compiling the getter requires the \"box\" class to be defined.\n for k, v in dct.items():\n if isinstance(v, property):\n prop = getattr(subcls, k)\n if prop.fget is not None:\n fget = prop.fget\n fast_fget = fget.compile((typ,))\n fget.disable_compile()\n setattr(subcls, k,\n property(fast_fget, prop.fset, prop.fdel))\n\n return subcls\n\n\n###############################################################################\n# Implement box/unbox for call wrapper\n\n@box(types.ClassInstanceType)\ndef _box_class_instance(typ, val, c):\n meminfo, dataptr = cgutils.unpack_tuple(c.builder, val)\n\n # Create Box instance\n box_subclassed = _specialize_box(typ)\n # Note: the ``box_subclassed`` is kept alive by the cache\n int_addr_boxcls = c.context.get_constant(types.uintp, id(box_subclassed))\n\n box_cls = c.builder.inttoptr(int_addr_boxcls, c.pyapi.pyobj)\n box = c.pyapi.call_function_objargs(box_cls, ())\n\n # Initialize Box instance\n llvoidptr = ir.IntType(8).as_pointer()\n addr_meminfo = c.builder.bitcast(meminfo, llvoidptr)\n addr_data = c.builder.bitcast(dataptr, llvoidptr)\n\n def set_member(member_offset, value):\n # Access member by byte offset\n offset = c.context.get_constant(types.uintp, member_offset)\n ptr = cgutils.pointer_add(c.builder, box, offset)\n casted = c.builder.bitcast(ptr, llvoidptr.as_pointer())\n c.builder.store(value, casted)\n\n set_member(_box.box_meminfoptr_offset, addr_meminfo)\n set_member(_box.box_dataptr_offset, addr_data)\n return box\n\n\n@unbox(types.ClassInstanceType)\ndef _unbox_class_instance(typ, val, c):\n def access_member(member_offset):\n # Access member by byte offset\n offset = c.context.get_constant(types.uintp, member_offset)\n llvoidptr = ir.IntType(8).as_pointer()\n ptr = cgutils.pointer_add(c.builder, val, offset)\n casted = c.builder.bitcast(ptr, llvoidptr.as_pointer())\n return c.builder.load(casted)\n\n struct_cls = cgutils.create_struct_proxy(typ)\n inst = struct_cls(c.context, c.builder)\n\n # load from Python object\n ptr_meminfo = 
access_member(_box.box_meminfoptr_offset)\n ptr_dataptr = access_member(_box.box_dataptr_offset)\n\n # store to native structure\n inst.meminfo = c.builder.bitcast(ptr_meminfo, inst.meminfo.type)\n inst.data = c.builder.bitcast(ptr_dataptr, inst.data.type)\n\n ret = inst._getvalue()\n\n c.context.nrt.incref(c.builder, typ, ret)\n\n return NativeValue(ret, is_error=c.pyapi.c_api_error())\n", "path": "numba/jitclass/boxing.py"}], "after_files": [{"content": "\"\"\"\nImplement logic relating to wrapping (box) and unwrapping (unbox) instances\nof jitclasses for use inside the python interpreter.\n\"\"\"\nfrom __future__ import print_function, absolute_import\n\nfrom functools import wraps, partial\n\nfrom llvmlite import ir\n\nfrom numba import types, cgutils\nfrom numba.pythonapi import box, unbox, NativeValue\nfrom numba import njit\nfrom numba.six import exec_\nfrom . import _box\n\n\n_getter_code_template = \"\"\"\ndef accessor(__numba_self_):\n return __numba_self_.{0}\n\"\"\"\n\n_setter_code_template = \"\"\"\ndef mutator(__numba_self_, __numba_val):\n __numba_self_.{0} = __numba_val\n\"\"\"\n\n_method_code_template = \"\"\"\ndef method(__numba_self_, *args):\n return __numba_self_.{method}(*args)\n\"\"\"\n\n\ndef _generate_property(field, template, fname):\n \"\"\"\n Generate simple function that get/set a field of the instance\n \"\"\"\n source = template.format(field)\n glbls = {}\n exec_(source, glbls)\n return njit(glbls[fname])\n\n\n_generate_getter = partial(_generate_property, template=_getter_code_template,\n fname='accessor')\n_generate_setter = partial(_generate_property, template=_setter_code_template,\n fname='mutator')\n\n\ndef _generate_method(name, func):\n \"\"\"\n Generate a wrapper for calling a method. Note the wrapper will only\n accept positional arguments.\n \"\"\"\n source = _method_code_template.format(method=name)\n glbls = {}\n exec_(source, glbls)\n method = njit(glbls['method'])\n\n @wraps(func)\n def wrapper(*args, **kwargs):\n return method(*args, **kwargs)\n\n return wrapper\n\n\n_cache_specialized_box = {}\n\n\ndef _specialize_box(typ):\n \"\"\"\n Create a subclass of Box that is specialized to the jitclass.\n\n This function caches the result to avoid code bloat.\n \"\"\"\n # Check cache\n if typ in _cache_specialized_box:\n return _cache_specialized_box[typ]\n dct = {'__slots__': (),\n '_numba_type_': typ,\n '__doc__': typ.class_type.class_def.__doc__,\n }\n # Inject attributes as class properties\n for field in typ.struct:\n getter = _generate_getter(field)\n setter = _generate_setter(field)\n dct[field] = property(getter, setter)\n # Inject properties as class properties\n for field, impdct in typ.jitprops.items():\n getter = None\n setter = None\n if 'get' in impdct:\n getter = _generate_getter(field)\n if 'set' in impdct:\n setter = _generate_setter(field)\n # get docstring from either the fget or fset\n imp = impdct.get('get') or impdct.get('set') or None\n doc = getattr(imp, '__doc__', None)\n dct[field] = property(getter, setter, doc=doc)\n # Inject methods as class members\n for name, func in typ.methods.items():\n if not (name.startswith('__') and name.endswith('__')):\n dct[name] = _generate_method(name, func)\n # Create subclass\n subcls = type(typ.classname, (_box.Box,), dct)\n # Store to cache\n _cache_specialized_box[typ] = subcls\n\n # Pre-compile attribute getter.\n # Note: This must be done after the \"box\" class is created because\n # compiling the getter requires the \"box\" class to be defined.\n for k, v in dct.items():\n if 
isinstance(v, property):\n prop = getattr(subcls, k)\n if prop.fget is not None:\n fget = prop.fget\n fast_fget = fget.compile((typ,))\n fget.disable_compile()\n setattr(subcls, k,\n property(fast_fget, prop.fset, prop.fdel,\n doc=prop.__doc__))\n\n return subcls\n\n\n###############################################################################\n# Implement box/unbox for call wrapper\n\n@box(types.ClassInstanceType)\ndef _box_class_instance(typ, val, c):\n meminfo, dataptr = cgutils.unpack_tuple(c.builder, val)\n\n # Create Box instance\n box_subclassed = _specialize_box(typ)\n # Note: the ``box_subclassed`` is kept alive by the cache\n int_addr_boxcls = c.context.get_constant(types.uintp, id(box_subclassed))\n\n box_cls = c.builder.inttoptr(int_addr_boxcls, c.pyapi.pyobj)\n box = c.pyapi.call_function_objargs(box_cls, ())\n\n # Initialize Box instance\n llvoidptr = ir.IntType(8).as_pointer()\n addr_meminfo = c.builder.bitcast(meminfo, llvoidptr)\n addr_data = c.builder.bitcast(dataptr, llvoidptr)\n\n def set_member(member_offset, value):\n # Access member by byte offset\n offset = c.context.get_constant(types.uintp, member_offset)\n ptr = cgutils.pointer_add(c.builder, box, offset)\n casted = c.builder.bitcast(ptr, llvoidptr.as_pointer())\n c.builder.store(value, casted)\n\n set_member(_box.box_meminfoptr_offset, addr_meminfo)\n set_member(_box.box_dataptr_offset, addr_data)\n return box\n\n\n@unbox(types.ClassInstanceType)\ndef _unbox_class_instance(typ, val, c):\n def access_member(member_offset):\n # Access member by byte offset\n offset = c.context.get_constant(types.uintp, member_offset)\n llvoidptr = ir.IntType(8).as_pointer()\n ptr = cgutils.pointer_add(c.builder, val, offset)\n casted = c.builder.bitcast(ptr, llvoidptr.as_pointer())\n return c.builder.load(casted)\n\n struct_cls = cgutils.create_struct_proxy(typ)\n inst = struct_cls(c.context, c.builder)\n\n # load from Python object\n ptr_meminfo = access_member(_box.box_meminfoptr_offset)\n ptr_dataptr = access_member(_box.box_dataptr_offset)\n\n # store to native structure\n inst.meminfo = c.builder.bitcast(ptr_meminfo, inst.meminfo.type)\n inst.data = c.builder.bitcast(ptr_dataptr, inst.data.type)\n\n ret = inst._getvalue()\n\n c.context.nrt.incref(c.builder, typ, ret)\n\n return NativeValue(ret, is_error=c.pyapi.c_api_error())\n", "path": "numba/jitclass/boxing.py"}]}
| 2,095 | 384 |
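As a quick, hedged illustration of what this row's patch means for users (the snippet is illustrative and not taken from the numba repository): a jitclass defined with docstrings should expose them from the interpreter once boxing copies `__doc__` onto the generated wrapper type. The import path for `jitclass` is version dependent; newer numba releases ship it under `numba.experimental`, while releases contemporary with this patch exposed it as `numba.jitclass`.

```python
from numba import float64
from numba.experimental import jitclass  # older releases: from numba import jitclass

spec = [("value", float64)]


@jitclass(spec)
class Accumulator:
    """Keep a running total."""

    def __init__(self):
        self.value = 0.0

    def add(self, x):
        """Add x to the total and return it."""
        self.value += x
        return self.value


acc = Accumulator()
# The patch copies the class docstring into the generated Box subclass and
# forwards property docstrings via property(..., doc=...); method wrappers
# already carry their docstrings through functools.wraps.
print(type(acc).__doc__)
print(acc.add.__doc__)
```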
gh_patches_debug_1554
|
rasdani/github-patches
|
git_diff
|
mampfes__hacs_waste_collection_schedule-520
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Please expose service for manual schedule refresh
As per my understanding, the current setup only allows the schedule to be refreshed once a day, at the time configured in `fetch_time`.
This may cause issues if for some reason the source is not available at the given time, there is a connectivity problem, or a schedule change has been announced through other channels and the update needs to happen on demand.
Please expose a `waste_collection_schedule.reload` service that calls the same routine that is normally executed at `fetch_time`, but on demand.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/__init__.py`
Content:
```
1 """Waste Collection Schedule Component."""
2 import logging
3 import site
4 from pathlib import Path
5 from random import randrange
6
7 import homeassistant.helpers.config_validation as cv
8 import homeassistant.util.dt as dt_util
9 import voluptuous as vol
10 from homeassistant.core import HomeAssistant, callback
11 from homeassistant.helpers.dispatcher import dispatcher_send
12
13 from .const import DOMAIN, UPDATE_SENSORS_SIGNAL
14
15 from homeassistant.helpers.event import async_call_later # isort:skip
16 from homeassistant.helpers.event import async_track_time_change # isort:skip
17
18 # add module directory to path
19 package_dir = Path(__file__).resolve().parents[0]
20 site.addsitedir(str(package_dir))
21 from waste_collection_schedule import Customize, SourceShell # type: ignore # isort:skip # noqa: E402
22
23 _LOGGER = logging.getLogger(__name__)
24
25 CONF_SOURCES = "sources"
26 CONF_SOURCE_NAME = "name"
27 CONF_SOURCE_ARGS = "args" # source arguments
28 CONF_SOURCE_CALENDAR_TITLE = "calendar_title"
29 CONF_SEPARATOR = "separator"
30 CONF_FETCH_TIME = "fetch_time"
31 CONF_RANDOM_FETCH_TIME_OFFSET = "random_fetch_time_offset"
32 CONF_DAY_SWITCH_TIME = "day_switch_time"
33
34 CONF_CUSTOMIZE = "customize"
35 CONF_TYPE = "type"
36 CONF_ALIAS = "alias"
37 CONF_SHOW = "show"
38 CONF_ICON = "icon"
39 CONF_PICTURE = "picture"
40 CONF_USE_DEDICATED_CALENDAR = "use_dedicated_calendar"
41 CONF_DEDICATED_CALENDAR_TITLE = "dedicated_calendar_title"
42
43 CUSTOMIZE_CONFIG = vol.Schema(
44 {
45 vol.Optional(CONF_TYPE): cv.string,
46 vol.Optional(CONF_ALIAS): cv.string,
47 vol.Optional(CONF_SHOW): cv.boolean,
48 vol.Optional(CONF_ICON): cv.icon,
49 vol.Optional(CONF_PICTURE): cv.string,
50 vol.Optional(CONF_USE_DEDICATED_CALENDAR): cv.boolean,
51 vol.Optional(CONF_DEDICATED_CALENDAR_TITLE): cv.string,
52 }
53 )
54
55 SOURCE_CONFIG = vol.Schema(
56 {
57 vol.Required(CONF_SOURCE_NAME): cv.string,
58 vol.Required(CONF_SOURCE_ARGS): dict,
59 vol.Optional(CONF_CUSTOMIZE, default=[]): vol.All(
60 cv.ensure_list, [CUSTOMIZE_CONFIG]
61 ),
62 vol.Optional(CONF_SOURCE_CALENDAR_TITLE): cv.string,
63 }
64 )
65
66 CONFIG_SCHEMA = vol.Schema(
67 {
68 DOMAIN: vol.Schema(
69 {
70 vol.Required(CONF_SOURCES): vol.All(cv.ensure_list, [SOURCE_CONFIG]),
71 vol.Optional(CONF_SEPARATOR, default=", "): cv.string,
72 vol.Optional(CONF_FETCH_TIME, default="01:00"): cv.time,
73 vol.Optional(
74 CONF_RANDOM_FETCH_TIME_OFFSET, default=60
75 ): cv.positive_int,
76 vol.Optional(CONF_DAY_SWITCH_TIME, default="10:00"): cv.time,
77 }
78 )
79 },
80 extra=vol.ALLOW_EXTRA,
81 )
82
83
84 async def async_setup(hass: HomeAssistant, config: dict):
85 """Set up the component. config contains data from configuration.yaml."""
86 # create empty api object as singleton
87 api = WasteCollectionApi(
88 hass,
89 separator=config[DOMAIN][CONF_SEPARATOR],
90 fetch_time=config[DOMAIN][CONF_FETCH_TIME],
91 random_fetch_time_offset=config[DOMAIN][CONF_RANDOM_FETCH_TIME_OFFSET],
92 day_switch_time=config[DOMAIN][CONF_DAY_SWITCH_TIME],
93 )
94
95 # create shells for source(s)
96 for source in config[DOMAIN][CONF_SOURCES]:
97 # create customize object
98 customize = {}
99 for c in source.get(CONF_CUSTOMIZE, {}):
100 customize[c[CONF_TYPE]] = Customize(
101 waste_type=c[CONF_TYPE],
102 alias=c.get(CONF_ALIAS),
103 show=c.get(CONF_SHOW, True),
104 icon=c.get(CONF_ICON),
105 picture=c.get(CONF_PICTURE),
106 use_dedicated_calendar=c.get(CONF_USE_DEDICATED_CALENDAR, False),
107 dedicated_calendar_title=c.get(CONF_DEDICATED_CALENDAR_TITLE, False),
108 )
109 api.add_source_shell(
110 source_name=source[CONF_SOURCE_NAME],
111 customize=customize,
112 calendar_title=source.get(CONF_SOURCE_CALENDAR_TITLE),
113 source_args=source.get(CONF_SOURCE_ARGS, {}),
114 )
115
116 # store api object
117 hass.data.setdefault(DOMAIN, api)
118
119 # load calendar platform
120 await hass.helpers.discovery.async_load_platform(
121 "calendar", DOMAIN, {"api": api}, config
122 )
123
124 # initial fetch of all data
125 hass.add_job(api._fetch)
126
127 return True
128
129
130 class WasteCollectionApi:
131 def __init__(
132 self, hass, separator, fetch_time, random_fetch_time_offset, day_switch_time
133 ):
134 self._hass = hass
135 self._source_shells = []
136 self._separator = separator
137 self._fetch_time = fetch_time
138 self._random_fetch_time_offset = random_fetch_time_offset
139 self._day_switch_time = day_switch_time
140
141 # start timer to fetch date once per day
142 async_track_time_change(
143 hass,
144 self._fetch_callback,
145 self._fetch_time.hour,
146 self._fetch_time.minute,
147 self._fetch_time.second,
148 )
149
150 # start timer for day-switch time
151 if self._day_switch_time != self._fetch_time:
152 async_track_time_change(
153 hass,
154 self._update_sensors_callback,
155 self._day_switch_time.hour,
156 self._day_switch_time.minute,
157 self._day_switch_time.second,
158 )
159
160 # add a timer at midnight (if not already there) to update days-to
161 midnight = dt_util.parse_time("00:00")
162 if midnight != self._fetch_time and midnight != self._day_switch_time:
163 async_track_time_change(
164 hass,
165 self._update_sensors_callback,
166 midnight.hour,
167 midnight.minute,
168 midnight.second,
169 )
170
171 @property
172 def separator(self):
173 """Separator string, used to separator waste types."""
174 return self._separator
175
176 @property
177 def fetch_time(self):
178 """When to fetch to data."""
179 return self._fetch_time
180
181 @property
182 def day_switch_time(self):
183 """When to hide entries for today."""
184 return self._day_switch_time
185
186 def add_source_shell(
187 self,
188 source_name,
189 customize,
190 source_args,
191 calendar_title,
192 ):
193 self._source_shells.append(
194 SourceShell.create(
195 source_name=source_name,
196 customize=customize,
197 source_args=source_args,
198 calendar_title=calendar_title,
199 )
200 )
201
202 def _fetch(self, *_):
203 for shell in self._source_shells:
204 shell.fetch()
205
206 self._update_sensors_callback()
207
208 @property
209 def shells(self):
210 return self._source_shells
211
212 def get_shell(self, index):
213 return self._source_shells[index] if index < len(self._source_shells) else None
214
215 @callback
216 def _fetch_callback(self, *_):
217 async_call_later(
218 self._hass,
219 randrange(0, 60 * self._random_fetch_time_offset),
220 self._fetch_now_callback,
221 )
222
223 @callback
224 def _fetch_now_callback(self, *_):
225 self._hass.add_job(self._fetch)
226
227 @callback
228 def _update_sensors_callback(self, *_):
229 dispatcher_send(self._hass, UPDATE_SENSORS_SIGNAL)
230
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/custom_components/waste_collection_schedule/__init__.py b/custom_components/waste_collection_schedule/__init__.py
--- a/custom_components/waste_collection_schedule/__init__.py
+++ b/custom_components/waste_collection_schedule/__init__.py
@@ -123,6 +123,12 @@
# initial fetch of all data
hass.add_job(api._fetch)
+
+ def fetch_data():
+ hass.add_job(api._fetch)
+
+ # Register new Service fetch_data
+ hass.services.async_register(DOMAIN, 'fetch_data', fetch_data)
return True
|
{"golden_diff": "diff --git a/custom_components/waste_collection_schedule/__init__.py b/custom_components/waste_collection_schedule/__init__.py\n--- a/custom_components/waste_collection_schedule/__init__.py\n+++ b/custom_components/waste_collection_schedule/__init__.py\n@@ -123,6 +123,12 @@\n \n # initial fetch of all data\n hass.add_job(api._fetch)\n+ \n+ def fetch_data():\n+ hass.add_job(api._fetch)\n+\n+ # Register new Service fetch_data\n+ hass.services.async_register(DOMAIN, 'fetch_data', fetch_data)\n \n return True\n", "issue": "Please expose service for manual schedule refresh\nAs per my understanding current setup allows refresh of the schedule to happen only once a day at the time configured in `fetch_time`.\r\nThis may cause issues if for some reason the source is not available at the given time, there is an issue with connectivity or a schedule change has been announced via different channels and update needs to happen on-demand.\r\n\r\nPlease expose `waste_collection_schedule.reload` service that would call the same routing that is normally executed at `fetch_time`, but on demand.\n", "before_files": [{"content": "\"\"\"Waste Collection Schedule Component.\"\"\"\nimport logging\nimport site\nfrom pathlib import Path\nfrom random import randrange\n\nimport homeassistant.helpers.config_validation as cv\nimport homeassistant.util.dt as dt_util\nimport voluptuous as vol\nfrom homeassistant.core import HomeAssistant, callback\nfrom homeassistant.helpers.dispatcher import dispatcher_send\n\nfrom .const import DOMAIN, UPDATE_SENSORS_SIGNAL\n\nfrom homeassistant.helpers.event import async_call_later # isort:skip\nfrom homeassistant.helpers.event import async_track_time_change # isort:skip\n\n# add module directory to path\npackage_dir = Path(__file__).resolve().parents[0]\nsite.addsitedir(str(package_dir))\nfrom waste_collection_schedule import Customize, SourceShell # type: ignore # isort:skip # noqa: E402\n\n_LOGGER = logging.getLogger(__name__)\n\nCONF_SOURCES = \"sources\"\nCONF_SOURCE_NAME = \"name\"\nCONF_SOURCE_ARGS = \"args\" # source arguments\nCONF_SOURCE_CALENDAR_TITLE = \"calendar_title\"\nCONF_SEPARATOR = \"separator\"\nCONF_FETCH_TIME = \"fetch_time\"\nCONF_RANDOM_FETCH_TIME_OFFSET = \"random_fetch_time_offset\"\nCONF_DAY_SWITCH_TIME = \"day_switch_time\"\n\nCONF_CUSTOMIZE = \"customize\"\nCONF_TYPE = \"type\"\nCONF_ALIAS = \"alias\"\nCONF_SHOW = \"show\"\nCONF_ICON = \"icon\"\nCONF_PICTURE = \"picture\"\nCONF_USE_DEDICATED_CALENDAR = \"use_dedicated_calendar\"\nCONF_DEDICATED_CALENDAR_TITLE = \"dedicated_calendar_title\"\n\nCUSTOMIZE_CONFIG = vol.Schema(\n {\n vol.Optional(CONF_TYPE): cv.string,\n vol.Optional(CONF_ALIAS): cv.string,\n vol.Optional(CONF_SHOW): cv.boolean,\n vol.Optional(CONF_ICON): cv.icon,\n vol.Optional(CONF_PICTURE): cv.string,\n vol.Optional(CONF_USE_DEDICATED_CALENDAR): cv.boolean,\n vol.Optional(CONF_DEDICATED_CALENDAR_TITLE): cv.string,\n }\n)\n\nSOURCE_CONFIG = vol.Schema(\n {\n vol.Required(CONF_SOURCE_NAME): cv.string,\n vol.Required(CONF_SOURCE_ARGS): dict,\n vol.Optional(CONF_CUSTOMIZE, default=[]): vol.All(\n cv.ensure_list, [CUSTOMIZE_CONFIG]\n ),\n vol.Optional(CONF_SOURCE_CALENDAR_TITLE): cv.string,\n }\n)\n\nCONFIG_SCHEMA = vol.Schema(\n {\n DOMAIN: vol.Schema(\n {\n vol.Required(CONF_SOURCES): vol.All(cv.ensure_list, [SOURCE_CONFIG]),\n vol.Optional(CONF_SEPARATOR, default=\", \"): cv.string,\n vol.Optional(CONF_FETCH_TIME, default=\"01:00\"): cv.time,\n vol.Optional(\n CONF_RANDOM_FETCH_TIME_OFFSET, default=60\n ): cv.positive_int,\n 
vol.Optional(CONF_DAY_SWITCH_TIME, default=\"10:00\"): cv.time,\n }\n )\n },\n extra=vol.ALLOW_EXTRA,\n)\n\n\nasync def async_setup(hass: HomeAssistant, config: dict):\n \"\"\"Set up the component. config contains data from configuration.yaml.\"\"\"\n # create empty api object as singleton\n api = WasteCollectionApi(\n hass,\n separator=config[DOMAIN][CONF_SEPARATOR],\n fetch_time=config[DOMAIN][CONF_FETCH_TIME],\n random_fetch_time_offset=config[DOMAIN][CONF_RANDOM_FETCH_TIME_OFFSET],\n day_switch_time=config[DOMAIN][CONF_DAY_SWITCH_TIME],\n )\n\n # create shells for source(s)\n for source in config[DOMAIN][CONF_SOURCES]:\n # create customize object\n customize = {}\n for c in source.get(CONF_CUSTOMIZE, {}):\n customize[c[CONF_TYPE]] = Customize(\n waste_type=c[CONF_TYPE],\n alias=c.get(CONF_ALIAS),\n show=c.get(CONF_SHOW, True),\n icon=c.get(CONF_ICON),\n picture=c.get(CONF_PICTURE),\n use_dedicated_calendar=c.get(CONF_USE_DEDICATED_CALENDAR, False),\n dedicated_calendar_title=c.get(CONF_DEDICATED_CALENDAR_TITLE, False),\n )\n api.add_source_shell(\n source_name=source[CONF_SOURCE_NAME],\n customize=customize,\n calendar_title=source.get(CONF_SOURCE_CALENDAR_TITLE),\n source_args=source.get(CONF_SOURCE_ARGS, {}),\n )\n\n # store api object\n hass.data.setdefault(DOMAIN, api)\n\n # load calendar platform\n await hass.helpers.discovery.async_load_platform(\n \"calendar\", DOMAIN, {\"api\": api}, config\n )\n\n # initial fetch of all data\n hass.add_job(api._fetch)\n\n return True\n\n\nclass WasteCollectionApi:\n def __init__(\n self, hass, separator, fetch_time, random_fetch_time_offset, day_switch_time\n ):\n self._hass = hass\n self._source_shells = []\n self._separator = separator\n self._fetch_time = fetch_time\n self._random_fetch_time_offset = random_fetch_time_offset\n self._day_switch_time = day_switch_time\n\n # start timer to fetch date once per day\n async_track_time_change(\n hass,\n self._fetch_callback,\n self._fetch_time.hour,\n self._fetch_time.minute,\n self._fetch_time.second,\n )\n\n # start timer for day-switch time\n if self._day_switch_time != self._fetch_time:\n async_track_time_change(\n hass,\n self._update_sensors_callback,\n self._day_switch_time.hour,\n self._day_switch_time.minute,\n self._day_switch_time.second,\n )\n\n # add a timer at midnight (if not already there) to update days-to\n midnight = dt_util.parse_time(\"00:00\")\n if midnight != self._fetch_time and midnight != self._day_switch_time:\n async_track_time_change(\n hass,\n self._update_sensors_callback,\n midnight.hour,\n midnight.minute,\n midnight.second,\n )\n\n @property\n def separator(self):\n \"\"\"Separator string, used to separator waste types.\"\"\"\n return self._separator\n\n @property\n def fetch_time(self):\n \"\"\"When to fetch to data.\"\"\"\n return self._fetch_time\n\n @property\n def day_switch_time(self):\n \"\"\"When to hide entries for today.\"\"\"\n return self._day_switch_time\n\n def add_source_shell(\n self,\n source_name,\n customize,\n source_args,\n calendar_title,\n ):\n self._source_shells.append(\n SourceShell.create(\n source_name=source_name,\n customize=customize,\n source_args=source_args,\n calendar_title=calendar_title,\n )\n )\n\n def _fetch(self, *_):\n for shell in self._source_shells:\n shell.fetch()\n\n self._update_sensors_callback()\n\n @property\n def shells(self):\n return self._source_shells\n\n def get_shell(self, index):\n return self._source_shells[index] if index < len(self._source_shells) else None\n\n @callback\n def _fetch_callback(self, 
*_):\n async_call_later(\n self._hass,\n randrange(0, 60 * self._random_fetch_time_offset),\n self._fetch_now_callback,\n )\n\n @callback\n def _fetch_now_callback(self, *_):\n self._hass.add_job(self._fetch)\n\n @callback\n def _update_sensors_callback(self, *_):\n dispatcher_send(self._hass, UPDATE_SENSORS_SIGNAL)\n", "path": "custom_components/waste_collection_schedule/__init__.py"}], "after_files": [{"content": "\"\"\"Waste Collection Schedule Component.\"\"\"\nimport logging\nimport site\nfrom pathlib import Path\nfrom random import randrange\n\nimport homeassistant.helpers.config_validation as cv\nimport homeassistant.util.dt as dt_util\nimport voluptuous as vol\nfrom homeassistant.core import HomeAssistant, callback\nfrom homeassistant.helpers.dispatcher import dispatcher_send\n\nfrom .const import DOMAIN, UPDATE_SENSORS_SIGNAL\n\nfrom homeassistant.helpers.event import async_call_later # isort:skip\nfrom homeassistant.helpers.event import async_track_time_change # isort:skip\n\n# add module directory to path\npackage_dir = Path(__file__).resolve().parents[0]\nsite.addsitedir(str(package_dir))\nfrom waste_collection_schedule import Customize, SourceShell # type: ignore # isort:skip # noqa: E402\n\n_LOGGER = logging.getLogger(__name__)\n\nCONF_SOURCES = \"sources\"\nCONF_SOURCE_NAME = \"name\"\nCONF_SOURCE_ARGS = \"args\" # source arguments\nCONF_SOURCE_CALENDAR_TITLE = \"calendar_title\"\nCONF_SEPARATOR = \"separator\"\nCONF_FETCH_TIME = \"fetch_time\"\nCONF_RANDOM_FETCH_TIME_OFFSET = \"random_fetch_time_offset\"\nCONF_DAY_SWITCH_TIME = \"day_switch_time\"\n\nCONF_CUSTOMIZE = \"customize\"\nCONF_TYPE = \"type\"\nCONF_ALIAS = \"alias\"\nCONF_SHOW = \"show\"\nCONF_ICON = \"icon\"\nCONF_PICTURE = \"picture\"\nCONF_USE_DEDICATED_CALENDAR = \"use_dedicated_calendar\"\nCONF_DEDICATED_CALENDAR_TITLE = \"dedicated_calendar_title\"\n\nCUSTOMIZE_CONFIG = vol.Schema(\n {\n vol.Optional(CONF_TYPE): cv.string,\n vol.Optional(CONF_ALIAS): cv.string,\n vol.Optional(CONF_SHOW): cv.boolean,\n vol.Optional(CONF_ICON): cv.icon,\n vol.Optional(CONF_PICTURE): cv.string,\n vol.Optional(CONF_USE_DEDICATED_CALENDAR): cv.boolean,\n vol.Optional(CONF_DEDICATED_CALENDAR_TITLE): cv.string,\n }\n)\n\nSOURCE_CONFIG = vol.Schema(\n {\n vol.Required(CONF_SOURCE_NAME): cv.string,\n vol.Required(CONF_SOURCE_ARGS): dict,\n vol.Optional(CONF_CUSTOMIZE, default=[]): vol.All(\n cv.ensure_list, [CUSTOMIZE_CONFIG]\n ),\n vol.Optional(CONF_SOURCE_CALENDAR_TITLE): cv.string,\n }\n)\n\nCONFIG_SCHEMA = vol.Schema(\n {\n DOMAIN: vol.Schema(\n {\n vol.Required(CONF_SOURCES): vol.All(cv.ensure_list, [SOURCE_CONFIG]),\n vol.Optional(CONF_SEPARATOR, default=\", \"): cv.string,\n vol.Optional(CONF_FETCH_TIME, default=\"01:00\"): cv.time,\n vol.Optional(\n CONF_RANDOM_FETCH_TIME_OFFSET, default=60\n ): cv.positive_int,\n vol.Optional(CONF_DAY_SWITCH_TIME, default=\"10:00\"): cv.time,\n }\n )\n },\n extra=vol.ALLOW_EXTRA,\n)\n\n\nasync def async_setup(hass: HomeAssistant, config: dict):\n \"\"\"Set up the component. 
config contains data from configuration.yaml.\"\"\"\n # create empty api object as singleton\n api = WasteCollectionApi(\n hass,\n separator=config[DOMAIN][CONF_SEPARATOR],\n fetch_time=config[DOMAIN][CONF_FETCH_TIME],\n random_fetch_time_offset=config[DOMAIN][CONF_RANDOM_FETCH_TIME_OFFSET],\n day_switch_time=config[DOMAIN][CONF_DAY_SWITCH_TIME],\n )\n\n # create shells for source(s)\n for source in config[DOMAIN][CONF_SOURCES]:\n # create customize object\n customize = {}\n for c in source.get(CONF_CUSTOMIZE, {}):\n customize[c[CONF_TYPE]] = Customize(\n waste_type=c[CONF_TYPE],\n alias=c.get(CONF_ALIAS),\n show=c.get(CONF_SHOW, True),\n icon=c.get(CONF_ICON),\n picture=c.get(CONF_PICTURE),\n use_dedicated_calendar=c.get(CONF_USE_DEDICATED_CALENDAR, False),\n dedicated_calendar_title=c.get(CONF_DEDICATED_CALENDAR_TITLE, False),\n )\n api.add_source_shell(\n source_name=source[CONF_SOURCE_NAME],\n customize=customize,\n calendar_title=source.get(CONF_SOURCE_CALENDAR_TITLE),\n source_args=source.get(CONF_SOURCE_ARGS, {}),\n )\n\n # store api object\n hass.data.setdefault(DOMAIN, api)\n\n # load calendar platform\n await hass.helpers.discovery.async_load_platform(\n \"calendar\", DOMAIN, {\"api\": api}, config\n )\n\n # initial fetch of all data\n hass.add_job(api._fetch)\n \n def fetch_data():\n hass.add_job(api._fetch)\n\n # Register new Service fetch_data\n hass.services.async_register(DOMAIN, 'fetch_data', fetch_data)\n\n return True\n\n\nclass WasteCollectionApi:\n def __init__(\n self, hass, separator, fetch_time, random_fetch_time_offset, day_switch_time\n ):\n self._hass = hass\n self._source_shells = []\n self._separator = separator\n self._fetch_time = fetch_time\n self._random_fetch_time_offset = random_fetch_time_offset\n self._day_switch_time = day_switch_time\n\n # start timer to fetch date once per day\n async_track_time_change(\n hass,\n self._fetch_callback,\n self._fetch_time.hour,\n self._fetch_time.minute,\n self._fetch_time.second,\n )\n\n # start timer for day-switch time\n if self._day_switch_time != self._fetch_time:\n async_track_time_change(\n hass,\n self._update_sensors_callback,\n self._day_switch_time.hour,\n self._day_switch_time.minute,\n self._day_switch_time.second,\n )\n\n # add a timer at midnight (if not already there) to update days-to\n midnight = dt_util.parse_time(\"00:00\")\n if midnight != self._fetch_time and midnight != self._day_switch_time:\n async_track_time_change(\n hass,\n self._update_sensors_callback,\n midnight.hour,\n midnight.minute,\n midnight.second,\n )\n\n @property\n def separator(self):\n \"\"\"Separator string, used to separator waste types.\"\"\"\n return self._separator\n\n @property\n def fetch_time(self):\n \"\"\"When to fetch to data.\"\"\"\n return self._fetch_time\n\n @property\n def day_switch_time(self):\n \"\"\"When to hide entries for today.\"\"\"\n return self._day_switch_time\n\n def add_source_shell(\n self,\n source_name,\n customize,\n source_args,\n calendar_title,\n ):\n self._source_shells.append(\n SourceShell.create(\n source_name=source_name,\n customize=customize,\n source_args=source_args,\n calendar_title=calendar_title,\n )\n )\n\n def _fetch(self, *_):\n for shell in self._source_shells:\n shell.fetch()\n\n self._update_sensors_callback()\n\n @property\n def shells(self):\n return self._source_shells\n\n def get_shell(self, index):\n return self._source_shells[index] if index < len(self._source_shells) else None\n\n @callback\n def _fetch_callback(self, *_):\n async_call_later(\n self._hass,\n 
randrange(0, 60 * self._random_fetch_time_offset),\n self._fetch_now_callback,\n )\n\n @callback\n def _fetch_now_callback(self, *_):\n self._hass.add_job(self._fetch)\n\n @callback\n def _update_sensors_callback(self, *_):\n dispatcher_send(self._hass, UPDATE_SENSORS_SIGNAL)\n", "path": "custom_components/waste_collection_schedule/__init__.py"}]}
| 2,552 | 133 |
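For context on the Home Assistant API used in this row's patch, here is a hedged sketch of the same registration pattern factored into a standalone helper. The helper name, the `ServiceCall` parameter, and the type hints are illustrative additions; the actual patch registers a zero-argument `fetch_data` callback directly inside `async_setup`. Assuming `DOMAIN` resolves to `waste_collection_schedule`, the registered service can then be invoked on demand as `waste_collection_schedule.fetch_data` from Developer Tools, an automation, or a script.

```python
from homeassistant.core import HomeAssistant, ServiceCall

DOMAIN = "waste_collection_schedule"  # assumed value of the component's DOMAIN constant


def register_manual_refresh(hass: HomeAssistant, api) -> None:
    """Expose an on-demand fetch service for the waste collection component."""

    def handle_fetch_data(call: ServiceCall) -> None:
        # Reuse the same fetch routine that normally runs at `fetch_time`.
        hass.add_job(api._fetch)

    hass.services.async_register(DOMAIN, "fetch_data", handle_fetch_data)
```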
gh_patches_debug_11218
|
rasdani/github-patches
|
git_diff
|
openstates__openstates-scrapers-2984
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FL failing since at least 2019-06-03
FL has been failing since 2019-06-03
Based on automated runs it appears that FL has not run successfully in 2 days (2019-06-03).
```
04:01:17 CRITICAL pupa: Session(s) 2009B, 2003C, 2003B, 2002E, 2004A, 2012 Org., 2007D, 1998 Org, 2000A (Jan.), 2007C, 2007A, 2000A (Dec.), 2006 Org., 2000 Org., 2001C, 2005B, 2002D, 2008 Org., 2018 Org., 2003A, 2010 Org., 2004 Org., 2003D, 2007B, 2009A, 2001B, 2014 Org., 2002 Org., 2016 Org., 2010C, 2003E were reported by Florida.get_session_list() but were not found in Florida.legislative_sessions or Florida.ignored_scraped_sessions.
loaded Open States pupa settings...
fl (scrape, import)
bills: {}
```
Visit http://bobsled.openstates.org for more info.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/fl/__init__.py`
Content:
```
1 # encoding=utf-8
2 import logging
3 from pupa.scrape import Jurisdiction, Organization
4 from .bills import FlBillScraper
5 from .people import FlPersonScraper
6 # from .committees import FlCommitteeScraper
7 # from .events import FlEventScraper
8 from openstates.utils import url_xpath
9
10 logging.getLogger(__name__).addHandler(logging.NullHandler())
11
12
13 class Florida(Jurisdiction):
14 division_id = "ocd-division/country:us/state:fl"
15 classification = "government"
16 name = "Florida"
17 url = "http://myflorida.com"
18
19 scrapers = {
20 "bills": FlBillScraper,
21 "people": FlPersonScraper,
22 # "committees": FlCommitteeScraper,
23 # "events": FlEventScraper,
24 }
25 legislative_sessions = [
26 {'name': '2011 Regular Session', 'identifier': '2011',
27 'classification': 'primary'},
28 {'name': '2012 Regular Session', 'identifier': '2012',
29 'classification': 'primary'},
30 {'name': '2012 Extraordinary Apportionment Session', 'identifier': '2012B',
31 'classification': 'special'},
32 {'name': '2013 Regular Session', 'identifier': '2013',
33 'classification': 'primary'},
34 {'name': '2014 Regular Session', 'identifier': '2014',
35 'classification': 'primary'},
36 {'name': '2014 Special Session A',
37 'identifier': '2014A', 'classification': 'special'},
38 # data for the below
39 {'name': '2015 Regular Session', 'identifier': '2015',
40 'classification': 'primary'},
41 {'name': '2015 Special Session A',
42 'identifier': '2015A', 'classification': 'special'},
43 {'name': '2015 Special Session B',
44 'identifier': '2015B', 'classification': 'special'},
45 {'name': '2015 Special Session C',
46 'identifier': '2015C', 'classification': 'special'},
47 {'name': '2016 Regular Session', 'identifier': '2016',
48 'classification': 'primary'},
49 {'name': '2017 Regular Session', 'identifier': '2017', 'classification': 'primary',
50 'start_date': '2017-03-07', 'end_date': '2017-05-05'},
51 {'name': '2017 Special Session A',
52 'identifier': '2017A', 'classification': 'special'},
53 {'name': '2018 Regular Session', 'identifier': '2018', 'classification': 'primary',
54 'start_date': '2018-01-08', 'end_date': '2018-03-09'},
55 {'name': '2019 Regular Session', 'identifier': '2019', 'classification': 'primary',
56 'start_date': '2019-03-05', 'end_date': '2019-05-03'},
57 ]
58 ignored_scraped_sessions = [
59 *(str(each) for each in range(1997, 2010)),
60 '2010', '2010A', '2010O',
61 '2012O',
62 '2014O',
63 '2016O',
64 '2018O',
65 ]
66
67 def get_organizations(self):
68 legis = Organization(name="Florida Legislature",
69 classification="legislature")
70
71 upper = Organization(
72 'Florida Senate', classification='upper', parent_id=legis._id)
73 lower = Organization('Florida House of Representatives', classification='lower',
74 parent_id=legis._id)
75
76 yield legis
77 yield upper
78 yield lower
79
80 def get_session_list(self):
81 return url_xpath('http://flsenate.gov', '//option/text()')
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/openstates/fl/__init__.py b/openstates/fl/__init__.py
--- a/openstates/fl/__init__.py
+++ b/openstates/fl/__init__.py
@@ -62,6 +62,37 @@
'2014O',
'2016O',
'2018O',
+ '2018 Org.',
+ '2016 Org.',
+ '2014 Org.',
+ '2012 Org.',
+ '2010 Org.',
+ '2010C',
+ '2009B',
+ '2009A',
+ '2008 Org.',
+ '2007D',
+ '2007C',
+ '2007B',
+ '2007A',
+ '2006 Org.',
+ '2005B',
+ '2004A',
+ '2004 Org.',
+ '2003E',
+ '2003D',
+ '2003C',
+ '2003B',
+ '2003A',
+ '2002E',
+ '2002D',
+ '2002 Org.',
+ '2001C',
+ '2001B',
+ '2000A (Jan.)',
+ '2000A (Dec.)',
+ '2000 Org.',
+ '1998 Org',
 ]
 
 def get_organizations(self):
|
{"golden_diff": "diff --git a/openstates/fl/__init__.py b/openstates/fl/__init__.py\n--- a/openstates/fl/__init__.py\n+++ b/openstates/fl/__init__.py\n@@ -62,6 +62,37 @@\n '2014O',\n '2016O',\n '2018O',\n+ '2018 Org.',\n+ '2016 Org.',\n+ '2014 Org.',\n+ '2012 Org.',\n+ '2010 Org.',\n+ '2010C',\n+ '2009B',\n+ '2009A',\n+ '2008 Org.',\n+ '2007D',\n+ '2007C',\n+ '2007B',\n+ '2007A',\n+ '2006 Org.',\n+ '2005B',\n+ '2004A',\n+ '2004 Org.',\n+ '2003E',\n+ '2003D',\n+ '2003C',\n+ '2003B',\n+ '2003A',\n+ '2002E',\n+ '2002D',\n+ '2002 Org.',\n+ '2001C',\n+ '2001B',\n+ '2000A (Jan.)',\n+ '2000A (Dec.)',\n+ '2000 Org.',\n+ '1998 Org',\n ]\n \n def get_organizations(self):\n", "issue": "FL failing since at least 2019-06-03\nFL has been failing since 2019-06-03\n\nBased on automated runs it appears that FL has not run successfully in 2 days (2019-06-03).\n\n\n```\n 04:01:17 CRITICAL pupa: Session(s) 2009B, 2003C, 2003B, 2002E, 2004A, 2012 Org., 2007D, 1998 Org, 2000A (Jan.), 2007C, 2007A, 2000A (Dec.), 2006 Org., 2000 Org., 2001C, 2005B, 2002D, 2008 Org., 2018 Org., 2003A, 2010 Org., 2004 Org., 2003D, 2007B, 2009A, 2001B, 2014 Org., 2002 Org., 2016 Org., 2010C, 2003E were reported by Florida.get_session_list() but were not found in Florida.legislative_sessions or Florida.ignored_scraped_sessions.\nloaded Open States pupa settings...\nfl (scrape, import)\n bills: {}\n```\n\nVisit http://bobsled.openstates.org for more info.\n\n", "before_files": [{"content": "# encoding=utf-8\nimport logging\nfrom pupa.scrape import Jurisdiction, Organization\nfrom .bills import FlBillScraper\nfrom .people import FlPersonScraper\n# from .committees import FlCommitteeScraper\n# from .events import FlEventScraper\nfrom openstates.utils import url_xpath\n\nlogging.getLogger(__name__).addHandler(logging.NullHandler())\n\n\nclass Florida(Jurisdiction):\n division_id = \"ocd-division/country:us/state:fl\"\n classification = \"government\"\n name = \"Florida\"\n url = \"http://myflorida.com\"\n\n scrapers = {\n \"bills\": FlBillScraper,\n \"people\": FlPersonScraper,\n # \"committees\": FlCommitteeScraper,\n # \"events\": FlEventScraper,\n }\n legislative_sessions = [\n {'name': '2011 Regular Session', 'identifier': '2011',\n 'classification': 'primary'},\n {'name': '2012 Regular Session', 'identifier': '2012',\n 'classification': 'primary'},\n {'name': '2012 Extraordinary Apportionment Session', 'identifier': '2012B',\n 'classification': 'special'},\n {'name': '2013 Regular Session', 'identifier': '2013',\n 'classification': 'primary'},\n {'name': '2014 Regular Session', 'identifier': '2014',\n 'classification': 'primary'},\n {'name': '2014 Special Session A',\n 'identifier': '2014A', 'classification': 'special'},\n # data for the below\n {'name': '2015 Regular Session', 'identifier': '2015',\n 'classification': 'primary'},\n {'name': '2015 Special Session A',\n 'identifier': '2015A', 'classification': 'special'},\n {'name': '2015 Special Session B',\n 'identifier': '2015B', 'classification': 'special'},\n {'name': '2015 Special Session C',\n 'identifier': '2015C', 'classification': 'special'},\n {'name': '2016 Regular Session', 'identifier': '2016',\n 'classification': 'primary'},\n {'name': '2017 Regular Session', 'identifier': '2017', 'classification': 'primary',\n 'start_date': '2017-03-07', 'end_date': '2017-05-05'},\n {'name': '2017 Special Session A',\n 'identifier': '2017A', 'classification': 'special'},\n {'name': '2018 Regular Session', 'identifier': '2018', 'classification': 'primary',\n 'start_date': '2018-01-08', 
'end_date': '2018-03-09'},\n {'name': '2019 Regular Session', 'identifier': '2019', 'classification': 'primary',\n 'start_date': '2019-03-05', 'end_date': '2019-05-03'},\n ]\n ignored_scraped_sessions = [\n *(str(each) for each in range(1997, 2010)),\n '2010', '2010A', '2010O',\n '2012O',\n '2014O',\n '2016O',\n '2018O',\n ]\n\n def get_organizations(self):\n legis = Organization(name=\"Florida Legislature\",\n classification=\"legislature\")\n\n upper = Organization(\n 'Florida Senate', classification='upper', parent_id=legis._id)\n lower = Organization('Florida House of Representatives', classification='lower',\n parent_id=legis._id)\n\n yield legis\n yield upper\n yield lower\n\n def get_session_list(self):\n return url_xpath('http://flsenate.gov', '//option/text()')\n", "path": "openstates/fl/__init__.py"}], "after_files": [{"content": "# encoding=utf-8\nimport logging\nfrom pupa.scrape import Jurisdiction, Organization\nfrom .bills import FlBillScraper\nfrom .people import FlPersonScraper\n# from .committees import FlCommitteeScraper\n# from .events import FlEventScraper\nfrom openstates.utils import url_xpath\n\nlogging.getLogger(__name__).addHandler(logging.NullHandler())\n\n\nclass Florida(Jurisdiction):\n division_id = \"ocd-division/country:us/state:fl\"\n classification = \"government\"\n name = \"Florida\"\n url = \"http://myflorida.com\"\n\n scrapers = {\n \"bills\": FlBillScraper,\n \"people\": FlPersonScraper,\n # \"committees\": FlCommitteeScraper,\n # \"events\": FlEventScraper,\n }\n legislative_sessions = [\n {'name': '2011 Regular Session', 'identifier': '2011',\n 'classification': 'primary'},\n {'name': '2012 Regular Session', 'identifier': '2012',\n 'classification': 'primary'},\n {'name': '2012 Extraordinary Apportionment Session', 'identifier': '2012B',\n 'classification': 'special'},\n {'name': '2013 Regular Session', 'identifier': '2013',\n 'classification': 'primary'},\n {'name': '2014 Regular Session', 'identifier': '2014',\n 'classification': 'primary'},\n {'name': '2014 Special Session A',\n 'identifier': '2014A', 'classification': 'special'},\n # data for the below\n {'name': '2015 Regular Session', 'identifier': '2015',\n 'classification': 'primary'},\n {'name': '2015 Special Session A',\n 'identifier': '2015A', 'classification': 'special'},\n {'name': '2015 Special Session B',\n 'identifier': '2015B', 'classification': 'special'},\n {'name': '2015 Special Session C',\n 'identifier': '2015C', 'classification': 'special'},\n {'name': '2016 Regular Session', 'identifier': '2016',\n 'classification': 'primary'},\n {'name': '2017 Regular Session', 'identifier': '2017', 'classification': 'primary',\n 'start_date': '2017-03-07', 'end_date': '2017-05-05'},\n {'name': '2017 Special Session A',\n 'identifier': '2017A', 'classification': 'special'},\n {'name': '2018 Regular Session', 'identifier': '2018', 'classification': 'primary',\n 'start_date': '2018-01-08', 'end_date': '2018-03-09'},\n {'name': '2019 Regular Session', 'identifier': '2019', 'classification': 'primary',\n 'start_date': '2019-03-05', 'end_date': '2019-05-03'},\n ]\n ignored_scraped_sessions = [\n *(str(each) for each in range(1997, 2010)),\n '2010', '2010A', '2010O',\n '2012O',\n '2014O',\n '2016O',\n '2018O',\n '2018 Org.',\n '2016 Org.',\n '2014 Org.',\n '2012 Org.',\n '2010 Org.',\n '2010C',\n '2009B',\n '2009A',\n '2008 Org.',\n '2007D',\n '2007C',\n '2007B',\n '2007A',\n '2006 Org.',\n '2005B',\n '2004A',\n '2004 Org.',\n '2003E',\n '2003D',\n '2003C',\n '2003B',\n '2003A',\n '2002E',\n 
'2002D',\n '2002 Org.',\n '2001C',\n '2001B',\n '2000A (Jan.)',\n '2000A (Dec.)',\n '2000 Org.',\n '1998 Org',\n ]\n\n def get_organizations(self):\n legis = Organization(name=\"Florida Legislature\",\n classification=\"legislature\")\n\n upper = Organization(\n 'Florida Senate', classification='upper', parent_id=legis._id)\n lower = Organization('Florida House of Representatives', classification='lower',\n parent_id=legis._id)\n\n yield legis\n yield upper\n yield lower\n\n def get_session_list(self):\n return url_xpath('http://flsenate.gov', '//option/text()')\n", "path": "openstates/fl/__init__.py"}]}
| 1,694 | 372 |
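
The FL fix in the record above works by extending `ignored_scraped_sessions` until every identifier returned by `get_session_list()` is accounted for. The snippet below is a minimal, self-contained sketch of that bookkeeping, not the real pupa/openstates check; the function and variable names (`unaccounted_sessions`, `scraped`, `ignored`) are hypothetical.

```python
# Minimal sketch of the consistency check behind the CRITICAL message in the FL
# issue above. Illustrative only; the real check lives in pupa.
def unaccounted_sessions(scraped, legislative_sessions, ignored):
    known = {s["identifier"] for s in legislative_sessions} | set(ignored)
    return sorted(set(scraped) - known)


scraped = ["2019", "2018O", "2012 Org.", "1998 Org"]   # as if from get_session_list()
legislative_sessions = [{"identifier": "2018"}, {"identifier": "2019"}]
ignored = ["2018O"]                                    # ignored_scraped_sessions

missing = unaccounted_sessions(scraped, legislative_sessions, ignored)
if missing:
    print(f"Session(s) {', '.join(missing)} were reported but not found")
# -> Session(s) 1998 Org, 2012 Org. were reported but not found
```

Adding the organizational and special-session identifiers to `ignored_scraped_sessions`, as the golden diff does, empties that set difference and clears the failure.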
gh_patches_debug_7337
|
rasdani/github-patches
|
git_diff
|
frappe__frappe-15233
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot search for keywords via Global Search
Steps:
1. Enter any keyword in global search
2. Hit enter
3. Instead of returning relevant records, system throws error message for Relevant Doctype

```
Traceback (most recent call last):
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/app.py", line 66, in application
response = frappe.api.handle()
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/api.py", line 54, in handle
return frappe.handler.handle()
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/handler.py", line 31, in handle
data = execute_cmd(cmd)
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/handler.py", line 67, in execute_cmd
return frappe.call(method, **frappe.form_dict)
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/__init__.py", line 1213, in call
return fn(*args, **newargs)
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/utils/global_search.py", line 422, in search
allowed_doctypes = get_doctypes_for_global_search()
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/desk/doctype/global_search_settings/global_search_settings.py", line 39, in get_doctypes_for_global_search
return frappe.cache().hget("global_search", "search_priorities", get_from_db)
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/utils/redis_wrapper.py", line 194, in hget
value = generator()
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/desk/doctype/global_search_settings/global_search_settings.py", line 36, in get_from_db
doctypes = frappe.get_list("Global Search DocType", fields=["document_type"], order_by="idx ASC")
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/__init__.py", line 1446, in get_list
return frappe.model.db_query.DatabaseQuery(doctype).execute(*args, **kwargs)
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/model/db_query.py", line 40, in execute
not frappe.has_permission(self.doctype, "select", user=user, parent_doctype=parent_doctype) and \
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/__init__.py", line 743, in has_permission
raise_exception=throw, parent_doctype=parent_doctype)
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/permissions.py", line 24, in inner
result = func(*args, **kwargs)
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/permissions.py", line 55, in has_permission
user, raise_exception, parent_doctype)
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/permissions.py", line 585, in has_child_table_permission
), title=_("Parent DocType Required"))
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/__init__.py", line 438, in throw
msgprint(msg, raise_exception=exc, title=title, indicator='red', is_minimizable=is_minimizable, wide=wide, as_list=as_list)
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/__init__.py", line 417, in msgprint
_raise_exception()
File "/home/frappe/frappe-io-bench/apps/frappe/frappe/__init__.py", line 371, in _raise_exception
raise raise_exception(msg)
frappe.exceptions.ValidationError: Please specify a valid parent DocType for <strong>Global Search DocType</strong>
```
ERPNext: v13.x.x-develop () (develop)
Frappe Framework: v14.x.x-develop () (develop)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/desk/doctype/global_search_settings/global_search_settings.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2019, Frappe Technologies and contributors
3 # License: MIT. See LICENSE
4
5 import frappe
6 from frappe.model.document import Document
7 from frappe import _
8
9 class GlobalSearchSettings(Document):
10
11 def validate(self):
12 dts, core_dts, repeated_dts = [], [], []
13
14 for dt in self.allowed_in_global_search:
15 if dt.document_type in dts:
16 repeated_dts.append(dt.document_type)
17
18 if frappe.get_meta(dt.document_type).module == "Core":
19 core_dts.append(dt.document_type)
20
21 dts.append(dt.document_type)
22
23 if core_dts:
24 core_dts = ", ".join(frappe.bold(dt) for dt in core_dts)
25 frappe.throw(_("Core Modules {0} cannot be searched in Global Search.").format(core_dts))
26
27 if repeated_dts:
28 repeated_dts = (", ".join([frappe.bold(dt) for dt in repeated_dts]))
29 frappe.throw(_("Document Type {0} has been repeated.").format(repeated_dts))
30
31 # reset cache
32 frappe.cache().hdel('global_search', 'search_priorities')
33
34 def get_doctypes_for_global_search():
35 def get_from_db():
36 doctypes = frappe.get_list("Global Search DocType", fields=["document_type"], order_by="idx ASC")
37 return [d.document_type for d in doctypes] or []
38
39 return frappe.cache().hget("global_search", "search_priorities", get_from_db)
40
41
42 @frappe.whitelist()
43 def reset_global_search_settings_doctypes():
44 update_global_search_doctypes()
45
46 def update_global_search_doctypes():
47 global_search_doctypes = []
48 show_message(1, _("Fetching default Global Search documents."))
49
50 installed_apps = [app for app in frappe.get_installed_apps() if app]
51 active_domains = [domain for domain in frappe.get_active_domains() if domain]
52 active_domains.append("Default")
53
54 for app in installed_apps:
55 search_doctypes = frappe.get_hooks(hook="global_search_doctypes", app_name=app)
56 if not search_doctypes:
57 continue
58
59 for domain in active_domains:
60 if search_doctypes.get(domain):
61 global_search_doctypes.extend(search_doctypes.get(domain))
62
63 doctype_list = {dt.name for dt in frappe.get_all("DocType")}
64 allowed_in_global_search = []
65
66 for dt in global_search_doctypes:
67 if dt.get("index") is not None:
68 allowed_in_global_search.insert(dt.get("index"), dt.get("doctype"))
69 continue
70
71 allowed_in_global_search.append(dt.get("doctype"))
72
73 show_message(2, _("Setting up Global Search documents."))
74 global_search_settings = frappe.get_single("Global Search Settings")
75 global_search_settings.allowed_in_global_search = []
76 for dt in allowed_in_global_search:
77 if dt not in doctype_list:
78 continue
79
80 global_search_settings.append("allowed_in_global_search", {
81 "document_type": dt
82 })
83 global_search_settings.save(ignore_permissions=True)
84 show_message(3, "Global Search Documents have been reset.")
85
86 def show_message(progress, msg):
87 frappe.publish_realtime('global_search_settings', {"progress":progress, "total":3, "msg": msg}, user=frappe.session.user)
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/frappe/desk/doctype/global_search_settings/global_search_settings.py b/frappe/desk/doctype/global_search_settings/global_search_settings.py
--- a/frappe/desk/doctype/global_search_settings/global_search_settings.py
+++ b/frappe/desk/doctype/global_search_settings/global_search_settings.py
@@ -33,7 +33,7 @@
 
 def get_doctypes_for_global_search():
 	def get_from_db():
-		doctypes = frappe.get_list("Global Search DocType", fields=["document_type"], order_by="idx ASC")
+		doctypes = frappe.get_all("Global Search DocType", fields=["document_type"], order_by="idx ASC")
 		return [d.document_type for d in doctypes] or []
 
 	return frappe.cache().hget("global_search", "search_priorities", get_from_db)
|
{"golden_diff": "diff --git a/frappe/desk/doctype/global_search_settings/global_search_settings.py b/frappe/desk/doctype/global_search_settings/global_search_settings.py\n--- a/frappe/desk/doctype/global_search_settings/global_search_settings.py\n+++ b/frappe/desk/doctype/global_search_settings/global_search_settings.py\n@@ -33,7 +33,7 @@\n \n def get_doctypes_for_global_search():\n \tdef get_from_db():\n-\t\tdoctypes = frappe.get_list(\"Global Search DocType\", fields=[\"document_type\"], order_by=\"idx ASC\")\n+\t\tdoctypes = frappe.get_all(\"Global Search DocType\", fields=[\"document_type\"], order_by=\"idx ASC\")\n \t\treturn [d.document_type for d in doctypes] or []\n \n \treturn frappe.cache().hget(\"global_search\", \"search_priorities\", get_from_db)\n", "issue": "Cannot search for keywords via Global Search\nSteps:\r\n\r\n1. Enter any keyword in global search\r\n2. Hit enter\r\n3. Instead of returning relevant records, system throws error message for Relevant Doctype\r\n\r\n\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/app.py\", line 66, in application\r\n response = frappe.api.handle()\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/api.py\", line 54, in handle\r\n return frappe.handler.handle()\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/handler.py\", line 31, in handle\r\n data = execute_cmd(cmd)\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/handler.py\", line 67, in execute_cmd\r\n return frappe.call(method, **frappe.form_dict)\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/__init__.py\", line 1213, in call\r\n return fn(*args, **newargs)\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/utils/global_search.py\", line 422, in search\r\n allowed_doctypes = get_doctypes_for_global_search()\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/desk/doctype/global_search_settings/global_search_settings.py\", line 39, in get_doctypes_for_global_search\r\n return frappe.cache().hget(\"global_search\", \"search_priorities\", get_from_db)\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/utils/redis_wrapper.py\", line 194, in hget\r\n value = generator()\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/desk/doctype/global_search_settings/global_search_settings.py\", line 36, in get_from_db\r\n doctypes = frappe.get_list(\"Global Search DocType\", fields=[\"document_type\"], order_by=\"idx ASC\")\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/__init__.py\", line 1446, in get_list\r\n return frappe.model.db_query.DatabaseQuery(doctype).execute(*args, **kwargs)\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/model/db_query.py\", line 40, in execute\r\n not frappe.has_permission(self.doctype, \"select\", user=user, parent_doctype=parent_doctype) and \\\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/__init__.py\", line 743, in has_permission\r\n raise_exception=throw, parent_doctype=parent_doctype)\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/permissions.py\", line 24, in inner\r\n result = func(*args, **kwargs)\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/permissions.py\", line 55, in has_permission\r\n user, raise_exception, parent_doctype)\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/permissions.py\", line 585, in has_child_table_permission\r\n ), title=_(\"Parent DocType Required\"))\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/__init__.py\", line 
438, in throw\r\n msgprint(msg, raise_exception=exc, title=title, indicator='red', is_minimizable=is_minimizable, wide=wide, as_list=as_list)\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/__init__.py\", line 417, in msgprint\r\n _raise_exception()\r\n File \"/home/frappe/frappe-io-bench/apps/frappe/frappe/__init__.py\", line 371, in _raise_exception\r\n raise raise_exception(msg)\r\nfrappe.exceptions.ValidationError: Please specify a valid parent DocType for <strong>Global Search DocType</strong>\r\n```\r\n\r\nERPNext: v13.x.x-develop () (develop)\r\n\r\nFrappe Framework: v14.x.x-develop () (develop)\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2019, Frappe Technologies and contributors\n# License: MIT. See LICENSE\n\nimport frappe\nfrom frappe.model.document import Document\nfrom frappe import _\n\nclass GlobalSearchSettings(Document):\n\n\tdef validate(self):\n\t\tdts, core_dts, repeated_dts = [], [], []\n\n\t\tfor dt in self.allowed_in_global_search:\n\t\t\tif dt.document_type in dts:\n\t\t\t\trepeated_dts.append(dt.document_type)\n\n\t\t\tif frappe.get_meta(dt.document_type).module == \"Core\":\n\t\t\t\tcore_dts.append(dt.document_type)\n\n\t\t\tdts.append(dt.document_type)\n\n\t\tif core_dts:\n\t\t\tcore_dts = \", \".join(frappe.bold(dt) for dt in core_dts)\n\t\t\tfrappe.throw(_(\"Core Modules {0} cannot be searched in Global Search.\").format(core_dts))\n\n\t\tif repeated_dts:\n\t\t\trepeated_dts = (\", \".join([frappe.bold(dt) for dt in repeated_dts]))\n\t\t\tfrappe.throw(_(\"Document Type {0} has been repeated.\").format(repeated_dts))\n\n\t\t# reset cache\n\t\tfrappe.cache().hdel('global_search', 'search_priorities')\n\ndef get_doctypes_for_global_search():\n\tdef get_from_db():\n\t\tdoctypes = frappe.get_list(\"Global Search DocType\", fields=[\"document_type\"], order_by=\"idx ASC\")\n\t\treturn [d.document_type for d in doctypes] or []\n\n\treturn frappe.cache().hget(\"global_search\", \"search_priorities\", get_from_db)\n\n\[email protected]()\ndef reset_global_search_settings_doctypes():\n\tupdate_global_search_doctypes()\n\ndef update_global_search_doctypes():\n\tglobal_search_doctypes = []\n\tshow_message(1, _(\"Fetching default Global Search documents.\"))\n\n\tinstalled_apps = [app for app in frappe.get_installed_apps() if app]\n\tactive_domains = [domain for domain in frappe.get_active_domains() if domain]\n\tactive_domains.append(\"Default\")\n\n\tfor app in installed_apps:\n\t\tsearch_doctypes = frappe.get_hooks(hook=\"global_search_doctypes\", app_name=app)\n\t\tif not search_doctypes:\n\t\t\tcontinue\n\n\t\tfor domain in active_domains:\n\t\t\tif search_doctypes.get(domain):\n\t\t\t\tglobal_search_doctypes.extend(search_doctypes.get(domain))\n\n\tdoctype_list = {dt.name for dt in frappe.get_all(\"DocType\")}\n\tallowed_in_global_search = []\n\n\tfor dt in global_search_doctypes:\n\t\tif dt.get(\"index\") is not None:\n\t\t\tallowed_in_global_search.insert(dt.get(\"index\"), dt.get(\"doctype\"))\n\t\t\tcontinue\n\n\t\tallowed_in_global_search.append(dt.get(\"doctype\"))\n\n\tshow_message(2, _(\"Setting up Global Search documents.\"))\n\tglobal_search_settings = frappe.get_single(\"Global Search Settings\")\n\tglobal_search_settings.allowed_in_global_search = []\n\tfor dt in allowed_in_global_search:\n\t\tif dt not in doctype_list:\n\t\t\tcontinue\n\n\t\tglobal_search_settings.append(\"allowed_in_global_search\", {\n\t\t\t\"document_type\": 
dt\n\t\t})\n\tglobal_search_settings.save(ignore_permissions=True)\n\tshow_message(3, \"Global Search Documents have been reset.\")\n\ndef show_message(progress, msg):\n\tfrappe.publish_realtime('global_search_settings', {\"progress\":progress, \"total\":3, \"msg\": msg}, user=frappe.session.user)\n", "path": "frappe/desk/doctype/global_search_settings/global_search_settings.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2019, Frappe Technologies and contributors\n# License: MIT. See LICENSE\n\nimport frappe\nfrom frappe.model.document import Document\nfrom frappe import _\n\nclass GlobalSearchSettings(Document):\n\n\tdef validate(self):\n\t\tdts, core_dts, repeated_dts = [], [], []\n\n\t\tfor dt in self.allowed_in_global_search:\n\t\t\tif dt.document_type in dts:\n\t\t\t\trepeated_dts.append(dt.document_type)\n\n\t\t\tif frappe.get_meta(dt.document_type).module == \"Core\":\n\t\t\t\tcore_dts.append(dt.document_type)\n\n\t\t\tdts.append(dt.document_type)\n\n\t\tif core_dts:\n\t\t\tcore_dts = \", \".join(frappe.bold(dt) for dt in core_dts)\n\t\t\tfrappe.throw(_(\"Core Modules {0} cannot be searched in Global Search.\").format(core_dts))\n\n\t\tif repeated_dts:\n\t\t\trepeated_dts = (\", \".join([frappe.bold(dt) for dt in repeated_dts]))\n\t\t\tfrappe.throw(_(\"Document Type {0} has been repeated.\").format(repeated_dts))\n\n\t\t# reset cache\n\t\tfrappe.cache().hdel('global_search', 'search_priorities')\n\ndef get_doctypes_for_global_search():\n\tdef get_from_db():\n\t\tdoctypes = frappe.get_all(\"Global Search DocType\", fields=[\"document_type\"], order_by=\"idx ASC\")\n\t\treturn [d.document_type for d in doctypes] or []\n\n\treturn frappe.cache().hget(\"global_search\", \"search_priorities\", get_from_db)\n\n\[email protected]()\ndef reset_global_search_settings_doctypes():\n\tupdate_global_search_doctypes()\n\ndef update_global_search_doctypes():\n\tglobal_search_doctypes = []\n\tshow_message(1, _(\"Fetching default Global Search documents.\"))\n\n\tinstalled_apps = [app for app in frappe.get_installed_apps() if app]\n\tactive_domains = [domain for domain in frappe.get_active_domains() if domain]\n\tactive_domains.append(\"Default\")\n\n\tfor app in installed_apps:\n\t\tsearch_doctypes = frappe.get_hooks(hook=\"global_search_doctypes\", app_name=app)\n\t\tif not search_doctypes:\n\t\t\tcontinue\n\n\t\tfor domain in active_domains:\n\t\t\tif search_doctypes.get(domain):\n\t\t\t\tglobal_search_doctypes.extend(search_doctypes.get(domain))\n\n\tdoctype_list = {dt.name for dt in frappe.get_all(\"DocType\")}\n\tallowed_in_global_search = []\n\n\tfor dt in global_search_doctypes:\n\t\tif dt.get(\"index\") is not None:\n\t\t\tallowed_in_global_search.insert(dt.get(\"index\"), dt.get(\"doctype\"))\n\t\t\tcontinue\n\n\t\tallowed_in_global_search.append(dt.get(\"doctype\"))\n\n\tshow_message(2, _(\"Setting up Global Search documents.\"))\n\tglobal_search_settings = frappe.get_single(\"Global Search Settings\")\n\tglobal_search_settings.allowed_in_global_search = []\n\tfor dt in allowed_in_global_search:\n\t\tif dt not in doctype_list:\n\t\t\tcontinue\n\n\t\tglobal_search_settings.append(\"allowed_in_global_search\", {\n\t\t\t\"document_type\": dt\n\t\t})\n\tglobal_search_settings.save(ignore_permissions=True)\n\tshow_message(3, \"Global Search Documents have been reset.\")\n\ndef show_message(progress, msg):\n\tfrappe.publish_realtime('global_search_settings', {\"progress\":progress, \"total\":3, \"msg\": msg}, user=frappe.session.user)\n", "path": 
"frappe/desk/doctype/global_search_settings/global_search_settings.py"}]}
| 2,163 | 183 |
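
The one-line change in the record above swaps `frappe.get_list` for `frappe.get_all`. As far as the traceback is concerned, the relevant difference is that `get_list` runs the permission machinery, which for the child table `Global Search DocType` demands a parent DocType, while `get_all` skips those checks. The sketch below contrasts the two calls; it assumes an initialised Frappe site context and is illustrative rather than a drop-in patch.

```python
import frappe  # assumes a configured Frappe bench/site environment

FIELDS = ["document_type"]

def with_permission_checks():
    # Goes through DatabaseQuery and frappe.has_permission(); for a child table
    # this reaches has_child_table_permission() and raises "Parent DocType Required".
    return frappe.get_list("Global Search DocType", fields=FIELDS, order_by="idx ASC")

def without_permission_checks():
    # Same query, but permission checks are skipped, so no parent DocType is needed.
    return frappe.get_all("Global Search DocType", fields=FIELDS, order_by="idx ASC")
```

Because this lookup runs for every global-search request regardless of user, reading it without per-user permission checks appears to be the intended behaviour, which is what the golden diff restores.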
gh_patches_debug_51797
|
rasdani/github-patches
|
git_diff
|
HypothesisWorks__hypothesis-1379
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ImportError: cannot import name canonical_filename
Hi, I'm getting an import error on startup:
```
File "/Users/adaszko/repos/fieldaware/fieldaware-venv/lib/python2.7/site-packages/hypothesis/core.py", line 38, in <module>
from coverage.files import canonical_filename
ImportError: cannot import name canonical_filename
```
I've downloaded https://files.pythonhosted.org/packages/4b/e4/5ebf3220993de03f2120a16d9e91cfd053f4c11ada0cf033f2bfe9683fcf/hypothesis-3.65.0-py2-none-any.whl and the `METADATA` file there specifies dependency on `coverage` without any version number:
```
% grep coverage METADATA
Requires-Dist: coverage
```
My local `coverage` is at `3.7.1`. It works if I upgrade `coverage` to `4.4.1`, so I think there's an issue in hypothesis in that it doesn't specify the version bound on `coverage`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hypothesis-python/setup.py`
Content:
```
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2018 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 from __future__ import division, print_function, absolute_import
19
20 import os
21 import sys
22 import warnings
23
24 import setuptools
25
26
27 def local_file(name):
28 return os.path.relpath(os.path.join(os.path.dirname(__file__), name))
29
30
31 SOURCE = local_file('src')
32 README = local_file('README.rst')
33
34 setuptools_version = tuple(map(int, setuptools.__version__.split('.')[:2]))
35
36 if setuptools_version < (36, 2):
37 # Warning only - very bad if uploading bdist but fine if installing sdist.
38 warnings.warn(
39 'This version of setuptools is too old to correctly store '
40 'conditional dependencies in binary wheels. For more info, see: '
41 'https://hynek.me/articles/conditional-python-dependencies/'
42 )
43
44
45 # Assignment to placate pyflakes. The actual version is from the exec that
46 # follows.
47 __version__ = None
48
49 with open(local_file('src/hypothesis/version.py')) as o:
50 exec(o.read())
51
52 assert __version__ is not None
53
54
55 extras = {
56 'datetime': ['pytz'],
57 'pytz': ['pytz'],
58 'dateutil': ['python-dateutil'],
59 'fakefactory': ['Faker>=0.7'],
60 'numpy': ['numpy>=1.9.0'],
61 'pytest': ['pytest>=2.8.0'],
62 # We only support Django versions with upstream support - see
63 # https://www.djangoproject.com/download/#supported-versions
64 'django': ['pytz', 'django>=1.11'],
65 }
66
67 extras['faker'] = extras['fakefactory']
68 extras['all'] = sorted(sum(extras.values(), []))
69
70
71 install_requires = ['attrs>=16.0.0', 'coverage']
72 # Using an environment marker on enum34 makes the dependency condition
73 # independent of the build environemnt, which is important for wheels.
74 # https://www.python.org/dev/peps/pep-0345/#environment-markers
75 if sys.version_info[0] < 3 and setuptools_version < (8, 0):
76 # Except really old systems, where we give up and install unconditionally
77 install_requires.append('enum34')
78 else:
79 install_requires.append('enum34; python_version=="2.7"')
80
81
82 setuptools.setup(
83 name='hypothesis',
84 version=__version__,
85 author='David R. MacIver',
86 author_email='[email protected]',
87 packages=setuptools.find_packages(SOURCE),
88 package_dir={'': SOURCE},
89 package_data={'hypothesis': ['py.typed']},
90 url=(
91 'https://github.com/HypothesisWorks/hypothesis/'
92 'tree/master/hypothesis-python'
93 ),
94 license='MPL v2',
95 description='A library for property based testing',
96 zip_safe=False,
97 extras_require=extras,
98 install_requires=install_requires,
99 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
100 classifiers=[
101 'Development Status :: 5 - Production/Stable',
102 'Intended Audience :: Developers',
103 'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',
104 'Operating System :: Unix',
105 'Operating System :: POSIX',
106 'Operating System :: Microsoft :: Windows',
107 'Programming Language :: Python',
108 'Programming Language :: Python :: 2.7',
109 'Programming Language :: Python :: 3',
110 'Programming Language :: Python :: 3.4',
111 'Programming Language :: Python :: 3.5',
112 'Programming Language :: Python :: 3.6',
113 'Programming Language :: Python :: Implementation :: CPython',
114 'Programming Language :: Python :: Implementation :: PyPy',
115 'Topic :: Software Development :: Testing',
116 'Framework :: Pytest',
117 ],
118 entry_points={
119 'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],
120 },
121 long_description=open(README).read(),
122 )
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hypothesis-python/setup.py b/hypothesis-python/setup.py
--- a/hypothesis-python/setup.py
+++ b/hypothesis-python/setup.py
@@ -68,7 +68,7 @@
 extras['all'] = sorted(sum(extras.values(), []))
 
 
-install_requires = ['attrs>=16.0.0', 'coverage']
+install_requires = ['attrs>=16.0.0', 'coverage>=4.0']
# Using an environment marker on enum34 makes the dependency condition
# independent of the build environemnt, which is important for wheels.
# https://www.python.org/dev/peps/pep-0345/#environment-markers
|
{"golden_diff": "diff --git a/hypothesis-python/setup.py b/hypothesis-python/setup.py\n--- a/hypothesis-python/setup.py\n+++ b/hypothesis-python/setup.py\n@@ -68,7 +68,7 @@\n extras['all'] = sorted(sum(extras.values(), []))\n \n \n-install_requires = ['attrs>=16.0.0', 'coverage']\n+install_requires = ['attrs>=16.0.0', 'coverage>=4.0']\n # Using an environment marker on enum34 makes the dependency condition\n # independent of the build environemnt, which is important for wheels.\n # https://www.python.org/dev/peps/pep-0345/#environment-markers\n", "issue": "ImportError: cannot import name canonical_filename\nHi, I'm getting an import error on startup:\r\n\r\n```\r\n File \"/Users/adaszko/repos/fieldaware/fieldaware-venv/lib/python2.7/site-packages/hypothesis/core.py\", line 38, in <module>\r\n from coverage.files import canonical_filename\r\nImportError: cannot import name canonical_filename\r\n```\r\n\r\nI've downloaded https://files.pythonhosted.org/packages/4b/e4/5ebf3220993de03f2120a16d9e91cfd053f4c11ada0cf033f2bfe9683fcf/hypothesis-3.65.0-py2-none-any.whl and the `METADATA` file there specifies dependency on `coverage` without any version number:\r\n\r\n```\r\n% grep coverage METADATA\r\nRequires-Dist: coverage\r\n```\r\n\r\nMy local `coverage` is at `3.7.1`. It works if I upgrade `coverage` to `4.4.1`, so I think there's an issue in hypothesis in that it doesn't specify the version bound on `coverage`.\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport sys\nimport warnings\n\nimport setuptools\n\n\ndef local_file(name):\n return os.path.relpath(os.path.join(os.path.dirname(__file__), name))\n\n\nSOURCE = local_file('src')\nREADME = local_file('README.rst')\n\nsetuptools_version = tuple(map(int, setuptools.__version__.split('.')[:2]))\n\nif setuptools_version < (36, 2):\n # Warning only - very bad if uploading bdist but fine if installing sdist.\n warnings.warn(\n 'This version of setuptools is too old to correctly store '\n 'conditional dependencies in binary wheels. For more info, see: '\n 'https://hynek.me/articles/conditional-python-dependencies/'\n )\n\n\n# Assignment to placate pyflakes. 
The actual version is from the exec that\n# follows.\n__version__ = None\n\nwith open(local_file('src/hypothesis/version.py')) as o:\n exec(o.read())\n\nassert __version__ is not None\n\n\nextras = {\n 'datetime': ['pytz'],\n 'pytz': ['pytz'],\n 'dateutil': ['python-dateutil'],\n 'fakefactory': ['Faker>=0.7'],\n 'numpy': ['numpy>=1.9.0'],\n 'pytest': ['pytest>=2.8.0'],\n # We only support Django versions with upstream support - see\n # https://www.djangoproject.com/download/#supported-versions\n 'django': ['pytz', 'django>=1.11'],\n}\n\nextras['faker'] = extras['fakefactory']\nextras['all'] = sorted(sum(extras.values(), []))\n\n\ninstall_requires = ['attrs>=16.0.0', 'coverage']\n# Using an environment marker on enum34 makes the dependency condition\n# independent of the build environemnt, which is important for wheels.\n# https://www.python.org/dev/peps/pep-0345/#environment-markers\nif sys.version_info[0] < 3 and setuptools_version < (8, 0):\n # Except really old systems, where we give up and install unconditionally\n install_requires.append('enum34')\nelse:\n install_requires.append('enum34; python_version==\"2.7\"')\n\n\nsetuptools.setup(\n name='hypothesis',\n version=__version__,\n author='David R. MacIver',\n author_email='[email protected]',\n packages=setuptools.find_packages(SOURCE),\n package_dir={'': SOURCE},\n package_data={'hypothesis': ['py.typed']},\n url=(\n 'https://github.com/HypothesisWorks/hypothesis/'\n 'tree/master/hypothesis-python'\n ),\n license='MPL v2',\n description='A library for property based testing',\n zip_safe=False,\n extras_require=extras,\n install_requires=install_requires,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',\n 'Operating System :: Unix',\n 'Operating System :: POSIX',\n 'Operating System :: Microsoft :: Windows',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development :: Testing',\n 'Framework :: Pytest',\n ],\n entry_points={\n 'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],\n },\n long_description=open(README).read(),\n)\n", "path": "hypothesis-python/setup.py"}], "after_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport sys\nimport warnings\n\nimport setuptools\n\n\ndef local_file(name):\n return os.path.relpath(os.path.join(os.path.dirname(__file__), name))\n\n\nSOURCE = local_file('src')\nREADME = local_file('README.rst')\n\nsetuptools_version = tuple(map(int, setuptools.__version__.split('.')[:2]))\n\nif setuptools_version < (36, 2):\n # Warning only - very bad if uploading bdist but fine if installing sdist.\n warnings.warn(\n 'This version of setuptools is too old to correctly store '\n 'conditional dependencies in binary wheels. For more info, see: '\n 'https://hynek.me/articles/conditional-python-dependencies/'\n )\n\n\n# Assignment to placate pyflakes. The actual version is from the exec that\n# follows.\n__version__ = None\n\nwith open(local_file('src/hypothesis/version.py')) as o:\n exec(o.read())\n\nassert __version__ is not None\n\n\nextras = {\n 'datetime': ['pytz'],\n 'pytz': ['pytz'],\n 'dateutil': ['python-dateutil'],\n 'fakefactory': ['Faker>=0.7'],\n 'numpy': ['numpy>=1.9.0'],\n 'pytest': ['pytest>=2.8.0'],\n # We only support Django versions with upstream support - see\n # https://www.djangoproject.com/download/#supported-versions\n 'django': ['pytz', 'django>=1.11'],\n}\n\nextras['faker'] = extras['fakefactory']\nextras['all'] = sorted(sum(extras.values(), []))\n\n\ninstall_requires = ['attrs>=16.0.0', 'coverage>=4.0']\n# Using an environment marker on enum34 makes the dependency condition\n# independent of the build environemnt, which is important for wheels.\n# https://www.python.org/dev/peps/pep-0345/#environment-markers\nif sys.version_info[0] < 3 and setuptools_version < (8, 0):\n # Except really old systems, where we give up and install unconditionally\n install_requires.append('enum34')\nelse:\n install_requires.append('enum34; python_version==\"2.7\"')\n\n\nsetuptools.setup(\n name='hypothesis',\n version=__version__,\n author='David R. MacIver',\n author_email='[email protected]',\n packages=setuptools.find_packages(SOURCE),\n package_dir={'': SOURCE},\n # package_data={'': ['py.typed']}, # un-comment to release type hints\n url=(\n 'https://github.com/HypothesisWorks/hypothesis/'\n 'tree/master/hypothesis-python'\n ),\n license='MPL v2',\n description='A library for property based testing',\n zip_safe=False,\n extras_require=extras,\n install_requires=install_requires,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',\n 'Operating System :: Unix',\n 'Operating System :: POSIX',\n 'Operating System :: Microsoft :: Windows',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development :: Testing',\n 'Framework :: Pytest',\n ],\n entry_points={\n 'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],\n },\n long_description=open(README).read(),\n)\n", "path": "hypothesis-python/setup.py"}]}
| 1,825 | 151 |
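
The record above fixes the `ImportError` by declaring `coverage>=4.0` in `install_requires`, since `coverage.files.canonical_filename` is what `hypothesis/core.py` imports and older coverage releases (3.7.1 in the report) do not provide it. For completeness, a defensive import guard of the kind an application might use when it cannot control the installed coverage version is sketched below; the fallback is an approximation, not the upstream helper.

```python
# Illustrative guard only: Hypothesis itself solves this by pinning the dependency,
# which is the better fix. The fallback merely approximates canonical_filename
# with a symlink-resolved absolute path.
try:
    from coverage.files import canonical_filename  # available in coverage >= 4.0
except ImportError:
    import os

    def canonical_filename(filename):
        return os.path.realpath(os.path.abspath(filename))
```

Pinning in `setup.py` remains preferable because it fixes the environment at install time instead of papering over the mismatch at import time.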
gh_patches_debug_3905
|
rasdani/github-patches
|
git_diff
|
microsoft__hi-ml-65
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add all items required for making the repository public
Ensure that all files have copyright notices, and that editors are set up to automatically insert them (PyCharm does it correctly on InnerEye)
You must run the following source code analysis tools:
CredScan
CodeQL (Semmle)
Component Governance Detection
The easiest way to run these tools is to add thems in your build pipeline in a Microsoft-managed Azure DevOps account.
For CodeQL, please ensure the following (detailed instructions for CodeQL can be found here):
Select the source code language in the CodeQL task.
If your application was developed using multiple languages, add multiple CodeQL tasks.
Define the build variable LGTM.UploadSnapshot=true.
Configure the build to allow scripts to access OAuth token.
If the code is hosted in Github, create Azure DevOps PAT token with code read scope for dev.azure.com/Microsoft (or ‘all’) organization and set the local task variable System_AccessToken with it. (Note: This only works for YAML-based pipelines.)
Review security issues by navigating to semmleportal.azurewebsites.net/lookup. It may take up to one day to process results.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/health/azure/datasets.py`
Content:
```
1 import logging
2 from pathlib import Path
3 from typing import List, Optional, Union
4
5 from azureml.core import Dataset, Datastore, Workspace
6 from azureml.data import FileDataset, OutputFileDatasetConfig
7 from azureml.data.dataset_consumption_config import DatasetConsumptionConfig
8
9
10 def get_datastore(workspace: Workspace, datastore_name: str) -> Datastore:
11 """
12 Retrieves a datastore of a given name from an AzureML workspace. The datastore_name argument can be omitted if
13 the workspace only contains a single datastore. Raises a ValueError if there is no datastore of the given name.
14 :param workspace: The AzureML workspace to read from.
15 :param datastore_name: The name of the datastore to retrieve.
16 :return: An AzureML datastore.
17 """
18 datastores = workspace.datastores
19 existing_stores = list(datastores.keys())
20 if not datastore_name:
21 if len(existing_stores) == 1:
22 return datastores[existing_stores[0]]
23 raise ValueError("No datastore name provided. This is only possible if the workspace has a single datastore. "
24 f"However, the workspace has {len(existing_stores)} datastores: {existing_stores}")
25 if datastore_name in datastores:
26 return datastores[datastore_name]
27 raise ValueError(f"Datastore {datastore_name} was not found in the workspace. Existing datastores: "
28 f"{existing_stores}")
29
30
31 def get_or_create_dataset(workspace: Workspace, datastore_name: str, dataset_name: str) -> FileDataset:
32 """
33 Looks in the AzureML datastore for a dataset of the given name. If there is no such dataset, a dataset is
34 created and registered, assuming that the files are in a folder that has the same name as the dataset.
35 For example, if dataset_name is 'foo', then the 'foo' dataset should be pointing to the folder
36 <container_root>/datasets/dataset_name/
37 """
38 if not dataset_name:
39 raise ValueError("No dataset name provided.")
40 try:
41 logging.info(f"Trying to retrieve AzureML Dataset '{dataset_name}'")
42 azureml_dataset = Dataset.get_by_name(workspace, name=dataset_name)
43 logging.info("Dataset found.")
44 except Exception:
45 logging.info(f"Retrieving datastore '{datastore_name}' from AzureML workspace")
46 datastore = get_datastore(workspace, datastore_name)
47 logging.info(f"Creating a new dataset from data in folder '{dataset_name}' in the datastore")
48 # Ensure that there is a / at the end of the file path, otherwise folder that share a prefix could create
49 # trouble (for example, folders foo and foo_bar exist, and I'm trying to create a dataset from "foo")
50 azureml_dataset = Dataset.File.from_files(path=(datastore, dataset_name + "/"))
51 logging.info("Registering the dataset for future use.")
52 azureml_dataset.register(workspace, name=dataset_name)
53 return azureml_dataset
54
55
56 def _input_dataset_key(index: int) -> str:
57 return f"INPUT_{index}"
58
59
60 def _output_dataset_key(index: int) -> str:
61 return f"OUTPUT_{index}"
62
63
64 class DatasetConfig:
65 """
66 Contains information to use AzureML datasets as inputs or outputs.
67 """
68
69 def __init__(self,
70 name: str,
71 datastore: str = "",
72 version: Optional[int] = None,
73 use_mounting: Optional[bool] = None,
74 target_folder: str = "",
75 local_folder: Optional[Path] = None):
76 """
77 Creates a new configuration for using an AzureML dataset.
78 :param name: The name of the dataset, as it was registered in the AzureML workspace. For output datasets,
79 this will be the name given to the newly created dataset.
80 :param datastore: The name of the AzureML datastore that holds the dataset. This can be empty if the AzureML
81 workspace has only a single datastore, or if the default datastore should be used.
82 :param version: The version of the dataset that should be used. This is only used for input datasets.
83 If the version is not specified, the latest version will be used.
84 :param use_mounting: If True, the dataset will be "mounted", that is, individual files will be read
85 or written on-demand over the network. If False, the dataset will be fully downloaded before the job starts,
86 respectively fully uploaded at job end for output datasets.
87 Defaults: False (downloading) for datasets that are script inputs, True (mounting) for datasets that are script
88 outputs.
89 :param target_folder: The folder into which the dataset should be downloaded or mounted. If left empty, a
90 random folder on /tmp will be chosen.
91 :param local_folder: The folder on the local machine at which the dataset is available. This
92 is used only for runs outside of AzureML.
93 """
94 # This class would be a good candidate for a dataclass, but having an explicit constructor makes
95 # documentation tools in the editor work nicer.
96 name = name.strip()
97 if not name:
98 raise ValueError("The name of the dataset must be a non-empty string.")
99 self.name = name
100 self.datastore = datastore
101 self.version = version
102 self.use_mounting = use_mounting
103 self.target_folder = target_folder
104 self.local_folder = local_folder
105
106 def to_input_dataset(self,
107 workspace: Workspace,
108 dataset_index: int) -> DatasetConsumptionConfig:
109 """
110 Creates a configuration for using an AzureML dataset inside of an AzureML run. This will make the AzureML
111 dataset with given name available as a named input, using INPUT_0 as the key for dataset index 0.
112 :param workspace: The AzureML workspace to read from.
113 :param dataset_index: Suffix for using datasets as named inputs, the dataset will be marked INPUT_{index}
114 """
115 status = f"Dataset {self.name} (index {dataset_index}) will be "
116 azureml_dataset = get_or_create_dataset(workspace=workspace,
117 dataset_name=self.name,
118 datastore_name=self.datastore)
119 named_input = azureml_dataset.as_named_input(_input_dataset_key(index=dataset_index))
120 path_on_compute = self.target_folder or None
121 use_mounting = False if self.use_mounting is None else self.use_mounting
122 if use_mounting:
123 status += "mounted at "
124 result = named_input.as_mount(path_on_compute)
125 else:
126 status += "downloaded to "
127 result = named_input.as_download(path_on_compute)
128 if path_on_compute:
129 status += f"{path_on_compute}."
130 else:
131 status += "a randomly chosen folder."
132 logging.info(status)
133 return result
134
135 def to_output_dataset(self,
136 workspace: Workspace,
137 dataset_index: int) -> OutputFileDatasetConfig:
138 """
139 Creates a configuration to write a script output to an AzureML dataset. The name and datastore of this new
140 dataset will be taken from the present object.
141 :param workspace: The AzureML workspace to read from.
142 :param dataset_index: Suffix for using datasets as named inputs, the dataset will be marked OUTPUT_{index}
143 :return:
144 """
145 status = f"Output dataset {self.name} (index {dataset_index}) will be "
146 datastore = get_datastore(workspace, self.datastore)
147 dataset = OutputFileDatasetConfig(name=_output_dataset_key(index=dataset_index),
148 destination=(datastore, self.name + "/"))
149 # TODO: Can we get tags into here too?
150 dataset = dataset.register_on_complete(name=self.name)
151 if self.target_folder:
152 raise ValueError("Output datasets can't have a target_folder set.")
153 use_mounting = True if self.use_mounting is None else self.use_mounting
154 if use_mounting:
155 status += "uploaded while the job runs."
156 result = dataset.as_mount()
157 else:
158 status += "uploaded when the job completes."
159 result = dataset.as_upload()
160 logging.info(status)
161 return result
162
163
164 StrOrDatasetConfig = Union[str, DatasetConfig]
165
166
167 def _replace_string_datasets(datasets: List[StrOrDatasetConfig],
168 default_datastore_name: str) -> List[DatasetConfig]:
169 """
170 Processes a list of input or output datasets. All entries in the list that are strings are turned into
171 DatasetConfig objects, using the string as the dataset name, and pointing to the default datastore.
172 :param datasets: A list of datasets, each given either as a string or a DatasetConfig object.
173 :param default_datastore_name: The datastore to use for all datasets that are only specified via their name.
174 :return: A list of DatasetConfig objects, in the same order as the input list.
175 """
176 return [DatasetConfig(name=d, datastore=default_datastore_name) if isinstance(d, str) else d
177 for d in datasets]
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/health/azure/datasets.py b/src/health/azure/datasets.py
--- a/src/health/azure/datasets.py
+++ b/src/health/azure/datasets.py
@@ -1,3 +1,7 @@
+# ------------------------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License (MIT). See LICENSE in the repo root for license information.
+# ------------------------------------------------------------------------------------------
import logging
from pathlib import Path
from typing import List, Optional, Union
|
{"golden_diff": "diff --git a/src/health/azure/datasets.py b/src/health/azure/datasets.py\n--- a/src/health/azure/datasets.py\n+++ b/src/health/azure/datasets.py\n@@ -1,3 +1,7 @@\n+# ------------------------------------------------------------------------------------------\n+# Copyright (c) Microsoft Corporation. All rights reserved.\n+# Licensed under the MIT License (MIT). See LICENSE in the repo root for license information.\n+# ------------------------------------------------------------------------------------------\n import logging\n from pathlib import Path\n from typing import List, Optional, Union\n", "issue": "Add all items required for making the repository public\nEnsure that all files have copyright notices, and that editors are set up to automatically insert them (PyCharm does it correctly on InnerEye)\r\n\r\nYou must run the following source code analysis tools:\r\nCredScan\r\nCodeQL (Semmle)\r\nComponent Governance Detection\r\nThe easiest way to run these tools is to add thems in your build pipeline in a Microsoft-managed Azure DevOps account.\r\n\r\nFor CodeQL, please ensure the following (detailed instructions for CodeQL can be found here):\r\nSelect the source code language in the CodeQL task.\r\nIf your application was developed using multiple languages, add multiple CodeQL tasks.\r\nDefine the build variable LGTM.UploadSnapshot=true.\r\nConfigure the build to allow scripts to access OAuth token.\r\nIf the code is hosted in Github, create Azure DevOps PAT token with code read scope for dev.azure.com/Microsoft (or \u2018all\u2019) organization and set the local task variable System_AccessToken with it. (Note: This only works for YAML-based pipelines.)\r\nReview security issues by navigating to semmleportal.azurewebsites.net/lookup. It may take up to one day to process results.\n", "before_files": [{"content": "import logging\nfrom pathlib import Path\nfrom typing import List, Optional, Union\n\nfrom azureml.core import Dataset, Datastore, Workspace\nfrom azureml.data import FileDataset, OutputFileDatasetConfig\nfrom azureml.data.dataset_consumption_config import DatasetConsumptionConfig\n\n\ndef get_datastore(workspace: Workspace, datastore_name: str) -> Datastore:\n \"\"\"\n Retrieves a datastore of a given name from an AzureML workspace. The datastore_name argument can be omitted if\n the workspace only contains a single datastore. Raises a ValueError if there is no datastore of the given name.\n :param workspace: The AzureML workspace to read from.\n :param datastore_name: The name of the datastore to retrieve.\n :return: An AzureML datastore.\n \"\"\"\n datastores = workspace.datastores\n existing_stores = list(datastores.keys())\n if not datastore_name:\n if len(existing_stores) == 1:\n return datastores[existing_stores[0]]\n raise ValueError(\"No datastore name provided. This is only possible if the workspace has a single datastore. \"\n f\"However, the workspace has {len(existing_stores)} datastores: {existing_stores}\")\n if datastore_name in datastores:\n return datastores[datastore_name]\n raise ValueError(f\"Datastore {datastore_name} was not found in the workspace. Existing datastores: \"\n f\"{existing_stores}\")\n\n\ndef get_or_create_dataset(workspace: Workspace, datastore_name: str, dataset_name: str) -> FileDataset:\n \"\"\"\n Looks in the AzureML datastore for a dataset of the given name. 
If there is no such dataset, a dataset is\n created and registered, assuming that the files are in a folder that has the same name as the dataset.\n For example, if dataset_name is 'foo', then the 'foo' dataset should be pointing to the folder\n <container_root>/datasets/dataset_name/\n \"\"\"\n if not dataset_name:\n raise ValueError(\"No dataset name provided.\")\n try:\n logging.info(f\"Trying to retrieve AzureML Dataset '{dataset_name}'\")\n azureml_dataset = Dataset.get_by_name(workspace, name=dataset_name)\n logging.info(\"Dataset found.\")\n except Exception:\n logging.info(f\"Retrieving datastore '{datastore_name}' from AzureML workspace\")\n datastore = get_datastore(workspace, datastore_name)\n logging.info(f\"Creating a new dataset from data in folder '{dataset_name}' in the datastore\")\n # Ensure that there is a / at the end of the file path, otherwise folder that share a prefix could create\n # trouble (for example, folders foo and foo_bar exist, and I'm trying to create a dataset from \"foo\")\n azureml_dataset = Dataset.File.from_files(path=(datastore, dataset_name + \"/\"))\n logging.info(\"Registering the dataset for future use.\")\n azureml_dataset.register(workspace, name=dataset_name)\n return azureml_dataset\n\n\ndef _input_dataset_key(index: int) -> str:\n return f\"INPUT_{index}\"\n\n\ndef _output_dataset_key(index: int) -> str:\n return f\"OUTPUT_{index}\"\n\n\nclass DatasetConfig:\n \"\"\"\n Contains information to use AzureML datasets as inputs or outputs.\n \"\"\"\n\n def __init__(self,\n name: str,\n datastore: str = \"\",\n version: Optional[int] = None,\n use_mounting: Optional[bool] = None,\n target_folder: str = \"\",\n local_folder: Optional[Path] = None):\n \"\"\"\n Creates a new configuration for using an AzureML dataset.\n :param name: The name of the dataset, as it was registered in the AzureML workspace. For output datasets,\n this will be the name given to the newly created dataset.\n :param datastore: The name of the AzureML datastore that holds the dataset. This can be empty if the AzureML\n workspace has only a single datastore, or if the default datastore should be used.\n :param version: The version of the dataset that should be used. This is only used for input datasets.\n If the version is not specified, the latest version will be used.\n :param use_mounting: If True, the dataset will be \"mounted\", that is, individual files will be read\n or written on-demand over the network. If False, the dataset will be fully downloaded before the job starts,\n respectively fully uploaded at job end for output datasets.\n Defaults: False (downloading) for datasets that are script inputs, True (mounting) for datasets that are script\n outputs.\n :param target_folder: The folder into which the dataset should be downloaded or mounted. If left empty, a\n random folder on /tmp will be chosen.\n :param local_folder: The folder on the local machine at which the dataset is available. 
This\n is used only for runs outside of AzureML.\n \"\"\"\n # This class would be a good candidate for a dataclass, but having an explicit constructor makes\n # documentation tools in the editor work nicer.\n name = name.strip()\n if not name:\n raise ValueError(\"The name of the dataset must be a non-empty string.\")\n self.name = name\n self.datastore = datastore\n self.version = version\n self.use_mounting = use_mounting\n self.target_folder = target_folder\n self.local_folder = local_folder\n\n def to_input_dataset(self,\n workspace: Workspace,\n dataset_index: int) -> DatasetConsumptionConfig:\n \"\"\"\n Creates a configuration for using an AzureML dataset inside of an AzureML run. This will make the AzureML\n dataset with given name available as a named input, using INPUT_0 as the key for dataset index 0.\n :param workspace: The AzureML workspace to read from.\n :param dataset_index: Suffix for using datasets as named inputs, the dataset will be marked INPUT_{index}\n \"\"\"\n status = f\"Dataset {self.name} (index {dataset_index}) will be \"\n azureml_dataset = get_or_create_dataset(workspace=workspace,\n dataset_name=self.name,\n datastore_name=self.datastore)\n named_input = azureml_dataset.as_named_input(_input_dataset_key(index=dataset_index))\n path_on_compute = self.target_folder or None\n use_mounting = False if self.use_mounting is None else self.use_mounting\n if use_mounting:\n status += \"mounted at \"\n result = named_input.as_mount(path_on_compute)\n else:\n status += \"downloaded to \"\n result = named_input.as_download(path_on_compute)\n if path_on_compute:\n status += f\"{path_on_compute}.\"\n else:\n status += \"a randomly chosen folder.\"\n logging.info(status)\n return result\n\n def to_output_dataset(self,\n workspace: Workspace,\n dataset_index: int) -> OutputFileDatasetConfig:\n \"\"\"\n Creates a configuration to write a script output to an AzureML dataset. The name and datastore of this new\n dataset will be taken from the present object.\n :param workspace: The AzureML workspace to read from.\n :param dataset_index: Suffix for using datasets as named inputs, the dataset will be marked OUTPUT_{index}\n :return:\n \"\"\"\n status = f\"Output dataset {self.name} (index {dataset_index}) will be \"\n datastore = get_datastore(workspace, self.datastore)\n dataset = OutputFileDatasetConfig(name=_output_dataset_key(index=dataset_index),\n destination=(datastore, self.name + \"/\"))\n # TODO: Can we get tags into here too?\n dataset = dataset.register_on_complete(name=self.name)\n if self.target_folder:\n raise ValueError(\"Output datasets can't have a target_folder set.\")\n use_mounting = True if self.use_mounting is None else self.use_mounting\n if use_mounting:\n status += \"uploaded while the job runs.\"\n result = dataset.as_mount()\n else:\n status += \"uploaded when the job completes.\"\n result = dataset.as_upload()\n logging.info(status)\n return result\n\n\nStrOrDatasetConfig = Union[str, DatasetConfig]\n\n\ndef _replace_string_datasets(datasets: List[StrOrDatasetConfig],\n default_datastore_name: str) -> List[DatasetConfig]:\n \"\"\"\n Processes a list of input or output datasets. 
All entries in the list that are strings are turned into\n DatasetConfig objects, using the string as the dataset name, and pointing to the default datastore.\n :param datasets: A list of datasets, each given either as a string or a DatasetConfig object.\n :param default_datastore_name: The datastore to use for all datasets that are only specified via their name.\n :return: A list of DatasetConfig objects, in the same order as the input list.\n \"\"\"\n return [DatasetConfig(name=d, datastore=default_datastore_name) if isinstance(d, str) else d\n for d in datasets]\n", "path": "src/health/azure/datasets.py"}], "after_files": [{"content": "# ------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License (MIT). See LICENSE in the repo root for license information.\n# ------------------------------------------------------------------------------------------\nimport logging\nfrom pathlib import Path\nfrom typing import List, Optional, Union\n\nfrom azureml.core import Dataset, Datastore, Workspace\nfrom azureml.data import FileDataset, OutputFileDatasetConfig\nfrom azureml.data.dataset_consumption_config import DatasetConsumptionConfig\n\n\ndef get_datastore(workspace: Workspace, datastore_name: str) -> Datastore:\n \"\"\"\n Retrieves a datastore of a given name from an AzureML workspace. The datastore_name argument can be omitted if\n the workspace only contains a single datastore. Raises a ValueError if there is no datastore of the given name.\n :param workspace: The AzureML workspace to read from.\n :param datastore_name: The name of the datastore to retrieve.\n :return: An AzureML datastore.\n \"\"\"\n datastores = workspace.datastores\n existing_stores = list(datastores.keys())\n if not datastore_name:\n if len(existing_stores) == 1:\n return datastores[existing_stores[0]]\n raise ValueError(\"No datastore name provided. This is only possible if the workspace has a single datastore. \"\n f\"However, the workspace has {len(existing_stores)} datastores: {existing_stores}\")\n if datastore_name in datastores:\n return datastores[datastore_name]\n raise ValueError(f\"Datastore {datastore_name} was not found in the workspace. Existing datastores: \"\n f\"{existing_stores}\")\n\n\ndef get_or_create_dataset(workspace: Workspace, datastore_name: str, dataset_name: str) -> FileDataset:\n \"\"\"\n Looks in the AzureML datastore for a dataset of the given name. 
If there is no such dataset, a dataset is\n created and registered, assuming that the files are in a folder that has the same name as the dataset.\n For example, if dataset_name is 'foo', then the 'foo' dataset should be pointing to the folder\n <container_root>/datasets/dataset_name/\n \"\"\"\n if not dataset_name:\n raise ValueError(\"No dataset name provided.\")\n try:\n logging.info(f\"Trying to retrieve AzureML Dataset '{dataset_name}'\")\n azureml_dataset = Dataset.get_by_name(workspace, name=dataset_name)\n logging.info(\"Dataset found.\")\n except Exception:\n logging.info(f\"Retrieving datastore '{datastore_name}' from AzureML workspace\")\n datastore = get_datastore(workspace, datastore_name)\n logging.info(f\"Creating a new dataset from data in folder '{dataset_name}' in the datastore\")\n # Ensure that there is a / at the end of the file path, otherwise folder that share a prefix could create\n # trouble (for example, folders foo and foo_bar exist, and I'm trying to create a dataset from \"foo\")\n azureml_dataset = Dataset.File.from_files(path=(datastore, dataset_name + \"/\"))\n logging.info(\"Registering the dataset for future use.\")\n azureml_dataset.register(workspace, name=dataset_name)\n return azureml_dataset\n\n\ndef _input_dataset_key(index: int) -> str:\n return f\"INPUT_{index}\"\n\n\ndef _output_dataset_key(index: int) -> str:\n return f\"OUTPUT_{index}\"\n\n\nclass DatasetConfig:\n \"\"\"\n Contains information to use AzureML datasets as inputs or outputs.\n \"\"\"\n\n def __init__(self,\n name: str,\n datastore: str = \"\",\n version: Optional[int] = None,\n use_mounting: Optional[bool] = None,\n target_folder: str = \"\",\n local_folder: Optional[Path] = None):\n \"\"\"\n Creates a new configuration for using an AzureML dataset.\n :param name: The name of the dataset, as it was registered in the AzureML workspace. For output datasets,\n this will be the name given to the newly created dataset.\n :param datastore: The name of the AzureML datastore that holds the dataset. This can be empty if the AzureML\n workspace has only a single datastore, or if the default datastore should be used.\n :param version: The version of the dataset that should be used. This is only used for input datasets.\n If the version is not specified, the latest version will be used.\n :param use_mounting: If True, the dataset will be \"mounted\", that is, individual files will be read\n or written on-demand over the network. If False, the dataset will be fully downloaded before the job starts,\n respectively fully uploaded at job end for output datasets.\n Defaults: False (downloading) for datasets that are script inputs, True (mounting) for datasets that are script\n outputs.\n :param target_folder: The folder into which the dataset should be downloaded or mounted. If left empty, a\n random folder on /tmp will be chosen.\n :param local_folder: The folder on the local machine at which the dataset is available. 
This\n is used only for runs outside of AzureML.\n \"\"\"\n # This class would be a good candidate for a dataclass, but having an explicit constructor makes\n # documentation tools in the editor work nicer.\n name = name.strip()\n if not name:\n raise ValueError(\"The name of the dataset must be a non-empty string.\")\n self.name = name\n self.datastore = datastore\n self.version = version\n self.use_mounting = use_mounting\n self.target_folder = target_folder\n self.local_folder = local_folder\n\n def to_input_dataset(self,\n workspace: Workspace,\n dataset_index: int) -> DatasetConsumptionConfig:\n \"\"\"\n Creates a configuration for using an AzureML dataset inside of an AzureML run. This will make the AzureML\n dataset with given name available as a named input, using INPUT_0 as the key for dataset index 0.\n :param workspace: The AzureML workspace to read from.\n :param dataset_index: Suffix for using datasets as named inputs, the dataset will be marked INPUT_{index}\n \"\"\"\n status = f\"Dataset {self.name} (index {dataset_index}) will be \"\n azureml_dataset = get_or_create_dataset(workspace=workspace,\n dataset_name=self.name,\n datastore_name=self.datastore)\n named_input = azureml_dataset.as_named_input(_input_dataset_key(index=dataset_index))\n path_on_compute = self.target_folder or None\n use_mounting = False if self.use_mounting is None else self.use_mounting\n if use_mounting:\n status += \"mounted at \"\n result = named_input.as_mount(path_on_compute)\n else:\n status += \"downloaded to \"\n result = named_input.as_download(path_on_compute)\n if path_on_compute:\n status += f\"{path_on_compute}.\"\n else:\n status += \"a randomly chosen folder.\"\n logging.info(status)\n return result\n\n def to_output_dataset(self,\n workspace: Workspace,\n dataset_index: int) -> OutputFileDatasetConfig:\n \"\"\"\n Creates a configuration to write a script output to an AzureML dataset. The name and datastore of this new\n dataset will be taken from the present object.\n :param workspace: The AzureML workspace to read from.\n :param dataset_index: Suffix for using datasets as named inputs, the dataset will be marked OUTPUT_{index}\n :return:\n \"\"\"\n status = f\"Output dataset {self.name} (index {dataset_index}) will be \"\n datastore = get_datastore(workspace, self.datastore)\n dataset = OutputFileDatasetConfig(name=_output_dataset_key(index=dataset_index),\n destination=(datastore, self.name + \"/\"))\n # TODO: Can we get tags into here too?\n dataset = dataset.register_on_complete(name=self.name)\n if self.target_folder:\n raise ValueError(\"Output datasets can't have a target_folder set.\")\n use_mounting = True if self.use_mounting is None else self.use_mounting\n if use_mounting:\n status += \"uploaded while the job runs.\"\n result = dataset.as_mount()\n else:\n status += \"uploaded when the job completes.\"\n result = dataset.as_upload()\n logging.info(status)\n return result\n\n\nStrOrDatasetConfig = Union[str, DatasetConfig]\n\n\ndef _replace_string_datasets(datasets: List[StrOrDatasetConfig],\n default_datastore_name: str) -> List[DatasetConfig]:\n \"\"\"\n Processes a list of input or output datasets. 
All entries in the list that are strings are turned into\n DatasetConfig objects, using the string as the dataset name, and pointing to the default datastore.\n :param datasets: A list of datasets, each given either as a string or a DatasetConfig object.\n :param default_datastore_name: The datastore to use for all datasets that are only specified via their name.\n :return: A list of DatasetConfig objects, in the same order as the input list.\n \"\"\"\n return [DatasetConfig(name=d, datastore=default_datastore_name) if isinstance(d, str) else d\n for d in datasets]\n", "path": "src/health/azure/datasets.py"}]}
| 2,817 | 112 |
gh_patches_debug_14242
|
rasdani/github-patches
|
git_diff
|
deepset-ai__haystack-2957
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Code snippet wrongly formatted in docs
**Describe the bug**
The code snippet at:
https://github.com/deepset-ai/haystack/blob/master/haystack/nodes/ranker/sentence_transformers.py#L32
is rendered as plain text in the corresponding documentation page
https://haystack.deepset.ai/reference/ranker
**Expected behavior**
The snipped should be formatted as Python code
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `haystack/nodes/ranker/sentence_transformers.py`
Content:
```
1 from typing import List, Optional, Union, Tuple, Iterator, Any
2 import logging
3 from pathlib import Path
4
5 import torch
6 from torch.nn import DataParallel
7 from transformers import AutoModelForSequenceClassification, AutoTokenizer
8
9 from haystack.errors import HaystackError
10 from haystack.schema import Document
11 from haystack.nodes.ranker.base import BaseRanker
12 from haystack.modeling.utils import initialize_device_settings
13
14 logger = logging.getLogger(__name__)
15
16
17 class SentenceTransformersRanker(BaseRanker):
18 """
19 Sentence Transformer based pre-trained Cross-Encoder model for Document Re-ranking (https://huggingface.co/cross-encoder).
20 Re-Ranking can be used on top of a retriever to boost the performance for document search. This is particularly useful if the retriever has a high recall but is bad in sorting the documents by relevance.
21
22 SentenceTransformerRanker handles Cross-Encoder models
23 - use a single logit as similarity score e.g. cross-encoder/ms-marco-MiniLM-L-12-v2
24 - use two output logits (no_answer, has_answer) e.g. deepset/gbert-base-germandpr-reranking
25 https://www.sbert.net/docs/pretrained-models/ce-msmarco.html#usage-with-transformers
26
27 | With a SentenceTransformersRanker, you can:
28 - directly get predictions via predict()
29
30 Usage example:
31 ...
32 retriever = BM25Retriever(document_store=document_store)
33 ranker = SentenceTransformersRanker(model_name_or_path="cross-encoder/ms-marco-MiniLM-L-12-v2")
34 p = Pipeline()
35 p.add_node(component=retriever, name="ESRetriever", inputs=["Query"])
36 p.add_node(component=ranker, name="Ranker", inputs=["ESRetriever"])
37 """
38
39 def __init__(
40 self,
41 model_name_or_path: Union[str, Path],
42 model_version: Optional[str] = None,
43 top_k: int = 10,
44 use_gpu: bool = True,
45 devices: Optional[List[Union[str, torch.device]]] = None,
46 batch_size: Optional[int] = None,
47 scale_score: bool = True,
48 ):
49 """
50 :param model_name_or_path: Directory of a saved model or the name of a public model e.g.
51 'cross-encoder/ms-marco-MiniLM-L-12-v2'.
52 See https://huggingface.co/cross-encoder for full list of available models
53 :param model_version: The version of model to use from the HuggingFace model hub. Can be tag name, branch name, or commit hash.
54 :param top_k: The maximum number of documents to return
55 :param use_gpu: Whether to use all available GPUs or the CPU. Falls back on CPU if no GPU is available.
56 :param devices: List of GPU (or CPU) devices, to limit inference to certain GPUs and not use all available ones
57 The strings will be converted into pytorch devices, so use the string notation described here:
58 https://pytorch.org/docs/stable/tensor_attributes.html?highlight=torch%20device#torch.torch.device
59 (e.g. ["cuda:0"]).
60 :param batch_size: Number of documents to process at a time.
61 :param scale_score: The raw predictions will be transformed using a Sigmoid activation function in case the model
62 only predicts a single label. For multi-label predictions, no scaling is applied. Set this
63 to False if you do not want any scaling of the raw predictions.
64 """
65 super().__init__()
66
67 self.top_k = top_k
68
69 if devices is not None:
70 self.devices = [torch.device(device) for device in devices]
71 else:
72 self.devices, _ = initialize_device_settings(use_cuda=use_gpu, multi_gpu=True)
73
74 self.transformer_model = AutoModelForSequenceClassification.from_pretrained(
75 pretrained_model_name_or_path=model_name_or_path, revision=model_version
76 )
77 self.transformer_model.to(str(self.devices[0]))
78 self.transformer_tokenizer = AutoTokenizer.from_pretrained(
79 pretrained_model_name_or_path=model_name_or_path, revision=model_version
80 )
81 self.transformer_model.eval()
82
83 # we use sigmoid activation function to scale the score in case there is only a single label
84 # we do not apply any scaling when scale_score is set to False
85 num_labels = self.transformer_model.num_labels
86 self.activation_function: torch.nn.Module
87 if num_labels == 1 and scale_score:
88 self.activation_function = torch.nn.Sigmoid()
89 else:
90 self.activation_function = torch.nn.Identity()
91
92 if len(self.devices) > 1:
93 self.model = DataParallel(self.transformer_model, device_ids=self.devices)
94
95 self.batch_size = batch_size
96
97 def predict(self, query: str, documents: List[Document], top_k: Optional[int] = None) -> List[Document]:
98 """
99 Use loaded ranker model to re-rank the supplied list of Document.
100
101 Returns list of Document sorted by (desc.) similarity with the query.
102
103 :param query: Query string
104 :param documents: List of Document to be re-ranked
105 :param top_k: The maximum number of documents to return
106 :return: List of Document
107 """
108 if top_k is None:
109 top_k = self.top_k
110
111 features = self.transformer_tokenizer(
112 [query for doc in documents],
113 [doc.content for doc in documents],
114 padding=True,
115 truncation=True,
116 return_tensors="pt",
117 ).to(self.devices[0])
118
119 # SentenceTransformerRanker uses:
120 # 1. the logit as similarity score/answerable classification
121 # 2. the logits as answerable classification (no_answer / has_answer)
122 # https://www.sbert.net/docs/pretrained-models/ce-msmarco.html#usage-with-transformers
123 with torch.no_grad():
124 similarity_scores = self.transformer_model(**features).logits
125
126 logits_dim = similarity_scores.shape[1] # [batch_size, logits_dim]
127 sorted_scores_and_documents = sorted(
128 zip(similarity_scores, documents),
129 key=lambda similarity_document_tuple:
130 # assume the last element in logits represents the `has_answer` label
131 similarity_document_tuple[0][-1] if logits_dim >= 2 else similarity_document_tuple[0],
132 reverse=True,
133 )
134
135 # add normalized scores to documents
136 sorted_documents = self._add_scores_to_documents(sorted_scores_and_documents[:top_k], logits_dim)
137
138 return sorted_documents
139
140 def _add_scores_to_documents(
141 self, sorted_scores_and_documents: List[Tuple[Any, Document]], logits_dim: int
142 ) -> List[Document]:
143 """
144 Normalize and add scores to retrieved result documents.
145
146 :param sorted_scores_and_documents: List of score, Document Tuples.
147 :param logits_dim: Dimensionality of the returned scores.
148 """
149 sorted_documents = []
150 for raw_score, doc in sorted_scores_and_documents:
151 if logits_dim >= 2:
152 score = self.activation_function(raw_score)[-1]
153 else:
154 score = self.activation_function(raw_score)[0]
155
156 doc.score = score.detach().cpu().numpy().tolist()
157 sorted_documents.append(doc)
158
159 return sorted_documents
160
161 def predict_batch(
162 self,
163 queries: List[str],
164 documents: Union[List[Document], List[List[Document]]],
165 top_k: Optional[int] = None,
166 batch_size: Optional[int] = None,
167 ) -> Union[List[Document], List[List[Document]]]:
168 """
169 Use loaded ranker model to re-rank the supplied lists of Documents.
170
171 Returns lists of Documents sorted by (desc.) similarity with the corresponding queries.
172
173
174 - If you provide a list containing a single query...
175
176 - ... and a single list of Documents, the single list of Documents will be re-ranked based on the
177 supplied query.
178 - ... and a list of lists of Documents, each list of Documents will be re-ranked individually based on the
179 supplied query.
180
181
182 - If you provide a list of multiple queries...
183
184 - ... you need to provide a list of lists of Documents. Each list of Documents will be re-ranked based on
185 its corresponding query.
186
187 :param queries: Single query string or list of queries
188 :param documents: Single list of Documents or list of lists of Documents to be reranked.
189 :param top_k: The maximum number of documents to return per Document list.
190 :param batch_size: Number of Documents to process at a time.
191 """
192 if top_k is None:
193 top_k = self.top_k
194
195 if batch_size is None:
196 batch_size = self.batch_size
197
198 number_of_docs, all_queries, all_docs, single_list_of_docs = self._preprocess_batch_queries_and_docs(
199 queries=queries, documents=documents
200 )
201
202 batches = self._get_batches(all_queries=all_queries, all_docs=all_docs, batch_size=batch_size)
203 preds = []
204 for cur_queries, cur_docs in batches:
205 features = self.transformer_tokenizer(
206 cur_queries, [doc.content for doc in cur_docs], padding=True, truncation=True, return_tensors="pt"
207 ).to(self.devices[0])
208
209 with torch.no_grad():
210 similarity_scores = self.transformer_model(**features).logits
211 preds.extend(similarity_scores)
212
213 logits_dim = similarity_scores.shape[1] # [batch_size, logits_dim]
214 if single_list_of_docs:
215 sorted_scores_and_documents = sorted(
216 zip(similarity_scores, documents),
217 key=lambda similarity_document_tuple:
218 # assume the last element in logits represents the `has_answer` label
219 similarity_document_tuple[0][-1] if logits_dim >= 2 else similarity_document_tuple[0],
220 reverse=True,
221 )
222
223 # is this step needed?
224 sorted_documents = [(score, doc) for score, doc in sorted_scores_and_documents if isinstance(doc, Document)]
225 sorted_documents_with_scores = self._add_scores_to_documents(sorted_documents[:top_k], logits_dim)
226
227 return sorted_documents_with_scores
228 else:
229 # Group predictions together
230 grouped_predictions = []
231 left_idx = 0
232 right_idx = 0
233 for number in number_of_docs:
234 right_idx = left_idx + number
235 grouped_predictions.append(similarity_scores[left_idx:right_idx])
236 left_idx = right_idx
237
238 result = []
239 for pred_group, doc_group in zip(grouped_predictions, documents):
240 sorted_scores_and_documents = sorted(
241 zip(pred_group, doc_group), # type: ignore
242 key=lambda similarity_document_tuple:
243 # assume the last element in logits represents the `has_answer` label
244 similarity_document_tuple[0][-1] if logits_dim >= 2 else similarity_document_tuple[0],
245 reverse=True,
246 )
247
248 # rank documents according to scores
249 sorted_documents = [
250 (score, doc) for score, doc in sorted_scores_and_documents if isinstance(doc, Document)
251 ]
252 sorted_documents_with_scores = self._add_scores_to_documents(sorted_documents[:top_k], logits_dim)
253
254 result.append(sorted_documents_with_scores)
255
256 return result
257
258 def _preprocess_batch_queries_and_docs(
259 self, queries: List[str], documents: Union[List[Document], List[List[Document]]]
260 ) -> Tuple[List[int], List[str], List[Document], bool]:
261 number_of_docs = []
262 all_queries = []
263 all_docs: List[Document] = []
264 single_list_of_docs = False
265
266 # Docs case 1: single list of Documents -> rerank single list of Documents based on single query
267 if len(documents) > 0 and isinstance(documents[0], Document):
268 if len(queries) != 1:
269 raise HaystackError("Number of queries must be 1 if a single list of Documents is provided.")
270 query = queries[0]
271 number_of_docs = [len(documents)]
272 all_queries = [query] * len(documents)
273 all_docs = documents # type: ignore
274 single_list_of_docs = True
275
276 # Docs case 2: list of lists of Documents -> rerank each list of Documents based on corresponding query
277 # If queries contains a single query, apply it to each list of Documents
278 if len(documents) > 0 and isinstance(documents[0], list):
279 if len(queries) == 1:
280 queries = queries * len(documents)
281 if len(queries) != len(documents):
282 raise HaystackError("Number of queries must be equal to number of provided Document lists.")
283 for query, cur_docs in zip(queries, documents):
284 if not isinstance(cur_docs, list):
285 raise HaystackError(f"cur_docs was of type {type(cur_docs)}, but expected a list of Documents.")
286 number_of_docs.append(len(cur_docs))
287 all_queries.extend([query] * len(cur_docs))
288 all_docs.extend(cur_docs)
289
290 return number_of_docs, all_queries, all_docs, single_list_of_docs
291
292 @staticmethod
293 def _get_batches(
294 all_queries: List[str], all_docs: List[Document], batch_size: Optional[int]
295 ) -> Iterator[Tuple[List[str], List[Document]]]:
296 if batch_size is None:
297 yield all_queries, all_docs
298 return
299 else:
300 for index in range(0, len(all_queries), batch_size):
301 yield all_queries[index : index + batch_size], all_docs[index : index + batch_size]
302
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/haystack/nodes/ranker/sentence_transformers.py b/haystack/nodes/ranker/sentence_transformers.py
--- a/haystack/nodes/ranker/sentence_transformers.py
+++ b/haystack/nodes/ranker/sentence_transformers.py
@@ -28,12 +28,14 @@
- directly get predictions via predict()
Usage example:
- ...
- retriever = BM25Retriever(document_store=document_store)
- ranker = SentenceTransformersRanker(model_name_or_path="cross-encoder/ms-marco-MiniLM-L-12-v2")
- p = Pipeline()
- p.add_node(component=retriever, name="ESRetriever", inputs=["Query"])
- p.add_node(component=ranker, name="Ranker", inputs=["ESRetriever"])
+
+ ```python
+ | retriever = BM25Retriever(document_store=document_store)
+ | ranker = SentenceTransformersRanker(model_name_or_path="cross-encoder/ms-marco-MiniLM-L-12-v2")
+ | p = Pipeline()
+ | p.add_node(component=retriever, name="ESRetriever", inputs=["Query"])
+ | p.add_node(component=ranker, name="Ranker", inputs=["ESRetriever"])
+ ```
"""
def __init__(
|
{"golden_diff": "diff --git a/haystack/nodes/ranker/sentence_transformers.py b/haystack/nodes/ranker/sentence_transformers.py\n--- a/haystack/nodes/ranker/sentence_transformers.py\n+++ b/haystack/nodes/ranker/sentence_transformers.py\n@@ -28,12 +28,14 @@\n - directly get predictions via predict()\n \n Usage example:\n- ...\n- retriever = BM25Retriever(document_store=document_store)\n- ranker = SentenceTransformersRanker(model_name_or_path=\"cross-encoder/ms-marco-MiniLM-L-12-v2\")\n- p = Pipeline()\n- p.add_node(component=retriever, name=\"ESRetriever\", inputs=[\"Query\"])\n- p.add_node(component=ranker, name=\"Ranker\", inputs=[\"ESRetriever\"])\n+\n+ ```python\n+ | retriever = BM25Retriever(document_store=document_store)\n+ | ranker = SentenceTransformersRanker(model_name_or_path=\"cross-encoder/ms-marco-MiniLM-L-12-v2\")\n+ | p = Pipeline()\n+ | p.add_node(component=retriever, name=\"ESRetriever\", inputs=[\"Query\"])\n+ | p.add_node(component=ranker, name=\"Ranker\", inputs=[\"ESRetriever\"])\n+ ```\n \"\"\"\n \n def __init__(\n", "issue": "Code snippet wrongly formatted in docs\n**Describe the bug**\r\nThe code snippet at:\r\n\r\nhttps://github.com/deepset-ai/haystack/blob/master/haystack/nodes/ranker/sentence_transformers.py#L32\r\n\r\nis rendered as plain text in the corresponding documentation page\r\n\r\nhttps://haystack.deepset.ai/reference/ranker\r\n\r\n\r\n**Expected behavior**\r\nThe snipped should be formatted as Python code\r\n\n", "before_files": [{"content": "from typing import List, Optional, Union, Tuple, Iterator, Any\nimport logging\nfrom pathlib import Path\n\nimport torch\nfrom torch.nn import DataParallel\nfrom transformers import AutoModelForSequenceClassification, AutoTokenizer\n\nfrom haystack.errors import HaystackError\nfrom haystack.schema import Document\nfrom haystack.nodes.ranker.base import BaseRanker\nfrom haystack.modeling.utils import initialize_device_settings\n\nlogger = logging.getLogger(__name__)\n\n\nclass SentenceTransformersRanker(BaseRanker):\n \"\"\"\n Sentence Transformer based pre-trained Cross-Encoder model for Document Re-ranking (https://huggingface.co/cross-encoder).\n Re-Ranking can be used on top of a retriever to boost the performance for document search. This is particularly useful if the retriever has a high recall but is bad in sorting the documents by relevance.\n\n SentenceTransformerRanker handles Cross-Encoder models\n - use a single logit as similarity score e.g. cross-encoder/ms-marco-MiniLM-L-12-v2\n - use two output logits (no_answer, has_answer) e.g. 
deepset/gbert-base-germandpr-reranking\n https://www.sbert.net/docs/pretrained-models/ce-msmarco.html#usage-with-transformers\n\n | With a SentenceTransformersRanker, you can:\n - directly get predictions via predict()\n\n Usage example:\n ...\n retriever = BM25Retriever(document_store=document_store)\n ranker = SentenceTransformersRanker(model_name_or_path=\"cross-encoder/ms-marco-MiniLM-L-12-v2\")\n p = Pipeline()\n p.add_node(component=retriever, name=\"ESRetriever\", inputs=[\"Query\"])\n p.add_node(component=ranker, name=\"Ranker\", inputs=[\"ESRetriever\"])\n \"\"\"\n\n def __init__(\n self,\n model_name_or_path: Union[str, Path],\n model_version: Optional[str] = None,\n top_k: int = 10,\n use_gpu: bool = True,\n devices: Optional[List[Union[str, torch.device]]] = None,\n batch_size: Optional[int] = None,\n scale_score: bool = True,\n ):\n \"\"\"\n :param model_name_or_path: Directory of a saved model or the name of a public model e.g.\n 'cross-encoder/ms-marco-MiniLM-L-12-v2'.\n See https://huggingface.co/cross-encoder for full list of available models\n :param model_version: The version of model to use from the HuggingFace model hub. Can be tag name, branch name, or commit hash.\n :param top_k: The maximum number of documents to return\n :param use_gpu: Whether to use all available GPUs or the CPU. Falls back on CPU if no GPU is available.\n :param devices: List of GPU (or CPU) devices, to limit inference to certain GPUs and not use all available ones\n The strings will be converted into pytorch devices, so use the string notation described here:\n https://pytorch.org/docs/stable/tensor_attributes.html?highlight=torch%20device#torch.torch.device\n (e.g. [\"cuda:0\"]).\n :param batch_size: Number of documents to process at a time.\n :param scale_score: The raw predictions will be transformed using a Sigmoid activation function in case the model\n only predicts a single label. For multi-label predictions, no scaling is applied. Set this\n to False if you do not want any scaling of the raw predictions.\n \"\"\"\n super().__init__()\n\n self.top_k = top_k\n\n if devices is not None:\n self.devices = [torch.device(device) for device in devices]\n else:\n self.devices, _ = initialize_device_settings(use_cuda=use_gpu, multi_gpu=True)\n\n self.transformer_model = AutoModelForSequenceClassification.from_pretrained(\n pretrained_model_name_or_path=model_name_or_path, revision=model_version\n )\n self.transformer_model.to(str(self.devices[0]))\n self.transformer_tokenizer = AutoTokenizer.from_pretrained(\n pretrained_model_name_or_path=model_name_or_path, revision=model_version\n )\n self.transformer_model.eval()\n\n # we use sigmoid activation function to scale the score in case there is only a single label\n # we do not apply any scaling when scale_score is set to False\n num_labels = self.transformer_model.num_labels\n self.activation_function: torch.nn.Module\n if num_labels == 1 and scale_score:\n self.activation_function = torch.nn.Sigmoid()\n else:\n self.activation_function = torch.nn.Identity()\n\n if len(self.devices) > 1:\n self.model = DataParallel(self.transformer_model, device_ids=self.devices)\n\n self.batch_size = batch_size\n\n def predict(self, query: str, documents: List[Document], top_k: Optional[int] = None) -> List[Document]:\n \"\"\"\n Use loaded ranker model to re-rank the supplied list of Document.\n\n Returns list of Document sorted by (desc.) 
similarity with the query.\n\n :param query: Query string\n :param documents: List of Document to be re-ranked\n :param top_k: The maximum number of documents to return\n :return: List of Document\n \"\"\"\n if top_k is None:\n top_k = self.top_k\n\n features = self.transformer_tokenizer(\n [query for doc in documents],\n [doc.content for doc in documents],\n padding=True,\n truncation=True,\n return_tensors=\"pt\",\n ).to(self.devices[0])\n\n # SentenceTransformerRanker uses:\n # 1. the logit as similarity score/answerable classification\n # 2. the logits as answerable classification (no_answer / has_answer)\n # https://www.sbert.net/docs/pretrained-models/ce-msmarco.html#usage-with-transformers\n with torch.no_grad():\n similarity_scores = self.transformer_model(**features).logits\n\n logits_dim = similarity_scores.shape[1] # [batch_size, logits_dim]\n sorted_scores_and_documents = sorted(\n zip(similarity_scores, documents),\n key=lambda similarity_document_tuple:\n # assume the last element in logits represents the `has_answer` label\n similarity_document_tuple[0][-1] if logits_dim >= 2 else similarity_document_tuple[0],\n reverse=True,\n )\n\n # add normalized scores to documents\n sorted_documents = self._add_scores_to_documents(sorted_scores_and_documents[:top_k], logits_dim)\n\n return sorted_documents\n\n def _add_scores_to_documents(\n self, sorted_scores_and_documents: List[Tuple[Any, Document]], logits_dim: int\n ) -> List[Document]:\n \"\"\"\n Normalize and add scores to retrieved result documents.\n\n :param sorted_scores_and_documents: List of score, Document Tuples.\n :param logits_dim: Dimensionality of the returned scores.\n \"\"\"\n sorted_documents = []\n for raw_score, doc in sorted_scores_and_documents:\n if logits_dim >= 2:\n score = self.activation_function(raw_score)[-1]\n else:\n score = self.activation_function(raw_score)[0]\n\n doc.score = score.detach().cpu().numpy().tolist()\n sorted_documents.append(doc)\n\n return sorted_documents\n\n def predict_batch(\n self,\n queries: List[str],\n documents: Union[List[Document], List[List[Document]]],\n top_k: Optional[int] = None,\n batch_size: Optional[int] = None,\n ) -> Union[List[Document], List[List[Document]]]:\n \"\"\"\n Use loaded ranker model to re-rank the supplied lists of Documents.\n\n Returns lists of Documents sorted by (desc.) similarity with the corresponding queries.\n\n\n - If you provide a list containing a single query...\n\n - ... and a single list of Documents, the single list of Documents will be re-ranked based on the\n supplied query.\n - ... and a list of lists of Documents, each list of Documents will be re-ranked individually based on the\n supplied query.\n\n\n - If you provide a list of multiple queries...\n\n - ... you need to provide a list of lists of Documents. 
Each list of Documents will be re-ranked based on\n its corresponding query.\n\n :param queries: Single query string or list of queries\n :param documents: Single list of Documents or list of lists of Documents to be reranked.\n :param top_k: The maximum number of documents to return per Document list.\n :param batch_size: Number of Documents to process at a time.\n \"\"\"\n if top_k is None:\n top_k = self.top_k\n\n if batch_size is None:\n batch_size = self.batch_size\n\n number_of_docs, all_queries, all_docs, single_list_of_docs = self._preprocess_batch_queries_and_docs(\n queries=queries, documents=documents\n )\n\n batches = self._get_batches(all_queries=all_queries, all_docs=all_docs, batch_size=batch_size)\n preds = []\n for cur_queries, cur_docs in batches:\n features = self.transformer_tokenizer(\n cur_queries, [doc.content for doc in cur_docs], padding=True, truncation=True, return_tensors=\"pt\"\n ).to(self.devices[0])\n\n with torch.no_grad():\n similarity_scores = self.transformer_model(**features).logits\n preds.extend(similarity_scores)\n\n logits_dim = similarity_scores.shape[1] # [batch_size, logits_dim]\n if single_list_of_docs:\n sorted_scores_and_documents = sorted(\n zip(similarity_scores, documents),\n key=lambda similarity_document_tuple:\n # assume the last element in logits represents the `has_answer` label\n similarity_document_tuple[0][-1] if logits_dim >= 2 else similarity_document_tuple[0],\n reverse=True,\n )\n\n # is this step needed?\n sorted_documents = [(score, doc) for score, doc in sorted_scores_and_documents if isinstance(doc, Document)]\n sorted_documents_with_scores = self._add_scores_to_documents(sorted_documents[:top_k], logits_dim)\n\n return sorted_documents_with_scores\n else:\n # Group predictions together\n grouped_predictions = []\n left_idx = 0\n right_idx = 0\n for number in number_of_docs:\n right_idx = left_idx + number\n grouped_predictions.append(similarity_scores[left_idx:right_idx])\n left_idx = right_idx\n\n result = []\n for pred_group, doc_group in zip(grouped_predictions, documents):\n sorted_scores_and_documents = sorted(\n zip(pred_group, doc_group), # type: ignore\n key=lambda similarity_document_tuple:\n # assume the last element in logits represents the `has_answer` label\n similarity_document_tuple[0][-1] if logits_dim >= 2 else similarity_document_tuple[0],\n reverse=True,\n )\n\n # rank documents according to scores\n sorted_documents = [\n (score, doc) for score, doc in sorted_scores_and_documents if isinstance(doc, Document)\n ]\n sorted_documents_with_scores = self._add_scores_to_documents(sorted_documents[:top_k], logits_dim)\n\n result.append(sorted_documents_with_scores)\n\n return result\n\n def _preprocess_batch_queries_and_docs(\n self, queries: List[str], documents: Union[List[Document], List[List[Document]]]\n ) -> Tuple[List[int], List[str], List[Document], bool]:\n number_of_docs = []\n all_queries = []\n all_docs: List[Document] = []\n single_list_of_docs = False\n\n # Docs case 1: single list of Documents -> rerank single list of Documents based on single query\n if len(documents) > 0 and isinstance(documents[0], Document):\n if len(queries) != 1:\n raise HaystackError(\"Number of queries must be 1 if a single list of Documents is provided.\")\n query = queries[0]\n number_of_docs = [len(documents)]\n all_queries = [query] * len(documents)\n all_docs = documents # type: ignore\n single_list_of_docs = True\n\n # Docs case 2: list of lists of Documents -> rerank each list of Documents based on corresponding 
query\n # If queries contains a single query, apply it to each list of Documents\n if len(documents) > 0 and isinstance(documents[0], list):\n if len(queries) == 1:\n queries = queries * len(documents)\n if len(queries) != len(documents):\n raise HaystackError(\"Number of queries must be equal to number of provided Document lists.\")\n for query, cur_docs in zip(queries, documents):\n if not isinstance(cur_docs, list):\n raise HaystackError(f\"cur_docs was of type {type(cur_docs)}, but expected a list of Documents.\")\n number_of_docs.append(len(cur_docs))\n all_queries.extend([query] * len(cur_docs))\n all_docs.extend(cur_docs)\n\n return number_of_docs, all_queries, all_docs, single_list_of_docs\n\n @staticmethod\n def _get_batches(\n all_queries: List[str], all_docs: List[Document], batch_size: Optional[int]\n ) -> Iterator[Tuple[List[str], List[Document]]]:\n if batch_size is None:\n yield all_queries, all_docs\n return\n else:\n for index in range(0, len(all_queries), batch_size):\n yield all_queries[index : index + batch_size], all_docs[index : index + batch_size]\n", "path": "haystack/nodes/ranker/sentence_transformers.py"}], "after_files": [{"content": "from typing import List, Optional, Union, Tuple, Iterator, Any\nimport logging\nfrom pathlib import Path\n\nimport torch\nfrom torch.nn import DataParallel\nfrom transformers import AutoModelForSequenceClassification, AutoTokenizer\n\nfrom haystack.errors import HaystackError\nfrom haystack.schema import Document\nfrom haystack.nodes.ranker.base import BaseRanker\nfrom haystack.modeling.utils import initialize_device_settings\n\nlogger = logging.getLogger(__name__)\n\n\nclass SentenceTransformersRanker(BaseRanker):\n \"\"\"\n Sentence Transformer based pre-trained Cross-Encoder model for Document Re-ranking (https://huggingface.co/cross-encoder).\n Re-Ranking can be used on top of a retriever to boost the performance for document search. This is particularly useful if the retriever has a high recall but is bad in sorting the documents by relevance.\n\n SentenceTransformerRanker handles Cross-Encoder models\n - use a single logit as similarity score e.g. cross-encoder/ms-marco-MiniLM-L-12-v2\n - use two output logits (no_answer, has_answer) e.g. deepset/gbert-base-germandpr-reranking\n https://www.sbert.net/docs/pretrained-models/ce-msmarco.html#usage-with-transformers\n\n | With a SentenceTransformersRanker, you can:\n - directly get predictions via predict()\n\n Usage example:\n\n ```python\n | retriever = BM25Retriever(document_store=document_store)\n | ranker = SentenceTransformersRanker(model_name_or_path=\"cross-encoder/ms-marco-MiniLM-L-12-v2\")\n | p = Pipeline()\n | p.add_node(component=retriever, name=\"ESRetriever\", inputs=[\"Query\"])\n | p.add_node(component=ranker, name=\"Ranker\", inputs=[\"ESRetriever\"])\n ```\n \"\"\"\n\n def __init__(\n self,\n model_name_or_path: Union[str, Path],\n model_version: Optional[str] = None,\n top_k: int = 10,\n use_gpu: bool = True,\n devices: Optional[List[Union[str, torch.device]]] = None,\n batch_size: Optional[int] = None,\n scale_score: bool = True,\n ):\n \"\"\"\n :param model_name_or_path: Directory of a saved model or the name of a public model e.g.\n 'cross-encoder/ms-marco-MiniLM-L-12-v2'.\n See https://huggingface.co/cross-encoder for full list of available models\n :param model_version: The version of model to use from the HuggingFace model hub. 
Can be tag name, branch name, or commit hash.\n :param top_k: The maximum number of documents to return\n :param use_gpu: Whether to use all available GPUs or the CPU. Falls back on CPU if no GPU is available.\n :param devices: List of GPU (or CPU) devices, to limit inference to certain GPUs and not use all available ones\n The strings will be converted into pytorch devices, so use the string notation described here:\n https://pytorch.org/docs/stable/tensor_attributes.html?highlight=torch%20device#torch.torch.device\n (e.g. [\"cuda:0\"]).\n :param batch_size: Number of documents to process at a time.\n :param scale_score: The raw predictions will be transformed using a Sigmoid activation function in case the model\n only predicts a single label. For multi-label predictions, no scaling is applied. Set this\n to False if you do not want any scaling of the raw predictions.\n \"\"\"\n super().__init__()\n\n self.top_k = top_k\n\n if devices is not None:\n self.devices = [torch.device(device) for device in devices]\n else:\n self.devices, _ = initialize_device_settings(use_cuda=use_gpu, multi_gpu=True)\n\n self.transformer_model = AutoModelForSequenceClassification.from_pretrained(\n pretrained_model_name_or_path=model_name_or_path, revision=model_version\n )\n self.transformer_model.to(str(self.devices[0]))\n self.transformer_tokenizer = AutoTokenizer.from_pretrained(\n pretrained_model_name_or_path=model_name_or_path, revision=model_version\n )\n self.transformer_model.eval()\n\n # we use sigmoid activation function to scale the score in case there is only a single label\n # we do not apply any scaling when scale_score is set to False\n num_labels = self.transformer_model.num_labels\n self.activation_function: torch.nn.Module\n if num_labels == 1 and scale_score:\n self.activation_function = torch.nn.Sigmoid()\n else:\n self.activation_function = torch.nn.Identity()\n\n if len(self.devices) > 1:\n self.model = DataParallel(self.transformer_model, device_ids=self.devices)\n\n self.batch_size = batch_size\n\n def predict(self, query: str, documents: List[Document], top_k: Optional[int] = None) -> List[Document]:\n \"\"\"\n Use loaded ranker model to re-rank the supplied list of Document.\n\n Returns list of Document sorted by (desc.) similarity with the query.\n\n :param query: Query string\n :param documents: List of Document to be re-ranked\n :param top_k: The maximum number of documents to return\n :return: List of Document\n \"\"\"\n if top_k is None:\n top_k = self.top_k\n\n features = self.transformer_tokenizer(\n [query for doc in documents],\n [doc.content for doc in documents],\n padding=True,\n truncation=True,\n return_tensors=\"pt\",\n ).to(self.devices[0])\n\n # SentenceTransformerRanker uses:\n # 1. the logit as similarity score/answerable classification\n # 2. 
the logits as answerable classification (no_answer / has_answer)\n # https://www.sbert.net/docs/pretrained-models/ce-msmarco.html#usage-with-transformers\n with torch.no_grad():\n similarity_scores = self.transformer_model(**features).logits\n\n logits_dim = similarity_scores.shape[1] # [batch_size, logits_dim]\n sorted_scores_and_documents = sorted(\n zip(similarity_scores, documents),\n key=lambda similarity_document_tuple:\n # assume the last element in logits represents the `has_answer` label\n similarity_document_tuple[0][-1] if logits_dim >= 2 else similarity_document_tuple[0],\n reverse=True,\n )\n\n # add normalized scores to documents\n sorted_documents = self._add_scores_to_documents(sorted_scores_and_documents[:top_k], logits_dim)\n\n return sorted_documents\n\n def _add_scores_to_documents(\n self, sorted_scores_and_documents: List[Tuple[Any, Document]], logits_dim: int\n ) -> List[Document]:\n \"\"\"\n Normalize and add scores to retrieved result documents.\n\n :param sorted_scores_and_documents: List of score, Document Tuples.\n :param logits_dim: Dimensionality of the returned scores.\n \"\"\"\n sorted_documents = []\n for raw_score, doc in sorted_scores_and_documents:\n if logits_dim >= 2:\n score = self.activation_function(raw_score)[-1]\n else:\n score = self.activation_function(raw_score)[0]\n\n doc.score = score.detach().cpu().numpy().tolist()\n sorted_documents.append(doc)\n\n return sorted_documents\n\n def predict_batch(\n self,\n queries: List[str],\n documents: Union[List[Document], List[List[Document]]],\n top_k: Optional[int] = None,\n batch_size: Optional[int] = None,\n ) -> Union[List[Document], List[List[Document]]]:\n \"\"\"\n Use loaded ranker model to re-rank the supplied lists of Documents.\n\n Returns lists of Documents sorted by (desc.) similarity with the corresponding queries.\n\n\n - If you provide a list containing a single query...\n\n - ... and a single list of Documents, the single list of Documents will be re-ranked based on the\n supplied query.\n - ... and a list of lists of Documents, each list of Documents will be re-ranked individually based on the\n supplied query.\n\n\n - If you provide a list of multiple queries...\n\n - ... you need to provide a list of lists of Documents. 
Each list of Documents will be re-ranked based on\n its corresponding query.\n\n :param queries: Single query string or list of queries\n :param documents: Single list of Documents or list of lists of Documents to be reranked.\n :param top_k: The maximum number of documents to return per Document list.\n :param batch_size: Number of Documents to process at a time.\n \"\"\"\n if top_k is None:\n top_k = self.top_k\n\n if batch_size is None:\n batch_size = self.batch_size\n\n number_of_docs, all_queries, all_docs, single_list_of_docs = self._preprocess_batch_queries_and_docs(\n queries=queries, documents=documents\n )\n\n batches = self._get_batches(all_queries=all_queries, all_docs=all_docs, batch_size=batch_size)\n preds = []\n for cur_queries, cur_docs in batches:\n features = self.transformer_tokenizer(\n cur_queries, [doc.content for doc in cur_docs], padding=True, truncation=True, return_tensors=\"pt\"\n ).to(self.devices[0])\n\n with torch.no_grad():\n similarity_scores = self.transformer_model(**features).logits\n preds.extend(similarity_scores)\n\n logits_dim = similarity_scores.shape[1] # [batch_size, logits_dim]\n if single_list_of_docs:\n sorted_scores_and_documents = sorted(\n zip(similarity_scores, documents),\n key=lambda similarity_document_tuple:\n # assume the last element in logits represents the `has_answer` label\n similarity_document_tuple[0][-1] if logits_dim >= 2 else similarity_document_tuple[0],\n reverse=True,\n )\n\n # is this step needed?\n sorted_documents = [(score, doc) for score, doc in sorted_scores_and_documents if isinstance(doc, Document)]\n sorted_documents_with_scores = self._add_scores_to_documents(sorted_documents[:top_k], logits_dim)\n\n return sorted_documents_with_scores\n else:\n # Group predictions together\n grouped_predictions = []\n left_idx = 0\n right_idx = 0\n for number in number_of_docs:\n right_idx = left_idx + number\n grouped_predictions.append(similarity_scores[left_idx:right_idx])\n left_idx = right_idx\n\n result = []\n for pred_group, doc_group in zip(grouped_predictions, documents):\n sorted_scores_and_documents = sorted(\n zip(pred_group, doc_group), # type: ignore\n key=lambda similarity_document_tuple:\n # assume the last element in logits represents the `has_answer` label\n similarity_document_tuple[0][-1] if logits_dim >= 2 else similarity_document_tuple[0],\n reverse=True,\n )\n\n # rank documents according to scores\n sorted_documents = [\n (score, doc) for score, doc in sorted_scores_and_documents if isinstance(doc, Document)\n ]\n sorted_documents_with_scores = self._add_scores_to_documents(sorted_documents[:top_k], logits_dim)\n\n result.append(sorted_documents_with_scores)\n\n return result\n\n def _preprocess_batch_queries_and_docs(\n self, queries: List[str], documents: Union[List[Document], List[List[Document]]]\n ) -> Tuple[List[int], List[str], List[Document], bool]:\n number_of_docs = []\n all_queries = []\n all_docs: List[Document] = []\n single_list_of_docs = False\n\n # Docs case 1: single list of Documents -> rerank single list of Documents based on single query\n if len(documents) > 0 and isinstance(documents[0], Document):\n if len(queries) != 1:\n raise HaystackError(\"Number of queries must be 1 if a single list of Documents is provided.\")\n query = queries[0]\n number_of_docs = [len(documents)]\n all_queries = [query] * len(documents)\n all_docs = documents # type: ignore\n single_list_of_docs = True\n\n # Docs case 2: list of lists of Documents -> rerank each list of Documents based on corresponding 
query\n # If queries contains a single query, apply it to each list of Documents\n if len(documents) > 0 and isinstance(documents[0], list):\n if len(queries) == 1:\n queries = queries * len(documents)\n if len(queries) != len(documents):\n raise HaystackError(\"Number of queries must be equal to number of provided Document lists.\")\n for query, cur_docs in zip(queries, documents):\n if not isinstance(cur_docs, list):\n raise HaystackError(f\"cur_docs was of type {type(cur_docs)}, but expected a list of Documents.\")\n number_of_docs.append(len(cur_docs))\n all_queries.extend([query] * len(cur_docs))\n all_docs.extend(cur_docs)\n\n return number_of_docs, all_queries, all_docs, single_list_of_docs\n\n @staticmethod\n def _get_batches(\n all_queries: List[str], all_docs: List[Document], batch_size: Optional[int]\n ) -> Iterator[Tuple[List[str], List[Document]]]:\n if batch_size is None:\n yield all_queries, all_docs\n return\n else:\n for index in range(0, len(all_queries), batch_size):\n yield all_queries[index : index + batch_size], all_docs[index : index + batch_size]\n", "path": "haystack/nodes/ranker/sentence_transformers.py"}]}
| 4,064 | 309 |
gh_patches_debug_36606
|
rasdani/github-patches
|
git_diff
|
electricitymaps__electricitymaps-contrib-3442
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TW production parser down
## Description
This is an automatic error report generated for Taiwan (TW).
Issues:
- No recent data found for `production` parser
## Suggestions
- Try running the parser locally using the command `poetry run test_parser TW production`
- <a href="https://kibana.electricitymap.org/app/kibana#/discover/10af54f0-0c4a-11e9-85c1-1d63df8c862c?_g=(refreshInterval:('$$hashKey':'object:232',display:'5%20minutes',pause:!f,section:2,value:300000),time:(from:now-24h,mode:quick,to:now))&_a=(columns:!(message,extra.key,level),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!t,index:'96f67170-0c49-11e9-85c1-1d63df8c862c',key:level,negate:!f,params:(query:ERROR,type:phrase),type:phrase,value:ERROR),query:(match:(level:(query:ERROR,type:phrase)))),('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'96f67170-0c49-11e9-85c1-1d63df8c862c',key:extra.key,negate:!f,params:(query:TW,type:phrase),type:phrase,value:TW),query:(match:(extra.key:(query:TW,type:phrase))))),index:'96f67170-0c49-11e9-85c1-1d63df8c862c',interval:auto,query:(language:lucene,query:''),sort:!('@timestamp',desc))">Explore the runtime logs</a>
You can see an overview of all parser issues [here](https://github.com/tmrowco/electricitymap-contrib/wiki/Parser-issues).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/TW.py`
Content:
```
1 #!/usr/bin/env python3
2 import arrow
3 import requests
4 import pandas
5 import dateutil
6
7
8 def fetch_production(zone_key='TW', session=None, target_datetime=None, logger=None) -> dict:
9 if target_datetime:
10 raise NotImplementedError('This parser is not yet able to parse past dates')
11
12 url = 'http://www.taipower.com.tw/d006/loadGraph/loadGraph/data/genary.txt'
13 s = session or requests.Session()
14 response = s.get(url)
15 data = response.json()
16
17 dumpDate = data['']
18 prodData = data['aaData']
19
20 tz = 'Asia/Taipei'
21 dumpDate = arrow.get(dumpDate, 'YYYY-MM-DD HH:mm').replace(tzinfo=dateutil.tz.gettz(tz))
22
23 objData = pandas.DataFrame(prodData)
24
25 objData.columns = ['fueltype', 'name', 'capacity', 'output', 'percentage',
26 'additional']
27
28 objData['fueltype'] = objData.fueltype.str.split('(').str[1]
29 objData['fueltype'] = objData.fueltype.str.split(')').str[0]
30 objData.drop('additional', axis=1, inplace=True)
31 objData.drop('percentage', axis=1, inplace=True)
32
33 objData['capacity'] = pandas.to_numeric(objData['capacity'], errors='coerce')
34 objData['output'] = pandas.to_numeric(objData['output'], errors='coerce')
35 production = pandas.DataFrame(objData.groupby('fueltype').sum())
36 production.columns = ['capacity', 'output']
37
38 coal_capacity = production.loc['Coal'].capacity + production.loc['IPP-Coal'].capacity
39 gas_capacity = production.loc['LNG'].capacity + production.loc['IPP-LNG'].capacity
40 oil_capacity = production.loc['Oil'].capacity + production.loc['Diesel'].capacity
41
42 coal_production = production.loc['Coal'].output + production.loc['IPP-Coal'].output
43 gas_production = production.loc['LNG'].output + production.loc['IPP-LNG'].output
44 oil_production = production.loc['Oil'].output + production.loc['Diesel'].output
45
46 # For storage, note that load will be negative, and generation positive.
47 # We require the opposite
48
49 returndata = {
50 'zoneKey': zone_key,
51 'datetime': dumpDate.datetime,
52 'production': {
53 'coal': coal_production,
54 'gas': gas_production,
55 'oil': oil_production,
56 'hydro': production.loc['Hydro'].output,
57 'nuclear': production.loc['Nuclear'].output,
58 'solar': production.loc['Solar'].output,
59 'wind': production.loc['Wind'].output,
60 'unknown': production.loc['Co-Gen'].output
61 },
62 'capacity': {
63 'coal': coal_capacity,
64 'gas': gas_capacity,
65 'oil': oil_capacity,
66 'hydro': production.loc['Hydro'].capacity,
67 'hydro storage':production.loc['Pumping Gen'].capacity,
68 'nuclear': production.loc['Nuclear'].capacity,
69 'solar': production.loc['Solar'].capacity,
70 'wind': production.loc['Wind'].capacity,
71 'unknown': production.loc['Co-Gen'].capacity
72 },
73 'storage': {
74 'hydro': -1 * production.loc['Pumping Load'].output - production.loc['Pumping Gen'].output
75 },
76 'source': 'taipower.com.tw'
77 }
78
79 return returndata
80
81
82 if __name__ == '__main__':
83 print(fetch_production())
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsers/TW.py b/parsers/TW.py
--- a/parsers/TW.py
+++ b/parsers/TW.py
@@ -1,8 +1,8 @@
#!/usr/bin/env python3
import arrow
-import requests
-import pandas
import dateutil
+import pandas as pd
+import requests
def fetch_production(zone_key='TW', session=None, target_datetime=None, logger=None) -> dict:
@@ -20,21 +20,27 @@
tz = 'Asia/Taipei'
dumpDate = arrow.get(dumpDate, 'YYYY-MM-DD HH:mm').replace(tzinfo=dateutil.tz.gettz(tz))
- objData = pandas.DataFrame(prodData)
+ objData = pd.DataFrame(prodData)
- objData.columns = ['fueltype', 'name', 'capacity', 'output', 'percentage',
- 'additional']
+ columns = ['fueltype', 'additional_1', 'name', 'capacity', 'output', 'percentage', 'additional_2']
+ assert len(objData.iloc[0]) == len(columns), "number of input columns changed"
+ objData.columns = columns
objData['fueltype'] = objData.fueltype.str.split('(').str[1]
objData['fueltype'] = objData.fueltype.str.split(')').str[0]
- objData.drop('additional', axis=1, inplace=True)
- objData.drop('percentage', axis=1, inplace=True)
+ objData.loc[:,['capacity', 'output']] = objData[['capacity', 'output']].apply(pd.to_numeric, errors='coerce')
+ assert not objData.capacity.isna().all(), "capacity data is entirely NaN - input column order may have changed"
+ assert not objData.output.isna().all(), "output data is entirely NaN - input column order may have changed"
- objData['capacity'] = pandas.to_numeric(objData['capacity'], errors='coerce')
- objData['output'] = pandas.to_numeric(objData['output'], errors='coerce')
- production = pandas.DataFrame(objData.groupby('fueltype').sum())
+ objData.drop(columns=['additional_1', 'name', 'additional_2', 'percentage'], axis=1, inplace=True)
+ # summing because items in returned object are for each power plant and operational units
+ production = pd.DataFrame(objData.groupby('fueltype').sum())
production.columns = ['capacity', 'output']
+ # check output values coincide with total capacity by fuel type
+ check_values = production.output <= production.capacity
+ assert check_values.loc[~check_values.index.isin(["Co-Gen"])].all(), "output > capacity" # HACK: Co-Gen capacity is underestimated
+
coal_capacity = production.loc['Coal'].capacity + production.loc['IPP-Coal'].capacity
gas_capacity = production.loc['LNG'].capacity + production.loc['IPP-LNG'].capacity
oil_capacity = production.loc['Oil'].capacity + production.loc['Diesel'].capacity
|
{"golden_diff": "diff --git a/parsers/TW.py b/parsers/TW.py\n--- a/parsers/TW.py\n+++ b/parsers/TW.py\n@@ -1,8 +1,8 @@\n #!/usr/bin/env python3\n import arrow\n-import requests\n-import pandas\n import dateutil\n+import pandas as pd\n+import requests\n \n \n def fetch_production(zone_key='TW', session=None, target_datetime=None, logger=None) -> dict:\n@@ -20,21 +20,27 @@\n tz = 'Asia/Taipei'\n dumpDate = arrow.get(dumpDate, 'YYYY-MM-DD HH:mm').replace(tzinfo=dateutil.tz.gettz(tz))\n \n- objData = pandas.DataFrame(prodData)\n+ objData = pd.DataFrame(prodData)\n \n- objData.columns = ['fueltype', 'name', 'capacity', 'output', 'percentage',\n- 'additional']\n+ columns = ['fueltype', 'additional_1', 'name', 'capacity', 'output', 'percentage', 'additional_2']\n+ assert len(objData.iloc[0]) == len(columns), \"number of input columns changed\"\n+ objData.columns = columns\n \n objData['fueltype'] = objData.fueltype.str.split('(').str[1]\n objData['fueltype'] = objData.fueltype.str.split(')').str[0]\n- objData.drop('additional', axis=1, inplace=True)\n- objData.drop('percentage', axis=1, inplace=True)\n+ objData.loc[:,['capacity', 'output']] = objData[['capacity', 'output']].apply(pd.to_numeric, errors='coerce')\n+ assert not objData.capacity.isna().all(), \"capacity data is entirely NaN - input column order may have changed\"\n+ assert not objData.output.isna().all(), \"output data is entirely NaN - input column order may have changed\"\n \n- objData['capacity'] = pandas.to_numeric(objData['capacity'], errors='coerce')\n- objData['output'] = pandas.to_numeric(objData['output'], errors='coerce')\n- production = pandas.DataFrame(objData.groupby('fueltype').sum())\n+ objData.drop(columns=['additional_1', 'name', 'additional_2', 'percentage'], axis=1, inplace=True)\n+ # summing because items in returned object are for each power plant and operational units\n+ production = pd.DataFrame(objData.groupby('fueltype').sum())\n production.columns = ['capacity', 'output']\n \n+ # check output values coincide with total capacity by fuel type\n+ check_values = production.output <= production.capacity\n+ assert check_values.loc[~check_values.index.isin([\"Co-Gen\"])].all(), \"output > capacity\" # HACK: Co-Gen capacity is underestimated\n+\n coal_capacity = production.loc['Coal'].capacity + production.loc['IPP-Coal'].capacity\n gas_capacity = production.loc['LNG'].capacity + production.loc['IPP-LNG'].capacity\n oil_capacity = production.loc['Oil'].capacity + production.loc['Diesel'].capacity\n", "issue": "TW production parser down\n## Description\n\nThis is an automatic error report generated for Taiwan (TW).\n\nIssues:\n- No recent data found for `production` parser\n\n## Suggestions\n- Try running the parser locally using the command `poetry run test_parser TW production`\n- <a 
href=\"https://kibana.electricitymap.org/app/kibana#/discover/10af54f0-0c4a-11e9-85c1-1d63df8c862c?_g=(refreshInterval:('$$hashKey':'object:232',display:'5%20minutes',pause:!f,section:2,value:300000),time:(from:now-24h,mode:quick,to:now))&_a=(columns:!(message,extra.key,level),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!t,index:'96f67170-0c49-11e9-85c1-1d63df8c862c',key:level,negate:!f,params:(query:ERROR,type:phrase),type:phrase,value:ERROR),query:(match:(level:(query:ERROR,type:phrase)))),('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'96f67170-0c49-11e9-85c1-1d63df8c862c',key:extra.key,negate:!f,params:(query:TW,type:phrase),type:phrase,value:TW),query:(match:(extra.key:(query:TW,type:phrase))))),index:'96f67170-0c49-11e9-85c1-1d63df8c862c',interval:auto,query:(language:lucene,query:''),sort:!('@timestamp',desc))\">Explore the runtime logs</a>\n\nYou can see an overview of all parser issues [here](https://github.com/tmrowco/electricitymap-contrib/wiki/Parser-issues).\n\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport arrow\nimport requests\nimport pandas\nimport dateutil\n\n\ndef fetch_production(zone_key='TW', session=None, target_datetime=None, logger=None) -> dict:\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n url = 'http://www.taipower.com.tw/d006/loadGraph/loadGraph/data/genary.txt'\n s = session or requests.Session()\n response = s.get(url)\n data = response.json()\n\n dumpDate = data['']\n prodData = data['aaData']\n\n tz = 'Asia/Taipei'\n dumpDate = arrow.get(dumpDate, 'YYYY-MM-DD HH:mm').replace(tzinfo=dateutil.tz.gettz(tz))\n\n objData = pandas.DataFrame(prodData)\n\n objData.columns = ['fueltype', 'name', 'capacity', 'output', 'percentage',\n 'additional']\n\n objData['fueltype'] = objData.fueltype.str.split('(').str[1]\n objData['fueltype'] = objData.fueltype.str.split(')').str[0]\n objData.drop('additional', axis=1, inplace=True)\n objData.drop('percentage', axis=1, inplace=True)\n\n objData['capacity'] = pandas.to_numeric(objData['capacity'], errors='coerce')\n objData['output'] = pandas.to_numeric(objData['output'], errors='coerce')\n production = pandas.DataFrame(objData.groupby('fueltype').sum())\n production.columns = ['capacity', 'output']\n\n coal_capacity = production.loc['Coal'].capacity + production.loc['IPP-Coal'].capacity\n gas_capacity = production.loc['LNG'].capacity + production.loc['IPP-LNG'].capacity\n oil_capacity = production.loc['Oil'].capacity + production.loc['Diesel'].capacity\n\n coal_production = production.loc['Coal'].output + production.loc['IPP-Coal'].output\n gas_production = production.loc['LNG'].output + production.loc['IPP-LNG'].output\n oil_production = production.loc['Oil'].output + production.loc['Diesel'].output\n\n # For storage, note that load will be negative, and generation positive.\n # We require the opposite\n\n returndata = {\n 'zoneKey': zone_key,\n 'datetime': dumpDate.datetime,\n 'production': {\n 'coal': coal_production,\n 'gas': gas_production,\n 'oil': oil_production,\n 'hydro': production.loc['Hydro'].output,\n 'nuclear': production.loc['Nuclear'].output,\n 'solar': production.loc['Solar'].output,\n 'wind': production.loc['Wind'].output,\n 'unknown': production.loc['Co-Gen'].output\n },\n 'capacity': {\n 'coal': coal_capacity,\n 'gas': gas_capacity,\n 'oil': oil_capacity,\n 'hydro': production.loc['Hydro'].capacity,\n 'hydro storage':production.loc['Pumping Gen'].capacity,\n 'nuclear': 
production.loc['Nuclear'].capacity,\n 'solar': production.loc['Solar'].capacity,\n 'wind': production.loc['Wind'].capacity,\n 'unknown': production.loc['Co-Gen'].capacity\n },\n 'storage': {\n 'hydro': -1 * production.loc['Pumping Load'].output - production.loc['Pumping Gen'].output\n },\n 'source': 'taipower.com.tw'\n }\n\n return returndata\n\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/TW.py"}], "after_files": [{"content": "#!/usr/bin/env python3\nimport arrow\nimport dateutil\nimport pandas as pd\nimport requests\n\n\ndef fetch_production(zone_key='TW', session=None, target_datetime=None, logger=None) -> dict:\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n url = 'http://www.taipower.com.tw/d006/loadGraph/loadGraph/data/genary.txt'\n s = session or requests.Session()\n response = s.get(url)\n data = response.json()\n\n dumpDate = data['']\n prodData = data['aaData']\n\n tz = 'Asia/Taipei'\n dumpDate = arrow.get(dumpDate, 'YYYY-MM-DD HH:mm').replace(tzinfo=dateutil.tz.gettz(tz))\n\n objData = pd.DataFrame(prodData)\n\n columns = ['fueltype', 'additional_1', 'name', 'capacity', 'output', 'percentage', 'additional_2']\n assert len(objData.iloc[0]) == len(columns), \"number of input columns changed\"\n objData.columns = columns\n\n objData['fueltype'] = objData.fueltype.str.split('(').str[1]\n objData['fueltype'] = objData.fueltype.str.split(')').str[0]\n objData.loc[:,['capacity', 'output']] = objData[['capacity', 'output']].apply(pd.to_numeric, errors='coerce')\n assert not objData.capacity.isna().all(), \"capacity data is entirely NaN - input column order may have changed\"\n assert not objData.output.isna().all(), \"output data is entirely NaN - input column order may have changed\"\n\n objData.drop(columns=['additional_1', 'name', 'additional_2', 'percentage'], axis=1, inplace=True)\n # summing because items in returned object are for each power plant and operational units\n production = pd.DataFrame(objData.groupby('fueltype').sum())\n production.columns = ['capacity', 'output']\n\n # check output values coincide with total capacity by fuel type\n check_values = production.output <= production.capacity\n assert check_values.loc[~check_values.index.isin([\"Co-Gen\"])].all(), \"output > capacity\" # HACK: Co-Gen capacity is underestimated\n\n coal_capacity = production.loc['Coal'].capacity + production.loc['IPP-Coal'].capacity\n gas_capacity = production.loc['LNG'].capacity + production.loc['IPP-LNG'].capacity\n oil_capacity = production.loc['Oil'].capacity + production.loc['Diesel'].capacity\n\n coal_production = production.loc['Coal'].output + production.loc['IPP-Coal'].output\n gas_production = production.loc['LNG'].output + production.loc['IPP-LNG'].output\n oil_production = production.loc['Oil'].output + production.loc['Diesel'].output\n\n # For storage, note that load will be negative, and generation positive.\n # We require the opposite\n\n returndata = {\n 'zoneKey': zone_key,\n 'datetime': dumpDate.datetime,\n 'production': {\n 'coal': coal_production,\n 'gas': gas_production,\n 'oil': oil_production,\n 'hydro': production.loc['Hydro'].output,\n 'nuclear': production.loc['Nuclear'].output,\n 'solar': production.loc['Solar'].output,\n 'wind': production.loc['Wind'].output,\n 'unknown': production.loc['Co-Gen'].output\n },\n 'capacity': {\n 'coal': coal_capacity,\n 'gas': gas_capacity,\n 'oil': oil_capacity,\n 'hydro': production.loc['Hydro'].capacity,\n 'hydro 
storage':production.loc['Pumping Gen'].capacity,\n 'nuclear': production.loc['Nuclear'].capacity,\n 'solar': production.loc['Solar'].capacity,\n 'wind': production.loc['Wind'].capacity,\n 'unknown': production.loc['Co-Gen'].capacity\n },\n 'storage': {\n 'hydro': -1 * production.loc['Pumping Load'].output - production.loc['Pumping Gen'].output\n },\n 'source': 'taipower.com.tw'\n }\n\n return returndata\n\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/TW.py"}]}
| 1,655 | 665 |
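The accepted patch for the Taiwan parser above hardens the code against upstream format changes: it asserts the expected column count, coerces the numeric columns with `errors='coerce'`, and fails loudly if a coerced column comes back entirely NaN (a sign the column order changed). The snippet below is a self-contained sketch of that defensive-parsing pattern; the sample rows are invented stand-ins for the `aaData` entries and nothing here talks to the Taipower feed.

```python
import pandas as pd

# Invented rows shaped like the genary.txt "aaData" entries in the record above.
SAMPLE_ROWS = [
    ["Coal(Coal)", "x", "Plant A", "550", "500.0", "90%", ""],
    ["LNG(LNG)", "x", "Plant B", "400", "", "0%", ""],   # blank output -> NaN after coercion
    ["Wind(Wind)", "x", "Plant C", "120", "80.5", "67%", ""],
]

COLUMNS = ["fueltype", "additional_1", "name", "capacity", "output", "percentage", "additional_2"]


def parse_rows(rows) -> pd.DataFrame:
    df = pd.DataFrame(rows)
    # Fail loudly if the upstream source adds or removes columns.
    assert len(df.columns) == len(COLUMNS), "number of input columns changed"
    df.columns = COLUMNS

    # Keep only the fuel name inside the parentheses, e.g. "Coal(Coal)" -> "Coal".
    df["fueltype"] = df["fueltype"].str.split("(").str[1].str.split(")").str[0]

    # Coerce numerics; unparsable cells become NaN instead of raising.
    df[["capacity", "output"]] = df[["capacity", "output"]].apply(pd.to_numeric, errors="coerce")

    # An entirely-NaN column suggests the column order changed upstream.
    assert not df["capacity"].isna().all(), "capacity entirely NaN - column order may have changed"
    assert not df["output"].isna().all(), "output entirely NaN - column order may have changed"
    return df[["fueltype", "capacity", "output"]]


if __name__ == "__main__":
    print(parse_rows(SAMPLE_ROWS).groupby("fueltype").sum())
```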
gh_patches_debug_9786
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-430
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to add Table of Contents to docs
When I build a markdown file containing the following information with, `mkdocs build --clean`, mkdocs throws: `AttributeError: 'Markdown' object has no attribute 'toc'`. Adding `[TOC]` like this was working before, but for some reason its throwing an exception now. I'm running version `0.11.1`.
Markdown file:
``` markdown
For api overview and usages, check out [this page](overview.md).
[TOC]
Auth
=================================================
## Check if user is registered
`POST` `/auth/is_registered`
**paramaters**
- `email`
## Login
`POST` `/auth`
**Parameters**
- `email`
- `password`
**Response**
The response will be something like this:
```
Stack Trace:
``` bash
Traceback (most recent call last):
"/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/Users/administrator/dev/meet-web/venv/lib/python2.7/site-packages/watchdog/observers/api.py", line 199, in run
self.dispatch_events(self.event_queue, self.timeout)
File "/Users/administrator/dev/meet-web/venv/lib/python2.7/site-packages/watchdog/observers/api.py", line 368, in dispatch_events
handler.dispatch(event)
File "/Users/administrator/dev/meet-web/venv/lib/python2.7/site-packages/watchdog/events.py", line 322, in dispatch
self.on_any_event(event)
File "/Users/administrator/dev/meet-web/venv/lib/python2.7/site-packages/mkdocs/serve.py", line 28, in on_any_event
build(config, live_server=True)
File "/Users/administrator/dev/meet-web/venv/lib/python2.7/site-packages/mkdocs/build.py", line 223, in build
build_pages(config)
File "/Users/administrator/dev/meet-web/venv/lib/python2.7/site-packages/mkdocs/build.py", line 170, in build_pages
extensions=config['markdown_extensions'], strict=config['strict']
File "/Users/administrator/dev/meet-web/venv/lib/python2.7/site-packages/mkdocs/build.py", line 36, in convert_markdown
toc_html = md.toc
AttributeError: 'Markdown' object has no attribute 'toc'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from __future__ import print_function
5 from setuptools import setup
6 import re
7 import os
8 import sys
9
10
11 name = 'mkdocs'
12 package = 'mkdocs'
13 description = 'Project documentation with Markdown.'
14 url = 'http://www.mkdocs.org'
15 author = 'Tom Christie'
16 author_email = '[email protected]'
17 license = 'BSD'
18 install_requires = [
19 'Jinja2>=2.7.1',
20 'Markdown>=2.3.1,<2.5',
21 'PyYAML>=3.10',
22 'watchdog>=0.7.0',
23 'ghp-import>=0.4.1'
24 ]
25
26 long_description = (
27 "MkDocs is a fast, simple and downright gorgeous static site generator "
28 "that's geared towards building project documentation. Documentation "
29 "source files are written in Markdown, and configured with a single YAML "
30 "configuration file."
31 )
32
33
34 def get_version(package):
35 """
36 Return package version as listed in `__version__` in `init.py`.
37 """
38 init_py = open(os.path.join(package, '__init__.py')).read()
39 return re.search("^__version__ = ['\"]([^'\"]+)['\"]", init_py, re.MULTILINE).group(1)
40
41
42 def get_packages(package):
43 """
44 Return root package and all sub-packages.
45 """
46 return [dirpath
47 for dirpath, dirnames, filenames in os.walk(package)
48 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
49
50
51 def get_package_data(package):
52 """
53 Return all files under the root package, that are not in a
54 package themselves.
55 """
56 walk = [(dirpath.replace(package + os.sep, '', 1), filenames)
57 for dirpath, dirnames, filenames in os.walk(package)
58 if not os.path.exists(os.path.join(dirpath, '__init__.py'))]
59
60 filepaths = []
61 for base, filenames in walk:
62 filepaths.extend([os.path.join(base, filename)
63 for filename in filenames])
64 return {package: filepaths}
65
66
67 if sys.argv[-1] == 'publish':
68 os.system("python setup.py sdist upload")
69 args = {'version': get_version(package)}
70 print("You probably want to also tag the version now:")
71 print(" git tag -a %(version)s -m 'version %(version)s'" % args)
72 print(" git push --tags")
73 sys.exit()
74
75
76 setup(
77 name=name,
78 version=get_version(package),
79 url=url,
80 license=license,
81 description=description,
82 long_description=long_description,
83 author=author,
84 author_email=author_email,
85 packages=get_packages(package),
86 package_data=get_package_data(package),
87 install_requires=install_requires,
88 entry_points={
89 'console_scripts': [
90 'mkdocs = mkdocs.main:run_main',
91 ],
92 },
93 classifiers=[
94 'Development Status :: 5 - Production/Stable',
95 'Environment :: Console',
96 'Environment :: Web Environment',
97 'Intended Audience :: Developers',
98 'License :: OSI Approved :: BSD License',
99 'Operating System :: OS Independent',
100 'Programming Language :: Python',
101 'Programming Language :: Python :: 2',
102 'Programming Language :: Python :: 2.6',
103 'Programming Language :: Python :: 2.7',
104 'Programming Language :: Python :: 3',
105 'Programming Language :: Python :: 3.3',
106 'Programming Language :: Python :: 3.4',
107 'Topic :: Documentation',
108 'Topic :: Text Processing',
109 ]
110 )
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -7,6 +7,8 @@
import os
import sys
+PY26 = sys.version_info[:2] == (2, 6)
+
name = 'mkdocs'
package = 'mkdocs'
@@ -16,11 +18,11 @@
author_email = '[email protected]'
license = 'BSD'
install_requires = [
+ 'ghp-import>=0.4.1',
'Jinja2>=2.7.1',
- 'Markdown>=2.3.1,<2.5',
+ 'Markdown>=2.3.1,<2.5' if PY26 else 'Markdown>=2.3.1',
'PyYAML>=3.10',
'watchdog>=0.7.0',
- 'ghp-import>=0.4.1'
]
long_description = (
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -7,6 +7,8 @@\n import os\n import sys\n \n+PY26 = sys.version_info[:2] == (2, 6)\n+\n \n name = 'mkdocs'\n package = 'mkdocs'\n@@ -16,11 +18,11 @@\n author_email = '[email protected]'\n license = 'BSD'\n install_requires = [\n+ 'ghp-import>=0.4.1',\n 'Jinja2>=2.7.1',\n- 'Markdown>=2.3.1,<2.5',\n+ 'Markdown>=2.3.1,<2.5' if PY26 else 'Markdown>=2.3.1',\n 'PyYAML>=3.10',\n 'watchdog>=0.7.0',\n- 'ghp-import>=0.4.1'\n ]\n \n long_description = (\n", "issue": "Unable to add Table of Contents to docs\nWhen I build a markdown file containing the following information with, `mkdocs build --clean`, mkdocs throws: `AttributeError: 'Markdown' object has no attribute 'toc'`. Adding `[TOC]` like this was working before, but for some reason its throwing an exception now. I'm running version `0.11.1`.\n\nMarkdown file:\n\n``` markdown\nFor api overview and usages, check out [this page](overview.md).\n\n[TOC]\n\nAuth\n=================================================\n\n## Check if user is registered\n\n`POST` `/auth/is_registered`\n\n**paramaters**\n\n- `email`\n\n## Login\n\n`POST` `/auth`\n\n**Parameters**\n\n- `email`\n- `password`\n\n**Response**\n\nThe response will be something like this:\n```\n\nStack Trace:\n\n``` bash\nTraceback (most recent call last):\n\"/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py\", line 810, in __bootstrap_inner\n self.run()\n File \"/Users/administrator/dev/meet-web/venv/lib/python2.7/site-packages/watchdog/observers/api.py\", line 199, in run\n self.dispatch_events(self.event_queue, self.timeout)\n File \"/Users/administrator/dev/meet-web/venv/lib/python2.7/site-packages/watchdog/observers/api.py\", line 368, in dispatch_events\n handler.dispatch(event)\n File \"/Users/administrator/dev/meet-web/venv/lib/python2.7/site-packages/watchdog/events.py\", line 322, in dispatch\n self.on_any_event(event)\n File \"/Users/administrator/dev/meet-web/venv/lib/python2.7/site-packages/mkdocs/serve.py\", line 28, in on_any_event\n build(config, live_server=True)\n File \"/Users/administrator/dev/meet-web/venv/lib/python2.7/site-packages/mkdocs/build.py\", line 223, in build\n build_pages(config)\n File \"/Users/administrator/dev/meet-web/venv/lib/python2.7/site-packages/mkdocs/build.py\", line 170, in build_pages\n extensions=config['markdown_extensions'], strict=config['strict']\n File \"/Users/administrator/dev/meet-web/venv/lib/python2.7/site-packages/mkdocs/build.py\", line 36, in convert_markdown\n toc_html = md.toc\nAttributeError: 'Markdown' object has no attribute 'toc'\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\n\nname = 'mkdocs'\npackage = 'mkdocs'\ndescription = 'Project documentation with Markdown.'\nurl = 'http://www.mkdocs.org'\nauthor = 'Tom Christie'\nauthor_email = '[email protected]'\nlicense = 'BSD'\ninstall_requires = [\n 'Jinja2>=2.7.1',\n 'Markdown>=2.3.1,<2.5',\n 'PyYAML>=3.10',\n 'watchdog>=0.7.0',\n 'ghp-import>=0.4.1'\n]\n\nlong_description = (\n \"MkDocs is a fast, simple and downright gorgeous static site generator \"\n \"that's geared towards building project documentation. 
Documentation \"\n \"source files are written in Markdown, and configured with a single YAML \"\n \"configuration file.\"\n)\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"^__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py, re.MULTILINE).group(1)\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\ndef get_package_data(package):\n \"\"\"\n Return all files under the root package, that are not in a\n package themselves.\n \"\"\"\n walk = [(dirpath.replace(package + os.sep, '', 1), filenames)\n for dirpath, dirnames, filenames in os.walk(package)\n if not os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n filepaths = []\n for base, filenames in walk:\n filepaths.extend([os.path.join(base, filename)\n for filename in filenames])\n return {package: filepaths}\n\n\nif sys.argv[-1] == 'publish':\n os.system(\"python setup.py sdist upload\")\n args = {'version': get_version(package)}\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a %(version)s -m 'version %(version)s'\" % args)\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=name,\n version=get_version(package),\n url=url,\n license=license,\n description=description,\n long_description=long_description,\n author=author,\n author_email=author_email,\n packages=get_packages(package),\n package_data=get_package_data(package),\n install_requires=install_requires,\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.main:run_main',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\nPY26 = sys.version_info[:2] == (2, 6)\n\n\nname = 'mkdocs'\npackage = 'mkdocs'\ndescription = 'Project documentation with Markdown.'\nurl = 'http://www.mkdocs.org'\nauthor = 'Tom Christie'\nauthor_email = '[email protected]'\nlicense = 'BSD'\ninstall_requires = [\n 'ghp-import>=0.4.1',\n 'Jinja2>=2.7.1',\n 'Markdown>=2.3.1,<2.5' if PY26 else 'Markdown>=2.3.1',\n 'PyYAML>=3.10',\n 'watchdog>=0.7.0',\n]\n\nlong_description = (\n \"MkDocs is a fast, simple and downright gorgeous static site generator \"\n \"that's geared towards building project documentation. 
Documentation \"\n \"source files are written in Markdown, and configured with a single YAML \"\n \"configuration file.\"\n)\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"^__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py, re.MULTILINE).group(1)\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\ndef get_package_data(package):\n \"\"\"\n Return all files under the root package, that are not in a\n package themselves.\n \"\"\"\n walk = [(dirpath.replace(package + os.sep, '', 1), filenames)\n for dirpath, dirnames, filenames in os.walk(package)\n if not os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n filepaths = []\n for base, filenames in walk:\n filepaths.extend([os.path.join(base, filename)\n for filename in filenames])\n return {package: filepaths}\n\n\nif sys.argv[-1] == 'publish':\n os.system(\"python setup.py sdist upload\")\n args = {'version': get_version(package)}\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a %(version)s -m 'version %(version)s'\" % args)\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=name,\n version=get_version(package),\n url=url,\n license=license,\n description=description,\n long_description=long_description,\n author=author,\n author_email=author_email,\n packages=get_packages(package),\n package_data=get_package_data(package),\n install_requires=install_requires,\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.main:run_main',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ]\n)\n", "path": "setup.py"}]}
| 1,822 | 212 |
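The accepted patch for the mkdocs record above keeps the `Markdown<2.5` ceiling only when running on Python 2.6 and lets every other interpreter take newer Markdown releases. A minimal sketch of that conditional-requirements pattern follows; the helper name is invented, the package pins are copied from the record, and in modern packaging the same intent is normally expressed with PEP 508 environment markers rather than a runtime check.

```python
import sys
from typing import List


def build_install_requires() -> List[str]:
    """Return pinned requirements, keeping the Markdown ceiling only on Python 2.6."""
    py26 = sys.version_info[:2] == (2, 6)
    return [
        "ghp-import>=0.4.1",
        "Jinja2>=2.7.1",
        # Keep the <2.5 ceiling only on Python 2.6, matching the accepted patch above.
        "Markdown>=2.3.1,<2.5" if py26 else "Markdown>=2.3.1",
        "PyYAML>=3.10",
        "watchdog>=0.7.0",
    ]


if __name__ == "__main__":
    print(build_install_requires())
```

Written declaratively, the equivalent would be something like `Markdown>=2.3.1,<2.5; python_version == "2.6"` together with `Markdown>=2.3.1; python_version != "2.6"`.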
gh_patches_debug_7033
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-1942
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Organisations list gives timeout
## Test plan
The organisations list should not give a timeout. Since this only happened on Live, it is hard to debug.
## Sentry
See http://sentry.support.akvo-ops.org/rsr/live/group/742/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rsr/views/organisation.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Akvo RSR is covered by the GNU Affero General Public License.
4
5 See more details in the license.txt file located at the root folder of the
6 Akvo RSR module. For additional details on the GNU license please
7 see < http://www.gnu.org/licenses/agpl.html >.
8 """
9
10 from django.db.models import Prefetch
11 from django.db.models import Count
12 from django.shortcuts import get_object_or_404, render
13
14 from ..filters import location_choices, OrganisationFilter, remove_empty_querydict_items
15 from ..models import Employment, Organisation, Project, ProjectUpdate
16 from ...utils import pagination, filter_query_string
17 from .utils import apply_keywords, org_projects, show_filter_class
18
19 ###############################################################################
20 # Organisation directory
21 ###############################################################################
22
23
24 def _public_projects():
25 """Return all public projects."""
26 return Project.objects.public().published().select_related('partners').order_by('-id')
27
28
29 def _page_organisations(page):
30 """Dig out the list or organisations to use."""
31 projects = org_projects(page.organisation) if page.partner_projects else _public_projects()
32 keyword_projects = apply_keywords(page, projects)
33 return keyword_projects.all_partners()
34
35
36 def _organisation_directory_coll(request):
37 """Dig out and pass correct organisations to the view."""
38 page = request.rsr_page
39 if not page:
40 return Organisation.objects.all()
41 return _page_organisations(page)
42
43
44 def directory(request):
45 """The Organisation list view."""
46 qs = remove_empty_querydict_items(request.GET)
47
48 # Set show_filters to "in" if any filter is selected
49 filter_class = show_filter_class(qs, ['location', ])
50
51 # Yank Organisation collection
52 all_organisations = _organisation_directory_coll(request)
53
54 # Easter egg feature
55 creator_organisations = request.GET.get('creator', False)
56 if creator_organisations:
57 all_organisations = all_organisations.filter(can_create_projects=True)
58
59 f = OrganisationFilter(qs, queryset=all_organisations)
60
61 # Change filter options further when on an Akvo Page
62 if request.rsr_page:
63 # Filter location filter list to only populated locations
64 f.filters['location'].extra['choices'] = location_choices(all_organisations)
65
66 # Build page
67 page = request.GET.get('page')
68 page, paginator, page_range = pagination(page, f.qs.distinct(), 10)
69
70 # Get organisations to be displayed on the map
71 if request.rsr_page and request.rsr_page.all_maps:
72 map_orgs = all_organisations
73 else:
74 map_orgs = page.object_list
75 map_orgs = map_orgs
76
77 # Get related objects of page at once
78 page.object_list = page.object_list.select_related(
79 'primary_location__country',
80 ).annotate(
81 num_employees=Count('employees', distinct=True),
82 num_projects=Count('projects', distinct=True),
83 num_updates=Count('projects__project_updates', distinct=True),
84 )
85
86 return render(request, 'organisation_directory.html', {
87 'orgs_count': f.qs.distinct().count(),
88 'filter': f,
89 'page': page,
90 'paginator': paginator,
91 'page_range': page_range,
92 'show_filters': filter_class,
93 'q': filter_query_string(qs),
94 'map_organisations': map_orgs,
95 })
96
97
98 ###############################################################################
99 # Organisation main
100 ###############################################################################
101
102
103 def main(request, organisation_id):
104 """The organisation main view."""
105 return render(request, 'organisation_main.html', {
106 'organisation': get_object_or_404(Organisation, pk=organisation_id)})
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/akvo/rsr/views/organisation.py b/akvo/rsr/views/organisation.py
--- a/akvo/rsr/views/organisation.py
+++ b/akvo/rsr/views/organisation.py
@@ -77,10 +77,6 @@
# Get related objects of page at once
page.object_list = page.object_list.select_related(
'primary_location__country',
- ).annotate(
- num_employees=Count('employees', distinct=True),
- num_projects=Count('projects', distinct=True),
- num_updates=Count('projects__project_updates', distinct=True),
)
return render(request, 'organisation_directory.html', {
|
{"golden_diff": "diff --git a/akvo/rsr/views/organisation.py b/akvo/rsr/views/organisation.py\n--- a/akvo/rsr/views/organisation.py\n+++ b/akvo/rsr/views/organisation.py\n@@ -77,10 +77,6 @@\n # Get related objects of page at once\n page.object_list = page.object_list.select_related(\n 'primary_location__country',\n- ).annotate(\n- num_employees=Count('employees', distinct=True),\n- num_projects=Count('projects', distinct=True),\n- num_updates=Count('projects__project_updates', distinct=True),\n )\n \n return render(request, 'organisation_directory.html', {\n", "issue": "Organisations list gives timeout\n## Test plan\n\nThe organisations list should not give a timeout. Since this only happened on Live, it is hard to debug.\n## Sentry\n\nSee http://sentry.support.akvo-ops.org/rsr/live/group/742/\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom django.db.models import Prefetch\nfrom django.db.models import Count\nfrom django.shortcuts import get_object_or_404, render\n\nfrom ..filters import location_choices, OrganisationFilter, remove_empty_querydict_items\nfrom ..models import Employment, Organisation, Project, ProjectUpdate\nfrom ...utils import pagination, filter_query_string\nfrom .utils import apply_keywords, org_projects, show_filter_class\n\n###############################################################################\n# Organisation directory\n###############################################################################\n\n\ndef _public_projects():\n \"\"\"Return all public projects.\"\"\"\n return Project.objects.public().published().select_related('partners').order_by('-id')\n\n\ndef _page_organisations(page):\n \"\"\"Dig out the list or organisations to use.\"\"\"\n projects = org_projects(page.organisation) if page.partner_projects else _public_projects()\n keyword_projects = apply_keywords(page, projects)\n return keyword_projects.all_partners()\n\n\ndef _organisation_directory_coll(request):\n \"\"\"Dig out and pass correct organisations to the view.\"\"\"\n page = request.rsr_page\n if not page:\n return Organisation.objects.all()\n return _page_organisations(page)\n\n\ndef directory(request):\n \"\"\"The Organisation list view.\"\"\"\n qs = remove_empty_querydict_items(request.GET)\n\n # Set show_filters to \"in\" if any filter is selected\n filter_class = show_filter_class(qs, ['location', ])\n\n # Yank Organisation collection\n all_organisations = _organisation_directory_coll(request)\n\n # Easter egg feature\n creator_organisations = request.GET.get('creator', False)\n if creator_organisations:\n all_organisations = all_organisations.filter(can_create_projects=True)\n\n f = OrganisationFilter(qs, queryset=all_organisations)\n\n # Change filter options further when on an Akvo Page\n if request.rsr_page:\n # Filter location filter list to only populated locations\n f.filters['location'].extra['choices'] = location_choices(all_organisations)\n\n # Build page\n page = request.GET.get('page')\n page, paginator, page_range = pagination(page, f.qs.distinct(), 10)\n\n # Get organisations to be displayed on the map\n if request.rsr_page and request.rsr_page.all_maps:\n map_orgs = all_organisations\n else:\n map_orgs = page.object_list\n map_orgs = map_orgs\n\n # Get related objects of page at 
once\n page.object_list = page.object_list.select_related(\n 'primary_location__country',\n ).annotate(\n num_employees=Count('employees', distinct=True),\n num_projects=Count('projects', distinct=True),\n num_updates=Count('projects__project_updates', distinct=True),\n )\n\n return render(request, 'organisation_directory.html', {\n 'orgs_count': f.qs.distinct().count(),\n 'filter': f,\n 'page': page,\n 'paginator': paginator,\n 'page_range': page_range,\n 'show_filters': filter_class,\n 'q': filter_query_string(qs),\n 'map_organisations': map_orgs,\n })\n\n\n###############################################################################\n# Organisation main\n###############################################################################\n\n\ndef main(request, organisation_id):\n \"\"\"The organisation main view.\"\"\"\n return render(request, 'organisation_main.html', {\n 'organisation': get_object_or_404(Organisation, pk=organisation_id)})\n", "path": "akvo/rsr/views/organisation.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom django.db.models import Prefetch\nfrom django.db.models import Count\nfrom django.shortcuts import get_object_or_404, render\n\nfrom ..filters import location_choices, OrganisationFilter, remove_empty_querydict_items\nfrom ..models import Employment, Organisation, Project, ProjectUpdate\nfrom ...utils import pagination, filter_query_string\nfrom .utils import apply_keywords, org_projects, show_filter_class\n\n###############################################################################\n# Organisation directory\n###############################################################################\n\n\ndef _public_projects():\n \"\"\"Return all public projects.\"\"\"\n return Project.objects.public().published().select_related('partners').order_by('-id')\n\n\ndef _page_organisations(page):\n \"\"\"Dig out the list or organisations to use.\"\"\"\n projects = org_projects(page.organisation) if page.partner_projects else _public_projects()\n keyword_projects = apply_keywords(page, projects)\n return keyword_projects.all_partners()\n\n\ndef _organisation_directory_coll(request):\n \"\"\"Dig out and pass correct organisations to the view.\"\"\"\n page = request.rsr_page\n if not page:\n return Organisation.objects.all()\n return _page_organisations(page)\n\n\ndef directory(request):\n \"\"\"The Organisation list view.\"\"\"\n qs = remove_empty_querydict_items(request.GET)\n\n # Set show_filters to \"in\" if any filter is selected\n filter_class = show_filter_class(qs, ['location', ])\n\n # Yank Organisation collection\n all_organisations = _organisation_directory_coll(request)\n\n # Easter egg feature\n creator_organisations = request.GET.get('creator', False)\n if creator_organisations:\n all_organisations = all_organisations.filter(can_create_projects=True)\n\n f = OrganisationFilter(qs, queryset=all_organisations)\n\n # Change filter options further when on an Akvo Page\n if request.rsr_page:\n # Filter location filter list to only populated locations\n f.filters['location'].extra['choices'] = location_choices(all_organisations)\n\n # Build page\n page = request.GET.get('page')\n page, paginator, page_range = pagination(page, f.qs.distinct(), 10)\n\n # Get organisations to be displayed 
on the map\n if request.rsr_page and request.rsr_page.all_maps:\n map_orgs = all_organisations\n else:\n map_orgs = page.object_list\n map_orgs = map_orgs\n\n # Get related objects of page at once\n page.object_list = page.object_list.select_related(\n 'primary_location__country',\n )\n\n return render(request, 'organisation_directory.html', {\n 'orgs_count': f.qs.distinct().count(),\n 'filter': f,\n 'page': page,\n 'paginator': paginator,\n 'page_range': page_range,\n 'show_filters': filter_class,\n 'q': filter_query_string(qs),\n 'map_organisations': map_orgs,\n })\n\n\n###############################################################################\n# Organisation main\n###############################################################################\n\n\ndef main(request, organisation_id):\n \"\"\"The organisation main view.\"\"\"\n return render(request, 'organisation_main.html', {\n 'organisation': get_object_or_404(Organisation, pk=organisation_id)})\n", "path": "akvo/rsr/views/organisation.py"}]}
| 1,319 | 148 |
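The accepted patch for the organisation directory above simply drops the three `Count(..., distinct=True)` annotations. Each of those annotations joins a different one-to-many relation, and combining them in one query multiplies the joined rows (roughly employees times projects times updates per organisation) before the distinct counts are taken, which is a plausible source of the reported timeout. The sqlite3 sketch below reproduces that fan-out on toy tables; the schema and row counts are invented for illustration and are unrelated to the RSR models.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript(
    """
    CREATE TABLE org (id INTEGER PRIMARY KEY);
    CREATE TABLE employee (id INTEGER PRIMARY KEY, org_id INTEGER);
    CREATE TABLE project (id INTEGER PRIMARY KEY, org_id INTEGER);
    """
)
cur.execute("INSERT INTO org (id) VALUES (1)")
# One organisation with 200 employees and 300 projects.
cur.executemany("INSERT INTO employee (org_id) VALUES (?)", [(1,)] * 200)
cur.executemany("INSERT INTO project (org_id) VALUES (?)", [(1,)] * 300)

# Rough single-query analogue of annotating several distinct counts at once:
# every employee row pairs with every project row before any COUNT(DISTINCT ...) runs.
cur.execute(
    """
    SELECT COUNT(*) FROM org o
    LEFT JOIN employee e ON e.org_id = o.id
    LEFT JOIN project p ON p.org_id = o.id
    """
)
print("rows scanned by the combined join:", cur.fetchone()[0])  # 200 * 300 = 60000

# Counting each relation on its own stays linear in the table sizes.
for table in ("employee", "project"):
    cur.execute(f"SELECT COUNT(*) FROM {table} WHERE org_id = 1")
    print(f"{table} count via a separate query:", cur.fetchone()[0])
```

If the per-organisation counts are still wanted on the page, computing each one in its own query (or with subquery-style annotations) avoids the multiplicative join.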
gh_patches_debug_16749
|
rasdani/github-patches
|
git_diff
|
Project-MONAI__MONAI-1962
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Always print warning message about min_package
**Describe the bug**
/opt/monai/monai/utils/module.py:100: UserWarning: <module 'pkg_resources' from '/opt/conda/lib/python3.8/site-packages/pkg_resources/__init__.py'> has no attribute __version__ in min_version check.
warnings.warn(f"{the_module} has no attribute __version__ in min_version check.")
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `monai/utils/module.py`
Content:
```
1 # Copyright 2020 - 2021 MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 import inspect
13 import sys
14 import warnings
15 from importlib import import_module
16 from pkgutil import walk_packages
17 from re import match
18 from typing import Any, Callable, List, Sequence, Tuple, Union
19
20 import torch
21
22 from .misc import ensure_tuple
23
24 OPTIONAL_IMPORT_MSG_FMT = "{}"
25
26 __all__ = [
27 "InvalidPyTorchVersionError",
28 "OptionalImportError",
29 "exact_version",
30 "export",
31 "min_version",
32 "optional_import",
33 "load_submodules",
34 "get_full_type_name",
35 "has_option",
36 "get_package_version",
37 "get_torch_version_tuple",
38 "PT_BEFORE_1_7",
39 ]
40
41
42 def export(modname):
43 """
44 Make the decorated object a member of the named module. This will also add the object under its aliases if it has
45 a `__aliases__` member, thus this decorator should be before the `alias` decorator to pick up those names. Alias
46 names which conflict with package names or existing members will be ignored.
47 """
48
49 def _inner(obj):
50 mod = import_module(modname)
51 if not hasattr(mod, obj.__name__):
52 setattr(mod, obj.__name__, obj)
53
54 # add the aliases for `obj` to the target module
55 for alias in getattr(obj, "__aliases__", ()):
56 if not hasattr(mod, alias):
57 setattr(mod, alias, obj)
58
59 return obj
60
61 return _inner
62
63
64 def load_submodules(basemod, load_all: bool = True, exclude_pattern: str = "(.*[tT]est.*)|(_.*)"):
65 """
66 Traverse the source of the module structure starting with module `basemod`, loading all packages plus all files if
67 `load_all` is True, excluding anything whose name matches `exclude_pattern`.
68 """
69 submodules = []
70 err_mod: List[str] = []
71 for importer, name, is_pkg in walk_packages(
72 basemod.__path__, prefix=basemod.__name__ + ".", onerror=err_mod.append
73 ):
74 if (is_pkg or load_all) and name not in sys.modules and match(exclude_pattern, name) is None:
75 try:
76 mod = import_module(name)
77 importer.find_module(name).load_module(name) # type: ignore
78 submodules.append(mod)
79 except OptionalImportError:
80 pass # could not import the optional deps., they are ignored
81
82 return submodules, err_mod
83
84
85 def get_full_type_name(typeobj):
86 module = typeobj.__module__
87 if module is None or module == str.__class__.__module__:
88 return typeobj.__name__ # Avoid reporting __builtin__
89 return module + "." + typeobj.__name__
90
91
92 def min_version(the_module, min_version_str: str = "") -> bool:
93 """
94 Convert version strings into tuples of int and compare them.
95
96 Returns True if the module's version is greater or equal to the 'min_version'.
97 When min_version_str is not provided, it always returns True.
98 """
99 if not min_version_str:
100 return True # always valid version
101 if not hasattr(the_module, "__version__"):
102 warnings.warn(f"{the_module} has no attribute __version__ in min_version check.")
103 return True # min_version is the default, shouldn't be noisy
104 mod_version = tuple(int(x) for x in the_module.__version__.split(".")[:2])
105 required = tuple(int(x) for x in min_version_str.split(".")[:2])
106 return mod_version >= required
107
108
109 def exact_version(the_module, version_str: str = "") -> bool:
110 """
111 Returns True if the module's __version__ matches version_str
112 """
113 if not hasattr(the_module, "__version__"):
114 warnings.warn(f"{the_module} has no attribute __version__ in exact_version check.")
115 return False
116 return bool(the_module.__version__ == version_str)
117
118
119 class InvalidPyTorchVersionError(Exception):
120 """
121 Raised when called function or method requires a more recent
122 PyTorch version than that installed.
123 """
124
125 def __init__(self, required_version, name):
126 message = f"{name} requires PyTorch version {required_version} or later"
127 super().__init__(message)
128
129
130 class OptionalImportError(ImportError):
131 """
132 Could not import APIs from an optional dependency.
133 """
134
135
136 def optional_import(
137 module: str,
138 version: str = "",
139 version_checker: Callable[..., bool] = min_version,
140 name: str = "",
141 descriptor: str = OPTIONAL_IMPORT_MSG_FMT,
142 version_args=None,
143 allow_namespace_pkg: bool = False,
144 ) -> Tuple[Any, bool]:
145 """
146 Imports an optional module specified by `module` string.
147 Any importing related exceptions will be stored, and exceptions raise lazily
148 when attempting to use the failed-to-import module.
149
150 Args:
151 module: name of the module to be imported.
152 version: version string used by the version_checker.
153 version_checker: a callable to check the module version, Defaults to monai.utils.min_version.
154 name: a non-module attribute (such as method/class) to import from the imported module.
155 descriptor: a format string for the final error message when using a not imported module.
156 version_args: additional parameters to the version checker.
157 allow_namespace_pkg: whether importing a namespace package is allowed. Defaults to False.
158
159 Returns:
160 The imported module and a boolean flag indicating whether the import is successful.
161
162 Examples::
163
164 >>> torch, flag = optional_import('torch', '1.1')
165 >>> print(torch, flag)
166 <module 'torch' from 'python/lib/python3.6/site-packages/torch/__init__.py'> True
167
168 >>> the_module, flag = optional_import('unknown_module')
169 >>> print(flag)
170 False
171 >>> the_module.method # trying to access a module which is not imported
172 OptionalImportError: import unknown_module (No module named 'unknown_module').
173
174 >>> torch, flag = optional_import('torch', '42', exact_version)
175 >>> torch.nn # trying to access a module for which there isn't a proper version imported
176 OptionalImportError: import torch (requires version '42' by 'exact_version').
177
178 >>> conv, flag = optional_import('torch.nn.functional', '1.0', name='conv1d')
179 >>> print(conv)
180 <built-in method conv1d of type object at 0x11a49eac0>
181
182 >>> conv, flag = optional_import('torch.nn.functional', '42', name='conv1d')
183 >>> conv() # trying to use a function from the not successfully imported module (due to unmatched version)
184 OptionalImportError: from torch.nn.functional import conv1d (requires version '42' by 'min_version').
185 """
186
187 tb = None
188 exception_str = ""
189 if name:
190 actual_cmd = f"from {module} import {name}"
191 else:
192 actual_cmd = f"import {module}"
193 try:
194 pkg = __import__(module) # top level module
195 the_module = import_module(module)
196 if not allow_namespace_pkg:
197 is_namespace = getattr(the_module, "__file__", None) is None and hasattr(the_module, "__path__")
198 if is_namespace:
199 raise AssertionError
200 if name: # user specified to load class/function/... from the module
201 the_module = getattr(the_module, name)
202 except Exception as import_exception: # any exceptions during import
203 tb = import_exception.__traceback__
204 exception_str = f"{import_exception}"
205 else: # found the module
206 if version_args and version_checker(pkg, f"{version}", version_args):
207 return the_module, True
208 if not version_args and version_checker(pkg, f"{version}"):
209 return the_module, True
210
211 # preparing lazy error message
212 msg = descriptor.format(actual_cmd)
213 if version and tb is None: # a pure version issue
214 msg += f" (requires '{module} {version}' by '{version_checker.__name__}')"
215 if exception_str:
216 msg += f" ({exception_str})"
217
218 class _LazyRaise:
219 def __init__(self, *_args, **_kwargs):
220 _default_msg = (
221 f"{msg}."
222 + "\n\nFor details about installing the optional dependencies, please visit:"
223 + "\n https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies"
224 )
225 if tb is None:
226 self._exception = OptionalImportError(_default_msg)
227 else:
228 self._exception = OptionalImportError(_default_msg).with_traceback(tb)
229
230 def __getattr__(self, name):
231 """
232 Raises:
233 OptionalImportError: When you call this method.
234 """
235 raise self._exception
236
237 def __call__(self, *_args, **_kwargs):
238 """
239 Raises:
240 OptionalImportError: When you call this method.
241 """
242 raise self._exception
243
244 return _LazyRaise(), False
245
246
247 def has_option(obj, keywords: Union[str, Sequence[str]]) -> bool:
248 """
249 Return a boolean indicating whether the given callable `obj` has the `keywords` in its signature.
250 """
251 if not callable(obj):
252 return False
253 sig = inspect.signature(obj)
254 return all(key in sig.parameters for key in ensure_tuple(keywords))
255
256
257 def get_package_version(dep_name, default="NOT INSTALLED or UNKNOWN VERSION."):
258 """
259 Try to load package and get version. If not found, return `default`.
260 """
261 dep, has_dep = optional_import(dep_name)
262 if has_dep and hasattr(dep, "__version__"):
263 return dep.__version__
264 return default
265
266
267 def get_torch_version_tuple():
268 """
269 Returns:
270 tuple of ints represents the pytorch major/minor version.
271 """
272 return tuple((int(x) for x in torch.__version__.split(".")[:2]))
273
274
275 PT_BEFORE_1_7 = True
276 ver, has_ver = optional_import("pkg_resources", name="parse_version")
277 try:
278 if has_ver:
279 PT_BEFORE_1_7 = ver(torch.__version__) < ver("1.7")
280 else:
281 PT_BEFORE_1_7 = get_torch_version_tuple() < (1, 7)
282 except (AttributeError, TypeError):
283 pass
284
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/monai/utils/module.py b/monai/utils/module.py
--- a/monai/utils/module.py
+++ b/monai/utils/module.py
@@ -96,11 +96,9 @@
Returns True if the module's version is greater or equal to the 'min_version'.
When min_version_str is not provided, it always returns True.
"""
- if not min_version_str:
+ if not min_version_str or not hasattr(the_module, "__version__"):
return True # always valid version
- if not hasattr(the_module, "__version__"):
- warnings.warn(f"{the_module} has no attribute __version__ in min_version check.")
- return True # min_version is the default, shouldn't be noisy
+
mod_version = tuple(int(x) for x in the_module.__version__.split(".")[:2])
required = tuple(int(x) for x in min_version_str.split(".")[:2])
return mod_version >= required
|
{"golden_diff": "diff --git a/monai/utils/module.py b/monai/utils/module.py\n--- a/monai/utils/module.py\n+++ b/monai/utils/module.py\n@@ -96,11 +96,9 @@\n Returns True if the module's version is greater or equal to the 'min_version'.\n When min_version_str is not provided, it always returns True.\n \"\"\"\n- if not min_version_str:\n+ if not min_version_str or not hasattr(the_module, \"__version__\"):\n return True # always valid version\n- if not hasattr(the_module, \"__version__\"):\n- warnings.warn(f\"{the_module} has no attribute __version__ in min_version check.\")\n- return True # min_version is the default, shouldn't be noisy\n+\n mod_version = tuple(int(x) for x in the_module.__version__.split(\".\")[:2])\n required = tuple(int(x) for x in min_version_str.split(\".\")[:2])\n return mod_version >= required\n", "issue": "Always print warning message about min_package\n**Describe the bug**\r\n/opt/monai/monai/utils/module.py:100: UserWarning: <module 'pkg_resources' from '/opt/conda/lib/python3.8/site-packages/pkg_resources/__init__.py'> has no attribute __version__ in min_version check.\r\n warnings.warn(f\"{the_module} has no attribute __version__ in min_version check.\")\r\n\n", "before_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport inspect\nimport sys\nimport warnings\nfrom importlib import import_module\nfrom pkgutil import walk_packages\nfrom re import match\nfrom typing import Any, Callable, List, Sequence, Tuple, Union\n\nimport torch\n\nfrom .misc import ensure_tuple\n\nOPTIONAL_IMPORT_MSG_FMT = \"{}\"\n\n__all__ = [\n \"InvalidPyTorchVersionError\",\n \"OptionalImportError\",\n \"exact_version\",\n \"export\",\n \"min_version\",\n \"optional_import\",\n \"load_submodules\",\n \"get_full_type_name\",\n \"has_option\",\n \"get_package_version\",\n \"get_torch_version_tuple\",\n \"PT_BEFORE_1_7\",\n]\n\n\ndef export(modname):\n \"\"\"\n Make the decorated object a member of the named module. This will also add the object under its aliases if it has\n a `__aliases__` member, thus this decorator should be before the `alias` decorator to pick up those names. 
Alias\n names which conflict with package names or existing members will be ignored.\n \"\"\"\n\n def _inner(obj):\n mod = import_module(modname)\n if not hasattr(mod, obj.__name__):\n setattr(mod, obj.__name__, obj)\n\n # add the aliases for `obj` to the target module\n for alias in getattr(obj, \"__aliases__\", ()):\n if not hasattr(mod, alias):\n setattr(mod, alias, obj)\n\n return obj\n\n return _inner\n\n\ndef load_submodules(basemod, load_all: bool = True, exclude_pattern: str = \"(.*[tT]est.*)|(_.*)\"):\n \"\"\"\n Traverse the source of the module structure starting with module `basemod`, loading all packages plus all files if\n `load_all` is True, excluding anything whose name matches `exclude_pattern`.\n \"\"\"\n submodules = []\n err_mod: List[str] = []\n for importer, name, is_pkg in walk_packages(\n basemod.__path__, prefix=basemod.__name__ + \".\", onerror=err_mod.append\n ):\n if (is_pkg or load_all) and name not in sys.modules and match(exclude_pattern, name) is None:\n try:\n mod = import_module(name)\n importer.find_module(name).load_module(name) # type: ignore\n submodules.append(mod)\n except OptionalImportError:\n pass # could not import the optional deps., they are ignored\n\n return submodules, err_mod\n\n\ndef get_full_type_name(typeobj):\n module = typeobj.__module__\n if module is None or module == str.__class__.__module__:\n return typeobj.__name__ # Avoid reporting __builtin__\n return module + \".\" + typeobj.__name__\n\n\ndef min_version(the_module, min_version_str: str = \"\") -> bool:\n \"\"\"\n Convert version strings into tuples of int and compare them.\n\n Returns True if the module's version is greater or equal to the 'min_version'.\n When min_version_str is not provided, it always returns True.\n \"\"\"\n if not min_version_str:\n return True # always valid version\n if not hasattr(the_module, \"__version__\"):\n warnings.warn(f\"{the_module} has no attribute __version__ in min_version check.\")\n return True # min_version is the default, shouldn't be noisy\n mod_version = tuple(int(x) for x in the_module.__version__.split(\".\")[:2])\n required = tuple(int(x) for x in min_version_str.split(\".\")[:2])\n return mod_version >= required\n\n\ndef exact_version(the_module, version_str: str = \"\") -> bool:\n \"\"\"\n Returns True if the module's __version__ matches version_str\n \"\"\"\n if not hasattr(the_module, \"__version__\"):\n warnings.warn(f\"{the_module} has no attribute __version__ in exact_version check.\")\n return False\n return bool(the_module.__version__ == version_str)\n\n\nclass InvalidPyTorchVersionError(Exception):\n \"\"\"\n Raised when called function or method requires a more recent\n PyTorch version than that installed.\n \"\"\"\n\n def __init__(self, required_version, name):\n message = f\"{name} requires PyTorch version {required_version} or later\"\n super().__init__(message)\n\n\nclass OptionalImportError(ImportError):\n \"\"\"\n Could not import APIs from an optional dependency.\n \"\"\"\n\n\ndef optional_import(\n module: str,\n version: str = \"\",\n version_checker: Callable[..., bool] = min_version,\n name: str = \"\",\n descriptor: str = OPTIONAL_IMPORT_MSG_FMT,\n version_args=None,\n allow_namespace_pkg: bool = False,\n) -> Tuple[Any, bool]:\n \"\"\"\n Imports an optional module specified by `module` string.\n Any importing related exceptions will be stored, and exceptions raise lazily\n when attempting to use the failed-to-import module.\n\n Args:\n module: name of the module to be imported.\n version: version string used 
by the version_checker.\n version_checker: a callable to check the module version, Defaults to monai.utils.min_version.\n name: a non-module attribute (such as method/class) to import from the imported module.\n descriptor: a format string for the final error message when using a not imported module.\n version_args: additional parameters to the version checker.\n allow_namespace_pkg: whether importing a namespace package is allowed. Defaults to False.\n\n Returns:\n The imported module and a boolean flag indicating whether the import is successful.\n\n Examples::\n\n >>> torch, flag = optional_import('torch', '1.1')\n >>> print(torch, flag)\n <module 'torch' from 'python/lib/python3.6/site-packages/torch/__init__.py'> True\n\n >>> the_module, flag = optional_import('unknown_module')\n >>> print(flag)\n False\n >>> the_module.method # trying to access a module which is not imported\n OptionalImportError: import unknown_module (No module named 'unknown_module').\n\n >>> torch, flag = optional_import('torch', '42', exact_version)\n >>> torch.nn # trying to access a module for which there isn't a proper version imported\n OptionalImportError: import torch (requires version '42' by 'exact_version').\n\n >>> conv, flag = optional_import('torch.nn.functional', '1.0', name='conv1d')\n >>> print(conv)\n <built-in method conv1d of type object at 0x11a49eac0>\n\n >>> conv, flag = optional_import('torch.nn.functional', '42', name='conv1d')\n >>> conv() # trying to use a function from the not successfully imported module (due to unmatched version)\n OptionalImportError: from torch.nn.functional import conv1d (requires version '42' by 'min_version').\n \"\"\"\n\n tb = None\n exception_str = \"\"\n if name:\n actual_cmd = f\"from {module} import {name}\"\n else:\n actual_cmd = f\"import {module}\"\n try:\n pkg = __import__(module) # top level module\n the_module = import_module(module)\n if not allow_namespace_pkg:\n is_namespace = getattr(the_module, \"__file__\", None) is None and hasattr(the_module, \"__path__\")\n if is_namespace:\n raise AssertionError\n if name: # user specified to load class/function/... 
from the module\n the_module = getattr(the_module, name)\n except Exception as import_exception: # any exceptions during import\n tb = import_exception.__traceback__\n exception_str = f\"{import_exception}\"\n else: # found the module\n if version_args and version_checker(pkg, f\"{version}\", version_args):\n return the_module, True\n if not version_args and version_checker(pkg, f\"{version}\"):\n return the_module, True\n\n # preparing lazy error message\n msg = descriptor.format(actual_cmd)\n if version and tb is None: # a pure version issue\n msg += f\" (requires '{module} {version}' by '{version_checker.__name__}')\"\n if exception_str:\n msg += f\" ({exception_str})\"\n\n class _LazyRaise:\n def __init__(self, *_args, **_kwargs):\n _default_msg = (\n f\"{msg}.\"\n + \"\\n\\nFor details about installing the optional dependencies, please visit:\"\n + \"\\n https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies\"\n )\n if tb is None:\n self._exception = OptionalImportError(_default_msg)\n else:\n self._exception = OptionalImportError(_default_msg).with_traceback(tb)\n\n def __getattr__(self, name):\n \"\"\"\n Raises:\n OptionalImportError: When you call this method.\n \"\"\"\n raise self._exception\n\n def __call__(self, *_args, **_kwargs):\n \"\"\"\n Raises:\n OptionalImportError: When you call this method.\n \"\"\"\n raise self._exception\n\n return _LazyRaise(), False\n\n\ndef has_option(obj, keywords: Union[str, Sequence[str]]) -> bool:\n \"\"\"\n Return a boolean indicating whether the given callable `obj` has the `keywords` in its signature.\n \"\"\"\n if not callable(obj):\n return False\n sig = inspect.signature(obj)\n return all(key in sig.parameters for key in ensure_tuple(keywords))\n\n\ndef get_package_version(dep_name, default=\"NOT INSTALLED or UNKNOWN VERSION.\"):\n \"\"\"\n Try to load package and get version. 
If not found, return `default`.\n \"\"\"\n dep, has_dep = optional_import(dep_name)\n if has_dep and hasattr(dep, \"__version__\"):\n return dep.__version__\n return default\n\n\ndef get_torch_version_tuple():\n \"\"\"\n Returns:\n tuple of ints represents the pytorch major/minor version.\n \"\"\"\n return tuple((int(x) for x in torch.__version__.split(\".\")[:2]))\n\n\nPT_BEFORE_1_7 = True\nver, has_ver = optional_import(\"pkg_resources\", name=\"parse_version\")\ntry:\n if has_ver:\n PT_BEFORE_1_7 = ver(torch.__version__) < ver(\"1.7\")\n else:\n PT_BEFORE_1_7 = get_torch_version_tuple() < (1, 7)\nexcept (AttributeError, TypeError):\n pass\n", "path": "monai/utils/module.py"}], "after_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport inspect\nimport sys\nimport warnings\nfrom importlib import import_module\nfrom pkgutil import walk_packages\nfrom re import match\nfrom typing import Any, Callable, List, Sequence, Tuple, Union\n\nimport torch\n\nfrom .misc import ensure_tuple\n\nOPTIONAL_IMPORT_MSG_FMT = \"{}\"\n\n__all__ = [\n \"InvalidPyTorchVersionError\",\n \"OptionalImportError\",\n \"exact_version\",\n \"export\",\n \"min_version\",\n \"optional_import\",\n \"load_submodules\",\n \"get_full_type_name\",\n \"has_option\",\n \"get_package_version\",\n \"get_torch_version_tuple\",\n \"PT_BEFORE_1_7\",\n]\n\n\ndef export(modname):\n \"\"\"\n Make the decorated object a member of the named module. This will also add the object under its aliases if it has\n a `__aliases__` member, thus this decorator should be before the `alias` decorator to pick up those names. 
Alias\n names which conflict with package names or existing members will be ignored.\n \"\"\"\n\n def _inner(obj):\n mod = import_module(modname)\n if not hasattr(mod, obj.__name__):\n setattr(mod, obj.__name__, obj)\n\n # add the aliases for `obj` to the target module\n for alias in getattr(obj, \"__aliases__\", ()):\n if not hasattr(mod, alias):\n setattr(mod, alias, obj)\n\n return obj\n\n return _inner\n\n\ndef load_submodules(basemod, load_all: bool = True, exclude_pattern: str = \"(.*[tT]est.*)|(_.*)\"):\n \"\"\"\n Traverse the source of the module structure starting with module `basemod`, loading all packages plus all files if\n `load_all` is True, excluding anything whose name matches `exclude_pattern`.\n \"\"\"\n submodules = []\n err_mod: List[str] = []\n for importer, name, is_pkg in walk_packages(\n basemod.__path__, prefix=basemod.__name__ + \".\", onerror=err_mod.append\n ):\n if (is_pkg or load_all) and name not in sys.modules and match(exclude_pattern, name) is None:\n try:\n mod = import_module(name)\n importer.find_module(name).load_module(name) # type: ignore\n submodules.append(mod)\n except OptionalImportError:\n pass # could not import the optional deps., they are ignored\n\n return submodules, err_mod\n\n\ndef get_full_type_name(typeobj):\n module = typeobj.__module__\n if module is None or module == str.__class__.__module__:\n return typeobj.__name__ # Avoid reporting __builtin__\n return module + \".\" + typeobj.__name__\n\n\ndef min_version(the_module, min_version_str: str = \"\") -> bool:\n \"\"\"\n Convert version strings into tuples of int and compare them.\n\n Returns True if the module's version is greater or equal to the 'min_version'.\n When min_version_str is not provided, it always returns True.\n \"\"\"\n if not min_version_str or not hasattr(the_module, \"__version__\"):\n return True # always valid version\n\n mod_version = tuple(int(x) for x in the_module.__version__.split(\".\")[:2])\n required = tuple(int(x) for x in min_version_str.split(\".\")[:2])\n return mod_version >= required\n\n\ndef exact_version(the_module, version_str: str = \"\") -> bool:\n \"\"\"\n Returns True if the module's __version__ matches version_str\n \"\"\"\n if not hasattr(the_module, \"__version__\"):\n warnings.warn(f\"{the_module} has no attribute __version__ in exact_version check.\")\n return False\n return bool(the_module.__version__ == version_str)\n\n\nclass InvalidPyTorchVersionError(Exception):\n \"\"\"\n Raised when called function or method requires a more recent\n PyTorch version than that installed.\n \"\"\"\n\n def __init__(self, required_version, name):\n message = f\"{name} requires PyTorch version {required_version} or later\"\n super().__init__(message)\n\n\nclass OptionalImportError(ImportError):\n \"\"\"\n Could not import APIs from an optional dependency.\n \"\"\"\n\n\ndef optional_import(\n module: str,\n version: str = \"\",\n version_checker: Callable[..., bool] = min_version,\n name: str = \"\",\n descriptor: str = OPTIONAL_IMPORT_MSG_FMT,\n version_args=None,\n allow_namespace_pkg: bool = False,\n) -> Tuple[Any, bool]:\n \"\"\"\n Imports an optional module specified by `module` string.\n Any importing related exceptions will be stored, and exceptions raise lazily\n when attempting to use the failed-to-import module.\n\n Args:\n module: name of the module to be imported.\n version: version string used by the version_checker.\n version_checker: a callable to check the module version, Defaults to monai.utils.min_version.\n name: a non-module attribute 
(such as method/class) to import from the imported module.\n descriptor: a format string for the final error message when using a not imported module.\n version_args: additional parameters to the version checker.\n allow_namespace_pkg: whether importing a namespace package is allowed. Defaults to False.\n\n Returns:\n The imported module and a boolean flag indicating whether the import is successful.\n\n Examples::\n\n >>> torch, flag = optional_import('torch', '1.1')\n >>> print(torch, flag)\n <module 'torch' from 'python/lib/python3.6/site-packages/torch/__init__.py'> True\n\n >>> the_module, flag = optional_import('unknown_module')\n >>> print(flag)\n False\n >>> the_module.method # trying to access a module which is not imported\n OptionalImportError: import unknown_module (No module named 'unknown_module').\n\n >>> torch, flag = optional_import('torch', '42', exact_version)\n >>> torch.nn # trying to access a module for which there isn't a proper version imported\n OptionalImportError: import torch (requires version '42' by 'exact_version').\n\n >>> conv, flag = optional_import('torch.nn.functional', '1.0', name='conv1d')\n >>> print(conv)\n <built-in method conv1d of type object at 0x11a49eac0>\n\n >>> conv, flag = optional_import('torch.nn.functional', '42', name='conv1d')\n >>> conv() # trying to use a function from the not successfully imported module (due to unmatched version)\n OptionalImportError: from torch.nn.functional import conv1d (requires version '42' by 'min_version').\n \"\"\"\n\n tb = None\n exception_str = \"\"\n if name:\n actual_cmd = f\"from {module} import {name}\"\n else:\n actual_cmd = f\"import {module}\"\n try:\n pkg = __import__(module) # top level module\n the_module = import_module(module)\n if not allow_namespace_pkg:\n is_namespace = getattr(the_module, \"__file__\", None) is None and hasattr(the_module, \"__path__\")\n if is_namespace:\n raise AssertionError\n if name: # user specified to load class/function/... 
from the module\n the_module = getattr(the_module, name)\n except Exception as import_exception: # any exceptions during import\n tb = import_exception.__traceback__\n exception_str = f\"{import_exception}\"\n else: # found the module\n if version_args and version_checker(pkg, f\"{version}\", version_args):\n return the_module, True\n if not version_args and version_checker(pkg, f\"{version}\"):\n return the_module, True\n\n # preparing lazy error message\n msg = descriptor.format(actual_cmd)\n if version and tb is None: # a pure version issue\n msg += f\" (requires '{module} {version}' by '{version_checker.__name__}')\"\n if exception_str:\n msg += f\" ({exception_str})\"\n\n class _LazyRaise:\n def __init__(self, *_args, **_kwargs):\n _default_msg = (\n f\"{msg}.\"\n + \"\\n\\nFor details about installing the optional dependencies, please visit:\"\n + \"\\n https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies\"\n )\n if tb is None:\n self._exception = OptionalImportError(_default_msg)\n else:\n self._exception = OptionalImportError(_default_msg).with_traceback(tb)\n\n def __getattr__(self, name):\n \"\"\"\n Raises:\n OptionalImportError: When you call this method.\n \"\"\"\n raise self._exception\n\n def __call__(self, *_args, **_kwargs):\n \"\"\"\n Raises:\n OptionalImportError: When you call this method.\n \"\"\"\n raise self._exception\n\n return _LazyRaise(), False\n\n\ndef has_option(obj, keywords: Union[str, Sequence[str]]) -> bool:\n \"\"\"\n Return a boolean indicating whether the given callable `obj` has the `keywords` in its signature.\n \"\"\"\n if not callable(obj):\n return False\n sig = inspect.signature(obj)\n return all(key in sig.parameters for key in ensure_tuple(keywords))\n\n\ndef get_package_version(dep_name, default=\"NOT INSTALLED or UNKNOWN VERSION.\"):\n \"\"\"\n Try to load package and get version. If not found, return `default`.\n \"\"\"\n dep, has_dep = optional_import(dep_name)\n if has_dep and hasattr(dep, \"__version__\"):\n return dep.__version__\n return default\n\n\ndef get_torch_version_tuple():\n \"\"\"\n Returns:\n tuple of ints represents the pytorch major/minor version.\n \"\"\"\n return tuple((int(x) for x in torch.__version__.split(\".\")[:2]))\n\n\nPT_BEFORE_1_7 = True\nver, has_ver = optional_import(\"pkg_resources\", name=\"parse_version\")\ntry:\n if has_ver:\n PT_BEFORE_1_7 = ver(torch.__version__) < ver(\"1.7\")\n else:\n PT_BEFORE_1_7 = get_torch_version_tuple() < (1, 7)\nexcept (AttributeError, TypeError):\n pass\n", "path": "monai/utils/module.py"}]}
| 3,474 | 213 |
gh_patches_debug_23098
|
rasdani/github-patches
|
git_diff
|
easybuilders__easybuild-framework-4292
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Regression with versionsuffix types
Commit https://github.com/easybuilders/easybuild-framework/commit/0e5ba5c858
introduced a check for string-type for `versionsuffix`, while `None` used to be an accepted value for `versionsuffix`. Our hooks replace many version suffixes with `None`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `easybuild/tools/module_naming_scheme/utilities.py`
Content:
```
1 ##
2 # Copyright 2009-2023 Ghent University
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
6 # with support of Ghent University (http://ugent.be/hpc),
7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en)
9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
10 #
11 # https://github.com/easybuilders/easybuild
12 #
13 # EasyBuild is free software: you can redistribute it and/or modify
14 # it under the terms of the GNU General Public License as published by
15 # the Free Software Foundation v2.
16 #
17 # EasyBuild is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
21 #
22 # You should have received a copy of the GNU General Public License
23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
24 ##
25 """
26 Utility functions for implementating module naming schemes.
27
28 Authors:
29
30 * Stijn De Weirdt (Ghent University)
31 * Dries Verdegem (Ghent University)
32 * Kenneth Hoste (Ghent University)
33 * Pieter De Baets (Ghent University)
34 * Jens Timmerman (Ghent University)
35 * Fotis Georgatos (Uni.Lu, NTUA)
36 """
37 import os
38 import string
39
40 from easybuild.base import fancylogger
41 from easybuild.tools.build_log import EasyBuildError
42 from easybuild.tools.module_naming_scheme.mns import ModuleNamingScheme
43 from easybuild.tools.py2vs3 import string_type
44 from easybuild.tools.toolchain.toolchain import SYSTEM_TOOLCHAIN_NAME, is_system_toolchain
45 from easybuild.tools.utilities import get_subclasses, import_available_modules
46
47 _log = fancylogger.getLogger('module_naming_scheme.utilities', fname=False)
48
49
50 def det_full_ec_version(ec):
51 """
52 Determine exact install version, based on supplied easyconfig.
53 e.g. 1.2.3-goalf-1.1.0-no-OFED or 1.2.3 (for system toolchains)
54 """
55
56 ecver = None
57 toolchain = ec.get('toolchain', {'name': SYSTEM_TOOLCHAIN_NAME})
58
59 # determine main install version based on toolchain
60 if is_system_toolchain(toolchain['name']):
61 ecver = ec['version']
62 else:
63 ecver = "%s-%s-%s" % (ec['version'], toolchain['name'], toolchain['version'])
64
65 # prepend/append version prefix/suffix
66 versionprefix = ec.get('versionprefix', '')
67 if not isinstance(versionprefix, string_type):
68 raise EasyBuildError("versionprefix value should be a string, found '%s': %s (full spec: %s)",
69 type(versionprefix).__name__, versionprefix, ec)
70
71 versionsuffix = ec.get('versionsuffix', '')
72 if not isinstance(versionsuffix, string_type):
73 raise EasyBuildError("versionsuffix value should be a string, found '%s': %s (full spec: %s)",
74 type(versionsuffix).__name__, versionsuffix, ec)
75
76 ecver = ''.join([x for x in [versionprefix, ecver, versionsuffix] if x])
77
78 return ecver
79
80
81 def avail_module_naming_schemes():
82 """
83 Returns a list of available module naming schemes.
84 """
85 # all ModuleNamingScheme subclasses available in easybuild.tools.module_naming_scheme namespace are eligible
86 import_available_modules('easybuild.tools.module_naming_scheme')
87
88 # construct name-to-class dict of available module naming scheme
89 avail_mnss = dict([(x.__name__, x) for x in get_subclasses(ModuleNamingScheme)])
90
91 return avail_mnss
92
93
94 def is_valid_module_name(mod_name):
95 """Check whether the specified value is a valid module name."""
96 # module name must be a string
97 if not isinstance(mod_name, string_type):
98 _log.warning("Wrong type for module name %s (%s), should be a string" % (mod_name, type(mod_name)))
99 return False
100 # module name must be relative path
101 elif mod_name.startswith(os.path.sep):
102 _log.warning("Module name (%s) should be a relative file path" % mod_name)
103 return False
104 # module name should not be empty
105 elif not len(mod_name) > 0:
106 _log.warning("Module name (%s) should have length > 0." % mod_name)
107 return False
108 else:
109 # check whether module name only contains printable characters, since it's used as a filename
110 # (except for carriage-control characters \r, \x0b and \xoc)
111 invalid_chars = [x for x in mod_name if x not in string.printable or x in '\r\x0b\x0c']
112 if len(invalid_chars) > 0:
113 _log.warning("Module name %s contains invalid characters: %s" % (mod_name, invalid_chars))
114 return False
115 _log.debug("Module name %s validated" % mod_name)
116 return True
117
118
119 def det_hidden_modname(modname):
120 """Determine the hidden equivalent of the specified module name."""
121 moddir = os.path.dirname(modname)
122 modfile = os.path.basename(modname)
123 return os.path.join(moddir, '.%s' % modfile).lstrip(os.path.sep)
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/easybuild/tools/module_naming_scheme/utilities.py b/easybuild/tools/module_naming_scheme/utilities.py
--- a/easybuild/tools/module_naming_scheme/utilities.py
+++ b/easybuild/tools/module_naming_scheme/utilities.py
@@ -64,16 +64,16 @@
# prepend/append version prefix/suffix
versionprefix = ec.get('versionprefix', '')
- if not isinstance(versionprefix, string_type):
+ if versionprefix and not isinstance(versionprefix, string_type):
raise EasyBuildError("versionprefix value should be a string, found '%s': %s (full spec: %s)",
type(versionprefix).__name__, versionprefix, ec)
versionsuffix = ec.get('versionsuffix', '')
- if not isinstance(versionsuffix, string_type):
+ if versionsuffix and not isinstance(versionsuffix, string_type):
raise EasyBuildError("versionsuffix value should be a string, found '%s': %s (full spec: %s)",
type(versionsuffix).__name__, versionsuffix, ec)
- ecver = ''.join([x for x in [versionprefix, ecver, versionsuffix] if x])
+ ecver = ''.join([x for x in [versionprefix or '', ecver, versionsuffix or ''] if x])
return ecver
|
{"golden_diff": "diff --git a/easybuild/tools/module_naming_scheme/utilities.py b/easybuild/tools/module_naming_scheme/utilities.py\n--- a/easybuild/tools/module_naming_scheme/utilities.py\n+++ b/easybuild/tools/module_naming_scheme/utilities.py\n@@ -64,16 +64,16 @@\n \n # prepend/append version prefix/suffix\n versionprefix = ec.get('versionprefix', '')\n- if not isinstance(versionprefix, string_type):\n+ if versionprefix and not isinstance(versionprefix, string_type):\n raise EasyBuildError(\"versionprefix value should be a string, found '%s': %s (full spec: %s)\",\n type(versionprefix).__name__, versionprefix, ec)\n \n versionsuffix = ec.get('versionsuffix', '')\n- if not isinstance(versionsuffix, string_type):\n+ if versionsuffix and not isinstance(versionsuffix, string_type):\n raise EasyBuildError(\"versionsuffix value should be a string, found '%s': %s (full spec: %s)\",\n type(versionsuffix).__name__, versionsuffix, ec)\n \n- ecver = ''.join([x for x in [versionprefix, ecver, versionsuffix] if x])\n+ ecver = ''.join([x for x in [versionprefix or '', ecver, versionsuffix or ''] if x])\n \n return ecver\n", "issue": "Regression with versionsuffix types\nCommit https://github.com/easybuilders/easybuild-framework/commit/0e5ba5c858\r\nintroduced a check for string-type for `versionsuffix`, while `None` used to be an accepted value for `versionsuffix`. Our hooks replace many version suffixes with `None`. \n", "before_files": [{"content": "##\n# Copyright 2009-2023 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nUtility functions for implementating module naming schemes.\n\nAuthors:\n\n* Stijn De Weirdt (Ghent University)\n* Dries Verdegem (Ghent University)\n* Kenneth Hoste (Ghent University)\n* Pieter De Baets (Ghent University)\n* Jens Timmerman (Ghent University)\n* Fotis Georgatos (Uni.Lu, NTUA)\n\"\"\"\nimport os\nimport string\n\nfrom easybuild.base import fancylogger\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.module_naming_scheme.mns import ModuleNamingScheme\nfrom easybuild.tools.py2vs3 import string_type\nfrom easybuild.tools.toolchain.toolchain import SYSTEM_TOOLCHAIN_NAME, is_system_toolchain\nfrom easybuild.tools.utilities import get_subclasses, import_available_modules\n\n_log = fancylogger.getLogger('module_naming_scheme.utilities', fname=False)\n\n\ndef det_full_ec_version(ec):\n \"\"\"\n Determine exact install version, based on supplied easyconfig.\n e.g. 
1.2.3-goalf-1.1.0-no-OFED or 1.2.3 (for system toolchains)\n \"\"\"\n\n ecver = None\n toolchain = ec.get('toolchain', {'name': SYSTEM_TOOLCHAIN_NAME})\n\n # determine main install version based on toolchain\n if is_system_toolchain(toolchain['name']):\n ecver = ec['version']\n else:\n ecver = \"%s-%s-%s\" % (ec['version'], toolchain['name'], toolchain['version'])\n\n # prepend/append version prefix/suffix\n versionprefix = ec.get('versionprefix', '')\n if not isinstance(versionprefix, string_type):\n raise EasyBuildError(\"versionprefix value should be a string, found '%s': %s (full spec: %s)\",\n type(versionprefix).__name__, versionprefix, ec)\n\n versionsuffix = ec.get('versionsuffix', '')\n if not isinstance(versionsuffix, string_type):\n raise EasyBuildError(\"versionsuffix value should be a string, found '%s': %s (full spec: %s)\",\n type(versionsuffix).__name__, versionsuffix, ec)\n\n ecver = ''.join([x for x in [versionprefix, ecver, versionsuffix] if x])\n\n return ecver\n\n\ndef avail_module_naming_schemes():\n \"\"\"\n Returns a list of available module naming schemes.\n \"\"\"\n # all ModuleNamingScheme subclasses available in easybuild.tools.module_naming_scheme namespace are eligible\n import_available_modules('easybuild.tools.module_naming_scheme')\n\n # construct name-to-class dict of available module naming scheme\n avail_mnss = dict([(x.__name__, x) for x in get_subclasses(ModuleNamingScheme)])\n\n return avail_mnss\n\n\ndef is_valid_module_name(mod_name):\n \"\"\"Check whether the specified value is a valid module name.\"\"\"\n # module name must be a string\n if not isinstance(mod_name, string_type):\n _log.warning(\"Wrong type for module name %s (%s), should be a string\" % (mod_name, type(mod_name)))\n return False\n # module name must be relative path\n elif mod_name.startswith(os.path.sep):\n _log.warning(\"Module name (%s) should be a relative file path\" % mod_name)\n return False\n # module name should not be empty\n elif not len(mod_name) > 0:\n _log.warning(\"Module name (%s) should have length > 0.\" % mod_name)\n return False\n else:\n # check whether module name only contains printable characters, since it's used as a filename\n # (except for carriage-control characters \\r, \\x0b and \\xoc)\n invalid_chars = [x for x in mod_name if x not in string.printable or x in '\\r\\x0b\\x0c']\n if len(invalid_chars) > 0:\n _log.warning(\"Module name %s contains invalid characters: %s\" % (mod_name, invalid_chars))\n return False\n _log.debug(\"Module name %s validated\" % mod_name)\n return True\n\n\ndef det_hidden_modname(modname):\n \"\"\"Determine the hidden equivalent of the specified module name.\"\"\"\n moddir = os.path.dirname(modname)\n modfile = os.path.basename(modname)\n return os.path.join(moddir, '.%s' % modfile).lstrip(os.path.sep)\n", "path": "easybuild/tools/module_naming_scheme/utilities.py"}], "after_files": [{"content": "##\n# Copyright 2009-2023 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as 
published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nUtility functions for implementating module naming schemes.\n\nAuthors:\n\n* Stijn De Weirdt (Ghent University)\n* Dries Verdegem (Ghent University)\n* Kenneth Hoste (Ghent University)\n* Pieter De Baets (Ghent University)\n* Jens Timmerman (Ghent University)\n* Fotis Georgatos (Uni.Lu, NTUA)\n\"\"\"\nimport os\nimport string\n\nfrom easybuild.base import fancylogger\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.module_naming_scheme.mns import ModuleNamingScheme\nfrom easybuild.tools.py2vs3 import string_type\nfrom easybuild.tools.toolchain.toolchain import SYSTEM_TOOLCHAIN_NAME, is_system_toolchain\nfrom easybuild.tools.utilities import get_subclasses, import_available_modules\n\n_log = fancylogger.getLogger('module_naming_scheme.utilities', fname=False)\n\n\ndef det_full_ec_version(ec):\n \"\"\"\n Determine exact install version, based on supplied easyconfig.\n e.g. 1.2.3-goalf-1.1.0-no-OFED or 1.2.3 (for system toolchains)\n \"\"\"\n\n ecver = None\n toolchain = ec.get('toolchain', {'name': SYSTEM_TOOLCHAIN_NAME})\n\n # determine main install version based on toolchain\n if is_system_toolchain(toolchain['name']):\n ecver = ec['version']\n else:\n ecver = \"%s-%s-%s\" % (ec['version'], toolchain['name'], toolchain['version'])\n\n # prepend/append version prefix/suffix\n versionprefix = ec.get('versionprefix', '')\n if versionprefix and not isinstance(versionprefix, string_type):\n raise EasyBuildError(\"versionprefix value should be a string, found '%s': %s (full spec: %s)\",\n type(versionprefix).__name__, versionprefix, ec)\n\n versionsuffix = ec.get('versionsuffix', '')\n if versionsuffix and not isinstance(versionsuffix, string_type):\n raise EasyBuildError(\"versionsuffix value should be a string, found '%s': %s (full spec: %s)\",\n type(versionsuffix).__name__, versionsuffix, ec)\n\n ecver = ''.join([x for x in [versionprefix or '', ecver, versionsuffix or ''] if x])\n\n return ecver\n\n\ndef avail_module_naming_schemes():\n \"\"\"\n Returns a list of available module naming schemes.\n \"\"\"\n # all ModuleNamingScheme subclasses available in easybuild.tools.module_naming_scheme namespace are eligible\n import_available_modules('easybuild.tools.module_naming_scheme')\n\n # construct name-to-class dict of available module naming scheme\n avail_mnss = dict([(x.__name__, x) for x in get_subclasses(ModuleNamingScheme)])\n\n return avail_mnss\n\n\ndef is_valid_module_name(mod_name):\n \"\"\"Check whether the specified value is a valid module name.\"\"\"\n # module name must be a string\n if not isinstance(mod_name, string_type):\n _log.warning(\"Wrong type for module name %s (%s), should be a string\" % (mod_name, type(mod_name)))\n return False\n # module name must be relative path\n elif mod_name.startswith(os.path.sep):\n _log.warning(\"Module name (%s) should be a relative file path\" % mod_name)\n return False\n # module name should not be empty\n elif not len(mod_name) > 0:\n _log.warning(\"Module name (%s) should have length > 0.\" % mod_name)\n return False\n else:\n # check whether module name only contains 
printable characters, since it's used as a filename\n # (except for carriage-control characters \\r, \\x0b and \\xoc)\n invalid_chars = [x for x in mod_name if x not in string.printable or x in '\\r\\x0b\\x0c']\n if len(invalid_chars) > 0:\n _log.warning(\"Module name %s contains invalid characters: %s\" % (mod_name, invalid_chars))\n return False\n _log.debug(\"Module name %s validated\" % mod_name)\n return True\n\n\ndef det_hidden_modname(modname):\n \"\"\"Determine the hidden equivalent of the specified module name.\"\"\"\n moddir = os.path.dirname(modname)\n modfile = os.path.basename(modname)\n return os.path.join(moddir, '.%s' % modfile).lstrip(os.path.sep)\n", "path": "easybuild/tools/module_naming_scheme/utilities.py"}]}
| 1,826 | 295 |
gh_patches_debug_23096
|
rasdani/github-patches
|
git_diff
|
pypa__pip-4355
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecate/Drop Support for Python 3.3?
Currently Python 3.3 support is not a major headache for support (unlike 3.2) because we're largely being limited by Python 2.6 and 2.7 in terms of what features we can support. However, I think it's important to periodically look at the usage and make sure that we're not supporting Python versions that are not really being used as no matter what, each version of Python we support incurs a cost in terms of overhead for support (more build matrix items, minor incompatibilities, etc).
With that in mind, I took a look at what % of the pip initiated traffic that PyPI received in the last month to see what our usage numbers look like.
Only pip 8 initiated traffic:
```
2.7 83.8%
3.5 6.3%
3.4 5.5%
2.6 3.8%
3.3 0.3%
3.6 0.1%
3.2 0.03%
```
All pip initiated traffic:
```
2.7 86.8%
3.4 5.2%
3.5 3.9%
2.6 3.4%
3.3 0.4%
3.2 0.07%
3.6 0.04%
```
Given that 3.3 support is well under 1%, do we want to deprecate support for Python 3.3 with intent to drop support for it in either pip 9 or pip 10? For myself, I say yes-- either as a pip 9 or a pip 10 deprecation.
@pypa/pip-committers ?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pip/basecommand.py`
Content:
```
1 """Base Command class, and related routines"""
2 from __future__ import absolute_import
3
4 import logging
5 import logging.config
6 import os
7 import sys
8 import optparse
9
10 from pip import cmdoptions
11 from pip.index import PackageFinder
12 from pip.locations import running_under_virtualenv
13 from pip.download import PipSession
14 from pip.exceptions import (BadCommand, InstallationError, UninstallationError,
15 CommandError, PreviousBuildDirError)
16
17 from pip.baseparser import ConfigOptionParser, UpdatingDefaultsHelpFormatter
18 from pip.req import InstallRequirement, parse_requirements
19 from pip.status_codes import (
20 SUCCESS, ERROR, UNKNOWN_ERROR, VIRTUALENV_NOT_FOUND,
21 PREVIOUS_BUILD_DIR_ERROR,
22 )
23 from pip.utils import get_prog, normalize_path
24 from pip.utils.logging import IndentingFormatter
25 from pip.utils.outdated import pip_version_check
26
27
28 __all__ = ['Command']
29
30
31 logger = logging.getLogger(__name__)
32
33
34 class Command(object):
35 name = None
36 usage = None
37 hidden = False
38 log_streams = ("ext://sys.stdout", "ext://sys.stderr")
39
40 def __init__(self, isolated=False):
41 parser_kw = {
42 'usage': self.usage,
43 'prog': '%s %s' % (get_prog(), self.name),
44 'formatter': UpdatingDefaultsHelpFormatter(),
45 'add_help_option': False,
46 'name': self.name,
47 'description': self.__doc__,
48 'isolated': isolated,
49 }
50
51 self.parser = ConfigOptionParser(**parser_kw)
52
53 # Commands should add options to this option group
54 optgroup_name = '%s Options' % self.name.capitalize()
55 self.cmd_opts = optparse.OptionGroup(self.parser, optgroup_name)
56
57 # Add the general options
58 gen_opts = cmdoptions.make_option_group(
59 cmdoptions.general_group,
60 self.parser,
61 )
62 self.parser.add_option_group(gen_opts)
63
64 def _build_session(self, options, retries=None, timeout=None):
65 session = PipSession(
66 cache=(
67 normalize_path(os.path.join(options.cache_dir, "http"))
68 if options.cache_dir else None
69 ),
70 retries=retries if retries is not None else options.retries,
71 insecure_hosts=options.trusted_hosts,
72 )
73
74 # Handle custom ca-bundles from the user
75 if options.cert:
76 session.verify = options.cert
77
78 # Handle SSL client certificate
79 if options.client_cert:
80 session.cert = options.client_cert
81
82 # Handle timeouts
83 if options.timeout or timeout:
84 session.timeout = (
85 timeout if timeout is not None else options.timeout
86 )
87
88 # Handle configured proxies
89 if options.proxy:
90 session.proxies = {
91 "http": options.proxy,
92 "https": options.proxy,
93 }
94
95 # Determine if we can prompt the user for authentication or not
96 session.auth.prompting = not options.no_input
97
98 return session
99
100 def parse_args(self, args):
101 # factored out for testability
102 return self.parser.parse_args(args)
103
104 def main(self, args):
105 options, args = self.parse_args(args)
106
107 if options.quiet:
108 if options.quiet == 1:
109 level = "WARNING"
110 if options.quiet == 2:
111 level = "ERROR"
112 else:
113 level = "CRITICAL"
114 elif options.verbose:
115 level = "DEBUG"
116 else:
117 level = "INFO"
118
119 # The root logger should match the "console" level *unless* we
120 # specified "--log" to send debug logs to a file.
121 root_level = level
122 if options.log:
123 root_level = "DEBUG"
124
125 logging.config.dictConfig({
126 "version": 1,
127 "disable_existing_loggers": False,
128 "filters": {
129 "exclude_warnings": {
130 "()": "pip.utils.logging.MaxLevelFilter",
131 "level": logging.WARNING,
132 },
133 },
134 "formatters": {
135 "indent": {
136 "()": IndentingFormatter,
137 "format": "%(message)s",
138 },
139 },
140 "handlers": {
141 "console": {
142 "level": level,
143 "class": "pip.utils.logging.ColorizedStreamHandler",
144 "stream": self.log_streams[0],
145 "filters": ["exclude_warnings"],
146 "formatter": "indent",
147 },
148 "console_errors": {
149 "level": "WARNING",
150 "class": "pip.utils.logging.ColorizedStreamHandler",
151 "stream": self.log_streams[1],
152 "formatter": "indent",
153 },
154 "user_log": {
155 "level": "DEBUG",
156 "class": "pip.utils.logging.BetterRotatingFileHandler",
157 "filename": options.log or "/dev/null",
158 "delay": True,
159 "formatter": "indent",
160 },
161 },
162 "root": {
163 "level": root_level,
164 "handlers": list(filter(None, [
165 "console",
166 "console_errors",
167 "user_log" if options.log else None,
168 ])),
169 },
170 # Disable any logging besides WARNING unless we have DEBUG level
171 # logging enabled. These use both pip._vendor and the bare names
172 # for the case where someone unbundles our libraries.
173 "loggers": dict(
174 (
175 name,
176 {
177 "level": (
178 "WARNING"
179 if level in ["INFO", "ERROR"]
180 else "DEBUG"
181 ),
182 },
183 )
184 for name in ["pip._vendor", "distlib", "requests", "urllib3"]
185 ),
186 })
187
188 # TODO: try to get these passing down from the command?
189 # without resorting to os.environ to hold these.
190
191 if options.no_input:
192 os.environ['PIP_NO_INPUT'] = '1'
193
194 if options.exists_action:
195 os.environ['PIP_EXISTS_ACTION'] = ' '.join(options.exists_action)
196
197 if options.require_venv:
198 # If a venv is required check if it can really be found
199 if not running_under_virtualenv():
200 logger.critical(
201 'Could not find an activated virtualenv (required).'
202 )
203 sys.exit(VIRTUALENV_NOT_FOUND)
204
205 try:
206 status = self.run(options, args)
207 # FIXME: all commands should return an exit status
208 # and when it is done, isinstance is not needed anymore
209 if isinstance(status, int):
210 return status
211 except PreviousBuildDirError as exc:
212 logger.critical(str(exc))
213 logger.debug('Exception information:', exc_info=True)
214
215 return PREVIOUS_BUILD_DIR_ERROR
216 except (InstallationError, UninstallationError, BadCommand) as exc:
217 logger.critical(str(exc))
218 logger.debug('Exception information:', exc_info=True)
219
220 return ERROR
221 except CommandError as exc:
222 logger.critical('ERROR: %s', exc)
223 logger.debug('Exception information:', exc_info=True)
224
225 return ERROR
226 except KeyboardInterrupt:
227 logger.critical('Operation cancelled by user')
228 logger.debug('Exception information:', exc_info=True)
229
230 return ERROR
231 except:
232 logger.critical('Exception:', exc_info=True)
233
234 return UNKNOWN_ERROR
235 finally:
236 # Check if we're using the latest version of pip available
237 if (not options.disable_pip_version_check and not
238 getattr(options, "no_index", False)):
239 with self._build_session(
240 options,
241 retries=0,
242 timeout=min(5, options.timeout)) as session:
243 pip_version_check(session)
244
245 return SUCCESS
246
247
248 class RequirementCommand(Command):
249
250 @staticmethod
251 def populate_requirement_set(requirement_set, args, options, finder,
252 session, name, wheel_cache):
253 """
254 Marshal cmd line args into a requirement set.
255 """
256 for filename in options.constraints:
257 for req in parse_requirements(
258 filename,
259 constraint=True, finder=finder, options=options,
260 session=session, wheel_cache=wheel_cache):
261 requirement_set.add_requirement(req)
262
263 for req in args:
264 requirement_set.add_requirement(
265 InstallRequirement.from_line(
266 req, None, isolated=options.isolated_mode,
267 wheel_cache=wheel_cache
268 )
269 )
270
271 for req in options.editables:
272 requirement_set.add_requirement(
273 InstallRequirement.from_editable(
274 req,
275 isolated=options.isolated_mode,
276 wheel_cache=wheel_cache
277 )
278 )
279
280 for filename in options.requirements:
281 for req in parse_requirements(
282 filename,
283 finder=finder, options=options, session=session,
284 wheel_cache=wheel_cache):
285 requirement_set.add_requirement(req)
286 # If --require-hashes was a line in a requirements file, tell
287 # RequirementSet about it:
288 requirement_set.require_hashes = options.require_hashes
289
290 if not (args or options.editables or options.requirements):
291 opts = {'name': name}
292 if options.find_links:
293 raise CommandError(
294 'You must give at least one requirement to %(name)s '
295 '(maybe you meant "pip %(name)s %(links)s"?)' %
296 dict(opts, links=' '.join(options.find_links)))
297 else:
298 raise CommandError(
299 'You must give at least one requirement to %(name)s '
300 '(see "pip help %(name)s")' % opts)
301
302 def _build_package_finder(self, options, session,
303 platform=None, python_versions=None,
304 abi=None, implementation=None):
305 """
306 Create a package finder appropriate to this requirement command.
307 """
308 index_urls = [options.index_url] + options.extra_index_urls
309 if options.no_index:
310 logger.debug('Ignoring indexes: %s', ','.join(index_urls))
311 index_urls = []
312
313 return PackageFinder(
314 find_links=options.find_links,
315 format_control=options.format_control,
316 index_urls=index_urls,
317 trusted_hosts=options.trusted_hosts,
318 allow_all_prereleases=options.pre,
319 process_dependency_links=options.process_dependency_links,
320 session=session,
321 platform=platform,
322 versions=python_versions,
323 abi=abi,
324 implementation=implementation,
325 )
326
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pip/basecommand.py b/pip/basecommand.py
--- a/pip/basecommand.py
+++ b/pip/basecommand.py
@@ -6,6 +6,7 @@
import os
import sys
import optparse
+import warnings
from pip import cmdoptions
from pip.index import PackageFinder
@@ -20,7 +21,7 @@
SUCCESS, ERROR, UNKNOWN_ERROR, VIRTUALENV_NOT_FOUND,
PREVIOUS_BUILD_DIR_ERROR,
)
-from pip.utils import get_prog, normalize_path
+from pip.utils import deprecation, get_prog, normalize_path
from pip.utils.logging import IndentingFormatter
from pip.utils.outdated import pip_version_check
@@ -185,6 +186,13 @@
),
})
+ if sys.version_info[:2] == (3, 3):
+ warnings.warn(
+ "Python 3.3 supported has been deprecated and support for it "
+ "will be dropped in the future. Please upgrade your Python.",
+ deprecation.RemovedInPip11Warning,
+ )
+
# TODO: try to get these passing down from the command?
# without resorting to os.environ to hold these.
|
{"golden_diff": "diff --git a/pip/basecommand.py b/pip/basecommand.py\n--- a/pip/basecommand.py\n+++ b/pip/basecommand.py\n@@ -6,6 +6,7 @@\n import os\n import sys\n import optparse\n+import warnings\n \n from pip import cmdoptions\n from pip.index import PackageFinder\n@@ -20,7 +21,7 @@\n SUCCESS, ERROR, UNKNOWN_ERROR, VIRTUALENV_NOT_FOUND,\n PREVIOUS_BUILD_DIR_ERROR,\n )\n-from pip.utils import get_prog, normalize_path\n+from pip.utils import deprecation, get_prog, normalize_path\n from pip.utils.logging import IndentingFormatter\n from pip.utils.outdated import pip_version_check\n \n@@ -185,6 +186,13 @@\n ),\n })\n \n+ if sys.version_info[:2] == (3, 3):\n+ warnings.warn(\n+ \"Python 3.3 supported has been deprecated and support for it \"\n+ \"will be dropped in the future. Please upgrade your Python.\",\n+ deprecation.RemovedInPip11Warning,\n+ )\n+\n # TODO: try to get these passing down from the command?\n # without resorting to os.environ to hold these.\n", "issue": "Deprecate/Drop Support for Python 3.3?\nCurrently Python 3.3 support is not a major headache for support (unlike 3.2) because we're largely being limited by Python 2.6 and 2.7 in terms of what features we can support. However, I think it's important to periodically look at the usage and make sure that we're not supporting Python versions that are not really being used as no matter what, each version of Python we support incurs a cost in terms of overhead for support (more build matrix items, minor incompatibilities, etc).\n\nWith that in mind, I took a look at what % of the pip initiated traffic that PyPI received in the last month to see what our usage numbers look like.\n\nOnly pip 8 initiated traffic:\n\n```\n2.7 83.8%\n3.5 6.3%\n3.4 5.5%\n2.6 3.8%\n3.3 0.3%\n3.6 0.1%\n3.2 0.03%\n```\n\nAll pip initiated traffic:\n\n```\n2.7 86.8%\n3.4 5.2%\n3.5 3.9%\n2.6 3.4%\n3.3 0.4%\n3.2 0.07%\n3.6 0.04%\n```\n\nGiven that 3.3 support is well under 1%, do we want to deprecate support for Python 3.3 with intent to drop support for it in either pip 9 or pip 10? 
For myself, I say yes-- either as a pip 9 or a pip 10 deprecation.\n\n@pypa/pip-committers ?\n\n", "before_files": [{"content": "\"\"\"Base Command class, and related routines\"\"\"\nfrom __future__ import absolute_import\n\nimport logging\nimport logging.config\nimport os\nimport sys\nimport optparse\n\nfrom pip import cmdoptions\nfrom pip.index import PackageFinder\nfrom pip.locations import running_under_virtualenv\nfrom pip.download import PipSession\nfrom pip.exceptions import (BadCommand, InstallationError, UninstallationError,\n CommandError, PreviousBuildDirError)\n\nfrom pip.baseparser import ConfigOptionParser, UpdatingDefaultsHelpFormatter\nfrom pip.req import InstallRequirement, parse_requirements\nfrom pip.status_codes import (\n SUCCESS, ERROR, UNKNOWN_ERROR, VIRTUALENV_NOT_FOUND,\n PREVIOUS_BUILD_DIR_ERROR,\n)\nfrom pip.utils import get_prog, normalize_path\nfrom pip.utils.logging import IndentingFormatter\nfrom pip.utils.outdated import pip_version_check\n\n\n__all__ = ['Command']\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Command(object):\n name = None\n usage = None\n hidden = False\n log_streams = (\"ext://sys.stdout\", \"ext://sys.stderr\")\n\n def __init__(self, isolated=False):\n parser_kw = {\n 'usage': self.usage,\n 'prog': '%s %s' % (get_prog(), self.name),\n 'formatter': UpdatingDefaultsHelpFormatter(),\n 'add_help_option': False,\n 'name': self.name,\n 'description': self.__doc__,\n 'isolated': isolated,\n }\n\n self.parser = ConfigOptionParser(**parser_kw)\n\n # Commands should add options to this option group\n optgroup_name = '%s Options' % self.name.capitalize()\n self.cmd_opts = optparse.OptionGroup(self.parser, optgroup_name)\n\n # Add the general options\n gen_opts = cmdoptions.make_option_group(\n cmdoptions.general_group,\n self.parser,\n )\n self.parser.add_option_group(gen_opts)\n\n def _build_session(self, options, retries=None, timeout=None):\n session = PipSession(\n cache=(\n normalize_path(os.path.join(options.cache_dir, \"http\"))\n if options.cache_dir else None\n ),\n retries=retries if retries is not None else options.retries,\n insecure_hosts=options.trusted_hosts,\n )\n\n # Handle custom ca-bundles from the user\n if options.cert:\n session.verify = options.cert\n\n # Handle SSL client certificate\n if options.client_cert:\n session.cert = options.client_cert\n\n # Handle timeouts\n if options.timeout or timeout:\n session.timeout = (\n timeout if timeout is not None else options.timeout\n )\n\n # Handle configured proxies\n if options.proxy:\n session.proxies = {\n \"http\": options.proxy,\n \"https\": options.proxy,\n }\n\n # Determine if we can prompt the user for authentication or not\n session.auth.prompting = not options.no_input\n\n return session\n\n def parse_args(self, args):\n # factored out for testability\n return self.parser.parse_args(args)\n\n def main(self, args):\n options, args = self.parse_args(args)\n\n if options.quiet:\n if options.quiet == 1:\n level = \"WARNING\"\n if options.quiet == 2:\n level = \"ERROR\"\n else:\n level = \"CRITICAL\"\n elif options.verbose:\n level = \"DEBUG\"\n else:\n level = \"INFO\"\n\n # The root logger should match the \"console\" level *unless* we\n # specified \"--log\" to send debug logs to a file.\n root_level = level\n if options.log:\n root_level = \"DEBUG\"\n\n logging.config.dictConfig({\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"filters\": {\n \"exclude_warnings\": {\n \"()\": \"pip.utils.logging.MaxLevelFilter\",\n \"level\": logging.WARNING,\n },\n 
},\n \"formatters\": {\n \"indent\": {\n \"()\": IndentingFormatter,\n \"format\": \"%(message)s\",\n },\n },\n \"handlers\": {\n \"console\": {\n \"level\": level,\n \"class\": \"pip.utils.logging.ColorizedStreamHandler\",\n \"stream\": self.log_streams[0],\n \"filters\": [\"exclude_warnings\"],\n \"formatter\": \"indent\",\n },\n \"console_errors\": {\n \"level\": \"WARNING\",\n \"class\": \"pip.utils.logging.ColorizedStreamHandler\",\n \"stream\": self.log_streams[1],\n \"formatter\": \"indent\",\n },\n \"user_log\": {\n \"level\": \"DEBUG\",\n \"class\": \"pip.utils.logging.BetterRotatingFileHandler\",\n \"filename\": options.log or \"/dev/null\",\n \"delay\": True,\n \"formatter\": \"indent\",\n },\n },\n \"root\": {\n \"level\": root_level,\n \"handlers\": list(filter(None, [\n \"console\",\n \"console_errors\",\n \"user_log\" if options.log else None,\n ])),\n },\n # Disable any logging besides WARNING unless we have DEBUG level\n # logging enabled. These use both pip._vendor and the bare names\n # for the case where someone unbundles our libraries.\n \"loggers\": dict(\n (\n name,\n {\n \"level\": (\n \"WARNING\"\n if level in [\"INFO\", \"ERROR\"]\n else \"DEBUG\"\n ),\n },\n )\n for name in [\"pip._vendor\", \"distlib\", \"requests\", \"urllib3\"]\n ),\n })\n\n # TODO: try to get these passing down from the command?\n # without resorting to os.environ to hold these.\n\n if options.no_input:\n os.environ['PIP_NO_INPUT'] = '1'\n\n if options.exists_action:\n os.environ['PIP_EXISTS_ACTION'] = ' '.join(options.exists_action)\n\n if options.require_venv:\n # If a venv is required check if it can really be found\n if not running_under_virtualenv():\n logger.critical(\n 'Could not find an activated virtualenv (required).'\n )\n sys.exit(VIRTUALENV_NOT_FOUND)\n\n try:\n status = self.run(options, args)\n # FIXME: all commands should return an exit status\n # and when it is done, isinstance is not needed anymore\n if isinstance(status, int):\n return status\n except PreviousBuildDirError as exc:\n logger.critical(str(exc))\n logger.debug('Exception information:', exc_info=True)\n\n return PREVIOUS_BUILD_DIR_ERROR\n except (InstallationError, UninstallationError, BadCommand) as exc:\n logger.critical(str(exc))\n logger.debug('Exception information:', exc_info=True)\n\n return ERROR\n except CommandError as exc:\n logger.critical('ERROR: %s', exc)\n logger.debug('Exception information:', exc_info=True)\n\n return ERROR\n except KeyboardInterrupt:\n logger.critical('Operation cancelled by user')\n logger.debug('Exception information:', exc_info=True)\n\n return ERROR\n except:\n logger.critical('Exception:', exc_info=True)\n\n return UNKNOWN_ERROR\n finally:\n # Check if we're using the latest version of pip available\n if (not options.disable_pip_version_check and not\n getattr(options, \"no_index\", False)):\n with self._build_session(\n options,\n retries=0,\n timeout=min(5, options.timeout)) as session:\n pip_version_check(session)\n\n return SUCCESS\n\n\nclass RequirementCommand(Command):\n\n @staticmethod\n def populate_requirement_set(requirement_set, args, options, finder,\n session, name, wheel_cache):\n \"\"\"\n Marshal cmd line args into a requirement set.\n \"\"\"\n for filename in options.constraints:\n for req in parse_requirements(\n filename,\n constraint=True, finder=finder, options=options,\n session=session, wheel_cache=wheel_cache):\n requirement_set.add_requirement(req)\n\n for req in args:\n requirement_set.add_requirement(\n InstallRequirement.from_line(\n req, None, 
isolated=options.isolated_mode,\n wheel_cache=wheel_cache\n )\n )\n\n for req in options.editables:\n requirement_set.add_requirement(\n InstallRequirement.from_editable(\n req,\n isolated=options.isolated_mode,\n wheel_cache=wheel_cache\n )\n )\n\n for filename in options.requirements:\n for req in parse_requirements(\n filename,\n finder=finder, options=options, session=session,\n wheel_cache=wheel_cache):\n requirement_set.add_requirement(req)\n # If --require-hashes was a line in a requirements file, tell\n # RequirementSet about it:\n requirement_set.require_hashes = options.require_hashes\n\n if not (args or options.editables or options.requirements):\n opts = {'name': name}\n if options.find_links:\n raise CommandError(\n 'You must give at least one requirement to %(name)s '\n '(maybe you meant \"pip %(name)s %(links)s\"?)' %\n dict(opts, links=' '.join(options.find_links)))\n else:\n raise CommandError(\n 'You must give at least one requirement to %(name)s '\n '(see \"pip help %(name)s\")' % opts)\n\n def _build_package_finder(self, options, session,\n platform=None, python_versions=None,\n abi=None, implementation=None):\n \"\"\"\n Create a package finder appropriate to this requirement command.\n \"\"\"\n index_urls = [options.index_url] + options.extra_index_urls\n if options.no_index:\n logger.debug('Ignoring indexes: %s', ','.join(index_urls))\n index_urls = []\n\n return PackageFinder(\n find_links=options.find_links,\n format_control=options.format_control,\n index_urls=index_urls,\n trusted_hosts=options.trusted_hosts,\n allow_all_prereleases=options.pre,\n process_dependency_links=options.process_dependency_links,\n session=session,\n platform=platform,\n versions=python_versions,\n abi=abi,\n implementation=implementation,\n )\n", "path": "pip/basecommand.py"}], "after_files": [{"content": "\"\"\"Base Command class, and related routines\"\"\"\nfrom __future__ import absolute_import\n\nimport logging\nimport logging.config\nimport os\nimport sys\nimport optparse\nimport warnings\n\nfrom pip import cmdoptions\nfrom pip.index import PackageFinder\nfrom pip.locations import running_under_virtualenv\nfrom pip.download import PipSession\nfrom pip.exceptions import (BadCommand, InstallationError, UninstallationError,\n CommandError, PreviousBuildDirError)\n\nfrom pip.baseparser import ConfigOptionParser, UpdatingDefaultsHelpFormatter\nfrom pip.req import InstallRequirement, parse_requirements\nfrom pip.status_codes import (\n SUCCESS, ERROR, UNKNOWN_ERROR, VIRTUALENV_NOT_FOUND,\n PREVIOUS_BUILD_DIR_ERROR,\n)\nfrom pip.utils import deprecation, get_prog, normalize_path\nfrom pip.utils.logging import IndentingFormatter\nfrom pip.utils.outdated import pip_version_check\n\n\n__all__ = ['Command']\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Command(object):\n name = None\n usage = None\n hidden = False\n log_streams = (\"ext://sys.stdout\", \"ext://sys.stderr\")\n\n def __init__(self, isolated=False):\n parser_kw = {\n 'usage': self.usage,\n 'prog': '%s %s' % (get_prog(), self.name),\n 'formatter': UpdatingDefaultsHelpFormatter(),\n 'add_help_option': False,\n 'name': self.name,\n 'description': self.__doc__,\n 'isolated': isolated,\n }\n\n self.parser = ConfigOptionParser(**parser_kw)\n\n # Commands should add options to this option group\n optgroup_name = '%s Options' % self.name.capitalize()\n self.cmd_opts = optparse.OptionGroup(self.parser, optgroup_name)\n\n # Add the general options\n gen_opts = cmdoptions.make_option_group(\n cmdoptions.general_group,\n 
self.parser,\n )\n self.parser.add_option_group(gen_opts)\n\n def _build_session(self, options, retries=None, timeout=None):\n session = PipSession(\n cache=(\n normalize_path(os.path.join(options.cache_dir, \"http\"))\n if options.cache_dir else None\n ),\n retries=retries if retries is not None else options.retries,\n insecure_hosts=options.trusted_hosts,\n )\n\n # Handle custom ca-bundles from the user\n if options.cert:\n session.verify = options.cert\n\n # Handle SSL client certificate\n if options.client_cert:\n session.cert = options.client_cert\n\n # Handle timeouts\n if options.timeout or timeout:\n session.timeout = (\n timeout if timeout is not None else options.timeout\n )\n\n # Handle configured proxies\n if options.proxy:\n session.proxies = {\n \"http\": options.proxy,\n \"https\": options.proxy,\n }\n\n # Determine if we can prompt the user for authentication or not\n session.auth.prompting = not options.no_input\n\n return session\n\n def parse_args(self, args):\n # factored out for testability\n return self.parser.parse_args(args)\n\n def main(self, args):\n options, args = self.parse_args(args)\n\n if options.quiet:\n if options.quiet == 1:\n level = \"WARNING\"\n if options.quiet == 2:\n level = \"ERROR\"\n else:\n level = \"CRITICAL\"\n elif options.verbose:\n level = \"DEBUG\"\n else:\n level = \"INFO\"\n\n # The root logger should match the \"console\" level *unless* we\n # specified \"--log\" to send debug logs to a file.\n root_level = level\n if options.log:\n root_level = \"DEBUG\"\n\n logging.config.dictConfig({\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"filters\": {\n \"exclude_warnings\": {\n \"()\": \"pip.utils.logging.MaxLevelFilter\",\n \"level\": logging.WARNING,\n },\n },\n \"formatters\": {\n \"indent\": {\n \"()\": IndentingFormatter,\n \"format\": \"%(message)s\",\n },\n },\n \"handlers\": {\n \"console\": {\n \"level\": level,\n \"class\": \"pip.utils.logging.ColorizedStreamHandler\",\n \"stream\": self.log_streams[0],\n \"filters\": [\"exclude_warnings\"],\n \"formatter\": \"indent\",\n },\n \"console_errors\": {\n \"level\": \"WARNING\",\n \"class\": \"pip.utils.logging.ColorizedStreamHandler\",\n \"stream\": self.log_streams[1],\n \"formatter\": \"indent\",\n },\n \"user_log\": {\n \"level\": \"DEBUG\",\n \"class\": \"pip.utils.logging.BetterRotatingFileHandler\",\n \"filename\": options.log or \"/dev/null\",\n \"delay\": True,\n \"formatter\": \"indent\",\n },\n },\n \"root\": {\n \"level\": root_level,\n \"handlers\": list(filter(None, [\n \"console\",\n \"console_errors\",\n \"user_log\" if options.log else None,\n ])),\n },\n # Disable any logging besides WARNING unless we have DEBUG level\n # logging enabled. These use both pip._vendor and the bare names\n # for the case where someone unbundles our libraries.\n \"loggers\": dict(\n (\n name,\n {\n \"level\": (\n \"WARNING\"\n if level in [\"INFO\", \"ERROR\"]\n else \"DEBUG\"\n ),\n },\n )\n for name in [\"pip._vendor\", \"distlib\", \"requests\", \"urllib3\"]\n ),\n })\n\n if sys.version_info[:2] == (3, 3):\n warnings.warn(\n \"Python 3.3 supported has been deprecated and support for it \"\n \"will be dropped in the future. 
Please upgrade your Python.\",\n deprecation.RemovedInPip11Warning,\n )\n\n # TODO: try to get these passing down from the command?\n # without resorting to os.environ to hold these.\n\n if options.no_input:\n os.environ['PIP_NO_INPUT'] = '1'\n\n if options.exists_action:\n os.environ['PIP_EXISTS_ACTION'] = ' '.join(options.exists_action)\n\n if options.require_venv:\n # If a venv is required check if it can really be found\n if not running_under_virtualenv():\n logger.critical(\n 'Could not find an activated virtualenv (required).'\n )\n sys.exit(VIRTUALENV_NOT_FOUND)\n\n try:\n status = self.run(options, args)\n # FIXME: all commands should return an exit status\n # and when it is done, isinstance is not needed anymore\n if isinstance(status, int):\n return status\n except PreviousBuildDirError as exc:\n logger.critical(str(exc))\n logger.debug('Exception information:', exc_info=True)\n\n return PREVIOUS_BUILD_DIR_ERROR\n except (InstallationError, UninstallationError, BadCommand) as exc:\n logger.critical(str(exc))\n logger.debug('Exception information:', exc_info=True)\n\n return ERROR\n except CommandError as exc:\n logger.critical('ERROR: %s', exc)\n logger.debug('Exception information:', exc_info=True)\n\n return ERROR\n except KeyboardInterrupt:\n logger.critical('Operation cancelled by user')\n logger.debug('Exception information:', exc_info=True)\n\n return ERROR\n except:\n logger.critical('Exception:', exc_info=True)\n\n return UNKNOWN_ERROR\n finally:\n # Check if we're using the latest version of pip available\n if (not options.disable_pip_version_check and not\n getattr(options, \"no_index\", False)):\n with self._build_session(\n options,\n retries=0,\n timeout=min(5, options.timeout)) as session:\n pip_version_check(session)\n\n return SUCCESS\n\n\nclass RequirementCommand(Command):\n\n @staticmethod\n def populate_requirement_set(requirement_set, args, options, finder,\n session, name, wheel_cache):\n \"\"\"\n Marshal cmd line args into a requirement set.\n \"\"\"\n for filename in options.constraints:\n for req in parse_requirements(\n filename,\n constraint=True, finder=finder, options=options,\n session=session, wheel_cache=wheel_cache):\n requirement_set.add_requirement(req)\n\n for req in args:\n requirement_set.add_requirement(\n InstallRequirement.from_line(\n req, None, isolated=options.isolated_mode,\n wheel_cache=wheel_cache\n )\n )\n\n for req in options.editables:\n requirement_set.add_requirement(\n InstallRequirement.from_editable(\n req,\n isolated=options.isolated_mode,\n wheel_cache=wheel_cache\n )\n )\n\n for filename in options.requirements:\n for req in parse_requirements(\n filename,\n finder=finder, options=options, session=session,\n wheel_cache=wheel_cache):\n requirement_set.add_requirement(req)\n # If --require-hashes was a line in a requirements file, tell\n # RequirementSet about it:\n requirement_set.require_hashes = options.require_hashes\n\n if not (args or options.editables or options.requirements):\n opts = {'name': name}\n if options.find_links:\n raise CommandError(\n 'You must give at least one requirement to %(name)s '\n '(maybe you meant \"pip %(name)s %(links)s\"?)' %\n dict(opts, links=' '.join(options.find_links)))\n else:\n raise CommandError(\n 'You must give at least one requirement to %(name)s '\n '(see \"pip help %(name)s\")' % opts)\n\n def _build_package_finder(self, options, session,\n platform=None, python_versions=None,\n abi=None, implementation=None):\n \"\"\"\n Create a package finder appropriate to this requirement 
command.\n \"\"\"\n index_urls = [options.index_url] + options.extra_index_urls\n if options.no_index:\n logger.debug('Ignoring indexes: %s', ','.join(index_urls))\n index_urls = []\n\n return PackageFinder(\n find_links=options.find_links,\n format_control=options.format_control,\n index_urls=index_urls,\n trusted_hosts=options.trusted_hosts,\n allow_all_prereleases=options.pre,\n process_dependency_links=options.process_dependency_links,\n session=session,\n platform=platform,\n versions=python_versions,\n abi=abi,\n implementation=implementation,\n )\n", "path": "pip/basecommand.py"}]}
| 3,648 | 265 |
gh_patches_debug_18476
|
rasdani/github-patches
|
git_diff
|
learningequality__kolibri-11433
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow site title to be customised
## Overview
Allow the site title to be customised; it’s currently hardcoded as ‘Kolibri’.
#### Description and outcomes
The site title is used in only a few places: the `<title>` of the base page and the ‘unsupported browser’ page, and the name in the PWA manifest.
Almost all of the time, the title is overridden by the plugin being used, via vuejs, so users will typically see something like ‘Explore’ or ‘Library’ instead of ‘Kolibri’.
The place where the default ‘Kolibri’ title is slightly problematic at the moment is in the PWA plugin: the name of the PWA is set to ‘Kolibri’, and that’s shown much more prominently in the browser’s list of PWA apps, or on the desktop app chooser when trying to run it.
For Endless Key in particular, that’s a bit problematic because users will likely try to find the PWA from their desktop by searching for ‘Endless Key’ rather than ‘Kolibri’.
So it would be good to either be able to:
- Separate the site title from the name of the platform (which will always be Kolibri), and allow the site title to be customised.
- Or, specifically set the site title in the configuration for the PWA plugin.
The second option is much more self-contained, but doesn’t seem semantically correct to me. The PWA manifest should be reflecting the main site’s configuration.
#### Resources
- https://developer.mozilla.org/en-US/docs/Web/Manifest/name
- https://developer.mozilla.org/en-US/docs/Web/Manifest/short_name
#### Accessibility Requirements
Having an installed PWA use the name users will be most familiar with seems like an accessibility issue, although I have not been approaching it from that angle and don’t know which specific accessibility spec applies here.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/core/templatetags/core_tags.py`
Content:
```
1 """
2 Kolibri template tags
3 =====================
4 """
5 from __future__ import absolute_import
6 from __future__ import print_function
7 from __future__ import unicode_literals
8
9 from django import template
10 from django.templatetags.static import static
11 from django.utils.html import format_html
12
13 from kolibri.core.hooks import FrontEndBaseASyncHook
14 from kolibri.core.hooks import FrontEndBaseHeadHook
15 from kolibri.core.hooks import FrontEndBaseSyncHook
16 from kolibri.core.theme_hook import ThemeHook
17
18 register = template.Library()
19
20
21 @register.simple_tag()
22 def frontend_base_assets():
23 """
24 This is a script tag for all ``FrontEndAssetHook`` hooks that implement a
25 render_to_html() method - this is used in ``/base.html`` template to
26 populate any Javascript and CSS that should be loaded at page load.
27
28 :return: HTML of script tags to insert into base.html
29 """
30 return FrontEndBaseSyncHook.html()
31
32
33 @register.simple_tag()
34 def frontend_base_async_assets():
35 """
36 This is a script tag for all ``FrontEndAssetHook`` hooks that implement a
37 render_to_html() method - this is used in ``/base.html`` template to
38 populate any Javascript and CSS that should be loaded at page load.
39
40 :return: HTML of script tags to insert into base.html
41 """
42 return FrontEndBaseASyncHook.html()
43
44
45 @register.simple_tag()
46 def frontend_base_head_markup():
47 """
48 This is a script tag for all ``FrontEndBaseHeadHook`` hooks that implement
49 a render_to_html() method - this is used in the ``/base.html`` template to
50 inject arbitrary markup into the ``<head>`` element.
51
52 :return: HTML to insert into head of base.html
53 """
54 return FrontEndBaseHeadHook.html()
55
56
57 @register.simple_tag()
58 def theme_favicon():
59 """
60 Render a favicon link to put in the <head> tag of base.html, if a favicon is
61 provided by the theme. If not, a default will be returned.
62 """
63 favicon_urls = [
64 logo["src"]
65 for logo in ThemeHook.get_theme().get("logos", [])
66 if logo.get("content_type", "") == "image/vnd.microsoft.icon"
67 ]
68
69 # Choose the first available .ico file. It's unlikely there's more than
70 # one specified in the theme.
71 favicon_url = favicon_urls[0] if favicon_urls else static("assets/logo.ico")
72
73 return format_html('<link rel="shortcut icon" href="{}">', favicon_url)
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kolibri/core/templatetags/core_tags.py b/kolibri/core/templatetags/core_tags.py
--- a/kolibri/core/templatetags/core_tags.py
+++ b/kolibri/core/templatetags/core_tags.py
@@ -14,6 +14,7 @@
from kolibri.core.hooks import FrontEndBaseHeadHook
from kolibri.core.hooks import FrontEndBaseSyncHook
from kolibri.core.theme_hook import ThemeHook
+from kolibri.utils.translation import ugettext as _
register = template.Library()
@@ -71,3 +72,13 @@
favicon_url = favicon_urls[0] if favicon_urls else static("assets/logo.ico")
return format_html('<link rel="shortcut icon" href="{}">', favicon_url)
+
+
+@register.simple_tag()
+def site_title():
+ """
+ Return the text of the site title, if provided by the theme. If not, the
+ default will be returned. The site title may be translated, to allow for
+ transliteration into other alphabets where needed.
+ """
+ return ThemeHook.get_theme().get("siteTitle", _("Kolibri"))
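
For context, the tag added above simply prefers a theme-provided `siteTitle` and falls back to the translated default. A minimal sketch of that lookup behaviour (the helper name and the "Endless Key" value are illustrative, not taken from the repository):

```python
# Sketch of the fallback behaviour implemented by the new `site_title` tag.
def site_title_for(theme):
    """Prefer the theme's `siteTitle`; otherwise use the default 'Kolibri'."""
    return theme.get("siteTitle", "Kolibri")

site_title_for({})                            # -> "Kolibri"
site_title_for({"siteTitle": "Endless Key"})  # -> "Endless Key"
```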
|
{"golden_diff": "diff --git a/kolibri/core/templatetags/core_tags.py b/kolibri/core/templatetags/core_tags.py\n--- a/kolibri/core/templatetags/core_tags.py\n+++ b/kolibri/core/templatetags/core_tags.py\n@@ -14,6 +14,7 @@\n from kolibri.core.hooks import FrontEndBaseHeadHook\n from kolibri.core.hooks import FrontEndBaseSyncHook\n from kolibri.core.theme_hook import ThemeHook\n+from kolibri.utils.translation import ugettext as _\n \n register = template.Library()\n \n@@ -71,3 +72,13 @@\n favicon_url = favicon_urls[0] if favicon_urls else static(\"assets/logo.ico\")\n \n return format_html('<link rel=\"shortcut icon\" href=\"{}\">', favicon_url)\n+\n+\[email protected]_tag()\n+def site_title():\n+ \"\"\"\n+ Return the text of the site title, if provided by the theme. If not, the\n+ default will be returned. The site title may be translated, to allow for\n+ transliteration into other alphabets where needed.\n+ \"\"\"\n+ return ThemeHook.get_theme().get(\"siteTitle\", _(\"Kolibri\"))\n", "issue": "Allow site title to be customised\n## Overview\r\n\r\nAllow the site title to be customised; it\u2019s currently hardcoded as \u2018Kolibri\u2019.\r\n\r\n#### Description and outcomes\r\n\r\nThe site title is used in only a few places: the `<title>` of the base page and the \u2018unsupported browser\u2019 page, and the name in the PWA manifest.\r\n\r\nAlmost all of the time, the title is overridden by the plugin being used, via vuejs, so users will typically see something like \u2018Explore\u2019 or \u2018Library\u2019 instead of \u2018Kolibri\u2019.\r\n\r\nThe place where the default \u2018Kolibri\u2019 title is slightly problematic at the moment is in the PWA plugin: the name of the PWA is set to \u2018Kolibri\u2019, and that\u2019s shown much more prominently in the browser\u2019s list of PWA apps, or on the desktop app chooser when trying to run it.\r\n\r\nFor Endless Key in particular, that\u2019s a bit problematic because users will likely try to find the PWA from their desktop by searching for \u2018Endless Key\u2019 rather than \u2018Kolibri\u2019.\r\n\r\nSo it would be good to either be able to:\r\n - Separate the site title from the name of the platform (which will always be Kolibri), and allow the site title to be customised.\r\n - Or, specifically set the site title in the configuration for the PWA plugin.\r\n\r\nThe second option is much more self-contained, but doesn\u2019t seem semantically correct to me. 
The PWA manifest should be reflecting the main site\u2019s configuration.\r\n\r\n#### Resources\r\n\r\n - https://developer.mozilla.org/en-US/docs/Web/Manifest/name\r\n - https://developer.mozilla.org/en-US/docs/Web/Manifest/short_name\r\n\r\n#### Accessibility Requirements\r\n\r\nHaving an installed PWA use the name the users will be most familiar with it seems like an accessibility issue, although I have not been approaching it from that angle and don\u2019t know which specific accessibility spec applies here.\n", "before_files": [{"content": "\"\"\"\nKolibri template tags\n=====================\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nfrom django import template\nfrom django.templatetags.static import static\nfrom django.utils.html import format_html\n\nfrom kolibri.core.hooks import FrontEndBaseASyncHook\nfrom kolibri.core.hooks import FrontEndBaseHeadHook\nfrom kolibri.core.hooks import FrontEndBaseSyncHook\nfrom kolibri.core.theme_hook import ThemeHook\n\nregister = template.Library()\n\n\[email protected]_tag()\ndef frontend_base_assets():\n \"\"\"\n This is a script tag for all ``FrontEndAssetHook`` hooks that implement a\n render_to_html() method - this is used in ``/base.html`` template to\n populate any Javascript and CSS that should be loaded at page load.\n\n :return: HTML of script tags to insert into base.html\n \"\"\"\n return FrontEndBaseSyncHook.html()\n\n\[email protected]_tag()\ndef frontend_base_async_assets():\n \"\"\"\n This is a script tag for all ``FrontEndAssetHook`` hooks that implement a\n render_to_html() method - this is used in ``/base.html`` template to\n populate any Javascript and CSS that should be loaded at page load.\n\n :return: HTML of script tags to insert into base.html\n \"\"\"\n return FrontEndBaseASyncHook.html()\n\n\[email protected]_tag()\ndef frontend_base_head_markup():\n \"\"\"\n This is a script tag for all ``FrontEndBaseHeadHook`` hooks that implement\n a render_to_html() method - this is used in the ``/base.html`` template to\n inject arbitrary markup into the ``<head>`` element.\n\n :return: HTML to insert into head of base.html\n \"\"\"\n return FrontEndBaseHeadHook.html()\n\n\[email protected]_tag()\ndef theme_favicon():\n \"\"\"\n Render a favicon link to put in the <head> tag of base.html, if a favicon is\n provided by the theme. If not, a default will be returned.\n \"\"\"\n favicon_urls = [\n logo[\"src\"]\n for logo in ThemeHook.get_theme().get(\"logos\", [])\n if logo.get(\"content_type\", \"\") == \"image/vnd.microsoft.icon\"\n ]\n\n # Choose the first available .ico file. 
It's unlikely there's more than\n # one specified in the theme.\n favicon_url = favicon_urls[0] if favicon_urls else static(\"assets/logo.ico\")\n\n return format_html('<link rel=\"shortcut icon\" href=\"{}\">', favicon_url)\n", "path": "kolibri/core/templatetags/core_tags.py"}], "after_files": [{"content": "\"\"\"\nKolibri template tags\n=====================\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nfrom django import template\nfrom django.templatetags.static import static\nfrom django.utils.html import format_html\n\nfrom kolibri.core.hooks import FrontEndBaseASyncHook\nfrom kolibri.core.hooks import FrontEndBaseHeadHook\nfrom kolibri.core.hooks import FrontEndBaseSyncHook\nfrom kolibri.core.theme_hook import ThemeHook\nfrom kolibri.utils.translation import ugettext as _\n\nregister = template.Library()\n\n\[email protected]_tag()\ndef frontend_base_assets():\n \"\"\"\n This is a script tag for all ``FrontEndAssetHook`` hooks that implement a\n render_to_html() method - this is used in ``/base.html`` template to\n populate any Javascript and CSS that should be loaded at page load.\n\n :return: HTML of script tags to insert into base.html\n \"\"\"\n return FrontEndBaseSyncHook.html()\n\n\[email protected]_tag()\ndef frontend_base_async_assets():\n \"\"\"\n This is a script tag for all ``FrontEndAssetHook`` hooks that implement a\n render_to_html() method - this is used in ``/base.html`` template to\n populate any Javascript and CSS that should be loaded at page load.\n\n :return: HTML of script tags to insert into base.html\n \"\"\"\n return FrontEndBaseASyncHook.html()\n\n\[email protected]_tag()\ndef frontend_base_head_markup():\n \"\"\"\n This is a script tag for all ``FrontEndBaseHeadHook`` hooks that implement\n a render_to_html() method - this is used in the ``/base.html`` template to\n inject arbitrary markup into the ``<head>`` element.\n\n :return: HTML to insert into head of base.html\n \"\"\"\n return FrontEndBaseHeadHook.html()\n\n\[email protected]_tag()\ndef theme_favicon():\n \"\"\"\n Render a favicon link to put in the <head> tag of base.html, if a favicon is\n provided by the theme. If not, a default will be returned.\n \"\"\"\n favicon_urls = [\n logo[\"src\"]\n for logo in ThemeHook.get_theme().get(\"logos\", [])\n if logo.get(\"content_type\", \"\") == \"image/vnd.microsoft.icon\"\n ]\n\n # Choose the first available .ico file. It's unlikely there's more than\n # one specified in the theme.\n favicon_url = favicon_urls[0] if favicon_urls else static(\"assets/logo.ico\")\n\n return format_html('<link rel=\"shortcut icon\" href=\"{}\">', favicon_url)\n\n\[email protected]_tag()\ndef site_title():\n \"\"\"\n Return the text of the site title, if provided by the theme. If not, the\n default will be returned. The site title may be translated, to allow for\n transliteration into other alphabets where needed.\n \"\"\"\n return ThemeHook.get_theme().get(\"siteTitle\", _(\"Kolibri\"))\n", "path": "kolibri/core/templatetags/core_tags.py"}]}
| 1,350 | 265 |
gh_patches_debug_12329
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-2833
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Configuration file] keys are not used
##### Steps to reproduce the problem:
1. Create a configuration file at .mitmproxy/config.yaml
2. Set this configuration:
mode: "transparent"
showhost: true
3. Start mitmproxy using this command : "mitmproxy --conf config.yaml" (I'm on the .mitmproxy folder of course)
The process is started but the traffic is not working
4. Start mitmproxy using this command : "mitmproxy -T --host"
The process is started and the traffic is working
##### Any other comments? What have you tried so far?
I tried to use the ":" and "=" as separator for the YAML file but only the ":" is recognized apparently
##### System information
I have the latest release of mitmproxy (v2.0.2)
Well I've just noticed that there is a v3 release, maybe this could help me?
I prefer to create this ticket if someone else has the same issue :)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/addons/script.py`
Content:
```
1 import os
2 import importlib.util
3 import importlib.machinery
4 import time
5 import sys
6 import types
7 import typing
8
9 from mitmproxy import addonmanager
10 from mitmproxy import exceptions
11 from mitmproxy import flow
12 from mitmproxy import command
13 from mitmproxy import eventsequence
14 from mitmproxy import ctx
15
16
17 def load_script(path: str) -> types.ModuleType:
18 fullname = "__mitmproxy_script__.{}".format(
19 os.path.splitext(os.path.basename(path))[0]
20 )
21 # the fullname is not unique among scripts, so if there already is an existing script with said
22 # fullname, remove it.
23 sys.modules.pop(fullname, None)
24 oldpath = sys.path
25 sys.path.insert(0, os.path.dirname(path))
26 try:
27 loader = importlib.machinery.SourceFileLoader(fullname, path)
28 spec = importlib.util.spec_from_loader(fullname, loader=loader)
29 m = importlib.util.module_from_spec(spec)
30 loader.exec_module(m)
31 if not getattr(m, "name", None):
32 m.name = path # type: ignore
33 return m
34 finally:
35 sys.path[:] = oldpath
36
37
38 class Script:
39 """
40 An addon that manages a single script.
41 """
42 ReloadInterval = 2
43
44 def __init__(self, path):
45 self.name = "scriptmanager:" + path
46 self.path = path
47 self.fullpath = os.path.expanduser(path)
48 self.ns = None
49
50 self.last_load = 0
51 self.last_mtime = 0
52 if not os.path.isfile(self.fullpath):
53 raise exceptions.OptionsError("No such script: %s" % path)
54
55 @property
56 def addons(self):
57 return [self.ns] if self.ns else []
58
59 def tick(self):
60 if time.time() - self.last_load > self.ReloadInterval:
61 try:
62 mtime = os.stat(self.fullpath).st_mtime
63 except FileNotFoundError:
64 scripts = list(ctx.options.scripts)
65 scripts.remove(self.path)
66 ctx.options.update(scripts=scripts)
67 return
68
69 if mtime > self.last_mtime:
70 ctx.log.info("Loading script: %s" % self.path)
71 if self.ns:
72 ctx.master.addons.remove(self.ns)
73 self.ns = None
74 with addonmanager.safecall():
75 ns = load_script(self.fullpath)
76 ctx.master.addons.register(ns)
77 self.ns = ns
78 if self.ns:
79 # We're already running, so we have to explicitly register and
80 # configure the addon
81 ctx.master.addons.invoke_addon(self.ns, "running")
82 ctx.master.addons.invoke_addon(
83 self.ns,
84 "configure",
85 ctx.options.keys()
86 )
87 self.last_load = time.time()
88 self.last_mtime = mtime
89
90
91 class ScriptLoader:
92 """
93 An addon that manages loading scripts from options.
94 """
95 def __init__(self):
96 self.is_running = False
97 self.addons = []
98
99 def running(self):
100 self.is_running = True
101
102 @command.command("script.run")
103 def script_run(self, flows: typing.Sequence[flow.Flow], path: str) -> None:
104 """
105 Run a script on the specified flows. The script is loaded with
106 default options, and all lifecycle events for each flow are
107 simulated.
108 """
109 try:
110 s = Script(path)
111 l = addonmanager.Loader(ctx.master)
112 ctx.master.addons.invoke_addon(s, "load", l)
113 ctx.master.addons.invoke_addon(s, "configure", ctx.options.keys())
114 # Script is loaded on the first tick
115 ctx.master.addons.invoke_addon(s, "tick")
116 for f in flows:
117 for evt, arg in eventsequence.iterate(f):
118 ctx.master.addons.invoke_addon(s, evt, arg)
119 except exceptions.OptionsError as e:
120 raise exceptions.CommandError("Error running script: %s" % e) from e
121
122 def configure(self, updated):
123 if "scripts" in updated:
124 for s in ctx.options.scripts:
125 if ctx.options.scripts.count(s) > 1:
126 raise exceptions.OptionsError("Duplicate script: %s" % s)
127
128 for a in self.addons[:]:
129 if a.path not in ctx.options.scripts:
130 ctx.log.info("Un-loading script: %s" % a.name)
131 ctx.master.addons.remove(a)
132 self.addons.remove(a)
133
134 # The machinations below are to ensure that:
135 # - Scripts remain in the same order
136 # - Scripts are not initialized un-necessarily. If only a
137 # script's order in the script list has changed, it is just
138 # moved.
139
140 current = {}
141 for a in self.addons:
142 current[a.path] = a
143
144 ordered = []
145 newscripts = []
146 for s in ctx.options.scripts:
147 if s in current:
148 ordered.append(current[s])
149 else:
150 sc = Script(s)
151 ordered.append(sc)
152 newscripts.append(sc)
153
154 self.addons = ordered
155
156 for s in newscripts:
157 ctx.master.addons.register(s)
158 if self.is_running:
159 # If we're already running, we configure and tell the addon
160 # we're up and running.
161 ctx.master.addons.invoke_addon(s, "running")
162
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mitmproxy/addons/script.py b/mitmproxy/addons/script.py
--- a/mitmproxy/addons/script.py
+++ b/mitmproxy/addons/script.py
@@ -44,13 +44,15 @@
def __init__(self, path):
self.name = "scriptmanager:" + path
self.path = path
- self.fullpath = os.path.expanduser(path)
+ self.fullpath = os.path.expanduser(
+ path.strip("'\" ")
+ )
self.ns = None
self.last_load = 0
self.last_mtime = 0
if not os.path.isfile(self.fullpath):
- raise exceptions.OptionsError("No such script: %s" % path)
+ raise exceptions.OptionsError('No such script: "%s"' % self.fullpath)
@property
def addons(self):
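
The heart of the fix is that a script path coming from the options (for example one set in a YAML config file) may still carry surrounding quotes or whitespace, which made the file-existence check fail. A small sketch of the clean-up applied above, with invented example paths:

```python
import os

def clean_script_path(path: str) -> str:
    # Strip stray quotes/spaces that can survive option parsing, then expand "~".
    return os.path.expanduser(path.strip("'\" "))

clean_script_path("'~/scripts/redirect.py'")  # -> "/home/<user>/scripts/redirect.py"
clean_script_path(' "my_addon.py" ')          # -> "my_addon.py"
```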
|
{"golden_diff": "diff --git a/mitmproxy/addons/script.py b/mitmproxy/addons/script.py\n--- a/mitmproxy/addons/script.py\n+++ b/mitmproxy/addons/script.py\n@@ -44,13 +44,15 @@\n def __init__(self, path):\n self.name = \"scriptmanager:\" + path\n self.path = path\n- self.fullpath = os.path.expanduser(path)\n+ self.fullpath = os.path.expanduser(\n+ path.strip(\"'\\\" \")\n+ )\n self.ns = None\n \n self.last_load = 0\n self.last_mtime = 0\n if not os.path.isfile(self.fullpath):\n- raise exceptions.OptionsError(\"No such script: %s\" % path)\n+ raise exceptions.OptionsError('No such script: \"%s\"' % self.fullpath)\n \n @property\n def addons(self):\n", "issue": "[Configuration file] keys are not used\n##### Steps to reproduce the problem:\r\n\r\n1. Create a configuration file at .mitmproxy/config.yaml\r\n2. Set this configuration:\r\nmode: \"transparent\"\r\nshowhost: true \r\n3. Start mitmproxy using this command : \"mitmproxy --conf config.yaml\" (I'm on the .mitmproxy folder of course)\r\nThe process is started but the traffic is not working\r\n4. Start mitmproxy using this command : \"mitmproxy -T --host\"\r\nThe process is started and the traffic is working\r\n\r\n\r\n##### Any other comments? What have you tried so far?\r\n\r\nI tried to use the \":\" and \"=\" as separator for the YAML file but only the \":\" is recognized apparently\r\n\r\n\r\n##### System information\r\nI have the last release of mitmproxy (v2.0.2)\r\n\r\nWell I've just notice that there is a v3 release, maybe this could help me ?\r\nI prefer to create this ticket if someone else has the same issue :)\n", "before_files": [{"content": "import os\nimport importlib.util\nimport importlib.machinery\nimport time\nimport sys\nimport types\nimport typing\n\nfrom mitmproxy import addonmanager\nfrom mitmproxy import exceptions\nfrom mitmproxy import flow\nfrom mitmproxy import command\nfrom mitmproxy import eventsequence\nfrom mitmproxy import ctx\n\n\ndef load_script(path: str) -> types.ModuleType:\n fullname = \"__mitmproxy_script__.{}\".format(\n os.path.splitext(os.path.basename(path))[0]\n )\n # the fullname is not unique among scripts, so if there already is an existing script with said\n # fullname, remove it.\n sys.modules.pop(fullname, None)\n oldpath = sys.path\n sys.path.insert(0, os.path.dirname(path))\n try:\n loader = importlib.machinery.SourceFileLoader(fullname, path)\n spec = importlib.util.spec_from_loader(fullname, loader=loader)\n m = importlib.util.module_from_spec(spec)\n loader.exec_module(m)\n if not getattr(m, \"name\", None):\n m.name = path # type: ignore\n return m\n finally:\n sys.path[:] = oldpath\n\n\nclass Script:\n \"\"\"\n An addon that manages a single script.\n \"\"\"\n ReloadInterval = 2\n\n def __init__(self, path):\n self.name = \"scriptmanager:\" + path\n self.path = path\n self.fullpath = os.path.expanduser(path)\n self.ns = None\n\n self.last_load = 0\n self.last_mtime = 0\n if not os.path.isfile(self.fullpath):\n raise exceptions.OptionsError(\"No such script: %s\" % path)\n\n @property\n def addons(self):\n return [self.ns] if self.ns else []\n\n def tick(self):\n if time.time() - self.last_load > self.ReloadInterval:\n try:\n mtime = os.stat(self.fullpath).st_mtime\n except FileNotFoundError:\n scripts = list(ctx.options.scripts)\n scripts.remove(self.path)\n ctx.options.update(scripts=scripts)\n return\n\n if mtime > self.last_mtime:\n ctx.log.info(\"Loading script: %s\" % self.path)\n if self.ns:\n ctx.master.addons.remove(self.ns)\n self.ns = None\n with 
addonmanager.safecall():\n ns = load_script(self.fullpath)\n ctx.master.addons.register(ns)\n self.ns = ns\n if self.ns:\n # We're already running, so we have to explicitly register and\n # configure the addon\n ctx.master.addons.invoke_addon(self.ns, \"running\")\n ctx.master.addons.invoke_addon(\n self.ns,\n \"configure\",\n ctx.options.keys()\n )\n self.last_load = time.time()\n self.last_mtime = mtime\n\n\nclass ScriptLoader:\n \"\"\"\n An addon that manages loading scripts from options.\n \"\"\"\n def __init__(self):\n self.is_running = False\n self.addons = []\n\n def running(self):\n self.is_running = True\n\n @command.command(\"script.run\")\n def script_run(self, flows: typing.Sequence[flow.Flow], path: str) -> None:\n \"\"\"\n Run a script on the specified flows. The script is loaded with\n default options, and all lifecycle events for each flow are\n simulated.\n \"\"\"\n try:\n s = Script(path)\n l = addonmanager.Loader(ctx.master)\n ctx.master.addons.invoke_addon(s, \"load\", l)\n ctx.master.addons.invoke_addon(s, \"configure\", ctx.options.keys())\n # Script is loaded on the first tick\n ctx.master.addons.invoke_addon(s, \"tick\")\n for f in flows:\n for evt, arg in eventsequence.iterate(f):\n ctx.master.addons.invoke_addon(s, evt, arg)\n except exceptions.OptionsError as e:\n raise exceptions.CommandError(\"Error running script: %s\" % e) from e\n\n def configure(self, updated):\n if \"scripts\" in updated:\n for s in ctx.options.scripts:\n if ctx.options.scripts.count(s) > 1:\n raise exceptions.OptionsError(\"Duplicate script: %s\" % s)\n\n for a in self.addons[:]:\n if a.path not in ctx.options.scripts:\n ctx.log.info(\"Un-loading script: %s\" % a.name)\n ctx.master.addons.remove(a)\n self.addons.remove(a)\n\n # The machinations below are to ensure that:\n # - Scripts remain in the same order\n # - Scripts are not initialized un-necessarily. 
If only a\n # script's order in the script list has changed, it is just\n # moved.\n\n current = {}\n for a in self.addons:\n current[a.path] = a\n\n ordered = []\n newscripts = []\n for s in ctx.options.scripts:\n if s in current:\n ordered.append(current[s])\n else:\n sc = Script(s)\n ordered.append(sc)\n newscripts.append(sc)\n\n self.addons = ordered\n\n for s in newscripts:\n ctx.master.addons.register(s)\n if self.is_running:\n # If we're already running, we configure and tell the addon\n # we're up and running.\n ctx.master.addons.invoke_addon(s, \"running\")\n", "path": "mitmproxy/addons/script.py"}], "after_files": [{"content": "import os\nimport importlib.util\nimport importlib.machinery\nimport time\nimport sys\nimport types\nimport typing\n\nfrom mitmproxy import addonmanager\nfrom mitmproxy import exceptions\nfrom mitmproxy import flow\nfrom mitmproxy import command\nfrom mitmproxy import eventsequence\nfrom mitmproxy import ctx\n\n\ndef load_script(path: str) -> types.ModuleType:\n fullname = \"__mitmproxy_script__.{}\".format(\n os.path.splitext(os.path.basename(path))[0]\n )\n # the fullname is not unique among scripts, so if there already is an existing script with said\n # fullname, remove it.\n sys.modules.pop(fullname, None)\n oldpath = sys.path\n sys.path.insert(0, os.path.dirname(path))\n try:\n loader = importlib.machinery.SourceFileLoader(fullname, path)\n spec = importlib.util.spec_from_loader(fullname, loader=loader)\n m = importlib.util.module_from_spec(spec)\n loader.exec_module(m)\n if not getattr(m, \"name\", None):\n m.name = path # type: ignore\n return m\n finally:\n sys.path[:] = oldpath\n\n\nclass Script:\n \"\"\"\n An addon that manages a single script.\n \"\"\"\n ReloadInterval = 2\n\n def __init__(self, path):\n self.name = \"scriptmanager:\" + path\n self.path = path\n self.fullpath = os.path.expanduser(\n path.strip(\"'\\\" \")\n )\n self.ns = None\n\n self.last_load = 0\n self.last_mtime = 0\n if not os.path.isfile(self.fullpath):\n raise exceptions.OptionsError('No such script: \"%s\"' % self.fullpath)\n\n @property\n def addons(self):\n return [self.ns] if self.ns else []\n\n def tick(self):\n if time.time() - self.last_load > self.ReloadInterval:\n try:\n mtime = os.stat(self.fullpath).st_mtime\n except FileNotFoundError:\n scripts = list(ctx.options.scripts)\n scripts.remove(self.path)\n ctx.options.update(scripts=scripts)\n return\n\n if mtime > self.last_mtime:\n ctx.log.info(\"Loading script: %s\" % self.path)\n if self.ns:\n ctx.master.addons.remove(self.ns)\n self.ns = None\n with addonmanager.safecall():\n ns = load_script(self.fullpath)\n ctx.master.addons.register(ns)\n self.ns = ns\n if self.ns:\n # We're already running, so we have to explicitly register and\n # configure the addon\n ctx.master.addons.invoke_addon(self.ns, \"running\")\n ctx.master.addons.invoke_addon(\n self.ns,\n \"configure\",\n ctx.options.keys()\n )\n self.last_load = time.time()\n self.last_mtime = mtime\n\n\nclass ScriptLoader:\n \"\"\"\n An addon that manages loading scripts from options.\n \"\"\"\n def __init__(self):\n self.is_running = False\n self.addons = []\n\n def running(self):\n self.is_running = True\n\n @command.command(\"script.run\")\n def script_run(self, flows: typing.Sequence[flow.Flow], path: str) -> None:\n \"\"\"\n Run a script on the specified flows. 
The script is loaded with\n default options, and all lifecycle events for each flow are\n simulated.\n \"\"\"\n try:\n s = Script(path)\n l = addonmanager.Loader(ctx.master)\n ctx.master.addons.invoke_addon(s, \"load\", l)\n ctx.master.addons.invoke_addon(s, \"configure\", ctx.options.keys())\n # Script is loaded on the first tick\n ctx.master.addons.invoke_addon(s, \"tick\")\n for f in flows:\n for evt, arg in eventsequence.iterate(f):\n ctx.master.addons.invoke_addon(s, evt, arg)\n except exceptions.OptionsError as e:\n raise exceptions.CommandError(\"Error running script: %s\" % e) from e\n\n def configure(self, updated):\n if \"scripts\" in updated:\n for s in ctx.options.scripts:\n if ctx.options.scripts.count(s) > 1:\n raise exceptions.OptionsError(\"Duplicate script: %s\" % s)\n\n for a in self.addons[:]:\n if a.path not in ctx.options.scripts:\n ctx.log.info(\"Un-loading script: %s\" % a.name)\n ctx.master.addons.remove(a)\n self.addons.remove(a)\n\n # The machinations below are to ensure that:\n # - Scripts remain in the same order\n # - Scripts are not initialized un-necessarily. If only a\n # script's order in the script list has changed, it is just\n # moved.\n\n current = {}\n for a in self.addons:\n current[a.path] = a\n\n ordered = []\n newscripts = []\n for s in ctx.options.scripts:\n if s in current:\n ordered.append(current[s])\n else:\n sc = Script(s)\n ordered.append(sc)\n newscripts.append(sc)\n\n self.addons = ordered\n\n for s in newscripts:\n ctx.master.addons.register(s)\n if self.is_running:\n # If we're already running, we configure and tell the addon\n # we're up and running.\n ctx.master.addons.invoke_addon(s, \"running\")\n", "path": "mitmproxy/addons/script.py"}]}
| 2,013 | 191 |
gh_patches_debug_756
|
rasdani/github-patches
|
git_diff
|
vllm-project__vllm-1212
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[v0.2.0] Release Tracker
## Major changes
* Up to 60% performance improvement by optimizing de-tokenization and sampler
* Initial support for AWQ (performance not optimized)
* Support for RoPE scaling and LongChat
* Support for Mistral-7B
## PRs to be merged before the release
- [x] Vectorized sampler: #1048, #820
- [x] LongChat: #555
- [x] `TORCH_CUDA_ARCH_LIST` build option: #1074
- [x] Support for Mistral-7B: #1196
- [x] #1198
- ~~[ ] FP32 RoPE kernel: #1061~~ (deferred to the next PR)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vllm/__init__.py`
Content:
```
1 """vLLM: a high-throughput and memory-efficient inference engine for LLMs"""
2
3 from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
4 from vllm.engine.async_llm_engine import AsyncLLMEngine
5 from vllm.engine.llm_engine import LLMEngine
6 from vllm.engine.ray_utils import initialize_cluster
7 from vllm.entrypoints.llm import LLM
8 from vllm.outputs import CompletionOutput, RequestOutput
9 from vllm.sampling_params import SamplingParams
10
11 __version__ = "0.1.7"
12
13 __all__ = [
14 "LLM",
15 "SamplingParams",
16 "RequestOutput",
17 "CompletionOutput",
18 "LLMEngine",
19 "EngineArgs",
20 "AsyncLLMEngine",
21 "AsyncEngineArgs",
22 "initialize_cluster",
23 ]
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/vllm/__init__.py b/vllm/__init__.py
--- a/vllm/__init__.py
+++ b/vllm/__init__.py
@@ -8,7 +8,7 @@
from vllm.outputs import CompletionOutput, RequestOutput
from vllm.sampling_params import SamplingParams
-__version__ = "0.1.7"
+__version__ = "0.2.0"
__all__ = [
"LLM",
|
{"golden_diff": "diff --git a/vllm/__init__.py b/vllm/__init__.py\n--- a/vllm/__init__.py\n+++ b/vllm/__init__.py\n@@ -8,7 +8,7 @@\n from vllm.outputs import CompletionOutput, RequestOutput\n from vllm.sampling_params import SamplingParams\n \n-__version__ = \"0.1.7\"\n+__version__ = \"0.2.0\"\n \n __all__ = [\n \"LLM\",\n", "issue": "[v0.2.0] Release Tracker\n## Major changes\r\n\r\n* Up to 60% performance improvement by optimizing de-tokenization and sampler\r\n* Initial support for AWQ (performance not optimized)\r\n* Support for RoPE scaling and LongChat\r\n* Support for Mistral-7B\r\n\r\n## PRs to be merged before the release\r\n\r\n- [x] Vectorized sampler: #1048, #820 \r\n- [x] LongChat: #555 \r\n- [x] `TORCH_CUDA_ARCH_LIST` build option: #1074 \r\n- [x] Support for Mistral-7B: #1196 \r\n- [x] #1198 \r\n- ~~[ ] FP32 RoPE kernel: #1061~~ (deferred to the next PR)\n", "before_files": [{"content": "\"\"\"vLLM: a high-throughput and memory-efficient inference engine for LLMs\"\"\"\n\nfrom vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs\nfrom vllm.engine.async_llm_engine import AsyncLLMEngine\nfrom vllm.engine.llm_engine import LLMEngine\nfrom vllm.engine.ray_utils import initialize_cluster\nfrom vllm.entrypoints.llm import LLM\nfrom vllm.outputs import CompletionOutput, RequestOutput\nfrom vllm.sampling_params import SamplingParams\n\n__version__ = \"0.1.7\"\n\n__all__ = [\n \"LLM\",\n \"SamplingParams\",\n \"RequestOutput\",\n \"CompletionOutput\",\n \"LLMEngine\",\n \"EngineArgs\",\n \"AsyncLLMEngine\",\n \"AsyncEngineArgs\",\n \"initialize_cluster\",\n]\n", "path": "vllm/__init__.py"}], "after_files": [{"content": "\"\"\"vLLM: a high-throughput and memory-efficient inference engine for LLMs\"\"\"\n\nfrom vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs\nfrom vllm.engine.async_llm_engine import AsyncLLMEngine\nfrom vllm.engine.llm_engine import LLMEngine\nfrom vllm.engine.ray_utils import initialize_cluster\nfrom vllm.entrypoints.llm import LLM\nfrom vllm.outputs import CompletionOutput, RequestOutput\nfrom vllm.sampling_params import SamplingParams\n\n__version__ = \"0.2.0\"\n\n__all__ = [\n \"LLM\",\n \"SamplingParams\",\n \"RequestOutput\",\n \"CompletionOutput\",\n \"LLMEngine\",\n \"EngineArgs\",\n \"AsyncLLMEngine\",\n \"AsyncEngineArgs\",\n \"initialize_cluster\",\n]\n", "path": "vllm/__init__.py"}]}
| 653 | 108 |
gh_patches_debug_30325
|
rasdani/github-patches
|
git_diff
|
mito-ds__mito-213
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow the installer to go pro after the user has already installed!
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
Please include the relevant dataset if the bug you encountered is dataset specific. Make sure to anonymize the data properly.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. Windows 11]
- Browser [e.g. Chrome, Firefox]
- Mito Version [e.g. 0.3.331] (you can find this with `pip list`)
**Additional context**
Add any other context about the problem here.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitoinstaller/mitoinstaller/user_install.py`
Content:
```
1 import json
2 import os
3 from typing import Optional
4 import uuid
5 from copy import deepcopy
6
7 from mitoinstaller import __version__
8
9 # Where all global .mito files are stored
10 MITO_FOLDER = os.path.join(os.path.expanduser("~"), '.mito')
11
12 # The path of the user.json file, which notably is the same
13 # path as the USER_JSON_PATH in mitosheet
14 USER_JSON_PATH = os.path.join(MITO_FOLDER, 'user.json')
15
16 def get_random_id() -> str:
17 """
18 Creates a new random ID for the user, which for any given user,
19 should only happen once.
20 """
21 return str(uuid.uuid1())
22
23 def is_running_test() -> bool:
24 """
25 A helper function that quickly returns if the current code is running inside
26 of a test, which is useful for making sure we don't generate tons of logs
27 """
28 running_pytests = "PYTEST_CURRENT_TEST" in os.environ
29 running_ci = 'CI' in os.environ and os.environ['CI'] is not None
30
31 return running_pytests or running_ci
32
33
34 # NOTE: the installer only creates the static id for the user, and
35 # otherwise does nothing with the user_json file. This makes sure
36 # we keep the dependencies as simple as possible with this file.
37 # We also add the telemetry, which we turn off if the user has a
38 # pro subscription.
39 # NOTE: if you delete a field from this, you need to update the
40 # user_json_is_installer_default to handle this properly
41 USER_JSON_DEFAULT = {
42 'static_user_id': get_random_id() if not is_running_test() else 'github_action',
43 'mitosheet_telemetry': True,
44 'mitosheet_pro': False,
45 }
46
47 def try_create_user_json_file(is_pro: bool=False) -> None:
48 # Create the mito folder if it does not exist
49 if not os.path.exists(MITO_FOLDER):
50 os.mkdir(MITO_FOLDER)
51
52 # We only create a user.json file if it does not exist
53 if not os.path.exists(USER_JSON_PATH):
54 with open(USER_JSON_PATH, 'w+') as f:
55 # And write the default object
56 default_user_json = deepcopy(USER_JSON_DEFAULT)
57 default_user_json['mitosheet_telemetry'] = not is_pro
58 default_user_json['mitosheet_pro'] = is_pro
59
60 f.write(json.dumps(default_user_json))
61 else:
62 # Otherwise, we make sure to update the mitosheet_telemetry variable
63 with open(USER_JSON_PATH, 'r') as f:
64 updated_user_json = json.loads(f.read())
65 updated_user_json['mitosheet_telemetry'] = not is_pro
66 updated_user_json['mitosheet_pro'] = is_pro
67 with open(USER_JSON_PATH, 'w') as f:
68 f.write(json.dumps(updated_user_json))
69
70
71 def get_static_user_id() -> Optional[str]:
72 try:
73 with open(USER_JSON_PATH) as f:
74 return json.load(f)['static_user_id']
75 except:
76 return None
77
78 def get_mitosheet_telemetry() -> bool:
79 try:
80 with open(USER_JSON_PATH) as f:
81 return json.load(f)['mitosheet_telemetry']
82 except:
83 return True
84
85 def user_json_is_installer_default() -> bool:
86 """
87 Returns True if the user.json file is the installer default,
88 and otherwise returns False.
89
90 This allows us to not call identify if we have already done
91 so in the mitosheet package (which would overwrite things
92 we don't want to).
93 """
94 try:
95 with open(USER_JSON_PATH) as f:
96 user_json_object = json.load(f)
97 return len(user_json_object) <= len(USER_JSON_DEFAULT)
98 except:
99 return False
100
```
Path: `mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py`
Content:
```
1 import os
2 import sys
3
4 from mitoinstaller import __version__
5 from mitoinstaller.commands import upgrade_mito_installer
6 from mitoinstaller.installer_steps.installer_step import InstallerStep
7 from mitoinstaller.log_utils import identify, log
8 from mitoinstaller.user_install import (USER_JSON_PATH,
9 try_create_user_json_file)
10
11
12 def initial_install_step_create_user():
13
14 if not os.path.exists(USER_JSON_PATH):
15 try_create_user_json_file(is_pro=('--pro' in sys.argv))
16
17 # Only try and log if we're not pro
18 if not ('--pro' in sys.argv):
19 identify()
20 log('install_started', {
21 'mitoinstaller_version': __version__
22 })
23
24
25 INITIAL_INSTALLER_STEPS = [
26 InstallerStep(
27 'Create mito user',
28 initial_install_step_create_user
29 ),
30 InstallerStep(
31 'Upgrade mitoinstaller',
32 upgrade_mito_installer,
33 optional=True
34 ),
35 ]
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py b/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py
--- a/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py
+++ b/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py
@@ -5,7 +5,7 @@
from mitoinstaller.commands import upgrade_mito_installer
from mitoinstaller.installer_steps.installer_step import InstallerStep
from mitoinstaller.log_utils import identify, log
-from mitoinstaller.user_install import (USER_JSON_PATH,
+from mitoinstaller.user_install import (USER_JSON_PATH, go_pro,
try_create_user_json_file)
@@ -14,13 +14,15 @@
if not os.path.exists(USER_JSON_PATH):
try_create_user_json_file(is_pro=('--pro' in sys.argv))
- # Only try and log if we're not pro
if not ('--pro' in sys.argv):
+ # Only try and log if we're not pro
identify()
log('install_started', {
'mitoinstaller_version': __version__
})
-
+ else:
+ # If the user is going pro, make sure they are set to pro
+ go_pro()
INITIAL_INSTALLER_STEPS = [
InstallerStep(
diff --git a/mitoinstaller/mitoinstaller/user_install.py b/mitoinstaller/mitoinstaller/user_install.py
--- a/mitoinstaller/mitoinstaller/user_install.py
+++ b/mitoinstaller/mitoinstaller/user_install.py
@@ -97,3 +97,12 @@
return len(user_json_object) <= len(USER_JSON_DEFAULT)
except:
return False
+
+def go_pro() -> None:
+ with open(USER_JSON_PATH, 'r') as f:
+ updated_user_json = json.loads(f.read())
+ updated_user_json['mitosheet_telemetry'] = False
+ updated_user_json['mitosheet_pro'] = True
+
+ with open(USER_JSON_PATH, 'w') as f:
+ f.write(json.dumps(updated_user_json))
\ No newline at end of file
|
{"golden_diff": "diff --git a/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py b/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py\n--- a/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py\n+++ b/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py\n@@ -5,7 +5,7 @@\n from mitoinstaller.commands import upgrade_mito_installer\n from mitoinstaller.installer_steps.installer_step import InstallerStep\n from mitoinstaller.log_utils import identify, log\n-from mitoinstaller.user_install import (USER_JSON_PATH,\n+from mitoinstaller.user_install import (USER_JSON_PATH, go_pro,\n try_create_user_json_file)\n \n \n@@ -14,13 +14,15 @@\n if not os.path.exists(USER_JSON_PATH):\n try_create_user_json_file(is_pro=('--pro' in sys.argv))\n \n- # Only try and log if we're not pro\n if not ('--pro' in sys.argv):\n+ # Only try and log if we're not pro\n identify()\n log('install_started', {\n 'mitoinstaller_version': __version__\n })\n-\n+ else:\n+ # If the user is going pro, make sure they are set to pro\n+ go_pro()\n \n INITIAL_INSTALLER_STEPS = [\n InstallerStep(\ndiff --git a/mitoinstaller/mitoinstaller/user_install.py b/mitoinstaller/mitoinstaller/user_install.py\n--- a/mitoinstaller/mitoinstaller/user_install.py\n+++ b/mitoinstaller/mitoinstaller/user_install.py\n@@ -97,3 +97,12 @@\n return len(user_json_object) <= len(USER_JSON_DEFAULT)\n except:\n return False\n+\n+def go_pro() -> None:\n+ with open(USER_JSON_PATH, 'r') as f:\n+ updated_user_json = json.loads(f.read())\n+ updated_user_json['mitosheet_telemetry'] = False\n+ updated_user_json['mitosheet_pro'] = True\n+ \n+ with open(USER_JSON_PATH, 'w') as f:\n+ f.write(json.dumps(updated_user_json))\n\\ No newline at end of file\n", "issue": "Allow the installer to go pro after the user has already installed!\n**Describe the bug**\r\nA clear and concise description of what the bug is.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to '...'\r\n2. Click on '....'\r\n3. Scroll down to '....'\r\n4. See error\r\n\r\nPlease include the relevant dataset if the bug you encountered is dataset specific. Make sure to anonymize the data properly.\r\n\r\n**Expected behavior**\r\nA clear and concise description of what you expected to happen.\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: [e.g. Windows 11]\r\n - Browser [e.g. Chrome, Firefox]\r\n - Mito Version [e.g. 
0.3.331] (you can find this with `pip list`)\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "import json\nimport os\nfrom typing import Optional\nimport uuid\nfrom copy import deepcopy\n\nfrom mitoinstaller import __version__\n\n# Where all global .mito files are stored\nMITO_FOLDER = os.path.join(os.path.expanduser(\"~\"), '.mito')\n\n# The path of the user.json file, which notably is the same\n# path as the USER_JSON_PATH in mitosheet\nUSER_JSON_PATH = os.path.join(MITO_FOLDER, 'user.json')\n\ndef get_random_id() -> str:\n \"\"\"\n Creates a new random ID for the user, which for any given user,\n should only happen once.\n \"\"\"\n return str(uuid.uuid1())\n\ndef is_running_test() -> bool:\n \"\"\"\n A helper function that quickly returns if the current code is running inside \n of a test, which is useful for making sure we don't generate tons of logs \n \"\"\"\n running_pytests = \"PYTEST_CURRENT_TEST\" in os.environ\n running_ci = 'CI' in os.environ and os.environ['CI'] is not None\n\n return running_pytests or running_ci\n\n\n# NOTE: the installer only creates the static id for the user, and\n# otherwise does nothing with the user_json file. This makes sure\n# we keep the dependencies as simple as possible with this file. \n# We also add the telemetry, which we turn off if the user has a \n# pro subscription.\n# NOTE: if you delete a field from this, you need to update the \n# user_json_is_installer_default to handle this properly\nUSER_JSON_DEFAULT = {\n 'static_user_id': get_random_id() if not is_running_test() else 'github_action',\n 'mitosheet_telemetry': True,\n 'mitosheet_pro': False,\n}\n\ndef try_create_user_json_file(is_pro: bool=False) -> None:\n # Create the mito folder if it does not exist\n if not os.path.exists(MITO_FOLDER):\n os.mkdir(MITO_FOLDER)\n \n # We only create a user.json file if it does not exist\n if not os.path.exists(USER_JSON_PATH):\n with open(USER_JSON_PATH, 'w+') as f:\n # And write the default object\n default_user_json = deepcopy(USER_JSON_DEFAULT)\n default_user_json['mitosheet_telemetry'] = not is_pro\n default_user_json['mitosheet_pro'] = is_pro\n\n f.write(json.dumps(default_user_json))\n else:\n # Otherwise, we make sure to update the mitosheet_telemetry variable \n with open(USER_JSON_PATH, 'r') as f:\n updated_user_json = json.loads(f.read())\n updated_user_json['mitosheet_telemetry'] = not is_pro\n updated_user_json['mitosheet_pro'] = is_pro \n with open(USER_JSON_PATH, 'w') as f:\n f.write(json.dumps(updated_user_json))\n\n\ndef get_static_user_id() -> Optional[str]:\n try:\n with open(USER_JSON_PATH) as f:\n return json.load(f)['static_user_id']\n except: \n return None\n\ndef get_mitosheet_telemetry() -> bool:\n try:\n with open(USER_JSON_PATH) as f:\n return json.load(f)['mitosheet_telemetry']\n except: \n return True\n\ndef user_json_is_installer_default() -> bool:\n \"\"\"\n Returns True if the user.json file is the installer default, \n and otherwise returns False. 
\n\n This allows us to not call identify if we have already done\n so in the mitosheet package (which would overwrite things\n we don't want to).\n \"\"\"\n try:\n with open(USER_JSON_PATH) as f:\n user_json_object = json.load(f)\n return len(user_json_object) <= len(USER_JSON_DEFAULT)\n except:\n return False\n", "path": "mitoinstaller/mitoinstaller/user_install.py"}, {"content": "import os\nimport sys\n\nfrom mitoinstaller import __version__\nfrom mitoinstaller.commands import upgrade_mito_installer\nfrom mitoinstaller.installer_steps.installer_step import InstallerStep\nfrom mitoinstaller.log_utils import identify, log\nfrom mitoinstaller.user_install import (USER_JSON_PATH,\n try_create_user_json_file)\n\n\ndef initial_install_step_create_user():\n\n if not os.path.exists(USER_JSON_PATH):\n try_create_user_json_file(is_pro=('--pro' in sys.argv))\n\n # Only try and log if we're not pro\n if not ('--pro' in sys.argv):\n identify()\n log('install_started', {\n 'mitoinstaller_version': __version__\n })\n\n\nINITIAL_INSTALLER_STEPS = [\n InstallerStep(\n 'Create mito user',\n initial_install_step_create_user\n ),\n InstallerStep(\n 'Upgrade mitoinstaller',\n upgrade_mito_installer,\n optional=True\n ),\n]\n", "path": "mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py"}], "after_files": [{"content": "import json\nimport os\nfrom typing import Optional\nimport uuid\nfrom copy import deepcopy\n\nfrom mitoinstaller import __version__\n\n# Where all global .mito files are stored\nMITO_FOLDER = os.path.join(os.path.expanduser(\"~\"), '.mito')\n\n# The path of the user.json file, which notably is the same\n# path as the USER_JSON_PATH in mitosheet\nUSER_JSON_PATH = os.path.join(MITO_FOLDER, 'user.json')\n\ndef get_random_id() -> str:\n \"\"\"\n Creates a new random ID for the user, which for any given user,\n should only happen once.\n \"\"\"\n return str(uuid.uuid1())\n\ndef is_running_test() -> bool:\n \"\"\"\n A helper function that quickly returns if the current code is running inside \n of a test, which is useful for making sure we don't generate tons of logs \n \"\"\"\n running_pytests = \"PYTEST_CURRENT_TEST\" in os.environ\n running_ci = 'CI' in os.environ and os.environ['CI'] is not None\n\n return running_pytests or running_ci\n\n\n# NOTE: the installer only creates the static id for the user, and\n# otherwise does nothing with the user_json file. This makes sure\n# we keep the dependencies as simple as possible with this file. 
\n# We also add the telemetry, which we turn off if the user has a \n# pro subscription.\n# NOTE: if you delete a field from this, you need to update the \n# user_json_is_installer_default to handle this properly\nUSER_JSON_DEFAULT = {\n 'static_user_id': get_random_id() if not is_running_test() else 'github_action',\n 'mitosheet_telemetry': True,\n 'mitosheet_pro': False,\n}\n\ndef try_create_user_json_file(is_pro: bool=False) -> None:\n # Create the mito folder if it does not exist\n if not os.path.exists(MITO_FOLDER):\n os.mkdir(MITO_FOLDER)\n \n # We only create a user.json file if it does not exist\n if not os.path.exists(USER_JSON_PATH):\n with open(USER_JSON_PATH, 'w+') as f:\n # And write the default object\n default_user_json = deepcopy(USER_JSON_DEFAULT)\n default_user_json['mitosheet_telemetry'] = not is_pro\n default_user_json['mitosheet_pro'] = is_pro\n\n f.write(json.dumps(default_user_json))\n else:\n # Otherwise, we make sure to update the mitosheet_telemetry variable \n with open(USER_JSON_PATH, 'r') as f:\n updated_user_json = json.loads(f.read())\n updated_user_json['mitosheet_telemetry'] = not is_pro\n updated_user_json['mitosheet_pro'] = is_pro \n with open(USER_JSON_PATH, 'w') as f:\n f.write(json.dumps(updated_user_json))\n\n\ndef get_static_user_id() -> Optional[str]:\n try:\n with open(USER_JSON_PATH) as f:\n return json.load(f)['static_user_id']\n except: \n return None\n\ndef get_mitosheet_telemetry() -> bool:\n try:\n with open(USER_JSON_PATH) as f:\n return json.load(f)['mitosheet_telemetry']\n except: \n return True\n\ndef user_json_is_installer_default() -> bool:\n \"\"\"\n Returns True if the user.json file is the installer default, \n and otherwise returns False. \n\n This allows us to not call identify if we have already done\n so in the mitosheet package (which would overwrite things\n we don't want to).\n \"\"\"\n try:\n with open(USER_JSON_PATH) as f:\n user_json_object = json.load(f)\n return len(user_json_object) <= len(USER_JSON_DEFAULT)\n except:\n return False\n\ndef go_pro() -> None:\n with open(USER_JSON_PATH, 'r') as f:\n updated_user_json = json.loads(f.read())\n updated_user_json['mitosheet_telemetry'] = False\n updated_user_json['mitosheet_pro'] = True\n \n with open(USER_JSON_PATH, 'w') as f:\n f.write(json.dumps(updated_user_json))", "path": "mitoinstaller/mitoinstaller/user_install.py"}, {"content": "import os\nimport sys\n\nfrom mitoinstaller import __version__\nfrom mitoinstaller.commands import upgrade_mito_installer\nfrom mitoinstaller.installer_steps.installer_step import InstallerStep\nfrom mitoinstaller.log_utils import identify, log\nfrom mitoinstaller.user_install import (USER_JSON_PATH, go_pro,\n try_create_user_json_file)\n\n\ndef initial_install_step_create_user():\n\n if not os.path.exists(USER_JSON_PATH):\n try_create_user_json_file(is_pro=('--pro' in sys.argv))\n\n if not ('--pro' in sys.argv):\n # Only try and log if we're not pro\n identify()\n log('install_started', {\n 'mitoinstaller_version': __version__\n })\n else:\n # If the user is going pro, make sure they are set to pro\n go_pro()\n\nINITIAL_INSTALLER_STEPS = [\n InstallerStep(\n 'Create mito user',\n initial_install_step_create_user\n ),\n InstallerStep(\n 'Upgrade mitoinstaller',\n upgrade_mito_installer,\n optional=True\n ),\n]\n", "path": "mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py"}]}
| 1,782 | 496 |
| gh_patches_debug_4574 | rasdani/github-patches | git_diff | qtile__qtile-2716 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
stack trace from Clipboard widget
```
2021-08-13 06:48:23,421 ERROR libqtile hook.py:fire():L381 Error in hook selection_change
Traceback (most recent call last):
File "/home/tycho/.local/lib/python3.9/site-packages/libqtile/hook.py", line 379, in fire
i(*args, **kwargs)
File "/home/tycho/.local/lib/python3.9/site-packages/libqtile/widget/clipboard.py", line 82, in hook_change
if self.is_blacklisted(selection["owner"]):
File "/home/tycho/.local/lib/python3.9/site-packages/libqtile/widget/clipboard.py", line 69, in is_blacklisted
owner = xcbq.Window(self.qtile.core.conn, owner_id)
AttributeError: module 'libqtile.backend.x11.xcbq' has no attribute 'Window'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libqtile/widget/clipboard.py`
Content:
```
1 # Copyright (c) 2014 Sean Vig
2 # Copyright (c) 2014 roger
3 # Copyright (c) 2014 Adi Sieker
4 # Copyright (c) 2014 Tycho Andersen
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining a copy
7 # of this software and associated documentation files (the "Software"), to deal
8 # in the Software without restriction, including without limitation the rights
9 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 # copies of the Software, and to permit persons to whom the Software is
11 # furnished to do so, subject to the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be included in
14 # all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
22 # SOFTWARE.
23
24 from libqtile import bar, hook
25 from libqtile.backend.x11 import xcbq
26 from libqtile.widget import base
27
28
29 class Clipboard(base._TextBox):
30 """Display current clipboard contents"""
31 orientations = base.ORIENTATION_HORIZONTAL
32 defaults = [
33 ("selection", "CLIPBOARD",
34 "the selection to display(CLIPBOARD or PRIMARY)"),
35 ("max_width", 10, "maximum number of characters to display "
36 "(None for all, useful when width is bar.STRETCH)"),
37 ("timeout", 10,
38 "Default timeout (seconds) for display text, None to keep forever"),
39 ("blacklist", ["keepassx"],
40 "list with blacklisted wm_class, sadly not every "
41 "clipboard window sets them, keepassx does."
42 "Clipboard contents from blacklisted wm_classes "
43 "will be replaced by the value of ``blacklist_text``."),
44 ("blacklist_text", "***********",
45 "text to display when the wm_class is blacklisted")
46 ]
47
48 def __init__(self, width=bar.CALCULATED, **config):
49 base._TextBox.__init__(self, "", width, **config)
50 self.add_defaults(Clipboard.defaults)
51 self.timeout_id = None
52
53 def _configure(self, qtile, bar):
54 base._TextBox._configure(self, qtile, bar)
55 self.text = ""
56 self.setup_hooks()
57
58 def clear(self, *args):
59 self.text = ""
60 self.bar.draw()
61
62 def is_blacklisted(self, owner_id):
63 if not self.blacklist:
64 return False
65
66 if owner_id in self.qtile.windows_map:
67 owner = self.qtile.windows_map[owner_id].window
68 else:
69 owner = xcbq.Window(self.qtile.core.conn, owner_id)
70
71 owner_class = owner.get_wm_class()
72 if owner_class:
73 for wm_class in self.blacklist:
74 if wm_class in owner_class:
75 return True
76
77 def setup_hooks(self):
78 def hook_change(name, selection):
79 if name != self.selection:
80 return
81
82 if self.is_blacklisted(selection["owner"]):
83 text = self.blacklist_text
84 else:
85 text = selection["selection"].replace("\n", " ")
86
87 text = text.strip()
88 if self.max_width is not None and len(text) > self.max_width:
89 text = text[:self.max_width] + "..."
90
91 self.text = text
92
93 if self.timeout_id:
94 self.timeout_id.cancel()
95 self.timeout_id = None
96
97 if self.timeout:
98 self.timeout_id = self.timeout_add(self.timeout, self.clear)
99 self.bar.draw()
100
101 def hook_notify(name, selection):
102 if name != self.selection:
103 return
104
105 if self.timeout_id:
106 self.timeout_id.cancel()
107 self.timeout_id = None
108
109 # only clear if don't change don't apply in .5 seconds
110 if self.timeout:
111 self.timeout_id = self.timeout_add(self.timeout, self.clear)
112 self.bar.draw()
113
114 hook.subscribe.selection_notify(hook_notify)
115 hook.subscribe.selection_change(hook_change)
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/libqtile/widget/clipboard.py b/libqtile/widget/clipboard.py
--- a/libqtile/widget/clipboard.py
+++ b/libqtile/widget/clipboard.py
@@ -66,7 +66,7 @@
if owner_id in self.qtile.windows_map:
owner = self.qtile.windows_map[owner_id].window
else:
- owner = xcbq.Window(self.qtile.core.conn, owner_id)
+ owner = xcbq.window.XWindow(self.qtile.core.conn, owner_id)
owner_class = owner.get_wm_class()
if owner_class:
|
{"golden_diff": "diff --git a/libqtile/widget/clipboard.py b/libqtile/widget/clipboard.py\n--- a/libqtile/widget/clipboard.py\n+++ b/libqtile/widget/clipboard.py\n@@ -66,7 +66,7 @@\n if owner_id in self.qtile.windows_map:\n owner = self.qtile.windows_map[owner_id].window\n else:\n- owner = xcbq.Window(self.qtile.core.conn, owner_id)\n+ owner = xcbq.window.XWindow(self.qtile.core.conn, owner_id)\n \n owner_class = owner.get_wm_class()\n if owner_class:\n", "issue": "stack trace from Clipboard widget\n```\r\n2021-08-13 06:48:23,421 ERROR libqtile hook.py:fire():L381 Error in hook selection_change\r\nTraceback (most recent call last):\r\n File \"/home/tycho/.local/lib/python3.9/site-packages/libqtile/hook.py\", line 379, in fire\r\n i(*args, **kwargs)\r\n File \"/home/tycho/.local/lib/python3.9/site-packages/libqtile/widget/clipboard.py\", line 82, in hook_change\r\n if self.is_blacklisted(selection[\"owner\"]):\r\n File \"/home/tycho/.local/lib/python3.9/site-packages/libqtile/widget/clipboard.py\", line 69, in is_blacklisted\r\n owner = xcbq.Window(self.qtile.core.conn, owner_id)\r\nAttributeError: module 'libqtile.backend.x11.xcbq' has no attribute 'Window'\r\n```\n", "before_files": [{"content": "# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 roger\n# Copyright (c) 2014 Adi Sieker\n# Copyright (c) 2014 Tycho Andersen\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom libqtile import bar, hook\nfrom libqtile.backend.x11 import xcbq\nfrom libqtile.widget import base\n\n\nclass Clipboard(base._TextBox):\n \"\"\"Display current clipboard contents\"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n (\"selection\", \"CLIPBOARD\",\n \"the selection to display(CLIPBOARD or PRIMARY)\"),\n (\"max_width\", 10, \"maximum number of characters to display \"\n \"(None for all, useful when width is bar.STRETCH)\"),\n (\"timeout\", 10,\n \"Default timeout (seconds) for display text, None to keep forever\"),\n (\"blacklist\", [\"keepassx\"],\n \"list with blacklisted wm_class, sadly not every \"\n \"clipboard window sets them, keepassx does.\"\n \"Clipboard contents from blacklisted wm_classes \"\n \"will be replaced by the value of ``blacklist_text``.\"),\n (\"blacklist_text\", \"***********\",\n \"text to display when the wm_class is blacklisted\")\n ]\n\n def __init__(self, width=bar.CALCULATED, **config):\n base._TextBox.__init__(self, \"\", width, **config)\n self.add_defaults(Clipboard.defaults)\n self.timeout_id = None\n\n def _configure(self, qtile, bar):\n base._TextBox._configure(self, qtile, bar)\n self.text = \"\"\n self.setup_hooks()\n\n def clear(self, *args):\n self.text = \"\"\n self.bar.draw()\n\n def is_blacklisted(self, owner_id):\n if not self.blacklist:\n return False\n\n if owner_id in self.qtile.windows_map:\n owner = self.qtile.windows_map[owner_id].window\n else:\n owner = xcbq.Window(self.qtile.core.conn, owner_id)\n\n owner_class = owner.get_wm_class()\n if owner_class:\n for wm_class in self.blacklist:\n if wm_class in owner_class:\n return True\n\n def setup_hooks(self):\n def hook_change(name, selection):\n if name != self.selection:\n return\n\n if self.is_blacklisted(selection[\"owner\"]):\n text = self.blacklist_text\n else:\n text = selection[\"selection\"].replace(\"\\n\", \" \")\n\n text = text.strip()\n if self.max_width is not None and len(text) > self.max_width:\n text = text[:self.max_width] + \"...\"\n\n self.text = text\n\n if self.timeout_id:\n self.timeout_id.cancel()\n self.timeout_id = None\n\n if self.timeout:\n self.timeout_id = self.timeout_add(self.timeout, self.clear)\n self.bar.draw()\n\n def hook_notify(name, selection):\n if name != self.selection:\n return\n\n if self.timeout_id:\n self.timeout_id.cancel()\n self.timeout_id = None\n\n # only clear if don't change don't apply in .5 seconds\n if self.timeout:\n self.timeout_id = self.timeout_add(self.timeout, self.clear)\n self.bar.draw()\n\n hook.subscribe.selection_notify(hook_notify)\n hook.subscribe.selection_change(hook_change)\n", "path": "libqtile/widget/clipboard.py"}], "after_files": [{"content": "# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 roger\n# Copyright (c) 2014 Adi Sieker\n# Copyright (c) 2014 Tycho Andersen\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following 
conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom libqtile import bar, hook\nfrom libqtile.backend.x11 import xcbq\nfrom libqtile.widget import base\n\n\nclass Clipboard(base._TextBox):\n \"\"\"Display current clipboard contents\"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n (\"selection\", \"CLIPBOARD\",\n \"the selection to display(CLIPBOARD or PRIMARY)\"),\n (\"max_width\", 10, \"maximum number of characters to display \"\n \"(None for all, useful when width is bar.STRETCH)\"),\n (\"timeout\", 10,\n \"Default timeout (seconds) for display text, None to keep forever\"),\n (\"blacklist\", [\"keepassx\"],\n \"list with blacklisted wm_class, sadly not every \"\n \"clipboard window sets them, keepassx does.\"\n \"Clipboard contents from blacklisted wm_classes \"\n \"will be replaced by the value of ``blacklist_text``.\"),\n (\"blacklist_text\", \"***********\",\n \"text to display when the wm_class is blacklisted\")\n ]\n\n def __init__(self, width=bar.CALCULATED, **config):\n base._TextBox.__init__(self, \"\", width, **config)\n self.add_defaults(Clipboard.defaults)\n self.timeout_id = None\n\n def _configure(self, qtile, bar):\n base._TextBox._configure(self, qtile, bar)\n self.text = \"\"\n self.setup_hooks()\n\n def clear(self, *args):\n self.text = \"\"\n self.bar.draw()\n\n def is_blacklisted(self, owner_id):\n if not self.blacklist:\n return False\n\n if owner_id in self.qtile.windows_map:\n owner = self.qtile.windows_map[owner_id].window\n else:\n owner = xcbq.window.XWindow(self.qtile.core.conn, owner_id)\n\n owner_class = owner.get_wm_class()\n if owner_class:\n for wm_class in self.blacklist:\n if wm_class in owner_class:\n return True\n\n def setup_hooks(self):\n def hook_change(name, selection):\n if name != self.selection:\n return\n\n if self.is_blacklisted(selection[\"owner\"]):\n text = self.blacklist_text\n else:\n text = selection[\"selection\"].replace(\"\\n\", \" \")\n\n text = text.strip()\n if self.max_width is not None and len(text) > self.max_width:\n text = text[:self.max_width] + \"...\"\n\n self.text = text\n\n if self.timeout_id:\n self.timeout_id.cancel()\n self.timeout_id = None\n\n if self.timeout:\n self.timeout_id = self.timeout_add(self.timeout, self.clear)\n self.bar.draw()\n\n def hook_notify(name, selection):\n if name != self.selection:\n return\n\n if self.timeout_id:\n self.timeout_id.cancel()\n self.timeout_id = None\n\n # only clear if don't change don't apply in .5 seconds\n if self.timeout:\n self.timeout_id = self.timeout_add(self.timeout, self.clear)\n self.bar.draw()\n\n hook.subscribe.selection_notify(hook_notify)\n hook.subscribe.selection_change(hook_change)\n", "path": "libqtile/widget/clipboard.py"}]}
| 1,668 | 130 |
| gh_patches_debug_3982 | rasdani/github-patches | git_diff | getsentry__sentry-48536 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix logic bug where Group.Status is archived_until_escalating but the GroupInbox.Status is escalating
## Objective:
We are not transitioning the GroupStatus to escalating, even though we do transition the GroupInboxStatus.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/issues/escalating.py`
Content:
```
1 """This module has the logic for querying Snuba for the hourly event count for a list of groups.
2 This is later used for generating group forecasts for determining when a group may be escalating.
3 """
4
5 import logging
6 from collections import defaultdict
7 from datetime import datetime, timedelta
8 from typing import Dict, List, Optional, Sequence, Tuple, TypedDict
9
10 from snuba_sdk import (
11 Column,
12 Condition,
13 Direction,
14 Entity,
15 Function,
16 Limit,
17 Offset,
18 Op,
19 OrderBy,
20 Query,
21 Request,
22 )
23
24 from sentry import analytics
25 from sentry.issues.escalating_group_forecast import EscalatingGroupForecast
26 from sentry.issues.escalating_issues_alg import GroupCount
27 from sentry.issues.grouptype import GroupCategory
28 from sentry.models import Group
29 from sentry.models.group import GroupStatus
30 from sentry.models.groupinbox import GroupInboxReason, add_group_to_inbox
31 from sentry.snuba.dataset import Dataset, EntityKey
32 from sentry.types.group import GroupSubStatus
33 from sentry.utils.cache import cache
34 from sentry.utils.snuba import raw_snql_query
35
36 logger = logging.getLogger(__name__)
37
38 __all__ = ["query_groups_past_counts", "parse_groups_past_counts"]
39
40 REFERRER = "sentry.issues.escalating"
41 ELEMENTS_PER_SNUBA_PAGE = 10000 # This is the maximum value for Snuba
42 # The amount of data needed to generate a group forecast
43 BUCKETS_PER_GROUP = 7 * 24
44 ONE_WEEK_DURATION = 7
45 IS_ESCALATING_REFERRER = "sentry.issues.escalating.is_escalating"
46 GROUP_HOURLY_COUNT_TTL = 60
47
48 GroupsCountResponse = TypedDict(
49 "GroupsCountResponse",
50 {"group_id": int, "hourBucket": str, "count()": int, "project_id": int},
51 )
52
53 ParsedGroupsCount = Dict[int, GroupCount]
54
55
56 def query_groups_past_counts(groups: Sequence[Group]) -> List[GroupsCountResponse]:
57 """Query Snuba for the counts for every group bucketed into hours.
58
59 It optimizes the query by guaranteeing that we look at group_ids that are from the same project id.
60 This is important for Snuba as the data is stored in blocks related to the project id.
61
62 We maximize the number of projects and groups to reduce the total number of Snuba queries.
63 Each project may not have enough groups in order to reach the max number of returned
64 elements (ELEMENTS_PER_SNUBA_PAGE), thus, projects with few groups should be grouped together until
65 we get at least a certain number of groups.
66
67 NOTE: Groups with less than the maximum number of buckets (think of groups with just 1 event or less
68 than 7 days old) will skew the optimization since we may only get one page and less elements than the max
69 ELEMENTS_PER_SNUBA_PAGE.
70 """
71 all_results = [] # type: ignore[var-annotated]
72 if not groups:
73 return all_results
74
75 start_date, end_date = _start_and_end_dates()
76
77 # Error groups use the events dataset while profile and perf groups use the issue platform dataset
78 error_groups: List[Group] = []
79 other_groups: List[Group] = []
80 for g in groups:
81 if g.issue_category == GroupCategory.ERROR:
82 error_groups.append(g)
83 else:
84 other_groups.append(g)
85
86 all_results += _process_groups(error_groups, start_date, end_date, GroupCategory.ERROR)
87 all_results += _process_groups(other_groups, start_date, end_date)
88
89 return all_results
90
91
92 def _process_groups(
93 groups: Sequence[Group],
94 start_date: datetime,
95 end_date: datetime,
96 category: Optional[GroupCategory] = None,
97 ) -> List[GroupsCountResponse]:
98 """Given a list of groups, query Snuba for their hourly bucket count.
99 The category defines which Snuba dataset and entity we query."""
100 all_results = [] # type: ignore[var-annotated]
101 if not groups:
102 return all_results
103
104 group_ids_by_project = _extract_project_and_group_ids(groups)
105 proj_ids, group_ids = [], []
106 processed_projects = 0
107 total_projects_count = len(group_ids_by_project)
108 organization_id = groups[0].project.organization.id
109
110 # This iteration guarantees that all groups for a project will be queried in the same call
111 # and only one page where the groups could be mixed with groups from another project
112 # Iterating over the sorted keys guarantees results for tests
113 for proj_id in sorted(group_ids_by_project.keys()):
114 _group_ids = group_ids_by_project[proj_id]
115 # Add them to the list of projects and groups to query
116 proj_ids.append(proj_id)
117 group_ids += _group_ids
118 processed_projects += 1
119 potential_num_elements = len(_group_ids) * BUCKETS_PER_GROUP
120 # This is trying to maximize the number of groups on the first page
121 if (
122 processed_projects < total_projects_count
123 and potential_num_elements < ELEMENTS_PER_SNUBA_PAGE
124 ):
125 continue
126
127 # TODO: Write this as a dispatcher type task and fire off a separate task per proj_ids
128 all_results += _query_with_pagination(
129 organization_id, proj_ids, group_ids, start_date, end_date, category
130 )
131 # We're ready for a new set of projects and ids
132 proj_ids, group_ids = [], []
133
134 return all_results
135
136
137 def _query_with_pagination(
138 organization_id: int,
139 project_ids: Sequence[int],
140 group_ids: Sequence[int],
141 start_date: datetime,
142 end_date: datetime,
143 category: Optional[GroupCategory],
144 ) -> List[GroupsCountResponse]:
145 """Query Snuba for event counts for the given list of project ids and groups ids in
146 a time range."""
147 all_results = []
148 offset = 0
149 while True:
150 query = _generate_query(project_ids, group_ids, offset, start_date, end_date, category)
151 request = Request(
152 dataset=_issue_category_dataset(category),
153 app_id=REFERRER,
154 query=query,
155 tenant_ids={"referrer": REFERRER, "organization_id": organization_id},
156 )
157 results = raw_snql_query(request, referrer=REFERRER)["data"]
158 all_results += results
159 offset += ELEMENTS_PER_SNUBA_PAGE
160 if not results or len(results) < ELEMENTS_PER_SNUBA_PAGE:
161 break
162
163 return all_results
164
165
166 def _generate_query(
167 project_ids: Sequence[int],
168 group_ids: Sequence[int],
169 offset: int,
170 start_date: datetime,
171 end_date: datetime,
172 category: Optional[GroupCategory],
173 ) -> Query:
174 """This simply generates a query based on the passed parameters"""
175 group_id_col = Column("group_id")
176 proj_id_col = Column("project_id")
177 return Query(
178 match=Entity(_issue_category_entity(category)),
179 select=[
180 proj_id_col,
181 group_id_col,
182 Function("toStartOfHour", [Column("timestamp")], "hourBucket"),
183 Function("count", []),
184 ],
185 groupby=[proj_id_col, group_id_col, Column("hourBucket")],
186 where=[
187 Condition(proj_id_col, Op.IN, Function("tuple", project_ids)),
188 Condition(Column("group_id"), Op.IN, Function("tuple", group_ids)),
189 Condition(Column("timestamp"), Op.GTE, start_date),
190 Condition(Column("timestamp"), Op.LT, end_date),
191 ],
192 limit=Limit(ELEMENTS_PER_SNUBA_PAGE),
193 offset=Offset(offset),
194 orderby=[
195 OrderBy(proj_id_col, Direction.ASC),
196 OrderBy(group_id_col, Direction.ASC),
197 OrderBy(Column("hourBucket"), Direction.ASC),
198 ],
199 )
200
201
202 def _start_and_end_dates(hours: int = BUCKETS_PER_GROUP) -> Tuple[datetime, datetime]:
203 """Return the start and end date of N hours time range."""
204 end_datetime = datetime.now()
205 return end_datetime - timedelta(hours=hours), end_datetime
206
207
208 def _extract_project_and_group_ids(groups: Sequence[Group]) -> Dict[int, List[int]]:
209 """Return all project and group IDs from a list of Group"""
210 group_ids_by_project: Dict[int, List[int]] = defaultdict(list)
211 for group in groups:
212 group_ids_by_project[group.project_id].append(group.id)
213
214 return group_ids_by_project
215
216
217 def get_group_hourly_count(group: Group) -> int:
218 """Return the number of events a group has had today in the last hour"""
219 key = f"hourly-group-count:{group.project.id}:{group.id}"
220 hourly_count = cache.get(key)
221
222 if hourly_count is None:
223 now = datetime.now()
224 current_hour = now.replace(minute=0, second=0, microsecond=0)
225 query = Query(
226 match=Entity(_issue_category_entity(group.issue_category)),
227 select=[
228 Function("count", []),
229 ],
230 where=[
231 Condition(Column("project_id"), Op.EQ, group.project.id),
232 Condition(Column("group_id"), Op.EQ, group.id),
233 Condition(Column("timestamp"), Op.GTE, current_hour),
234 Condition(Column("timestamp"), Op.LT, now),
235 ],
236 )
237 request = Request(
238 dataset=_issue_category_dataset(group.issue_category),
239 app_id=IS_ESCALATING_REFERRER,
240 query=query,
241 tenant_ids={
242 "referrer": IS_ESCALATING_REFERRER,
243 "organization_id": group.project.organization.id,
244 },
245 )
246 hourly_count = int(
247 raw_snql_query(request, referrer=IS_ESCALATING_REFERRER)["data"][0]["count()"]
248 )
249 cache.set(key, hourly_count, GROUP_HOURLY_COUNT_TTL)
250 return int(hourly_count)
251
252
253 def is_escalating(group: Group) -> bool:
254 """Return boolean depending on if the group is escalating or not"""
255 group_hourly_count = get_group_hourly_count(group)
256 forecast_today = EscalatingGroupForecast.fetch_todays_forecast(group.project.id, group.id)
257 # Check if current event occurance is greater than forecast for today's date
258 if group_hourly_count > forecast_today:
259 group.substatus = GroupSubStatus.ESCALATING
260 group.status = GroupStatus.UNRESOLVED
261 add_group_to_inbox(group, GroupInboxReason.ESCALATING)
262
263 analytics.record(
264 "issue.escalating",
265 organization_id=group.project.organization.id,
266 project_id=group.project.id,
267 group_id=group.id,
268 )
269 return True
270 return False
271
272
273 def parse_groups_past_counts(response: Sequence[GroupsCountResponse]) -> ParsedGroupsCount:
274 """
275 Return the parsed snuba response for groups past counts to be used in generate_issue_forecast.
276 ParsedGroupCount is of the form {<group_id>: {"intervals": [str], "data": [int]}}.
277
278 `response`: Snuba response for group event counts
279 """
280 group_counts: ParsedGroupsCount = {}
281 group_ids_list = group_counts.keys()
282 for data in response:
283 group_id = data["group_id"]
284 if group_id not in group_ids_list:
285 group_counts[group_id] = {
286 "intervals": [data["hourBucket"]],
287 "data": [data["count()"]],
288 }
289 else:
290 group_counts[group_id]["intervals"].append(data["hourBucket"])
291 group_counts[group_id]["data"].append(data["count()"])
292 return group_counts
293
294
295 def _issue_category_dataset(category: Optional[GroupCategory]) -> Dataset:
296 return Dataset.Events.value if category == GroupCategory.ERROR else Dataset.IssuePlatform.value
297
298
299 def _issue_category_entity(category: Optional[GroupCategory]) -> EntityKey:
300 return (
301 EntityKey.Events.value if category == GroupCategory.ERROR else EntityKey.IssuePlatform.value
302 )
303
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/sentry/issues/escalating.py b/src/sentry/issues/escalating.py
--- a/src/sentry/issues/escalating.py
+++ b/src/sentry/issues/escalating.py
@@ -258,6 +258,7 @@
if group_hourly_count > forecast_today:
group.substatus = GroupSubStatus.ESCALATING
group.status = GroupStatus.UNRESOLVED
+ group.save()
add_group_to_inbox(group, GroupInboxReason.ESCALATING)
analytics.record(
|
{"golden_diff": "diff --git a/src/sentry/issues/escalating.py b/src/sentry/issues/escalating.py\n--- a/src/sentry/issues/escalating.py\n+++ b/src/sentry/issues/escalating.py\n@@ -258,6 +258,7 @@\n if group_hourly_count > forecast_today:\n group.substatus = GroupSubStatus.ESCALATING\n group.status = GroupStatus.UNRESOLVED\n+ group.save()\n add_group_to_inbox(group, GroupInboxReason.ESCALATING)\n \n analytics.record(\n", "issue": "Fix logic bug where Group.Status is archived_until_escalating but the GroupInbox.Status is escalating\n\n\n\n\n## Objective:\nWe are not transitioning the GroupStatus to escalating but transitioned GroupInboxStatus.\n", "before_files": [{"content": "\"\"\"This module has the logic for querying Snuba for the hourly event count for a list of groups.\nThis is later used for generating group forecasts for determining when a group may be escalating.\n\"\"\"\n\nimport logging\nfrom collections import defaultdict\nfrom datetime import datetime, timedelta\nfrom typing import Dict, List, Optional, Sequence, Tuple, TypedDict\n\nfrom snuba_sdk import (\n Column,\n Condition,\n Direction,\n Entity,\n Function,\n Limit,\n Offset,\n Op,\n OrderBy,\n Query,\n Request,\n)\n\nfrom sentry import analytics\nfrom sentry.issues.escalating_group_forecast import EscalatingGroupForecast\nfrom sentry.issues.escalating_issues_alg import GroupCount\nfrom sentry.issues.grouptype import GroupCategory\nfrom sentry.models import Group\nfrom sentry.models.group import GroupStatus\nfrom sentry.models.groupinbox import GroupInboxReason, add_group_to_inbox\nfrom sentry.snuba.dataset import Dataset, EntityKey\nfrom sentry.types.group import GroupSubStatus\nfrom sentry.utils.cache import cache\nfrom sentry.utils.snuba import raw_snql_query\n\nlogger = logging.getLogger(__name__)\n\n__all__ = [\"query_groups_past_counts\", \"parse_groups_past_counts\"]\n\nREFERRER = \"sentry.issues.escalating\"\nELEMENTS_PER_SNUBA_PAGE = 10000 # This is the maximum value for Snuba\n# The amount of data needed to generate a group forecast\nBUCKETS_PER_GROUP = 7 * 24\nONE_WEEK_DURATION = 7\nIS_ESCALATING_REFERRER = \"sentry.issues.escalating.is_escalating\"\nGROUP_HOURLY_COUNT_TTL = 60\n\nGroupsCountResponse = TypedDict(\n \"GroupsCountResponse\",\n {\"group_id\": int, \"hourBucket\": str, \"count()\": int, \"project_id\": int},\n)\n\nParsedGroupsCount = Dict[int, GroupCount]\n\n\ndef query_groups_past_counts(groups: Sequence[Group]) -> List[GroupsCountResponse]:\n \"\"\"Query Snuba for the counts for every group bucketed into hours.\n\n It optimizes the query by guaranteeing that we look at group_ids that are from the same project id.\n This is important for Snuba as the data is stored in blocks related to the project id.\n\n We maximize the number of projects and groups to reduce the total number of Snuba queries.\n Each project may not have enough groups in order to reach the max number of returned\n elements (ELEMENTS_PER_SNUBA_PAGE), thus, projects with few groups should be grouped together until\n we get at least a certain number of groups.\n\n NOTE: Groups with less than the maximum number of buckets (think of groups with just 1 event or less\n than 7 days old) will skew the optimization since we may only get one page and less elements than the max\n ELEMENTS_PER_SNUBA_PAGE.\n \"\"\"\n all_results = [] # type: ignore[var-annotated]\n if not groups:\n return all_results\n\n start_date, end_date = _start_and_end_dates()\n\n # Error groups use the events dataset while profile and perf groups use the issue platform 
dataset\n error_groups: List[Group] = []\n other_groups: List[Group] = []\n for g in groups:\n if g.issue_category == GroupCategory.ERROR:\n error_groups.append(g)\n else:\n other_groups.append(g)\n\n all_results += _process_groups(error_groups, start_date, end_date, GroupCategory.ERROR)\n all_results += _process_groups(other_groups, start_date, end_date)\n\n return all_results\n\n\ndef _process_groups(\n groups: Sequence[Group],\n start_date: datetime,\n end_date: datetime,\n category: Optional[GroupCategory] = None,\n) -> List[GroupsCountResponse]:\n \"\"\"Given a list of groups, query Snuba for their hourly bucket count.\n The category defines which Snuba dataset and entity we query.\"\"\"\n all_results = [] # type: ignore[var-annotated]\n if not groups:\n return all_results\n\n group_ids_by_project = _extract_project_and_group_ids(groups)\n proj_ids, group_ids = [], []\n processed_projects = 0\n total_projects_count = len(group_ids_by_project)\n organization_id = groups[0].project.organization.id\n\n # This iteration guarantees that all groups for a project will be queried in the same call\n # and only one page where the groups could be mixed with groups from another project\n # Iterating over the sorted keys guarantees results for tests\n for proj_id in sorted(group_ids_by_project.keys()):\n _group_ids = group_ids_by_project[proj_id]\n # Add them to the list of projects and groups to query\n proj_ids.append(proj_id)\n group_ids += _group_ids\n processed_projects += 1\n potential_num_elements = len(_group_ids) * BUCKETS_PER_GROUP\n # This is trying to maximize the number of groups on the first page\n if (\n processed_projects < total_projects_count\n and potential_num_elements < ELEMENTS_PER_SNUBA_PAGE\n ):\n continue\n\n # TODO: Write this as a dispatcher type task and fire off a separate task per proj_ids\n all_results += _query_with_pagination(\n organization_id, proj_ids, group_ids, start_date, end_date, category\n )\n # We're ready for a new set of projects and ids\n proj_ids, group_ids = [], []\n\n return all_results\n\n\ndef _query_with_pagination(\n organization_id: int,\n project_ids: Sequence[int],\n group_ids: Sequence[int],\n start_date: datetime,\n end_date: datetime,\n category: Optional[GroupCategory],\n) -> List[GroupsCountResponse]:\n \"\"\"Query Snuba for event counts for the given list of project ids and groups ids in\n a time range.\"\"\"\n all_results = []\n offset = 0\n while True:\n query = _generate_query(project_ids, group_ids, offset, start_date, end_date, category)\n request = Request(\n dataset=_issue_category_dataset(category),\n app_id=REFERRER,\n query=query,\n tenant_ids={\"referrer\": REFERRER, \"organization_id\": organization_id},\n )\n results = raw_snql_query(request, referrer=REFERRER)[\"data\"]\n all_results += results\n offset += ELEMENTS_PER_SNUBA_PAGE\n if not results or len(results) < ELEMENTS_PER_SNUBA_PAGE:\n break\n\n return all_results\n\n\ndef _generate_query(\n project_ids: Sequence[int],\n group_ids: Sequence[int],\n offset: int,\n start_date: datetime,\n end_date: datetime,\n category: Optional[GroupCategory],\n) -> Query:\n \"\"\"This simply generates a query based on the passed parameters\"\"\"\n group_id_col = Column(\"group_id\")\n proj_id_col = Column(\"project_id\")\n return Query(\n match=Entity(_issue_category_entity(category)),\n select=[\n proj_id_col,\n group_id_col,\n Function(\"toStartOfHour\", [Column(\"timestamp\")], \"hourBucket\"),\n Function(\"count\", []),\n ],\n groupby=[proj_id_col, group_id_col, 
Column(\"hourBucket\")],\n where=[\n Condition(proj_id_col, Op.IN, Function(\"tuple\", project_ids)),\n Condition(Column(\"group_id\"), Op.IN, Function(\"tuple\", group_ids)),\n Condition(Column(\"timestamp\"), Op.GTE, start_date),\n Condition(Column(\"timestamp\"), Op.LT, end_date),\n ],\n limit=Limit(ELEMENTS_PER_SNUBA_PAGE),\n offset=Offset(offset),\n orderby=[\n OrderBy(proj_id_col, Direction.ASC),\n OrderBy(group_id_col, Direction.ASC),\n OrderBy(Column(\"hourBucket\"), Direction.ASC),\n ],\n )\n\n\ndef _start_and_end_dates(hours: int = BUCKETS_PER_GROUP) -> Tuple[datetime, datetime]:\n \"\"\"Return the start and end date of N hours time range.\"\"\"\n end_datetime = datetime.now()\n return end_datetime - timedelta(hours=hours), end_datetime\n\n\ndef _extract_project_and_group_ids(groups: Sequence[Group]) -> Dict[int, List[int]]:\n \"\"\"Return all project and group IDs from a list of Group\"\"\"\n group_ids_by_project: Dict[int, List[int]] = defaultdict(list)\n for group in groups:\n group_ids_by_project[group.project_id].append(group.id)\n\n return group_ids_by_project\n\n\ndef get_group_hourly_count(group: Group) -> int:\n \"\"\"Return the number of events a group has had today in the last hour\"\"\"\n key = f\"hourly-group-count:{group.project.id}:{group.id}\"\n hourly_count = cache.get(key)\n\n if hourly_count is None:\n now = datetime.now()\n current_hour = now.replace(minute=0, second=0, microsecond=0)\n query = Query(\n match=Entity(_issue_category_entity(group.issue_category)),\n select=[\n Function(\"count\", []),\n ],\n where=[\n Condition(Column(\"project_id\"), Op.EQ, group.project.id),\n Condition(Column(\"group_id\"), Op.EQ, group.id),\n Condition(Column(\"timestamp\"), Op.GTE, current_hour),\n Condition(Column(\"timestamp\"), Op.LT, now),\n ],\n )\n request = Request(\n dataset=_issue_category_dataset(group.issue_category),\n app_id=IS_ESCALATING_REFERRER,\n query=query,\n tenant_ids={\n \"referrer\": IS_ESCALATING_REFERRER,\n \"organization_id\": group.project.organization.id,\n },\n )\n hourly_count = int(\n raw_snql_query(request, referrer=IS_ESCALATING_REFERRER)[\"data\"][0][\"count()\"]\n )\n cache.set(key, hourly_count, GROUP_HOURLY_COUNT_TTL)\n return int(hourly_count)\n\n\ndef is_escalating(group: Group) -> bool:\n \"\"\"Return boolean depending on if the group is escalating or not\"\"\"\n group_hourly_count = get_group_hourly_count(group)\n forecast_today = EscalatingGroupForecast.fetch_todays_forecast(group.project.id, group.id)\n # Check if current event occurance is greater than forecast for today's date\n if group_hourly_count > forecast_today:\n group.substatus = GroupSubStatus.ESCALATING\n group.status = GroupStatus.UNRESOLVED\n add_group_to_inbox(group, GroupInboxReason.ESCALATING)\n\n analytics.record(\n \"issue.escalating\",\n organization_id=group.project.organization.id,\n project_id=group.project.id,\n group_id=group.id,\n )\n return True\n return False\n\n\ndef parse_groups_past_counts(response: Sequence[GroupsCountResponse]) -> ParsedGroupsCount:\n \"\"\"\n Return the parsed snuba response for groups past counts to be used in generate_issue_forecast.\n ParsedGroupCount is of the form {<group_id>: {\"intervals\": [str], \"data\": [int]}}.\n\n `response`: Snuba response for group event counts\n \"\"\"\n group_counts: ParsedGroupsCount = {}\n group_ids_list = group_counts.keys()\n for data in response:\n group_id = data[\"group_id\"]\n if group_id not in group_ids_list:\n group_counts[group_id] = {\n \"intervals\": [data[\"hourBucket\"]],\n 
\"data\": [data[\"count()\"]],\n }\n else:\n group_counts[group_id][\"intervals\"].append(data[\"hourBucket\"])\n group_counts[group_id][\"data\"].append(data[\"count()\"])\n return group_counts\n\n\ndef _issue_category_dataset(category: Optional[GroupCategory]) -> Dataset:\n return Dataset.Events.value if category == GroupCategory.ERROR else Dataset.IssuePlatform.value\n\n\ndef _issue_category_entity(category: Optional[GroupCategory]) -> EntityKey:\n return (\n EntityKey.Events.value if category == GroupCategory.ERROR else EntityKey.IssuePlatform.value\n )\n", "path": "src/sentry/issues/escalating.py"}], "after_files": [{"content": "\"\"\"This module has the logic for querying Snuba for the hourly event count for a list of groups.\nThis is later used for generating group forecasts for determining when a group may be escalating.\n\"\"\"\n\nimport logging\nfrom collections import defaultdict\nfrom datetime import datetime, timedelta\nfrom typing import Dict, List, Optional, Sequence, Tuple, TypedDict\n\nfrom snuba_sdk import (\n Column,\n Condition,\n Direction,\n Entity,\n Function,\n Limit,\n Offset,\n Op,\n OrderBy,\n Query,\n Request,\n)\n\nfrom sentry import analytics\nfrom sentry.issues.escalating_group_forecast import EscalatingGroupForecast\nfrom sentry.issues.escalating_issues_alg import GroupCount\nfrom sentry.issues.grouptype import GroupCategory\nfrom sentry.models import Group\nfrom sentry.models.group import GroupStatus\nfrom sentry.models.groupinbox import GroupInboxReason, add_group_to_inbox\nfrom sentry.snuba.dataset import Dataset, EntityKey\nfrom sentry.types.group import GroupSubStatus\nfrom sentry.utils.cache import cache\nfrom sentry.utils.snuba import raw_snql_query\n\nlogger = logging.getLogger(__name__)\n\n__all__ = [\"query_groups_past_counts\", \"parse_groups_past_counts\"]\n\nREFERRER = \"sentry.issues.escalating\"\nELEMENTS_PER_SNUBA_PAGE = 10000 # This is the maximum value for Snuba\n# The amount of data needed to generate a group forecast\nBUCKETS_PER_GROUP = 7 * 24\nONE_WEEK_DURATION = 7\nIS_ESCALATING_REFERRER = \"sentry.issues.escalating.is_escalating\"\nGROUP_HOURLY_COUNT_TTL = 60\n\nGroupsCountResponse = TypedDict(\n \"GroupsCountResponse\",\n {\"group_id\": int, \"hourBucket\": str, \"count()\": int, \"project_id\": int},\n)\n\nParsedGroupsCount = Dict[int, GroupCount]\n\n\ndef query_groups_past_counts(groups: Sequence[Group]) -> List[GroupsCountResponse]:\n \"\"\"Query Snuba for the counts for every group bucketed into hours.\n\n It optimizes the query by guaranteeing that we look at group_ids that are from the same project id.\n This is important for Snuba as the data is stored in blocks related to the project id.\n\n We maximize the number of projects and groups to reduce the total number of Snuba queries.\n Each project may not have enough groups in order to reach the max number of returned\n elements (ELEMENTS_PER_SNUBA_PAGE), thus, projects with few groups should be grouped together until\n we get at least a certain number of groups.\n\n NOTE: Groups with less than the maximum number of buckets (think of groups with just 1 event or less\n than 7 days old) will skew the optimization since we may only get one page and less elements than the max\n ELEMENTS_PER_SNUBA_PAGE.\n \"\"\"\n all_results = [] # type: ignore[var-annotated]\n if not groups:\n return all_results\n\n start_date, end_date = _start_and_end_dates()\n\n # Error groups use the events dataset while profile and perf groups use the issue platform dataset\n error_groups: List[Group] = []\n 
other_groups: List[Group] = []\n for g in groups:\n if g.issue_category == GroupCategory.ERROR:\n error_groups.append(g)\n else:\n other_groups.append(g)\n\n all_results += _process_groups(error_groups, start_date, end_date, GroupCategory.ERROR)\n all_results += _process_groups(other_groups, start_date, end_date)\n\n return all_results\n\n\ndef _process_groups(\n groups: Sequence[Group],\n start_date: datetime,\n end_date: datetime,\n category: Optional[GroupCategory] = None,\n) -> List[GroupsCountResponse]:\n \"\"\"Given a list of groups, query Snuba for their hourly bucket count.\n The category defines which Snuba dataset and entity we query.\"\"\"\n all_results = [] # type: ignore[var-annotated]\n if not groups:\n return all_results\n\n group_ids_by_project = _extract_project_and_group_ids(groups)\n proj_ids, group_ids = [], []\n processed_projects = 0\n total_projects_count = len(group_ids_by_project)\n organization_id = groups[0].project.organization.id\n\n # This iteration guarantees that all groups for a project will be queried in the same call\n # and only one page where the groups could be mixed with groups from another project\n # Iterating over the sorted keys guarantees results for tests\n for proj_id in sorted(group_ids_by_project.keys()):\n _group_ids = group_ids_by_project[proj_id]\n # Add them to the list of projects and groups to query\n proj_ids.append(proj_id)\n group_ids += _group_ids\n processed_projects += 1\n potential_num_elements = len(_group_ids) * BUCKETS_PER_GROUP\n # This is trying to maximize the number of groups on the first page\n if (\n processed_projects < total_projects_count\n and potential_num_elements < ELEMENTS_PER_SNUBA_PAGE\n ):\n continue\n\n # TODO: Write this as a dispatcher type task and fire off a separate task per proj_ids\n all_results += _query_with_pagination(\n organization_id, proj_ids, group_ids, start_date, end_date, category\n )\n # We're ready for a new set of projects and ids\n proj_ids, group_ids = [], []\n\n return all_results\n\n\ndef _query_with_pagination(\n organization_id: int,\n project_ids: Sequence[int],\n group_ids: Sequence[int],\n start_date: datetime,\n end_date: datetime,\n category: Optional[GroupCategory],\n) -> List[GroupsCountResponse]:\n \"\"\"Query Snuba for event counts for the given list of project ids and groups ids in\n a time range.\"\"\"\n all_results = []\n offset = 0\n while True:\n query = _generate_query(project_ids, group_ids, offset, start_date, end_date, category)\n request = Request(\n dataset=_issue_category_dataset(category),\n app_id=REFERRER,\n query=query,\n tenant_ids={\"referrer\": REFERRER, \"organization_id\": organization_id},\n )\n results = raw_snql_query(request, referrer=REFERRER)[\"data\"]\n all_results += results\n offset += ELEMENTS_PER_SNUBA_PAGE\n if not results or len(results) < ELEMENTS_PER_SNUBA_PAGE:\n break\n\n return all_results\n\n\ndef _generate_query(\n project_ids: Sequence[int],\n group_ids: Sequence[int],\n offset: int,\n start_date: datetime,\n end_date: datetime,\n category: Optional[GroupCategory],\n) -> Query:\n \"\"\"This simply generates a query based on the passed parameters\"\"\"\n group_id_col = Column(\"group_id\")\n proj_id_col = Column(\"project_id\")\n return Query(\n match=Entity(_issue_category_entity(category)),\n select=[\n proj_id_col,\n group_id_col,\n Function(\"toStartOfHour\", [Column(\"timestamp\")], \"hourBucket\"),\n Function(\"count\", []),\n ],\n groupby=[proj_id_col, group_id_col, Column(\"hourBucket\")],\n where=[\n Condition(proj_id_col, 
Op.IN, Function(\"tuple\", project_ids)),\n Condition(Column(\"group_id\"), Op.IN, Function(\"tuple\", group_ids)),\n Condition(Column(\"timestamp\"), Op.GTE, start_date),\n Condition(Column(\"timestamp\"), Op.LT, end_date),\n ],\n limit=Limit(ELEMENTS_PER_SNUBA_PAGE),\n offset=Offset(offset),\n orderby=[\n OrderBy(proj_id_col, Direction.ASC),\n OrderBy(group_id_col, Direction.ASC),\n OrderBy(Column(\"hourBucket\"), Direction.ASC),\n ],\n )\n\n\ndef _start_and_end_dates(hours: int = BUCKETS_PER_GROUP) -> Tuple[datetime, datetime]:\n \"\"\"Return the start and end date of N hours time range.\"\"\"\n end_datetime = datetime.now()\n return end_datetime - timedelta(hours=hours), end_datetime\n\n\ndef _extract_project_and_group_ids(groups: Sequence[Group]) -> Dict[int, List[int]]:\n \"\"\"Return all project and group IDs from a list of Group\"\"\"\n group_ids_by_project: Dict[int, List[int]] = defaultdict(list)\n for group in groups:\n group_ids_by_project[group.project_id].append(group.id)\n\n return group_ids_by_project\n\n\ndef get_group_hourly_count(group: Group) -> int:\n \"\"\"Return the number of events a group has had today in the last hour\"\"\"\n key = f\"hourly-group-count:{group.project.id}:{group.id}\"\n hourly_count = cache.get(key)\n\n if hourly_count is None:\n now = datetime.now()\n current_hour = now.replace(minute=0, second=0, microsecond=0)\n query = Query(\n match=Entity(_issue_category_entity(group.issue_category)),\n select=[\n Function(\"count\", []),\n ],\n where=[\n Condition(Column(\"project_id\"), Op.EQ, group.project.id),\n Condition(Column(\"group_id\"), Op.EQ, group.id),\n Condition(Column(\"timestamp\"), Op.GTE, current_hour),\n Condition(Column(\"timestamp\"), Op.LT, now),\n ],\n )\n request = Request(\n dataset=_issue_category_dataset(group.issue_category),\n app_id=IS_ESCALATING_REFERRER,\n query=query,\n tenant_ids={\n \"referrer\": IS_ESCALATING_REFERRER,\n \"organization_id\": group.project.organization.id,\n },\n )\n hourly_count = int(\n raw_snql_query(request, referrer=IS_ESCALATING_REFERRER)[\"data\"][0][\"count()\"]\n )\n cache.set(key, hourly_count, GROUP_HOURLY_COUNT_TTL)\n return int(hourly_count)\n\n\ndef is_escalating(group: Group) -> bool:\n \"\"\"Return boolean depending on if the group is escalating or not\"\"\"\n group_hourly_count = get_group_hourly_count(group)\n forecast_today = EscalatingGroupForecast.fetch_todays_forecast(group.project.id, group.id)\n # Check if current event occurance is greater than forecast for today's date\n if group_hourly_count > forecast_today:\n group.substatus = GroupSubStatus.ESCALATING\n group.status = GroupStatus.UNRESOLVED\n group.save()\n add_group_to_inbox(group, GroupInboxReason.ESCALATING)\n\n analytics.record(\n \"issue.escalating\",\n organization_id=group.project.organization.id,\n project_id=group.project.id,\n group_id=group.id,\n )\n return True\n return False\n\n\ndef parse_groups_past_counts(response: Sequence[GroupsCountResponse]) -> ParsedGroupsCount:\n \"\"\"\n Return the parsed snuba response for groups past counts to be used in generate_issue_forecast.\n ParsedGroupCount is of the form {<group_id>: {\"intervals\": [str], \"data\": [int]}}.\n\n `response`: Snuba response for group event counts\n \"\"\"\n group_counts: ParsedGroupsCount = {}\n group_ids_list = group_counts.keys()\n for data in response:\n group_id = data[\"group_id\"]\n if group_id not in group_ids_list:\n group_counts[group_id] = {\n \"intervals\": [data[\"hourBucket\"]],\n \"data\": [data[\"count()\"]],\n }\n else:\n 
group_counts[group_id][\"intervals\"].append(data[\"hourBucket\"])\n group_counts[group_id][\"data\"].append(data[\"count()\"])\n return group_counts\n\n\ndef _issue_category_dataset(category: Optional[GroupCategory]) -> Dataset:\n return Dataset.Events.value if category == GroupCategory.ERROR else Dataset.IssuePlatform.value\n\n\ndef _issue_category_entity(category: Optional[GroupCategory]) -> EntityKey:\n return (\n EntityKey.Events.value if category == GroupCategory.ERROR else EntityKey.IssuePlatform.value\n )\n", "path": "src/sentry/issues/escalating.py"}]}
| 3,755 | 119 |
gh_patches_debug_586
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-1275
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.34
On the docket:
+ [x] Allow command-line arguments to be read from a file #1271
+ [x] Issue when running a module inside pex file #1018
+ [x] Guard against concurrent re-imports. #1270
+ [x] Ensure Pip logs to stderr. #1268
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.33"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.33"
+__version__ = "2.1.34"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.33\"\n+__version__ = \"2.1.34\"\n", "issue": "Release 2.1.34\nOn the docket:\r\n+ [x] Allow command-line arguments to be read from a file #1271\r\n+ [x] Issue when running a module inside pex file #1018\r\n+ [x] Guard against concurrent re-imports. #1270\r\n+ [x] Ensure Pip logs to stderr. #1268\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.33\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.34\"\n", "path": "pex/version.py"}]}
| 394 | 96 |
gh_patches_debug_1723
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-40863
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
homebrew_tap fails to tap caskroom/cask now.
<!---
Verify first that your issue/request is not already reported on GitHub.
THIS FORM WILL BE READ BY A MACHINE, COMPLETE ALL SECTIONS AS DESCRIBED.
Also test if the latest release, and devel branch are affected too.
ALWAYS add information AFTER (OUTSIDE) these html comments.
Otherwise it may end up being automatically closed by our bot. -->
##### SUMMARY
<!--- Explain the problem briefly -->
Running the task `homebrew_tap: name=caskroom/cask` fails due to the fact that caskroom/cask has migrated to homebrew/cask. See https://github.com/Homebrew/brew/pull/4210
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Insert, BELOW THIS COMMENT, the name of the module, plugin, task or feature.
Do not include extra details here, e.g. "vyos_command" not "the network module vyos_command" or the full path-->
homebrew_tap
##### ANSIBLE VERSION
<!--- Paste, BELOW THIS COMMENT, verbatim output from "ansible --version" between quotes below -->
```
```
##### CONFIGURATION
<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of "ansible-config dump --only-changed"
Otherwise, mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).-->
```
(14:49:33) C02W513SHTD8:tmp aso$ ansible-config dump --only-changed
(14:49:35) C02W513SHTD8:tmp aso$
```
##### OS / ENVIRONMENT
<!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
Also mention the specific version of what you are trying to control,
e.g. if this is a network bug the version of firmware on the network device.-->
From macOS 10.13.4
To macOS 10.13.4
##### STEPS TO REPRODUCE
<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used. -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: install homebrew cask
homebrew_tap: name=caskroom/cask
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
The task should have succeeded and running `brew tap` should have resulted in caskroom/cask being listed.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
The task failed even though it successfully tapped a homebrew cask. Running `brew tap` results in homebrew/cask being listed.
<!--- Paste verbatim command output between quotes below -->
```
Alberts-Mac:bin bambooagent$ brew tap
homebrew/cask
homebrew/core
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansible/modules/packaging/os/homebrew_tap.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # (c) 2013, Daniel Jaouen <[email protected]>
5 # (c) 2016, Indrajit Raychaudhuri <[email protected]>
6 #
7 # Based on homebrew (Andrew Dunham <[email protected]>)
8 #
9 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
10
11 from __future__ import absolute_import, division, print_function
12 __metaclass__ = type
13
14
15 ANSIBLE_METADATA = {'metadata_version': '1.1',
16 'status': ['preview'],
17 'supported_by': 'community'}
18
19
20 DOCUMENTATION = '''
21 ---
22 module: homebrew_tap
23 author:
24 - "Indrajit Raychaudhuri (@indrajitr)"
25 - "Daniel Jaouen (@danieljaouen)"
26 short_description: Tap a Homebrew repository.
27 description:
28 - Tap external Homebrew repositories.
29 version_added: "1.6"
30 options:
31 name:
32 description:
33 - The GitHub user/organization repository to tap.
34 required: true
35 aliases: ['tap']
36 url:
37 description:
38 - The optional git URL of the repository to tap. The URL is not
39 assumed to be on GitHub, and the protocol doesn't have to be HTTP.
40 Any location and protocol that git can handle is fine.
41 - I(name) option may not be a list of multiple taps (but a single
42 tap instead) when this option is provided.
43 required: false
44 version_added: "2.2"
45 state:
46 description:
47 - state of the repository.
48 choices: [ 'present', 'absent' ]
49 required: false
50 default: 'present'
51 requirements: [ homebrew ]
52 '''
53
54 EXAMPLES = '''
55 - homebrew_tap:
56 name: homebrew/dupes
57
58 - homebrew_tap:
59 name: homebrew/dupes
60 state: absent
61
62 - homebrew_tap:
63 name: homebrew/dupes,homebrew/science
64 state: present
65
66 - homebrew_tap:
67 name: telemachus/brew
68 url: 'https://bitbucket.org/telemachus/brew'
69 '''
70
71 import re
72
73 from ansible.module_utils.basic import AnsibleModule
74
75
76 def a_valid_tap(tap):
77 '''Returns True if the tap is valid.'''
78 regex = re.compile(r'^([\w-]+)/(homebrew-)?([\w-]+)$')
79 return regex.match(tap)
80
81
82 def already_tapped(module, brew_path, tap):
83 '''Returns True if already tapped.'''
84
85 rc, out, err = module.run_command([
86 brew_path,
87 'tap',
88 ])
89
90 taps = [tap_.strip().lower() for tap_ in out.split('\n') if tap_]
91 tap_name = re.sub('homebrew-', '', tap.lower())
92
93 return tap_name in taps
94
95
96 def add_tap(module, brew_path, tap, url=None):
97 '''Adds a single tap.'''
98 failed, changed, msg = False, False, ''
99
100 if not a_valid_tap(tap):
101 failed = True
102 msg = 'not a valid tap: %s' % tap
103
104 elif not already_tapped(module, brew_path, tap):
105 if module.check_mode:
106 module.exit_json(changed=True)
107
108 rc, out, err = module.run_command([
109 brew_path,
110 'tap',
111 tap,
112 url,
113 ])
114 if already_tapped(module, brew_path, tap):
115 changed = True
116 msg = 'successfully tapped: %s' % tap
117 else:
118 failed = True
119 msg = 'failed to tap: %s' % tap
120
121 else:
122 msg = 'already tapped: %s' % tap
123
124 return (failed, changed, msg)
125
126
127 def add_taps(module, brew_path, taps):
128 '''Adds one or more taps.'''
129 failed, unchanged, added, msg = False, 0, 0, ''
130
131 for tap in taps:
132 (failed, changed, msg) = add_tap(module, brew_path, tap)
133 if failed:
134 break
135 if changed:
136 added += 1
137 else:
138 unchanged += 1
139
140 if failed:
141 msg = 'added: %d, unchanged: %d, error: ' + msg
142 msg = msg % (added, unchanged)
143 elif added:
144 changed = True
145 msg = 'added: %d, unchanged: %d' % (added, unchanged)
146 else:
147 msg = 'added: %d, unchanged: %d' % (added, unchanged)
148
149 return (failed, changed, msg)
150
151
152 def remove_tap(module, brew_path, tap):
153 '''Removes a single tap.'''
154 failed, changed, msg = False, False, ''
155
156 if not a_valid_tap(tap):
157 failed = True
158 msg = 'not a valid tap: %s' % tap
159
160 elif already_tapped(module, brew_path, tap):
161 if module.check_mode:
162 module.exit_json(changed=True)
163
164 rc, out, err = module.run_command([
165 brew_path,
166 'untap',
167 tap,
168 ])
169 if not already_tapped(module, brew_path, tap):
170 changed = True
171 msg = 'successfully untapped: %s' % tap
172 else:
173 failed = True
174 msg = 'failed to untap: %s' % tap
175
176 else:
177 msg = 'already untapped: %s' % tap
178
179 return (failed, changed, msg)
180
181
182 def remove_taps(module, brew_path, taps):
183 '''Removes one or more taps.'''
184 failed, unchanged, removed, msg = False, 0, 0, ''
185
186 for tap in taps:
187 (failed, changed, msg) = remove_tap(module, brew_path, tap)
188 if failed:
189 break
190 if changed:
191 removed += 1
192 else:
193 unchanged += 1
194
195 if failed:
196 msg = 'removed: %d, unchanged: %d, error: ' + msg
197 msg = msg % (removed, unchanged)
198 elif removed:
199 changed = True
200 msg = 'removed: %d, unchanged: %d' % (removed, unchanged)
201 else:
202 msg = 'removed: %d, unchanged: %d' % (removed, unchanged)
203
204 return (failed, changed, msg)
205
206
207 def main():
208 module = AnsibleModule(
209 argument_spec=dict(
210 name=dict(aliases=['tap'], type='list', required=True),
211 url=dict(default=None, required=False),
212 state=dict(default='present', choices=['present', 'absent']),
213 ),
214 supports_check_mode=True,
215 )
216
217 brew_path = module.get_bin_path(
218 'brew',
219 required=True,
220 opt_dirs=['/usr/local/bin']
221 )
222
223 taps = module.params['name']
224 url = module.params['url']
225
226 if module.params['state'] == 'present':
227 if url is None:
228 # No tap URL provided explicitly, continue with bulk addition
229 # of all the taps.
230 failed, changed, msg = add_taps(module, brew_path, taps)
231 else:
232 # When an tap URL is provided explicitly, we allow adding
233 # *single* tap only. Validate and proceed to add single tap.
234 if len(taps) > 1:
235 msg = "List of multiple taps may not be provided with 'url' option."
236 module.fail_json(msg=msg)
237 else:
238 failed, changed, msg = add_tap(module, brew_path, taps[0], url)
239
240 if failed:
241 module.fail_json(msg=msg)
242 else:
243 module.exit_json(changed=changed, msg=msg)
244
245 elif module.params['state'] == 'absent':
246 failed, changed, msg = remove_taps(module, brew_path, taps)
247
248 if failed:
249 module.fail_json(msg=msg)
250 else:
251 module.exit_json(changed=changed, msg=msg)
252
253
254 if __name__ == '__main__':
255 main()
256
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/ansible/modules/packaging/os/homebrew_tap.py b/lib/ansible/modules/packaging/os/homebrew_tap.py
--- a/lib/ansible/modules/packaging/os/homebrew_tap.py
+++ b/lib/ansible/modules/packaging/os/homebrew_tap.py
@@ -111,7 +111,7 @@
tap,
url,
])
- if already_tapped(module, brew_path, tap):
+ if rc == 0:
changed = True
msg = 'successfully tapped: %s' % tap
else:
|
{"golden_diff": "diff --git a/lib/ansible/modules/packaging/os/homebrew_tap.py b/lib/ansible/modules/packaging/os/homebrew_tap.py\n--- a/lib/ansible/modules/packaging/os/homebrew_tap.py\n+++ b/lib/ansible/modules/packaging/os/homebrew_tap.py\n@@ -111,7 +111,7 @@\n tap,\n url,\n ])\n- if already_tapped(module, brew_path, tap):\n+ if rc == 0:\n changed = True\n msg = 'successfully tapped: %s' % tap\n else:\n", "issue": "homebrew_tap fails to tap caskroom/cask now.\n<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nTHIS FORM WILL BE READ BY A MACHINE, COMPLETE ALL SECTIONS AS DESCRIBED.\r\nAlso test if the latest release, and devel branch are affected too.\r\nALWAYS add information AFTER (OUTSIDE) these html comments.\r\nOtherwise it may end up being automatically closed by our bot. -->\r\n\r\n##### SUMMARY\r\n<!--- Explain the problem briefly -->\r\nRunning the task `homebrew_tap: name=caskroom/cask` fails due to the fact that caskroom/cask has migrated to homebrew/cask. See https://github.com/Homebrew/brew/pull/4210\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\n<!--- Insert, BELOW THIS COMMENT, the name of the module, plugin, task or feature.\r\nDo not include extra details here, e.g. \"vyos_command\" not \"the network module vyos_command\" or the full path-->\r\nhomebrew_tap\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste, BELOW THIS COMMENT, verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\n\r\n```\r\n\r\n##### CONFIGURATION\r\n<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of \"ansible-config dump --only-changed\"\r\nOtherwise, mention any settings you have changed/added/removed in ansible.cfg\r\n(or using the ANSIBLE_* environment variables).-->\r\n```\r\n(14:49:33) C02W513SHTD8:tmp aso$ ansible-config dump --only-changed\r\n(14:49:35) C02W513SHTD8:tmp aso$\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n<!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are\r\nmanaging, or say \"N/A\" for anything that is not platform-specific.\r\nAlso mention the specific version of what you are trying to control,\r\ne.g. if this is a network bug the version of firmware on the network device.-->\r\nFrom macOS 10.13.4\r\nTo macOS 10.13.4\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used. -->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: install homebrew cask\r\n homebrew_tap: name=caskroom/cask\r\n```\r\n\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- What did you expect to happen when running the steps above? -->\r\nThe task should have succeeded and running `brew tap` should have resulted in caskroom/cask being listed.\r\n\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->\r\nThe task failed even though it successfully tapped a homebrew cask. 
Running `brew tap` results in homebrew/cask being listed.\r\n\r\n<!--- Paste verbatim command output between quotes below -->\r\n```\r\nAlberts-Mac:bin bambooagent$ brew tap\r\nhomebrew/cask\r\nhomebrew/core\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2013, Daniel Jaouen <[email protected]>\n# (c) 2016, Indrajit Raychaudhuri <[email protected]>\n#\n# Based on homebrew (Andrew Dunham <[email protected]>)\n#\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'community'}\n\n\nDOCUMENTATION = '''\n---\nmodule: homebrew_tap\nauthor:\n - \"Indrajit Raychaudhuri (@indrajitr)\"\n - \"Daniel Jaouen (@danieljaouen)\"\nshort_description: Tap a Homebrew repository.\ndescription:\n - Tap external Homebrew repositories.\nversion_added: \"1.6\"\noptions:\n name:\n description:\n - The GitHub user/organization repository to tap.\n required: true\n aliases: ['tap']\n url:\n description:\n - The optional git URL of the repository to tap. The URL is not\n assumed to be on GitHub, and the protocol doesn't have to be HTTP.\n Any location and protocol that git can handle is fine.\n - I(name) option may not be a list of multiple taps (but a single\n tap instead) when this option is provided.\n required: false\n version_added: \"2.2\"\n state:\n description:\n - state of the repository.\n choices: [ 'present', 'absent' ]\n required: false\n default: 'present'\nrequirements: [ homebrew ]\n'''\n\nEXAMPLES = '''\n- homebrew_tap:\n name: homebrew/dupes\n\n- homebrew_tap:\n name: homebrew/dupes\n state: absent\n\n- homebrew_tap:\n name: homebrew/dupes,homebrew/science\n state: present\n\n- homebrew_tap:\n name: telemachus/brew\n url: 'https://bitbucket.org/telemachus/brew'\n'''\n\nimport re\n\nfrom ansible.module_utils.basic import AnsibleModule\n\n\ndef a_valid_tap(tap):\n '''Returns True if the tap is valid.'''\n regex = re.compile(r'^([\\w-]+)/(homebrew-)?([\\w-]+)$')\n return regex.match(tap)\n\n\ndef already_tapped(module, brew_path, tap):\n '''Returns True if already tapped.'''\n\n rc, out, err = module.run_command([\n brew_path,\n 'tap',\n ])\n\n taps = [tap_.strip().lower() for tap_ in out.split('\\n') if tap_]\n tap_name = re.sub('homebrew-', '', tap.lower())\n\n return tap_name in taps\n\n\ndef add_tap(module, brew_path, tap, url=None):\n '''Adds a single tap.'''\n failed, changed, msg = False, False, ''\n\n if not a_valid_tap(tap):\n failed = True\n msg = 'not a valid tap: %s' % tap\n\n elif not already_tapped(module, brew_path, tap):\n if module.check_mode:\n module.exit_json(changed=True)\n\n rc, out, err = module.run_command([\n brew_path,\n 'tap',\n tap,\n url,\n ])\n if already_tapped(module, brew_path, tap):\n changed = True\n msg = 'successfully tapped: %s' % tap\n else:\n failed = True\n msg = 'failed to tap: %s' % tap\n\n else:\n msg = 'already tapped: %s' % tap\n\n return (failed, changed, msg)\n\n\ndef add_taps(module, brew_path, taps):\n '''Adds one or more taps.'''\n failed, unchanged, added, msg = False, 0, 0, ''\n\n for tap in taps:\n (failed, changed, msg) = add_tap(module, brew_path, tap)\n if failed:\n break\n if changed:\n added += 1\n else:\n unchanged += 1\n\n if failed:\n msg = 'added: %d, unchanged: %d, error: ' + msg\n msg = msg % (added, unchanged)\n elif added:\n changed = True\n msg = 'added: %d, 
unchanged: %d' % (added, unchanged)\n else:\n msg = 'added: %d, unchanged: %d' % (added, unchanged)\n\n return (failed, changed, msg)\n\n\ndef remove_tap(module, brew_path, tap):\n '''Removes a single tap.'''\n failed, changed, msg = False, False, ''\n\n if not a_valid_tap(tap):\n failed = True\n msg = 'not a valid tap: %s' % tap\n\n elif already_tapped(module, brew_path, tap):\n if module.check_mode:\n module.exit_json(changed=True)\n\n rc, out, err = module.run_command([\n brew_path,\n 'untap',\n tap,\n ])\n if not already_tapped(module, brew_path, tap):\n changed = True\n msg = 'successfully untapped: %s' % tap\n else:\n failed = True\n msg = 'failed to untap: %s' % tap\n\n else:\n msg = 'already untapped: %s' % tap\n\n return (failed, changed, msg)\n\n\ndef remove_taps(module, brew_path, taps):\n '''Removes one or more taps.'''\n failed, unchanged, removed, msg = False, 0, 0, ''\n\n for tap in taps:\n (failed, changed, msg) = remove_tap(module, brew_path, tap)\n if failed:\n break\n if changed:\n removed += 1\n else:\n unchanged += 1\n\n if failed:\n msg = 'removed: %d, unchanged: %d, error: ' + msg\n msg = msg % (removed, unchanged)\n elif removed:\n changed = True\n msg = 'removed: %d, unchanged: %d' % (removed, unchanged)\n else:\n msg = 'removed: %d, unchanged: %d' % (removed, unchanged)\n\n return (failed, changed, msg)\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n name=dict(aliases=['tap'], type='list', required=True),\n url=dict(default=None, required=False),\n state=dict(default='present', choices=['present', 'absent']),\n ),\n supports_check_mode=True,\n )\n\n brew_path = module.get_bin_path(\n 'brew',\n required=True,\n opt_dirs=['/usr/local/bin']\n )\n\n taps = module.params['name']\n url = module.params['url']\n\n if module.params['state'] == 'present':\n if url is None:\n # No tap URL provided explicitly, continue with bulk addition\n # of all the taps.\n failed, changed, msg = add_taps(module, brew_path, taps)\n else:\n # When an tap URL is provided explicitly, we allow adding\n # *single* tap only. 
Validate and proceed to add single tap.\n if len(taps) > 1:\n msg = \"List of multiple taps may not be provided with 'url' option.\"\n module.fail_json(msg=msg)\n else:\n failed, changed, msg = add_tap(module, brew_path, taps[0], url)\n\n if failed:\n module.fail_json(msg=msg)\n else:\n module.exit_json(changed=changed, msg=msg)\n\n elif module.params['state'] == 'absent':\n failed, changed, msg = remove_taps(module, brew_path, taps)\n\n if failed:\n module.fail_json(msg=msg)\n else:\n module.exit_json(changed=changed, msg=msg)\n\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/packaging/os/homebrew_tap.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2013, Daniel Jaouen <[email protected]>\n# (c) 2016, Indrajit Raychaudhuri <[email protected]>\n#\n# Based on homebrew (Andrew Dunham <[email protected]>)\n#\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'community'}\n\n\nDOCUMENTATION = '''\n---\nmodule: homebrew_tap\nauthor:\n - \"Indrajit Raychaudhuri (@indrajitr)\"\n - \"Daniel Jaouen (@danieljaouen)\"\nshort_description: Tap a Homebrew repository.\ndescription:\n - Tap external Homebrew repositories.\nversion_added: \"1.6\"\noptions:\n name:\n description:\n - The GitHub user/organization repository to tap.\n required: true\n aliases: ['tap']\n url:\n description:\n - The optional git URL of the repository to tap. The URL is not\n assumed to be on GitHub, and the protocol doesn't have to be HTTP.\n Any location and protocol that git can handle is fine.\n - I(name) option may not be a list of multiple taps (but a single\n tap instead) when this option is provided.\n required: false\n version_added: \"2.2\"\n state:\n description:\n - state of the repository.\n choices: [ 'present', 'absent' ]\n required: false\n default: 'present'\nrequirements: [ homebrew ]\n'''\n\nEXAMPLES = '''\n- homebrew_tap:\n name: homebrew/dupes\n\n- homebrew_tap:\n name: homebrew/dupes\n state: absent\n\n- homebrew_tap:\n name: homebrew/dupes,homebrew/science\n state: present\n\n- homebrew_tap:\n name: telemachus/brew\n url: 'https://bitbucket.org/telemachus/brew'\n'''\n\nimport re\n\nfrom ansible.module_utils.basic import AnsibleModule\n\n\ndef a_valid_tap(tap):\n '''Returns True if the tap is valid.'''\n regex = re.compile(r'^([\\w-]+)/(homebrew-)?([\\w-]+)$')\n return regex.match(tap)\n\n\ndef already_tapped(module, brew_path, tap):\n '''Returns True if already tapped.'''\n\n rc, out, err = module.run_command([\n brew_path,\n 'tap',\n ])\n\n taps = [tap_.strip().lower() for tap_ in out.split('\\n') if tap_]\n tap_name = re.sub('homebrew-', '', tap.lower())\n\n return tap_name in taps\n\n\ndef add_tap(module, brew_path, tap, url=None):\n '''Adds a single tap.'''\n failed, changed, msg = False, False, ''\n\n if not a_valid_tap(tap):\n failed = True\n msg = 'not a valid tap: %s' % tap\n\n elif not already_tapped(module, brew_path, tap):\n if module.check_mode:\n module.exit_json(changed=True)\n\n rc, out, err = module.run_command([\n brew_path,\n 'tap',\n tap,\n url,\n ])\n if rc == 0:\n changed = True\n msg = 'successfully tapped: %s' % tap\n else:\n failed = True\n msg = 'failed to tap: %s' % tap\n\n else:\n msg = 'already tapped: %s' % tap\n\n return (failed, changed, msg)\n\n\ndef add_taps(module, brew_path, taps):\n 
'''Adds one or more taps.'''\n failed, unchanged, added, msg = False, 0, 0, ''\n\n for tap in taps:\n (failed, changed, msg) = add_tap(module, brew_path, tap)\n if failed:\n break\n if changed:\n added += 1\n else:\n unchanged += 1\n\n if failed:\n msg = 'added: %d, unchanged: %d, error: ' + msg\n msg = msg % (added, unchanged)\n elif added:\n changed = True\n msg = 'added: %d, unchanged: %d' % (added, unchanged)\n else:\n msg = 'added: %d, unchanged: %d' % (added, unchanged)\n\n return (failed, changed, msg)\n\n\ndef remove_tap(module, brew_path, tap):\n '''Removes a single tap.'''\n failed, changed, msg = False, False, ''\n\n if not a_valid_tap(tap):\n failed = True\n msg = 'not a valid tap: %s' % tap\n\n elif already_tapped(module, brew_path, tap):\n if module.check_mode:\n module.exit_json(changed=True)\n\n rc, out, err = module.run_command([\n brew_path,\n 'untap',\n tap,\n ])\n if not already_tapped(module, brew_path, tap):\n changed = True\n msg = 'successfully untapped: %s' % tap\n else:\n failed = True\n msg = 'failed to untap: %s' % tap\n\n else:\n msg = 'already untapped: %s' % tap\n\n return (failed, changed, msg)\n\n\ndef remove_taps(module, brew_path, taps):\n '''Removes one or more taps.'''\n failed, unchanged, removed, msg = False, 0, 0, ''\n\n for tap in taps:\n (failed, changed, msg) = remove_tap(module, brew_path, tap)\n if failed:\n break\n if changed:\n removed += 1\n else:\n unchanged += 1\n\n if failed:\n msg = 'removed: %d, unchanged: %d, error: ' + msg\n msg = msg % (removed, unchanged)\n elif removed:\n changed = True\n msg = 'removed: %d, unchanged: %d' % (removed, unchanged)\n else:\n msg = 'removed: %d, unchanged: %d' % (removed, unchanged)\n\n return (failed, changed, msg)\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n name=dict(aliases=['tap'], type='list', required=True),\n url=dict(default=None, required=False),\n state=dict(default='present', choices=['present', 'absent']),\n ),\n supports_check_mode=True,\n )\n\n brew_path = module.get_bin_path(\n 'brew',\n required=True,\n opt_dirs=['/usr/local/bin']\n )\n\n taps = module.params['name']\n url = module.params['url']\n\n if module.params['state'] == 'present':\n if url is None:\n # No tap URL provided explicitly, continue with bulk addition\n # of all the taps.\n failed, changed, msg = add_taps(module, brew_path, taps)\n else:\n # When an tap URL is provided explicitly, we allow adding\n # *single* tap only. Validate and proceed to add single tap.\n if len(taps) > 1:\n msg = \"List of multiple taps may not be provided with 'url' option.\"\n module.fail_json(msg=msg)\n else:\n failed, changed, msg = add_tap(module, brew_path, taps[0], url)\n\n if failed:\n module.fail_json(msg=msg)\n else:\n module.exit_json(changed=changed, msg=msg)\n\n elif module.params['state'] == 'absent':\n failed, changed, msg = remove_taps(module, brew_path, taps)\n\n if failed:\n module.fail_json(msg=msg)\n else:\n module.exit_json(changed=changed, msg=msg)\n\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/packaging/os/homebrew_tap.py"}]}
| 3,421 | 128 |
gh_patches_debug_344
|
rasdani/github-patches
|
git_diff
|
ManimCommunity__manim-3166
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Not all arrow tips are accessible
## Description of bug / unexpected behavior
<!-- Add a clear and concise description of the problem you encountered. -->
The [manim.mobject.geometry.tips](https://docs.manim.community/en/stable/_modules/manim/mobject/geometry/tips.html#ArrowTriangleFilledTip) file has presents of some arrow tips to use. The list `__all__` contains:
```py
__all__ = [
"ArrowTip",
"ArrowCircleFilledTip",
"ArrowCircleTip",
"ArrowSquareTip",
"ArrowSquareFilledTip",
]
```
## Expected behavior
<!-- Add a clear and concise description of what you expected to happen. -->
Instead, it should have:
```py
__all__ = [
"ArrowTip",
"ArrowCircleFilledTip",
"ArrowCircleTip",
"ArrowSquareTip",
"ArrowSquareFilledTip"
"ArrowTriangleTip", # added
"ArrowTriangleFilledTip", # added
]
```
## How to reproduce the issue
<!-- Provide a piece of code illustrating the undesired behavior. -->
<details><summary>Code for reproducing the problem</summary>
```py
class Test(Scene):
def construct(self):
my_line = Line()
my_line.add_tip(ArrowTriangleFilledTip(fill_color=WHITE))
self.add(my_line)
```
</details>
## Additional media files
<!-- Paste in the files manim produced on rendering the code above. -->
None
<!-- Insert screenshots here (only when absolutely necessary, we prefer copy/pasted output!) -->
</details>
## System specifications
<details><summary>System Details</summary>
- OS: macOS 13.0.1 (Ventura)
- RAM: 8GB
- Python version: Python 3.10.9
- Installed modules: manim 0.17.2
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `manim/mobject/geometry/tips.py`
Content:
```
1 r"""A collection of tip mobjects for use with :class:`~.TipableVMobject`."""
2
3 from __future__ import annotations
4
5 __all__ = [
6 "ArrowTip",
7 "ArrowCircleFilledTip",
8 "ArrowCircleTip",
9 "ArrowSquareTip",
10 "ArrowSquareFilledTip",
11 ]
12
13 import numpy as np
14
15 from manim.constants import *
16 from manim.mobject.geometry.arc import Circle
17 from manim.mobject.geometry.polygram import Square, Triangle
18 from manim.mobject.opengl.opengl_compatibility import ConvertToOpenGL
19 from manim.mobject.types.vectorized_mobject import VMobject
20 from manim.utils.space_ops import angle_of_vector
21
22
23 class ArrowTip(VMobject, metaclass=ConvertToOpenGL):
24 r"""Base class for arrow tips.
25
26 .. seealso::
27 :class:`ArrowTriangleTip`
28 :class:`ArrowTriangleFilledTip`
29 :class:`ArrowCircleTip`
30 :class:`ArrowCircleFilledTip`
31 :class:`ArrowSquareTip`
32 :class:`ArrowSquareFilledTip`
33
34 Examples
35 --------
36 Cannot be used directly, only intended for inheritance::
37
38 >>> tip = ArrowTip()
39 Traceback (most recent call last):
40 ...
41 NotImplementedError: Has to be implemented in inheriting subclasses.
42
43 Instead, use one of the pre-defined ones, or make
44 a custom one like this:
45
46 .. manim:: CustomTipExample
47
48 >>> from manim import RegularPolygon, Arrow
49 >>> class MyCustomArrowTip(ArrowTip, RegularPolygon):
50 ... def __init__(self, length=0.35, **kwargs):
51 ... RegularPolygon.__init__(self, n=5, **kwargs)
52 ... self.width = length
53 ... self.stretch_to_fit_height(length)
54 >>> arr = Arrow(np.array([-2, -2, 0]), np.array([2, 2, 0]),
55 ... tip_shape=MyCustomArrowTip)
56 >>> isinstance(arr.tip, RegularPolygon)
57 True
58 >>> from manim import Scene, Create
59 >>> class CustomTipExample(Scene):
60 ... def construct(self):
61 ... self.play(Create(arr))
62
63 Using a class inherited from :class:`ArrowTip` to get a non-filled
64 tip is a shorthand to manually specifying the arrow tip style as follows::
65
66 >>> arrow = Arrow(np.array([0, 0, 0]), np.array([1, 1, 0]),
67 ... tip_style={'fill_opacity': 0, 'stroke_width': 3})
68
69 The following example illustrates the usage of all of the predefined
70 arrow tips.
71
72 .. manim:: ArrowTipsShowcase
73 :save_last_frame:
74
75 from manim.mobject.geometry.tips import ArrowTriangleTip,\
76 ArrowSquareTip, ArrowSquareFilledTip,\
77 ArrowCircleTip, ArrowCircleFilledTip
78 class ArrowTipsShowcase(Scene):
79 def construct(self):
80 a00 = Arrow(start=[-2, 3, 0], end=[2, 3, 0], color=YELLOW)
81 a11 = Arrow(start=[-2, 2, 0], end=[2, 2, 0], tip_shape=ArrowTriangleTip)
82 a12 = Arrow(start=[-2, 1, 0], end=[2, 1, 0])
83 a21 = Arrow(start=[-2, 0, 0], end=[2, 0, 0], tip_shape=ArrowSquareTip)
84 a22 = Arrow([-2, -1, 0], [2, -1, 0], tip_shape=ArrowSquareFilledTip)
85 a31 = Arrow([-2, -2, 0], [2, -2, 0], tip_shape=ArrowCircleTip)
86 a32 = Arrow([-2, -3, 0], [2, -3, 0], tip_shape=ArrowCircleFilledTip)
87 b11 = a11.copy().scale(0.5, scale_tips=True).next_to(a11, RIGHT)
88 b12 = a12.copy().scale(0.5, scale_tips=True).next_to(a12, RIGHT)
89 b21 = a21.copy().scale(0.5, scale_tips=True).next_to(a21, RIGHT)
90 self.add(a00, a11, a12, a21, a22, a31, a32, b11, b12, b21)
91
92 """
93
94 def __init__(self, *args, **kwargs):
95 raise NotImplementedError("Has to be implemented in inheriting subclasses.")
96
97 @property
98 def base(self):
99 r"""The base point of the arrow tip.
100
101 This is the point connecting to the arrow line.
102
103 Examples
104 --------
105 ::
106
107 >>> from manim import Arrow
108 >>> arrow = Arrow(np.array([0, 0, 0]), np.array([2, 0, 0]), buff=0)
109 >>> arrow.tip.base.round(2) + 0. # add 0. to avoid negative 0 in output
110 array([1.65, 0. , 0. ])
111
112 """
113 return self.point_from_proportion(0.5)
114
115 @property
116 def tip_point(self):
117 r"""The tip point of the arrow tip.
118
119 Examples
120 --------
121 ::
122
123 >>> from manim import Arrow
124 >>> arrow = Arrow(np.array([0, 0, 0]), np.array([2, 0, 0]), buff=0)
125 >>> arrow.tip.tip_point.round(2) + 0.
126 array([2., 0., 0.])
127
128 """
129 return self.points[0]
130
131 @property
132 def vector(self):
133 r"""The vector pointing from the base point to the tip point.
134
135 Examples
136 --------
137 ::
138
139 >>> from manim import Arrow
140 >>> arrow = Arrow(np.array([0, 0, 0]), np.array([2, 2, 0]), buff=0)
141 >>> arrow.tip.vector.round(2) + 0.
142 array([0.25, 0.25, 0. ])
143
144 """
145 return self.tip_point - self.base
146
147 @property
148 def tip_angle(self):
149 r"""The angle of the arrow tip.
150
151 Examples
152 --------
153 ::
154
155 >>> from manim import Arrow
156 >>> arrow = Arrow(np.array([0, 0, 0]), np.array([1, 1, 0]), buff=0)
157 >>> round(arrow.tip.tip_angle, 5) == round(PI/4, 5)
158 True
159
160 """
161 return angle_of_vector(self.vector)
162
163 @property
164 def length(self):
165 r"""The length of the arrow tip.
166
167 Examples
168 --------
169 ::
170
171 >>> from manim import Arrow
172 >>> arrow = Arrow(np.array([0, 0, 0]), np.array([1, 2, 0]))
173 >>> round(arrow.tip.length, 3)
174 0.35
175
176 """
177 return np.linalg.norm(self.vector)
178
179
180 class ArrowTriangleTip(ArrowTip, Triangle):
181 r"""Triangular arrow tip."""
182
183 def __init__(
184 self,
185 fill_opacity=0,
186 stroke_width=3,
187 length=DEFAULT_ARROW_TIP_LENGTH,
188 width=DEFAULT_ARROW_TIP_LENGTH,
189 start_angle=PI,
190 **kwargs,
191 ):
192 Triangle.__init__(
193 self,
194 fill_opacity=fill_opacity,
195 stroke_width=stroke_width,
196 start_angle=start_angle,
197 **kwargs,
198 )
199 self.width = width
200
201 self.stretch_to_fit_width(length)
202 self.stretch_to_fit_height(width)
203
204
205 class ArrowTriangleFilledTip(ArrowTriangleTip):
206 r"""Triangular arrow tip with filled tip.
207
208 This is the default arrow tip shape.
209 """
210
211 def __init__(self, fill_opacity=1, stroke_width=0, **kwargs):
212 super().__init__(fill_opacity=fill_opacity, stroke_width=stroke_width, **kwargs)
213
214
215 class ArrowCircleTip(ArrowTip, Circle):
216 r"""Circular arrow tip."""
217
218 def __init__(
219 self,
220 fill_opacity=0,
221 stroke_width=3,
222 length=DEFAULT_ARROW_TIP_LENGTH,
223 start_angle=PI,
224 **kwargs,
225 ):
226 self.start_angle = start_angle
227 Circle.__init__(
228 self, fill_opacity=fill_opacity, stroke_width=stroke_width, **kwargs
229 )
230 self.width = length
231 self.stretch_to_fit_height(length)
232
233
234 class ArrowCircleFilledTip(ArrowCircleTip):
235 r"""Circular arrow tip with filled tip."""
236
237 def __init__(self, fill_opacity=1, stroke_width=0, **kwargs):
238 super().__init__(fill_opacity=fill_opacity, stroke_width=stroke_width, **kwargs)
239
240
241 class ArrowSquareTip(ArrowTip, Square):
242 r"""Square arrow tip."""
243
244 def __init__(
245 self,
246 fill_opacity=0,
247 stroke_width=3,
248 length=DEFAULT_ARROW_TIP_LENGTH,
249 start_angle=PI,
250 **kwargs,
251 ):
252 self.start_angle = start_angle
253 Square.__init__(
254 self,
255 fill_opacity=fill_opacity,
256 stroke_width=stroke_width,
257 side_length=length,
258 **kwargs,
259 )
260 self.width = length
261 self.stretch_to_fit_height(length)
262
263
264 class ArrowSquareFilledTip(ArrowSquareTip):
265 r"""Square arrow tip with filled tip."""
266
267 def __init__(self, fill_opacity=1, stroke_width=0, **kwargs):
268 super().__init__(fill_opacity=fill_opacity, stroke_width=stroke_width, **kwargs)
269
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/manim/mobject/geometry/tips.py b/manim/mobject/geometry/tips.py
--- a/manim/mobject/geometry/tips.py
+++ b/manim/mobject/geometry/tips.py
@@ -8,6 +8,8 @@
"ArrowCircleTip",
"ArrowSquareTip",
"ArrowSquareFilledTip",
+ "ArrowTriangleTip",
+ "ArrowTriangleFilledTip",
]
import numpy as np
|
{"golden_diff": "diff --git a/manim/mobject/geometry/tips.py b/manim/mobject/geometry/tips.py\n--- a/manim/mobject/geometry/tips.py\n+++ b/manim/mobject/geometry/tips.py\n@@ -8,6 +8,8 @@\n \"ArrowCircleTip\",\n \"ArrowSquareTip\",\n \"ArrowSquareFilledTip\",\n+ \"ArrowTriangleTip\",\n+ \"ArrowTriangleFilledTip\",\n ]\n \n import numpy as np\n", "issue": "Not all arrow tips are accessible\n## Description of bug / unexpected behavior\r\n<!-- Add a clear and concise description of the problem you encountered. -->\r\nThe [manim.mobject.geometry.tips](https://docs.manim.community/en/stable/_modules/manim/mobject/geometry/tips.html#ArrowTriangleFilledTip) file has presents of some arrow tips to use. The list `__all__` contains:\r\n```py\r\n__all__ = [\r\n \"ArrowTip\",\r\n \"ArrowCircleFilledTip\",\r\n \"ArrowCircleTip\",\r\n \"ArrowSquareTip\",\r\n \"ArrowSquareFilledTip\",\r\n]\r\n```\r\n\r\n## Expected behavior\r\n<!-- Add a clear and concise description of what you expected to happen. -->\r\nInstead, it should have:\r\n\r\n```py\r\n__all__ = [\r\n \"ArrowTip\",\r\n \"ArrowCircleFilledTip\",\r\n \"ArrowCircleTip\",\r\n \"ArrowSquareTip\",\r\n \"ArrowSquareFilledTip\"\r\n \"ArrowTriangleTip\", # added\r\n \"ArrowTriangleFilledTip\", # added\r\n]\r\n```\r\n\r\n## How to reproduce the issue\r\n<!-- Provide a piece of code illustrating the undesired behavior. -->\r\n\r\n<details><summary>Code for reproducing the problem</summary>\r\n\r\n```py\r\nclass Test(Scene):\r\n def construct(self):\r\n my_line = Line()\r\n my_line.add_tip(ArrowTriangleFilledTip(fill_color=WHITE))\r\n self.add(my_line)\r\n```\r\n\r\n</details>\r\n\r\n\r\n## Additional media files\r\n<!-- Paste in the files manim produced on rendering the code above. -->\r\nNone\r\n\r\n\r\n\r\n<!-- Insert screenshots here (only when absolutely necessary, we prefer copy/pasted output!) -->\r\n\r\n</details>\r\n\r\n\r\n## System specifications\r\n\r\n<details><summary>System Details</summary>\r\n\r\n- OS: macOS 13.0.1 (Ventura)\r\n- RAM: 8GB\r\n- Python version: Python 3.10.9\r\n- Installed modules: manim 0.17.2\r\n\r\n\n", "before_files": [{"content": "r\"\"\"A collection of tip mobjects for use with :class:`~.TipableVMobject`.\"\"\"\n\nfrom __future__ import annotations\n\n__all__ = [\n \"ArrowTip\",\n \"ArrowCircleFilledTip\",\n \"ArrowCircleTip\",\n \"ArrowSquareTip\",\n \"ArrowSquareFilledTip\",\n]\n\nimport numpy as np\n\nfrom manim.constants import *\nfrom manim.mobject.geometry.arc import Circle\nfrom manim.mobject.geometry.polygram import Square, Triangle\nfrom manim.mobject.opengl.opengl_compatibility import ConvertToOpenGL\nfrom manim.mobject.types.vectorized_mobject import VMobject\nfrom manim.utils.space_ops import angle_of_vector\n\n\nclass ArrowTip(VMobject, metaclass=ConvertToOpenGL):\n r\"\"\"Base class for arrow tips.\n\n .. seealso::\n :class:`ArrowTriangleTip`\n :class:`ArrowTriangleFilledTip`\n :class:`ArrowCircleTip`\n :class:`ArrowCircleFilledTip`\n :class:`ArrowSquareTip`\n :class:`ArrowSquareFilledTip`\n\n Examples\n --------\n Cannot be used directly, only intended for inheritance::\n\n >>> tip = ArrowTip()\n Traceback (most recent call last):\n ...\n NotImplementedError: Has to be implemented in inheriting subclasses.\n\n Instead, use one of the pre-defined ones, or make\n a custom one like this:\n\n .. manim:: CustomTipExample\n\n >>> from manim import RegularPolygon, Arrow\n >>> class MyCustomArrowTip(ArrowTip, RegularPolygon):\n ... def __init__(self, length=0.35, **kwargs):\n ... 
RegularPolygon.__init__(self, n=5, **kwargs)\n ... self.width = length\n ... self.stretch_to_fit_height(length)\n >>> arr = Arrow(np.array([-2, -2, 0]), np.array([2, 2, 0]),\n ... tip_shape=MyCustomArrowTip)\n >>> isinstance(arr.tip, RegularPolygon)\n True\n >>> from manim import Scene, Create\n >>> class CustomTipExample(Scene):\n ... def construct(self):\n ... self.play(Create(arr))\n\n Using a class inherited from :class:`ArrowTip` to get a non-filled\n tip is a shorthand to manually specifying the arrow tip style as follows::\n\n >>> arrow = Arrow(np.array([0, 0, 0]), np.array([1, 1, 0]),\n ... tip_style={'fill_opacity': 0, 'stroke_width': 3})\n\n The following example illustrates the usage of all of the predefined\n arrow tips.\n\n .. manim:: ArrowTipsShowcase\n :save_last_frame:\n\n from manim.mobject.geometry.tips import ArrowTriangleTip,\\\n ArrowSquareTip, ArrowSquareFilledTip,\\\n ArrowCircleTip, ArrowCircleFilledTip\n class ArrowTipsShowcase(Scene):\n def construct(self):\n a00 = Arrow(start=[-2, 3, 0], end=[2, 3, 0], color=YELLOW)\n a11 = Arrow(start=[-2, 2, 0], end=[2, 2, 0], tip_shape=ArrowTriangleTip)\n a12 = Arrow(start=[-2, 1, 0], end=[2, 1, 0])\n a21 = Arrow(start=[-2, 0, 0], end=[2, 0, 0], tip_shape=ArrowSquareTip)\n a22 = Arrow([-2, -1, 0], [2, -1, 0], tip_shape=ArrowSquareFilledTip)\n a31 = Arrow([-2, -2, 0], [2, -2, 0], tip_shape=ArrowCircleTip)\n a32 = Arrow([-2, -3, 0], [2, -3, 0], tip_shape=ArrowCircleFilledTip)\n b11 = a11.copy().scale(0.5, scale_tips=True).next_to(a11, RIGHT)\n b12 = a12.copy().scale(0.5, scale_tips=True).next_to(a12, RIGHT)\n b21 = a21.copy().scale(0.5, scale_tips=True).next_to(a21, RIGHT)\n self.add(a00, a11, a12, a21, a22, a31, a32, b11, b12, b21)\n\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n raise NotImplementedError(\"Has to be implemented in inheriting subclasses.\")\n\n @property\n def base(self):\n r\"\"\"The base point of the arrow tip.\n\n This is the point connecting to the arrow line.\n\n Examples\n --------\n ::\n\n >>> from manim import Arrow\n >>> arrow = Arrow(np.array([0, 0, 0]), np.array([2, 0, 0]), buff=0)\n >>> arrow.tip.base.round(2) + 0. # add 0. to avoid negative 0 in output\n array([1.65, 0. , 0. ])\n\n \"\"\"\n return self.point_from_proportion(0.5)\n\n @property\n def tip_point(self):\n r\"\"\"The tip point of the arrow tip.\n\n Examples\n --------\n ::\n\n >>> from manim import Arrow\n >>> arrow = Arrow(np.array([0, 0, 0]), np.array([2, 0, 0]), buff=0)\n >>> arrow.tip.tip_point.round(2) + 0.\n array([2., 0., 0.])\n\n \"\"\"\n return self.points[0]\n\n @property\n def vector(self):\n r\"\"\"The vector pointing from the base point to the tip point.\n\n Examples\n --------\n ::\n\n >>> from manim import Arrow\n >>> arrow = Arrow(np.array([0, 0, 0]), np.array([2, 2, 0]), buff=0)\n >>> arrow.tip.vector.round(2) + 0.\n array([0.25, 0.25, 0. 
])\n\n \"\"\"\n return self.tip_point - self.base\n\n @property\n def tip_angle(self):\n r\"\"\"The angle of the arrow tip.\n\n Examples\n --------\n ::\n\n >>> from manim import Arrow\n >>> arrow = Arrow(np.array([0, 0, 0]), np.array([1, 1, 0]), buff=0)\n >>> round(arrow.tip.tip_angle, 5) == round(PI/4, 5)\n True\n\n \"\"\"\n return angle_of_vector(self.vector)\n\n @property\n def length(self):\n r\"\"\"The length of the arrow tip.\n\n Examples\n --------\n ::\n\n >>> from manim import Arrow\n >>> arrow = Arrow(np.array([0, 0, 0]), np.array([1, 2, 0]))\n >>> round(arrow.tip.length, 3)\n 0.35\n\n \"\"\"\n return np.linalg.norm(self.vector)\n\n\nclass ArrowTriangleTip(ArrowTip, Triangle):\n r\"\"\"Triangular arrow tip.\"\"\"\n\n def __init__(\n self,\n fill_opacity=0,\n stroke_width=3,\n length=DEFAULT_ARROW_TIP_LENGTH,\n width=DEFAULT_ARROW_TIP_LENGTH,\n start_angle=PI,\n **kwargs,\n ):\n Triangle.__init__(\n self,\n fill_opacity=fill_opacity,\n stroke_width=stroke_width,\n start_angle=start_angle,\n **kwargs,\n )\n self.width = width\n\n self.stretch_to_fit_width(length)\n self.stretch_to_fit_height(width)\n\n\nclass ArrowTriangleFilledTip(ArrowTriangleTip):\n r\"\"\"Triangular arrow tip with filled tip.\n\n This is the default arrow tip shape.\n \"\"\"\n\n def __init__(self, fill_opacity=1, stroke_width=0, **kwargs):\n super().__init__(fill_opacity=fill_opacity, stroke_width=stroke_width, **kwargs)\n\n\nclass ArrowCircleTip(ArrowTip, Circle):\n r\"\"\"Circular arrow tip.\"\"\"\n\n def __init__(\n self,\n fill_opacity=0,\n stroke_width=3,\n length=DEFAULT_ARROW_TIP_LENGTH,\n start_angle=PI,\n **kwargs,\n ):\n self.start_angle = start_angle\n Circle.__init__(\n self, fill_opacity=fill_opacity, stroke_width=stroke_width, **kwargs\n )\n self.width = length\n self.stretch_to_fit_height(length)\n\n\nclass ArrowCircleFilledTip(ArrowCircleTip):\n r\"\"\"Circular arrow tip with filled tip.\"\"\"\n\n def __init__(self, fill_opacity=1, stroke_width=0, **kwargs):\n super().__init__(fill_opacity=fill_opacity, stroke_width=stroke_width, **kwargs)\n\n\nclass ArrowSquareTip(ArrowTip, Square):\n r\"\"\"Square arrow tip.\"\"\"\n\n def __init__(\n self,\n fill_opacity=0,\n stroke_width=3,\n length=DEFAULT_ARROW_TIP_LENGTH,\n start_angle=PI,\n **kwargs,\n ):\n self.start_angle = start_angle\n Square.__init__(\n self,\n fill_opacity=fill_opacity,\n stroke_width=stroke_width,\n side_length=length,\n **kwargs,\n )\n self.width = length\n self.stretch_to_fit_height(length)\n\n\nclass ArrowSquareFilledTip(ArrowSquareTip):\n r\"\"\"Square arrow tip with filled tip.\"\"\"\n\n def __init__(self, fill_opacity=1, stroke_width=0, **kwargs):\n super().__init__(fill_opacity=fill_opacity, stroke_width=stroke_width, **kwargs)\n", "path": "manim/mobject/geometry/tips.py"}], "after_files": [{"content": "r\"\"\"A collection of tip mobjects for use with :class:`~.TipableVMobject`.\"\"\"\n\nfrom __future__ import annotations\n\n__all__ = [\n \"ArrowTip\",\n \"ArrowCircleFilledTip\",\n \"ArrowCircleTip\",\n \"ArrowSquareTip\",\n \"ArrowSquareFilledTip\",\n \"ArrowTriangleTip\",\n \"ArrowTriangleFilledTip\",\n]\n\nimport numpy as np\n\nfrom manim.constants import *\nfrom manim.mobject.geometry.arc import Circle\nfrom manim.mobject.geometry.polygram import Square, Triangle\nfrom manim.mobject.opengl.opengl_compatibility import ConvertToOpenGL\nfrom manim.mobject.types.vectorized_mobject import VMobject\nfrom manim.utils.space_ops import angle_of_vector\n\n\nclass ArrowTip(VMobject, metaclass=ConvertToOpenGL):\n r\"\"\"Base class 
for arrow tips.\n\n .. seealso::\n :class:`ArrowTriangleTip`\n :class:`ArrowTriangleFilledTip`\n :class:`ArrowCircleTip`\n :class:`ArrowCircleFilledTip`\n :class:`ArrowSquareTip`\n :class:`ArrowSquareFilledTip`\n\n Examples\n --------\n Cannot be used directly, only intended for inheritance::\n\n >>> tip = ArrowTip()\n Traceback (most recent call last):\n ...\n NotImplementedError: Has to be implemented in inheriting subclasses.\n\n Instead, use one of the pre-defined ones, or make\n a custom one like this:\n\n .. manim:: CustomTipExample\n\n >>> from manim import RegularPolygon, Arrow\n >>> class MyCustomArrowTip(ArrowTip, RegularPolygon):\n ... def __init__(self, length=0.35, **kwargs):\n ... RegularPolygon.__init__(self, n=5, **kwargs)\n ... self.width = length\n ... self.stretch_to_fit_height(length)\n >>> arr = Arrow(np.array([-2, -2, 0]), np.array([2, 2, 0]),\n ... tip_shape=MyCustomArrowTip)\n >>> isinstance(arr.tip, RegularPolygon)\n True\n >>> from manim import Scene, Create\n >>> class CustomTipExample(Scene):\n ... def construct(self):\n ... self.play(Create(arr))\n\n Using a class inherited from :class:`ArrowTip` to get a non-filled\n tip is a shorthand to manually specifying the arrow tip style as follows::\n\n >>> arrow = Arrow(np.array([0, 0, 0]), np.array([1, 1, 0]),\n ... tip_style={'fill_opacity': 0, 'stroke_width': 3})\n\n The following example illustrates the usage of all of the predefined\n arrow tips.\n\n .. manim:: ArrowTipsShowcase\n :save_last_frame:\n\n from manim.mobject.geometry.tips import ArrowTriangleTip,\\\n ArrowSquareTip, ArrowSquareFilledTip,\\\n ArrowCircleTip, ArrowCircleFilledTip\n class ArrowTipsShowcase(Scene):\n def construct(self):\n a00 = Arrow(start=[-2, 3, 0], end=[2, 3, 0], color=YELLOW)\n a11 = Arrow(start=[-2, 2, 0], end=[2, 2, 0], tip_shape=ArrowTriangleTip)\n a12 = Arrow(start=[-2, 1, 0], end=[2, 1, 0])\n a21 = Arrow(start=[-2, 0, 0], end=[2, 0, 0], tip_shape=ArrowSquareTip)\n a22 = Arrow([-2, -1, 0], [2, -1, 0], tip_shape=ArrowSquareFilledTip)\n a31 = Arrow([-2, -2, 0], [2, -2, 0], tip_shape=ArrowCircleTip)\n a32 = Arrow([-2, -3, 0], [2, -3, 0], tip_shape=ArrowCircleFilledTip)\n b11 = a11.copy().scale(0.5, scale_tips=True).next_to(a11, RIGHT)\n b12 = a12.copy().scale(0.5, scale_tips=True).next_to(a12, RIGHT)\n b21 = a21.copy().scale(0.5, scale_tips=True).next_to(a21, RIGHT)\n self.add(a00, a11, a12, a21, a22, a31, a32, b11, b12, b21)\n\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n raise NotImplementedError(\"Has to be implemented in inheriting subclasses.\")\n\n @property\n def base(self):\n r\"\"\"The base point of the arrow tip.\n\n This is the point connecting to the arrow line.\n\n Examples\n --------\n ::\n\n >>> from manim import Arrow\n >>> arrow = Arrow(np.array([0, 0, 0]), np.array([2, 0, 0]), buff=0)\n >>> arrow.tip.base.round(2) + 0. # add 0. to avoid negative 0 in output\n array([1.65, 0. , 0. 
])\n\n \"\"\"\n return self.point_from_proportion(0.5)\n\n @property\n def tip_point(self):\n r\"\"\"The tip point of the arrow tip.\n\n Examples\n --------\n ::\n\n >>> from manim import Arrow\n >>> arrow = Arrow(np.array([0, 0, 0]), np.array([2, 0, 0]), buff=0)\n >>> arrow.tip.tip_point.round(2) + 0.\n array([2., 0., 0.])\n\n \"\"\"\n return self.points[0]\n\n @property\n def vector(self):\n r\"\"\"The vector pointing from the base point to the tip point.\n\n Examples\n --------\n ::\n\n >>> from manim import Arrow\n >>> arrow = Arrow(np.array([0, 0, 0]), np.array([2, 2, 0]), buff=0)\n >>> arrow.tip.vector.round(2) + 0.\n array([0.25, 0.25, 0. ])\n\n \"\"\"\n return self.tip_point - self.base\n\n @property\n def tip_angle(self):\n r\"\"\"The angle of the arrow tip.\n\n Examples\n --------\n ::\n\n >>> from manim import Arrow\n >>> arrow = Arrow(np.array([0, 0, 0]), np.array([1, 1, 0]), buff=0)\n >>> round(arrow.tip.tip_angle, 5) == round(PI/4, 5)\n True\n\n \"\"\"\n return angle_of_vector(self.vector)\n\n @property\n def length(self):\n r\"\"\"The length of the arrow tip.\n\n Examples\n --------\n ::\n\n >>> from manim import Arrow\n >>> arrow = Arrow(np.array([0, 0, 0]), np.array([1, 2, 0]))\n >>> round(arrow.tip.length, 3)\n 0.35\n\n \"\"\"\n return np.linalg.norm(self.vector)\n\n\nclass ArrowTriangleTip(ArrowTip, Triangle):\n r\"\"\"Triangular arrow tip.\"\"\"\n\n def __init__(\n self,\n fill_opacity=0,\n stroke_width=3,\n length=DEFAULT_ARROW_TIP_LENGTH,\n width=DEFAULT_ARROW_TIP_LENGTH,\n start_angle=PI,\n **kwargs,\n ):\n Triangle.__init__(\n self,\n fill_opacity=fill_opacity,\n stroke_width=stroke_width,\n start_angle=start_angle,\n **kwargs,\n )\n self.width = width\n\n self.stretch_to_fit_width(length)\n self.stretch_to_fit_height(width)\n\n\nclass ArrowTriangleFilledTip(ArrowTriangleTip):\n r\"\"\"Triangular arrow tip with filled tip.\n\n This is the default arrow tip shape.\n \"\"\"\n\n def __init__(self, fill_opacity=1, stroke_width=0, **kwargs):\n super().__init__(fill_opacity=fill_opacity, stroke_width=stroke_width, **kwargs)\n\n\nclass ArrowCircleTip(ArrowTip, Circle):\n r\"\"\"Circular arrow tip.\"\"\"\n\n def __init__(\n self,\n fill_opacity=0,\n stroke_width=3,\n length=DEFAULT_ARROW_TIP_LENGTH,\n start_angle=PI,\n **kwargs,\n ):\n self.start_angle = start_angle\n Circle.__init__(\n self, fill_opacity=fill_opacity, stroke_width=stroke_width, **kwargs\n )\n self.width = length\n self.stretch_to_fit_height(length)\n\n\nclass ArrowCircleFilledTip(ArrowCircleTip):\n r\"\"\"Circular arrow tip with filled tip.\"\"\"\n\n def __init__(self, fill_opacity=1, stroke_width=0, **kwargs):\n super().__init__(fill_opacity=fill_opacity, stroke_width=stroke_width, **kwargs)\n\n\nclass ArrowSquareTip(ArrowTip, Square):\n r\"\"\"Square arrow tip.\"\"\"\n\n def __init__(\n self,\n fill_opacity=0,\n stroke_width=3,\n length=DEFAULT_ARROW_TIP_LENGTH,\n start_angle=PI,\n **kwargs,\n ):\n self.start_angle = start_angle\n Square.__init__(\n self,\n fill_opacity=fill_opacity,\n stroke_width=stroke_width,\n side_length=length,\n **kwargs,\n )\n self.width = length\n self.stretch_to_fit_height(length)\n\n\nclass ArrowSquareFilledTip(ArrowSquareTip):\n r\"\"\"Square arrow tip with filled tip.\"\"\"\n\n def __init__(self, fill_opacity=1, stroke_width=0, **kwargs):\n super().__init__(fill_opacity=fill_opacity, stroke_width=stroke_width, **kwargs)\n", "path": "manim/mobject/geometry/tips.py"}]}
| 3,502 | 99 |
gh_patches_debug_33607
|
rasdani/github-patches
|
git_diff
|
vas3k__vas3k.club-405
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
robots.txt is missing
I'm sure nobody in the club gives a damn, but it pains me every time I see that there is no robots.txt.
We could at least add a standard one, something like:
User-agent: *
Sitemap: https://vas3k.club/sitemap.xml
Host: https://vas3k.club
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `misc/views.py`
Content:
```
1 from django.shortcuts import render
2
3 from auth.helpers import auth_required
4 from landing.models import GodSettings
5 from users.models.achievements import Achievement
6
7
8 @auth_required
9 def achievements(request):
10 achievements = Achievement.objects.filter(is_visible=True)
11 return render(request, "pages/achievements.html", {
12 "achievements": achievements
13 })
14
15
16 @auth_required
17 def network(request):
18 secret_page_html = GodSettings.objects.first().network_page
19 return render(request, "pages/network.html", {
20 "page_html": secret_page_html,
21 })
22
```
Path: `club/urls.py`
Content:
```
1 from django.conf import settings
2 from django.contrib.sitemaps.views import sitemap
3 from django.urls import path, include, re_path
4
5 from auth.helpers import auth_switch
6 from auth.views.auth import login, logout, debug_dev_login, debug_random_login, join
7 from auth.views.email import email_login, email_login_code
8 from auth.views.external import external_login
9 from auth.views.patreon import patreon_login, patreon_oauth_callback
10 from bot.views import webhook_telegram, link_telegram
11 from comments.views import create_comment, edit_comment, delete_comment, show_comment, upvote_comment, \
12 retract_comment_vote, pin_comment
13 from landing.views import landing, docs, god_settings
14 from misc.views import achievements, network
15 from notifications.views import weekly_digest, email_unsubscribe, email_confirm, daily_digest, email_digest_switch
16 from payments.views import membership_expired, pay, done, stripe_webhook, stop_subscription
17 from posts.api import md_show_post, api_show_post
18 from posts.models.post import Post
19 from posts.rss import NewPostsRss
20 from posts.sitemaps import sitemaps
21 from posts.views.admin import admin_post, announce_post
22 from posts.views.api import toggle_post_bookmark
23 from posts.views.feed import feed
24 from posts.views.posts import show_post, edit_post, upvote_post, retract_post_vote, compose, compose_type, \
25 toggle_post_subscription
26 from bookmarks.views import bookmarks
27 from search.views import search
28 from users.api import api_profile
29 from users.views.delete_account import request_delete_account, confirm_delete_account
30 from users.views.messages import on_review, rejected, banned
31 from users.views.profile import profile, toggle_tag, add_expertise, delete_expertise
32 from users.views.settings import profile_settings, edit_profile, edit_account, edit_notifications, edit_payments, \
33 edit_bot, edit_data, request_data
34 from users.views.intro import intro
35 from users.views.admin import admin_profile
36 from users.views.people import people
37
38 POST_TYPE_RE = r"(?P<post_type>(all|{}))".format("|".join(dict(Post.TYPES).keys()))
39 ORDERING_RE = r"(?P<ordering>(activity|new|top|top_week|top_month))"
40
41 urlpatterns = [
42 path("", auth_switch(landing, feed), name="index"),
43
44 path("join/", join, name="join"),
45 path("auth/login/", login, name="login"),
46 path("auth/logout/", logout, name="logout"),
47 path("auth/patreon/", patreon_login, name="patreon_login"),
48 path("auth/patreon_callback/", patreon_oauth_callback, name="patreon_oauth_callback"),
49 path("auth/email/", email_login, name="email_login"),
50 path("auth/email/code/", email_login_code, name="email_login_code"),
51 path("auth/external/", external_login, name="external_login"),
52
53 path("monies/", pay, name="pay"),
54 path("monies/done/", done, name="done"),
55 path("monies/membership_expired/", membership_expired, name="membership_expired"),
56 path("monies/stripe/webhook/", stripe_webhook, name="stripe_webhook"),
57 path("monies/subscription/<str:subscription_id>/stop/", stop_subscription, name="stop_subscription"),
58
59 path("user/<slug:user_slug>/", profile, name="profile"),
60 path("user/<slug:user_slug>.json", api_profile, name="api_profile"),
61 path("user/<slug:user_slug>/edit/", profile_settings, name="profile_settings"),
62 path("user/<slug:user_slug>/edit/profile/", edit_profile, name="edit_profile"),
63 path("user/<slug:user_slug>/edit/account/", edit_account, name="edit_account"),
64 path("user/<slug:user_slug>/edit/bot/", edit_bot, name="edit_bot"),
65 path("user/<slug:user_slug>/edit/notifications/", edit_notifications, name="edit_notifications"),
66 path("user/<slug:user_slug>/edit/monies/", edit_payments, name="edit_payments"),
67 path("user/<slug:user_slug>/edit/data/", edit_data, name="edit_data"),
68 path("user/<slug:user_slug>/edit/data/request/", request_data, name="request_user_data"),
69 path("user/<slug:user_slug>/admin/", admin_profile, name="admin_profile"),
70 path("user/<slug:user_slug>/delete/", request_delete_account, name="request_delete_account"),
71 path("user/<slug:user_slug>/delete/confirm/", confirm_delete_account, name="confirm_delete_account"),
72
73 path("intro/", intro, name="intro"),
74 path("people/", people, name="people"),
75 path("achievements/", achievements, name="achievements"),
76 path("profile/tag/<slug:tag_code>/toggle/", toggle_tag, name="toggle_tag"),
77 path("profile/expertise/add/", add_expertise, name="add_expertise"),
78 path("profile/expertise/<slug:expertise>/delete/", delete_expertise, name="delete_expertise"),
79 path("profile/on_review/", on_review, name="on_review"),
80 path("profile/rejected/", rejected, name="rejected"),
81 path("profile/banned/", banned, name="banned"),
82
83 path("create/", compose, name="compose"),
84 path("create/<slug:post_type>/", compose_type, name="compose_type"),
85 path("post/<slug:post_slug>/edit/", edit_post, name="edit_post"),
86 path("post/<slug:post_slug>/bookmark/", toggle_post_bookmark, name="toggle_post_bookmark"),
87 path("post/<slug:post_slug>/upvote/", upvote_post, name="upvote_post"),
88 path("post/<slug:post_slug>/retract_vote/", retract_post_vote, name="retract_post_vote"),
89 path("post/<slug:post_slug>/subscription/", toggle_post_subscription, name="toggle_post_subscription"),
90 path("post/<slug:post_slug>/admin/", admin_post, name="admin_post"),
91 path("post/<slug:post_slug>/announce/", announce_post, name="announce_post"),
92 path("post/<slug:post_slug>/comment/create/", create_comment, name="create_comment"),
93 path("post/<slug:post_slug>/comment/<uuid:comment_id>/", show_comment, name="show_comment", ),
94
95 path("bookmarks/", bookmarks, name="bookmarks"),
96
97 path("search/", search, name="search"),
98 path("room/<slug:topic_slug>/", feed, name="feed_topic"),
99 path("room/<slug:topic_slug>/<slug:ordering>/", feed, name="feed_topic_ordering"),
100
101 path("comment/<uuid:comment_id>/upvote/", upvote_comment, name="upvote_comment"),
102 path("comment/<uuid:comment_id>/retract_vote/", retract_comment_vote, name="retract_comment_vote"),
103 path("comment/<uuid:comment_id>/edit/", edit_comment, name="edit_comment"),
104 path("comment/<uuid:comment_id>/pin/", pin_comment, name="pin_comment"),
105 path("comment/<uuid:comment_id>/delete/", delete_comment, name="delete_comment"),
106
107 path("telegram/link/", link_telegram, name="link_telegram"),
108 path("telegram/webhook/<str:token>/", webhook_telegram, name="webhook_telegram"),
109
110 path("notifications/confirm/<str:secret>/", email_confirm, name="email_confirm"),
111 path("notifications/confirm/<str:secret>/<str:legacy_code>/", email_confirm, name="email_confirm_legacy"),
112 path("notifications/unsubscribe/<str:user_id>/<str:secret>/", email_unsubscribe, name="email_unsubscribe"),
113 path("notifications/switch/<str:digest_type>/<str:user_id>/<str:secret>/", email_digest_switch,
114 name="email_digest_switch"),
115 path("notifications/renderer/digest/weekly/", weekly_digest, name="render_weekly_digest"),
116 path("notifications/renderer/digest/daily/<slug:user_slug>/", daily_digest, name="render_daily_digest"),
117
118 path("docs/<slug:doc_slug>/", docs, name="docs"),
119
120 path("network/", network, name="network"),
121
122 path("godmode/", god_settings, name="god_settings"),
123 path("godmode/dev_login/", debug_dev_login, name="debug_dev_login"),
124 path("godmode/random_login/", debug_random_login, name="debug_random_login"),
125
126 # feeds
127 path("sitemap.xml", sitemap, {"sitemaps": sitemaps}, name="sitemap"),
128 path("posts.rss", NewPostsRss(), name="rss"),
129
130 # keep these guys at the bottom
131 re_path(r"^{}/$".format(POST_TYPE_RE), feed, name="feed_type"),
132 re_path(r"^{}/{}/$".format(POST_TYPE_RE, ORDERING_RE), feed, name="feed_ordering"),
133 path("<slug:post_type>/<slug:post_slug>/", show_post, name="show_post"),
134 path("<slug:post_type>/<slug:post_slug>.md", md_show_post, name="md_show_post"),
135 path("<slug:post_type>/<slug:post_slug>.json", api_show_post, name="api_show_post"),
136 ]
137
138 if settings.DEBUG:
139 import debug_toolbar
140
141 urlpatterns = [path("__debug__/", include(debug_toolbar.urls))] + urlpatterns
142
143 # According to django doc: https://docs.djangoproject.com/en/3.1/topics/testing/overview/#other-test-conditions
144 # Regardless of the value of the DEBUG setting in your configuration file, all Django tests run with DEBUG=False
145 # so we use separate special var instead of settings.DEBUG
146 if settings.TESTS_RUN:
147 from debug.api import api_me
148
149 urlpatterns.append(path("debug/me", api_me, name="debug_api_me"))
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/club/urls.py b/club/urls.py
--- a/club/urls.py
+++ b/club/urls.py
@@ -11,7 +11,7 @@
from comments.views import create_comment, edit_comment, delete_comment, show_comment, upvote_comment, \
retract_comment_vote, pin_comment
from landing.views import landing, docs, god_settings
-from misc.views import achievements, network
+from misc.views import achievements, network, robots
from notifications.views import weekly_digest, email_unsubscribe, email_confirm, daily_digest, email_digest_switch
from payments.views import membership_expired, pay, done, stripe_webhook, stop_subscription
from posts.api import md_show_post, api_show_post
@@ -127,6 +127,8 @@
path("sitemap.xml", sitemap, {"sitemaps": sitemaps}, name="sitemap"),
path("posts.rss", NewPostsRss(), name="rss"),
+ path("robots.txt", robots, name="robots"),
+
# keep these guys at the bottom
re_path(r"^{}/$".format(POST_TYPE_RE), feed, name="feed_type"),
re_path(r"^{}/{}/$".format(POST_TYPE_RE, ORDERING_RE), feed, name="feed_ordering"),
diff --git a/misc/views.py b/misc/views.py
--- a/misc/views.py
+++ b/misc/views.py
@@ -1,4 +1,6 @@
+from django.http import HttpResponse
from django.shortcuts import render
+from django.views.decorators.http import require_GET
from auth.helpers import auth_required
from landing.models import GodSettings
@@ -19,3 +21,12 @@
return render(request, "pages/network.html", {
"page_html": secret_page_html,
})
+
+@require_GET
+def robots(request):
+ lines = [
+ "User-agent: *",
+ "Sitemap: https://vas3k.club/sitemap.xml",
+ "Host: https://vas3k.club",
+ ]
+ return HttpResponse("\n".join(lines), content_type="text/plain")
|
{"golden_diff": "diff --git a/club/urls.py b/club/urls.py\n--- a/club/urls.py\n+++ b/club/urls.py\n@@ -11,7 +11,7 @@\n from comments.views import create_comment, edit_comment, delete_comment, show_comment, upvote_comment, \\\n retract_comment_vote, pin_comment\n from landing.views import landing, docs, god_settings\n-from misc.views import achievements, network\n+from misc.views import achievements, network, robots\n from notifications.views import weekly_digest, email_unsubscribe, email_confirm, daily_digest, email_digest_switch\n from payments.views import membership_expired, pay, done, stripe_webhook, stop_subscription\n from posts.api import md_show_post, api_show_post\n@@ -127,6 +127,8 @@\n path(\"sitemap.xml\", sitemap, {\"sitemaps\": sitemaps}, name=\"sitemap\"),\n path(\"posts.rss\", NewPostsRss(), name=\"rss\"),\n \n+ path(\"robots.txt\", robots, name=\"robots\"),\n+\n # keep these guys at the bottom\n re_path(r\"^{}/$\".format(POST_TYPE_RE), feed, name=\"feed_type\"),\n re_path(r\"^{}/{}/$\".format(POST_TYPE_RE, ORDERING_RE), feed, name=\"feed_ordering\"),\ndiff --git a/misc/views.py b/misc/views.py\n--- a/misc/views.py\n+++ b/misc/views.py\n@@ -1,4 +1,6 @@\n+from django.http import HttpResponse\n from django.shortcuts import render\n+from django.views.decorators.http import require_GET\n \n from auth.helpers import auth_required\n from landing.models import GodSettings\n@@ -19,3 +21,12 @@\n return render(request, \"pages/network.html\", {\n \"page_html\": secret_page_html,\n })\n+\n+@require_GET\n+def robots(request):\n+ lines = [\n+ \"User-agent: *\",\n+ \"Sitemap: https://vas3k.club/sitemap.xml\",\n+ \"Host: https://vas3k.club\",\n+ ]\n+ return HttpResponse(\"\\n\".join(lines), content_type=\"text/plain\")\n", "issue": "\u041e\u0442\u0441\u0443\u0442\u0441\u0442\u0432\u0443\u0435\u0442 robots.txt\n\u0423\u0432\u0435\u0440\u0435\u043d, \u0447\u0442\u043e \u0432 \u043a\u043b\u0443\u0431\u0435 \u0432\u0441\u0435\u043c \u043f\u043e\u0445\u0443\u0439, \u043d\u043e \u043c\u043d\u0435 \u043a\u0430\u0436\u0434\u044b\u0439 \u0440\u0430\u0437 \u0431\u043e\u043b\u044c\u043d\u043e, \u043a\u043e\u0433\u0434\u0430 \u0432\u0438\u0436\u0443, \u0447\u0442\u043e \u043d\u0435\u0442 robots.txt.\r\n\r\n\u041c\u043e\u0436\u043d\u043e \u0445\u043e\u0442\u044f\u0431 \u0441\u0442\u0430\u043d\u0434\u0430\u0440\u0442\u043d\u044b\u0439 \u0434\u043e\u0431\u0430\u0432\u0438\u0442\u044c, \u0432\u0440\u043e\u0434\u0435:\r\n\r\nUser-agent: *\r\nSitemap: https://vas3k.club/sitemap.xml\r\nHost: https://vas3k.club\n", "before_files": [{"content": "from django.shortcuts import render\n\nfrom auth.helpers import auth_required\nfrom landing.models import GodSettings\nfrom users.models.achievements import Achievement\n\n\n@auth_required\ndef achievements(request):\n achievements = Achievement.objects.filter(is_visible=True)\n return render(request, \"pages/achievements.html\", {\n \"achievements\": achievements\n })\n\n\n@auth_required\ndef network(request):\n secret_page_html = GodSettings.objects.first().network_page\n return render(request, \"pages/network.html\", {\n \"page_html\": secret_page_html,\n })\n", "path": "misc/views.py"}, {"content": "from django.conf import settings\nfrom django.contrib.sitemaps.views import sitemap\nfrom django.urls import path, include, re_path\n\nfrom auth.helpers import auth_switch\nfrom auth.views.auth import login, logout, debug_dev_login, debug_random_login, join\nfrom auth.views.email import email_login, email_login_code\nfrom auth.views.external import external_login\nfrom 
auth.views.patreon import patreon_login, patreon_oauth_callback\nfrom bot.views import webhook_telegram, link_telegram\nfrom comments.views import create_comment, edit_comment, delete_comment, show_comment, upvote_comment, \\\n retract_comment_vote, pin_comment\nfrom landing.views import landing, docs, god_settings\nfrom misc.views import achievements, network\nfrom notifications.views import weekly_digest, email_unsubscribe, email_confirm, daily_digest, email_digest_switch\nfrom payments.views import membership_expired, pay, done, stripe_webhook, stop_subscription\nfrom posts.api import md_show_post, api_show_post\nfrom posts.models.post import Post\nfrom posts.rss import NewPostsRss\nfrom posts.sitemaps import sitemaps\nfrom posts.views.admin import admin_post, announce_post\nfrom posts.views.api import toggle_post_bookmark\nfrom posts.views.feed import feed\nfrom posts.views.posts import show_post, edit_post, upvote_post, retract_post_vote, compose, compose_type, \\\n toggle_post_subscription\nfrom bookmarks.views import bookmarks\nfrom search.views import search\nfrom users.api import api_profile\nfrom users.views.delete_account import request_delete_account, confirm_delete_account\nfrom users.views.messages import on_review, rejected, banned\nfrom users.views.profile import profile, toggle_tag, add_expertise, delete_expertise\nfrom users.views.settings import profile_settings, edit_profile, edit_account, edit_notifications, edit_payments, \\\n edit_bot, edit_data, request_data\nfrom users.views.intro import intro\nfrom users.views.admin import admin_profile\nfrom users.views.people import people\n\nPOST_TYPE_RE = r\"(?P<post_type>(all|{}))\".format(\"|\".join(dict(Post.TYPES).keys()))\nORDERING_RE = r\"(?P<ordering>(activity|new|top|top_week|top_month))\"\n\nurlpatterns = [\n path(\"\", auth_switch(landing, feed), name=\"index\"),\n\n path(\"join/\", join, name=\"join\"),\n path(\"auth/login/\", login, name=\"login\"),\n path(\"auth/logout/\", logout, name=\"logout\"),\n path(\"auth/patreon/\", patreon_login, name=\"patreon_login\"),\n path(\"auth/patreon_callback/\", patreon_oauth_callback, name=\"patreon_oauth_callback\"),\n path(\"auth/email/\", email_login, name=\"email_login\"),\n path(\"auth/email/code/\", email_login_code, name=\"email_login_code\"),\n path(\"auth/external/\", external_login, name=\"external_login\"),\n\n path(\"monies/\", pay, name=\"pay\"),\n path(\"monies/done/\", done, name=\"done\"),\n path(\"monies/membership_expired/\", membership_expired, name=\"membership_expired\"),\n path(\"monies/stripe/webhook/\", stripe_webhook, name=\"stripe_webhook\"),\n path(\"monies/subscription/<str:subscription_id>/stop/\", stop_subscription, name=\"stop_subscription\"),\n\n path(\"user/<slug:user_slug>/\", profile, name=\"profile\"),\n path(\"user/<slug:user_slug>.json\", api_profile, name=\"api_profile\"),\n path(\"user/<slug:user_slug>/edit/\", profile_settings, name=\"profile_settings\"),\n path(\"user/<slug:user_slug>/edit/profile/\", edit_profile, name=\"edit_profile\"),\n path(\"user/<slug:user_slug>/edit/account/\", edit_account, name=\"edit_account\"),\n path(\"user/<slug:user_slug>/edit/bot/\", edit_bot, name=\"edit_bot\"),\n path(\"user/<slug:user_slug>/edit/notifications/\", edit_notifications, name=\"edit_notifications\"),\n path(\"user/<slug:user_slug>/edit/monies/\", edit_payments, name=\"edit_payments\"),\n path(\"user/<slug:user_slug>/edit/data/\", edit_data, name=\"edit_data\"),\n path(\"user/<slug:user_slug>/edit/data/request/\", request_data, 
name=\"request_user_data\"),\n path(\"user/<slug:user_slug>/admin/\", admin_profile, name=\"admin_profile\"),\n path(\"user/<slug:user_slug>/delete/\", request_delete_account, name=\"request_delete_account\"),\n path(\"user/<slug:user_slug>/delete/confirm/\", confirm_delete_account, name=\"confirm_delete_account\"),\n\n path(\"intro/\", intro, name=\"intro\"),\n path(\"people/\", people, name=\"people\"),\n path(\"achievements/\", achievements, name=\"achievements\"),\n path(\"profile/tag/<slug:tag_code>/toggle/\", toggle_tag, name=\"toggle_tag\"),\n path(\"profile/expertise/add/\", add_expertise, name=\"add_expertise\"),\n path(\"profile/expertise/<slug:expertise>/delete/\", delete_expertise, name=\"delete_expertise\"),\n path(\"profile/on_review/\", on_review, name=\"on_review\"),\n path(\"profile/rejected/\", rejected, name=\"rejected\"),\n path(\"profile/banned/\", banned, name=\"banned\"),\n\n path(\"create/\", compose, name=\"compose\"),\n path(\"create/<slug:post_type>/\", compose_type, name=\"compose_type\"),\n path(\"post/<slug:post_slug>/edit/\", edit_post, name=\"edit_post\"),\n path(\"post/<slug:post_slug>/bookmark/\", toggle_post_bookmark, name=\"toggle_post_bookmark\"),\n path(\"post/<slug:post_slug>/upvote/\", upvote_post, name=\"upvote_post\"),\n path(\"post/<slug:post_slug>/retract_vote/\", retract_post_vote, name=\"retract_post_vote\"),\n path(\"post/<slug:post_slug>/subscription/\", toggle_post_subscription, name=\"toggle_post_subscription\"),\n path(\"post/<slug:post_slug>/admin/\", admin_post, name=\"admin_post\"),\n path(\"post/<slug:post_slug>/announce/\", announce_post, name=\"announce_post\"),\n path(\"post/<slug:post_slug>/comment/create/\", create_comment, name=\"create_comment\"),\n path(\"post/<slug:post_slug>/comment/<uuid:comment_id>/\", show_comment, name=\"show_comment\", ),\n\n path(\"bookmarks/\", bookmarks, name=\"bookmarks\"),\n\n path(\"search/\", search, name=\"search\"),\n path(\"room/<slug:topic_slug>/\", feed, name=\"feed_topic\"),\n path(\"room/<slug:topic_slug>/<slug:ordering>/\", feed, name=\"feed_topic_ordering\"),\n\n path(\"comment/<uuid:comment_id>/upvote/\", upvote_comment, name=\"upvote_comment\"),\n path(\"comment/<uuid:comment_id>/retract_vote/\", retract_comment_vote, name=\"retract_comment_vote\"),\n path(\"comment/<uuid:comment_id>/edit/\", edit_comment, name=\"edit_comment\"),\n path(\"comment/<uuid:comment_id>/pin/\", pin_comment, name=\"pin_comment\"),\n path(\"comment/<uuid:comment_id>/delete/\", delete_comment, name=\"delete_comment\"),\n\n path(\"telegram/link/\", link_telegram, name=\"link_telegram\"),\n path(\"telegram/webhook/<str:token>/\", webhook_telegram, name=\"webhook_telegram\"),\n\n path(\"notifications/confirm/<str:secret>/\", email_confirm, name=\"email_confirm\"),\n path(\"notifications/confirm/<str:secret>/<str:legacy_code>/\", email_confirm, name=\"email_confirm_legacy\"),\n path(\"notifications/unsubscribe/<str:user_id>/<str:secret>/\", email_unsubscribe, name=\"email_unsubscribe\"),\n path(\"notifications/switch/<str:digest_type>/<str:user_id>/<str:secret>/\", email_digest_switch,\n name=\"email_digest_switch\"),\n path(\"notifications/renderer/digest/weekly/\", weekly_digest, name=\"render_weekly_digest\"),\n path(\"notifications/renderer/digest/daily/<slug:user_slug>/\", daily_digest, name=\"render_daily_digest\"),\n\n path(\"docs/<slug:doc_slug>/\", docs, name=\"docs\"),\n\n path(\"network/\", network, name=\"network\"),\n\n path(\"godmode/\", god_settings, name=\"god_settings\"),\n 
path(\"godmode/dev_login/\", debug_dev_login, name=\"debug_dev_login\"),\n path(\"godmode/random_login/\", debug_random_login, name=\"debug_random_login\"),\n\n # feeds\n path(\"sitemap.xml\", sitemap, {\"sitemaps\": sitemaps}, name=\"sitemap\"),\n path(\"posts.rss\", NewPostsRss(), name=\"rss\"),\n\n # keep these guys at the bottom\n re_path(r\"^{}/$\".format(POST_TYPE_RE), feed, name=\"feed_type\"),\n re_path(r\"^{}/{}/$\".format(POST_TYPE_RE, ORDERING_RE), feed, name=\"feed_ordering\"),\n path(\"<slug:post_type>/<slug:post_slug>/\", show_post, name=\"show_post\"),\n path(\"<slug:post_type>/<slug:post_slug>.md\", md_show_post, name=\"md_show_post\"),\n path(\"<slug:post_type>/<slug:post_slug>.json\", api_show_post, name=\"api_show_post\"),\n]\n\nif settings.DEBUG:\n import debug_toolbar\n\n urlpatterns = [path(\"__debug__/\", include(debug_toolbar.urls))] + urlpatterns\n\n# According to django doc: https://docs.djangoproject.com/en/3.1/topics/testing/overview/#other-test-conditions\n# Regardless of the value of the DEBUG setting in your configuration file, all Django tests run with DEBUG=False\n# so we use separate special var instead of settings.DEBUG\nif settings.TESTS_RUN:\n from debug.api import api_me\n\n urlpatterns.append(path(\"debug/me\", api_me, name=\"debug_api_me\"))\n", "path": "club/urls.py"}], "after_files": [{"content": "from django.http import HttpResponse\nfrom django.shortcuts import render\nfrom django.views.decorators.http import require_GET\n\nfrom auth.helpers import auth_required\nfrom landing.models import GodSettings\nfrom users.models.achievements import Achievement\n\n\n@auth_required\ndef achievements(request):\n achievements = Achievement.objects.filter(is_visible=True)\n return render(request, \"pages/achievements.html\", {\n \"achievements\": achievements\n })\n\n\n@auth_required\ndef network(request):\n secret_page_html = GodSettings.objects.first().network_page\n return render(request, \"pages/network.html\", {\n \"page_html\": secret_page_html,\n })\n\n@require_GET\ndef robots(request):\n lines = [\n \"User-agent: *\",\n \"Sitemap: https://vas3k.club/sitemap.xml\",\n \"Host: https://vas3k.club\",\n ]\n return HttpResponse(\"\\n\".join(lines), content_type=\"text/plain\")\n", "path": "misc/views.py"}, {"content": "from django.conf import settings\nfrom django.contrib.sitemaps.views import sitemap\nfrom django.urls import path, include, re_path\n\nfrom auth.helpers import auth_switch\nfrom auth.views.auth import login, logout, debug_dev_login, debug_random_login, join\nfrom auth.views.email import email_login, email_login_code\nfrom auth.views.external import external_login\nfrom auth.views.patreon import patreon_login, patreon_oauth_callback\nfrom bot.views import webhook_telegram, link_telegram\nfrom comments.views import create_comment, edit_comment, delete_comment, show_comment, upvote_comment, \\\n retract_comment_vote, pin_comment\nfrom landing.views import landing, docs, god_settings\nfrom misc.views import achievements, network, robots\nfrom notifications.views import weekly_digest, email_unsubscribe, email_confirm, daily_digest, email_digest_switch\nfrom payments.views import membership_expired, pay, done, stripe_webhook, stop_subscription\nfrom posts.api import md_show_post, api_show_post\nfrom posts.models.post import Post\nfrom posts.rss import NewPostsRss\nfrom posts.sitemaps import sitemaps\nfrom posts.views.admin import admin_post, announce_post\nfrom posts.views.api import toggle_post_bookmark\nfrom posts.views.feed import feed\nfrom 
posts.views.posts import show_post, edit_post, upvote_post, retract_post_vote, compose, compose_type, \\\n toggle_post_subscription\nfrom bookmarks.views import bookmarks\nfrom search.views import search\nfrom users.api import api_profile\nfrom users.views.delete_account import request_delete_account, confirm_delete_account\nfrom users.views.messages import on_review, rejected, banned\nfrom users.views.profile import profile, toggle_tag, add_expertise, delete_expertise\nfrom users.views.settings import profile_settings, edit_profile, edit_account, edit_notifications, edit_payments, \\\n edit_bot, edit_data, request_data\nfrom users.views.intro import intro\nfrom users.views.admin import admin_profile\nfrom users.views.people import people\n\nPOST_TYPE_RE = r\"(?P<post_type>(all|{}))\".format(\"|\".join(dict(Post.TYPES).keys()))\nORDERING_RE = r\"(?P<ordering>(activity|new|top|top_week|top_month))\"\n\nurlpatterns = [\n path(\"\", auth_switch(landing, feed), name=\"index\"),\n\n path(\"join/\", join, name=\"join\"),\n path(\"auth/login/\", login, name=\"login\"),\n path(\"auth/logout/\", logout, name=\"logout\"),\n path(\"auth/patreon/\", patreon_login, name=\"patreon_login\"),\n path(\"auth/patreon_callback/\", patreon_oauth_callback, name=\"patreon_oauth_callback\"),\n path(\"auth/email/\", email_login, name=\"email_login\"),\n path(\"auth/email/code/\", email_login_code, name=\"email_login_code\"),\n path(\"auth/external/\", external_login, name=\"external_login\"),\n\n path(\"monies/\", pay, name=\"pay\"),\n path(\"monies/done/\", done, name=\"done\"),\n path(\"monies/membership_expired/\", membership_expired, name=\"membership_expired\"),\n path(\"monies/stripe/webhook/\", stripe_webhook, name=\"stripe_webhook\"),\n path(\"monies/subscription/<str:subscription_id>/stop/\", stop_subscription, name=\"stop_subscription\"),\n\n path(\"user/<slug:user_slug>/\", profile, name=\"profile\"),\n path(\"user/<slug:user_slug>.json\", api_profile, name=\"api_profile\"),\n path(\"user/<slug:user_slug>/edit/\", profile_settings, name=\"profile_settings\"),\n path(\"user/<slug:user_slug>/edit/profile/\", edit_profile, name=\"edit_profile\"),\n path(\"user/<slug:user_slug>/edit/account/\", edit_account, name=\"edit_account\"),\n path(\"user/<slug:user_slug>/edit/bot/\", edit_bot, name=\"edit_bot\"),\n path(\"user/<slug:user_slug>/edit/notifications/\", edit_notifications, name=\"edit_notifications\"),\n path(\"user/<slug:user_slug>/edit/monies/\", edit_payments, name=\"edit_payments\"),\n path(\"user/<slug:user_slug>/edit/data/\", edit_data, name=\"edit_data\"),\n path(\"user/<slug:user_slug>/edit/data/request/\", request_data, name=\"request_user_data\"),\n path(\"user/<slug:user_slug>/admin/\", admin_profile, name=\"admin_profile\"),\n path(\"user/<slug:user_slug>/delete/\", request_delete_account, name=\"request_delete_account\"),\n path(\"user/<slug:user_slug>/delete/confirm/\", confirm_delete_account, name=\"confirm_delete_account\"),\n\n path(\"intro/\", intro, name=\"intro\"),\n path(\"people/\", people, name=\"people\"),\n path(\"achievements/\", achievements, name=\"achievements\"),\n path(\"profile/tag/<slug:tag_code>/toggle/\", toggle_tag, name=\"toggle_tag\"),\n path(\"profile/expertise/add/\", add_expertise, name=\"add_expertise\"),\n path(\"profile/expertise/<slug:expertise>/delete/\", delete_expertise, name=\"delete_expertise\"),\n path(\"profile/on_review/\", on_review, name=\"on_review\"),\n path(\"profile/rejected/\", rejected, name=\"rejected\"),\n path(\"profile/banned/\", banned, 
name=\"banned\"),\n\n path(\"create/\", compose, name=\"compose\"),\n path(\"create/<slug:post_type>/\", compose_type, name=\"compose_type\"),\n path(\"post/<slug:post_slug>/edit/\", edit_post, name=\"edit_post\"),\n path(\"post/<slug:post_slug>/bookmark/\", toggle_post_bookmark, name=\"toggle_post_bookmark\"),\n path(\"post/<slug:post_slug>/upvote/\", upvote_post, name=\"upvote_post\"),\n path(\"post/<slug:post_slug>/retract_vote/\", retract_post_vote, name=\"retract_post_vote\"),\n path(\"post/<slug:post_slug>/subscription/\", toggle_post_subscription, name=\"toggle_post_subscription\"),\n path(\"post/<slug:post_slug>/admin/\", admin_post, name=\"admin_post\"),\n path(\"post/<slug:post_slug>/announce/\", announce_post, name=\"announce_post\"),\n path(\"post/<slug:post_slug>/comment/create/\", create_comment, name=\"create_comment\"),\n path(\"post/<slug:post_slug>/comment/<uuid:comment_id>/\", show_comment, name=\"show_comment\", ),\n\n path(\"bookmarks/\", bookmarks, name=\"bookmarks\"),\n\n path(\"search/\", search, name=\"search\"),\n path(\"room/<slug:topic_slug>/\", feed, name=\"feed_topic\"),\n path(\"room/<slug:topic_slug>/<slug:ordering>/\", feed, name=\"feed_topic_ordering\"),\n\n path(\"comment/<uuid:comment_id>/upvote/\", upvote_comment, name=\"upvote_comment\"),\n path(\"comment/<uuid:comment_id>/retract_vote/\", retract_comment_vote, name=\"retract_comment_vote\"),\n path(\"comment/<uuid:comment_id>/edit/\", edit_comment, name=\"edit_comment\"),\n path(\"comment/<uuid:comment_id>/pin/\", pin_comment, name=\"pin_comment\"),\n path(\"comment/<uuid:comment_id>/delete/\", delete_comment, name=\"delete_comment\"),\n\n path(\"telegram/link/\", link_telegram, name=\"link_telegram\"),\n path(\"telegram/webhook/<str:token>/\", webhook_telegram, name=\"webhook_telegram\"),\n\n path(\"notifications/confirm/<str:secret>/\", email_confirm, name=\"email_confirm\"),\n path(\"notifications/confirm/<str:secret>/<str:legacy_code>/\", email_confirm, name=\"email_confirm_legacy\"),\n path(\"notifications/unsubscribe/<str:user_id>/<str:secret>/\", email_unsubscribe, name=\"email_unsubscribe\"),\n path(\"notifications/switch/<str:digest_type>/<str:user_id>/<str:secret>/\", email_digest_switch,\n name=\"email_digest_switch\"),\n path(\"notifications/renderer/digest/weekly/\", weekly_digest, name=\"render_weekly_digest\"),\n path(\"notifications/renderer/digest/daily/<slug:user_slug>/\", daily_digest, name=\"render_daily_digest\"),\n\n path(\"docs/<slug:doc_slug>/\", docs, name=\"docs\"),\n\n path(\"network/\", network, name=\"network\"),\n\n path(\"godmode/\", god_settings, name=\"god_settings\"),\n path(\"godmode/dev_login/\", debug_dev_login, name=\"debug_dev_login\"),\n path(\"godmode/random_login/\", debug_random_login, name=\"debug_random_login\"),\n\n # feeds\n path(\"sitemap.xml\", sitemap, {\"sitemaps\": sitemaps}, name=\"sitemap\"),\n path(\"posts.rss\", NewPostsRss(), name=\"rss\"),\n\n path(\"robots.txt\", robots, name=\"robots\"),\n\n # keep these guys at the bottom\n re_path(r\"^{}/$\".format(POST_TYPE_RE), feed, name=\"feed_type\"),\n re_path(r\"^{}/{}/$\".format(POST_TYPE_RE, ORDERING_RE), feed, name=\"feed_ordering\"),\n path(\"<slug:post_type>/<slug:post_slug>/\", show_post, name=\"show_post\"),\n path(\"<slug:post_type>/<slug:post_slug>.md\", md_show_post, name=\"md_show_post\"),\n path(\"<slug:post_type>/<slug:post_slug>.json\", api_show_post, name=\"api_show_post\"),\n]\n\nif settings.DEBUG:\n import debug_toolbar\n\n urlpatterns = [path(\"__debug__/\", 
include(debug_toolbar.urls))] + urlpatterns\n\n# According to django doc: https://docs.djangoproject.com/en/3.1/topics/testing/overview/#other-test-conditions\n# Regardless of the value of the DEBUG setting in your configuration file, all Django tests run with DEBUG=False\n# so we use separate special var instead of settings.DEBUG\nif settings.TESTS_RUN:\n from debug.api import api_me\n\n urlpatterns.append(path(\"debug/me\", api_me, name=\"debug_api_me\"))\n", "path": "club/urls.py"}]}
| 2,914 | 457 |
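The golden diff in the record above adds a plain-text `robots` view and routes it at `/robots.txt`. A minimal sketch of how that route could be exercised with Django's test client — the URL and expected body come from the diff itself, while the test function name and the assumption that anonymous GET requests reach the `@require_GET` view are illustrative, not part of the record:

```python
from django.test import Client


def test_robots_txt_served_as_plain_text():
    # Assumes the project's test settings let an anonymous GET reach
    # the @require_GET view (no auth redirect on this path).
    response = Client().get("/robots.txt")
    assert response.status_code == 200
    assert response["Content-Type"].startswith("text/plain")
    assert b"User-agent: *" in response.content
    assert b"Sitemap: https://vas3k.club/sitemap.xml" in response.content
    assert b"Host: https://vas3k.club" in response.content
```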
gh_patches_debug_19678
|
rasdani/github-patches
|
git_diff
|
fedora-infra__bodhi-2887
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bodhi.server.consumers.updates.UpdatesHandler.consumer uses an assert statement
```assert``` statements get removed from optimized code, which is what gets built in Koji (which is where production Bodhi builds come from). Thus, the assertion will not be present in production.
If the assertion is important, we should either reconcile the database discrepancy, or raise an Exception. It might make the most sense to reconcile the database discrepancy.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bodhi/server/consumers/updates.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2015-2018 Red Hat Inc., and others.
3 #
4 # This file is part of Bodhi.
5 #
6 # This program is free software; you can redistribute it and/or
7 # modify it under the terms of the GNU General Public License
8 # as published by the Free Software Foundation; either version 2
9 # of the License, or (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License along with
17 # this program; if not, write to the Free Software Foundation, Inc., 51
18 # Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
19 """
20 The "updates handler".
21
22 This module is responsible for doing value-added work "offline" that used to be
23 done when updates were submitted. Specifically, when someone submits an update
24 we used to:
25
26 - Update any bugs in bugzilla associated with the update.
27 - Check for test cases in the wiki.
28
29 Those things could sometimes take a *very* long time, especially if there were
30 lots of builds and lots of bugs in the update.
31
32 Now, update-submission breezes by those steps and simply tells the user "OK".
33 A fedmsg message gets published when their update goes through, and *that*
34 message gets received here and triggers us to do all that network-laden heavy
35 lifting.
36 """
37
38 import logging
39 import pprint
40 import time
41
42 import fedmsg.consumers
43
44 from bodhi.server import initialize_db, util, bugs as bug_module
45 from bodhi.server.config import config
46 from bodhi.server.exceptions import BodhiException
47 from bodhi.server.models import Bug, Update, UpdateType
48
49
50 log = logging.getLogger('bodhi')
51
52
53 class UpdatesHandler(fedmsg.consumers.FedmsgConsumer):
54 """
55 Perform background tasks when updates are created or edited.
56
57 This fedmsg listener waits for messages from the frontend about new or edited updates, and
58 performs background tasks such as modifying Bugzilla issues (and loading information from
59 Bugzilla so we can display it to the user) and looking up wiki test cases.
60
61 Attributes:
62 db_factory (bodhi.server.util.TransactionalSessionMaker): A context manager that yields a
63 database session.
64 handle_bugs (bool): If True, interact with Bugzilla. Else do not.
65 topic (list): A list of strings that indicate which fedmsg topics this consumer listens to.
66 """
67
68 config_key = 'updates_handler'
69
70 def __init__(self, hub, *args, **kwargs):
71 """
72 Initialize the UpdatesHandler, subscribing it to the appropriate topics.
73
74 Args:
75 hub (moksha.hub.hub.CentralMokshaHub): The hub this handler is consuming messages from.
76 It is used to look up the hub config.
77 """
78 initialize_db(config)
79 self.db_factory = util.transactional_session_maker()
80
81 prefix = hub.config.get('topic_prefix')
82 env = hub.config.get('environment')
83 self.topic = [
84 prefix + '.' + env + '.bodhi.update.request.testing',
85 prefix + '.' + env + '.bodhi.update.edit',
86 ]
87
88 self.handle_bugs = bool(config.get('bodhi_email'))
89 if not self.handle_bugs:
90 log.warning("No bodhi_email defined; not fetching bug details")
91 else:
92 bug_module.set_bugtracker()
93
94 super(UpdatesHandler, self).__init__(hub, *args, **kwargs)
95 log.info('Bodhi updates handler listening on:\n'
96 '%s' % pprint.pformat(self.topic))
97
98 def consume(self, message):
99 """
100 Process the given message, updating relevant bugs and test cases.
101
102 Args:
103 message (munch.Munch): A fedmsg about a new or edited update.
104 """
105 msg = message['body']['msg']
106 topic = message['topic']
107 alias = msg['update'].get('alias')
108
109 log.info("Updates Handler handling %s, %s" % (alias, topic))
110
111 # Go to sleep for a second to try and avoid a race condition
112 # https://github.com/fedora-infra/bodhi/issues/458
113 time.sleep(1)
114
115 if not alias:
116 log.error("Update Handler got update with no "
117 "alias %s." % pprint.pformat(msg))
118 return
119
120 with self.db_factory() as session:
121 update = Update.get(alias)
122 if not update:
123 raise BodhiException("Couldn't find alias '%s' in DB" % alias)
124
125 if topic.endswith('update.edit'):
126 bugs = [Bug.get(idx) for idx in msg['new_bugs']]
127 # Sanity check
128 for bug in bugs:
129 assert bug in update.bugs
130 elif topic.endswith('update.request.testing'):
131 bugs = update.bugs
132 else:
133 raise NotImplementedError("Should never get here.")
134
135 self.work_on_bugs(session, update, bugs)
136 self.fetch_test_cases(session, update)
137
138 if config['test_gating.required']:
139 with self.db_factory() as session:
140 update = Update.get(alias)
141 update.update_test_gating_status()
142
143 log.info("Updates Handler done with %s, %s" % (alias, topic))
144
145 def fetch_test_cases(self, session, update):
146 """
147 Query the wiki for test cases for each package on the given update.
148
149 Args:
150 session (sqlalchemy.orm.session.Session): A database session.
151 update (bodhi.server.models.Update): The update's builds are iterated upon to find test
152 cases for their associated Packages..
153 """
154 for build in update.builds:
155 build.package.fetch_test_cases(session)
156
157 def work_on_bugs(self, session, update, bugs):
158 """
159 Iterate the list of bugs, retrieving information from Bugzilla and modifying them.
160
161 Iterate the given list of bugs associated with the given update. For each bug, retrieve
162 details from Bugzilla, comment on the bug to let watchers know about the update, and mark
163 the bug as MODIFIED. If the bug is a security issue, mark the update as a security update.
164
165 If the bug is private, Bodhi can't retrieve any information, comment on it, or modify
166 it, so we just associate the bug id with the update and mark it to be private.
167
168 If handle_bugs is not True, return and do nothing.
169
170 Args:
171 session (sqlalchemy.orm.session.Session): A database session.
172 update (bodhi.server.models.Update): The update that the bugs are associated with.
173 bugs (list): A list of bodhi.server.models.Bug instances that we wish to act on.
174 """
175 if not self.handle_bugs:
176 log.warning("Not configured to handle bugs")
177 return
178
179 log.info("Got %i bugs to sync for %r" % (len(bugs), update.alias))
180 for bug in bugs:
181 log.info("Getting RHBZ bug %r" % bug.bug_id)
182 try:
183 rhbz_bug = bug_module.bugtracker.getbug(bug.bug_id)
184
185 log.info("Updating our details for %r" % bug.bug_id)
186 bug.update_details(rhbz_bug)
187 if bug.private:
188 # Bodhi can't retrieve any information so just continue with the next bug
189 log.info(" Skipping bug %r because it is private" % (bug.bug_id))
190 continue
191 log.info(" Got title %r for %r" % (bug.title, bug.bug_id))
192
193 # If you set the type of your update to 'enhancement' but you
194 # attach a security bug, we automatically change the type of your
195 # update to 'security'. We need to do this first, so we don't
196 # accidentally comment on stuff that we shouldn't.
197 if bug.security:
198 log.info("Setting our UpdateType to security.")
199 update.type = UpdateType.security
200
201 log.info("Commenting on %r" % bug.bug_id)
202 comment = config['initial_bug_msg'] % (
203 update.title, update.release.long_name, update.abs_url())
204
205 log.info("Modifying %r" % bug.bug_id)
206 bug.modified(update, comment)
207 except Exception:
208 log.warning('Error occurred during updating single bug', exc_info=True)
209
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bodhi/server/consumers/updates.py b/bodhi/server/consumers/updates.py
--- a/bodhi/server/consumers/updates.py
+++ b/bodhi/server/consumers/updates.py
@@ -122,11 +122,21 @@
if not update:
raise BodhiException("Couldn't find alias '%s' in DB" % alias)
+ bugs = []
if topic.endswith('update.edit'):
- bugs = [Bug.get(idx) for idx in msg['new_bugs']]
- # Sanity check
- for bug in bugs:
- assert bug in update.bugs
+ for idx in msg['new_bugs']:
+ bug = Bug.get(idx)
+
+ # Sanity check
+ if bug is None or bug not in update.bugs:
+ update_bugs_ids = [b.bug_id for b in update.bugs]
+ update.update_bugs(update_bugs_ids + [idx], session)
+
+ # Now, after update.update_bugs, bug with idx should exists in DB
+ bug = Bug.get(idx)
+
+ bugs.append(bug)
+
elif topic.endswith('update.request.testing'):
bugs = update.bugs
else:
|
{"golden_diff": "diff --git a/bodhi/server/consumers/updates.py b/bodhi/server/consumers/updates.py\n--- a/bodhi/server/consumers/updates.py\n+++ b/bodhi/server/consumers/updates.py\n@@ -122,11 +122,21 @@\n if not update:\n raise BodhiException(\"Couldn't find alias '%s' in DB\" % alias)\n \n+ bugs = []\n if topic.endswith('update.edit'):\n- bugs = [Bug.get(idx) for idx in msg['new_bugs']]\n- # Sanity check\n- for bug in bugs:\n- assert bug in update.bugs\n+ for idx in msg['new_bugs']:\n+ bug = Bug.get(idx)\n+\n+ # Sanity check\n+ if bug is None or bug not in update.bugs:\n+ update_bugs_ids = [b.bug_id for b in update.bugs]\n+ update.update_bugs(update_bugs_ids + [idx], session)\n+\n+ # Now, after update.update_bugs, bug with idx should exists in DB\n+ bug = Bug.get(idx)\n+\n+ bugs.append(bug)\n+\n elif topic.endswith('update.request.testing'):\n bugs = update.bugs\n else:\n", "issue": "bodhi.server.consumers.updates.UpdatesHandler.consumer uses an assert statement\n```assert``` statements get removed from optimized code, which is what gets built in Koji (which is where production Bodhi builds come from). Thus, the assertion will not be present in production.\r\n\r\nIf the assertion is important, we should either reconcile the database discrepancy, or raise an Exception. It might make the most sense to reconcile the database discrepancy.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015-2018 Red Hat Inc., and others.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"\nThe \"updates handler\".\n\nThis module is responsible for doing value-added work \"offline\" that used to be\ndone when updates were submitted. 
Specifically, when someone submits an update\nwe used to:\n\n- Update any bugs in bugzilla associated with the update.\n- Check for test cases in the wiki.\n\nThose things could sometimes take a *very* long time, especially if there were\nlots of builds and lots of bugs in the update.\n\nNow, update-submission breezes by those steps and simply tells the user \"OK\".\nA fedmsg message gets published when their update goes through, and *that*\nmessage gets received here and triggers us to do all that network-laden heavy\nlifting.\n\"\"\"\n\nimport logging\nimport pprint\nimport time\n\nimport fedmsg.consumers\n\nfrom bodhi.server import initialize_db, util, bugs as bug_module\nfrom bodhi.server.config import config\nfrom bodhi.server.exceptions import BodhiException\nfrom bodhi.server.models import Bug, Update, UpdateType\n\n\nlog = logging.getLogger('bodhi')\n\n\nclass UpdatesHandler(fedmsg.consumers.FedmsgConsumer):\n \"\"\"\n Perform background tasks when updates are created or edited.\n\n This fedmsg listener waits for messages from the frontend about new or edited updates, and\n performs background tasks such as modifying Bugzilla issues (and loading information from\n Bugzilla so we can display it to the user) and looking up wiki test cases.\n\n Attributes:\n db_factory (bodhi.server.util.TransactionalSessionMaker): A context manager that yields a\n database session.\n handle_bugs (bool): If True, interact with Bugzilla. Else do not.\n topic (list): A list of strings that indicate which fedmsg topics this consumer listens to.\n \"\"\"\n\n config_key = 'updates_handler'\n\n def __init__(self, hub, *args, **kwargs):\n \"\"\"\n Initialize the UpdatesHandler, subscribing it to the appropriate topics.\n\n Args:\n hub (moksha.hub.hub.CentralMokshaHub): The hub this handler is consuming messages from.\n It is used to look up the hub config.\n \"\"\"\n initialize_db(config)\n self.db_factory = util.transactional_session_maker()\n\n prefix = hub.config.get('topic_prefix')\n env = hub.config.get('environment')\n self.topic = [\n prefix + '.' + env + '.bodhi.update.request.testing',\n prefix + '.' 
+ env + '.bodhi.update.edit',\n ]\n\n self.handle_bugs = bool(config.get('bodhi_email'))\n if not self.handle_bugs:\n log.warning(\"No bodhi_email defined; not fetching bug details\")\n else:\n bug_module.set_bugtracker()\n\n super(UpdatesHandler, self).__init__(hub, *args, **kwargs)\n log.info('Bodhi updates handler listening on:\\n'\n '%s' % pprint.pformat(self.topic))\n\n def consume(self, message):\n \"\"\"\n Process the given message, updating relevant bugs and test cases.\n\n Args:\n message (munch.Munch): A fedmsg about a new or edited update.\n \"\"\"\n msg = message['body']['msg']\n topic = message['topic']\n alias = msg['update'].get('alias')\n\n log.info(\"Updates Handler handling %s, %s\" % (alias, topic))\n\n # Go to sleep for a second to try and avoid a race condition\n # https://github.com/fedora-infra/bodhi/issues/458\n time.sleep(1)\n\n if not alias:\n log.error(\"Update Handler got update with no \"\n \"alias %s.\" % pprint.pformat(msg))\n return\n\n with self.db_factory() as session:\n update = Update.get(alias)\n if not update:\n raise BodhiException(\"Couldn't find alias '%s' in DB\" % alias)\n\n if topic.endswith('update.edit'):\n bugs = [Bug.get(idx) for idx in msg['new_bugs']]\n # Sanity check\n for bug in bugs:\n assert bug in update.bugs\n elif topic.endswith('update.request.testing'):\n bugs = update.bugs\n else:\n raise NotImplementedError(\"Should never get here.\")\n\n self.work_on_bugs(session, update, bugs)\n self.fetch_test_cases(session, update)\n\n if config['test_gating.required']:\n with self.db_factory() as session:\n update = Update.get(alias)\n update.update_test_gating_status()\n\n log.info(\"Updates Handler done with %s, %s\" % (alias, topic))\n\n def fetch_test_cases(self, session, update):\n \"\"\"\n Query the wiki for test cases for each package on the given update.\n\n Args:\n session (sqlalchemy.orm.session.Session): A database session.\n update (bodhi.server.models.Update): The update's builds are iterated upon to find test\n cases for their associated Packages..\n \"\"\"\n for build in update.builds:\n build.package.fetch_test_cases(session)\n\n def work_on_bugs(self, session, update, bugs):\n \"\"\"\n Iterate the list of bugs, retrieving information from Bugzilla and modifying them.\n\n Iterate the given list of bugs associated with the given update. For each bug, retrieve\n details from Bugzilla, comment on the bug to let watchers know about the update, and mark\n the bug as MODIFIED. 
If the bug is a security issue, mark the update as a security update.\n\n If the bug is private, Bodhi can't retrieve any information, comment on it, or modify\n it, so we just associate the bug id with the update and mark it to be private.\n\n If handle_bugs is not True, return and do nothing.\n\n Args:\n session (sqlalchemy.orm.session.Session): A database session.\n update (bodhi.server.models.Update): The update that the bugs are associated with.\n bugs (list): A list of bodhi.server.models.Bug instances that we wish to act on.\n \"\"\"\n if not self.handle_bugs:\n log.warning(\"Not configured to handle bugs\")\n return\n\n log.info(\"Got %i bugs to sync for %r\" % (len(bugs), update.alias))\n for bug in bugs:\n log.info(\"Getting RHBZ bug %r\" % bug.bug_id)\n try:\n rhbz_bug = bug_module.bugtracker.getbug(bug.bug_id)\n\n log.info(\"Updating our details for %r\" % bug.bug_id)\n bug.update_details(rhbz_bug)\n if bug.private:\n # Bodhi can't retrieve any information so just continue with the next bug\n log.info(\" Skipping bug %r because it is private\" % (bug.bug_id))\n continue\n log.info(\" Got title %r for %r\" % (bug.title, bug.bug_id))\n\n # If you set the type of your update to 'enhancement' but you\n # attach a security bug, we automatically change the type of your\n # update to 'security'. We need to do this first, so we don't\n # accidentally comment on stuff that we shouldn't.\n if bug.security:\n log.info(\"Setting our UpdateType to security.\")\n update.type = UpdateType.security\n\n log.info(\"Commenting on %r\" % bug.bug_id)\n comment = config['initial_bug_msg'] % (\n update.title, update.release.long_name, update.abs_url())\n\n log.info(\"Modifying %r\" % bug.bug_id)\n bug.modified(update, comment)\n except Exception:\n log.warning('Error occurred during updating single bug', exc_info=True)\n", "path": "bodhi/server/consumers/updates.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015-2018 Red Hat Inc., and others.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"\nThe \"updates handler\".\n\nThis module is responsible for doing value-added work \"offline\" that used to be\ndone when updates were submitted. 
Specifically, when someone submits an update\nwe used to:\n\n- Update any bugs in bugzilla associated with the update.\n- Check for test cases in the wiki.\n\nThose things could sometimes take a *very* long time, especially if there were\nlots of builds and lots of bugs in the update.\n\nNow, update-submission breezes by those steps and simply tells the user \"OK\".\nA fedmsg message gets published when their update goes through, and *that*\nmessage gets received here and triggers us to do all that network-laden heavy\nlifting.\n\"\"\"\n\nimport logging\nimport pprint\nimport time\n\nimport fedmsg.consumers\n\nfrom bodhi.server import initialize_db, util, bugs as bug_module\nfrom bodhi.server.config import config\nfrom bodhi.server.exceptions import BodhiException\nfrom bodhi.server.models import Bug, Update, UpdateType\n\n\nlog = logging.getLogger('bodhi')\n\n\nclass UpdatesHandler(fedmsg.consumers.FedmsgConsumer):\n \"\"\"\n Perform background tasks when updates are created or edited.\n\n This fedmsg listener waits for messages from the frontend about new or edited updates, and\n performs background tasks such as modifying Bugzilla issues (and loading information from\n Bugzilla so we can display it to the user) and looking up wiki test cases.\n\n Attributes:\n db_factory (bodhi.server.util.TransactionalSessionMaker): A context manager that yields a\n database session.\n handle_bugs (bool): If True, interact with Bugzilla. Else do not.\n topic (list): A list of strings that indicate which fedmsg topics this consumer listens to.\n \"\"\"\n\n config_key = 'updates_handler'\n\n def __init__(self, hub, *args, **kwargs):\n \"\"\"\n Initialize the UpdatesHandler, subscribing it to the appropriate topics.\n\n Args:\n hub (moksha.hub.hub.CentralMokshaHub): The hub this handler is consuming messages from.\n It is used to look up the hub config.\n \"\"\"\n initialize_db(config)\n self.db_factory = util.transactional_session_maker()\n\n prefix = hub.config.get('topic_prefix')\n env = hub.config.get('environment')\n self.topic = [\n prefix + '.' + env + '.bodhi.update.request.testing',\n prefix + '.' 
+ env + '.bodhi.update.edit',\n ]\n\n self.handle_bugs = bool(config.get('bodhi_email'))\n if not self.handle_bugs:\n log.warning(\"No bodhi_email defined; not fetching bug details\")\n else:\n bug_module.set_bugtracker()\n\n super(UpdatesHandler, self).__init__(hub, *args, **kwargs)\n log.info('Bodhi updates handler listening on:\\n'\n '%s' % pprint.pformat(self.topic))\n\n def consume(self, message):\n \"\"\"\n Process the given message, updating relevant bugs and test cases.\n\n Args:\n message (munch.Munch): A fedmsg about a new or edited update.\n \"\"\"\n msg = message['body']['msg']\n topic = message['topic']\n alias = msg['update'].get('alias')\n\n log.info(\"Updates Handler handling %s, %s\" % (alias, topic))\n\n # Go to sleep for a second to try and avoid a race condition\n # https://github.com/fedora-infra/bodhi/issues/458\n time.sleep(1)\n\n if not alias:\n log.error(\"Update Handler got update with no \"\n \"alias %s.\" % pprint.pformat(msg))\n return\n\n with self.db_factory() as session:\n update = Update.get(alias)\n if not update:\n raise BodhiException(\"Couldn't find alias '%s' in DB\" % alias)\n\n bugs = []\n if topic.endswith('update.edit'):\n for idx in msg['new_bugs']:\n bug = Bug.get(idx)\n\n # Sanity check\n if bug is None or bug not in update.bugs:\n update_bugs_ids = [b.bug_id for b in update.bugs]\n update.update_bugs(update_bugs_ids + [idx], session)\n\n # Now, after update.update_bugs, bug with idx should exists in DB\n bug = Bug.get(idx)\n\n bugs.append(bug)\n\n elif topic.endswith('update.request.testing'):\n bugs = update.bugs\n else:\n raise NotImplementedError(\"Should never get here.\")\n\n self.work_on_bugs(session, update, bugs)\n self.fetch_test_cases(session, update)\n\n if config['test_gating.required']:\n with self.db_factory() as session:\n update = Update.get(alias)\n update.update_test_gating_status()\n\n log.info(\"Updates Handler done with %s, %s\" % (alias, topic))\n\n def fetch_test_cases(self, session, update):\n \"\"\"\n Query the wiki for test cases for each package on the given update.\n\n Args:\n session (sqlalchemy.orm.session.Session): A database session.\n update (bodhi.server.models.Update): The update's builds are iterated upon to find test\n cases for their associated Packages..\n \"\"\"\n for build in update.builds:\n build.package.fetch_test_cases(session)\n\n def work_on_bugs(self, session, update, bugs):\n \"\"\"\n Iterate the list of bugs, retrieving information from Bugzilla and modifying them.\n\n Iterate the given list of bugs associated with the given update. For each bug, retrieve\n details from Bugzilla, comment on the bug to let watchers know about the update, and mark\n the bug as MODIFIED. 
If the bug is a security issue, mark the update as a security update.\n\n If the bug is private, Bodhi can't retrieve any information, comment on it, or modify\n it, so we just associate the bug id with the update and mark it to be private.\n\n If handle_bugs is not True, return and do nothing.\n\n Args:\n session (sqlalchemy.orm.session.Session): A database session.\n update (bodhi.server.models.Update): The update that the bugs are associated with.\n bugs (list): A list of bodhi.server.models.Bug instances that we wish to act on.\n \"\"\"\n if not self.handle_bugs:\n log.warning(\"Not configured to handle bugs\")\n return\n\n log.info(\"Got %i bugs to sync for %r\" % (len(bugs), update.alias))\n for bug in bugs:\n log.info(\"Getting RHBZ bug %r\" % bug.bug_id)\n try:\n rhbz_bug = bug_module.bugtracker.getbug(bug.bug_id)\n\n log.info(\"Updating our details for %r\" % bug.bug_id)\n bug.update_details(rhbz_bug)\n if bug.private:\n # Bodhi can't retrieve any information so just continue with the next bug\n log.info(\" Skipping bug %r because it is private\" % (bug.bug_id))\n continue\n log.info(\" Got title %r for %r\" % (bug.title, bug.bug_id))\n\n # If you set the type of your update to 'enhancement' but you\n # attach a security bug, we automatically change the type of your\n # update to 'security'. We need to do this first, so we don't\n # accidentally comment on stuff that we shouldn't.\n if bug.security:\n log.info(\"Setting our UpdateType to security.\")\n update.type = UpdateType.security\n\n log.info(\"Commenting on %r\" % bug.bug_id)\n comment = config['initial_bug_msg'] % (\n update.title, update.release.long_name, update.abs_url())\n\n log.info(\"Modifying %r\" % bug.bug_id)\n bug.modified(update, comment)\n except Exception:\n log.warning('Error occurred during updating single bug', exc_info=True)\n", "path": "bodhi/server/consumers/updates.py"}]}
| 2,729 | 277 |
gh_patches_debug_21500 | rasdani/github-patches | git_diff | saleor__saleor-11924 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: Giftcard created event is not triggered when gift card is bought
### What are you trying to achieve?
I'm trying to inform the external system about the creation of the gift card.
### Steps to reproduce the problem
1. Create a Product type of Gift card
2. Create gift card -> Product
3. Create webhook which triggers on gift card creation
4. Create draft order and add newly gift card
5. Finalize draft order
### What did you expect to happen?
Saleor should properly send a webhook after the gift card is created.
### Logs
_No response_
### Environment
Saleor version: 3.10
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/giftcard/utils.py`
Content:
```
1 from collections import defaultdict
2 from datetime import date
3 from typing import TYPE_CHECKING, DefaultDict, Iterable, List, Optional
4 from uuid import UUID
5
6 from dateutil.relativedelta import relativedelta
7 from django.db import transaction
8 from django.db.models.expressions import Exists, OuterRef
9 from django.utils import timezone
10
11 from ..checkout.models import Checkout
12 from ..core.exceptions import GiftCardNotApplicable
13 from ..core.tracing import traced_atomic_transaction
14 from ..core.utils.promo_code import InvalidPromoCode, generate_promo_code
15 from ..order.actions import OrderFulfillmentLineInfo, create_fulfillments
16 from ..order.models import OrderLine
17 from ..site import GiftCardSettingsExpiryType
18 from . import GiftCardEvents, GiftCardLineData, events
19 from .models import GiftCard, GiftCardEvent
20 from .notifications import send_gift_card_notification
21
22 if TYPE_CHECKING:
23 from django.db.models import QuerySet
24
25 from ..account.models import User
26 from ..app.models import App
27 from ..order.models import Order
28 from ..plugins.manager import PluginsManager
29 from ..site.models import SiteSettings
30
31
32 def add_gift_card_code_to_checkout(
33 checkout: Checkout, email: str, promo_code: str, currency: str
34 ):
35 """Add gift card data to checkout by code.
36
37 Raise ValidationError if email is not provided.
38 Raise InvalidPromoCode if gift card cannot be applied.
39 """
40 from ..checkout.checkout_cleaner import validate_checkout_email
41
42 validate_checkout_email(checkout)
43
44 try:
45 # only active gift card with currency the same as channel currency can be used
46 gift_card = (
47 GiftCard.objects.active(date=date.today())
48 .filter(currency=currency)
49 .get(code=promo_code)
50 )
51 except GiftCard.DoesNotExist:
52 raise InvalidPromoCode()
53
54 used_by_email = gift_card.used_by_email
55 # gift card can be used only by one user
56 if used_by_email and used_by_email != email:
57 raise InvalidPromoCode()
58
59 checkout.gift_cards.add(gift_card)
60 checkout.save(update_fields=["last_change"])
61
62
63 def remove_gift_card_code_from_checkout(checkout: Checkout, gift_card_code: str):
64 """Remove gift card data from checkout by code.
65
66 Return information whether promo code was removed.
67 """
68 gift_card = checkout.gift_cards.filter(code=gift_card_code).first()
69 if gift_card:
70 checkout.gift_cards.remove(gift_card)
71 checkout.save(update_fields=["last_change"])
72 return True
73 return False
74
75
76 def deactivate_gift_card(gift_card: GiftCard):
77 """Set gift card status as inactive."""
78 if gift_card.is_active:
79 gift_card.is_active = False
80 gift_card.save(update_fields=["is_active"])
81
82
83 def activate_gift_card(gift_card: GiftCard):
84 """Set gift card status as active."""
85 if not gift_card.is_active:
86 gift_card.is_active = True
87 gift_card.save(update_fields=["is_active"])
88
89
90 def fulfill_non_shippable_gift_cards(
91 order: "Order",
92 order_lines: Iterable[OrderLine],
93 settings: "SiteSettings",
94 requestor_user: Optional["User"],
95 app: Optional["App"],
96 manager: "PluginsManager",
97 ):
98 gift_card_lines = get_non_shippable_gift_card_lines(order_lines)
99 if not gift_card_lines:
100 return
101 fulfill_gift_card_lines(
102 gift_card_lines, requestor_user, app, order, settings, manager
103 )
104
105
106 def get_non_shippable_gift_card_lines(lines: Iterable[OrderLine]) -> "QuerySet":
107 gift_card_lines = get_gift_card_lines(lines)
108 non_shippable_lines = OrderLine.objects.filter(
109 id__in=[line.pk for line in gift_card_lines], is_shipping_required=False
110 )
111 return non_shippable_lines
112
113
114 def get_gift_card_lines(lines: Iterable[OrderLine]):
115 gift_card_lines = [line for line in lines if line.is_gift_card]
116 return gift_card_lines
117
118
119 def fulfill_gift_card_lines(
120 gift_card_lines: "QuerySet",
121 requestor_user: Optional["User"],
122 app: Optional["App"],
123 order: "Order",
124 settings: "SiteSettings",
125 manager: "PluginsManager",
126 ):
127 lines_for_warehouses: DefaultDict[
128 UUID, List[OrderFulfillmentLineInfo]
129 ] = defaultdict(list)
130 channel_slug = order.channel.slug
131 for line in gift_card_lines.prefetch_related(
132 "allocations__stock", "variant__stocks"
133 ):
134 if allocations := line.allocations.all():
135 for allocation in allocations:
136 quantity = allocation.quantity_allocated
137 if quantity > 0:
138 warehouse_pk = allocation.stock.warehouse_id
139 lines_for_warehouses[warehouse_pk].append(
140 {"order_line": line, "quantity": quantity}
141 )
142 else:
143 stock = line.variant.stocks.for_channel_and_country(channel_slug).first()
144 if not stock:
145 raise GiftCardNotApplicable(
146 message="Lack of gift card stock for checkout channel.",
147 )
148 warehouse_pk = stock.warehouse_id
149 lines_for_warehouses[warehouse_pk].append(
150 {"order_line": line, "quantity": line.quantity}
151 )
152
153 return create_fulfillments(
154 requestor_user,
155 app,
156 order,
157 dict(lines_for_warehouses),
158 manager,
159 settings,
160 notify_customer=True,
161 )
162
163
164 @traced_atomic_transaction()
165 def gift_cards_create(
166 order: "Order",
167 gift_card_lines_info: Iterable["GiftCardLineData"],
168 settings: "SiteSettings",
169 requestor_user: Optional["User"],
170 app: Optional["App"],
171 manager: "PluginsManager",
172 ):
173 """Create purchased gift cards."""
174 customer_user = order.user
175 user_email = order.user_email
176 gift_cards = []
177 non_shippable_gift_cards = []
178 expiry_date = calculate_expiry_date(settings)
179 for line_data in gift_card_lines_info:
180 order_line = line_data.order_line
181 price = order_line.unit_price_gross
182 line_gift_cards = [
183 GiftCard( # type: ignore[misc] # see below:
184 code=generate_promo_code(),
185 initial_balance=price, # money field not supported by mypy_django_plugin # noqa: E501
186 current_balance=price, # money field not supported by mypy_django_plugin # noqa: E501
187 created_by=customer_user,
188 created_by_email=user_email,
189 product=line_data.variant.product if line_data.variant else None,
190 fulfillment_line=line_data.fulfillment_line,
191 expiry_date=expiry_date,
192 )
193 for _ in range(line_data.quantity)
194 ]
195 gift_cards.extend(line_gift_cards)
196 if not order_line.is_shipping_required:
197 non_shippable_gift_cards.extend(line_gift_cards)
198
199 gift_cards = GiftCard.objects.bulk_create(gift_cards)
200 events.gift_cards_bought_event(gift_cards, order, requestor_user, app)
201
202 channel_slug = order.channel.slug
203 # send to customer all non-shippable gift cards
204 transaction.on_commit(
205 lambda: send_gift_cards_to_customer(
206 non_shippable_gift_cards,
207 user_email,
208 requestor_user,
209 app,
210 customer_user,
211 manager,
212 channel_slug,
213 )
214 )
215 return gift_cards
216
217
218 def calculate_expiry_date(settings):
219 """Calculate expiry date based on gift card settings."""
220 today = timezone.now().date()
221 expiry_date = None
222 if settings.gift_card_expiry_type == GiftCardSettingsExpiryType.EXPIRY_PERIOD:
223 expiry_period_type = settings.gift_card_expiry_period_type
224 time_delta = {f"{expiry_period_type}s": settings.gift_card_expiry_period}
225 expiry_date = today + relativedelta(**time_delta)
226 return expiry_date
227
228
229 def send_gift_cards_to_customer(
230 gift_cards: Iterable[GiftCard],
231 user_email: str,
232 requestor_user: Optional["User"],
233 app: Optional["App"],
234 customer_user: Optional["User"],
235 manager: "PluginsManager",
236 channel_slug: str,
237 ):
238 for gift_card in gift_cards:
239 send_gift_card_notification(
240 requestor_user,
241 app,
242 customer_user,
243 user_email,
244 gift_card,
245 manager,
246 channel_slug,
247 resending=False,
248 )
249
250
251 def deactivate_order_gift_cards(
252 order_id: UUID, user: Optional["User"], app: Optional["App"]
253 ):
254 gift_card_events = GiftCardEvent.objects.filter(
255 type=GiftCardEvents.BOUGHT, order_id=order_id
256 )
257 gift_cards = GiftCard.objects.filter(
258 Exists(gift_card_events.filter(gift_card_id=OuterRef("id")))
259 )
260 gift_cards.update(is_active=False)
261 events.gift_cards_deactivated_event(
262 gift_cards.values_list("id", flat=True), user, app
263 )
264
265
266 def order_has_gift_card_lines(order):
267 return any(order.lines.filter(is_gift_card=True))
268
269
270 def assign_user_gift_cards(user):
271 GiftCard.objects.filter(used_by_email=user.email).update(used_by=user)
272 GiftCard.objects.filter(created_by_email=user.email).update(created_by=user)
273
274
275 def is_gift_card_expired(gift_card: GiftCard):
276 """Return True when gift card expiry date pass."""
277 today = timezone.now().date()
278 return bool(gift_card.expiry_date) and gift_card.expiry_date < today # type: ignore
279
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/saleor/giftcard/utils.py b/saleor/giftcard/utils.py
--- a/saleor/giftcard/utils.py
+++ b/saleor/giftcard/utils.py
@@ -11,6 +11,7 @@
from ..checkout.models import Checkout
from ..core.exceptions import GiftCardNotApplicable
from ..core.tracing import traced_atomic_transaction
+from ..core.utils.events import call_event
from ..core.utils.promo_code import InvalidPromoCode, generate_promo_code
from ..order.actions import OrderFulfillmentLineInfo, create_fulfillments
from ..order.models import OrderLine
@@ -199,6 +200,9 @@
gift_cards = GiftCard.objects.bulk_create(gift_cards)
events.gift_cards_bought_event(gift_cards, order, requestor_user, app)
+ for gift_card in gift_cards:
+ call_event(manager.gift_card_created, gift_card)
+
channel_slug = order.channel.slug
# send to customer all non-shippable gift cards
transaction.on_commit(
|
{"golden_diff": "diff --git a/saleor/giftcard/utils.py b/saleor/giftcard/utils.py\n--- a/saleor/giftcard/utils.py\n+++ b/saleor/giftcard/utils.py\n@@ -11,6 +11,7 @@\n from ..checkout.models import Checkout\n from ..core.exceptions import GiftCardNotApplicable\n from ..core.tracing import traced_atomic_transaction\n+from ..core.utils.events import call_event\n from ..core.utils.promo_code import InvalidPromoCode, generate_promo_code\n from ..order.actions import OrderFulfillmentLineInfo, create_fulfillments\n from ..order.models import OrderLine\n@@ -199,6 +200,9 @@\n gift_cards = GiftCard.objects.bulk_create(gift_cards)\n events.gift_cards_bought_event(gift_cards, order, requestor_user, app)\n \n+ for gift_card in gift_cards:\n+ call_event(manager.gift_card_created, gift_card)\n+\n channel_slug = order.channel.slug\n # send to customer all non-shippable gift cards\n transaction.on_commit(\n", "issue": "Bug: Giftcard created event is not triggered when gift card is bought \n### What are you trying to achieve?\n\nI'm trying to inform the external system about the creation of the gift card. \n\n### Steps to reproduce the problem\n\n1. Create a Product type of Gift card\r\n2. Create gift card -> Product\r\n3. Create webhook which triggers on gift card creation \r\n4. Create draft order and add newly gift card \r\n5. Finalize draft order \n\n### What did you expect to happen?\n\nSaleor should properly send a webhook after the gift card is created.\n\n### Logs\n\n_No response_\n\n### Environment\n\nSaleor version: 3.10\r\n\n", "before_files": [{"content": "from collections import defaultdict\nfrom datetime import date\nfrom typing import TYPE_CHECKING, DefaultDict, Iterable, List, Optional\nfrom uuid import UUID\n\nfrom dateutil.relativedelta import relativedelta\nfrom django.db import transaction\nfrom django.db.models.expressions import Exists, OuterRef\nfrom django.utils import timezone\n\nfrom ..checkout.models import Checkout\nfrom ..core.exceptions import GiftCardNotApplicable\nfrom ..core.tracing import traced_atomic_transaction\nfrom ..core.utils.promo_code import InvalidPromoCode, generate_promo_code\nfrom ..order.actions import OrderFulfillmentLineInfo, create_fulfillments\nfrom ..order.models import OrderLine\nfrom ..site import GiftCardSettingsExpiryType\nfrom . 
import GiftCardEvents, GiftCardLineData, events\nfrom .models import GiftCard, GiftCardEvent\nfrom .notifications import send_gift_card_notification\n\nif TYPE_CHECKING:\n from django.db.models import QuerySet\n\n from ..account.models import User\n from ..app.models import App\n from ..order.models import Order\n from ..plugins.manager import PluginsManager\n from ..site.models import SiteSettings\n\n\ndef add_gift_card_code_to_checkout(\n checkout: Checkout, email: str, promo_code: str, currency: str\n):\n \"\"\"Add gift card data to checkout by code.\n\n Raise ValidationError if email is not provided.\n Raise InvalidPromoCode if gift card cannot be applied.\n \"\"\"\n from ..checkout.checkout_cleaner import validate_checkout_email\n\n validate_checkout_email(checkout)\n\n try:\n # only active gift card with currency the same as channel currency can be used\n gift_card = (\n GiftCard.objects.active(date=date.today())\n .filter(currency=currency)\n .get(code=promo_code)\n )\n except GiftCard.DoesNotExist:\n raise InvalidPromoCode()\n\n used_by_email = gift_card.used_by_email\n # gift card can be used only by one user\n if used_by_email and used_by_email != email:\n raise InvalidPromoCode()\n\n checkout.gift_cards.add(gift_card)\n checkout.save(update_fields=[\"last_change\"])\n\n\ndef remove_gift_card_code_from_checkout(checkout: Checkout, gift_card_code: str):\n \"\"\"Remove gift card data from checkout by code.\n\n Return information whether promo code was removed.\n \"\"\"\n gift_card = checkout.gift_cards.filter(code=gift_card_code).first()\n if gift_card:\n checkout.gift_cards.remove(gift_card)\n checkout.save(update_fields=[\"last_change\"])\n return True\n return False\n\n\ndef deactivate_gift_card(gift_card: GiftCard):\n \"\"\"Set gift card status as inactive.\"\"\"\n if gift_card.is_active:\n gift_card.is_active = False\n gift_card.save(update_fields=[\"is_active\"])\n\n\ndef activate_gift_card(gift_card: GiftCard):\n \"\"\"Set gift card status as active.\"\"\"\n if not gift_card.is_active:\n gift_card.is_active = True\n gift_card.save(update_fields=[\"is_active\"])\n\n\ndef fulfill_non_shippable_gift_cards(\n order: \"Order\",\n order_lines: Iterable[OrderLine],\n settings: \"SiteSettings\",\n requestor_user: Optional[\"User\"],\n app: Optional[\"App\"],\n manager: \"PluginsManager\",\n):\n gift_card_lines = get_non_shippable_gift_card_lines(order_lines)\n if not gift_card_lines:\n return\n fulfill_gift_card_lines(\n gift_card_lines, requestor_user, app, order, settings, manager\n )\n\n\ndef get_non_shippable_gift_card_lines(lines: Iterable[OrderLine]) -> \"QuerySet\":\n gift_card_lines = get_gift_card_lines(lines)\n non_shippable_lines = OrderLine.objects.filter(\n id__in=[line.pk for line in gift_card_lines], is_shipping_required=False\n )\n return non_shippable_lines\n\n\ndef get_gift_card_lines(lines: Iterable[OrderLine]):\n gift_card_lines = [line for line in lines if line.is_gift_card]\n return gift_card_lines\n\n\ndef fulfill_gift_card_lines(\n gift_card_lines: \"QuerySet\",\n requestor_user: Optional[\"User\"],\n app: Optional[\"App\"],\n order: \"Order\",\n settings: \"SiteSettings\",\n manager: \"PluginsManager\",\n):\n lines_for_warehouses: DefaultDict[\n UUID, List[OrderFulfillmentLineInfo]\n ] = defaultdict(list)\n channel_slug = order.channel.slug\n for line in gift_card_lines.prefetch_related(\n \"allocations__stock\", \"variant__stocks\"\n ):\n if allocations := line.allocations.all():\n for allocation in allocations:\n quantity = allocation.quantity_allocated\n 
if quantity > 0:\n warehouse_pk = allocation.stock.warehouse_id\n lines_for_warehouses[warehouse_pk].append(\n {\"order_line\": line, \"quantity\": quantity}\n )\n else:\n stock = line.variant.stocks.for_channel_and_country(channel_slug).first()\n if not stock:\n raise GiftCardNotApplicable(\n message=\"Lack of gift card stock for checkout channel.\",\n )\n warehouse_pk = stock.warehouse_id\n lines_for_warehouses[warehouse_pk].append(\n {\"order_line\": line, \"quantity\": line.quantity}\n )\n\n return create_fulfillments(\n requestor_user,\n app,\n order,\n dict(lines_for_warehouses),\n manager,\n settings,\n notify_customer=True,\n )\n\n\n@traced_atomic_transaction()\ndef gift_cards_create(\n order: \"Order\",\n gift_card_lines_info: Iterable[\"GiftCardLineData\"],\n settings: \"SiteSettings\",\n requestor_user: Optional[\"User\"],\n app: Optional[\"App\"],\n manager: \"PluginsManager\",\n):\n \"\"\"Create purchased gift cards.\"\"\"\n customer_user = order.user\n user_email = order.user_email\n gift_cards = []\n non_shippable_gift_cards = []\n expiry_date = calculate_expiry_date(settings)\n for line_data in gift_card_lines_info:\n order_line = line_data.order_line\n price = order_line.unit_price_gross\n line_gift_cards = [\n GiftCard( # type: ignore[misc] # see below:\n code=generate_promo_code(),\n initial_balance=price, # money field not supported by mypy_django_plugin # noqa: E501\n current_balance=price, # money field not supported by mypy_django_plugin # noqa: E501\n created_by=customer_user,\n created_by_email=user_email,\n product=line_data.variant.product if line_data.variant else None,\n fulfillment_line=line_data.fulfillment_line,\n expiry_date=expiry_date,\n )\n for _ in range(line_data.quantity)\n ]\n gift_cards.extend(line_gift_cards)\n if not order_line.is_shipping_required:\n non_shippable_gift_cards.extend(line_gift_cards)\n\n gift_cards = GiftCard.objects.bulk_create(gift_cards)\n events.gift_cards_bought_event(gift_cards, order, requestor_user, app)\n\n channel_slug = order.channel.slug\n # send to customer all non-shippable gift cards\n transaction.on_commit(\n lambda: send_gift_cards_to_customer(\n non_shippable_gift_cards,\n user_email,\n requestor_user,\n app,\n customer_user,\n manager,\n channel_slug,\n )\n )\n return gift_cards\n\n\ndef calculate_expiry_date(settings):\n \"\"\"Calculate expiry date based on gift card settings.\"\"\"\n today = timezone.now().date()\n expiry_date = None\n if settings.gift_card_expiry_type == GiftCardSettingsExpiryType.EXPIRY_PERIOD:\n expiry_period_type = settings.gift_card_expiry_period_type\n time_delta = {f\"{expiry_period_type}s\": settings.gift_card_expiry_period}\n expiry_date = today + relativedelta(**time_delta)\n return expiry_date\n\n\ndef send_gift_cards_to_customer(\n gift_cards: Iterable[GiftCard],\n user_email: str,\n requestor_user: Optional[\"User\"],\n app: Optional[\"App\"],\n customer_user: Optional[\"User\"],\n manager: \"PluginsManager\",\n channel_slug: str,\n):\n for gift_card in gift_cards:\n send_gift_card_notification(\n requestor_user,\n app,\n customer_user,\n user_email,\n gift_card,\n manager,\n channel_slug,\n resending=False,\n )\n\n\ndef deactivate_order_gift_cards(\n order_id: UUID, user: Optional[\"User\"], app: Optional[\"App\"]\n):\n gift_card_events = GiftCardEvent.objects.filter(\n type=GiftCardEvents.BOUGHT, order_id=order_id\n )\n gift_cards = GiftCard.objects.filter(\n Exists(gift_card_events.filter(gift_card_id=OuterRef(\"id\")))\n )\n gift_cards.update(is_active=False)\n 
events.gift_cards_deactivated_event(\n gift_cards.values_list(\"id\", flat=True), user, app\n )\n\n\ndef order_has_gift_card_lines(order):\n return any(order.lines.filter(is_gift_card=True))\n\n\ndef assign_user_gift_cards(user):\n GiftCard.objects.filter(used_by_email=user.email).update(used_by=user)\n GiftCard.objects.filter(created_by_email=user.email).update(created_by=user)\n\n\ndef is_gift_card_expired(gift_card: GiftCard):\n \"\"\"Return True when gift card expiry date pass.\"\"\"\n today = timezone.now().date()\n return bool(gift_card.expiry_date) and gift_card.expiry_date < today # type: ignore\n", "path": "saleor/giftcard/utils.py"}], "after_files": [{"content": "from collections import defaultdict\nfrom datetime import date\nfrom typing import TYPE_CHECKING, DefaultDict, Iterable, List, Optional\nfrom uuid import UUID\n\nfrom dateutil.relativedelta import relativedelta\nfrom django.db import transaction\nfrom django.db.models.expressions import Exists, OuterRef\nfrom django.utils import timezone\n\nfrom ..checkout.models import Checkout\nfrom ..core.exceptions import GiftCardNotApplicable\nfrom ..core.tracing import traced_atomic_transaction\nfrom ..core.utils.events import call_event\nfrom ..core.utils.promo_code import InvalidPromoCode, generate_promo_code\nfrom ..order.actions import OrderFulfillmentLineInfo, create_fulfillments\nfrom ..order.models import OrderLine\nfrom ..site import GiftCardSettingsExpiryType\nfrom . import GiftCardEvents, GiftCardLineData, events\nfrom .models import GiftCard, GiftCardEvent\nfrom .notifications import send_gift_card_notification\n\nif TYPE_CHECKING:\n from django.db.models import QuerySet\n\n from ..account.models import User\n from ..app.models import App\n from ..order.models import Order\n from ..plugins.manager import PluginsManager\n from ..site.models import SiteSettings\n\n\ndef add_gift_card_code_to_checkout(\n checkout: Checkout, email: str, promo_code: str, currency: str\n):\n \"\"\"Add gift card data to checkout by code.\n\n Raise ValidationError if email is not provided.\n Raise InvalidPromoCode if gift card cannot be applied.\n \"\"\"\n from ..checkout.checkout_cleaner import validate_checkout_email\n\n validate_checkout_email(checkout)\n\n try:\n # only active gift card with currency the same as channel currency can be used\n gift_card = (\n GiftCard.objects.active(date=date.today())\n .filter(currency=currency)\n .get(code=promo_code)\n )\n except GiftCard.DoesNotExist:\n raise InvalidPromoCode()\n\n used_by_email = gift_card.used_by_email\n # gift card can be used only by one user\n if used_by_email and used_by_email != email:\n raise InvalidPromoCode()\n\n checkout.gift_cards.add(gift_card)\n checkout.save(update_fields=[\"last_change\"])\n\n\ndef remove_gift_card_code_from_checkout(checkout: Checkout, gift_card_code: str):\n \"\"\"Remove gift card data from checkout by code.\n\n Return information whether promo code was removed.\n \"\"\"\n gift_card = checkout.gift_cards.filter(code=gift_card_code).first()\n if gift_card:\n checkout.gift_cards.remove(gift_card)\n checkout.save(update_fields=[\"last_change\"])\n return True\n return False\n\n\ndef deactivate_gift_card(gift_card: GiftCard):\n \"\"\"Set gift card status as inactive.\"\"\"\n if gift_card.is_active:\n gift_card.is_active = False\n gift_card.save(update_fields=[\"is_active\"])\n\n\ndef activate_gift_card(gift_card: GiftCard):\n \"\"\"Set gift card status as active.\"\"\"\n if not gift_card.is_active:\n gift_card.is_active = True\n 
gift_card.save(update_fields=[\"is_active\"])\n\n\ndef fulfill_non_shippable_gift_cards(\n order: \"Order\",\n order_lines: Iterable[OrderLine],\n settings: \"SiteSettings\",\n requestor_user: Optional[\"User\"],\n app: Optional[\"App\"],\n manager: \"PluginsManager\",\n):\n gift_card_lines = get_non_shippable_gift_card_lines(order_lines)\n if not gift_card_lines:\n return\n fulfill_gift_card_lines(\n gift_card_lines, requestor_user, app, order, settings, manager\n )\n\n\ndef get_non_shippable_gift_card_lines(lines: Iterable[OrderLine]) -> \"QuerySet\":\n gift_card_lines = get_gift_card_lines(lines)\n non_shippable_lines = OrderLine.objects.filter(\n id__in=[line.pk for line in gift_card_lines], is_shipping_required=False\n )\n return non_shippable_lines\n\n\ndef get_gift_card_lines(lines: Iterable[OrderLine]):\n gift_card_lines = [line for line in lines if line.is_gift_card]\n return gift_card_lines\n\n\ndef fulfill_gift_card_lines(\n gift_card_lines: \"QuerySet\",\n requestor_user: Optional[\"User\"],\n app: Optional[\"App\"],\n order: \"Order\",\n settings: \"SiteSettings\",\n manager: \"PluginsManager\",\n):\n lines_for_warehouses: DefaultDict[\n UUID, List[OrderFulfillmentLineInfo]\n ] = defaultdict(list)\n channel_slug = order.channel.slug\n for line in gift_card_lines.prefetch_related(\n \"allocations__stock\", \"variant__stocks\"\n ):\n if allocations := line.allocations.all():\n for allocation in allocations:\n quantity = allocation.quantity_allocated\n if quantity > 0:\n warehouse_pk = allocation.stock.warehouse_id\n lines_for_warehouses[warehouse_pk].append(\n {\"order_line\": line, \"quantity\": quantity}\n )\n else:\n stock = line.variant.stocks.for_channel_and_country(channel_slug).first()\n if not stock:\n raise GiftCardNotApplicable(\n message=\"Lack of gift card stock for checkout channel.\",\n )\n warehouse_pk = stock.warehouse_id\n lines_for_warehouses[warehouse_pk].append(\n {\"order_line\": line, \"quantity\": line.quantity}\n )\n\n return create_fulfillments(\n requestor_user,\n app,\n order,\n dict(lines_for_warehouses),\n manager,\n settings,\n notify_customer=True,\n )\n\n\n@traced_atomic_transaction()\ndef gift_cards_create(\n order: \"Order\",\n gift_card_lines_info: Iterable[\"GiftCardLineData\"],\n settings: \"SiteSettings\",\n requestor_user: Optional[\"User\"],\n app: Optional[\"App\"],\n manager: \"PluginsManager\",\n):\n \"\"\"Create purchased gift cards.\"\"\"\n customer_user = order.user\n user_email = order.user_email\n gift_cards = []\n non_shippable_gift_cards = []\n expiry_date = calculate_expiry_date(settings)\n for line_data in gift_card_lines_info:\n order_line = line_data.order_line\n price = order_line.unit_price_gross\n line_gift_cards = [\n GiftCard( # type: ignore[misc] # see below:\n code=generate_promo_code(),\n initial_balance=price, # money field not supported by mypy_django_plugin # noqa: E501\n current_balance=price, # money field not supported by mypy_django_plugin # noqa: E501\n created_by=customer_user,\n created_by_email=user_email,\n product=line_data.variant.product if line_data.variant else None,\n fulfillment_line=line_data.fulfillment_line,\n expiry_date=expiry_date,\n )\n for _ in range(line_data.quantity)\n ]\n gift_cards.extend(line_gift_cards)\n if not order_line.is_shipping_required:\n non_shippable_gift_cards.extend(line_gift_cards)\n\n gift_cards = GiftCard.objects.bulk_create(gift_cards)\n events.gift_cards_bought_event(gift_cards, order, requestor_user, app)\n\n for gift_card in gift_cards:\n 
call_event(manager.gift_card_created, gift_card)\n\n channel_slug = order.channel.slug\n # send to customer all non-shippable gift cards\n transaction.on_commit(\n lambda: send_gift_cards_to_customer(\n non_shippable_gift_cards,\n user_email,\n requestor_user,\n app,\n customer_user,\n manager,\n channel_slug,\n )\n )\n return gift_cards\n\n\ndef calculate_expiry_date(settings):\n \"\"\"Calculate expiry date based on gift card settings.\"\"\"\n today = timezone.now().date()\n expiry_date = None\n if settings.gift_card_expiry_type == GiftCardSettingsExpiryType.EXPIRY_PERIOD:\n expiry_period_type = settings.gift_card_expiry_period_type\n time_delta = {f\"{expiry_period_type}s\": settings.gift_card_expiry_period}\n expiry_date = today + relativedelta(**time_delta)\n return expiry_date\n\n\ndef send_gift_cards_to_customer(\n gift_cards: Iterable[GiftCard],\n user_email: str,\n requestor_user: Optional[\"User\"],\n app: Optional[\"App\"],\n customer_user: Optional[\"User\"],\n manager: \"PluginsManager\",\n channel_slug: str,\n):\n for gift_card in gift_cards:\n send_gift_card_notification(\n requestor_user,\n app,\n customer_user,\n user_email,\n gift_card,\n manager,\n channel_slug,\n resending=False,\n )\n\n\ndef deactivate_order_gift_cards(\n order_id: UUID, user: Optional[\"User\"], app: Optional[\"App\"]\n):\n gift_card_events = GiftCardEvent.objects.filter(\n type=GiftCardEvents.BOUGHT, order_id=order_id\n )\n gift_cards = GiftCard.objects.filter(\n Exists(gift_card_events.filter(gift_card_id=OuterRef(\"id\")))\n )\n gift_cards.update(is_active=False)\n events.gift_cards_deactivated_event(\n gift_cards.values_list(\"id\", flat=True), user, app\n )\n\n\ndef order_has_gift_card_lines(order):\n return any(order.lines.filter(is_gift_card=True))\n\n\ndef assign_user_gift_cards(user):\n GiftCard.objects.filter(used_by_email=user.email).update(used_by=user)\n GiftCard.objects.filter(created_by_email=user.email).update(created_by=user)\n\n\ndef is_gift_card_expired(gift_card: GiftCard):\n \"\"\"Return True when gift card expiry date pass.\"\"\"\n today = timezone.now().date()\n return bool(gift_card.expiry_date) and gift_card.expiry_date < today # type: ignore\n", "path": "saleor/giftcard/utils.py"}]}
| 3,159 | 234 |
gh_patches_debug_31371 | rasdani/github-patches | git_diff | pantsbuild__pants-5177 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[pantsd] sending ctrl-\ in a `./pants repl` can lead to a hung pantsd-runner
repro:
```
[omerta pants-release (master)]$ ps -ef |grep pantsd-runner |grep -v grep
[omerta pants-release (master)]$ ./pants -q repl 3rdparty/python:psutil
Python 2.7.10 (default, Dec 16 2015, 14:09:45)
[GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> ^\Quit: 3
[omerta pants-release (master)]$ ps -ef |grep pantsd-runner |grep -v grep
501 67669 1 0 10:37PM ?? 0:01.14 pantsd-runner [./pants -q repl 3rdparty/python:psutil]
501 67670 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil]
501 67671 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil]
501 67672 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil]
501 67673 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil]
501 67674 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil]
501 67675 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil]
501 67676 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil]
501 67677 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil]
[omerta pants-release (master)]$
```
we'll want to better handle `SIGQUIT` in the thin client side of the runner to avoid this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/bin/remote_pants_runner.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
3 # Licensed under the Apache License, Version 2.0 (see LICENSE).
4
5 from __future__ import (absolute_import, division, generators, nested_scopes, print_function,
6 unicode_literals, with_statement)
7
8 import logging
9 import signal
10 import sys
11 from contextlib import contextmanager
12
13 from pants.java.nailgun_client import NailgunClient
14 from pants.java.nailgun_protocol import NailgunProtocol
15 from pants.pantsd.pants_daemon import PantsDaemon
16 from pants.util.collections import combined_dict
17 from pants.util.memo import memoized_property
18
19
20 logger = logging.getLogger(__name__)
21
22
23 class RemotePantsRunner(object):
24 """A thin client variant of PantsRunner."""
25
26 class Fallback(Exception):
27 """Raised when fallback to an alternate execution mode is requested."""
28
29 class PortNotFound(Exception):
30 """Raised when the pailgun port can't be found."""
31
32 PANTS_COMMAND = 'pants'
33 RECOVERABLE_EXCEPTIONS = (PortNotFound, NailgunClient.NailgunConnectionError)
34
35 def __init__(self, exiter, args, env, bootstrap_options, stdin=None, stdout=None, stderr=None):
36 """
37 :param Exiter exiter: The Exiter instance to use for this run.
38 :param list args: The arguments (e.g. sys.argv) for this run.
39 :param dict env: The environment (e.g. os.environ) for this run.
40 :param Options bootstrap_options: The Options bag containing the bootstrap options.
41 :param file stdin: The stream representing stdin.
42 :param file stdout: The stream representing stdout.
43 :param file stderr: The stream representing stderr.
44 """
45 self._exiter = exiter
46 self._args = args
47 self._env = env
48 self._bootstrap_options = bootstrap_options
49 self._stdin = stdin or sys.stdin
50 self._stdout = stdout or sys.stdout
51 self._stderr = stderr or sys.stderr
52
53 @memoized_property
54 def pantsd(self):
55 return PantsDaemon.Factory.create(bootstrap_options=self._bootstrap_options)
56
57 @contextmanager
58 def _trapped_control_c(self, client):
59 """A contextmanager that overrides the SIGINT (control-c) handler and handles it remotely."""
60 def handle_control_c(signum, frame):
61 client.send_control_c()
62
63 existing_sigint_handler = signal.signal(signal.SIGINT, handle_control_c)
64 signal.siginterrupt(signal.SIGINT, False) # Retry interrupted system calls.
65 try:
66 yield
67 finally:
68 signal.signal(signal.SIGINT, existing_sigint_handler)
69
70 def _setup_logging(self):
71 """Sets up basic stdio logging for the thin client."""
72 log_level = logging.getLevelName(self._bootstrap_options.for_global_scope().level.upper())
73
74 formatter = logging.Formatter('%(levelname)s] %(message)s')
75 handler = logging.StreamHandler(sys.stdout)
76 handler.setLevel(log_level)
77 handler.setFormatter(formatter)
78
79 root = logging.getLogger()
80 root.setLevel(log_level)
81 root.addHandler(handler)
82
83 def _connect_and_execute(self, port):
84 # Merge the nailgun TTY capability environment variables with the passed environment dict.
85 ng_env = NailgunProtocol.isatty_to_env(self._stdin, self._stdout, self._stderr)
86 modified_env = combined_dict(self._env, ng_env)
87
88 assert isinstance(port, int), 'port {} is not an integer!'.format(port)
89
90 # Instantiate a NailgunClient.
91 client = NailgunClient(port=port,
92 ins=self._stdin,
93 out=self._stdout,
94 err=self._stderr,
95 exit_on_broken_pipe=True)
96
97 with self._trapped_control_c(client):
98 # Execute the command on the pailgun.
99 result = client.execute(self.PANTS_COMMAND, *self._args, **modified_env)
100
101 # Exit.
102 self._exiter.exit(result)
103
104 def run(self, args=None):
105 self._setup_logging()
106 port = self.pantsd.maybe_launch()
107
108 logger.debug('connecting to pailgun on port {}'.format(port))
109 try:
110 self._connect_and_execute(port)
111 except self.RECOVERABLE_EXCEPTIONS as e:
112 raise self.Fallback(e)
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/python/pants/bin/remote_pants_runner.py b/src/python/pants/bin/remote_pants_runner.py
--- a/src/python/pants/bin/remote_pants_runner.py
+++ b/src/python/pants/bin/remote_pants_runner.py
@@ -55,17 +55,25 @@
return PantsDaemon.Factory.create(bootstrap_options=self._bootstrap_options)
@contextmanager
- def _trapped_control_c(self, client):
- """A contextmanager that overrides the SIGINT (control-c) handler and handles it remotely."""
+ def _trapped_signals(self, client):
+ """A contextmanager that overrides the SIGINT (control-c) and SIGQUIT (control-\) handlers
+ and handles them remotely."""
def handle_control_c(signum, frame):
client.send_control_c()
existing_sigint_handler = signal.signal(signal.SIGINT, handle_control_c)
- signal.siginterrupt(signal.SIGINT, False) # Retry interrupted system calls.
+ # N.B. SIGQUIT will abruptly kill the pantsd-runner, which will shut down the other end
+ # of the Pailgun connection - so we send a gentler SIGINT here instead.
+ existing_sigquit_handler = signal.signal(signal.SIGQUIT, handle_control_c)
+
+ # Retry interrupted system calls.
+ signal.siginterrupt(signal.SIGINT, False)
+ signal.siginterrupt(signal.SIGQUIT, False)
try:
yield
finally:
signal.signal(signal.SIGINT, existing_sigint_handler)
+ signal.signal(signal.SIGQUIT, existing_sigquit_handler)
def _setup_logging(self):
"""Sets up basic stdio logging for the thin client."""
@@ -94,7 +102,7 @@
err=self._stderr,
exit_on_broken_pipe=True)
- with self._trapped_control_c(client):
+ with self._trapped_signals(client):
# Execute the command on the pailgun.
result = client.execute(self.PANTS_COMMAND, *self._args, **modified_env)
|
{"golden_diff": "diff --git a/src/python/pants/bin/remote_pants_runner.py b/src/python/pants/bin/remote_pants_runner.py\n--- a/src/python/pants/bin/remote_pants_runner.py\n+++ b/src/python/pants/bin/remote_pants_runner.py\n@@ -55,17 +55,25 @@\n return PantsDaemon.Factory.create(bootstrap_options=self._bootstrap_options)\n \n @contextmanager\n- def _trapped_control_c(self, client):\n- \"\"\"A contextmanager that overrides the SIGINT (control-c) handler and handles it remotely.\"\"\"\n+ def _trapped_signals(self, client):\n+ \"\"\"A contextmanager that overrides the SIGINT (control-c) and SIGQUIT (control-\\) handlers\n+ and handles them remotely.\"\"\"\n def handle_control_c(signum, frame):\n client.send_control_c()\n \n existing_sigint_handler = signal.signal(signal.SIGINT, handle_control_c)\n- signal.siginterrupt(signal.SIGINT, False) # Retry interrupted system calls.\n+ # N.B. SIGQUIT will abruptly kill the pantsd-runner, which will shut down the other end\n+ # of the Pailgun connection - so we send a gentler SIGINT here instead.\n+ existing_sigquit_handler = signal.signal(signal.SIGQUIT, handle_control_c)\n+\n+ # Retry interrupted system calls.\n+ signal.siginterrupt(signal.SIGINT, False)\n+ signal.siginterrupt(signal.SIGQUIT, False)\n try:\n yield\n finally:\n signal.signal(signal.SIGINT, existing_sigint_handler)\n+ signal.signal(signal.SIGQUIT, existing_sigquit_handler)\n \n def _setup_logging(self):\n \"\"\"Sets up basic stdio logging for the thin client.\"\"\"\n@@ -94,7 +102,7 @@\n err=self._stderr,\n exit_on_broken_pipe=True)\n \n- with self._trapped_control_c(client):\n+ with self._trapped_signals(client):\n # Execute the command on the pailgun.\n result = client.execute(self.PANTS_COMMAND, *self._args, **modified_env)\n", "issue": "[pantsd] sending ctrl-\\ in a `./pants repl` can lead to a hung pantsd-runner\nrepro:\r\n\r\n```\r\n[omerta pants-release (master)]$ ps -ef |grep pantsd-runner |grep -v grep\r\n[omerta pants-release (master)]$ ./pants -q repl 3rdparty/python:psutil\r\n\r\nPython 2.7.10 (default, Dec 16 2015, 14:09:45) \r\n[GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)] on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n(InteractiveConsole)\r\n>>> ^\\Quit: 3\r\n[omerta pants-release (master)]$ ps -ef |grep pantsd-runner |grep -v grep\r\n 501 67669 1 0 10:37PM ?? 0:01.14 pantsd-runner [./pants -q repl 3rdparty/python:psutil] \r\n 501 67670 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil] \r\n 501 67671 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil] \r\n 501 67672 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil] \r\n 501 67673 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil] \r\n 501 67674 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil] \r\n 501 67675 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil] \r\n 501 67676 67669 0 10:37PM ?? 0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil] \r\n 501 67677 67669 0 10:37PM ?? 
0:00.00 pantsd-runner [./pants -q repl 3rdparty/python:psutil] \r\n[omerta pants-release (master)]$ \r\n```\r\n\r\nwe'll want to better handle `SIGQUIT` in the thin client side of the runner to avoid this.\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import (absolute_import, division, generators, nested_scopes, print_function,\n unicode_literals, with_statement)\n\nimport logging\nimport signal\nimport sys\nfrom contextlib import contextmanager\n\nfrom pants.java.nailgun_client import NailgunClient\nfrom pants.java.nailgun_protocol import NailgunProtocol\nfrom pants.pantsd.pants_daemon import PantsDaemon\nfrom pants.util.collections import combined_dict\nfrom pants.util.memo import memoized_property\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass RemotePantsRunner(object):\n \"\"\"A thin client variant of PantsRunner.\"\"\"\n\n class Fallback(Exception):\n \"\"\"Raised when fallback to an alternate execution mode is requested.\"\"\"\n\n class PortNotFound(Exception):\n \"\"\"Raised when the pailgun port can't be found.\"\"\"\n\n PANTS_COMMAND = 'pants'\n RECOVERABLE_EXCEPTIONS = (PortNotFound, NailgunClient.NailgunConnectionError)\n\n def __init__(self, exiter, args, env, bootstrap_options, stdin=None, stdout=None, stderr=None):\n \"\"\"\n :param Exiter exiter: The Exiter instance to use for this run.\n :param list args: The arguments (e.g. sys.argv) for this run.\n :param dict env: The environment (e.g. os.environ) for this run.\n :param Options bootstrap_options: The Options bag containing the bootstrap options.\n :param file stdin: The stream representing stdin.\n :param file stdout: The stream representing stdout.\n :param file stderr: The stream representing stderr.\n \"\"\"\n self._exiter = exiter\n self._args = args\n self._env = env\n self._bootstrap_options = bootstrap_options\n self._stdin = stdin or sys.stdin\n self._stdout = stdout or sys.stdout\n self._stderr = stderr or sys.stderr\n\n @memoized_property\n def pantsd(self):\n return PantsDaemon.Factory.create(bootstrap_options=self._bootstrap_options)\n\n @contextmanager\n def _trapped_control_c(self, client):\n \"\"\"A contextmanager that overrides the SIGINT (control-c) handler and handles it remotely.\"\"\"\n def handle_control_c(signum, frame):\n client.send_control_c()\n\n existing_sigint_handler = signal.signal(signal.SIGINT, handle_control_c)\n signal.siginterrupt(signal.SIGINT, False) # Retry interrupted system calls.\n try:\n yield\n finally:\n signal.signal(signal.SIGINT, existing_sigint_handler)\n\n def _setup_logging(self):\n \"\"\"Sets up basic stdio logging for the thin client.\"\"\"\n log_level = logging.getLevelName(self._bootstrap_options.for_global_scope().level.upper())\n\n formatter = logging.Formatter('%(levelname)s] %(message)s')\n handler = logging.StreamHandler(sys.stdout)\n handler.setLevel(log_level)\n handler.setFormatter(formatter)\n\n root = logging.getLogger()\n root.setLevel(log_level)\n root.addHandler(handler)\n\n def _connect_and_execute(self, port):\n # Merge the nailgun TTY capability environment variables with the passed environment dict.\n ng_env = NailgunProtocol.isatty_to_env(self._stdin, self._stdout, self._stderr)\n modified_env = combined_dict(self._env, ng_env)\n\n assert isinstance(port, int), 'port {} is not an integer!'.format(port)\n\n # Instantiate a NailgunClient.\n client = NailgunClient(port=port,\n ins=self._stdin,\n 
out=self._stdout,\n err=self._stderr,\n exit_on_broken_pipe=True)\n\n with self._trapped_control_c(client):\n # Execute the command on the pailgun.\n result = client.execute(self.PANTS_COMMAND, *self._args, **modified_env)\n\n # Exit.\n self._exiter.exit(result)\n\n def run(self, args=None):\n self._setup_logging()\n port = self.pantsd.maybe_launch()\n\n logger.debug('connecting to pailgun on port {}'.format(port))\n try:\n self._connect_and_execute(port)\n except self.RECOVERABLE_EXCEPTIONS as e:\n raise self.Fallback(e)\n", "path": "src/python/pants/bin/remote_pants_runner.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import (absolute_import, division, generators, nested_scopes, print_function,\n unicode_literals, with_statement)\n\nimport logging\nimport signal\nimport sys\nfrom contextlib import contextmanager\n\nfrom pants.java.nailgun_client import NailgunClient\nfrom pants.java.nailgun_protocol import NailgunProtocol\nfrom pants.pantsd.pants_daemon import PantsDaemon\nfrom pants.util.collections import combined_dict\nfrom pants.util.memo import memoized_property\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass RemotePantsRunner(object):\n \"\"\"A thin client variant of PantsRunner.\"\"\"\n\n class Fallback(Exception):\n \"\"\"Raised when fallback to an alternate execution mode is requested.\"\"\"\n\n class PortNotFound(Exception):\n \"\"\"Raised when the pailgun port can't be found.\"\"\"\n\n PANTS_COMMAND = 'pants'\n RECOVERABLE_EXCEPTIONS = (PortNotFound, NailgunClient.NailgunConnectionError)\n\n def __init__(self, exiter, args, env, bootstrap_options, stdin=None, stdout=None, stderr=None):\n \"\"\"\n :param Exiter exiter: The Exiter instance to use for this run.\n :param list args: The arguments (e.g. sys.argv) for this run.\n :param dict env: The environment (e.g. os.environ) for this run.\n :param Options bootstrap_options: The Options bag containing the bootstrap options.\n :param file stdin: The stream representing stdin.\n :param file stdout: The stream representing stdout.\n :param file stderr: The stream representing stderr.\n \"\"\"\n self._exiter = exiter\n self._args = args\n self._env = env\n self._bootstrap_options = bootstrap_options\n self._stdin = stdin or sys.stdin\n self._stdout = stdout or sys.stdout\n self._stderr = stderr or sys.stderr\n\n @memoized_property\n def pantsd(self):\n return PantsDaemon.Factory.create(bootstrap_options=self._bootstrap_options)\n\n @contextmanager\n def _trapped_signals(self, client):\n \"\"\"A contextmanager that overrides the SIGINT (control-c) and SIGQUIT (control-\\) handlers\n and handles them remotely.\"\"\"\n def handle_control_c(signum, frame):\n client.send_control_c()\n\n existing_sigint_handler = signal.signal(signal.SIGINT, handle_control_c)\n # N.B. 
SIGQUIT will abruptly kill the pantsd-runner, which will shut down the other end\n # of the Pailgun connection - so we send a gentler SIGINT here instead.\n existing_sigquit_handler = signal.signal(signal.SIGQUIT, handle_control_c)\n\n # Retry interrupted system calls.\n signal.siginterrupt(signal.SIGINT, False)\n signal.siginterrupt(signal.SIGQUIT, False)\n try:\n yield\n finally:\n signal.signal(signal.SIGINT, existing_sigint_handler)\n signal.signal(signal.SIGQUIT, existing_sigquit_handler)\n\n def _setup_logging(self):\n \"\"\"Sets up basic stdio logging for the thin client.\"\"\"\n log_level = logging.getLevelName(self._bootstrap_options.for_global_scope().level.upper())\n\n formatter = logging.Formatter('%(levelname)s] %(message)s')\n handler = logging.StreamHandler(sys.stdout)\n handler.setLevel(log_level)\n handler.setFormatter(formatter)\n\n root = logging.getLogger()\n root.setLevel(log_level)\n root.addHandler(handler)\n\n def _connect_and_execute(self, port):\n # Merge the nailgun TTY capability environment variables with the passed environment dict.\n ng_env = NailgunProtocol.isatty_to_env(self._stdin, self._stdout, self._stderr)\n modified_env = combined_dict(self._env, ng_env)\n\n assert isinstance(port, int), 'port {} is not an integer!'.format(port)\n\n # Instantiate a NailgunClient.\n client = NailgunClient(port=port,\n ins=self._stdin,\n out=self._stdout,\n err=self._stderr,\n exit_on_broken_pipe=True)\n\n with self._trapped_signals(client):\n # Execute the command on the pailgun.\n result = client.execute(self.PANTS_COMMAND, *self._args, **modified_env)\n\n # Exit.\n self._exiter.exit(result)\n\n def run(self, args=None):\n self._setup_logging()\n port = self.pantsd.maybe_launch()\n\n logger.debug('connecting to pailgun on port {}'.format(port))\n try:\n self._connect_and_execute(port)\n except self.RECOVERABLE_EXCEPTIONS as e:\n raise self.Fallback(e)\n", "path": "src/python/pants/bin/remote_pants_runner.py"}]}
| 2,138 | 443 |
gh_patches_debug_16629 | rasdani/github-patches | git_diff | modin-project__modin-3184 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
modin/pandas/test/test_io.py::TestCsv::test_read_csv_compression tests fail with numpy>1.20.3
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:
Ubuntu 20.04.1 LTS
- **Modin version** (`modin.__version__`):
0.10.0+6.gf9c99c38
- **Python version**:
Python 3.8.6
- **Code we can use to reproduce**:
Update numpy to the latest version (currently 1.21.0) and run the modin/pandas/test/test_io.py::TestCsv::test_read_csv_compression tests. When they are run together, the tests with the python engine fail with an exception.
```
Traceback (most recent call last):
File "/localdisk/gashiman/modin/modin/pandas/test/utils.py", line 668, in execute_callable
pd_result = fn(pandas_df, **pd_kwargs)
File "/localdisk/gashiman/modin/modin/pandas/test/utils.py", line 756, in applyier
result = getattr(module, fn_name)(*args, **kwargs)
File "/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/pandas/io/parsers.py", line 610, in read_csv
return _read(filepath_or_buffer, kwds)
File "/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/pandas/io/parsers.py", line 468, in _read
return parser.read(nrows)
File "/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/pandas/io/parsers.py", line 1057, in read
index, columns, col_dict = self._engine.read(nrows)
File "/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/pandas/io/parsers.py", line 2502, in read
data = self._convert_data(data)
File "/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/pandas/io/parsers.py", line 2582, in _convert_data
return self._convert_to_ndarrays(
File "/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/pandas/io/parsers.py", line 1704, in _convert_to_ndarrays
cvals, na_count = self._infer_types(
File "/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/pandas/io/parsers.py", line 1765, in _infer_types
na_count = isna(result).sum()
File "/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/numpy/core/_methods.py", line 48, in _sum
return umr_sum(a, axis, dtype, out, keepdims, initial, where)
TypeError: int() argument must be a string, a bytes-like object or a number, not '_NoValueType'
```
<!--
You can obtain the Modin version with
python -c "import modin; print(modin.__version__)"
-->
### Describe the problem
<!-- Describe the problem clearly here. -->
### Source code / logs
<!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2 import versioneer
3 import os
4 from setuptools.dist import Distribution
5
6 try:
7 from wheel.bdist_wheel import bdist_wheel
8
9 HAS_WHEEL = True
10 except ImportError:
11 HAS_WHEEL = False
12
13 with open("README.md", "r", encoding="utf-8") as fh:
14 long_description = fh.read()
15
16 if HAS_WHEEL:
17
18 class ModinWheel(bdist_wheel):
19 def finalize_options(self):
20 bdist_wheel.finalize_options(self)
21 self.root_is_pure = False
22
23 def get_tag(self):
24 _, _, plat = bdist_wheel.get_tag(self)
25 py = "py3"
26 abi = "none"
27 return py, abi, plat
28
29
30 class ModinDistribution(Distribution):
31 def __init__(self, *attrs):
32 Distribution.__init__(self, *attrs)
33 if HAS_WHEEL:
34 self.cmdclass["bdist_wheel"] = ModinWheel
35
36 def is_pure(self):
37 return False
38
39
40 dask_deps = ["dask>=2.12.0,<=2.19.0", "distributed>=2.12.0,<=2.19.0"]
41 ray_deps = ["ray>=1.4.0", "pyarrow>=1.0"]
42 remote_deps = ["rpyc==4.1.5", "cloudpickle==1.4.1", "boto3==1.4.8"]
43 spreadsheet_deps = ["modin-spreadsheet>=0.1.0"]
44 sql_deps = ["dfsql>=0.2.6"]
45 all_deps = dask_deps + ray_deps + remote_deps + spreadsheet_deps
46
47 # dfsql does not support Windows yet
48 if os.name != 'nt':
49 all_deps += sql_deps
50
51 setup(
52 name="modin",
53 version=versioneer.get_version(),
54 cmdclass=versioneer.get_cmdclass(),
55 distclass=ModinDistribution,
56 description="Modin: Make your pandas code run faster by changing one line of code.",
57 packages=find_packages(),
58 include_package_data=True,
59 license="Apache 2",
60 url="https://github.com/modin-project/modin",
61 long_description=long_description,
62 long_description_content_type="text/markdown",
63 # Restrition for numpy upper version is because of #3182
64 install_requires=["pandas==1.2.5", "packaging", "numpy>=1.16.5,<=1.20.3"],
65 extras_require={
66 # can be installed by pip install modin[dask]
67 "dask": dask_deps,
68 "ray": ray_deps,
69 "remote": remote_deps,
70 "spreadsheet": spreadsheet_deps,
71 "sql": sql_deps,
72 "all": all_deps,
73 },
74 python_requires=">=3.7.1",
75 )
76
```
Path: `modin/experimental/pandas/numpy_wrap.py`
Content:
```
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 """
15 The module replaces real NumPy from future "import numpy" statements.
16
17 Replacement occurs with a wrapping module that serves attributes from either
18 local or "remote" NumPy depending on active execution context.
19 """
20
21 import sys
22
23 _CAUGHT_NUMPY = "numpy" not in sys.modules
24 try:
25 import numpy as real_numpy
26 except ImportError:
27 pass
28 else:
29 import types
30 import copyreg
31 from modin.config import Engine
32 from modin.data_management.factories import REMOTE_ENGINES
33 import modin
34 import pandas
35 import os
36
37 _EXCLUDE_MODULES = [modin, pandas]
38 try:
39 import rpyc
40 except ImportError:
41 pass
42 else:
43 _EXCLUDE_MODULES.append(rpyc)
44 _EXCLUDE_PATHS = tuple(
45 os.path.dirname(mod.__file__) + os.sep for mod in _EXCLUDE_MODULES
46 )
47
48 class InterceptedNumpy(types.ModuleType):
49 """
50 The class is intended to replace the "numpy" module as seen by outer world.
51
52 Replacement occurs by getting attributes from either local NumPy or remote one when remote context
53 is activated.
54 It also registers helpers for pickling local NumPy objects in remote context
55 and vice versa.
56
57 Attributes
58 ----------
59 __own_attrs__ : set
60 Attributes that are defined in this class so access to them must never be proxied.
61 __current_numpy : ModuleType
62 The module to which getting NumPy attributes redirects. For example,
63 NumPy on remote machine.
64 __prev_numpy : ModuleType
65 The previous module that was accessed to get the NumPy attributes.
66 __has_to_warn : bool
67 Determines the situation when it is necessary to give a warning.
68 __reducers : dict
69 Custom routines that Pickle calls to serialize an instance of a class.
70 """
71
72 __own_attrs__ = set(["__own_attrs__"])
73
74 __spec__ = real_numpy.__spec__
75 __current_numpy = real_numpy
76 __prev_numpy = real_numpy
77 __has_to_warn = not _CAUGHT_NUMPY
78 __reducers = {}
79
80 def __init__(self):
81 self.__own_attrs__ = set(type(self).__dict__.keys())
82 Engine.subscribe(self.__update_engine)
83
84 def __swap_numpy(self, other_numpy=None):
85 self.__current_numpy, self.__prev_numpy = (
86 other_numpy or self.__prev_numpy,
87 self.__current_numpy,
88 )
89 if self.__current_numpy is not real_numpy and self.__has_to_warn:
90 import warnings
91
92 warnings.warn(
93 "Was not able to intercept all numpy imports. "
94 "To intercept all of these please do 'import modin.experimental.pandas' as early as possible"
95 )
96 self.__has_to_warn = False
97
98 def __update_engine(self, _):
99 if Engine.get() in REMOTE_ENGINES:
100 from modin.experimental.cloud import get_connection
101
102 self.__swap_numpy(get_connection().modules["numpy"])
103 else:
104 self.__swap_numpy()
105
106 def __make_reducer(self, name):
107 """
108 Prepare a "reducer" routine - the one Pickle calls to serialize an instance of a class.
109
110 Note that we need this to allow pickling a local numpy object in "remote numpy" context,
111 because without a custom reduce callback pickle complains that what it reduced has a
112 different "numpy" class than original.
113 """
114 try:
115 reducer = self.__reducers[name]
116 except KeyError:
117
118 def reducer(
119 obj,
120 real_obj=getattr(real_numpy, name),
121 real_obj_reducer=getattr(real_numpy, name).__reduce__,
122 ):
123 # See details on __reduce__ protocol in Python docs:
124 # https://docs.python.org/3.6/library/pickle.html#object.__reduce__
125 reduced = real_obj_reducer(obj)
126 if not isinstance(reduced, tuple):
127 return reduced
128 assert isinstance(
129 reduced[0],
130 (type, types.FunctionType, types.BuiltinFunctionType),
131 ), "Do not know how to support this reconstructor"
132
133 modobj = self.__current_numpy
134 for submod in reduced[0].__module__.split(".")[1:]:
135 modobj = getattr(modobj, submod)
136 reconstruct = getattr(modobj, reduced[0].__name__)
137 # TODO: see if replacing all "real numpy" things in reduced[1:] is needed
138 return (reconstruct,) + reduced[1:]
139
140 self.__reducers[name] = reducer
141 return reducer
142
143 def __get_numpy(self):
144 frame = sys._getframe()
145 try:
146 # get the path to module where caller of caller is defined;
147 # this function is expected to be called from one of
148 # __getattr__, __setattr__ or __delattr__, so this
149 # "caller_file" should point to the file that wants a
150 # numpy attribute; we want to always give local numpy
151 # to modin, numpy and rpyc as it's all internal for us
152 caller_file = frame.f_back.f_back.f_code.co_filename
153 except AttributeError:
154 return self.__current_numpy
155 finally:
156 del frame
157 if any(caller_file.startswith(mod_path) for mod_path in _EXCLUDE_PATHS):
158 return real_numpy
159 return self.__current_numpy
160
161 def __getattr__(self, name): # noqa: D105
162 # note that __getattr__ is not symmetric to __setattr__, as it is
163 # only called when an attribute is not found by usual lookups
164 obj = getattr(self.__get_numpy(), name)
165 if isinstance(obj, type):
166 # register a special callback for pickling
167 copyreg.pickle(obj, self.__make_reducer(name))
168 return obj
169
170 def __setattr__(self, name, value): # noqa: D105
171 # set our own attributes on the self instance, but pass through
172 # setting other attributes to numpy being wrapped
173 if name in self.__own_attrs__:
174 super().__setattr__(name, value)
175 else:
176 setattr(self.__get_numpy(), name, value)
177
178 def __delattr__(self, name): # noqa: D105
179 # do not allow to delete our own attributes, pass through
180 # deletion of others to numpy being wrapped
181 if name not in self.__own_attrs__:
182 delattr(self.__get_numpy(), name)
183
184 sys.modules["numpy"] = InterceptedNumpy()
185
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/modin/experimental/pandas/numpy_wrap.py b/modin/experimental/pandas/numpy_wrap.py
--- a/modin/experimental/pandas/numpy_wrap.py
+++ b/modin/experimental/pandas/numpy_wrap.py
@@ -34,7 +34,7 @@
import pandas
import os
- _EXCLUDE_MODULES = [modin, pandas]
+ _EXCLUDE_MODULES = [modin, pandas, real_numpy]
try:
import rpyc
except ImportError:
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -60,8 +60,7 @@
url="https://github.com/modin-project/modin",
long_description=long_description,
long_description_content_type="text/markdown",
- # Restrition for numpy upper version is because of #3182
- install_requires=["pandas==1.2.5", "packaging", "numpy>=1.16.5,<=1.20.3"],
+ install_requires=["pandas==1.2.5", "packaging", "numpy>=1.16.5"],
extras_require={
# can be installed by pip install modin[dask]
"dask": dask_deps,
|
{"golden_diff": "diff --git a/modin/experimental/pandas/numpy_wrap.py b/modin/experimental/pandas/numpy_wrap.py\n--- a/modin/experimental/pandas/numpy_wrap.py\n+++ b/modin/experimental/pandas/numpy_wrap.py\n@@ -34,7 +34,7 @@\n import pandas\n import os\n \n- _EXCLUDE_MODULES = [modin, pandas]\n+ _EXCLUDE_MODULES = [modin, pandas, real_numpy]\n try:\n import rpyc\n except ImportError:\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -60,8 +60,7 @@\n url=\"https://github.com/modin-project/modin\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n- # Restrition for numpy upper version is because of #3182\n- install_requires=[\"pandas==1.2.5\", \"packaging\", \"numpy>=1.16.5,<=1.20.3\"],\n+ install_requires=[\"pandas==1.2.5\", \"packaging\", \"numpy>=1.16.5\"],\n extras_require={\n # can be installed by pip install modin[dask]\n \"dask\": dask_deps,\n", "issue": "modin/pandas/test/test_io.py::TestCsv::test_read_csv_compression tests fail with numpy>1.20.3\n### System information\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:\r\n\r\nUbuntu 20.04.1 LTS\r\n\r\n- **Modin version** (`modin.__version__`):\r\n\r\n0.10.0+6.gf9c99c38\r\n\r\n- **Python version**:\r\n\r\nPython 3.8.6\r\n\r\n- **Code we can use to reproduce**:\r\n\r\nUpdate numpy to the latest version (currently 1.21.0) and run modin/pandas/test/test_io.py::TestCsv::test_read_csv_compression tests. When they are ran together, tests with python engine fail with exception.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/localdisk/gashiman/modin/modin/pandas/test/utils.py\", line 668, in execute_callable\r\n pd_result = fn(pandas_df, **pd_kwargs)\r\n File \"/localdisk/gashiman/modin/modin/pandas/test/utils.py\", line 756, in applyier\r\n result = getattr(module, fn_name)(*args, **kwargs)\r\n File \"/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/pandas/io/parsers.py\", line 610, in read_csv\r\n return _read(filepath_or_buffer, kwds)\r\n File \"/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/pandas/io/parsers.py\", line 468, in _read\r\n return parser.read(nrows)\r\n File \"/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/pandas/io/parsers.py\", line 1057, in read\r\n index, columns, col_dict = self._engine.read(nrows)\r\n File \"/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/pandas/io/parsers.py\", line 2502, in read\r\n data = self._convert_data(data)\r\n File \"/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/pandas/io/parsers.py\", line 2582, in _convert_data\r\n return self._convert_to_ndarrays(\r\n File \"/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/pandas/io/parsers.py\", line 1704, in _convert_to_ndarrays\r\n cvals, na_count = self._infer_types(\r\n File \"/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/pandas/io/parsers.py\", line 1765, in _infer_types\r\n na_count = isna(result).sum()\r\n File \"/localdisk/gashiman/miniconda3/envs/modin/lib/python3.8/site-packages/numpy/core/_methods.py\", line 48, in _sum\r\n return umr_sum(a, axis, dtype, out, keepdims, initial, where)\r\nTypeError: int() argument must be a string, a bytes-like object or a number, not '_NoValueType'\r\n```\r\n\r\n<!--\r\nYou can obtain the Modin version with\r\n\r\npython -c \"import modin; print(modin.__version__)\"\r\n-->\r\n\r\n### Describe the problem\r\n<!-- Describe the problem clearly here. 
-->\r\n\r\n### Source code / logs\r\n<!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->\r\n\n", "before_files": [{"content": "from setuptools import setup, find_packages\nimport versioneer\nimport os\nfrom setuptools.dist import Distribution\n\ntry:\n from wheel.bdist_wheel import bdist_wheel\n\n HAS_WHEEL = True\nexcept ImportError:\n HAS_WHEEL = False\n\nwith open(\"README.md\", \"r\", encoding=\"utf-8\") as fh:\n long_description = fh.read()\n\nif HAS_WHEEL:\n\n class ModinWheel(bdist_wheel):\n def finalize_options(self):\n bdist_wheel.finalize_options(self)\n self.root_is_pure = False\n\n def get_tag(self):\n _, _, plat = bdist_wheel.get_tag(self)\n py = \"py3\"\n abi = \"none\"\n return py, abi, plat\n\n\nclass ModinDistribution(Distribution):\n def __init__(self, *attrs):\n Distribution.__init__(self, *attrs)\n if HAS_WHEEL:\n self.cmdclass[\"bdist_wheel\"] = ModinWheel\n\n def is_pure(self):\n return False\n\n\ndask_deps = [\"dask>=2.12.0,<=2.19.0\", \"distributed>=2.12.0,<=2.19.0\"]\nray_deps = [\"ray>=1.4.0\", \"pyarrow>=1.0\"]\nremote_deps = [\"rpyc==4.1.5\", \"cloudpickle==1.4.1\", \"boto3==1.4.8\"]\nspreadsheet_deps = [\"modin-spreadsheet>=0.1.0\"]\nsql_deps = [\"dfsql>=0.2.6\"]\nall_deps = dask_deps + ray_deps + remote_deps + spreadsheet_deps\n\n# dfsql does not support Windows yet\nif os.name != 'nt':\n all_deps += sql_deps\n\nsetup(\n name=\"modin\",\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n distclass=ModinDistribution,\n description=\"Modin: Make your pandas code run faster by changing one line of code.\",\n packages=find_packages(),\n include_package_data=True,\n license=\"Apache 2\",\n url=\"https://github.com/modin-project/modin\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n # Restrition for numpy upper version is because of #3182\n install_requires=[\"pandas==1.2.5\", \"packaging\", \"numpy>=1.16.5,<=1.20.3\"],\n extras_require={\n # can be installed by pip install modin[dask]\n \"dask\": dask_deps,\n \"ray\": ray_deps,\n \"remote\": remote_deps,\n \"spreadsheet\": spreadsheet_deps,\n \"sql\": sql_deps,\n \"all\": all_deps,\n },\n python_requires=\">=3.7.1\",\n)\n", "path": "setup.py"}, {"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific language\n# governing permissions and limitations under the License.\n\n\"\"\"\nThe module replaces real NumPy from future \"import numpy\" statements.\n\nReplacement occurs with a wrapping module that serves attributes from either\nlocal or \"remote\" NumPy depending on active execution context.\n\"\"\"\n\nimport sys\n\n_CAUGHT_NUMPY = \"numpy\" not in sys.modules\ntry:\n import numpy as real_numpy\nexcept ImportError:\n pass\nelse:\n import types\n import copyreg\n from modin.config import Engine\n from modin.data_management.factories import REMOTE_ENGINES\n import modin\n import pandas\n import os\n\n _EXCLUDE_MODULES = [modin, pandas]\n try:\n import rpyc\n except ImportError:\n pass\n else:\n _EXCLUDE_MODULES.append(rpyc)\n _EXCLUDE_PATHS = tuple(\n os.path.dirname(mod.__file__) + os.sep for mod in _EXCLUDE_MODULES\n )\n\n class InterceptedNumpy(types.ModuleType):\n \"\"\"\n The class is intended to replace the \"numpy\" module as seen by outer world.\n\n Replacement occurs by getting attributes from either local NumPy or remote one when remote context\n is activated.\n It also registers helpers for pickling local NumPy objects in remote context\n and vice versa.\n\n Attributes\n ----------\n __own_attrs__ : set\n Attributes that are defined in this class so access to them must never be proxied.\n __current_numpy : ModuleType\n The module to which getting NumPy attributes redirects. For example,\n NumPy on remote machine.\n __prev_numpy : ModuleType\n The previous module that was accessed to get the NumPy attributes.\n __has_to_warn : bool\n Determines the situation when it is necessary to give a warning.\n __reducers : dict\n Custom routines that Pickle calls to serialize an instance of a class.\n \"\"\"\n\n __own_attrs__ = set([\"__own_attrs__\"])\n\n __spec__ = real_numpy.__spec__\n __current_numpy = real_numpy\n __prev_numpy = real_numpy\n __has_to_warn = not _CAUGHT_NUMPY\n __reducers = {}\n\n def __init__(self):\n self.__own_attrs__ = set(type(self).__dict__.keys())\n Engine.subscribe(self.__update_engine)\n\n def __swap_numpy(self, other_numpy=None):\n self.__current_numpy, self.__prev_numpy = (\n other_numpy or self.__prev_numpy,\n self.__current_numpy,\n )\n if self.__current_numpy is not real_numpy and self.__has_to_warn:\n import warnings\n\n warnings.warn(\n \"Was not able to intercept all numpy imports. 
\"\n \"To intercept all of these please do 'import modin.experimental.pandas' as early as possible\"\n )\n self.__has_to_warn = False\n\n def __update_engine(self, _):\n if Engine.get() in REMOTE_ENGINES:\n from modin.experimental.cloud import get_connection\n\n self.__swap_numpy(get_connection().modules[\"numpy\"])\n else:\n self.__swap_numpy()\n\n def __make_reducer(self, name):\n \"\"\"\n Prepare a \"reducer\" routine - the one Pickle calls to serialize an instance of a class.\n\n Note that we need this to allow pickling a local numpy object in \"remote numpy\" context,\n because without a custom reduce callback pickle complains that what it reduced has a\n different \"numpy\" class than original.\n \"\"\"\n try:\n reducer = self.__reducers[name]\n except KeyError:\n\n def reducer(\n obj,\n real_obj=getattr(real_numpy, name),\n real_obj_reducer=getattr(real_numpy, name).__reduce__,\n ):\n # See details on __reduce__ protocol in Python docs:\n # https://docs.python.org/3.6/library/pickle.html#object.__reduce__\n reduced = real_obj_reducer(obj)\n if not isinstance(reduced, tuple):\n return reduced\n assert isinstance(\n reduced[0],\n (type, types.FunctionType, types.BuiltinFunctionType),\n ), \"Do not know how to support this reconstructor\"\n\n modobj = self.__current_numpy\n for submod in reduced[0].__module__.split(\".\")[1:]:\n modobj = getattr(modobj, submod)\n reconstruct = getattr(modobj, reduced[0].__name__)\n # TODO: see if replacing all \"real numpy\" things in reduced[1:] is needed\n return (reconstruct,) + reduced[1:]\n\n self.__reducers[name] = reducer\n return reducer\n\n def __get_numpy(self):\n frame = sys._getframe()\n try:\n # get the path to module where caller of caller is defined;\n # this function is expected to be called from one of\n # __getattr__, __setattr__ or __delattr__, so this\n # \"caller_file\" should point to the file that wants a\n # numpy attribute; we want to always give local numpy\n # to modin, numpy and rpyc as it's all internal for us\n caller_file = frame.f_back.f_back.f_code.co_filename\n except AttributeError:\n return self.__current_numpy\n finally:\n del frame\n if any(caller_file.startswith(mod_path) for mod_path in _EXCLUDE_PATHS):\n return real_numpy\n return self.__current_numpy\n\n def __getattr__(self, name): # noqa: D105\n # note that __getattr__ is not symmetric to __setattr__, as it is\n # only called when an attribute is not found by usual lookups\n obj = getattr(self.__get_numpy(), name)\n if isinstance(obj, type):\n # register a special callback for pickling\n copyreg.pickle(obj, self.__make_reducer(name))\n return obj\n\n def __setattr__(self, name, value): # noqa: D105\n # set our own attributes on the self instance, but pass through\n # setting other attributes to numpy being wrapped\n if name in self.__own_attrs__:\n super().__setattr__(name, value)\n else:\n setattr(self.__get_numpy(), name, value)\n\n def __delattr__(self, name): # noqa: D105\n # do not allow to delete our own attributes, pass through\n # deletion of others to numpy being wrapped\n if name not in self.__own_attrs__:\n delattr(self.__get_numpy(), name)\n\n sys.modules[\"numpy\"] = InterceptedNumpy()\n", "path": "modin/experimental/pandas/numpy_wrap.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\nimport versioneer\nimport os\nfrom setuptools.dist import Distribution\n\ntry:\n from wheel.bdist_wheel import bdist_wheel\n\n HAS_WHEEL = True\nexcept ImportError:\n HAS_WHEEL = False\n\nwith open(\"README.md\", \"r\", 
encoding=\"utf-8\") as fh:\n long_description = fh.read()\n\nif HAS_WHEEL:\n\n class ModinWheel(bdist_wheel):\n def finalize_options(self):\n bdist_wheel.finalize_options(self)\n self.root_is_pure = False\n\n def get_tag(self):\n _, _, plat = bdist_wheel.get_tag(self)\n py = \"py3\"\n abi = \"none\"\n return py, abi, plat\n\n\nclass ModinDistribution(Distribution):\n def __init__(self, *attrs):\n Distribution.__init__(self, *attrs)\n if HAS_WHEEL:\n self.cmdclass[\"bdist_wheel\"] = ModinWheel\n\n def is_pure(self):\n return False\n\n\ndask_deps = [\"dask>=2.12.0,<=2.19.0\", \"distributed>=2.12.0,<=2.19.0\"]\nray_deps = [\"ray>=1.4.0\", \"pyarrow>=1.0\"]\nremote_deps = [\"rpyc==4.1.5\", \"cloudpickle==1.4.1\", \"boto3==1.4.8\"]\nspreadsheet_deps = [\"modin-spreadsheet>=0.1.0\"]\nsql_deps = [\"dfsql>=0.2.6\"]\nall_deps = dask_deps + ray_deps + remote_deps + spreadsheet_deps\n\n# dfsql does not support Windows yet\nif os.name != 'nt':\n all_deps += sql_deps\n\nsetup(\n name=\"modin\",\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n distclass=ModinDistribution,\n description=\"Modin: Make your pandas code run faster by changing one line of code.\",\n packages=find_packages(),\n include_package_data=True,\n license=\"Apache 2\",\n url=\"https://github.com/modin-project/modin\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n install_requires=[\"pandas==1.2.5\", \"packaging\", \"numpy>=1.16.5\"],\n extras_require={\n # can be installed by pip install modin[dask]\n \"dask\": dask_deps,\n \"ray\": ray_deps,\n \"remote\": remote_deps,\n \"spreadsheet\": spreadsheet_deps,\n \"sql\": sql_deps,\n \"all\": all_deps,\n },\n python_requires=\">=3.7.1\",\n)\n", "path": "setup.py"}, {"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific language\n# governing permissions and limitations under the License.\n\n\"\"\"\nThe module replaces real NumPy from future \"import numpy\" statements.\n\nReplacement occurs with a wrapping module that serves attributes from either\nlocal or \"remote\" NumPy depending on active execution context.\n\"\"\"\n\nimport sys\n\n_CAUGHT_NUMPY = \"numpy\" not in sys.modules\ntry:\n import numpy as real_numpy\nexcept ImportError:\n pass\nelse:\n import types\n import copyreg\n from modin.config import Engine\n from modin.data_management.factories import REMOTE_ENGINES\n import modin\n import pandas\n import os\n\n _EXCLUDE_MODULES = [modin, pandas, real_numpy]\n try:\n import rpyc\n except ImportError:\n pass\n else:\n _EXCLUDE_MODULES.append(rpyc)\n _EXCLUDE_PATHS = tuple(\n os.path.dirname(mod.__file__) + os.sep for mod in _EXCLUDE_MODULES\n )\n\n class InterceptedNumpy(types.ModuleType):\n \"\"\"\n The class is intended to replace the \"numpy\" module as seen by outer world.\n\n Replacement occurs by getting attributes from either local NumPy or remote one when remote context\n is activated.\n It also registers helpers for pickling local NumPy objects in remote context\n and vice versa.\n\n Attributes\n ----------\n __own_attrs__ : set\n Attributes that are defined in this class so access to them must never be proxied.\n __current_numpy : ModuleType\n The module to which getting NumPy attributes redirects. For example,\n NumPy on remote machine.\n __prev_numpy : ModuleType\n The previous module that was accessed to get the NumPy attributes.\n __has_to_warn : bool\n Determines the situation when it is necessary to give a warning.\n __reducers : dict\n Custom routines that Pickle calls to serialize an instance of a class.\n \"\"\"\n\n __own_attrs__ = set([\"__own_attrs__\"])\n\n __spec__ = real_numpy.__spec__\n __current_numpy = real_numpy\n __prev_numpy = real_numpy\n __has_to_warn = not _CAUGHT_NUMPY\n __reducers = {}\n\n def __init__(self):\n self.__own_attrs__ = set(type(self).__dict__.keys())\n Engine.subscribe(self.__update_engine)\n\n def __swap_numpy(self, other_numpy=None):\n self.__current_numpy, self.__prev_numpy = (\n other_numpy or self.__prev_numpy,\n self.__current_numpy,\n )\n if self.__current_numpy is not real_numpy and self.__has_to_warn:\n import warnings\n\n warnings.warn(\n \"Was not able to intercept all numpy imports. 
\"\n \"To intercept all of these please do 'import modin.experimental.pandas' as early as possible\"\n )\n self.__has_to_warn = False\n\n def __update_engine(self, _):\n if Engine.get() in REMOTE_ENGINES:\n from modin.experimental.cloud import get_connection\n\n self.__swap_numpy(get_connection().modules[\"numpy\"])\n else:\n self.__swap_numpy()\n\n def __make_reducer(self, name):\n \"\"\"\n Prepare a \"reducer\" routine - the one Pickle calls to serialize an instance of a class.\n\n Note that we need this to allow pickling a local numpy object in \"remote numpy\" context,\n because without a custom reduce callback pickle complains that what it reduced has a\n different \"numpy\" class than original.\n \"\"\"\n try:\n reducer = self.__reducers[name]\n except KeyError:\n\n def reducer(\n obj,\n real_obj=getattr(real_numpy, name),\n real_obj_reducer=getattr(real_numpy, name).__reduce__,\n ):\n # See details on __reduce__ protocol in Python docs:\n # https://docs.python.org/3.6/library/pickle.html#object.__reduce__\n reduced = real_obj_reducer(obj)\n if not isinstance(reduced, tuple):\n return reduced\n assert isinstance(\n reduced[0],\n (type, types.FunctionType, types.BuiltinFunctionType),\n ), \"Do not know how to support this reconstructor\"\n\n modobj = self.__current_numpy\n for submod in reduced[0].__module__.split(\".\")[1:]:\n modobj = getattr(modobj, submod)\n reconstruct = getattr(modobj, reduced[0].__name__)\n # TODO: see if replacing all \"real numpy\" things in reduced[1:] is needed\n return (reconstruct,) + reduced[1:]\n\n self.__reducers[name] = reducer\n return reducer\n\n def __get_numpy(self):\n frame = sys._getframe()\n try:\n # get the path to module where caller of caller is defined;\n # this function is expected to be called from one of\n # __getattr__, __setattr__ or __delattr__, so this\n # \"caller_file\" should point to the file that wants a\n # numpy attribute; we want to always give local numpy\n # to modin, numpy and rpyc as it's all internal for us\n caller_file = frame.f_back.f_back.f_code.co_filename\n except AttributeError:\n return self.__current_numpy\n finally:\n del frame\n if any(caller_file.startswith(mod_path) for mod_path in _EXCLUDE_PATHS):\n return real_numpy\n return self.__current_numpy\n\n def __getattr__(self, name): # noqa: D105\n # note that __getattr__ is not symmetric to __setattr__, as it is\n # only called when an attribute is not found by usual lookups\n obj = getattr(self.__get_numpy(), name)\n if isinstance(obj, type):\n # register a special callback for pickling\n copyreg.pickle(obj, self.__make_reducer(name))\n return obj\n\n def __setattr__(self, name, value): # noqa: D105\n # set our own attributes on the self instance, but pass through\n # setting other attributes to numpy being wrapped\n if name in self.__own_attrs__:\n super().__setattr__(name, value)\n else:\n setattr(self.__get_numpy(), name, value)\n\n def __delattr__(self, name): # noqa: D105\n # do not allow to delete our own attributes, pass through\n # deletion of others to numpy being wrapped\n if name not in self.__own_attrs__:\n delattr(self.__get_numpy(), name)\n\n sys.modules[\"numpy\"] = InterceptedNumpy()\n", "path": "modin/experimental/pandas/numpy_wrap.py"}]}
| 3,863 | 284 |
gh_patches_debug_66030
|
rasdani/github-patches
|
git_diff
|
pypa__pip-2810
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pip install --allow-all-external in requirements.txt file fails
**With pip 7**
requirements.txt file:
``` txt
--allow-all-external
mysql-connector-python
```
On comandline
``` bash
pip install -r requirements.txt
pip: error: no such option: --allow-all-external
```
**With pip 6.1.1**
Collecting mysql-connector-python (from -r requirements.txt (line 2))
Downloading http://cdn.mysql.com/Downloads/Connector-Python/mysql-connector-python-2.0.3.zip (275kB)
100% |████████████████████████████████| 278kB 3.0MB/s
Installing collected packages: mysql-connector-python
Running setup.py install for mysql-connector-python
Successfully installed mysql-connector-python-2.0.3
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pip/req/req_file.py`
Content:
```
1 """
2 Requirements file parsing
3 """
4
5 from __future__ import absolute_import
6
7 import os
8 import re
9 import shlex
10 import optparse
11
12 from pip._vendor.six.moves.urllib import parse as urllib_parse
13 from pip._vendor.six.moves import filterfalse
14
15 import pip
16 from pip.download import get_file_content
17 from pip.req.req_install import InstallRequirement
18 from pip.exceptions import (RequirementsFileParseError)
19 from pip.utils import normalize_name
20 from pip import cmdoptions
21
22 __all__ = ['parse_requirements']
23
24 SCHEME_RE = re.compile(r'^(http|https|file):', re.I)
25 COMMENT_RE = re.compile(r'(^|\s)+#.*$')
26
27 SUPPORTED_OPTIONS = [
28 cmdoptions.editable,
29 cmdoptions.requirements,
30 cmdoptions.no_index,
31 cmdoptions.index_url,
32 cmdoptions.find_links,
33 cmdoptions.extra_index_url,
34 cmdoptions.allow_external,
35 cmdoptions.no_allow_external,
36 cmdoptions.allow_unsafe,
37 cmdoptions.no_allow_unsafe,
38 cmdoptions.use_wheel,
39 cmdoptions.no_use_wheel,
40 cmdoptions.always_unzip,
41 cmdoptions.no_binary,
42 cmdoptions.only_binary,
43 ]
44
45 # options to be passed to requirements
46 SUPPORTED_OPTIONS_REQ = [
47 cmdoptions.install_options,
48 cmdoptions.global_options
49 ]
50
51 # the 'dest' string values
52 SUPPORTED_OPTIONS_REQ_DEST = [o().dest for o in SUPPORTED_OPTIONS_REQ]
53
54
55 def parse_requirements(filename, finder=None, comes_from=None, options=None,
56 session=None, wheel_cache=None):
57 """
58 Parse a requirements file and yield InstallRequirement instances.
59
60 :param filename: Path or url of requirements file.
61 :param finder: Instance of pip.index.PackageFinder.
62 :param comes_from: Origin description of requirements.
63 :param options: Global options.
64 :param session: Instance of pip.download.PipSession.
65 :param wheel_cache: Instance of pip.wheel.WheelCache
66 """
67 if session is None:
68 raise TypeError(
69 "parse_requirements() missing 1 required keyword argument: "
70 "'session'"
71 )
72
73 _, content = get_file_content(
74 filename, comes_from=comes_from, session=session
75 )
76
77 lines = content.splitlines()
78 lines = ignore_comments(lines)
79 lines = join_lines(lines)
80 lines = skip_regex(lines, options)
81
82 for line_number, line in enumerate(lines, 1):
83 req_iter = process_line(line, filename, line_number, finder,
84 comes_from, options, session, wheel_cache)
85 for req in req_iter:
86 yield req
87
88
89 def process_line(line, filename, line_number, finder=None, comes_from=None,
90 options=None, session=None, wheel_cache=None):
91 """Process a single requirements line; This can result in creating/yielding
92 requirements, or updating the finder.
93
94 For lines that contain requirements, the only options that have an effect
95 are from SUPPORTED_OPTIONS_REQ, and they are scoped to the
96 requirement. Other options from SUPPORTED_OPTIONS may be present, but are
97 ignored.
98
99 For lines that do not contain requirements, the only options that have an
100 effect are from SUPPORTED_OPTIONS. Options from SUPPORTED_OPTIONS_REQ may
101 be present, but are ignored. These lines may contain multiple options
102 (although our docs imply only one is supported), and all our parsed and
103 affect the finder.
104
105 """
106
107 parser = build_parser()
108 defaults = parser.get_default_values()
109 defaults.index_url = None
110 if finder:
111 # `finder.format_control` will be updated during parsing
112 defaults.format_control = finder.format_control
113 opts, args = parser.parse_args(shlex.split(line), defaults)
114
115 # yield a line requirement
116 if args:
117 args_line = ' '.join(args)
118 comes_from = '-r %s (line %s)' % (filename, line_number)
119 isolated = options.isolated_mode if options else False
120 if options:
121 cmdoptions.check_install_build_global(options, opts)
122 # get the options that apply to requirements
123 req_options = {}
124 for dest in SUPPORTED_OPTIONS_REQ_DEST:
125 if dest in opts.__dict__ and opts.__dict__[dest]:
126 req_options[dest] = opts.__dict__[dest]
127 yield InstallRequirement.from_line(
128 args_line, comes_from, isolated=isolated, options=req_options,
129 wheel_cache=wheel_cache
130 )
131
132 # yield an editable requirement
133 elif opts.editables:
134 comes_from = '-r %s (line %s)' % (filename, line_number)
135 isolated = options.isolated_mode if options else False
136 default_vcs = options.default_vcs if options else None
137 yield InstallRequirement.from_editable(
138 opts.editables[0], comes_from=comes_from,
139 default_vcs=default_vcs, isolated=isolated,
140 wheel_cache=wheel_cache
141 )
142
143 # parse a nested requirements file
144 elif opts.requirements:
145 req_path = opts.requirements[0]
146 # original file is over http
147 if SCHEME_RE.search(filename):
148 # do a url join so relative paths work
149 req_path = urllib_parse.urljoin(filename, req_path)
150 # original file and nested file are paths
151 elif not SCHEME_RE.search(req_path):
152 # do a join so relative paths work
153 req_dir = os.path.dirname(filename)
154 req_path = os.path.join(os.path.dirname(filename), req_path)
155 # TODO: Why not use `comes_from='-r {} (line {})'` here as well?
156 parser = parse_requirements(
157 req_path, finder, comes_from, options, session,
158 wheel_cache=wheel_cache
159 )
160 for req in parser:
161 yield req
162
163 # set finder options
164 elif finder:
165 if opts.index_url:
166 finder.index_urls = [opts.index_url]
167 if opts.use_wheel is False:
168 finder.use_wheel = False
169 pip.index.fmt_ctl_no_use_wheel(finder.format_control)
170 if opts.no_index is True:
171 finder.index_urls = []
172 if opts.allow_all_external:
173 finder.allow_all_external = opts.allow_all_external
174 if opts.extra_index_urls:
175 finder.index_urls.extend(opts.extra_index_urls)
176 if opts.allow_external:
177 finder.allow_external |= set(
178 [normalize_name(v).lower() for v in opts.allow_external])
179 if opts.allow_unverified:
180 # Remove after 7.0
181 finder.allow_unverified |= set(
182 [normalize_name(v).lower() for v in opts.allow_unverified])
183 if opts.find_links:
184 # FIXME: it would be nice to keep track of the source
185 # of the find_links: support a find-links local path
186 # relative to a requirements file.
187 value = opts.find_links[0]
188 req_dir = os.path.dirname(os.path.abspath(filename))
189 relative_to_reqs_file = os.path.join(req_dir, value)
190 if os.path.exists(relative_to_reqs_file):
191 value = relative_to_reqs_file
192 finder.find_links.append(value)
193
194
195 def build_parser():
196 """
197 Return a parser for parsing requirement lines
198 """
199 parser = optparse.OptionParser(add_help_option=False)
200
201 option_factories = SUPPORTED_OPTIONS + SUPPORTED_OPTIONS_REQ
202 for option_factory in option_factories:
203 option = option_factory()
204 parser.add_option(option)
205
206 # By default optparse sys.exits on parsing errors. We want to wrap
207 # that in our own exception.
208 def parser_exit(self, msg):
209 raise RequirementsFileParseError(msg)
210 parser.exit = parser_exit
211
212 return parser
213
214
215 def join_lines(iterator):
216 """
217 Joins a line ending in '\' with the previous line.
218 """
219 lines = []
220 for line in iterator:
221 if not line.endswith('\\'):
222 if lines:
223 lines.append(line)
224 yield ''.join(lines)
225 lines = []
226 else:
227 yield line
228 else:
229 lines.append(line.strip('\\'))
230
231 # TODO: handle space after '\'.
232 # TODO: handle '\' on last line.
233
234
235 def ignore_comments(iterator):
236 """
237 Strips and filters empty or commented lines.
238 """
239 for line in iterator:
240 line = COMMENT_RE.sub('', line)
241 line = line.strip()
242 if line:
243 yield line
244
245
246 def skip_regex(lines, options):
247 """
248 Optionally exclude lines that match '--skip-requirements-regex'
249 """
250 skip_regex = options.skip_requirements_regex if options else None
251 if skip_regex:
252 lines = filterfalse(re.compile(skip_regex).search, lines)
253 return lines
254
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pip/req/req_file.py b/pip/req/req_file.py
--- a/pip/req/req_file.py
+++ b/pip/req/req_file.py
@@ -32,6 +32,7 @@
cmdoptions.find_links,
cmdoptions.extra_index_url,
cmdoptions.allow_external,
+ cmdoptions.allow_all_external,
cmdoptions.no_allow_external,
cmdoptions.allow_unsafe,
cmdoptions.no_allow_unsafe,
|
{"golden_diff": "diff --git a/pip/req/req_file.py b/pip/req/req_file.py\n--- a/pip/req/req_file.py\n+++ b/pip/req/req_file.py\n@@ -32,6 +32,7 @@\n cmdoptions.find_links,\n cmdoptions.extra_index_url,\n cmdoptions.allow_external,\n+ cmdoptions.allow_all_external,\n cmdoptions.no_allow_external,\n cmdoptions.allow_unsafe,\n cmdoptions.no_allow_unsafe,\n", "issue": "pip install --allow-all-external in requirements.txt file fails\n**With pip 7**\nrequirements.txt file:\n\n``` txt\n--allow-all-external\nmysql-connector-python\n```\n\nOn comandline\n\n``` bash\npip install -r requirements.txt \npip: error: no such option: --allow-all-external\n```\n\n**With pip 6.1.1**\nCollecting mysql-connector-python (from -r requirements.txt (line 2))\n Downloading http://cdn.mysql.com/Downloads/Connector-Python/mysql-connector-python-2.0.3.zip (275kB)\n 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 278kB 3.0MB/s \nInstalling collected packages: mysql-connector-python\n Running setup.py install for mysql-connector-python\nSuccessfully installed mysql-connector-python-2.0.3\n\n", "before_files": [{"content": "\"\"\"\nRequirements file parsing\n\"\"\"\n\nfrom __future__ import absolute_import\n\nimport os\nimport re\nimport shlex\nimport optparse\n\nfrom pip._vendor.six.moves.urllib import parse as urllib_parse\nfrom pip._vendor.six.moves import filterfalse\n\nimport pip\nfrom pip.download import get_file_content\nfrom pip.req.req_install import InstallRequirement\nfrom pip.exceptions import (RequirementsFileParseError)\nfrom pip.utils import normalize_name\nfrom pip import cmdoptions\n\n__all__ = ['parse_requirements']\n\nSCHEME_RE = re.compile(r'^(http|https|file):', re.I)\nCOMMENT_RE = re.compile(r'(^|\\s)+#.*$')\n\nSUPPORTED_OPTIONS = [\n cmdoptions.editable,\n cmdoptions.requirements,\n cmdoptions.no_index,\n cmdoptions.index_url,\n cmdoptions.find_links,\n cmdoptions.extra_index_url,\n cmdoptions.allow_external,\n cmdoptions.no_allow_external,\n cmdoptions.allow_unsafe,\n cmdoptions.no_allow_unsafe,\n cmdoptions.use_wheel,\n cmdoptions.no_use_wheel,\n cmdoptions.always_unzip,\n cmdoptions.no_binary,\n cmdoptions.only_binary,\n]\n\n# options to be passed to requirements\nSUPPORTED_OPTIONS_REQ = [\n cmdoptions.install_options,\n cmdoptions.global_options\n]\n\n# the 'dest' string values\nSUPPORTED_OPTIONS_REQ_DEST = [o().dest for o in SUPPORTED_OPTIONS_REQ]\n\n\ndef parse_requirements(filename, finder=None, comes_from=None, options=None,\n session=None, wheel_cache=None):\n \"\"\"\n Parse a requirements file and yield InstallRequirement instances.\n\n :param filename: Path or url of requirements file.\n :param finder: Instance of pip.index.PackageFinder.\n :param comes_from: Origin description of requirements.\n :param options: Global options.\n :param session: Instance of pip.download.PipSession.\n :param wheel_cache: Instance of pip.wheel.WheelCache\n \"\"\"\n if session is None:\n raise TypeError(\n \"parse_requirements() missing 1 required keyword argument: \"\n \"'session'\"\n )\n\n _, content = get_file_content(\n filename, comes_from=comes_from, session=session\n )\n\n lines = content.splitlines()\n lines = ignore_comments(lines)\n lines = join_lines(lines)\n lines = skip_regex(lines, options)\n\n for line_number, line in enumerate(lines, 1):\n req_iter = process_line(line, filename, line_number, finder,\n comes_from, options, session, 
wheel_cache)\n for req in req_iter:\n yield req\n\n\ndef process_line(line, filename, line_number, finder=None, comes_from=None,\n options=None, session=None, wheel_cache=None):\n \"\"\"Process a single requirements line; This can result in creating/yielding\n requirements, or updating the finder.\n\n For lines that contain requirements, the only options that have an effect\n are from SUPPORTED_OPTIONS_REQ, and they are scoped to the\n requirement. Other options from SUPPORTED_OPTIONS may be present, but are\n ignored.\n\n For lines that do not contain requirements, the only options that have an\n effect are from SUPPORTED_OPTIONS. Options from SUPPORTED_OPTIONS_REQ may\n be present, but are ignored. These lines may contain multiple options\n (although our docs imply only one is supported), and all our parsed and\n affect the finder.\n\n \"\"\"\n\n parser = build_parser()\n defaults = parser.get_default_values()\n defaults.index_url = None\n if finder:\n # `finder.format_control` will be updated during parsing\n defaults.format_control = finder.format_control\n opts, args = parser.parse_args(shlex.split(line), defaults)\n\n # yield a line requirement\n if args:\n args_line = ' '.join(args)\n comes_from = '-r %s (line %s)' % (filename, line_number)\n isolated = options.isolated_mode if options else False\n if options:\n cmdoptions.check_install_build_global(options, opts)\n # get the options that apply to requirements\n req_options = {}\n for dest in SUPPORTED_OPTIONS_REQ_DEST:\n if dest in opts.__dict__ and opts.__dict__[dest]:\n req_options[dest] = opts.__dict__[dest]\n yield InstallRequirement.from_line(\n args_line, comes_from, isolated=isolated, options=req_options,\n wheel_cache=wheel_cache\n )\n\n # yield an editable requirement\n elif opts.editables:\n comes_from = '-r %s (line %s)' % (filename, line_number)\n isolated = options.isolated_mode if options else False\n default_vcs = options.default_vcs if options else None\n yield InstallRequirement.from_editable(\n opts.editables[0], comes_from=comes_from,\n default_vcs=default_vcs, isolated=isolated,\n wheel_cache=wheel_cache\n )\n\n # parse a nested requirements file\n elif opts.requirements:\n req_path = opts.requirements[0]\n # original file is over http\n if SCHEME_RE.search(filename):\n # do a url join so relative paths work\n req_path = urllib_parse.urljoin(filename, req_path)\n # original file and nested file are paths\n elif not SCHEME_RE.search(req_path):\n # do a join so relative paths work\n req_dir = os.path.dirname(filename)\n req_path = os.path.join(os.path.dirname(filename), req_path)\n # TODO: Why not use `comes_from='-r {} (line {})'` here as well?\n parser = parse_requirements(\n req_path, finder, comes_from, options, session,\n wheel_cache=wheel_cache\n )\n for req in parser:\n yield req\n\n # set finder options\n elif finder:\n if opts.index_url:\n finder.index_urls = [opts.index_url]\n if opts.use_wheel is False:\n finder.use_wheel = False\n pip.index.fmt_ctl_no_use_wheel(finder.format_control)\n if opts.no_index is True:\n finder.index_urls = []\n if opts.allow_all_external:\n finder.allow_all_external = opts.allow_all_external\n if opts.extra_index_urls:\n finder.index_urls.extend(opts.extra_index_urls)\n if opts.allow_external:\n finder.allow_external |= set(\n [normalize_name(v).lower() for v in opts.allow_external])\n if opts.allow_unverified:\n # Remove after 7.0\n finder.allow_unverified |= set(\n [normalize_name(v).lower() for v in opts.allow_unverified])\n if opts.find_links:\n # FIXME: it would be nice 
to keep track of the source\n # of the find_links: support a find-links local path\n # relative to a requirements file.\n value = opts.find_links[0]\n req_dir = os.path.dirname(os.path.abspath(filename))\n relative_to_reqs_file = os.path.join(req_dir, value)\n if os.path.exists(relative_to_reqs_file):\n value = relative_to_reqs_file\n finder.find_links.append(value)\n\n\ndef build_parser():\n \"\"\"\n Return a parser for parsing requirement lines\n \"\"\"\n parser = optparse.OptionParser(add_help_option=False)\n\n option_factories = SUPPORTED_OPTIONS + SUPPORTED_OPTIONS_REQ\n for option_factory in option_factories:\n option = option_factory()\n parser.add_option(option)\n\n # By default optparse sys.exits on parsing errors. We want to wrap\n # that in our own exception.\n def parser_exit(self, msg):\n raise RequirementsFileParseError(msg)\n parser.exit = parser_exit\n\n return parser\n\n\ndef join_lines(iterator):\n \"\"\"\n Joins a line ending in '\\' with the previous line.\n \"\"\"\n lines = []\n for line in iterator:\n if not line.endswith('\\\\'):\n if lines:\n lines.append(line)\n yield ''.join(lines)\n lines = []\n else:\n yield line\n else:\n lines.append(line.strip('\\\\'))\n\n # TODO: handle space after '\\'.\n # TODO: handle '\\' on last line.\n\n\ndef ignore_comments(iterator):\n \"\"\"\n Strips and filters empty or commented lines.\n \"\"\"\n for line in iterator:\n line = COMMENT_RE.sub('', line)\n line = line.strip()\n if line:\n yield line\n\n\ndef skip_regex(lines, options):\n \"\"\"\n Optionally exclude lines that match '--skip-requirements-regex'\n \"\"\"\n skip_regex = options.skip_requirements_regex if options else None\n if skip_regex:\n lines = filterfalse(re.compile(skip_regex).search, lines)\n return lines\n", "path": "pip/req/req_file.py"}], "after_files": [{"content": "\"\"\"\nRequirements file parsing\n\"\"\"\n\nfrom __future__ import absolute_import\n\nimport os\nimport re\nimport shlex\nimport optparse\n\nfrom pip._vendor.six.moves.urllib import parse as urllib_parse\nfrom pip._vendor.six.moves import filterfalse\n\nimport pip\nfrom pip.download import get_file_content\nfrom pip.req.req_install import InstallRequirement\nfrom pip.exceptions import (RequirementsFileParseError)\nfrom pip.utils import normalize_name\nfrom pip import cmdoptions\n\n__all__ = ['parse_requirements']\n\nSCHEME_RE = re.compile(r'^(http|https|file):', re.I)\nCOMMENT_RE = re.compile(r'(^|\\s)+#.*$')\n\nSUPPORTED_OPTIONS = [\n cmdoptions.editable,\n cmdoptions.requirements,\n cmdoptions.no_index,\n cmdoptions.index_url,\n cmdoptions.find_links,\n cmdoptions.extra_index_url,\n cmdoptions.allow_external,\n cmdoptions.allow_all_external,\n cmdoptions.no_allow_external,\n cmdoptions.allow_unsafe,\n cmdoptions.no_allow_unsafe,\n cmdoptions.use_wheel,\n cmdoptions.no_use_wheel,\n cmdoptions.always_unzip,\n cmdoptions.no_binary,\n cmdoptions.only_binary,\n]\n\n# options to be passed to requirements\nSUPPORTED_OPTIONS_REQ = [\n cmdoptions.install_options,\n cmdoptions.global_options\n]\n\n# the 'dest' string values\nSUPPORTED_OPTIONS_REQ_DEST = [o().dest for o in SUPPORTED_OPTIONS_REQ]\n\n\ndef parse_requirements(filename, finder=None, comes_from=None, options=None,\n session=None, wheel_cache=None):\n \"\"\"\n Parse a requirements file and yield InstallRequirement instances.\n\n :param filename: Path or url of requirements file.\n :param finder: Instance of pip.index.PackageFinder.\n :param comes_from: Origin description of requirements.\n :param options: Global options.\n :param session: 
Instance of pip.download.PipSession.\n :param wheel_cache: Instance of pip.wheel.WheelCache\n \"\"\"\n if session is None:\n raise TypeError(\n \"parse_requirements() missing 1 required keyword argument: \"\n \"'session'\"\n )\n\n _, content = get_file_content(\n filename, comes_from=comes_from, session=session\n )\n\n lines = content.splitlines()\n lines = ignore_comments(lines)\n lines = join_lines(lines)\n lines = skip_regex(lines, options)\n\n for line_number, line in enumerate(lines, 1):\n req_iter = process_line(line, filename, line_number, finder,\n comes_from, options, session, wheel_cache)\n for req in req_iter:\n yield req\n\n\ndef process_line(line, filename, line_number, finder=None, comes_from=None,\n options=None, session=None, wheel_cache=None):\n \"\"\"Process a single requirements line; This can result in creating/yielding\n requirements, or updating the finder.\n\n For lines that contain requirements, the only options that have an effect\n are from SUPPORTED_OPTIONS_REQ, and they are scoped to the\n requirement. Other options from SUPPORTED_OPTIONS may be present, but are\n ignored.\n\n For lines that do not contain requirements, the only options that have an\n effect are from SUPPORTED_OPTIONS. Options from SUPPORTED_OPTIONS_REQ may\n be present, but are ignored. These lines may contain multiple options\n (although our docs imply only one is supported), and all our parsed and\n affect the finder.\n\n \"\"\"\n\n parser = build_parser()\n defaults = parser.get_default_values()\n defaults.index_url = None\n if finder:\n # `finder.format_control` will be updated during parsing\n defaults.format_control = finder.format_control\n opts, args = parser.parse_args(shlex.split(line), defaults)\n\n # yield a line requirement\n if args:\n args_line = ' '.join(args)\n comes_from = '-r %s (line %s)' % (filename, line_number)\n isolated = options.isolated_mode if options else False\n if options:\n cmdoptions.check_install_build_global(options, opts)\n # get the options that apply to requirements\n req_options = {}\n for dest in SUPPORTED_OPTIONS_REQ_DEST:\n if dest in opts.__dict__ and opts.__dict__[dest]:\n req_options[dest] = opts.__dict__[dest]\n yield InstallRequirement.from_line(\n args_line, comes_from, isolated=isolated, options=req_options,\n wheel_cache=wheel_cache\n )\n\n # yield an editable requirement\n elif opts.editables:\n comes_from = '-r %s (line %s)' % (filename, line_number)\n isolated = options.isolated_mode if options else False\n default_vcs = options.default_vcs if options else None\n yield InstallRequirement.from_editable(\n opts.editables[0], comes_from=comes_from,\n default_vcs=default_vcs, isolated=isolated,\n wheel_cache=wheel_cache\n )\n\n # parse a nested requirements file\n elif opts.requirements:\n req_path = opts.requirements[0]\n # original file is over http\n if SCHEME_RE.search(filename):\n # do a url join so relative paths work\n req_path = urllib_parse.urljoin(filename, req_path)\n # original file and nested file are paths\n elif not SCHEME_RE.search(req_path):\n # do a join so relative paths work\n req_dir = os.path.dirname(filename)\n req_path = os.path.join(os.path.dirname(filename), req_path)\n # TODO: Why not use `comes_from='-r {} (line {})'` here as well?\n parser = parse_requirements(\n req_path, finder, comes_from, options, session,\n wheel_cache=wheel_cache\n )\n for req in parser:\n yield req\n\n # set finder options\n elif finder:\n if opts.index_url:\n finder.index_urls = [opts.index_url]\n if opts.use_wheel is False:\n finder.use_wheel 
= False\n pip.index.fmt_ctl_no_use_wheel(finder.format_control)\n if opts.no_index is True:\n finder.index_urls = []\n if opts.allow_all_external:\n finder.allow_all_external = opts.allow_all_external\n if opts.extra_index_urls:\n finder.index_urls.extend(opts.extra_index_urls)\n if opts.allow_external:\n finder.allow_external |= set(\n [normalize_name(v).lower() for v in opts.allow_external])\n if opts.allow_unverified:\n # Remove after 7.0\n finder.allow_unverified |= set(\n [normalize_name(v).lower() for v in opts.allow_unverified])\n if opts.find_links:\n # FIXME: it would be nice to keep track of the source\n # of the find_links: support a find-links local path\n # relative to a requirements file.\n value = opts.find_links[0]\n req_dir = os.path.dirname(os.path.abspath(filename))\n relative_to_reqs_file = os.path.join(req_dir, value)\n if os.path.exists(relative_to_reqs_file):\n value = relative_to_reqs_file\n finder.find_links.append(value)\n\n\ndef build_parser():\n \"\"\"\n Return a parser for parsing requirement lines\n \"\"\"\n parser = optparse.OptionParser(add_help_option=False)\n\n option_factories = SUPPORTED_OPTIONS + SUPPORTED_OPTIONS_REQ\n for option_factory in option_factories:\n option = option_factory()\n parser.add_option(option)\n\n # By default optparse sys.exits on parsing errors. We want to wrap\n # that in our own exception.\n def parser_exit(self, msg):\n raise RequirementsFileParseError(msg)\n parser.exit = parser_exit\n\n return parser\n\n\ndef join_lines(iterator):\n \"\"\"\n Joins a line ending in '\\' with the previous line.\n \"\"\"\n lines = []\n for line in iterator:\n if not line.endswith('\\\\'):\n if lines:\n lines.append(line)\n yield ''.join(lines)\n lines = []\n else:\n yield line\n else:\n lines.append(line.strip('\\\\'))\n\n # TODO: handle space after '\\'.\n # TODO: handle '\\' on last line.\n\n\ndef ignore_comments(iterator):\n \"\"\"\n Strips and filters empty or commented lines.\n \"\"\"\n for line in iterator:\n line = COMMENT_RE.sub('', line)\n line = line.strip()\n if line:\n yield line\n\n\ndef skip_regex(lines, options):\n \"\"\"\n Optionally exclude lines that match '--skip-requirements-regex'\n \"\"\"\n skip_regex = options.skip_requirements_regex if options else None\n if skip_regex:\n lines = filterfalse(re.compile(skip_regex).search, lines)\n return lines\n", "path": "pip/req/req_file.py"}]}
| 2,945 | 105 |
gh_patches_debug_6675
|
rasdani/github-patches
|
git_diff
|
fal-ai__dbt-fal-197
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] Too many messages received before initialization
> mmeasic: Hey, I get this log message on dbt version 0.21.0:
```Logged from file /Users/mmeasic/.virtualenvs/bi-etl-dbt/lib/python3.8/site-packages/dbt/parser/manifest.py, line 792
Traceback (most recent call last):
File "/Users/mmeasic/.virtualenvs/bi-etl-dbt/lib/python3.8/site-packages/logbook/handlers.py", line 216, in handle
self.emit(record)
File "/Users/mmeasic/.virtualenvs/bi-etl-dbt/lib/python3.8/site-packages/dbt/logger.py", line 478, in emit
assert len(self._msg_buffer) < self._bufmax, \
AssertionError: too many messages received before initilization!
```
*****
> jstrom40: did your job run after it gave you this error message? i have had this problem when i have had too many threads set up in dbt. i also had it when i tried to run the fal tool but my actual job still ran after it popped out this message
*****
> mmeasic: It did run.
> I actually have 4 threads set for the target
[Thread link](https://discord.com/channels/908693336280432750/908693336280432755/930791100803850283)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/fal/cli/cli.py`
Content:
```
1 from typing import List
2 import sys
3 from dbt.logger import log_manager, GLOBAL_LOGGER as logger
4 from fal.cli.flow_runner import fal_flow_run
5 from faldbt.lib import DBT_VCURRENT, DBT_V1
6 from .args import parse_args
7 from .fal_runner import fal_run
8 from fal.telemetry import telemetry
9
10
11 @telemetry.log_call("cli")
12 def cli(argv: List[str] = sys.argv):
13 parsed = parse_args(argv[1:])
14
15 # TODO: remove `action="extend"` to match exactly what dbt does
16 selects_count = (
17 argv.count("-s")
18 + argv.count("--select")
19 + argv.count("-m")
20 + argv.count("--model")
21 )
22 exclude_count = argv.count("--exclude")
23 script_count = argv.count("--script")
24
25 if parsed.disable_logging:
26 logger.disable()
27 # Re-enable logging for 1.0.0 through old API of logger
28 elif DBT_VCURRENT.compare(DBT_V1) >= 0:
29 if logger.disabled:
30 logger.enable()
31
32 with log_manager.applicationbound():
33 if parsed.debug:
34 log_manager.set_debug()
35
36 if parsed.command == "flow":
37 if parsed.flow_command == "run":
38 fal_flow_run(parsed)
39
40 elif parsed.command == "run":
41 fal_run(
42 parsed,
43 selects_count=selects_count,
44 exclude_count=exclude_count,
45 script_count=script_count,
46 )
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/fal/cli/cli.py b/src/fal/cli/cli.py
--- a/src/fal/cli/cli.py
+++ b/src/fal/cli/cli.py
@@ -20,6 +20,10 @@
exclude_count = argv.count("--exclude")
script_count = argv.count("--script")
+ # Disabling the dbt.logger.DelayedFileHandler manually
+ # since we do not use the new dbt logging system
+ # This fixes issue https://github.com/fal-ai/fal/issues/97
+ log_manager.set_path(None)
if parsed.disable_logging:
logger.disable()
# Re-enable logging for 1.0.0 through old API of logger
|
{"golden_diff": "diff --git a/src/fal/cli/cli.py b/src/fal/cli/cli.py\n--- a/src/fal/cli/cli.py\n+++ b/src/fal/cli/cli.py\n@@ -20,6 +20,10 @@\n exclude_count = argv.count(\"--exclude\")\n script_count = argv.count(\"--script\")\n \n+ # Disabling the dbt.logger.DelayedFileHandler manually\n+ # since we do not use the new dbt logging system\n+ # This fixes issue https://github.com/fal-ai/fal/issues/97\n+ log_manager.set_path(None)\n if parsed.disable_logging:\n logger.disable()\n # Re-enable logging for 1.0.0 through old API of logger\n", "issue": "[Bug] Too many messages received before initialization\n> mmeasic: Hey, I get this log message on dbt version 0.21.0:\r\n\r\n```Logged from file /Users/mmeasic/.virtualenvs/bi-etl-dbt/lib/python3.8/site-packages/dbt/parser/manifest.py, line 792\r\nTraceback (most recent call last):\r\n File \"/Users/mmeasic/.virtualenvs/bi-etl-dbt/lib/python3.8/site-packages/logbook/handlers.py\", line 216, in handle\r\n self.emit(record)\r\n File \"/Users/mmeasic/.virtualenvs/bi-etl-dbt/lib/python3.8/site-packages/dbt/logger.py\", line 478, in emit\r\n assert len(self._msg_buffer) < self._bufmax, \\\r\nAssertionError: too many messages received before initilization!\r\n```\r\n\r\n*****\r\n\r\n> jstrom40: did your job run after it gave you this error message? i have had this problem when i have had too many threads set up in dbt. i also had it when i tried to run the fal tool but my actual job still ran after it popped out this message\r\n\r\n*****\r\n\r\n> mmeasic: It did run.\r\n> I actually have 4 threads set for the target\r\n\r\n[Thread link](https://discord.com/channels/908693336280432750/908693336280432755/930791100803850283)\n", "before_files": [{"content": "from typing import List\nimport sys\nfrom dbt.logger import log_manager, GLOBAL_LOGGER as logger\nfrom fal.cli.flow_runner import fal_flow_run\nfrom faldbt.lib import DBT_VCURRENT, DBT_V1\nfrom .args import parse_args\nfrom .fal_runner import fal_run\nfrom fal.telemetry import telemetry\n\n\[email protected]_call(\"cli\")\ndef cli(argv: List[str] = sys.argv):\n parsed = parse_args(argv[1:])\n\n # TODO: remove `action=\"extend\"` to match exactly what dbt does\n selects_count = (\n argv.count(\"-s\")\n + argv.count(\"--select\")\n + argv.count(\"-m\")\n + argv.count(\"--model\")\n )\n exclude_count = argv.count(\"--exclude\")\n script_count = argv.count(\"--script\")\n\n if parsed.disable_logging:\n logger.disable()\n # Re-enable logging for 1.0.0 through old API of logger\n elif DBT_VCURRENT.compare(DBT_V1) >= 0:\n if logger.disabled:\n logger.enable()\n\n with log_manager.applicationbound():\n if parsed.debug:\n log_manager.set_debug()\n\n if parsed.command == \"flow\":\n if parsed.flow_command == \"run\":\n fal_flow_run(parsed)\n\n elif parsed.command == \"run\":\n fal_run(\n parsed,\n selects_count=selects_count,\n exclude_count=exclude_count,\n script_count=script_count,\n )\n", "path": "src/fal/cli/cli.py"}], "after_files": [{"content": "from typing import List\nimport sys\nfrom dbt.logger import log_manager, GLOBAL_LOGGER as logger\nfrom fal.cli.flow_runner import fal_flow_run\nfrom faldbt.lib import DBT_VCURRENT, DBT_V1\nfrom .args import parse_args\nfrom .fal_runner import fal_run\n\n\ndef cli(argv: List[str] = sys.argv):\n parsed = parse_args(argv[1:])\n\n # TODO: remove `action=\"extend\"` to match exactly what dbt does\n selects_count = (\n argv.count(\"-s\")\n + argv.count(\"--select\")\n + argv.count(\"-m\")\n + argv.count(\"--model\")\n )\n exclude_count = argv.count(\"--exclude\")\n 
script_count = argv.count(\"--script\")\n\n # Disabling the dbt.logger.DelayedFileHandler manually\n # since we do not use the new dbt logging system\n # This fixes issue https://github.com/fal-ai/fal/issues/97\n log_manager.set_path(None)\n if parsed.disable_logging:\n logger.disable()\n # Re-enable logging for 1.0.0 through old API of logger\n elif DBT_VCURRENT.compare(DBT_V1) >= 0:\n if logger.disabled:\n logger.enable()\n\n with log_manager.applicationbound():\n if parsed.debug:\n log_manager.set_debug()\n\n if parsed.command == \"flow\":\n if parsed.flow_command == \"run\":\n fal_flow_run(parsed)\n\n elif parsed.command == \"run\":\n fal_run(\n parsed,\n selects_count=selects_count,\n exclude_count=exclude_count,\n script_count=script_count,\n )\n", "path": "src/fal/cli/cli.py"}]}
| 999 | 155 |
gh_patches_debug_23183
|
rasdani/github-patches
|
git_diff
|
facebookresearch__ParlAI-3067
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'PathManagerBase' object has no attribute 'makedirs'
In attempting to create the tensorboard directory with PathManager, we're calling a nonexistent function.
To repro:
```bash
$ python -m parlai.scripts.train_model -t personachat -m transformer/ranker -mf /tmp/model_tr6 --n-layers 1 --embedding-size 300 --ffn-size 600 --n-heads 4 --num-epochs 2 -veps 0.25 -bs 64 -lr 0.001 --dropout 0.1 --embedding-type fasttext_cc --candidates batch --tensorboard-log true
```
Exception hit:
```
File "/Users/spoff/ParlAI/parlai/core/logs.py", line 56, in __init__
PathManager.makedirs(tbpath)
AttributeError: 'PathManagerBase' object has no attribute 'makedirs'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7
8 import sys
9
10 from setuptools import setup, find_packages
11
12 VERSION = '0.9.1' # if you update, update parlai/__init__.py too!
13
14 if sys.version_info < (3, 6):
15 sys.exit('Sorry, Python >=3.6 is required for ParlAI.')
16
17 with open('README.md', encoding="utf8") as f:
18 # strip the header and badges etc
19 readme = f.read().split('--------------------')[-1]
20
21 with open('requirements.txt') as f:
22 reqs = []
23 for line in f:
24 line = line.strip()
25 reqs.append(line.split('==')[0])
26
27
28 if __name__ == '__main__':
29 setup(
30 name='parlai',
31 version=VERSION,
32 description='Unified platform for dialogue research.',
33 long_description=readme,
34 long_description_content_type='text/markdown',
35 url='http://parl.ai/',
36 python_requires='>=3.6',
37 packages=find_packages(
38 exclude=('data', 'docs', 'examples', 'tests', 'parlai_internal*')
39 ),
40 install_requires=reqs,
41 include_package_data=True,
42 package_data={'': ['*.txt', '*.md']},
43 entry_points={
44 "flake8.extension": ["PAI = parlai.utils.flake8:ParlAIChecker"],
45 "console_scripts": ["parlai=parlai.__main__:main"],
46 },
47 classifiers=[
48 "Programming Language :: Python :: 3",
49 "License :: OSI Approved :: MIT License",
50 "Topic :: Scientific/Engineering :: Artificial Intelligence",
51 "Natural Language :: English",
52 ],
53 )
54
```
Path: `parlai/core/logs.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6 """
7 Log metrics to tensorboard.
8
9 This file provides interface to log any metrics in tensorboard, could be
10 extended to any other tool like visdom.
11
12 .. code-block: none
13
14 tensorboard --logdir <PARLAI_DATA/tensorboard> --port 8888.
15 """
16
17 import json
18 import numbers
19 from parlai.core.opt import Opt
20 from parlai.core.metrics import Metric
21 from parlai.utils.io import PathManager
22 import parlai.utils.logging as logging
23
24
25 class TensorboardLogger(object):
26 """
27 Log objects to tensorboard.
28 """
29
30 @staticmethod
31 def add_cmdline_args(argparser):
32 """
33 Add tensorboard CLI args.
34 """
35 logger = argparser.add_argument_group('Tensorboard Arguments')
36 logger.add_argument(
37 '-tblog',
38 '--tensorboard-log',
39 type='bool',
40 default=False,
41 help="Tensorboard logging of metrics, default is %(default)s",
42 hidden=False,
43 )
44
45 def __init__(self, opt: Opt):
46 try:
47 # tensorboard is a very expensive thing to import. Wait until the
48 # last second to import it.
49 from tensorboardX import SummaryWriter
50 except ImportError:
51 raise ImportError('Please run `pip install tensorboard tensorboardX`.')
52
53 tbpath = opt['model_file'] + '.tensorboard'
54 logging.debug(f'Saving tensorboard logs to: {tbpath}')
55 if not PathManager.exists(tbpath):
56 PathManager.makedirs(tbpath)
57 self.writer = SummaryWriter(tbpath, comment=json.dumps(opt))
58
59 def log_metrics(self, setting, step, report):
60 """
61 Add all metrics from tensorboard_metrics opt key.
62
63 :param setting:
64 One of train/valid/test. Will be used as the title for the graph.
65 :param step:
66 Number of parleys
67 :param report:
68 The report to log
69 """
70 for k, v in report.items():
71 if isinstance(v, numbers.Number):
72 self.writer.add_scalar(f'{k}/{setting}', v, global_step=step)
73 elif isinstance(v, Metric):
74 self.writer.add_scalar(f'{k}/{setting}', v.value(), global_step=step)
75 else:
76 logging.error(f'k {k} v {v} is not a number')
77
78 def flush(self):
79 self.writer.flush()
80
```
Path: `parlai/__init__.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7 __version__ = '0.9.1'
8
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parlai/__init__.py b/parlai/__init__.py
--- a/parlai/__init__.py
+++ b/parlai/__init__.py
@@ -4,4 +4,4 @@
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
-__version__ = '0.9.1'
+__version__ = '0.9.2'
diff --git a/parlai/core/logs.py b/parlai/core/logs.py
--- a/parlai/core/logs.py
+++ b/parlai/core/logs.py
@@ -53,7 +53,7 @@
tbpath = opt['model_file'] + '.tensorboard'
logging.debug(f'Saving tensorboard logs to: {tbpath}')
if not PathManager.exists(tbpath):
- PathManager.makedirs(tbpath)
+ PathManager.mkdirs(tbpath)
self.writer = SummaryWriter(tbpath, comment=json.dumps(opt))
def log_metrics(self, setting, step, report):
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,7 +9,7 @@
from setuptools import setup, find_packages
-VERSION = '0.9.1' # if you update, update parlai/__init__.py too!
+VERSION = '0.9.2' # if you update, update parlai/__init__.py too!
if sys.version_info < (3, 6):
sys.exit('Sorry, Python >=3.6 is required for ParlAI.')
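The `logs.py` hunk is the functional part of this diff: the wrapper simply never had a `makedirs` method, only `mkdirs`. As a purely illustrative variant (the helper name and signature below are made up, not ParlAI API), one could probe for whichever spelling the installed `PathManager` provides and fall back to the standard library for a plain local directory:

```python
import os


def ensure_dir(path_manager, path):
    # The wrapper in the traceback exposes mkdirs() rather than makedirs(),
    # so look for either spelling instead of assuming one exists.
    mkdirs = getattr(path_manager, "mkdirs", None) or getattr(
        path_manager, "makedirs", None
    )
    if mkdirs is not None:
        mkdirs(path)
    else:
        # Local-filesystem fallback; fine for a tensorboard log directory.
        os.makedirs(path, exist_ok=True)
```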
|
{"golden_diff": "diff --git a/parlai/__init__.py b/parlai/__init__.py\n--- a/parlai/__init__.py\n+++ b/parlai/__init__.py\n@@ -4,4 +4,4 @@\n # This source code is licensed under the MIT license found in the\n # LICENSE file in the root directory of this source tree.\n \n-__version__ = '0.9.1'\n+__version__ = '0.9.2'\ndiff --git a/parlai/core/logs.py b/parlai/core/logs.py\n--- a/parlai/core/logs.py\n+++ b/parlai/core/logs.py\n@@ -53,7 +53,7 @@\n tbpath = opt['model_file'] + '.tensorboard'\n logging.debug(f'Saving tensorboard logs to: {tbpath}')\n if not PathManager.exists(tbpath):\n- PathManager.makedirs(tbpath)\n+ PathManager.mkdirs(tbpath)\n self.writer = SummaryWriter(tbpath, comment=json.dumps(opt))\n \n def log_metrics(self, setting, step, report):\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,7 +9,7 @@\n \n from setuptools import setup, find_packages\n \n-VERSION = '0.9.1' # if you update, update parlai/__init__.py too!\n+VERSION = '0.9.2' # if you update, update parlai/__init__.py too!\n \n if sys.version_info < (3, 6):\n sys.exit('Sorry, Python >=3.6 is required for ParlAI.')\n", "issue": "'PathManagerBase' object has no attribute 'makedirs'\nIn attempting to create the tensorboard directory with PathManager we're calling a nonexistent function.\r\n\r\nTo repro:\r\n```bash\r\n$ python -m parlai.scripts.train_model -t personachat -m transformer/ranker -mf /tmp/model_tr6 --n-layers 1 --embedding-size 300 --ffn-size 600 --n-heads 4 --num-epochs 2 -veps 0.25 -bs 64 -lr 0.001 --dropout 0.1 --embedding-type fasttext_cc --candidates batch --tensorboard-log true\r\n```\r\n\r\nException hit:\r\n```\r\nFile \"/Users/spoff/ParlAI/parlai/core/logs.py\", line 56, in __init__\r\n PathManager.makedirs(tbpath)\r\nAttributeError: 'PathManagerBase' object has no attribute 'makedirs'\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\n\nimport sys\n\nfrom setuptools import setup, find_packages\n\nVERSION = '0.9.1' # if you update, update parlai/__init__.py too!\n\nif sys.version_info < (3, 6):\n sys.exit('Sorry, Python >=3.6 is required for ParlAI.')\n\nwith open('README.md', encoding=\"utf8\") as f:\n # strip the header and badges etc\n readme = f.read().split('--------------------')[-1]\n\nwith open('requirements.txt') as f:\n reqs = []\n for line in f:\n line = line.strip()\n reqs.append(line.split('==')[0])\n\n\nif __name__ == '__main__':\n setup(\n name='parlai',\n version=VERSION,\n description='Unified platform for dialogue research.',\n long_description=readme,\n long_description_content_type='text/markdown',\n url='http://parl.ai/',\n python_requires='>=3.6',\n packages=find_packages(\n exclude=('data', 'docs', 'examples', 'tests', 'parlai_internal*')\n ),\n install_requires=reqs,\n include_package_data=True,\n package_data={'': ['*.txt', '*.md']},\n entry_points={\n \"flake8.extension\": [\"PAI = parlai.utils.flake8:ParlAIChecker\"],\n \"console_scripts\": [\"parlai=parlai.__main__:main\"],\n },\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"License :: OSI Approved :: MIT License\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Natural Language :: English\",\n ],\n )\n", "path": "setup.py"}, {"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"\nLog metrics to tensorboard.\n\nThis file provides interface to log any metrics in tensorboard, could be\nextended to any other tool like visdom.\n\n.. code-block: none\n\n tensorboard --logdir <PARLAI_DATA/tensorboard> --port 8888.\n\"\"\"\n\nimport json\nimport numbers\nfrom parlai.core.opt import Opt\nfrom parlai.core.metrics import Metric\nfrom parlai.utils.io import PathManager\nimport parlai.utils.logging as logging\n\n\nclass TensorboardLogger(object):\n \"\"\"\n Log objects to tensorboard.\n \"\"\"\n\n @staticmethod\n def add_cmdline_args(argparser):\n \"\"\"\n Add tensorboard CLI args.\n \"\"\"\n logger = argparser.add_argument_group('Tensorboard Arguments')\n logger.add_argument(\n '-tblog',\n '--tensorboard-log',\n type='bool',\n default=False,\n help=\"Tensorboard logging of metrics, default is %(default)s\",\n hidden=False,\n )\n\n def __init__(self, opt: Opt):\n try:\n # tensorboard is a very expensive thing to import. Wait until the\n # last second to import it.\n from tensorboardX import SummaryWriter\n except ImportError:\n raise ImportError('Please run `pip install tensorboard tensorboardX`.')\n\n tbpath = opt['model_file'] + '.tensorboard'\n logging.debug(f'Saving tensorboard logs to: {tbpath}')\n if not PathManager.exists(tbpath):\n PathManager.makedirs(tbpath)\n self.writer = SummaryWriter(tbpath, comment=json.dumps(opt))\n\n def log_metrics(self, setting, step, report):\n \"\"\"\n Add all metrics from tensorboard_metrics opt key.\n\n :param setting:\n One of train/valid/test. Will be used as the title for the graph.\n :param step:\n Number of parleys\n :param report:\n The report to log\n \"\"\"\n for k, v in report.items():\n if isinstance(v, numbers.Number):\n self.writer.add_scalar(f'{k}/{setting}', v, global_step=step)\n elif isinstance(v, Metric):\n self.writer.add_scalar(f'{k}/{setting}', v.value(), global_step=step)\n else:\n logging.error(f'k {k} v {v} is not a number')\n\n def flush(self):\n self.writer.flush()\n", "path": "parlai/core/logs.py"}, {"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\n__version__ = '0.9.1'\n", "path": "parlai/__init__.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\n\nimport sys\n\nfrom setuptools import setup, find_packages\n\nVERSION = '0.9.2' # if you update, update parlai/__init__.py too!\n\nif sys.version_info < (3, 6):\n sys.exit('Sorry, Python >=3.6 is required for ParlAI.')\n\nwith open('README.md', encoding=\"utf8\") as f:\n # strip the header and badges etc\n readme = f.read().split('--------------------')[-1]\n\nwith open('requirements.txt') as f:\n reqs = []\n for line in f:\n line = line.strip()\n reqs.append(line.split('==')[0])\n\n\nif __name__ == '__main__':\n setup(\n name='parlai',\n version=VERSION,\n description='Unified platform for dialogue research.',\n long_description=readme,\n long_description_content_type='text/markdown',\n url='http://parl.ai/',\n python_requires='>=3.6',\n packages=find_packages(\n exclude=('data', 'docs', 'examples', 'tests', 'parlai_internal*')\n ),\n install_requires=reqs,\n include_package_data=True,\n package_data={'': ['*.txt', '*.md']},\n entry_points={\n \"flake8.extension\": [\"PAI = parlai.utils.flake8:ParlAIChecker\"],\n \"console_scripts\": [\"parlai=parlai.__main__:main\"],\n },\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"License :: OSI Approved :: MIT License\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Natural Language :: English\",\n ],\n )\n", "path": "setup.py"}, {"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"\nLog metrics to tensorboard.\n\nThis file provides interface to log any metrics in tensorboard, could be\nextended to any other tool like visdom.\n\n.. code-block: none\n\n tensorboard --logdir <PARLAI_DATA/tensorboard> --port 8888.\n\"\"\"\n\nimport json\nimport numbers\nfrom parlai.core.opt import Opt\nfrom parlai.core.metrics import Metric\nfrom parlai.utils.io import PathManager\nimport parlai.utils.logging as logging\n\n\nclass TensorboardLogger(object):\n \"\"\"\n Log objects to tensorboard.\n \"\"\"\n\n @staticmethod\n def add_cmdline_args(argparser):\n \"\"\"\n Add tensorboard CLI args.\n \"\"\"\n logger = argparser.add_argument_group('Tensorboard Arguments')\n logger.add_argument(\n '-tblog',\n '--tensorboard-log',\n type='bool',\n default=False,\n help=\"Tensorboard logging of metrics, default is %(default)s\",\n hidden=False,\n )\n\n def __init__(self, opt: Opt):\n try:\n # tensorboard is a very expensive thing to import. Wait until the\n # last second to import it.\n from tensorboardX import SummaryWriter\n except ImportError:\n raise ImportError('Please run `pip install tensorboard tensorboardX`.')\n\n tbpath = opt['model_file'] + '.tensorboard'\n logging.debug(f'Saving tensorboard logs to: {tbpath}')\n if not PathManager.exists(tbpath):\n PathManager.mkdirs(tbpath)\n self.writer = SummaryWriter(tbpath, comment=json.dumps(opt))\n\n def log_metrics(self, setting, step, report):\n \"\"\"\n Add all metrics from tensorboard_metrics opt key.\n\n :param setting:\n One of train/valid/test. 
Will be used as the title for the graph.\n :param step:\n Number of parleys\n :param report:\n The report to log\n \"\"\"\n for k, v in report.items():\n if isinstance(v, numbers.Number):\n self.writer.add_scalar(f'{k}/{setting}', v, global_step=step)\n elif isinstance(v, Metric):\n self.writer.add_scalar(f'{k}/{setting}', v.value(), global_step=step)\n else:\n logging.error(f'k {k} v {v} is not a number')\n\n def flush(self):\n self.writer.flush()\n", "path": "parlai/core/logs.py"}, {"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\n__version__ = '0.9.2'\n", "path": "parlai/__init__.py"}]}
| 1,759 | 356 |
gh_patches_debug_17281
|
rasdani/github-patches
|
git_diff
|
kivy__kivy-3303
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't create package for windows with kivy 1.9 portable
I'm looking to port an existing kivy 1.8 project to kivy 1.9. I've just downloaded the portable version and have the application working.
However, when packaging the app using pyinstaller and following the instructions on http://kivy.org/docs/guide/packaging-windows.html, the app packages, but on execution it immediately fails with this error:
```
Traceback (most recent call last):
File "<string>", line 34, in <module>
ImportError: No module named pygame.pkgdata
```
I've tried using my old .spec file and generating a new one with exactly the same results.
I'm a bit mystified about where this is coming from, as pygame isn't imported anywhere in my application and I thought it had been replaced with sdl2 in kivy 1.9. I'm also confused because the application works when run directly.
Has anyone come across this issue, or can anyone point me in the right direction?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kivy/tools/packaging/pyinstaller_hooks/rt-hook-kivy.py`
Content:
```
1 from os.path import join, dirname
2 from os import environ, chdir, putenv
3 import sys
4
5 root = 'kivy_install'
6 if hasattr(sys, '_MEIPASS'):
7 # PyInstaller >= 1.6
8 chdir(sys._MEIPASS)
9 root = join(sys._MEIPASS, root)
10 elif '_MEIPASS2' in environ:
11 # PyInstaller < 1.6 (tested on 1.5 only)
12 chdir(environ['_MEIPASS2'])
13 root = join(environ['_MEIPASS2'], root)
14 else:
15 chdir(dirname(sys.argv[0]))
16 root = join(dirname(sys.argv[0]), root)
17
18
19 sys.path += [join(root, '_libs')]
20
21 if sys.platform == 'darwin':
22 sitepackages = join(root, '..', 'sitepackages')
23 sys.path += [sitepackages, join(sitepackages, 'gst-0.10')]
24 putenv('GST_REGISTRY_FORK', 'no')
25
26 environ['GST_PLUGIN_PATH'] = join(root, '..', 'gst-plugins')
27 environ['KIVY_DATA_DIR'] = join(root, 'data')
28 environ['KIVY_EXTS_DIR'] = join(root, 'extensions')
29 environ['KIVY_MODULES_DIR'] = join(root, 'modules')
30 environ['KIVY_EMBED'] = '1'
31
32 # Monkey-patch pygame to get around an issue with Pygame window icon and
33 # PyInstaller 2.1. See kivy issue #1638
34 # Uncomment the following to package pygame
35 #import pygame.pkgdata
36 #_original_getResource = pygame.pkgdata.getResource
37 #
38 #
39 #def getResource(identifier, *args, **kwargs):
40 # if identifier == 'pygame_icon.tiff':
41 # raise IOError()
42 # return _original_getResource(identifier, *args, **kwargs)
43 #pygame.pkgdata.getResource = getResource
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kivy/tools/packaging/pyinstaller_hooks/rt-hook-kivy.py b/kivy/tools/packaging/pyinstaller_hooks/rt-hook-kivy.py
--- a/kivy/tools/packaging/pyinstaller_hooks/rt-hook-kivy.py
+++ b/kivy/tools/packaging/pyinstaller_hooks/rt-hook-kivy.py
@@ -29,15 +29,17 @@
environ['KIVY_MODULES_DIR'] = join(root, 'modules')
environ['KIVY_EMBED'] = '1'
+
# Monkey-patch pygame to get around an issue with Pygame window icon and
# PyInstaller 2.1. See kivy issue #1638
-# Uncomment the following to package pygame
-#import pygame.pkgdata
-#_original_getResource = pygame.pkgdata.getResource
-#
-#
-#def getResource(identifier, *args, **kwargs):
-# if identifier == 'pygame_icon.tiff':
-# raise IOError()
-# return _original_getResource(identifier, *args, **kwargs)
-#pygame.pkgdata.getResource = getResource
+def getResource(identifier, *args, **kwargs):
+ if identifier == 'pygame_icon.tiff':
+ raise IOError()
+ return _original_getResource(identifier, *args, **kwargs)
+
+try:
+ import pygame.pkgdata
+ _original_getResource = pygame.pkgdata.getResource
+ pygame.pkgdata.getResource = getResource
+except ImportError:
+ pass
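The whole fix is the import guard: the runtime hook should only monkey-patch pygame when pygame is actually bundled, and an SDL2-only build simply skips the patch. Below is a standalone restatement of that pattern, with names changed slightly for clarity; it runs the same whether or not pygame is installed.

```python
def _patched_get_resource(identifier, *args, **kwargs):
    # Suppress only the window-icon lookup that misbehaves under PyInstaller;
    # every other resource request is passed through untouched.
    if identifier == 'pygame_icon.tiff':
        raise IOError()
    return _original_get_resource(identifier, *args, **kwargs)


try:
    import pygame.pkgdata
    _original_get_resource = pygame.pkgdata.getResource
    pygame.pkgdata.getResource = _patched_get_resource
except ImportError:
    # pygame is not part of this package (SDL2-only build): nothing to patch.
    pass
```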
|
{"golden_diff": "diff --git a/kivy/tools/packaging/pyinstaller_hooks/rt-hook-kivy.py b/kivy/tools/packaging/pyinstaller_hooks/rt-hook-kivy.py\n--- a/kivy/tools/packaging/pyinstaller_hooks/rt-hook-kivy.py\n+++ b/kivy/tools/packaging/pyinstaller_hooks/rt-hook-kivy.py\n@@ -29,15 +29,17 @@\n environ['KIVY_MODULES_DIR'] = join(root, 'modules')\n environ['KIVY_EMBED'] = '1'\n \n+\n # Monkey-patch pygame to get around an issue with Pygame window icon and\n # PyInstaller 2.1. See kivy issue #1638\n-# Uncomment the following to package pygame\n-#import pygame.pkgdata\n-#_original_getResource = pygame.pkgdata.getResource\n-#\n-#\n-#def getResource(identifier, *args, **kwargs):\n-# if identifier == 'pygame_icon.tiff':\n-# raise IOError()\n-# return _original_getResource(identifier, *args, **kwargs)\n-#pygame.pkgdata.getResource = getResource\n+def getResource(identifier, *args, **kwargs):\n+ if identifier == 'pygame_icon.tiff':\n+ raise IOError()\n+ return _original_getResource(identifier, *args, **kwargs)\n+\n+try:\n+ import pygame.pkgdata\n+ _original_getResource = pygame.pkgdata.getResource\n+ pygame.pkgdata.getResource = getResource\n+except ImportError:\n+ pass\n", "issue": "Can't create package for windows with kivy 1.9 portable\nI'm looking to port an existing kivy 1.8 project to kivy 1.9. I've just downloaded the portable version and have the application working.\n\nHowever when packaging the app using pyinstaller and the instructions on http://kivy.org/docs/guide/packaging-windows.html the app packages, but on execution immediately fails with error:\n\n```\nTraceback (most recent call last):\n File \"<string>\", line 34, in <module>\nImportError: No module named pygame.pkgdata\n```\n\nI've tried using my old .spec file and generating a new one with exactly the same results.\n\nI'm a bit mystified where this is coming from as pygame isn't imported anywhere in my application and I thought it had been replaced with sdl2 in kivy 1.9. I'm also confused that the application works when run directly.\n\nAnyone come across this issue or can point me in the right direction?\n\n", "before_files": [{"content": "from os.path import join, dirname\nfrom os import environ, chdir, putenv\nimport sys\n\nroot = 'kivy_install'\nif hasattr(sys, '_MEIPASS'):\n # PyInstaller >= 1.6\n chdir(sys._MEIPASS)\n root = join(sys._MEIPASS, root)\nelif '_MEIPASS2' in environ:\n # PyInstaller < 1.6 (tested on 1.5 only)\n chdir(environ['_MEIPASS2'])\n root = join(environ['_MEIPASS2'], root)\nelse:\n chdir(dirname(sys.argv[0]))\n root = join(dirname(sys.argv[0]), root)\n\n\nsys.path += [join(root, '_libs')]\n\nif sys.platform == 'darwin':\n sitepackages = join(root, '..', 'sitepackages')\n sys.path += [sitepackages, join(sitepackages, 'gst-0.10')]\n putenv('GST_REGISTRY_FORK', 'no')\n\nenviron['GST_PLUGIN_PATH'] = join(root, '..', 'gst-plugins')\nenviron['KIVY_DATA_DIR'] = join(root, 'data')\nenviron['KIVY_EXTS_DIR'] = join(root, 'extensions')\nenviron['KIVY_MODULES_DIR'] = join(root, 'modules')\nenviron['KIVY_EMBED'] = '1'\n\n# Monkey-patch pygame to get around an issue with Pygame window icon and\n# PyInstaller 2.1. 
See kivy issue #1638\n# Uncomment the following to package pygame\n#import pygame.pkgdata\n#_original_getResource = pygame.pkgdata.getResource\n#\n#\n#def getResource(identifier, *args, **kwargs):\n# if identifier == 'pygame_icon.tiff':\n# raise IOError()\n# return _original_getResource(identifier, *args, **kwargs)\n#pygame.pkgdata.getResource = getResource\n", "path": "kivy/tools/packaging/pyinstaller_hooks/rt-hook-kivy.py"}], "after_files": [{"content": "from os.path import join, dirname\nfrom os import environ, chdir, putenv\nimport sys\n\nroot = 'kivy_install'\nif hasattr(sys, '_MEIPASS'):\n # PyInstaller >= 1.6\n chdir(sys._MEIPASS)\n root = join(sys._MEIPASS, root)\nelif '_MEIPASS2' in environ:\n # PyInstaller < 1.6 (tested on 1.5 only)\n chdir(environ['_MEIPASS2'])\n root = join(environ['_MEIPASS2'], root)\nelse:\n chdir(dirname(sys.argv[0]))\n root = join(dirname(sys.argv[0]), root)\n\n\nsys.path += [join(root, '_libs')]\n\nif sys.platform == 'darwin':\n sitepackages = join(root, '..', 'sitepackages')\n sys.path += [sitepackages, join(sitepackages, 'gst-0.10')]\n putenv('GST_REGISTRY_FORK', 'no')\n\nenviron['GST_PLUGIN_PATH'] = join(root, '..', 'gst-plugins')\nenviron['KIVY_DATA_DIR'] = join(root, 'data')\nenviron['KIVY_EXTS_DIR'] = join(root, 'extensions')\nenviron['KIVY_MODULES_DIR'] = join(root, 'modules')\nenviron['KIVY_EMBED'] = '1'\n\n\n# Monkey-patch pygame to get around an issue with Pygame window icon and\n# PyInstaller 2.1. See kivy issue #1638\ndef getResource(identifier, *args, **kwargs):\n if identifier == 'pygame_icon.tiff':\n raise IOError()\n return _original_getResource(identifier, *args, **kwargs)\n\ntry:\n import pygame.pkgdata\n _original_getResource = pygame.pkgdata.getResource\n pygame.pkgdata.getResource = getResource\nexcept ImportError:\n pass\n", "path": "kivy/tools/packaging/pyinstaller_hooks/rt-hook-kivy.py"}]}
| 952 | 319 |
gh_patches_debug_20296
|
rasdani/github-patches
|
git_diff
|
frappe__hrms-1584
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IFSC Code showing wrong value in Bank Remittance Report
### Information about bug
IFSC Code showing wrong value in Bank Remittance Report. It is showing the same IFSC Code for all the employees in the list.
### Module
Payroll
### Version
ERPNext: v14.52.1 (HEAD)
Frappe Framework: v14.57.0 (HEAD)
Frappe HR: v14.18.1 (HEAD)
### Installation method
FrappeCloud
### Relevant log output / Stack trace / Full Error Message.
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hrms/payroll/report/bank_remittance/bank_remittance.py`
Content:
```
1 # Copyright (c) 2013, Frappe Technologies Pvt. Ltd. and contributors
2 # For license information, please see license.txt
3
4
5 import frappe
6 from frappe import _, get_all
7
8
9 def execute(filters=None):
10 columns = [
11 {
12 "label": _("Payroll Number"),
13 "fieldtype": "Link",
14 "fieldname": "payroll_no",
15 "options": "Payroll Entry",
16 "width": 150,
17 },
18 {
19 "label": _("Debit A/C Number"),
20 "fieldtype": "Int",
21 "fieldname": "debit_account",
22 "hidden": 1,
23 "width": 200,
24 },
25 {"label": _("Payment Date"), "fieldtype": "Data", "fieldname": "payment_date", "width": 100},
26 {
27 "label": _("Employee Name"),
28 "fieldtype": "Link",
29 "fieldname": "employee_name",
30 "options": "Employee",
31 "width": 200,
32 },
33 {"label": _("Bank Name"), "fieldtype": "Data", "fieldname": "bank_name", "width": 50},
34 {
35 "label": _("Employee A/C Number"),
36 "fieldtype": "Int",
37 "fieldname": "employee_account_no",
38 "width": 50,
39 },
40 ]
41
42 if frappe.db.has_column("Employee", "ifsc_code"):
43 columns.append(
44 {"label": _("IFSC Code"), "fieldtype": "Data", "fieldname": "bank_code", "width": 100}
45 )
46
47 columns += [
48 {"label": _("Currency"), "fieldtype": "Data", "fieldname": "currency", "width": 50},
49 {
50 "label": _("Net Salary Amount"),
51 "fieldtype": "Currency",
52 "options": "currency",
53 "fieldname": "amount",
54 "width": 100,
55 },
56 ]
57
58 data = []
59
60 accounts = get_bank_accounts()
61 payroll_entries = get_payroll_entries(accounts, filters)
62 salary_slips = get_salary_slips(payroll_entries)
63
64 if frappe.db.has_column("Employee", "ifsc_code"):
65 get_emp_bank_ifsc_code(salary_slips)
66
67 for salary in salary_slips:
68 if (
69 salary.bank_name
70 and salary.bank_account_no
71 and salary.debit_acc_no
72 and salary.status in ["Submitted", "Paid"]
73 ):
74 row = {
75 "payroll_no": salary.payroll_entry,
76 "debit_account": salary.debit_acc_no,
77 "payment_date": frappe.utils.formatdate(salary.modified.strftime("%Y-%m-%d")),
78 "bank_name": salary.bank_name,
79 "employee_account_no": salary.bank_account_no,
80 "bank_code": salary.ifsc_code,
81 "employee_name": salary.employee + ": " + salary.employee_name,
82 "currency": frappe.get_cached_value("Company", filters.company, "default_currency"),
83 "amount": salary.net_pay,
84 }
85 data.append(row)
86
87 return columns, data
88
89
90 def get_bank_accounts():
91 accounts = [d.name for d in get_all("Account", filters={"account_type": "Bank"})]
92 return accounts
93
94
95 def get_payroll_entries(accounts, filters):
96 payroll_filter = [
97 ("payment_account", "IN", accounts),
98 ("number_of_employees", ">", 0),
99 ("Company", "=", filters.company),
100 ]
101 if filters.to_date:
102 payroll_filter.append(("posting_date", "<", filters.to_date))
103
104 if filters.from_date:
105 payroll_filter.append(("posting_date", ">", filters.from_date))
106
107 entries = get_all("Payroll Entry", payroll_filter, ["name", "payment_account"])
108
109 payment_accounts = [d.payment_account for d in entries]
110 entries = set_company_account(payment_accounts, entries)
111 return entries
112
113
114 def get_salary_slips(payroll_entries):
115 payroll = [d.name for d in payroll_entries]
116 salary_slips = get_all(
117 "Salary Slip",
118 filters=[("payroll_entry", "IN", payroll)],
119 fields=[
120 "modified",
121 "net_pay",
122 "bank_name",
123 "bank_account_no",
124 "payroll_entry",
125 "employee",
126 "employee_name",
127 "status",
128 ],
129 )
130
131 payroll_entry_map = {}
132 for entry in payroll_entries:
133 payroll_entry_map[entry.name] = entry
134
135 # appending company debit accounts
136 for slip in salary_slips:
137 if slip.payroll_entry:
138 slip["debit_acc_no"] = payroll_entry_map[slip.payroll_entry]["company_account"]
139 else:
140 slip["debit_acc_no"] = None
141
142 return salary_slips
143
144
145 def get_emp_bank_ifsc_code(salary_slips):
146 emp_names = [d.employee for d in salary_slips]
147 ifsc_codes = get_all("Employee", [("name", "IN", emp_names)], ["ifsc_code", "name"])
148
149 ifsc_codes_map = {}
150 for code in ifsc_codes:
151 ifsc_codes_map[code.name] = code
152
153 for slip in salary_slips:
154 slip["ifsc_code"] = ifsc_codes_map[code.name]["ifsc_code"]
155
156 return salary_slips
157
158
159 def set_company_account(payment_accounts, payroll_entries):
160 company_accounts = get_all(
161 "Bank Account", [("account", "in", payment_accounts)], ["account", "bank_account_no"]
162 )
163 company_accounts_map = {}
164 for acc in company_accounts:
165 company_accounts_map[acc.account] = acc
166
167 for entry in payroll_entries:
168 company_account = ""
169 if entry.payment_account in company_accounts_map:
170 company_account = company_accounts_map[entry.payment_account]["bank_account_no"]
171 entry["company_account"] = company_account
172
173 return payroll_entries
174
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hrms/payroll/report/bank_remittance/bank_remittance.py b/hrms/payroll/report/bank_remittance/bank_remittance.py
--- a/hrms/payroll/report/bank_remittance/bank_remittance.py
+++ b/hrms/payroll/report/bank_remittance/bank_remittance.py
@@ -22,7 +22,12 @@
"hidden": 1,
"width": 200,
},
- {"label": _("Payment Date"), "fieldtype": "Data", "fieldname": "payment_date", "width": 100},
+ {
+ "label": _("Payment Date"),
+ "fieldtype": "Data",
+ "fieldname": "payment_date",
+ "width": 100,
+ },
{
"label": _("Employee Name"),
"fieldtype": "Link",
@@ -146,12 +151,10 @@
emp_names = [d.employee for d in salary_slips]
ifsc_codes = get_all("Employee", [("name", "IN", emp_names)], ["ifsc_code", "name"])
- ifsc_codes_map = {}
- for code in ifsc_codes:
- ifsc_codes_map[code.name] = code
+ ifsc_codes_map = {code.name: code.ifsc_code for code in ifsc_codes}
for slip in salary_slips:
- slip["ifsc_code"] = ifsc_codes_map[code.name]["ifsc_code"]
+ slip["ifsc_code"] = ifsc_codes_map[slip.employee]
return salary_slips
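To see why the comprehension matters, here is a tiny framework-free rendition of the fixed lookup. The employee IDs and IFSC codes below are invented for the example; in the report these rows come from the `get_all` calls shown above.

```python
# Invented sample rows standing in for the Employee and Salary Slip queries.
ifsc_codes = [
    {"name": "HR-EMP-0001", "ifsc_code": "HDFC0000101"},
    {"name": "HR-EMP-0002", "ifsc_code": "ICIC0000202"},
]
salary_slips = [
    {"employee": "HR-EMP-0001"},
    {"employee": "HR-EMP-0002"},
]

# Build the employee -> IFSC map once...
ifsc_codes_map = {row["name"]: row["ifsc_code"] for row in ifsc_codes}

# ...then key each slip by its own employee id, instead of reusing the loop
# variable left over from building the map (which is what produced the same
# IFSC code for every employee).
for slip in salary_slips:
    slip["ifsc_code"] = ifsc_codes_map[slip["employee"]]

assert [s["ifsc_code"] for s in salary_slips] == ["HDFC0000101", "ICIC0000202"]
```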
|
{"golden_diff": "diff --git a/hrms/payroll/report/bank_remittance/bank_remittance.py b/hrms/payroll/report/bank_remittance/bank_remittance.py\n--- a/hrms/payroll/report/bank_remittance/bank_remittance.py\n+++ b/hrms/payroll/report/bank_remittance/bank_remittance.py\n@@ -22,7 +22,12 @@\n \t\t\t\"hidden\": 1,\n \t\t\t\"width\": 200,\n \t\t},\n-\t\t{\"label\": _(\"Payment Date\"), \"fieldtype\": \"Data\", \"fieldname\": \"payment_date\", \"width\": 100},\n+\t\t{\n+\t\t\t\"label\": _(\"Payment Date\"),\n+\t\t\t\"fieldtype\": \"Data\",\n+\t\t\t\"fieldname\": \"payment_date\",\n+\t\t\t\"width\": 100,\n+\t\t},\n \t\t{\n \t\t\t\"label\": _(\"Employee Name\"),\n \t\t\t\"fieldtype\": \"Link\",\n@@ -146,12 +151,10 @@\n \temp_names = [d.employee for d in salary_slips]\n \tifsc_codes = get_all(\"Employee\", [(\"name\", \"IN\", emp_names)], [\"ifsc_code\", \"name\"])\n \n-\tifsc_codes_map = {}\n-\tfor code in ifsc_codes:\n-\t\tifsc_codes_map[code.name] = code\n+\tifsc_codes_map = {code.name: code.ifsc_code for code in ifsc_codes}\n \n \tfor slip in salary_slips:\n-\t\tslip[\"ifsc_code\"] = ifsc_codes_map[code.name][\"ifsc_code\"]\n+\t\tslip[\"ifsc_code\"] = ifsc_codes_map[slip.employee]\n \n \treturn salary_slips\n", "issue": "IFSC Code showing wrong value in Bank Remittance Report\n### Information about bug\n\nIFSC Code showing wrong value in Bank Remittance Report. It is showing the same IFSC Code for all the employee in the list.\n\n### Module\n\nPayroll\n\n### Version\n\nERPNext: v14.52.1 (HEAD)\r\nFrappe Framework: v14.57.0 (HEAD)\r\nFrappe HR: v14.18.1 (HEAD)\n\n### Installation method\n\nFrappeCloud\n\n### Relevant log output / Stack trace / Full Error Message.\n\n_No response_\n\n### Code of Conduct\n\n- [X] I agree to follow this project's Code of Conduct\n", "before_files": [{"content": "# Copyright (c) 2013, Frappe Technologies Pvt. Ltd. 
and contributors\n# For license information, please see license.txt\n\n\nimport frappe\nfrom frappe import _, get_all\n\n\ndef execute(filters=None):\n\tcolumns = [\n\t\t{\n\t\t\t\"label\": _(\"Payroll Number\"),\n\t\t\t\"fieldtype\": \"Link\",\n\t\t\t\"fieldname\": \"payroll_no\",\n\t\t\t\"options\": \"Payroll Entry\",\n\t\t\t\"width\": 150,\n\t\t},\n\t\t{\n\t\t\t\"label\": _(\"Debit A/C Number\"),\n\t\t\t\"fieldtype\": \"Int\",\n\t\t\t\"fieldname\": \"debit_account\",\n\t\t\t\"hidden\": 1,\n\t\t\t\"width\": 200,\n\t\t},\n\t\t{\"label\": _(\"Payment Date\"), \"fieldtype\": \"Data\", \"fieldname\": \"payment_date\", \"width\": 100},\n\t\t{\n\t\t\t\"label\": _(\"Employee Name\"),\n\t\t\t\"fieldtype\": \"Link\",\n\t\t\t\"fieldname\": \"employee_name\",\n\t\t\t\"options\": \"Employee\",\n\t\t\t\"width\": 200,\n\t\t},\n\t\t{\"label\": _(\"Bank Name\"), \"fieldtype\": \"Data\", \"fieldname\": \"bank_name\", \"width\": 50},\n\t\t{\n\t\t\t\"label\": _(\"Employee A/C Number\"),\n\t\t\t\"fieldtype\": \"Int\",\n\t\t\t\"fieldname\": \"employee_account_no\",\n\t\t\t\"width\": 50,\n\t\t},\n\t]\n\n\tif frappe.db.has_column(\"Employee\", \"ifsc_code\"):\n\t\tcolumns.append(\n\t\t\t{\"label\": _(\"IFSC Code\"), \"fieldtype\": \"Data\", \"fieldname\": \"bank_code\", \"width\": 100}\n\t\t)\n\n\tcolumns += [\n\t\t{\"label\": _(\"Currency\"), \"fieldtype\": \"Data\", \"fieldname\": \"currency\", \"width\": 50},\n\t\t{\n\t\t\t\"label\": _(\"Net Salary Amount\"),\n\t\t\t\"fieldtype\": \"Currency\",\n\t\t\t\"options\": \"currency\",\n\t\t\t\"fieldname\": \"amount\",\n\t\t\t\"width\": 100,\n\t\t},\n\t]\n\n\tdata = []\n\n\taccounts = get_bank_accounts()\n\tpayroll_entries = get_payroll_entries(accounts, filters)\n\tsalary_slips = get_salary_slips(payroll_entries)\n\n\tif frappe.db.has_column(\"Employee\", \"ifsc_code\"):\n\t\tget_emp_bank_ifsc_code(salary_slips)\n\n\tfor salary in salary_slips:\n\t\tif (\n\t\t\tsalary.bank_name\n\t\t\tand salary.bank_account_no\n\t\t\tand salary.debit_acc_no\n\t\t\tand salary.status in [\"Submitted\", \"Paid\"]\n\t\t):\n\t\t\trow = {\n\t\t\t\t\"payroll_no\": salary.payroll_entry,\n\t\t\t\t\"debit_account\": salary.debit_acc_no,\n\t\t\t\t\"payment_date\": frappe.utils.formatdate(salary.modified.strftime(\"%Y-%m-%d\")),\n\t\t\t\t\"bank_name\": salary.bank_name,\n\t\t\t\t\"employee_account_no\": salary.bank_account_no,\n\t\t\t\t\"bank_code\": salary.ifsc_code,\n\t\t\t\t\"employee_name\": salary.employee + \": \" + salary.employee_name,\n\t\t\t\t\"currency\": frappe.get_cached_value(\"Company\", filters.company, \"default_currency\"),\n\t\t\t\t\"amount\": salary.net_pay,\n\t\t\t}\n\t\t\tdata.append(row)\n\n\treturn columns, data\n\n\ndef get_bank_accounts():\n\taccounts = [d.name for d in get_all(\"Account\", filters={\"account_type\": \"Bank\"})]\n\treturn accounts\n\n\ndef get_payroll_entries(accounts, filters):\n\tpayroll_filter = [\n\t\t(\"payment_account\", \"IN\", accounts),\n\t\t(\"number_of_employees\", \">\", 0),\n\t\t(\"Company\", \"=\", filters.company),\n\t]\n\tif filters.to_date:\n\t\tpayroll_filter.append((\"posting_date\", \"<\", filters.to_date))\n\n\tif filters.from_date:\n\t\tpayroll_filter.append((\"posting_date\", \">\", filters.from_date))\n\n\tentries = get_all(\"Payroll Entry\", payroll_filter, [\"name\", \"payment_account\"])\n\n\tpayment_accounts = [d.payment_account for d in entries]\n\tentries = set_company_account(payment_accounts, entries)\n\treturn entries\n\n\ndef get_salary_slips(payroll_entries):\n\tpayroll = [d.name for d in 
payroll_entries]\n\tsalary_slips = get_all(\n\t\t\"Salary Slip\",\n\t\tfilters=[(\"payroll_entry\", \"IN\", payroll)],\n\t\tfields=[\n\t\t\t\"modified\",\n\t\t\t\"net_pay\",\n\t\t\t\"bank_name\",\n\t\t\t\"bank_account_no\",\n\t\t\t\"payroll_entry\",\n\t\t\t\"employee\",\n\t\t\t\"employee_name\",\n\t\t\t\"status\",\n\t\t],\n\t)\n\n\tpayroll_entry_map = {}\n\tfor entry in payroll_entries:\n\t\tpayroll_entry_map[entry.name] = entry\n\n\t# appending company debit accounts\n\tfor slip in salary_slips:\n\t\tif slip.payroll_entry:\n\t\t\tslip[\"debit_acc_no\"] = payroll_entry_map[slip.payroll_entry][\"company_account\"]\n\t\telse:\n\t\t\tslip[\"debit_acc_no\"] = None\n\n\treturn salary_slips\n\n\ndef get_emp_bank_ifsc_code(salary_slips):\n\temp_names = [d.employee for d in salary_slips]\n\tifsc_codes = get_all(\"Employee\", [(\"name\", \"IN\", emp_names)], [\"ifsc_code\", \"name\"])\n\n\tifsc_codes_map = {}\n\tfor code in ifsc_codes:\n\t\tifsc_codes_map[code.name] = code\n\n\tfor slip in salary_slips:\n\t\tslip[\"ifsc_code\"] = ifsc_codes_map[code.name][\"ifsc_code\"]\n\n\treturn salary_slips\n\n\ndef set_company_account(payment_accounts, payroll_entries):\n\tcompany_accounts = get_all(\n\t\t\"Bank Account\", [(\"account\", \"in\", payment_accounts)], [\"account\", \"bank_account_no\"]\n\t)\n\tcompany_accounts_map = {}\n\tfor acc in company_accounts:\n\t\tcompany_accounts_map[acc.account] = acc\n\n\tfor entry in payroll_entries:\n\t\tcompany_account = \"\"\n\t\tif entry.payment_account in company_accounts_map:\n\t\t\tcompany_account = company_accounts_map[entry.payment_account][\"bank_account_no\"]\n\t\tentry[\"company_account\"] = company_account\n\n\treturn payroll_entries\n", "path": "hrms/payroll/report/bank_remittance/bank_remittance.py"}], "after_files": [{"content": "# Copyright (c) 2013, Frappe Technologies Pvt. Ltd. 
and contributors\n# For license information, please see license.txt\n\n\nimport frappe\nfrom frappe import _, get_all\n\n\ndef execute(filters=None):\n\tcolumns = [\n\t\t{\n\t\t\t\"label\": _(\"Payroll Number\"),\n\t\t\t\"fieldtype\": \"Link\",\n\t\t\t\"fieldname\": \"payroll_no\",\n\t\t\t\"options\": \"Payroll Entry\",\n\t\t\t\"width\": 150,\n\t\t},\n\t\t{\n\t\t\t\"label\": _(\"Debit A/C Number\"),\n\t\t\t\"fieldtype\": \"Int\",\n\t\t\t\"fieldname\": \"debit_account\",\n\t\t\t\"hidden\": 1,\n\t\t\t\"width\": 200,\n\t\t},\n\t\t{\n\t\t\t\"label\": _(\"Payment Date\"),\n\t\t\t\"fieldtype\": \"Data\",\n\t\t\t\"fieldname\": \"payment_date\",\n\t\t\t\"width\": 100,\n\t\t},\n\t\t{\n\t\t\t\"label\": _(\"Employee Name\"),\n\t\t\t\"fieldtype\": \"Link\",\n\t\t\t\"fieldname\": \"employee_name\",\n\t\t\t\"options\": \"Employee\",\n\t\t\t\"width\": 200,\n\t\t},\n\t\t{\"label\": _(\"Bank Name\"), \"fieldtype\": \"Data\", \"fieldname\": \"bank_name\", \"width\": 50},\n\t\t{\n\t\t\t\"label\": _(\"Employee A/C Number\"),\n\t\t\t\"fieldtype\": \"Int\",\n\t\t\t\"fieldname\": \"employee_account_no\",\n\t\t\t\"width\": 50,\n\t\t},\n\t]\n\n\tif frappe.db.has_column(\"Employee\", \"ifsc_code\"):\n\t\tcolumns.append(\n\t\t\t{\"label\": _(\"IFSC Code\"), \"fieldtype\": \"Data\", \"fieldname\": \"bank_code\", \"width\": 100}\n\t\t)\n\n\tcolumns += [\n\t\t{\"label\": _(\"Currency\"), \"fieldtype\": \"Data\", \"fieldname\": \"currency\", \"width\": 50},\n\t\t{\n\t\t\t\"label\": _(\"Net Salary Amount\"),\n\t\t\t\"fieldtype\": \"Currency\",\n\t\t\t\"options\": \"currency\",\n\t\t\t\"fieldname\": \"amount\",\n\t\t\t\"width\": 100,\n\t\t},\n\t]\n\n\tdata = []\n\n\taccounts = get_bank_accounts()\n\tpayroll_entries = get_payroll_entries(accounts, filters)\n\tsalary_slips = get_salary_slips(payroll_entries)\n\n\tif frappe.db.has_column(\"Employee\", \"ifsc_code\"):\n\t\tget_emp_bank_ifsc_code(salary_slips)\n\n\tfor salary in salary_slips:\n\t\tif (\n\t\t\tsalary.bank_name\n\t\t\tand salary.bank_account_no\n\t\t\tand salary.debit_acc_no\n\t\t\tand salary.status in [\"Submitted\", \"Paid\"]\n\t\t):\n\t\t\trow = {\n\t\t\t\t\"payroll_no\": salary.payroll_entry,\n\t\t\t\t\"debit_account\": salary.debit_acc_no,\n\t\t\t\t\"payment_date\": frappe.utils.formatdate(salary.modified.strftime(\"%Y-%m-%d\")),\n\t\t\t\t\"bank_name\": salary.bank_name,\n\t\t\t\t\"employee_account_no\": salary.bank_account_no,\n\t\t\t\t\"bank_code\": salary.ifsc_code,\n\t\t\t\t\"employee_name\": salary.employee + \": \" + salary.employee_name,\n\t\t\t\t\"currency\": frappe.get_cached_value(\"Company\", filters.company, \"default_currency\"),\n\t\t\t\t\"amount\": salary.net_pay,\n\t\t\t}\n\t\t\tdata.append(row)\n\n\treturn columns, data\n\n\ndef get_bank_accounts():\n\taccounts = [d.name for d in get_all(\"Account\", filters={\"account_type\": \"Bank\"})]\n\treturn accounts\n\n\ndef get_payroll_entries(accounts, filters):\n\tpayroll_filter = [\n\t\t(\"payment_account\", \"IN\", accounts),\n\t\t(\"number_of_employees\", \">\", 0),\n\t\t(\"Company\", \"=\", filters.company),\n\t]\n\tif filters.to_date:\n\t\tpayroll_filter.append((\"posting_date\", \"<\", filters.to_date))\n\n\tif filters.from_date:\n\t\tpayroll_filter.append((\"posting_date\", \">\", filters.from_date))\n\n\tentries = get_all(\"Payroll Entry\", payroll_filter, [\"name\", \"payment_account\"])\n\n\tpayment_accounts = [d.payment_account for d in entries]\n\tentries = set_company_account(payment_accounts, entries)\n\treturn entries\n\n\ndef get_salary_slips(payroll_entries):\n\tpayroll = [d.name 
for d in payroll_entries]\n\tsalary_slips = get_all(\n\t\t\"Salary Slip\",\n\t\tfilters=[(\"payroll_entry\", \"IN\", payroll)],\n\t\tfields=[\n\t\t\t\"modified\",\n\t\t\t\"net_pay\",\n\t\t\t\"bank_name\",\n\t\t\t\"bank_account_no\",\n\t\t\t\"payroll_entry\",\n\t\t\t\"employee\",\n\t\t\t\"employee_name\",\n\t\t\t\"status\",\n\t\t],\n\t)\n\n\tpayroll_entry_map = {}\n\tfor entry in payroll_entries:\n\t\tpayroll_entry_map[entry.name] = entry\n\n\t# appending company debit accounts\n\tfor slip in salary_slips:\n\t\tif slip.payroll_entry:\n\t\t\tslip[\"debit_acc_no\"] = payroll_entry_map[slip.payroll_entry][\"company_account\"]\n\t\telse:\n\t\t\tslip[\"debit_acc_no\"] = None\n\n\treturn salary_slips\n\n\ndef get_emp_bank_ifsc_code(salary_slips):\n\temp_names = [d.employee for d in salary_slips]\n\tifsc_codes = get_all(\"Employee\", [(\"name\", \"IN\", emp_names)], [\"ifsc_code\", \"name\"])\n\n\tifsc_codes_map = {code.name: code.ifsc_code for code in ifsc_codes}\n\n\tfor slip in salary_slips:\n\t\tslip[\"ifsc_code\"] = ifsc_codes_map[slip.employee]\n\n\treturn salary_slips\n\n\ndef set_company_account(payment_accounts, payroll_entries):\n\tcompany_accounts = get_all(\n\t\t\"Bank Account\", [(\"account\", \"in\", payment_accounts)], [\"account\", \"bank_account_no\"]\n\t)\n\tcompany_accounts_map = {}\n\tfor acc in company_accounts:\n\t\tcompany_accounts_map[acc.account] = acc\n\n\tfor entry in payroll_entries:\n\t\tcompany_account = \"\"\n\t\tif entry.payment_account in company_accounts_map:\n\t\t\tcompany_account = company_accounts_map[entry.payment_account][\"bank_account_no\"]\n\t\tentry[\"company_account\"] = company_account\n\n\treturn payroll_entries\n", "path": "hrms/payroll/report/bank_remittance/bank_remittance.py"}]}
| 2,193 | 365 |
gh_patches_debug_21993
|
rasdani/github-patches
|
git_diff
|
vega__altair-2415
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add resource section in the documentation
What do you think about adding a section in the docs with links to and short descriptions of learning resources and packages related to / building on Altair?
I went through all GitHub repos mentioning Altair that were updated since Aug 2020 and labeled as either a Python or a Notebook repo. From that search, the resource section could look something like this:
---
# Resources
## Learning material
- [Jupyter Notebooks tutorials and examples](https://github.com/altair-viz/altair_notebooks)
- [A data visualization curriculum from the UW data group](https://uwdata.github.io/visualization-curriculum)
- [Altair tutorial given at PyCon 2018](https://altair-viz.github.io/altair-tutorial)
## Related packages
- [Vega-Lite - The higher-level visualization grammar that Altair implements in Python](https://vega.github.io/vega-lite/)
- [altair_saver - Enables saving charts to a variety of output types](https://github.com/altair-viz/altair_saver)
- [altair_data_server - Data transformer plugin that transparently serves data for charts](https://github.com/altair-viz/altair_data_server)
- [altair_pandas - Altair backend for the pandas plotting API](https://github.com/altair-viz/altair_pandas)
- [vega_datasets - Offline access to the Vega datasets used in the Altair documentation](https://github.com/altair-viz/vega_datasets)
- [nx_altair - Draw interactive NetworkX graphs with Altair](https://github.com/Zsailer/nx_altair)
- [gif - Create animated Altair gifs via loops](https://github.com/maxhumber/gif)
- [Vegawidget - R interface to Altair](https://vegawidget.github.io/altair/)
---
For full disclosure, I am working on a data visualization course using Altair at the University of British Columbia, as well as a wrapper package for making a few EDA plots quicker with Altair. When they are further along, I will suggest adding them here, but I am also happy if this stays as is and just makes the info easier to find.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sphinxext/altairgallery.py`
Content:
```
1 import hashlib
2 import os
3 import json
4 import random
5 import collections
6 from operator import itemgetter
7 import warnings
8 import shutil
9
10 import jinja2
11
12 from docutils import nodes
13 from docutils.statemachine import ViewList
14 from docutils.parsers.rst import Directive
15 from docutils.parsers.rst.directives import flag
16
17 from sphinx.util.nodes import nested_parse_with_titles
18
19 from .utils import (
20 get_docstring_and_rest,
21 prev_this_next,
22 create_thumbnail,
23 create_generic_image,
24 )
25 from altair.utils.execeval import eval_block
26 from tests.examples_arguments_syntax import iter_examples_arguments_syntax
27
28
29 EXAMPLE_MODULE = "altair.examples"
30
31
32 GALLERY_TEMPLATE = jinja2.Template(
33 """
34 .. This document is auto-generated by the altair-gallery extension. Do not modify directly.
35
36 .. _{{ gallery_ref }}:
37
38 {{ title }}
39 {% for char in title %}-{% endfor %}
40
41 This gallery contains a selection of examples of the plots Altair can create.
42
43 Some may seem fairly complicated at first glance, but they are built by combining a simple set of declarative building blocks.
44
45 Many draw upon sample datasets compiled by the `Vega <https://vega.github.io/vega/>`_ project. To access them yourself, install `vega_datasets <https://github.com/altair-viz/vega_datasets>`_.
46
47 .. code-block:: none
48
49 python -m pip install vega_datasets
50
51 {% for grouper, group in examples %}
52
53 .. _gallery-category-{{ grouper }}:
54
55 {{ grouper }}
56 {% for char in grouper %}~{% endfor %}
57
58 .. raw:: html
59
60 <span class="gallery">
61 {% for example in group %}
62 <a class="imagegroup" href="{{ example.name }}.html">
63 <span
64 class="image" alt="{{ example.title }}"
65 {% if example['use_svg'] %}
66 style="background-image: url(..{{ image_dir }}/{{ example.name }}-thumb.svg);"
67 {% else %}
68 style="background-image: url(..{{ image_dir }}/{{ example.name }}-thumb.png);"
69 {% endif %}
70 ></span>
71
72 <span class="image-title">{{ example.title }}</span>
73 </a>
74 {% endfor %}
75 </span>
76
77 <div style='clear:both;'></div>
78
79 {% endfor %}
80
81
82 .. toctree::
83 :maxdepth: 2
84 :caption: Examples
85 :hidden:
86
87 Gallery <self>
88 Tutorials <../case_studies/exploring-weather>
89 """
90 )
91
92 MINIGALLERY_TEMPLATE = jinja2.Template(
93 """
94 .. raw:: html
95
96 <div id="showcase">
97 <div class="examples">
98 {% for example in examples %}
99 <a
100 class="preview" href="{{ gallery_dir }}/{{ example.name }}.html"
101 {% if example['use_svg'] %}
102 style="background-image: url(.{{ image_dir }}/{{ example.name }}-thumb.svg)"
103 {% else %}
104 style="background-image: url(.{{ image_dir }}/{{ example.name }}-thumb.png)"
105 {% endif %}
106 ></a>
107 {% endfor %}
108 </div>
109 </div>
110 """
111 )
112
113
114 EXAMPLE_TEMPLATE = jinja2.Template(
115 """
116 :orphan:
117 :html_theme.sidebar_secondary.remove:
118
119 .. This document is auto-generated by the altair-gallery extension. Do not modify directly.
120
121 .. _gallery_{{ name }}:
122
123 {{ docstring }}
124
125 .. altair-plot::
126 {% if code_below %}:code-below:{% endif %}
127 {% if strict %}:strict:{% endif %}
128
129 {{ code | indent(4) }}
130 """
131 )
132
133
134 def save_example_pngs(examples, image_dir, make_thumbnails=True):
135 """Save example pngs and (optionally) thumbnails"""
136 if not os.path.exists(image_dir):
137 os.makedirs(image_dir)
138
139 # store hashes so that we know whether images need to be generated
140 hash_file = os.path.join(image_dir, "_image_hashes.json")
141
142 if os.path.exists(hash_file):
143 with open(hash_file) as f:
144 hashes = json.load(f)
145 else:
146 hashes = {}
147
148 for example in examples:
149 filename = example["name"] + (".svg" if example["use_svg"] else ".png")
150 image_file = os.path.join(image_dir, filename)
151
152 example_hash = hashlib.md5(example["code"].encode()).hexdigest()
153 hashes_match = hashes.get(filename, "") == example_hash
154
155 if hashes_match and os.path.exists(image_file):
156 print("-> using cached {}".format(image_file))
157 else:
158 # the file changed or the image file does not exist. Generate it.
159 print("-> saving {}".format(image_file))
160 chart = eval_block(example["code"])
161 try:
162 chart.save(image_file)
163 hashes[filename] = example_hash
164 except ImportError:
165 warnings.warn("Unable to save image: using generic image")
166 create_generic_image(image_file)
167
168 with open(hash_file, "w") as f:
169 json.dump(hashes, f)
170
171 if make_thumbnails:
172 params = example.get("galleryParameters", {})
173 if example["use_svg"]:
174 # Thumbnail for SVG is identical to original image
175 thumb_file = os.path.join(image_dir, example["name"] + "-thumb.svg")
176 shutil.copyfile(image_file, thumb_file)
177 else:
178 thumb_file = os.path.join(image_dir, example["name"] + "-thumb.png")
179 create_thumbnail(image_file, thumb_file, **params)
180
181 # Save hashes so we know whether we need to re-generate plots
182 with open(hash_file, "w") as f:
183 json.dump(hashes, f)
184
185
186 def populate_examples(**kwds):
187 """Iterate through Altair examples and extract code"""
188
189 examples = sorted(iter_examples_arguments_syntax(), key=itemgetter("name"))
190
191 for example in examples:
192 docstring, category, code, lineno = get_docstring_and_rest(example["filename"])
193 example.update(kwds)
194 if category is None:
195 raise Exception(
196 f"The example {example['name']} is not assigned to a category"
197 )
198 example.update(
199 {
200 "docstring": docstring,
201 "title": docstring.strip().split("\n")[0],
202 "code": code,
203 "category": category.title(),
204 "lineno": lineno,
205 }
206 )
207
208 return examples
209
210
211 class AltairMiniGalleryDirective(Directive):
212 has_content = False
213
214 option_spec = {
215 "size": int,
216 "names": str,
217 "indices": lambda x: list(map(int, x.split())),
218 "shuffle": flag,
219 "seed": int,
220 "titles": bool,
221 "width": str,
222 }
223
224 def run(self):
225 size = self.options.get("size", 15)
226 names = [name.strip() for name in self.options.get("names", "").split(",")]
227 indices = self.options.get("indices", [])
228 shuffle = "shuffle" in self.options
229 seed = self.options.get("seed", 42)
230 titles = self.options.get("titles", False)
231 width = self.options.get("width", None)
232
233 env = self.state.document.settings.env
234 app = env.app
235
236 gallery_dir = app.builder.config.altair_gallery_dir
237
238 examples = populate_examples()
239
240 if names:
241 if len(names) < size:
242 raise ValueError(
243 "altair-minigallery: if names are specified, "
244 "the list must be at least as long as size."
245 )
246 mapping = {example["name"]: example for example in examples}
247 examples = [mapping[name] for name in names]
248 else:
249 if indices:
250 examples = [examples[i] for i in indices]
251 if shuffle:
252 random.seed(seed)
253 random.shuffle(examples)
254 if size:
255 examples = examples[:size]
256
257 include = MINIGALLERY_TEMPLATE.render(
258 image_dir="/_static",
259 gallery_dir=gallery_dir,
260 examples=examples,
261 titles=titles,
262 width=width,
263 )
264
265 # parse and return documentation
266 result = ViewList()
267 for line in include.split("\n"):
268 result.append(line, "<altair-minigallery>")
269 node = nodes.paragraph()
270 node.document = self.state.document
271 nested_parse_with_titles(self.state, result, node)
272
273 return node.children
274
275
276 def main(app):
277 gallery_dir = app.builder.config.altair_gallery_dir
278 target_dir = os.path.join(app.builder.srcdir, gallery_dir)
279 image_dir = os.path.join(app.builder.srcdir, "_images")
280
281 gallery_ref = app.builder.config.altair_gallery_ref
282 gallery_title = app.builder.config.altair_gallery_title
283 examples = populate_examples(gallery_ref=gallery_ref, code_below=True, strict=False)
284
285 if not os.path.exists(target_dir):
286 os.makedirs(target_dir)
287
288 examples = sorted(examples, key=lambda x: x["title"])
289 examples_toc = collections.OrderedDict(
290 {
291 "Simple Charts": [],
292 "Bar Charts": [],
293 "Line Charts": [],
294 "Area Charts": [],
295 "Circular Plots": [],
296 "Scatter Plots": [],
297 "Uncertainties And Trends": [],
298 "Distributions": [],
299 "Tables": [],
300 "Maps": [],
301 "Interactive Charts": [],
302 "Advanced Calculations": [],
303 "Case Studies": [],
304 }
305 )
306 for d in examples:
307 examples_toc[d["category"]].append(d)
308
309 # Write the gallery index file
310 with open(os.path.join(target_dir, "index.rst"), "w") as f:
311 f.write(
312 GALLERY_TEMPLATE.render(
313 title=gallery_title,
314 examples=examples_toc.items(),
315 image_dir="/_static",
316 gallery_ref=gallery_ref,
317 )
318 )
319
320 # save the images to file
321 save_example_pngs(examples, image_dir)
322
323 # Write the individual example files
324 for prev_ex, example, next_ex in prev_this_next(examples):
325 if prev_ex:
326 example["prev_ref"] = "gallery_{name}".format(**prev_ex)
327 if next_ex:
328 example["next_ref"] = "gallery_{name}".format(**next_ex)
329 target_filename = os.path.join(target_dir, example["name"] + ".rst")
330 with open(os.path.join(target_filename), "w", encoding="utf-8") as f:
331 f.write(EXAMPLE_TEMPLATE.render(example))
332
333
334 def setup(app):
335 app.connect("builder-inited", main)
336 app.add_css_file("altair-gallery.css")
337 app.add_config_value("altair_gallery_dir", "gallery", "env")
338 app.add_config_value("altair_gallery_ref", "example-gallery", "env")
339 app.add_config_value("altair_gallery_title", "Example Gallery", "env")
340 app.add_directive_to_domain("py", "altair-minigallery", AltairMiniGalleryDirective)
341
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sphinxext/altairgallery.py b/sphinxext/altairgallery.py
--- a/sphinxext/altairgallery.py
+++ b/sphinxext/altairgallery.py
@@ -38,9 +38,7 @@
{{ title }}
{% for char in title %}-{% endfor %}
-This gallery contains a selection of examples of the plots Altair can create.
-
-Some may seem fairly complicated at first glance, but they are built by combining a simple set of declarative building blocks.
+This gallery contains a selection of examples of the plots Altair can create. Some may seem fairly complicated at first glance, but they are built by combining a simple set of declarative building blocks.
Many draw upon sample datasets compiled by the `Vega <https://vega.github.io/vega/>`_ project. To access them yourself, install `vega_datasets <https://github.com/altair-viz/vega_datasets>`_.
@@ -48,6 +46,8 @@
python -m pip install vega_datasets
+If you can't find the plots you are looking for here, make sure to check out the :ref:`altair-ecosystem` section, which has links to packages for making e.g. network diagrams and animations.
+
{% for grouper, group in examples %}
.. _gallery-category-{{ grouper }}:
|
{"golden_diff": "diff --git a/sphinxext/altairgallery.py b/sphinxext/altairgallery.py\n--- a/sphinxext/altairgallery.py\n+++ b/sphinxext/altairgallery.py\n@@ -38,9 +38,7 @@\n {{ title }}\n {% for char in title %}-{% endfor %}\n \n-This gallery contains a selection of examples of the plots Altair can create.\n-\n-Some may seem fairly complicated at first glance, but they are built by combining a simple set of declarative building blocks.\n+This gallery contains a selection of examples of the plots Altair can create. Some may seem fairly complicated at first glance, but they are built by combining a simple set of declarative building blocks.\n \n Many draw upon sample datasets compiled by the `Vega <https://vega.github.io/vega/>`_ project. To access them yourself, install `vega_datasets <https://github.com/altair-viz/vega_datasets>`_.\n \n@@ -48,6 +46,8 @@\n \n python -m pip install vega_datasets\n \n+If you can't find the plots you are looking for here, make sure to check out the :ref:`altair-ecosystem` section, which has links to packages for making e.g. network diagrams and animations.\n+\n {% for grouper, group in examples %}\n \n .. _gallery-category-{{ grouper }}:\n", "issue": "Add resource section in the documentation\nWhat do you think about adding a section in the docs with links to and short descriptions of learning resources and packages related to / building on Altair?\r\n\r\nI went through all Github repos mentioning Altair that was updated since Aug 2020 and labeled as either a Python and Notebook repo. From that search, the resource section could look something like this:\r\n\r\n---\r\n\r\n# Resources\r\n\r\n## Learning material\r\n\r\n- [Jupyter Notebooks tutorials and examples](https://github.com/altair-viz/altair_notebooks)\r\n- [A data visualization curriculum from the UW data group](https://uwdata.github.io/visualization-curriculum)\r\n- [Altair tutorial given at PyCon 2018](https://altair-viz.github.io/altair-tutorial)\r\n\r\n## Related packages\r\n\r\n- [Vega-Lite - The higher-level visualization grammar that Altair implements in Python](https://vega.github.io/vega-lite/)\r\n- [altair_saver - Enables saving charts to a variety of output types](https://github.com/altair-viz/altair_saver)\r\n- [altair_data_server - Data transformer plugin that transparently serves data for charts](https://github.com/altair-viz/altair_data_server)\r\n- [altair_pandas - Altair backend for the pandas plotting API](https://github.com/altair-viz/altair_pandas)\r\n- [vega_datasets - Offline access to the Vega datasets used in the Altair documentation](https://github.com/altair-viz/vega_datasets)\r\n- [nx_altair - Draw interactive NetworkX graphs with Altair](https://github.com/Zsailer/nx_altair)\r\n- [gif - Create animated Altair gifs via loops](https://github.com/maxhumber/gif)\r\n- [Vegawidget - R interface to Altair](https://vegawidget.github.io/altair/)\r\n\r\n---\r\n\r\nFor full disclosure, I am working on a data visualization course using Altair at the University of British Columbia and a wrapper package for making a few EDA plots quicker with Altair. 
When they are further along, I will suggest adding them here, but also happy if this stays as is and just makes the info easier to find.\r\n\r\n\n", "before_files": [{"content": "import hashlib\nimport os\nimport json\nimport random\nimport collections\nfrom operator import itemgetter\nimport warnings\nimport shutil\n\nimport jinja2\n\nfrom docutils import nodes\nfrom docutils.statemachine import ViewList\nfrom docutils.parsers.rst import Directive\nfrom docutils.parsers.rst.directives import flag\n\nfrom sphinx.util.nodes import nested_parse_with_titles\n\nfrom .utils import (\n get_docstring_and_rest,\n prev_this_next,\n create_thumbnail,\n create_generic_image,\n)\nfrom altair.utils.execeval import eval_block\nfrom tests.examples_arguments_syntax import iter_examples_arguments_syntax\n\n\nEXAMPLE_MODULE = \"altair.examples\"\n\n\nGALLERY_TEMPLATE = jinja2.Template(\n \"\"\"\n.. This document is auto-generated by the altair-gallery extension. Do not modify directly.\n\n.. _{{ gallery_ref }}:\n\n{{ title }}\n{% for char in title %}-{% endfor %}\n\nThis gallery contains a selection of examples of the plots Altair can create.\n\nSome may seem fairly complicated at first glance, but they are built by combining a simple set of declarative building blocks.\n\nMany draw upon sample datasets compiled by the `Vega <https://vega.github.io/vega/>`_ project. To access them yourself, install `vega_datasets <https://github.com/altair-viz/vega_datasets>`_.\n\n.. code-block:: none\n\n python -m pip install vega_datasets\n\n{% for grouper, group in examples %}\n\n.. _gallery-category-{{ grouper }}:\n\n{{ grouper }}\n{% for char in grouper %}~{% endfor %}\n\n.. raw:: html\n\n <span class=\"gallery\">\n {% for example in group %}\n <a class=\"imagegroup\" href=\"{{ example.name }}.html\">\n <span\n class=\"image\" alt=\"{{ example.title }}\"\n{% if example['use_svg'] %}\n style=\"background-image: url(..{{ image_dir }}/{{ example.name }}-thumb.svg);\"\n{% else %}\n style=\"background-image: url(..{{ image_dir }}/{{ example.name }}-thumb.png);\"\n{% endif %}\n ></span>\n\n <span class=\"image-title\">{{ example.title }}</span>\n </a>\n {% endfor %}\n </span>\n\n <div style='clear:both;'></div>\n\n{% endfor %}\n\n\n.. toctree::\n :maxdepth: 2\n :caption: Examples\n :hidden:\n\n Gallery <self>\n Tutorials <../case_studies/exploring-weather>\n\"\"\"\n)\n\nMINIGALLERY_TEMPLATE = jinja2.Template(\n \"\"\"\n.. raw:: html\n\n <div id=\"showcase\">\n <div class=\"examples\">\n {% for example in examples %}\n <a\n class=\"preview\" href=\"{{ gallery_dir }}/{{ example.name }}.html\"\n{% if example['use_svg'] %}\n style=\"background-image: url(.{{ image_dir }}/{{ example.name }}-thumb.svg)\"\n{% else %}\n style=\"background-image: url(.{{ image_dir }}/{{ example.name }}-thumb.png)\"\n{% endif %}\n ></a>\n {% endfor %}\n </div>\n </div>\n\"\"\"\n)\n\n\nEXAMPLE_TEMPLATE = jinja2.Template(\n \"\"\"\n:orphan:\n:html_theme.sidebar_secondary.remove:\n\n.. This document is auto-generated by the altair-gallery extension. Do not modify directly.\n\n.. _gallery_{{ name }}:\n\n{{ docstring }}\n\n.. 
altair-plot::\n {% if code_below %}:code-below:{% endif %}\n {% if strict %}:strict:{% endif %}\n\n {{ code | indent(4) }}\n\"\"\"\n)\n\n\ndef save_example_pngs(examples, image_dir, make_thumbnails=True):\n \"\"\"Save example pngs and (optionally) thumbnails\"\"\"\n if not os.path.exists(image_dir):\n os.makedirs(image_dir)\n\n # store hashes so that we know whether images need to be generated\n hash_file = os.path.join(image_dir, \"_image_hashes.json\")\n\n if os.path.exists(hash_file):\n with open(hash_file) as f:\n hashes = json.load(f)\n else:\n hashes = {}\n\n for example in examples:\n filename = example[\"name\"] + (\".svg\" if example[\"use_svg\"] else \".png\")\n image_file = os.path.join(image_dir, filename)\n\n example_hash = hashlib.md5(example[\"code\"].encode()).hexdigest()\n hashes_match = hashes.get(filename, \"\") == example_hash\n\n if hashes_match and os.path.exists(image_file):\n print(\"-> using cached {}\".format(image_file))\n else:\n # the file changed or the image file does not exist. Generate it.\n print(\"-> saving {}\".format(image_file))\n chart = eval_block(example[\"code\"])\n try:\n chart.save(image_file)\n hashes[filename] = example_hash\n except ImportError:\n warnings.warn(\"Unable to save image: using generic image\")\n create_generic_image(image_file)\n\n with open(hash_file, \"w\") as f:\n json.dump(hashes, f)\n\n if make_thumbnails:\n params = example.get(\"galleryParameters\", {})\n if example[\"use_svg\"]:\n # Thumbnail for SVG is identical to original image\n thumb_file = os.path.join(image_dir, example[\"name\"] + \"-thumb.svg\")\n shutil.copyfile(image_file, thumb_file)\n else:\n thumb_file = os.path.join(image_dir, example[\"name\"] + \"-thumb.png\")\n create_thumbnail(image_file, thumb_file, **params)\n\n # Save hashes so we know whether we need to re-generate plots\n with open(hash_file, \"w\") as f:\n json.dump(hashes, f)\n\n\ndef populate_examples(**kwds):\n \"\"\"Iterate through Altair examples and extract code\"\"\"\n\n examples = sorted(iter_examples_arguments_syntax(), key=itemgetter(\"name\"))\n\n for example in examples:\n docstring, category, code, lineno = get_docstring_and_rest(example[\"filename\"])\n example.update(kwds)\n if category is None:\n raise Exception(\n f\"The example {example['name']} is not assigned to a category\"\n )\n example.update(\n {\n \"docstring\": docstring,\n \"title\": docstring.strip().split(\"\\n\")[0],\n \"code\": code,\n \"category\": category.title(),\n \"lineno\": lineno,\n }\n )\n\n return examples\n\n\nclass AltairMiniGalleryDirective(Directive):\n has_content = False\n\n option_spec = {\n \"size\": int,\n \"names\": str,\n \"indices\": lambda x: list(map(int, x.split())),\n \"shuffle\": flag,\n \"seed\": int,\n \"titles\": bool,\n \"width\": str,\n }\n\n def run(self):\n size = self.options.get(\"size\", 15)\n names = [name.strip() for name in self.options.get(\"names\", \"\").split(\",\")]\n indices = self.options.get(\"indices\", [])\n shuffle = \"shuffle\" in self.options\n seed = self.options.get(\"seed\", 42)\n titles = self.options.get(\"titles\", False)\n width = self.options.get(\"width\", None)\n\n env = self.state.document.settings.env\n app = env.app\n\n gallery_dir = app.builder.config.altair_gallery_dir\n\n examples = populate_examples()\n\n if names:\n if len(names) < size:\n raise ValueError(\n \"altair-minigallery: if names are specified, \"\n \"the list must be at least as long as size.\"\n )\n mapping = {example[\"name\"]: example for example in examples}\n examples = 
[mapping[name] for name in names]\n else:\n if indices:\n examples = [examples[i] for i in indices]\n if shuffle:\n random.seed(seed)\n random.shuffle(examples)\n if size:\n examples = examples[:size]\n\n include = MINIGALLERY_TEMPLATE.render(\n image_dir=\"/_static\",\n gallery_dir=gallery_dir,\n examples=examples,\n titles=titles,\n width=width,\n )\n\n # parse and return documentation\n result = ViewList()\n for line in include.split(\"\\n\"):\n result.append(line, \"<altair-minigallery>\")\n node = nodes.paragraph()\n node.document = self.state.document\n nested_parse_with_titles(self.state, result, node)\n\n return node.children\n\n\ndef main(app):\n gallery_dir = app.builder.config.altair_gallery_dir\n target_dir = os.path.join(app.builder.srcdir, gallery_dir)\n image_dir = os.path.join(app.builder.srcdir, \"_images\")\n\n gallery_ref = app.builder.config.altair_gallery_ref\n gallery_title = app.builder.config.altair_gallery_title\n examples = populate_examples(gallery_ref=gallery_ref, code_below=True, strict=False)\n\n if not os.path.exists(target_dir):\n os.makedirs(target_dir)\n\n examples = sorted(examples, key=lambda x: x[\"title\"])\n examples_toc = collections.OrderedDict(\n {\n \"Simple Charts\": [],\n \"Bar Charts\": [],\n \"Line Charts\": [],\n \"Area Charts\": [],\n \"Circular Plots\": [],\n \"Scatter Plots\": [],\n \"Uncertainties And Trends\": [],\n \"Distributions\": [],\n \"Tables\": [],\n \"Maps\": [],\n \"Interactive Charts\": [],\n \"Advanced Calculations\": [],\n \"Case Studies\": [],\n }\n )\n for d in examples:\n examples_toc[d[\"category\"]].append(d)\n\n # Write the gallery index file\n with open(os.path.join(target_dir, \"index.rst\"), \"w\") as f:\n f.write(\n GALLERY_TEMPLATE.render(\n title=gallery_title,\n examples=examples_toc.items(),\n image_dir=\"/_static\",\n gallery_ref=gallery_ref,\n )\n )\n\n # save the images to file\n save_example_pngs(examples, image_dir)\n\n # Write the individual example files\n for prev_ex, example, next_ex in prev_this_next(examples):\n if prev_ex:\n example[\"prev_ref\"] = \"gallery_{name}\".format(**prev_ex)\n if next_ex:\n example[\"next_ref\"] = \"gallery_{name}\".format(**next_ex)\n target_filename = os.path.join(target_dir, example[\"name\"] + \".rst\")\n with open(os.path.join(target_filename), \"w\", encoding=\"utf-8\") as f:\n f.write(EXAMPLE_TEMPLATE.render(example))\n\n\ndef setup(app):\n app.connect(\"builder-inited\", main)\n app.add_css_file(\"altair-gallery.css\")\n app.add_config_value(\"altair_gallery_dir\", \"gallery\", \"env\")\n app.add_config_value(\"altair_gallery_ref\", \"example-gallery\", \"env\")\n app.add_config_value(\"altair_gallery_title\", \"Example Gallery\", \"env\")\n app.add_directive_to_domain(\"py\", \"altair-minigallery\", AltairMiniGalleryDirective)\n", "path": "sphinxext/altairgallery.py"}], "after_files": [{"content": "import hashlib\nimport os\nimport json\nimport random\nimport collections\nfrom operator import itemgetter\nimport warnings\nimport shutil\n\nimport jinja2\n\nfrom docutils import nodes\nfrom docutils.statemachine import ViewList\nfrom docutils.parsers.rst import Directive\nfrom docutils.parsers.rst.directives import flag\n\nfrom sphinx.util.nodes import nested_parse_with_titles\n\nfrom .utils import (\n get_docstring_and_rest,\n prev_this_next,\n create_thumbnail,\n create_generic_image,\n)\nfrom altair.utils.execeval import eval_block\nfrom tests.examples_arguments_syntax import iter_examples_arguments_syntax\n\n\nEXAMPLE_MODULE = 
\"altair.examples\"\n\n\nGALLERY_TEMPLATE = jinja2.Template(\n \"\"\"\n.. This document is auto-generated by the altair-gallery extension. Do not modify directly.\n\n.. _{{ gallery_ref }}:\n\n{{ title }}\n{% for char in title %}-{% endfor %}\n\nThis gallery contains a selection of examples of the plots Altair can create. Some may seem fairly complicated at first glance, but they are built by combining a simple set of declarative building blocks.\n\nMany draw upon sample datasets compiled by the `Vega <https://vega.github.io/vega/>`_ project. To access them yourself, install `vega_datasets <https://github.com/altair-viz/vega_datasets>`_.\n\n.. code-block:: none\n\n python -m pip install vega_datasets\n\nIf you can't find the plots you are looking for here, make sure to check out the :ref:`altair-ecosystem` section, which has links to packages for making e.g. network diagrams and animations.\n\n{% for grouper, group in examples %}\n\n.. _gallery-category-{{ grouper }}:\n\n{{ grouper }}\n{% for char in grouper %}~{% endfor %}\n\n.. raw:: html\n\n <span class=\"gallery\">\n {% for example in group %}\n <a class=\"imagegroup\" href=\"{{ example.name }}.html\">\n <span\n class=\"image\" alt=\"{{ example.title }}\"\n{% if example['use_svg'] %}\n style=\"background-image: url(..{{ image_dir }}/{{ example.name }}-thumb.svg);\"\n{% else %}\n style=\"background-image: url(..{{ image_dir }}/{{ example.name }}-thumb.png);\"\n{% endif %}\n ></span>\n\n <span class=\"image-title\">{{ example.title }}</span>\n </a>\n {% endfor %}\n </span>\n\n <div style='clear:both;'></div>\n\n{% endfor %}\n\n\n.. toctree::\n :maxdepth: 2\n :caption: Examples\n :hidden:\n\n Gallery <self>\n Tutorials <../case_studies/exploring-weather>\n\"\"\"\n)\n\nMINIGALLERY_TEMPLATE = jinja2.Template(\n \"\"\"\n.. raw:: html\n\n <div id=\"showcase\">\n <div class=\"examples\">\n {% for example in examples %}\n <a\n class=\"preview\" href=\"{{ gallery_dir }}/{{ example.name }}.html\"\n{% if example['use_svg'] %}\n style=\"background-image: url(.{{ image_dir }}/{{ example.name }}-thumb.svg)\"\n{% else %}\n style=\"background-image: url(.{{ image_dir }}/{{ example.name }}-thumb.png)\"\n{% endif %}\n ></a>\n {% endfor %}\n </div>\n </div>\n\"\"\"\n)\n\n\nEXAMPLE_TEMPLATE = jinja2.Template(\n \"\"\"\n:orphan:\n:html_theme.sidebar_secondary.remove:\n\n.. This document is auto-generated by the altair-gallery extension. Do not modify directly.\n\n.. _gallery_{{ name }}:\n\n{{ docstring }}\n\n.. altair-plot::\n {% if code_below %}:code-below:{% endif %}\n {% if strict %}:strict:{% endif %}\n\n {{ code | indent(4) }}\n\"\"\"\n)\n\n\ndef save_example_pngs(examples, image_dir, make_thumbnails=True):\n \"\"\"Save example pngs and (optionally) thumbnails\"\"\"\n if not os.path.exists(image_dir):\n os.makedirs(image_dir)\n\n # store hashes so that we know whether images need to be generated\n hash_file = os.path.join(image_dir, \"_image_hashes.json\")\n\n if os.path.exists(hash_file):\n with open(hash_file) as f:\n hashes = json.load(f)\n else:\n hashes = {}\n\n for example in examples:\n filename = example[\"name\"] + (\".svg\" if example[\"use_svg\"] else \".png\")\n image_file = os.path.join(image_dir, filename)\n\n example_hash = hashlib.md5(example[\"code\"].encode()).hexdigest()\n hashes_match = hashes.get(filename, \"\") == example_hash\n\n if hashes_match and os.path.exists(image_file):\n print(\"-> using cached {}\".format(image_file))\n else:\n # the file changed or the image file does not exist. 
Generate it.\n print(\"-> saving {}\".format(image_file))\n chart = eval_block(example[\"code\"])\n try:\n chart.save(image_file)\n hashes[filename] = example_hash\n except ImportError:\n warnings.warn(\"Unable to save image: using generic image\")\n create_generic_image(image_file)\n\n with open(hash_file, \"w\") as f:\n json.dump(hashes, f)\n\n if make_thumbnails:\n params = example.get(\"galleryParameters\", {})\n if example[\"use_svg\"]:\n # Thumbnail for SVG is identical to original image\n thumb_file = os.path.join(image_dir, example[\"name\"] + \"-thumb.svg\")\n shutil.copyfile(image_file, thumb_file)\n else:\n thumb_file = os.path.join(image_dir, example[\"name\"] + \"-thumb.png\")\n create_thumbnail(image_file, thumb_file, **params)\n\n # Save hashes so we know whether we need to re-generate plots\n with open(hash_file, \"w\") as f:\n json.dump(hashes, f)\n\n\ndef populate_examples(**kwds):\n \"\"\"Iterate through Altair examples and extract code\"\"\"\n\n examples = sorted(iter_examples_arguments_syntax(), key=itemgetter(\"name\"))\n\n for example in examples:\n docstring, category, code, lineno = get_docstring_and_rest(example[\"filename\"])\n example.update(kwds)\n if category is None:\n raise Exception(\n f\"The example {example['name']} is not assigned to a category\"\n )\n example.update(\n {\n \"docstring\": docstring,\n \"title\": docstring.strip().split(\"\\n\")[0],\n \"code\": code,\n \"category\": category.title(),\n \"lineno\": lineno,\n }\n )\n\n return examples\n\n\nclass AltairMiniGalleryDirective(Directive):\n has_content = False\n\n option_spec = {\n \"size\": int,\n \"names\": str,\n \"indices\": lambda x: list(map(int, x.split())),\n \"shuffle\": flag,\n \"seed\": int,\n \"titles\": bool,\n \"width\": str,\n }\n\n def run(self):\n size = self.options.get(\"size\", 15)\n names = [name.strip() for name in self.options.get(\"names\", \"\").split(\",\")]\n indices = self.options.get(\"indices\", [])\n shuffle = \"shuffle\" in self.options\n seed = self.options.get(\"seed\", 42)\n titles = self.options.get(\"titles\", False)\n width = self.options.get(\"width\", None)\n\n env = self.state.document.settings.env\n app = env.app\n\n gallery_dir = app.builder.config.altair_gallery_dir\n\n examples = populate_examples()\n\n if names:\n if len(names) < size:\n raise ValueError(\n \"altair-minigallery: if names are specified, \"\n \"the list must be at least as long as size.\"\n )\n mapping = {example[\"name\"]: example for example in examples}\n examples = [mapping[name] for name in names]\n else:\n if indices:\n examples = [examples[i] for i in indices]\n if shuffle:\n random.seed(seed)\n random.shuffle(examples)\n if size:\n examples = examples[:size]\n\n include = MINIGALLERY_TEMPLATE.render(\n image_dir=\"/_static\",\n gallery_dir=gallery_dir,\n examples=examples,\n titles=titles,\n width=width,\n )\n\n # parse and return documentation\n result = ViewList()\n for line in include.split(\"\\n\"):\n result.append(line, \"<altair-minigallery>\")\n node = nodes.paragraph()\n node.document = self.state.document\n nested_parse_with_titles(self.state, result, node)\n\n return node.children\n\n\ndef main(app):\n gallery_dir = app.builder.config.altair_gallery_dir\n target_dir = os.path.join(app.builder.srcdir, gallery_dir)\n image_dir = os.path.join(app.builder.srcdir, \"_images\")\n\n gallery_ref = app.builder.config.altair_gallery_ref\n gallery_title = app.builder.config.altair_gallery_title\n examples = populate_examples(gallery_ref=gallery_ref, code_below=True, 
strict=False)\n\n if not os.path.exists(target_dir):\n os.makedirs(target_dir)\n\n examples = sorted(examples, key=lambda x: x[\"title\"])\n examples_toc = collections.OrderedDict(\n {\n \"Simple Charts\": [],\n \"Bar Charts\": [],\n \"Line Charts\": [],\n \"Area Charts\": [],\n \"Circular Plots\": [],\n \"Scatter Plots\": [],\n \"Uncertainties And Trends\": [],\n \"Distributions\": [],\n \"Tables\": [],\n \"Maps\": [],\n \"Interactive Charts\": [],\n \"Advanced Calculations\": [],\n \"Case Studies\": [],\n }\n )\n for d in examples:\n examples_toc[d[\"category\"]].append(d)\n\n # Write the gallery index file\n with open(os.path.join(target_dir, \"index.rst\"), \"w\") as f:\n f.write(\n GALLERY_TEMPLATE.render(\n title=gallery_title,\n examples=examples_toc.items(),\n image_dir=\"/_static\",\n gallery_ref=gallery_ref,\n )\n )\n\n # save the images to file\n save_example_pngs(examples, image_dir)\n\n # Write the individual example files\n for prev_ex, example, next_ex in prev_this_next(examples):\n if prev_ex:\n example[\"prev_ref\"] = \"gallery_{name}\".format(**prev_ex)\n if next_ex:\n example[\"next_ref\"] = \"gallery_{name}\".format(**next_ex)\n target_filename = os.path.join(target_dir, example[\"name\"] + \".rst\")\n with open(os.path.join(target_filename), \"w\", encoding=\"utf-8\") as f:\n f.write(EXAMPLE_TEMPLATE.render(example))\n\n\ndef setup(app):\n app.connect(\"builder-inited\", main)\n app.add_css_file(\"altair-gallery.css\")\n app.add_config_value(\"altair_gallery_dir\", \"gallery\", \"env\")\n app.add_config_value(\"altair_gallery_ref\", \"example-gallery\", \"env\")\n app.add_config_value(\"altair_gallery_title\", \"Example Gallery\", \"env\")\n app.add_directive_to_domain(\"py\", \"altair-minigallery\", AltairMiniGalleryDirective)\n", "path": "sphinxext/altairgallery.py"}]}
| 4,013 | 299 |
gh_patches_debug_54111
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-505
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
urls in parens don't parse as links
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/outgoing.py`
Content:
```
1 ''' handles all the activity coming out of the server '''
2 import re
3
4 from django.db import IntegrityError, transaction
5 from django.http import JsonResponse
6 from django.shortcuts import get_object_or_404
7 from django.views.decorators.csrf import csrf_exempt
8 from django.views.decorators.http import require_GET
9 from markdown import markdown
10 from requests import HTTPError
11
12 from bookwyrm import activitypub
13 from bookwyrm import models
14 from bookwyrm.connectors import get_data, ConnectorException
15 from bookwyrm.broadcast import broadcast
16 from bookwyrm.sanitize_html import InputHtmlParser
17 from bookwyrm.status import create_notification
18 from bookwyrm.status import create_generated_note
19 from bookwyrm.status import delete_status
20 from bookwyrm.settings import DOMAIN
21 from bookwyrm.utils import regex
22
23
24 @csrf_exempt
25 @require_GET
26 def outbox(request, username):
27 ''' outbox for the requested user '''
28 user = get_object_or_404(models.User, localname=username)
29 filter_type = request.GET.get('type')
30 if filter_type not in models.status_models:
31 filter_type = None
32
33 return JsonResponse(
34 user.to_outbox(**request.GET, filter_type=filter_type),
35 encoder=activitypub.ActivityEncoder
36 )
37
38
39 def handle_remote_webfinger(query):
40 ''' webfingerin' other servers '''
41 user = None
42
43 # usernames could be @user@domain or user@domain
44 if not query:
45 return None
46
47 if query[0] == '@':
48 query = query[1:]
49
50 try:
51 domain = query.split('@')[1]
52 except IndexError:
53 return None
54
55 try:
56 user = models.User.objects.get(username=query)
57 except models.User.DoesNotExist:
58 url = 'https://%s/.well-known/webfinger?resource=acct:%s' % \
59 (domain, query)
60 try:
61 data = get_data(url)
62 except (ConnectorException, HTTPError):
63 return None
64
65 for link in data.get('links'):
66 if link.get('rel') == 'self':
67 try:
68 user = activitypub.resolve_remote_id(
69 models.User, link['href']
70 )
71 except KeyError:
72 return None
73 return user
74
75
76 def handle_follow(user, to_follow):
77 ''' someone local wants to follow someone '''
78 relationship, _ = models.UserFollowRequest.objects.get_or_create(
79 user_subject=user,
80 user_object=to_follow,
81 )
82 activity = relationship.to_activity()
83 broadcast(user, activity, privacy='direct', direct_recipients=[to_follow])
84
85
86 def handle_unfollow(user, to_unfollow):
87 ''' someone local wants to follow someone '''
88 relationship = models.UserFollows.objects.get(
89 user_subject=user,
90 user_object=to_unfollow
91 )
92 activity = relationship.to_undo_activity(user)
93 broadcast(user, activity, privacy='direct', direct_recipients=[to_unfollow])
94 to_unfollow.followers.remove(user)
95
96
97 def handle_accept(follow_request):
98 ''' send an acceptance message to a follow request '''
99 user = follow_request.user_subject
100 to_follow = follow_request.user_object
101 with transaction.atomic():
102 relationship = models.UserFollows.from_request(follow_request)
103 follow_request.delete()
104 relationship.save()
105
106 activity = relationship.to_accept_activity()
107 broadcast(to_follow, activity, privacy='direct', direct_recipients=[user])
108
109
110 def handle_reject(follow_request):
111 ''' a local user who managed follows rejects a follow request '''
112 user = follow_request.user_subject
113 to_follow = follow_request.user_object
114 activity = follow_request.to_reject_activity()
115 follow_request.delete()
116 broadcast(to_follow, activity, privacy='direct', direct_recipients=[user])
117
118
119 def handle_shelve(user, book, shelf):
120 ''' a local user is getting a book put on their shelf '''
121 # update the database
122 shelve = models.ShelfBook(book=book, shelf=shelf, added_by=user)
123 shelve.save()
124
125 broadcast(user, shelve.to_add_activity(user))
126
127
128 def handle_unshelve(user, book, shelf):
129 ''' a local user is getting a book put on their shelf '''
130 # update the database
131 row = models.ShelfBook.objects.get(book=book, shelf=shelf)
132 activity = row.to_remove_activity(user)
133 row.delete()
134
135 broadcast(user, activity)
136
137
138 def handle_reading_status(user, shelf, book, privacy):
139 ''' post about a user reading a book '''
140 # tell the world about this cool thing that happened
141 try:
142 message = {
143 'to-read': 'wants to read',
144 'reading': 'started reading',
145 'read': 'finished reading'
146 }[shelf.identifier]
147 except KeyError:
148 # it's a non-standard shelf, don't worry about it
149 return
150
151 status = create_generated_note(
152 user,
153 message,
154 mention_books=[book],
155 privacy=privacy
156 )
157 status.save()
158
159 broadcast(user, status.to_create_activity(user))
160
161
162 def handle_imported_book(user, item, include_reviews, privacy):
163 ''' process a goodreads csv and then post about it '''
164 if isinstance(item.book, models.Work):
165 item.book = item.book.default_edition
166 if not item.book:
167 return
168
169 existing_shelf = models.ShelfBook.objects.filter(
170 book=item.book, added_by=user).exists()
171
172 # shelve the book if it hasn't been shelved already
173 if item.shelf and not existing_shelf:
174 desired_shelf = models.Shelf.objects.get(
175 identifier=item.shelf,
176 user=user
177 )
178 shelf_book = models.ShelfBook.objects.create(
179 book=item.book, shelf=desired_shelf, added_by=user)
180 broadcast(user, shelf_book.to_add_activity(user), privacy=privacy)
181
182 for read in item.reads:
183 read.book = item.book
184 read.user = user
185 read.save()
186
187 if include_reviews and (item.rating or item.review):
188 review_title = 'Review of {!r} on Goodreads'.format(
189 item.book.title,
190 ) if item.review else ''
191
192 # we don't know the publication date of the review,
193 # but "now" is a bad guess
194 published_date_guess = item.date_read or item.date_added
195 review = models.Review.objects.create(
196 user=user,
197 book=item.book,
198 name=review_title,
199 content=item.review,
200 rating=item.rating,
201 published_date=published_date_guess,
202 privacy=privacy,
203 )
204 # we don't need to send out pure activities because non-bookwyrm
205 # instances don't need this data
206 broadcast(user, review.to_create_activity(user), privacy=privacy)
207
208
209 def handle_delete_status(user, status):
210 ''' delete a status and broadcast deletion to other servers '''
211 delete_status(status)
212 broadcast(user, status.to_delete_activity(user))
213
214
215 def handle_status(user, form):
216 ''' generic handler for statuses '''
217 status = form.save(commit=False)
218 if not status.sensitive and status.content_warning:
219 # the cw text field remains populated when you click "remove"
220 status.content_warning = None
221 status.save()
222
223 # inspect the text for user tags
224 content = status.content
225 for (mention_text, mention_user) in find_mentions(content):
226 # add them to status mentions fk
227 status.mention_users.add(mention_user)
228
229 # turn the mention into a link
230 content = re.sub(
231 r'%s([^@]|$)' % mention_text,
232 r'<a href="%s">%s</a>\g<1>' % \
233 (mention_user.remote_id, mention_text),
234 content)
235
236 # add reply parent to mentions and notify
237 if status.reply_parent:
238 status.mention_users.add(status.reply_parent.user)
239 for mention_user in status.reply_parent.mention_users.all():
240 status.mention_users.add(mention_user)
241
242 if status.reply_parent.user.local:
243 create_notification(
244 status.reply_parent.user,
245 'REPLY',
246 related_user=user,
247 related_status=status
248 )
249
250 # deduplicate mentions
251 status.mention_users.set(set(status.mention_users.all()))
252 # create mention notifications
253 for mention_user in status.mention_users.all():
254 if status.reply_parent and mention_user == status.reply_parent.user:
255 continue
256 if mention_user.local:
257 create_notification(
258 mention_user,
259 'MENTION',
260 related_user=user,
261 related_status=status
262 )
263
264 # don't apply formatting to generated notes
265 if not isinstance(status, models.GeneratedNote):
266 status.content = to_markdown(content)
267 # do apply formatting to quotes
268 if hasattr(status, 'quote'):
269 status.quote = to_markdown(status.quote)
270
271 status.save()
272
273 broadcast(user, status.to_create_activity(user), software='bookwyrm')
274
275 # re-format the activity for non-bookwyrm servers
276 remote_activity = status.to_create_activity(user, pure=True)
277 broadcast(user, remote_activity, software='other')
278
279
280 def find_mentions(content):
281 ''' detect @mentions in raw status content '''
282 for match in re.finditer(regex.strict_username, content):
283 username = match.group().strip().split('@')[1:]
284 if len(username) == 1:
285 # this looks like a local user (@user), fill in the domain
286 username.append(DOMAIN)
287 username = '@'.join(username)
288
289 mention_user = handle_remote_webfinger(username)
290 if not mention_user:
291 # we can ignore users we don't know about
292 continue
293 yield (match.group(), mention_user)
294
295
296 def format_links(content):
297 ''' detect and format links '''
298 return re.sub(
299 r'([^(href=")]|^)(https?:\/\/(%s([\w\.\-_\/+&\?=:;,])*))' % \
300 regex.domain,
301 r'\g<1><a href="\g<2>">\g<3></a>',
302 content)
303
304 def to_markdown(content):
305 ''' catch links and convert to markdown '''
306 content = format_links(content)
307 content = markdown(content)
308 # sanitize resulting html
309 sanitizer = InputHtmlParser()
310 sanitizer.feed(content)
311 return sanitizer.get_output()
312
313
314 def handle_favorite(user, status):
315 ''' a user likes a status '''
316 try:
317 favorite = models.Favorite.objects.create(
318 status=status,
319 user=user
320 )
321 except IntegrityError:
322 # you already fav'ed that
323 return
324
325 fav_activity = favorite.to_activity()
326 broadcast(
327 user, fav_activity, privacy='direct', direct_recipients=[status.user])
328 create_notification(
329 status.user,
330 'FAVORITE',
331 related_user=user,
332 related_status=status
333 )
334
335
336 def handle_unfavorite(user, status):
337 ''' a user likes a status '''
338 try:
339 favorite = models.Favorite.objects.get(
340 status=status,
341 user=user
342 )
343 except models.Favorite.DoesNotExist:
344 # can't find that status, idk
345 return
346
347 fav_activity = favorite.to_undo_activity(user)
348 favorite.delete()
349 broadcast(user, fav_activity, direct_recipients=[status.user])
350
351
352 def handle_boost(user, status):
353 ''' a user wishes to boost a status '''
354 # is it boostable?
355 if not status.boostable:
356 return
357
358 if models.Boost.objects.filter(
359 boosted_status=status, user=user).exists():
360 # you already boosted that.
361 return
362 boost = models.Boost.objects.create(
363 boosted_status=status,
364 privacy=status.privacy,
365 user=user,
366 )
367
368 boost_activity = boost.to_activity()
369 broadcast(user, boost_activity)
370
371 create_notification(
372 status.user,
373 'BOOST',
374 related_user=user,
375 related_status=status
376 )
377
378
379 def handle_unboost(user, status):
380 ''' a user regrets boosting a status '''
381 boost = models.Boost.objects.filter(
382 boosted_status=status, user=user
383 ).first()
384 activity = boost.to_undo_activity(user)
385
386 boost.delete()
387 broadcast(user, activity)
388
389
390 def handle_update_book_data(user, item):
391 ''' broadcast the news about our book '''
392 broadcast(user, item.to_update_activity(user))
393
394
395 def handle_update_user(user):
396 ''' broadcast editing a user's profile '''
397 broadcast(user, user.to_update_activity(user))
398
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bookwyrm/outgoing.py b/bookwyrm/outgoing.py
--- a/bookwyrm/outgoing.py
+++ b/bookwyrm/outgoing.py
@@ -296,7 +296,7 @@
def format_links(content):
''' detect and format links '''
return re.sub(
- r'([^(href=")]|^)(https?:\/\/(%s([\w\.\-_\/+&\?=:;,])*))' % \
+ r'([^(href=")]|^|\()(https?:\/\/(%s([\w\.\-_\/+&\?=:;,])*))' % \
regex.domain,
r'\g<1><a href="\g<2>">\g<3></a>',
content)
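
The added `|\(` alternative in the diff above is what lets a URL that opens inside parentheses match at all: the leading group must consume the character immediately before `https`, and the original character class `[^(href=")]` excludes a literal `(`. A minimal sketch of the behaviour difference follows; `DOMAIN_RE` is an assumed stand-in for `bookwyrm.utils.regex.domain`, which is not shown in the excerpt, so treat the exact output as illustrative only.

```python
import re

# Assumed stand-in for bookwyrm.utils.regex.domain (not shown in the excerpt above).
DOMAIN_RE = r'[\w\-\.]+\.[a-z]{2,}'

OLD = r'([^(href=")]|^)(https?:\/\/(%s([\w\.\-_\/+&\?=:;,])*))' % DOMAIN_RE
NEW = r'([^(href=")]|^|\()(https?:\/\/(%s([\w\.\-_\/+&\?=:;,])*))' % DOMAIN_RE
REPL = r'\g<1><a href="\g<2>">\g<3></a>'

text = 'see the docs (https://example.com/guide) for details'

# Old pattern: group 1 must match the single character before the URL, but its
# character class excludes "(", so a parenthesised URL never matches at all.
print(re.sub(OLD, REPL, text))
# see the docs (https://example.com/guide) for details

# New pattern: the "\(" alternative accepts the opening parenthesis and re-emits
# it through \g<1>, so the URL inside the parentheses gets wrapped in an anchor.
print(re.sub(NEW, REPL, text))
# see the docs (<a href="https://example.com/guide">example.com/guide</a>) for details
```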
|
{"golden_diff": "diff --git a/bookwyrm/outgoing.py b/bookwyrm/outgoing.py\n--- a/bookwyrm/outgoing.py\n+++ b/bookwyrm/outgoing.py\n@@ -296,7 +296,7 @@\n def format_links(content):\n ''' detect and format links '''\n return re.sub(\n- r'([^(href=\")]|^)(https?:\\/\\/(%s([\\w\\.\\-_\\/+&\\?=:;,])*))' % \\\n+ r'([^(href=\")]|^|\\()(https?:\\/\\/(%s([\\w\\.\\-_\\/+&\\?=:;,])*))' % \\\n regex.domain,\n r'\\g<1><a href=\"\\g<2>\">\\g<3></a>',\n content)\n", "issue": "urls in parens don't parse as links\n\n", "before_files": [{"content": "''' handles all the activity coming out of the server '''\nimport re\n\nfrom django.db import IntegrityError, transaction\nfrom django.http import JsonResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.decorators.http import require_GET\nfrom markdown import markdown\nfrom requests import HTTPError\n\nfrom bookwyrm import activitypub\nfrom bookwyrm import models\nfrom bookwyrm.connectors import get_data, ConnectorException\nfrom bookwyrm.broadcast import broadcast\nfrom bookwyrm.sanitize_html import InputHtmlParser\nfrom bookwyrm.status import create_notification\nfrom bookwyrm.status import create_generated_note\nfrom bookwyrm.status import delete_status\nfrom bookwyrm.settings import DOMAIN\nfrom bookwyrm.utils import regex\n\n\n@csrf_exempt\n@require_GET\ndef outbox(request, username):\n ''' outbox for the requested user '''\n user = get_object_or_404(models.User, localname=username)\n filter_type = request.GET.get('type')\n if filter_type not in models.status_models:\n filter_type = None\n\n return JsonResponse(\n user.to_outbox(**request.GET, filter_type=filter_type),\n encoder=activitypub.ActivityEncoder\n )\n\n\ndef handle_remote_webfinger(query):\n ''' webfingerin' other servers '''\n user = None\n\n # usernames could be @user@domain or user@domain\n if not query:\n return None\n\n if query[0] == '@':\n query = query[1:]\n\n try:\n domain = query.split('@')[1]\n except IndexError:\n return None\n\n try:\n user = models.User.objects.get(username=query)\n except models.User.DoesNotExist:\n url = 'https://%s/.well-known/webfinger?resource=acct:%s' % \\\n (domain, query)\n try:\n data = get_data(url)\n except (ConnectorException, HTTPError):\n return None\n\n for link in data.get('links'):\n if link.get('rel') == 'self':\n try:\n user = activitypub.resolve_remote_id(\n models.User, link['href']\n )\n except KeyError:\n return None\n return user\n\n\ndef handle_follow(user, to_follow):\n ''' someone local wants to follow someone '''\n relationship, _ = models.UserFollowRequest.objects.get_or_create(\n user_subject=user,\n user_object=to_follow,\n )\n activity = relationship.to_activity()\n broadcast(user, activity, privacy='direct', direct_recipients=[to_follow])\n\n\ndef handle_unfollow(user, to_unfollow):\n ''' someone local wants to follow someone '''\n relationship = models.UserFollows.objects.get(\n user_subject=user,\n user_object=to_unfollow\n )\n activity = relationship.to_undo_activity(user)\n broadcast(user, activity, privacy='direct', direct_recipients=[to_unfollow])\n to_unfollow.followers.remove(user)\n\n\ndef handle_accept(follow_request):\n ''' send an acceptance message to a follow request '''\n user = follow_request.user_subject\n to_follow = follow_request.user_object\n with transaction.atomic():\n relationship = models.UserFollows.from_request(follow_request)\n follow_request.delete()\n relationship.save()\n\n activity = relationship.to_accept_activity()\n 
broadcast(to_follow, activity, privacy='direct', direct_recipients=[user])\n\n\ndef handle_reject(follow_request):\n ''' a local user who managed follows rejects a follow request '''\n user = follow_request.user_subject\n to_follow = follow_request.user_object\n activity = follow_request.to_reject_activity()\n follow_request.delete()\n broadcast(to_follow, activity, privacy='direct', direct_recipients=[user])\n\n\ndef handle_shelve(user, book, shelf):\n ''' a local user is getting a book put on their shelf '''\n # update the database\n shelve = models.ShelfBook(book=book, shelf=shelf, added_by=user)\n shelve.save()\n\n broadcast(user, shelve.to_add_activity(user))\n\n\ndef handle_unshelve(user, book, shelf):\n ''' a local user is getting a book put on their shelf '''\n # update the database\n row = models.ShelfBook.objects.get(book=book, shelf=shelf)\n activity = row.to_remove_activity(user)\n row.delete()\n\n broadcast(user, activity)\n\n\ndef handle_reading_status(user, shelf, book, privacy):\n ''' post about a user reading a book '''\n # tell the world about this cool thing that happened\n try:\n message = {\n 'to-read': 'wants to read',\n 'reading': 'started reading',\n 'read': 'finished reading'\n }[shelf.identifier]\n except KeyError:\n # it's a non-standard shelf, don't worry about it\n return\n\n status = create_generated_note(\n user,\n message,\n mention_books=[book],\n privacy=privacy\n )\n status.save()\n\n broadcast(user, status.to_create_activity(user))\n\n\ndef handle_imported_book(user, item, include_reviews, privacy):\n ''' process a goodreads csv and then post about it '''\n if isinstance(item.book, models.Work):\n item.book = item.book.default_edition\n if not item.book:\n return\n\n existing_shelf = models.ShelfBook.objects.filter(\n book=item.book, added_by=user).exists()\n\n # shelve the book if it hasn't been shelved already\n if item.shelf and not existing_shelf:\n desired_shelf = models.Shelf.objects.get(\n identifier=item.shelf,\n user=user\n )\n shelf_book = models.ShelfBook.objects.create(\n book=item.book, shelf=desired_shelf, added_by=user)\n broadcast(user, shelf_book.to_add_activity(user), privacy=privacy)\n\n for read in item.reads:\n read.book = item.book\n read.user = user\n read.save()\n\n if include_reviews and (item.rating or item.review):\n review_title = 'Review of {!r} on Goodreads'.format(\n item.book.title,\n ) if item.review else ''\n\n # we don't know the publication date of the review,\n # but \"now\" is a bad guess\n published_date_guess = item.date_read or item.date_added\n review = models.Review.objects.create(\n user=user,\n book=item.book,\n name=review_title,\n content=item.review,\n rating=item.rating,\n published_date=published_date_guess,\n privacy=privacy,\n )\n # we don't need to send out pure activities because non-bookwyrm\n # instances don't need this data\n broadcast(user, review.to_create_activity(user), privacy=privacy)\n\n\ndef handle_delete_status(user, status):\n ''' delete a status and broadcast deletion to other servers '''\n delete_status(status)\n broadcast(user, status.to_delete_activity(user))\n\n\ndef handle_status(user, form):\n ''' generic handler for statuses '''\n status = form.save(commit=False)\n if not status.sensitive and status.content_warning:\n # the cw text field remains populated when you click \"remove\"\n status.content_warning = None\n status.save()\n\n # inspect the text for user tags\n content = status.content\n for (mention_text, mention_user) in find_mentions(content):\n # add them to status 
mentions fk\n status.mention_users.add(mention_user)\n\n # turn the mention into a link\n content = re.sub(\n r'%s([^@]|$)' % mention_text,\n r'<a href=\"%s\">%s</a>\\g<1>' % \\\n (mention_user.remote_id, mention_text),\n content)\n\n # add reply parent to mentions and notify\n if status.reply_parent:\n status.mention_users.add(status.reply_parent.user)\n for mention_user in status.reply_parent.mention_users.all():\n status.mention_users.add(mention_user)\n\n if status.reply_parent.user.local:\n create_notification(\n status.reply_parent.user,\n 'REPLY',\n related_user=user,\n related_status=status\n )\n\n # deduplicate mentions\n status.mention_users.set(set(status.mention_users.all()))\n # create mention notifications\n for mention_user in status.mention_users.all():\n if status.reply_parent and mention_user == status.reply_parent.user:\n continue\n if mention_user.local:\n create_notification(\n mention_user,\n 'MENTION',\n related_user=user,\n related_status=status\n )\n\n # don't apply formatting to generated notes\n if not isinstance(status, models.GeneratedNote):\n status.content = to_markdown(content)\n # do apply formatting to quotes\n if hasattr(status, 'quote'):\n status.quote = to_markdown(status.quote)\n\n status.save()\n\n broadcast(user, status.to_create_activity(user), software='bookwyrm')\n\n # re-format the activity for non-bookwyrm servers\n remote_activity = status.to_create_activity(user, pure=True)\n broadcast(user, remote_activity, software='other')\n\n\ndef find_mentions(content):\n ''' detect @mentions in raw status content '''\n for match in re.finditer(regex.strict_username, content):\n username = match.group().strip().split('@')[1:]\n if len(username) == 1:\n # this looks like a local user (@user), fill in the domain\n username.append(DOMAIN)\n username = '@'.join(username)\n\n mention_user = handle_remote_webfinger(username)\n if not mention_user:\n # we can ignore users we don't know about\n continue\n yield (match.group(), mention_user)\n\n\ndef format_links(content):\n ''' detect and format links '''\n return re.sub(\n r'([^(href=\")]|^)(https?:\\/\\/(%s([\\w\\.\\-_\\/+&\\?=:;,])*))' % \\\n regex.domain,\n r'\\g<1><a href=\"\\g<2>\">\\g<3></a>',\n content)\n\ndef to_markdown(content):\n ''' catch links and convert to markdown '''\n content = format_links(content)\n content = markdown(content)\n # sanitize resulting html\n sanitizer = InputHtmlParser()\n sanitizer.feed(content)\n return sanitizer.get_output()\n\n\ndef handle_favorite(user, status):\n ''' a user likes a status '''\n try:\n favorite = models.Favorite.objects.create(\n status=status,\n user=user\n )\n except IntegrityError:\n # you already fav'ed that\n return\n\n fav_activity = favorite.to_activity()\n broadcast(\n user, fav_activity, privacy='direct', direct_recipients=[status.user])\n create_notification(\n status.user,\n 'FAVORITE',\n related_user=user,\n related_status=status\n )\n\n\ndef handle_unfavorite(user, status):\n ''' a user likes a status '''\n try:\n favorite = models.Favorite.objects.get(\n status=status,\n user=user\n )\n except models.Favorite.DoesNotExist:\n # can't find that status, idk\n return\n\n fav_activity = favorite.to_undo_activity(user)\n favorite.delete()\n broadcast(user, fav_activity, direct_recipients=[status.user])\n\n\ndef handle_boost(user, status):\n ''' a user wishes to boost a status '''\n # is it boostable?\n if not status.boostable:\n return\n\n if models.Boost.objects.filter(\n boosted_status=status, user=user).exists():\n # you already boosted that.\n 
return\n boost = models.Boost.objects.create(\n boosted_status=status,\n privacy=status.privacy,\n user=user,\n )\n\n boost_activity = boost.to_activity()\n broadcast(user, boost_activity)\n\n create_notification(\n status.user,\n 'BOOST',\n related_user=user,\n related_status=status\n )\n\n\ndef handle_unboost(user, status):\n ''' a user regrets boosting a status '''\n boost = models.Boost.objects.filter(\n boosted_status=status, user=user\n ).first()\n activity = boost.to_undo_activity(user)\n\n boost.delete()\n broadcast(user, activity)\n\n\ndef handle_update_book_data(user, item):\n ''' broadcast the news about our book '''\n broadcast(user, item.to_update_activity(user))\n\n\ndef handle_update_user(user):\n ''' broadcast editing a user's profile '''\n broadcast(user, user.to_update_activity(user))\n", "path": "bookwyrm/outgoing.py"}], "after_files": [{"content": "''' handles all the activity coming out of the server '''\nimport re\n\nfrom django.db import IntegrityError, transaction\nfrom django.http import JsonResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.decorators.http import require_GET\nfrom markdown import markdown\nfrom requests import HTTPError\n\nfrom bookwyrm import activitypub\nfrom bookwyrm import models\nfrom bookwyrm.connectors import get_data, ConnectorException\nfrom bookwyrm.broadcast import broadcast\nfrom bookwyrm.sanitize_html import InputHtmlParser\nfrom bookwyrm.status import create_notification\nfrom bookwyrm.status import create_generated_note\nfrom bookwyrm.status import delete_status\nfrom bookwyrm.settings import DOMAIN\nfrom bookwyrm.utils import regex\n\n\n@csrf_exempt\n@require_GET\ndef outbox(request, username):\n ''' outbox for the requested user '''\n user = get_object_or_404(models.User, localname=username)\n filter_type = request.GET.get('type')\n if filter_type not in models.status_models:\n filter_type = None\n\n return JsonResponse(\n user.to_outbox(**request.GET, filter_type=filter_type),\n encoder=activitypub.ActivityEncoder\n )\n\n\ndef handle_remote_webfinger(query):\n ''' webfingerin' other servers '''\n user = None\n\n # usernames could be @user@domain or user@domain\n if not query:\n return None\n\n if query[0] == '@':\n query = query[1:]\n\n try:\n domain = query.split('@')[1]\n except IndexError:\n return None\n\n try:\n user = models.User.objects.get(username=query)\n except models.User.DoesNotExist:\n url = 'https://%s/.well-known/webfinger?resource=acct:%s' % \\\n (domain, query)\n try:\n data = get_data(url)\n except (ConnectorException, HTTPError):\n return None\n\n for link in data.get('links'):\n if link.get('rel') == 'self':\n try:\n user = activitypub.resolve_remote_id(\n models.User, link['href']\n )\n except KeyError:\n return None\n return user\n\n\ndef handle_follow(user, to_follow):\n ''' someone local wants to follow someone '''\n relationship, _ = models.UserFollowRequest.objects.get_or_create(\n user_subject=user,\n user_object=to_follow,\n )\n activity = relationship.to_activity()\n broadcast(user, activity, privacy='direct', direct_recipients=[to_follow])\n\n\ndef handle_unfollow(user, to_unfollow):\n ''' someone local wants to follow someone '''\n relationship = models.UserFollows.objects.get(\n user_subject=user,\n user_object=to_unfollow\n )\n activity = relationship.to_undo_activity(user)\n broadcast(user, activity, privacy='direct', direct_recipients=[to_unfollow])\n to_unfollow.followers.remove(user)\n\n\ndef 
handle_accept(follow_request):\n ''' send an acceptance message to a follow request '''\n user = follow_request.user_subject\n to_follow = follow_request.user_object\n with transaction.atomic():\n relationship = models.UserFollows.from_request(follow_request)\n follow_request.delete()\n relationship.save()\n\n activity = relationship.to_accept_activity()\n broadcast(to_follow, activity, privacy='direct', direct_recipients=[user])\n\n\ndef handle_reject(follow_request):\n ''' a local user who managed follows rejects a follow request '''\n user = follow_request.user_subject\n to_follow = follow_request.user_object\n activity = follow_request.to_reject_activity()\n follow_request.delete()\n broadcast(to_follow, activity, privacy='direct', direct_recipients=[user])\n\n\ndef handle_shelve(user, book, shelf):\n ''' a local user is getting a book put on their shelf '''\n # update the database\n shelve = models.ShelfBook(book=book, shelf=shelf, added_by=user)\n shelve.save()\n\n broadcast(user, shelve.to_add_activity(user))\n\n\ndef handle_unshelve(user, book, shelf):\n ''' a local user is getting a book put on their shelf '''\n # update the database\n row = models.ShelfBook.objects.get(book=book, shelf=shelf)\n activity = row.to_remove_activity(user)\n row.delete()\n\n broadcast(user, activity)\n\n\ndef handle_reading_status(user, shelf, book, privacy):\n ''' post about a user reading a book '''\n # tell the world about this cool thing that happened\n try:\n message = {\n 'to-read': 'wants to read',\n 'reading': 'started reading',\n 'read': 'finished reading'\n }[shelf.identifier]\n except KeyError:\n # it's a non-standard shelf, don't worry about it\n return\n\n status = create_generated_note(\n user,\n message,\n mention_books=[book],\n privacy=privacy\n )\n status.save()\n\n broadcast(user, status.to_create_activity(user))\n\n\ndef handle_imported_book(user, item, include_reviews, privacy):\n ''' process a goodreads csv and then post about it '''\n if isinstance(item.book, models.Work):\n item.book = item.book.default_edition\n if not item.book:\n return\n\n existing_shelf = models.ShelfBook.objects.filter(\n book=item.book, added_by=user).exists()\n\n # shelve the book if it hasn't been shelved already\n if item.shelf and not existing_shelf:\n desired_shelf = models.Shelf.objects.get(\n identifier=item.shelf,\n user=user\n )\n shelf_book = models.ShelfBook.objects.create(\n book=item.book, shelf=desired_shelf, added_by=user)\n broadcast(user, shelf_book.to_add_activity(user), privacy=privacy)\n\n for read in item.reads:\n read.book = item.book\n read.user = user\n read.save()\n\n if include_reviews and (item.rating or item.review):\n review_title = 'Review of {!r} on Goodreads'.format(\n item.book.title,\n ) if item.review else ''\n\n # we don't know the publication date of the review,\n # but \"now\" is a bad guess\n published_date_guess = item.date_read or item.date_added\n review = models.Review.objects.create(\n user=user,\n book=item.book,\n name=review_title,\n content=item.review,\n rating=item.rating,\n published_date=published_date_guess,\n privacy=privacy,\n )\n # we don't need to send out pure activities because non-bookwyrm\n # instances don't need this data\n broadcast(user, review.to_create_activity(user), privacy=privacy)\n\n\ndef handle_delete_status(user, status):\n ''' delete a status and broadcast deletion to other servers '''\n delete_status(status)\n broadcast(user, status.to_delete_activity(user))\n\n\ndef handle_status(user, form):\n ''' generic handler for statuses 
'''\n status = form.save(commit=False)\n if not status.sensitive and status.content_warning:\n # the cw text field remains populated when you click \"remove\"\n status.content_warning = None\n status.save()\n\n # inspect the text for user tags\n content = status.content\n for (mention_text, mention_user) in find_mentions(content):\n # add them to status mentions fk\n status.mention_users.add(mention_user)\n\n # turn the mention into a link\n content = re.sub(\n r'%s([^@]|$)' % mention_text,\n r'<a href=\"%s\">%s</a>\\g<1>' % \\\n (mention_user.remote_id, mention_text),\n content)\n\n # add reply parent to mentions and notify\n if status.reply_parent:\n status.mention_users.add(status.reply_parent.user)\n for mention_user in status.reply_parent.mention_users.all():\n status.mention_users.add(mention_user)\n\n if status.reply_parent.user.local:\n create_notification(\n status.reply_parent.user,\n 'REPLY',\n related_user=user,\n related_status=status\n )\n\n # deduplicate mentions\n status.mention_users.set(set(status.mention_users.all()))\n # create mention notifications\n for mention_user in status.mention_users.all():\n if status.reply_parent and mention_user == status.reply_parent.user:\n continue\n if mention_user.local:\n create_notification(\n mention_user,\n 'MENTION',\n related_user=user,\n related_status=status\n )\n\n # don't apply formatting to generated notes\n if not isinstance(status, models.GeneratedNote):\n status.content = to_markdown(content)\n # do apply formatting to quotes\n if hasattr(status, 'quote'):\n status.quote = to_markdown(status.quote)\n\n status.save()\n\n broadcast(user, status.to_create_activity(user), software='bookwyrm')\n\n # re-format the activity for non-bookwyrm servers\n remote_activity = status.to_create_activity(user, pure=True)\n broadcast(user, remote_activity, software='other')\n\n\ndef find_mentions(content):\n ''' detect @mentions in raw status content '''\n for match in re.finditer(regex.strict_username, content):\n username = match.group().strip().split('@')[1:]\n if len(username) == 1:\n # this looks like a local user (@user), fill in the domain\n username.append(DOMAIN)\n username = '@'.join(username)\n\n mention_user = handle_remote_webfinger(username)\n if not mention_user:\n # we can ignore users we don't know about\n continue\n yield (match.group(), mention_user)\n\n\ndef format_links(content):\n ''' detect and format links '''\n return re.sub(\n r'([^(href=\")]|^|\\()(https?:\\/\\/(%s([\\w\\.\\-_\\/+&\\?=:;,])*))' % \\\n regex.domain,\n r'\\g<1><a href=\"\\g<2>\">\\g<3></a>',\n content)\n\ndef to_markdown(content):\n ''' catch links and convert to markdown '''\n content = format_links(content)\n content = markdown(content)\n # sanitize resulting html\n sanitizer = InputHtmlParser()\n sanitizer.feed(content)\n return sanitizer.get_output()\n\n\ndef handle_favorite(user, status):\n ''' a user likes a status '''\n try:\n favorite = models.Favorite.objects.create(\n status=status,\n user=user\n )\n except IntegrityError:\n # you already fav'ed that\n return\n\n fav_activity = favorite.to_activity()\n broadcast(\n user, fav_activity, privacy='direct', direct_recipients=[status.user])\n create_notification(\n status.user,\n 'FAVORITE',\n related_user=user,\n related_status=status\n )\n\n\ndef handle_unfavorite(user, status):\n ''' a user likes a status '''\n try:\n favorite = models.Favorite.objects.get(\n status=status,\n user=user\n )\n except models.Favorite.DoesNotExist:\n # can't find that status, idk\n return\n\n fav_activity = 
favorite.to_undo_activity(user)\n favorite.delete()\n broadcast(user, fav_activity, direct_recipients=[status.user])\n\n\ndef handle_boost(user, status):\n ''' a user wishes to boost a status '''\n # is it boostable?\n if not status.boostable:\n return\n\n if models.Boost.objects.filter(\n boosted_status=status, user=user).exists():\n # you already boosted that.\n return\n boost = models.Boost.objects.create(\n boosted_status=status,\n privacy=status.privacy,\n user=user,\n )\n\n boost_activity = boost.to_activity()\n broadcast(user, boost_activity)\n\n create_notification(\n status.user,\n 'BOOST',\n related_user=user,\n related_status=status\n )\n\n\ndef handle_unboost(user, status):\n ''' a user regrets boosting a status '''\n boost = models.Boost.objects.filter(\n boosted_status=status, user=user\n ).first()\n activity = boost.to_undo_activity(user)\n\n boost.delete()\n broadcast(user, activity)\n\n\ndef handle_update_book_data(user, item):\n ''' broadcast the news about our book '''\n broadcast(user, item.to_update_activity(user))\n\n\ndef handle_update_user(user):\n ''' broadcast editing a user's profile '''\n broadcast(user, user.to_update_activity(user))\n", "path": "bookwyrm/outgoing.py"}]}
 | num_tokens: 3,990 | num_tokens_diff: 162 |
problem_id: gh_patches_debug_33986 | source: rasdani/github-patches | task_type: git_diff | in_source_id: facebookresearch__fairseq-190 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
the raw output data cannot be trained
when I use preprocess.py to output raw data with `--output-format raw`, the output data files cannot be used by train.py when I also pass `--raw-text`. After looking through the source code, I changed the output files mentioned above as follows: rename 'train.src' to 'train.src-tgt.src' and 'train.tgt' to 'train.src-tgt.tgt' (assuming I use `--source-lang=src --target-lang=tgt`), and then training runs.
I think it's a bug and should be easy to fix :)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `preprocess.py`
Content:
```
1 #!/usr/bin/env python3
2 # Copyright (c) 2017-present, Facebook, Inc.
3 # All rights reserved.
4 #
5 # This source code is licensed under the license found in the LICENSE file in
6 # the root directory of this source tree. An additional grant of patent rights
7 # can be found in the PATENTS file in the same directory.
8 #
9
10 import argparse
11 from itertools import zip_longest
12 import os
13 import shutil
14
15 from fairseq.data import indexed_dataset, dictionary
16 from fairseq.tokenizer import Tokenizer, tokenize_line
17
18
19 def get_parser():
20 parser = argparse.ArgumentParser(
21 description='Data pre-processing: Create dictionary and store data in binary format')
22 parser.add_argument('-s', '--source-lang', default=None, metavar='SRC', help='source language')
23 parser.add_argument('-t', '--target-lang', default=None, metavar='TARGET', help='target language')
24 parser.add_argument('--trainpref', metavar='FP', default=None, help='target language')
25 parser.add_argument('--validpref', metavar='FP', default=None, help='comma separated, valid language prefixes')
26 parser.add_argument('--testpref', metavar='FP', default=None, help='comma separated, test language prefixes')
27 parser.add_argument('--destdir', metavar='DIR', default='data-bin', help='destination dir')
28 parser.add_argument('--thresholdtgt', metavar='N', default=0, type=int,
29 help='map words appearing less than threshold times to unknown')
30 parser.add_argument('--thresholdsrc', metavar='N', default=0, type=int,
31 help='map words appearing less than threshold times to unknown')
32 parser.add_argument('--tgtdict', metavar='FP', help='reuse given target dictionary')
33 parser.add_argument('--srcdict', metavar='FP', help='reuse given source dictionary')
34 parser.add_argument('--nwordstgt', metavar='N', default=-1, type=int, help='number of target words to retain')
35 parser.add_argument('--nwordssrc', metavar='N', default=-1, type=int, help='number of source words to retain')
36 parser.add_argument('--alignfile', metavar='ALIGN', default=None, help='an alignment file (optional)')
37 parser.add_argument('--output-format', metavar='FORMAT', default='binary', choices=['binary', 'raw'],
38 help='output format (optional)')
39 parser.add_argument('--joined-dictionary', action='store_true', help='Generate joined dictionary')
40 parser.add_argument('--only-source', action='store_true', help='Only process the source language')
41 parser.add_argument('--padding-factor', metavar='N', default=8, type=int,
42 help='Pad dictionary size to be multiple of N')
43 return parser
44
45
46 def main(args):
47 print(args)
48 os.makedirs(args.destdir, exist_ok=True)
49 target = not args.only_source
50
51 def build_dictionary(filenames):
52 d = dictionary.Dictionary()
53 for filename in filenames:
54 Tokenizer.add_file_to_dictionary(filename, d, tokenize_line)
55 return d
56
57 def train_path(lang):
58 return '{}{}'.format(args.trainpref, ('.' + lang) if lang else '')
59
60 def file_name(prefix, lang):
61 fname = prefix
62 if lang is not None:
63 fname += f'.{lang}'
64 return fname
65
66 def dest_path(prefix, lang):
67 return os.path.join(args.destdir, file_name(prefix, lang))
68
69 def dict_path(lang):
70 return dest_path('dict', lang) + '.txt'
71
72 def dataset_dest_path(output_prefix, lang, extension):
73 base = f'{args.destdir}/{output_prefix}'
74 lang_part = f'.{args.source_lang}-{args.target_lang}.{lang}' if lang is not None else ''
75 return f'{base}{lang_part}.{extension}'
76
77 if args.joined_dictionary:
78 assert not args.srcdict, 'cannot combine --srcdict and --joined-dictionary'
79 assert not args.tgtdict, 'cannot combine --tgtdict and --joined-dictionary'
80 src_dict = build_dictionary(set([
81 train_path(lang)
82 for lang in [args.source_lang, args.target_lang]
83 ]))
84 tgt_dict = src_dict
85 else:
86 if args.srcdict:
87 src_dict = dictionary.Dictionary.load(args.srcdict)
88 else:
89 assert args.trainpref, "--trainpref must be set if --srcdict is not specified"
90 src_dict = build_dictionary([train_path(args.source_lang)])
91 if target:
92 if args.tgtdict:
93 tgt_dict = dictionary.Dictionary.load(args.tgtdict)
94 else:
95 assert args.trainpref, "--trainpref must be set if --tgtdict is not specified"
96 tgt_dict = build_dictionary([train_path(args.target_lang)])
97
98 src_dict.finalize(
99 threshold=args.thresholdsrc,
100 nwords=args.nwordssrc,
101 padding_factor=args.padding_factor,
102 )
103 src_dict.save(dict_path(args.source_lang))
104 if target:
105 if not args.joined_dictionary:
106 tgt_dict.finalize(
107 threshold=args.thresholdtgt,
108 nwords=args.nwordstgt,
109 padding_factor=args.padding_factor,
110 )
111 tgt_dict.save(dict_path(args.target_lang))
112
113 def make_binary_dataset(input_prefix, output_prefix, lang):
114 dict = dictionary.Dictionary.load(dict_path(lang))
115 print('| [{}] Dictionary: {} types'.format(lang, len(dict) - 1))
116
117 ds = indexed_dataset.IndexedDatasetBuilder(dataset_dest_path(output_prefix, lang, 'bin'))
118
119 def consumer(tensor):
120 ds.add_item(tensor)
121
122 input_file = '{}{}'.format(input_prefix, ('.' + lang) if lang is not None else '')
123 res = Tokenizer.binarize(input_file, dict, consumer)
124 print('| [{}] {}: {} sents, {} tokens, {:.3}% replaced by {}'.format(
125 lang, input_file, res['nseq'], res['ntok'],
126 100 * res['nunk'] / res['ntok'], dict.unk_word))
127 ds.finalize(dataset_dest_path(output_prefix, lang, 'idx'))
128
129 def make_dataset(input_prefix, output_prefix, lang, output_format='binary'):
130 if output_format == 'binary':
131 make_binary_dataset(input_prefix, output_prefix, lang)
132 elif output_format == 'raw':
133 # Copy original text file to destination folder
134 output_text_file = dest_path(output_prefix, lang)
135 shutil.copyfile(file_name(input_prefix, lang), output_text_file)
136
137 def make_all(args, make_dataset, lang):
138 if args.trainpref:
139 make_dataset(args.trainpref, 'train', lang, args.output_format)
140 if args.validpref:
141 for k, validpref in enumerate(args.validpref.split(',')):
142 outprefix = 'valid{}'.format(k) if k > 0 else 'valid'
143 make_dataset(validpref, outprefix, lang, args.output_format)
144 if args.testpref:
145 for k, testpref in enumerate(args.testpref.split(',')):
146 outprefix = 'test{}'.format(k) if k > 0 else 'test'
147 make_dataset(testpref, outprefix, lang, args.output_format)
148
149 make_all(args, make_dataset, args.source_lang)
150 if target:
151 make_all(args, make_dataset, args.target_lang)
152
153 print('| Wrote preprocessed data to {}'.format(args.destdir))
154
155 if args.alignfile:
156 assert args.trainpref, "--trainpref must be set if --alignfile is specified"
157 src_file_name = train_path(args.source_lang)
158 tgt_file_name = train_path(args.target_lang)
159 src_dict = dictionary.Dictionary.load(dict_path(args.source_lang))
160 tgt_dict = dictionary.Dictionary.load(dict_path(args.target_lang))
161 freq_map = {}
162 with open(args.alignfile, 'r') as align_file:
163 with open(src_file_name, 'r') as src_file:
164 with open(tgt_file_name, 'r') as tgt_file:
165 for a, s, t in zip_longest(align_file, src_file, tgt_file):
166 si = Tokenizer.tokenize(s, src_dict, add_if_not_exist=False)
167 ti = Tokenizer.tokenize(t, tgt_dict, add_if_not_exist=False)
168 ai = list(map(lambda x: tuple(x.split('-')), a.split()))
169 for sai, tai in ai:
170 srcidx = si[int(sai)]
171 tgtidx = ti[int(tai)]
172 if srcidx != src_dict.unk() and tgtidx != tgt_dict.unk():
173 assert srcidx != src_dict.pad()
174 assert srcidx != src_dict.eos()
175 assert tgtidx != tgt_dict.pad()
176 assert tgtidx != tgt_dict.eos()
177
178 if srcidx not in freq_map:
179 freq_map[srcidx] = {}
180 if tgtidx not in freq_map[srcidx]:
181 freq_map[srcidx][tgtidx] = 1
182 else:
183 freq_map[srcidx][tgtidx] += 1
184
185 align_dict = {}
186 for srcidx in freq_map.keys():
187 align_dict[srcidx] = max(freq_map[srcidx], key=freq_map[srcidx].get)
188
189 with open(os.path.join(args.destdir, 'alignment.{}-{}.txt'.format(
190 args.source_lang, args.target_lang)), 'w') as f:
191 for k, v in align_dict.items():
192 print('{} {}'.format(src_dict[k], tgt_dict[v]), file=f)
193
194
195 if __name__ == '__main__':
196 parser = get_parser()
197 args = parser.parse_args()
198 main(args)
199
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/preprocess.py b/preprocess.py
--- a/preprocess.py
+++ b/preprocess.py
@@ -126,29 +126,32 @@
100 * res['nunk'] / res['ntok'], dict.unk_word))
ds.finalize(dataset_dest_path(output_prefix, lang, 'idx'))
- def make_dataset(input_prefix, output_prefix, lang, output_format='binary'):
- if output_format == 'binary':
+ def make_dataset(input_prefix, output_prefix, lang):
+ if args.output_format == 'binary':
make_binary_dataset(input_prefix, output_prefix, lang)
- elif output_format == 'raw':
+ elif args.output_format == 'raw':
# Copy original text file to destination folder
- output_text_file = dest_path(output_prefix, lang)
+ output_text_file = dest_path(
+ output_prefix + '.{}-{}'.format(args.source_lang, args.target_lang),
+ lang,
+ )
shutil.copyfile(file_name(input_prefix, lang), output_text_file)
- def make_all(args, make_dataset, lang):
+ def make_all(lang):
if args.trainpref:
- make_dataset(args.trainpref, 'train', lang, args.output_format)
+ make_dataset(args.trainpref, 'train', lang)
if args.validpref:
for k, validpref in enumerate(args.validpref.split(',')):
outprefix = 'valid{}'.format(k) if k > 0 else 'valid'
- make_dataset(validpref, outprefix, lang, args.output_format)
+ make_dataset(validpref, outprefix, lang)
if args.testpref:
for k, testpref in enumerate(args.testpref.split(',')):
outprefix = 'test{}'.format(k) if k > 0 else 'test'
- make_dataset(testpref, outprefix, lang, args.output_format)
+ make_dataset(testpref, outprefix, lang)
- make_all(args, make_dataset, args.source_lang)
+ make_all(args.source_lang)
if target:
- make_all(args, make_dataset, args.target_lang)
+ make_all(args.target_lang)
print('| Wrote preprocessed data to {}'.format(args.destdir))
|
{"golden_diff": "diff --git a/preprocess.py b/preprocess.py\n--- a/preprocess.py\n+++ b/preprocess.py\n@@ -126,29 +126,32 @@\n 100 * res['nunk'] / res['ntok'], dict.unk_word))\n ds.finalize(dataset_dest_path(output_prefix, lang, 'idx'))\n \n- def make_dataset(input_prefix, output_prefix, lang, output_format='binary'):\n- if output_format == 'binary':\n+ def make_dataset(input_prefix, output_prefix, lang):\n+ if args.output_format == 'binary':\n make_binary_dataset(input_prefix, output_prefix, lang)\n- elif output_format == 'raw':\n+ elif args.output_format == 'raw':\n # Copy original text file to destination folder\n- output_text_file = dest_path(output_prefix, lang)\n+ output_text_file = dest_path(\n+ output_prefix + '.{}-{}'.format(args.source_lang, args.target_lang),\n+ lang,\n+ )\n shutil.copyfile(file_name(input_prefix, lang), output_text_file)\n \n- def make_all(args, make_dataset, lang):\n+ def make_all(lang):\n if args.trainpref:\n- make_dataset(args.trainpref, 'train', lang, args.output_format)\n+ make_dataset(args.trainpref, 'train', lang)\n if args.validpref:\n for k, validpref in enumerate(args.validpref.split(',')):\n outprefix = 'valid{}'.format(k) if k > 0 else 'valid'\n- make_dataset(validpref, outprefix, lang, args.output_format)\n+ make_dataset(validpref, outprefix, lang)\n if args.testpref:\n for k, testpref in enumerate(args.testpref.split(',')):\n outprefix = 'test{}'.format(k) if k > 0 else 'test'\n- make_dataset(testpref, outprefix, lang, args.output_format)\n+ make_dataset(testpref, outprefix, lang)\n \n- make_all(args, make_dataset, args.source_lang)\n+ make_all(args.source_lang)\n if target:\n- make_all(args, make_dataset, args.target_lang)\n+ make_all(args.target_lang)\n \n print('| Wrote preprocessed data to {}'.format(args.destdir))\n", "issue": "the raw output data can not be trained\nwhen I use preprocess.py to ouput raw data by use `--output-format raw`, the output data file can not be used to train.py which I also use `--raw-text`. Through looking the source code, I change the output data file mentioned above in this way: rename 'train.src' to 'train.src-tgt.src' and 'train.tgt' to 'train.src-tgt.tgt' (assume I use `--source-lang=src --target-lang=tgt` ) and this can run.\r\nI think it's a bug and is easy to fix :)\n", "before_files": [{"content": "#!/usr/bin/env python3\n# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n#\n# This source code is licensed under the license found in the LICENSE file in\n# the root directory of this source tree. 
An additional grant of patent rights\n# can be found in the PATENTS file in the same directory.\n#\n\nimport argparse\nfrom itertools import zip_longest\nimport os\nimport shutil\n\nfrom fairseq.data import indexed_dataset, dictionary\nfrom fairseq.tokenizer import Tokenizer, tokenize_line\n\n\ndef get_parser():\n parser = argparse.ArgumentParser(\n description='Data pre-processing: Create dictionary and store data in binary format')\n parser.add_argument('-s', '--source-lang', default=None, metavar='SRC', help='source language')\n parser.add_argument('-t', '--target-lang', default=None, metavar='TARGET', help='target language')\n parser.add_argument('--trainpref', metavar='FP', default=None, help='target language')\n parser.add_argument('--validpref', metavar='FP', default=None, help='comma separated, valid language prefixes')\n parser.add_argument('--testpref', metavar='FP', default=None, help='comma separated, test language prefixes')\n parser.add_argument('--destdir', metavar='DIR', default='data-bin', help='destination dir')\n parser.add_argument('--thresholdtgt', metavar='N', default=0, type=int,\n help='map words appearing less than threshold times to unknown')\n parser.add_argument('--thresholdsrc', metavar='N', default=0, type=int,\n help='map words appearing less than threshold times to unknown')\n parser.add_argument('--tgtdict', metavar='FP', help='reuse given target dictionary')\n parser.add_argument('--srcdict', metavar='FP', help='reuse given source dictionary')\n parser.add_argument('--nwordstgt', metavar='N', default=-1, type=int, help='number of target words to retain')\n parser.add_argument('--nwordssrc', metavar='N', default=-1, type=int, help='number of source words to retain')\n parser.add_argument('--alignfile', metavar='ALIGN', default=None, help='an alignment file (optional)')\n parser.add_argument('--output-format', metavar='FORMAT', default='binary', choices=['binary', 'raw'],\n help='output format (optional)')\n parser.add_argument('--joined-dictionary', action='store_true', help='Generate joined dictionary')\n parser.add_argument('--only-source', action='store_true', help='Only process the source language')\n parser.add_argument('--padding-factor', metavar='N', default=8, type=int,\n help='Pad dictionary size to be multiple of N')\n return parser\n\n\ndef main(args):\n print(args)\n os.makedirs(args.destdir, exist_ok=True)\n target = not args.only_source\n\n def build_dictionary(filenames):\n d = dictionary.Dictionary()\n for filename in filenames:\n Tokenizer.add_file_to_dictionary(filename, d, tokenize_line)\n return d\n\n def train_path(lang):\n return '{}{}'.format(args.trainpref, ('.' 
+ lang) if lang else '')\n\n def file_name(prefix, lang):\n fname = prefix\n if lang is not None:\n fname += f'.{lang}'\n return fname\n\n def dest_path(prefix, lang):\n return os.path.join(args.destdir, file_name(prefix, lang))\n\n def dict_path(lang):\n return dest_path('dict', lang) + '.txt'\n\n def dataset_dest_path(output_prefix, lang, extension):\n base = f'{args.destdir}/{output_prefix}'\n lang_part = f'.{args.source_lang}-{args.target_lang}.{lang}' if lang is not None else ''\n return f'{base}{lang_part}.{extension}'\n\n if args.joined_dictionary:\n assert not args.srcdict, 'cannot combine --srcdict and --joined-dictionary'\n assert not args.tgtdict, 'cannot combine --tgtdict and --joined-dictionary'\n src_dict = build_dictionary(set([\n train_path(lang)\n for lang in [args.source_lang, args.target_lang]\n ]))\n tgt_dict = src_dict\n else:\n if args.srcdict:\n src_dict = dictionary.Dictionary.load(args.srcdict)\n else:\n assert args.trainpref, \"--trainpref must be set if --srcdict is not specified\"\n src_dict = build_dictionary([train_path(args.source_lang)])\n if target:\n if args.tgtdict:\n tgt_dict = dictionary.Dictionary.load(args.tgtdict)\n else:\n assert args.trainpref, \"--trainpref must be set if --tgtdict is not specified\"\n tgt_dict = build_dictionary([train_path(args.target_lang)])\n\n src_dict.finalize(\n threshold=args.thresholdsrc,\n nwords=args.nwordssrc,\n padding_factor=args.padding_factor,\n )\n src_dict.save(dict_path(args.source_lang))\n if target:\n if not args.joined_dictionary:\n tgt_dict.finalize(\n threshold=args.thresholdtgt,\n nwords=args.nwordstgt,\n padding_factor=args.padding_factor,\n )\n tgt_dict.save(dict_path(args.target_lang))\n\n def make_binary_dataset(input_prefix, output_prefix, lang):\n dict = dictionary.Dictionary.load(dict_path(lang))\n print('| [{}] Dictionary: {} types'.format(lang, len(dict) - 1))\n\n ds = indexed_dataset.IndexedDatasetBuilder(dataset_dest_path(output_prefix, lang, 'bin'))\n\n def consumer(tensor):\n ds.add_item(tensor)\n\n input_file = '{}{}'.format(input_prefix, ('.' 
+ lang) if lang is not None else '')\n res = Tokenizer.binarize(input_file, dict, consumer)\n print('| [{}] {}: {} sents, {} tokens, {:.3}% replaced by {}'.format(\n lang, input_file, res['nseq'], res['ntok'],\n 100 * res['nunk'] / res['ntok'], dict.unk_word))\n ds.finalize(dataset_dest_path(output_prefix, lang, 'idx'))\n\n def make_dataset(input_prefix, output_prefix, lang, output_format='binary'):\n if output_format == 'binary':\n make_binary_dataset(input_prefix, output_prefix, lang)\n elif output_format == 'raw':\n # Copy original text file to destination folder\n output_text_file = dest_path(output_prefix, lang)\n shutil.copyfile(file_name(input_prefix, lang), output_text_file)\n\n def make_all(args, make_dataset, lang):\n if args.trainpref:\n make_dataset(args.trainpref, 'train', lang, args.output_format)\n if args.validpref:\n for k, validpref in enumerate(args.validpref.split(',')):\n outprefix = 'valid{}'.format(k) if k > 0 else 'valid'\n make_dataset(validpref, outprefix, lang, args.output_format)\n if args.testpref:\n for k, testpref in enumerate(args.testpref.split(',')):\n outprefix = 'test{}'.format(k) if k > 0 else 'test'\n make_dataset(testpref, outprefix, lang, args.output_format)\n\n make_all(args, make_dataset, args.source_lang)\n if target:\n make_all(args, make_dataset, args.target_lang)\n\n print('| Wrote preprocessed data to {}'.format(args.destdir))\n\n if args.alignfile:\n assert args.trainpref, \"--trainpref must be set if --alignfile is specified\"\n src_file_name = train_path(args.source_lang)\n tgt_file_name = train_path(args.target_lang)\n src_dict = dictionary.Dictionary.load(dict_path(args.source_lang))\n tgt_dict = dictionary.Dictionary.load(dict_path(args.target_lang))\n freq_map = {}\n with open(args.alignfile, 'r') as align_file:\n with open(src_file_name, 'r') as src_file:\n with open(tgt_file_name, 'r') as tgt_file:\n for a, s, t in zip_longest(align_file, src_file, tgt_file):\n si = Tokenizer.tokenize(s, src_dict, add_if_not_exist=False)\n ti = Tokenizer.tokenize(t, tgt_dict, add_if_not_exist=False)\n ai = list(map(lambda x: tuple(x.split('-')), a.split()))\n for sai, tai in ai:\n srcidx = si[int(sai)]\n tgtidx = ti[int(tai)]\n if srcidx != src_dict.unk() and tgtidx != tgt_dict.unk():\n assert srcidx != src_dict.pad()\n assert srcidx != src_dict.eos()\n assert tgtidx != tgt_dict.pad()\n assert tgtidx != tgt_dict.eos()\n\n if srcidx not in freq_map:\n freq_map[srcidx] = {}\n if tgtidx not in freq_map[srcidx]:\n freq_map[srcidx][tgtidx] = 1\n else:\n freq_map[srcidx][tgtidx] += 1\n\n align_dict = {}\n for srcidx in freq_map.keys():\n align_dict[srcidx] = max(freq_map[srcidx], key=freq_map[srcidx].get)\n\n with open(os.path.join(args.destdir, 'alignment.{}-{}.txt'.format(\n args.source_lang, args.target_lang)), 'w') as f:\n for k, v in align_dict.items():\n print('{} {}'.format(src_dict[k], tgt_dict[v]), file=f)\n\n\nif __name__ == '__main__':\n parser = get_parser()\n args = parser.parse_args()\n main(args)\n", "path": "preprocess.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n#\n# This source code is licensed under the license found in the LICENSE file in\n# the root directory of this source tree. 
An additional grant of patent rights\n# can be found in the PATENTS file in the same directory.\n#\n\nimport argparse\nfrom itertools import zip_longest\nimport os\nimport shutil\n\nfrom fairseq.data import indexed_dataset, dictionary\nfrom fairseq.tokenizer import Tokenizer, tokenize_line\n\n\ndef get_parser():\n parser = argparse.ArgumentParser(\n description='Data pre-processing: Create dictionary and store data in binary format')\n parser.add_argument('-s', '--source-lang', default=None, metavar='SRC', help='source language')\n parser.add_argument('-t', '--target-lang', default=None, metavar='TARGET', help='target language')\n parser.add_argument('--trainpref', metavar='FP', default=None, help='target language')\n parser.add_argument('--validpref', metavar='FP', default=None, help='comma separated, valid language prefixes')\n parser.add_argument('--testpref', metavar='FP', default=None, help='comma separated, test language prefixes')\n parser.add_argument('--destdir', metavar='DIR', default='data-bin', help='destination dir')\n parser.add_argument('--thresholdtgt', metavar='N', default=0, type=int,\n help='map words appearing less than threshold times to unknown')\n parser.add_argument('--thresholdsrc', metavar='N', default=0, type=int,\n help='map words appearing less than threshold times to unknown')\n parser.add_argument('--tgtdict', metavar='FP', help='reuse given target dictionary')\n parser.add_argument('--srcdict', metavar='FP', help='reuse given source dictionary')\n parser.add_argument('--nwordstgt', metavar='N', default=-1, type=int, help='number of target words to retain')\n parser.add_argument('--nwordssrc', metavar='N', default=-1, type=int, help='number of source words to retain')\n parser.add_argument('--alignfile', metavar='ALIGN', default=None, help='an alignment file (optional)')\n parser.add_argument('--output-format', metavar='FORMAT', default='binary', choices=['binary', 'raw'],\n help='output format (optional)')\n parser.add_argument('--joined-dictionary', action='store_true', help='Generate joined dictionary')\n parser.add_argument('--only-source', action='store_true', help='Only process the source language')\n parser.add_argument('--padding-factor', metavar='N', default=8, type=int,\n help='Pad dictionary size to be multiple of N')\n return parser\n\n\ndef main(args):\n print(args)\n os.makedirs(args.destdir, exist_ok=True)\n target = not args.only_source\n\n def build_dictionary(filenames):\n d = dictionary.Dictionary()\n for filename in filenames:\n Tokenizer.add_file_to_dictionary(filename, d, tokenize_line)\n return d\n\n def train_path(lang):\n return '{}{}'.format(args.trainpref, ('.' 
+ lang) if lang else '')\n\n def file_name(prefix, lang):\n fname = prefix\n if lang is not None:\n fname += f'.{lang}'\n return fname\n\n def dest_path(prefix, lang):\n return os.path.join(args.destdir, file_name(prefix, lang))\n\n def dict_path(lang):\n return dest_path('dict', lang) + '.txt'\n\n def dataset_dest_path(output_prefix, lang, extension):\n base = f'{args.destdir}/{output_prefix}'\n lang_part = f'.{args.source_lang}-{args.target_lang}.{lang}' if lang is not None else ''\n return f'{base}{lang_part}.{extension}'\n\n if args.joined_dictionary:\n assert not args.srcdict, 'cannot combine --srcdict and --joined-dictionary'\n assert not args.tgtdict, 'cannot combine --tgtdict and --joined-dictionary'\n src_dict = build_dictionary(set([\n train_path(lang)\n for lang in [args.source_lang, args.target_lang]\n ]))\n tgt_dict = src_dict\n else:\n if args.srcdict:\n src_dict = dictionary.Dictionary.load(args.srcdict)\n else:\n assert args.trainpref, \"--trainpref must be set if --srcdict is not specified\"\n src_dict = build_dictionary([train_path(args.source_lang)])\n if target:\n if args.tgtdict:\n tgt_dict = dictionary.Dictionary.load(args.tgtdict)\n else:\n assert args.trainpref, \"--trainpref must be set if --tgtdict is not specified\"\n tgt_dict = build_dictionary([train_path(args.target_lang)])\n\n src_dict.finalize(\n threshold=args.thresholdsrc,\n nwords=args.nwordssrc,\n padding_factor=args.padding_factor,\n )\n src_dict.save(dict_path(args.source_lang))\n if target:\n if not args.joined_dictionary:\n tgt_dict.finalize(\n threshold=args.thresholdtgt,\n nwords=args.nwordstgt,\n padding_factor=args.padding_factor,\n )\n tgt_dict.save(dict_path(args.target_lang))\n\n def make_binary_dataset(input_prefix, output_prefix, lang):\n dict = dictionary.Dictionary.load(dict_path(lang))\n print('| [{}] Dictionary: {} types'.format(lang, len(dict) - 1))\n\n ds = indexed_dataset.IndexedDatasetBuilder(dataset_dest_path(output_prefix, lang, 'bin'))\n\n def consumer(tensor):\n ds.add_item(tensor)\n\n input_file = '{}{}'.format(input_prefix, ('.' 
+ lang) if lang is not None else '')\n res = Tokenizer.binarize(input_file, dict, consumer)\n print('| [{}] {}: {} sents, {} tokens, {:.3}% replaced by {}'.format(\n lang, input_file, res['nseq'], res['ntok'],\n 100 * res['nunk'] / res['ntok'], dict.unk_word))\n ds.finalize(dataset_dest_path(output_prefix, lang, 'idx'))\n\n def make_dataset(input_prefix, output_prefix, lang):\n if args.output_format == 'binary':\n make_binary_dataset(input_prefix, output_prefix, lang)\n elif args.output_format == 'raw':\n # Copy original text file to destination folder\n output_text_file = dest_path(\n output_prefix + '.{}-{}'.format(args.source_lang, args.target_lang),\n lang,\n )\n shutil.copyfile(file_name(input_prefix, lang), output_text_file)\n\n def make_all(lang):\n if args.trainpref:\n make_dataset(args.trainpref, 'train', lang)\n if args.validpref:\n for k, validpref in enumerate(args.validpref.split(',')):\n outprefix = 'valid{}'.format(k) if k > 0 else 'valid'\n make_dataset(validpref, outprefix, lang)\n if args.testpref:\n for k, testpref in enumerate(args.testpref.split(',')):\n outprefix = 'test{}'.format(k) if k > 0 else 'test'\n make_dataset(testpref, outprefix, lang)\n\n make_all(args.source_lang)\n if target:\n make_all(args.target_lang)\n\n print('| Wrote preprocessed data to {}'.format(args.destdir))\n\n if args.alignfile:\n assert args.trainpref, \"--trainpref must be set if --alignfile is specified\"\n src_file_name = train_path(args.source_lang)\n tgt_file_name = train_path(args.target_lang)\n src_dict = dictionary.Dictionary.load(dict_path(args.source_lang))\n tgt_dict = dictionary.Dictionary.load(dict_path(args.target_lang))\n freq_map = {}\n with open(args.alignfile, 'r') as align_file:\n with open(src_file_name, 'r') as src_file:\n with open(tgt_file_name, 'r') as tgt_file:\n for a, s, t in zip_longest(align_file, src_file, tgt_file):\n si = Tokenizer.tokenize(s, src_dict, add_if_not_exist=False)\n ti = Tokenizer.tokenize(t, tgt_dict, add_if_not_exist=False)\n ai = list(map(lambda x: tuple(x.split('-')), a.split()))\n for sai, tai in ai:\n srcidx = si[int(sai)]\n tgtidx = ti[int(tai)]\n if srcidx != src_dict.unk() and tgtidx != tgt_dict.unk():\n assert srcidx != src_dict.pad()\n assert srcidx != src_dict.eos()\n assert tgtidx != tgt_dict.pad()\n assert tgtidx != tgt_dict.eos()\n\n if srcidx not in freq_map:\n freq_map[srcidx] = {}\n if tgtidx not in freq_map[srcidx]:\n freq_map[srcidx][tgtidx] = 1\n else:\n freq_map[srcidx][tgtidx] += 1\n\n align_dict = {}\n for srcidx in freq_map.keys():\n align_dict[srcidx] = max(freq_map[srcidx], key=freq_map[srcidx].get)\n\n with open(os.path.join(args.destdir, 'alignment.{}-{}.txt'.format(\n args.source_lang, args.target_lang)), 'w') as f:\n for k, v in align_dict.items():\n print('{} {}'.format(src_dict[k], tgt_dict[v]), file=f)\n\n\nif __name__ == '__main__':\n parser = get_parser()\n args = parser.parse_args()\n main(args)\n", "path": "preprocess.py"}]}
 | num_tokens: 2,898 | num_tokens_diff: 485 |
problem_id: gh_patches_debug_31802 | source: rasdani/github-patches | task_type: git_diff | in_source_id: microsoft__botbuilder-python-1240 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SkillHandler doesn't return ResourceResponse when forwarding activities (Python)
See [parent](https://github.com/microsoft/botframework-sdk/issues/5919)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botbuilder-core/botbuilder/core/skills/skill_handler.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from uuid import uuid4
5
6 from botbuilder.core import Bot, BotAdapter, ChannelServiceHandler, TurnContext
7 from botbuilder.schema import (
8 Activity,
9 ActivityTypes,
10 ResourceResponse,
11 CallerIdConstants,
12 )
13 from botframework.connector.auth import (
14 AuthenticationConfiguration,
15 AuthenticationConstants,
16 ChannelProvider,
17 ClaimsIdentity,
18 CredentialProvider,
19 GovernmentConstants,
20 JwtTokenValidation,
21 )
22 from .skill_conversation_reference import SkillConversationReference
23 from .conversation_id_factory import ConversationIdFactoryBase
24
25
26 class SkillHandler(ChannelServiceHandler):
27
28 SKILL_CONVERSATION_REFERENCE_KEY = (
29 "botbuilder.core.skills.SkillConversationReference"
30 )
31
32 def __init__(
33 self,
34 adapter: BotAdapter,
35 bot: Bot,
36 conversation_id_factory: ConversationIdFactoryBase,
37 credential_provider: CredentialProvider,
38 auth_configuration: AuthenticationConfiguration,
39 channel_provider: ChannelProvider = None,
40 logger: object = None,
41 ):
42 super().__init__(credential_provider, auth_configuration, channel_provider)
43
44 if not adapter:
45 raise TypeError("adapter can't be None")
46 if not bot:
47 raise TypeError("bot can't be None")
48 if not conversation_id_factory:
49 raise TypeError("conversation_id_factory can't be None")
50
51 self._adapter = adapter
52 self._bot = bot
53 self._conversation_id_factory = conversation_id_factory
54 self._logger = logger
55
56 async def on_send_to_conversation(
57 self, claims_identity: ClaimsIdentity, conversation_id: str, activity: Activity,
58 ) -> ResourceResponse:
59 """
60 send_to_conversation() API for Skill
61
62 This method allows you to send an activity to the end of a conversation.
63
64 This is slightly different from ReplyToActivity().
65 * SendToConversation(conversationId) - will append the activity to the end
66 of the conversation according to the timestamp or semantics of the channel.
67 * ReplyToActivity(conversationId,ActivityId) - adds the activity as a reply
68 to another activity, if the channel supports it. If the channel does not
69 support nested replies, ReplyToActivity falls back to SendToConversation.
70
71 Use ReplyToActivity when replying to a specific activity in the
72 conversation.
73
74 Use SendToConversation in all other cases.
75 :param claims_identity: Claims identity for the bot.
76 :type claims_identity: :class:`botframework.connector.auth.ClaimsIdentity`
77 :param conversation_id:The conversation ID.
78 :type conversation_id: str
79 :param activity: Activity to send.
80 :type activity: Activity
81 :return:
82 """
83 return await self._process_activity(
84 claims_identity, conversation_id, None, activity,
85 )
86
87 async def on_reply_to_activity(
88 self,
89 claims_identity: ClaimsIdentity,
90 conversation_id: str,
91 activity_id: str,
92 activity: Activity,
93 ) -> ResourceResponse:
94 """
95 reply_to_activity() API for Skill.
96
97 This method allows you to reply to an activity.
98
99 This is slightly different from SendToConversation().
100 * SendToConversation(conversationId) - will append the activity to the end
101 of the conversation according to the timestamp or semantics of the channel.
102 * ReplyToActivity(conversationId,ActivityId) - adds the activity as a reply
103 to another activity, if the channel supports it. If the channel does not
104 support nested replies, ReplyToActivity falls back to SendToConversation.
105
106 Use ReplyToActivity when replying to a specific activity in the
107 conversation.
108
109 Use SendToConversation in all other cases.
110 :param claims_identity: Claims identity for the bot.
111 :type claims_identity: :class:`botframework.connector.auth.ClaimsIdentity`
112 :param conversation_id:The conversation ID.
113 :type conversation_id: str
114 :param activity: Activity to send.
115 :type activity: Activity
116 :return:
117 """
118 return await self._process_activity(
119 claims_identity, conversation_id, activity_id, activity,
120 )
121
122 async def _process_activity(
123 self,
124 claims_identity: ClaimsIdentity,
125 conversation_id: str,
126 reply_to_activity_id: str,
127 activity: Activity,
128 ) -> ResourceResponse:
129 # Get the SkillsConversationReference
130 conversation_reference_result = await self._conversation_id_factory.get_conversation_reference(
131 conversation_id
132 )
133
134 # ConversationIdFactory can return either a SkillConversationReference (the newer way),
135 # or a ConversationReference (the old way, but still here for compatibility). If a
136 # ConversationReference is returned, build a new SkillConversationReference to simplify
137 # the remainder of this method.
138 skill_conversation_reference: SkillConversationReference = None
139 if isinstance(conversation_reference_result, SkillConversationReference):
140 skill_conversation_reference = conversation_reference_result
141 else:
142 skill_conversation_reference = SkillConversationReference(
143 conversation_reference=conversation_reference_result,
144 oauth_scope=(
145 GovernmentConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE
146 if self._channel_provider and self._channel_provider.is_government()
147 else AuthenticationConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE
148 ),
149 )
150
151 if not skill_conversation_reference:
152 raise KeyError("SkillConversationReference not found")
153
154 async def callback(context: TurnContext):
155 context.turn_state[
156 SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY
157 ] = skill_conversation_reference
158
159 TurnContext.apply_conversation_reference(
160 activity, skill_conversation_reference.conversation_reference
161 )
162
163 context.activity.id = reply_to_activity_id
164
165 app_id = JwtTokenValidation.get_app_id_from_claims(claims_identity.claims)
166 context.activity.caller_id = (
167 f"{CallerIdConstants.bot_to_bot_prefix}{app_id}"
168 )
169
170 if activity.type == ActivityTypes.end_of_conversation:
171 await self._conversation_id_factory.delete_conversation_reference(
172 conversation_id
173 )
174 self._apply_eoc_to_turn_context_activity(context, activity)
175 await self._bot.on_turn(context)
176 elif activity.type == ActivityTypes.event:
177 self._apply_event_to_turn_context_activity(context, activity)
178 await self._bot.on_turn(context)
179 else:
180 await context.send_activity(activity)
181
182 await self._adapter.continue_conversation(
183 skill_conversation_reference.conversation_reference,
184 callback,
185 claims_identity=claims_identity,
186 audience=skill_conversation_reference.oauth_scope,
187 )
188 return ResourceResponse(id=str(uuid4()))
189
190 @staticmethod
191 def _apply_eoc_to_turn_context_activity(
192 context: TurnContext, end_of_conversation_activity: Activity
193 ):
194 context.activity.type = end_of_conversation_activity.type
195 context.activity.text = end_of_conversation_activity.text
196 context.activity.code = end_of_conversation_activity.code
197
198 context.activity.reply_to_id = end_of_conversation_activity.reply_to_id
199 context.activity.value = end_of_conversation_activity.value
200 context.activity.entities = end_of_conversation_activity.entities
201 context.activity.locale = end_of_conversation_activity.locale
202 context.activity.local_timestamp = end_of_conversation_activity.local_timestamp
203 context.activity.timestamp = end_of_conversation_activity.timestamp
204 context.activity.channel_data = end_of_conversation_activity.channel_data
205 context.activity.additional_properties = (
206 end_of_conversation_activity.additional_properties
207 )
208
209 @staticmethod
210 def _apply_event_to_turn_context_activity(
211 context: TurnContext, event_activity: Activity
212 ):
213 context.activity.type = event_activity.type
214 context.activity.name = event_activity.name
215 context.activity.value = event_activity.value
216 context.activity.relates_to = event_activity.relates_to
217
218 context.activity.reply_to_id = event_activity.reply_to_id
219 context.activity.value = event_activity.value
220 context.activity.entities = event_activity.entities
221 context.activity.locale = event_activity.locale
222 context.activity.local_timestamp = event_activity.local_timestamp
223 context.activity.timestamp = event_activity.timestamp
224 context.activity.channel_data = event_activity.channel_data
225 context.activity.additional_properties = event_activity.additional_properties
226
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/libraries/botbuilder-core/botbuilder/core/skills/skill_handler.py b/libraries/botbuilder-core/botbuilder/core/skills/skill_handler.py
--- a/libraries/botbuilder-core/botbuilder/core/skills/skill_handler.py
+++ b/libraries/botbuilder-core/botbuilder/core/skills/skill_handler.py
@@ -151,7 +151,14 @@
if not skill_conversation_reference:
raise KeyError("SkillConversationReference not found")
+ if not skill_conversation_reference.conversation_reference:
+ raise KeyError("conversationReference not found")
+
+ # If an activity is sent, return the ResourceResponse
+ resource_response: ResourceResponse = None
+
async def callback(context: TurnContext):
+ nonlocal resource_response
context.turn_state[
SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY
] = skill_conversation_reference
@@ -177,7 +184,7 @@
self._apply_event_to_turn_context_activity(context, activity)
await self._bot.on_turn(context)
else:
- await context.send_activity(activity)
+ resource_response = await context.send_activity(activity)
await self._adapter.continue_conversation(
skill_conversation_reference.conversation_reference,
@@ -185,7 +192,11 @@
claims_identity=claims_identity,
audience=skill_conversation_reference.oauth_scope,
)
- return ResourceResponse(id=str(uuid4()))
+
+ if not resource_response:
+ resource_response = ResourceResponse(id=str(uuid4()))
+
+ return resource_response
@staticmethod
def _apply_eoc_to_turn_context_activity(
|
{"golden_diff": "diff --git a/libraries/botbuilder-core/botbuilder/core/skills/skill_handler.py b/libraries/botbuilder-core/botbuilder/core/skills/skill_handler.py\n--- a/libraries/botbuilder-core/botbuilder/core/skills/skill_handler.py\n+++ b/libraries/botbuilder-core/botbuilder/core/skills/skill_handler.py\n@@ -151,7 +151,14 @@\n if not skill_conversation_reference:\n raise KeyError(\"SkillConversationReference not found\")\n \n+ if not skill_conversation_reference.conversation_reference:\n+ raise KeyError(\"conversationReference not found\")\n+\n+ # If an activity is sent, return the ResourceResponse\n+ resource_response: ResourceResponse = None\n+\n async def callback(context: TurnContext):\n+ nonlocal resource_response\n context.turn_state[\n SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY\n ] = skill_conversation_reference\n@@ -177,7 +184,7 @@\n self._apply_event_to_turn_context_activity(context, activity)\n await self._bot.on_turn(context)\n else:\n- await context.send_activity(activity)\n+ resource_response = await context.send_activity(activity)\n \n await self._adapter.continue_conversation(\n skill_conversation_reference.conversation_reference,\n@@ -185,7 +192,11 @@\n claims_identity=claims_identity,\n audience=skill_conversation_reference.oauth_scope,\n )\n- return ResourceResponse(id=str(uuid4()))\n+\n+ if not resource_response:\n+ resource_response = ResourceResponse(id=str(uuid4()))\n+\n+ return resource_response\n \n @staticmethod\n def _apply_eoc_to_turn_context_activity(\n", "issue": "SkillHandler doesn't return ResourceResponse when forwarding activities (Python)\nSee [parent](https://github.com/microsoft/botframework-sdk/issues/5919)\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom uuid import uuid4\n\nfrom botbuilder.core import Bot, BotAdapter, ChannelServiceHandler, TurnContext\nfrom botbuilder.schema import (\n Activity,\n ActivityTypes,\n ResourceResponse,\n CallerIdConstants,\n)\nfrom botframework.connector.auth import (\n AuthenticationConfiguration,\n AuthenticationConstants,\n ChannelProvider,\n ClaimsIdentity,\n CredentialProvider,\n GovernmentConstants,\n JwtTokenValidation,\n)\nfrom .skill_conversation_reference import SkillConversationReference\nfrom .conversation_id_factory import ConversationIdFactoryBase\n\n\nclass SkillHandler(ChannelServiceHandler):\n\n SKILL_CONVERSATION_REFERENCE_KEY = (\n \"botbuilder.core.skills.SkillConversationReference\"\n )\n\n def __init__(\n self,\n adapter: BotAdapter,\n bot: Bot,\n conversation_id_factory: ConversationIdFactoryBase,\n credential_provider: CredentialProvider,\n auth_configuration: AuthenticationConfiguration,\n channel_provider: ChannelProvider = None,\n logger: object = None,\n ):\n super().__init__(credential_provider, auth_configuration, channel_provider)\n\n if not adapter:\n raise TypeError(\"adapter can't be None\")\n if not bot:\n raise TypeError(\"bot can't be None\")\n if not conversation_id_factory:\n raise TypeError(\"conversation_id_factory can't be None\")\n\n self._adapter = adapter\n self._bot = bot\n self._conversation_id_factory = conversation_id_factory\n self._logger = logger\n\n async def on_send_to_conversation(\n self, claims_identity: ClaimsIdentity, conversation_id: str, activity: Activity,\n ) -> ResourceResponse:\n \"\"\"\n send_to_conversation() API for Skill\n\n This method allows you to send an activity to the end of a conversation.\n\n This is slightly different from ReplyToActivity().\n * 
SendToConversation(conversationId) - will append the activity to the end\n of the conversation according to the timestamp or semantics of the channel.\n * ReplyToActivity(conversationId,ActivityId) - adds the activity as a reply\n to another activity, if the channel supports it. If the channel does not\n support nested replies, ReplyToActivity falls back to SendToConversation.\n\n Use ReplyToActivity when replying to a specific activity in the\n conversation.\n\n Use SendToConversation in all other cases.\n :param claims_identity: Claims identity for the bot.\n :type claims_identity: :class:`botframework.connector.auth.ClaimsIdentity`\n :param conversation_id:The conversation ID.\n :type conversation_id: str\n :param activity: Activity to send.\n :type activity: Activity\n :return:\n \"\"\"\n return await self._process_activity(\n claims_identity, conversation_id, None, activity,\n )\n\n async def on_reply_to_activity(\n self,\n claims_identity: ClaimsIdentity,\n conversation_id: str,\n activity_id: str,\n activity: Activity,\n ) -> ResourceResponse:\n \"\"\"\n reply_to_activity() API for Skill.\n\n This method allows you to reply to an activity.\n\n This is slightly different from SendToConversation().\n * SendToConversation(conversationId) - will append the activity to the end\n of the conversation according to the timestamp or semantics of the channel.\n * ReplyToActivity(conversationId,ActivityId) - adds the activity as a reply\n to another activity, if the channel supports it. If the channel does not\n support nested replies, ReplyToActivity falls back to SendToConversation.\n\n Use ReplyToActivity when replying to a specific activity in the\n conversation.\n\n Use SendToConversation in all other cases.\n :param claims_identity: Claims identity for the bot.\n :type claims_identity: :class:`botframework.connector.auth.ClaimsIdentity`\n :param conversation_id:The conversation ID.\n :type conversation_id: str\n :param activity: Activity to send.\n :type activity: Activity\n :return:\n \"\"\"\n return await self._process_activity(\n claims_identity, conversation_id, activity_id, activity,\n )\n\n async def _process_activity(\n self,\n claims_identity: ClaimsIdentity,\n conversation_id: str,\n reply_to_activity_id: str,\n activity: Activity,\n ) -> ResourceResponse:\n # Get the SkillsConversationReference\n conversation_reference_result = await self._conversation_id_factory.get_conversation_reference(\n conversation_id\n )\n\n # ConversationIdFactory can return either a SkillConversationReference (the newer way),\n # or a ConversationReference (the old way, but still here for compatibility). 
If a\n # ConversationReference is returned, build a new SkillConversationReference to simplify\n # the remainder of this method.\n skill_conversation_reference: SkillConversationReference = None\n if isinstance(conversation_reference_result, SkillConversationReference):\n skill_conversation_reference = conversation_reference_result\n else:\n skill_conversation_reference = SkillConversationReference(\n conversation_reference=conversation_reference_result,\n oauth_scope=(\n GovernmentConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE\n if self._channel_provider and self._channel_provider.is_government()\n else AuthenticationConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE\n ),\n )\n\n if not skill_conversation_reference:\n raise KeyError(\"SkillConversationReference not found\")\n\n async def callback(context: TurnContext):\n context.turn_state[\n SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY\n ] = skill_conversation_reference\n\n TurnContext.apply_conversation_reference(\n activity, skill_conversation_reference.conversation_reference\n )\n\n context.activity.id = reply_to_activity_id\n\n app_id = JwtTokenValidation.get_app_id_from_claims(claims_identity.claims)\n context.activity.caller_id = (\n f\"{CallerIdConstants.bot_to_bot_prefix}{app_id}\"\n )\n\n if activity.type == ActivityTypes.end_of_conversation:\n await self._conversation_id_factory.delete_conversation_reference(\n conversation_id\n )\n self._apply_eoc_to_turn_context_activity(context, activity)\n await self._bot.on_turn(context)\n elif activity.type == ActivityTypes.event:\n self._apply_event_to_turn_context_activity(context, activity)\n await self._bot.on_turn(context)\n else:\n await context.send_activity(activity)\n\n await self._adapter.continue_conversation(\n skill_conversation_reference.conversation_reference,\n callback,\n claims_identity=claims_identity,\n audience=skill_conversation_reference.oauth_scope,\n )\n return ResourceResponse(id=str(uuid4()))\n\n @staticmethod\n def _apply_eoc_to_turn_context_activity(\n context: TurnContext, end_of_conversation_activity: Activity\n ):\n context.activity.type = end_of_conversation_activity.type\n context.activity.text = end_of_conversation_activity.text\n context.activity.code = end_of_conversation_activity.code\n\n context.activity.reply_to_id = end_of_conversation_activity.reply_to_id\n context.activity.value = end_of_conversation_activity.value\n context.activity.entities = end_of_conversation_activity.entities\n context.activity.locale = end_of_conversation_activity.locale\n context.activity.local_timestamp = end_of_conversation_activity.local_timestamp\n context.activity.timestamp = end_of_conversation_activity.timestamp\n context.activity.channel_data = end_of_conversation_activity.channel_data\n context.activity.additional_properties = (\n end_of_conversation_activity.additional_properties\n )\n\n @staticmethod\n def _apply_event_to_turn_context_activity(\n context: TurnContext, event_activity: Activity\n ):\n context.activity.type = event_activity.type\n context.activity.name = event_activity.name\n context.activity.value = event_activity.value\n context.activity.relates_to = event_activity.relates_to\n\n context.activity.reply_to_id = event_activity.reply_to_id\n context.activity.value = event_activity.value\n context.activity.entities = event_activity.entities\n context.activity.locale = event_activity.locale\n context.activity.local_timestamp = event_activity.local_timestamp\n context.activity.timestamp = event_activity.timestamp\n context.activity.channel_data = 
event_activity.channel_data\n context.activity.additional_properties = event_activity.additional_properties\n", "path": "libraries/botbuilder-core/botbuilder/core/skills/skill_handler.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom uuid import uuid4\n\nfrom botbuilder.core import Bot, BotAdapter, ChannelServiceHandler, TurnContext\nfrom botbuilder.schema import (\n Activity,\n ActivityTypes,\n ResourceResponse,\n CallerIdConstants,\n)\nfrom botframework.connector.auth import (\n AuthenticationConfiguration,\n AuthenticationConstants,\n ChannelProvider,\n ClaimsIdentity,\n CredentialProvider,\n GovernmentConstants,\n JwtTokenValidation,\n)\nfrom .skill_conversation_reference import SkillConversationReference\nfrom .conversation_id_factory import ConversationIdFactoryBase\n\n\nclass SkillHandler(ChannelServiceHandler):\n\n SKILL_CONVERSATION_REFERENCE_KEY = (\n \"botbuilder.core.skills.SkillConversationReference\"\n )\n\n def __init__(\n self,\n adapter: BotAdapter,\n bot: Bot,\n conversation_id_factory: ConversationIdFactoryBase,\n credential_provider: CredentialProvider,\n auth_configuration: AuthenticationConfiguration,\n channel_provider: ChannelProvider = None,\n logger: object = None,\n ):\n super().__init__(credential_provider, auth_configuration, channel_provider)\n\n if not adapter:\n raise TypeError(\"adapter can't be None\")\n if not bot:\n raise TypeError(\"bot can't be None\")\n if not conversation_id_factory:\n raise TypeError(\"conversation_id_factory can't be None\")\n\n self._adapter = adapter\n self._bot = bot\n self._conversation_id_factory = conversation_id_factory\n self._logger = logger\n\n async def on_send_to_conversation(\n self, claims_identity: ClaimsIdentity, conversation_id: str, activity: Activity,\n ) -> ResourceResponse:\n \"\"\"\n send_to_conversation() API for Skill\n\n This method allows you to send an activity to the end of a conversation.\n\n This is slightly different from ReplyToActivity().\n * SendToConversation(conversationId) - will append the activity to the end\n of the conversation according to the timestamp or semantics of the channel.\n * ReplyToActivity(conversationId,ActivityId) - adds the activity as a reply\n to another activity, if the channel supports it. 
If the channel does not\n support nested replies, ReplyToActivity falls back to SendToConversation.\n\n Use ReplyToActivity when replying to a specific activity in the\n conversation.\n\n Use SendToConversation in all other cases.\n :param claims_identity: Claims identity for the bot.\n :type claims_identity: :class:`botframework.connector.auth.ClaimsIdentity`\n :param conversation_id:The conversation ID.\n :type conversation_id: str\n :param activity: Activity to send.\n :type activity: Activity\n :return:\n \"\"\"\n return await self._process_activity(\n claims_identity, conversation_id, None, activity,\n )\n\n async def on_reply_to_activity(\n self,\n claims_identity: ClaimsIdentity,\n conversation_id: str,\n activity_id: str,\n activity: Activity,\n ) -> ResourceResponse:\n \"\"\"\n reply_to_activity() API for Skill.\n\n This method allows you to reply to an activity.\n\n This is slightly different from SendToConversation().\n * SendToConversation(conversationId) - will append the activity to the end\n of the conversation according to the timestamp or semantics of the channel.\n * ReplyToActivity(conversationId,ActivityId) - adds the activity as a reply\n to another activity, if the channel supports it. If the channel does not\n support nested replies, ReplyToActivity falls back to SendToConversation.\n\n Use ReplyToActivity when replying to a specific activity in the\n conversation.\n\n Use SendToConversation in all other cases.\n :param claims_identity: Claims identity for the bot.\n :type claims_identity: :class:`botframework.connector.auth.ClaimsIdentity`\n :param conversation_id:The conversation ID.\n :type conversation_id: str\n :param activity: Activity to send.\n :type activity: Activity\n :return:\n \"\"\"\n return await self._process_activity(\n claims_identity, conversation_id, activity_id, activity,\n )\n\n async def _process_activity(\n self,\n claims_identity: ClaimsIdentity,\n conversation_id: str,\n reply_to_activity_id: str,\n activity: Activity,\n ) -> ResourceResponse:\n # Get the SkillsConversationReference\n conversation_reference_result = await self._conversation_id_factory.get_conversation_reference(\n conversation_id\n )\n\n # ConversationIdFactory can return either a SkillConversationReference (the newer way),\n # or a ConversationReference (the old way, but still here for compatibility). 
If a\n # ConversationReference is returned, build a new SkillConversationReference to simplify\n # the remainder of this method.\n skill_conversation_reference: SkillConversationReference = None\n if isinstance(conversation_reference_result, SkillConversationReference):\n skill_conversation_reference = conversation_reference_result\n else:\n skill_conversation_reference = SkillConversationReference(\n conversation_reference=conversation_reference_result,\n oauth_scope=(\n GovernmentConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE\n if self._channel_provider and self._channel_provider.is_government()\n else AuthenticationConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE\n ),\n )\n\n if not skill_conversation_reference:\n raise KeyError(\"SkillConversationReference not found\")\n\n if not skill_conversation_reference.conversation_reference:\n raise KeyError(\"conversationReference not found\")\n\n # If an activity is sent, return the ResourceResponse\n resource_response: ResourceResponse = None\n\n async def callback(context: TurnContext):\n nonlocal resource_response\n context.turn_state[\n SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY\n ] = skill_conversation_reference\n\n TurnContext.apply_conversation_reference(\n activity, skill_conversation_reference.conversation_reference\n )\n\n context.activity.id = reply_to_activity_id\n\n app_id = JwtTokenValidation.get_app_id_from_claims(claims_identity.claims)\n context.activity.caller_id = (\n f\"{CallerIdConstants.bot_to_bot_prefix}{app_id}\"\n )\n\n if activity.type == ActivityTypes.end_of_conversation:\n await self._conversation_id_factory.delete_conversation_reference(\n conversation_id\n )\n self._apply_eoc_to_turn_context_activity(context, activity)\n await self._bot.on_turn(context)\n elif activity.type == ActivityTypes.event:\n self._apply_event_to_turn_context_activity(context, activity)\n await self._bot.on_turn(context)\n else:\n resource_response = await context.send_activity(activity)\n\n await self._adapter.continue_conversation(\n skill_conversation_reference.conversation_reference,\n callback,\n claims_identity=claims_identity,\n audience=skill_conversation_reference.oauth_scope,\n )\n\n if not resource_response:\n resource_response = ResourceResponse(id=str(uuid4()))\n\n return resource_response\n\n @staticmethod\n def _apply_eoc_to_turn_context_activity(\n context: TurnContext, end_of_conversation_activity: Activity\n ):\n context.activity.type = end_of_conversation_activity.type\n context.activity.text = end_of_conversation_activity.text\n context.activity.code = end_of_conversation_activity.code\n\n context.activity.reply_to_id = end_of_conversation_activity.reply_to_id\n context.activity.value = end_of_conversation_activity.value\n context.activity.entities = end_of_conversation_activity.entities\n context.activity.locale = end_of_conversation_activity.locale\n context.activity.local_timestamp = end_of_conversation_activity.local_timestamp\n context.activity.timestamp = end_of_conversation_activity.timestamp\n context.activity.channel_data = end_of_conversation_activity.channel_data\n context.activity.additional_properties = (\n end_of_conversation_activity.additional_properties\n )\n\n @staticmethod\n def _apply_event_to_turn_context_activity(\n context: TurnContext, event_activity: Activity\n ):\n context.activity.type = event_activity.type\n context.activity.name = event_activity.name\n context.activity.value = event_activity.value\n context.activity.relates_to = event_activity.relates_to\n\n context.activity.reply_to_id = 
event_activity.reply_to_id\n context.activity.value = event_activity.value\n context.activity.entities = event_activity.entities\n context.activity.locale = event_activity.locale\n context.activity.local_timestamp = event_activity.local_timestamp\n context.activity.timestamp = event_activity.timestamp\n context.activity.channel_data = event_activity.channel_data\n context.activity.additional_properties = event_activity.additional_properties\n", "path": "libraries/botbuilder-core/botbuilder/core/skills/skill_handler.py"}]}
| 2,591 | 362 |
gh_patches_debug_35684
|
rasdani/github-patches
|
git_diff
|
ManageIQ__integration_tests-296
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Better YAML overriding
Now it does not take just the root element into account; instead it crawls through the dictionary and only updates the values that are present in the new dictionary. It converts all dicts to Configs; values other than those specified in the override dict are not touched.
It also improves the `__getattribute__` behaviour - now it propagates the interface to the child nodes by converting all `dict` to `Config` before returning the value, so the dot operator can be used everywhere.
--- END ISSUE ---
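For illustration, here is a minimal, self-contained sketch of the merge-and-dot-access behaviour described in the issue. The `recursive_update` helper and `AttrDict` wrapper are hypothetical stand-ins, not the project's actual `Config`/loader classes; the sample data mirrors the VM-name example used in the patch's docstring.

```python
# Illustrative sketch only: shows the override semantics described above,
# not the repository's real implementation.
def recursive_update(base, override):
    """Update nested dicts in place, touching only keys present in `override`."""
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            recursive_update(base[key], value)   # descend instead of replacing
        else:
            base[key] = value                    # scalars, lists, etc. are replaced


class AttrDict(dict):
    """Dot access that wraps nested dicts so the interface propagates downward."""
    def __getattr__(self, name):
        value = self[name]
        return AttrDict(value) if isinstance(value, dict) else value


original = {"something": {"somewhere": {"VM": {"a": 1, "b": 2, "name": "qwer"}, "c": 3}}}
override = {"something": {"somewhere": {"VM": {"name": "tzui"}}}}
recursive_update(original, override)
cfg = AttrDict(original)
assert cfg.something.somewhere.VM.name == "tzui"   # only 'name' was overridden
assert cfg.something.somewhere.VM.a == 1           # untouched values survive
```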
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/conf_loader.py`
Content:
```
1 import os
2 from collections import OrderedDict
3
4 import py.path
5 import yaml
6 from yaml.loader import Loader
7
8
9 class OrderedYamlLoader(Loader):
10 def construct_yaml_map(self, node):
11 data = OrderedDict()
12 yield data
13 value = self.construct_mapping(node)
14 data.update(value)
15
16
17 class ConfigNotFoundException(Exception):
18 pass
19
20
21 class Config(dict):
22 """A dict subclass with knowledge of conf yamls and how to load them
23
24 Also supports descriptor access, e.g. conf.configfile
25 (compared to the normal dict access, conf['configfile'])
26 """
27 # Stash the exception on the class for convenience, e.g.
28 # try:
29 # conf[does_not_exist]
30 # except conf.NotFoundException
31 # ...
32 NotFoundException = ConfigNotFoundException
33
34 # Support for descriptor access, e.g. instance.attrname
35 # Note that this is only on the get side, for support of nefarious things
36 # like setting and deleting, use the normal dict interface.
37 def __getattribute__(self, attr):
38 # Attempt normal object attr lookup; delegate to the dict interface if that fails
39 try:
40 return super(Config, self).__getattribute__(attr)
41 except AttributeError:
42 return self[attr]
43
44 def __getitem__(self, key):
45 # Attempt a normal dict lookup to pull a cached conf
46 try:
47 return super(Config, self).__getitem__(key)
48 except KeyError:
49 # Cache miss, load the requested yaml
50 yaml_dict = load_yaml(key)
51
52 # Graft in local yaml updates if they're available
53 try:
54 local_yaml = '%s.local' % key
55 local_yaml_dict = load_yaml(local_yaml)
56 yaml_dict.update(local_yaml_dict)
57 except ConfigNotFoundException:
58 pass
59
60 # Returning self[key] instead of yaml_dict as a small sanity check
61 self[key] = yaml_dict
62 return self[key]
63
64
65 def load_yaml(filename=None):
66 # Find the requested yaml in the config dir, relative to this file's location
67 # (aiming for cfme_tests/config)
68 this_file = os.path.abspath(__file__)
69 path = py.path.local(this_file).new(basename='../conf/%s.yaml' % filename)
70
71 if path.check():
72 with path.open() as config_fh:
73 return yaml.load(config_fh, Loader=OrderedYamlLoader)
74 else:
75 msg = 'Unable to load configuration file at %s' % path
76 raise ConfigNotFoundException(msg)
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/utils/conf_loader.py b/utils/conf_loader.py
--- a/utils/conf_loader.py
+++ b/utils/conf_loader.py
@@ -1,17 +1,19 @@
import os
-from collections import OrderedDict
import py.path
import yaml
from yaml.loader import Loader
-class OrderedYamlLoader(Loader):
+class YamlConfigLoader(Loader):
+ # Override the root yaml node to be a RecursiveUpdateDict
def construct_yaml_map(self, node):
- data = OrderedDict()
+ data = RecursiveUpdateDict()
yield data
value = self.construct_mapping(node)
data.update(value)
+# Do the same for child nodes of the yaml mapping type
+YamlConfigLoader.add_constructor('tag:yaml.org,2002:map', YamlConfigLoader.construct_yaml_map)
class ConfigNotFoundException(Exception):
@@ -62,6 +64,43 @@
return self[key]
+class RecursiveUpdateDict(dict):
+ def update(self, new_data):
+ """ More intelligent dictionary update.
+
+ This method changes just data that have been changed. How does it work?
+ Imagine you want to change just VM name, other things should stay the same.
+
+ Original config:
+ something:
+ somewhere:
+ VM:
+ a: 1
+ b: 2
+ name: qwer
+ c: 3
+
+ Instead of copying the whole part from original to the override with just 'name' changed,
+ you will write this:
+
+ something:
+ somewhere:
+ VM:
+ name: tzui
+
+ This digging deeper affects only dictionary values. Lists are unaffected! And so do other
+ types.
+
+ Args:
+ new_data: Update data.
+ """
+ for key, value in new_data.iteritems():
+ if isinstance(value, type(self)) and key in self:
+ type(self).update(self[key], value)
+ else:
+ self[key] = new_data[key]
+
+
def load_yaml(filename=None):
# Find the requested yaml in the config dir, relative to this file's location
# (aiming for cfme_tests/config)
@@ -70,7 +109,7 @@
if path.check():
with path.open() as config_fh:
- return yaml.load(config_fh, Loader=OrderedYamlLoader)
+ return yaml.load(config_fh, Loader=YamlConfigLoader)
else:
msg = 'Unable to load configuration file at %s' % path
raise ConfigNotFoundException(msg)
|
{"golden_diff": "diff --git a/utils/conf_loader.py b/utils/conf_loader.py\n--- a/utils/conf_loader.py\n+++ b/utils/conf_loader.py\n@@ -1,17 +1,19 @@\n import os\n-from collections import OrderedDict\n \n import py.path\n import yaml\n from yaml.loader import Loader\n \n \n-class OrderedYamlLoader(Loader):\n+class YamlConfigLoader(Loader):\n+ # Override the root yaml node to be a RecursiveUpdateDict\n def construct_yaml_map(self, node):\n- data = OrderedDict()\n+ data = RecursiveUpdateDict()\n yield data\n value = self.construct_mapping(node)\n data.update(value)\n+# Do the same for child nodes of the yaml mapping type\n+YamlConfigLoader.add_constructor('tag:yaml.org,2002:map', YamlConfigLoader.construct_yaml_map)\n \n \n class ConfigNotFoundException(Exception):\n@@ -62,6 +64,43 @@\n return self[key]\n \n \n+class RecursiveUpdateDict(dict):\n+ def update(self, new_data):\n+ \"\"\" More intelligent dictionary update.\n+\n+ This method changes just data that have been changed. How does it work?\n+ Imagine you want to change just VM name, other things should stay the same.\n+\n+ Original config:\n+ something:\n+ somewhere:\n+ VM:\n+ a: 1\n+ b: 2\n+ name: qwer\n+ c: 3\n+\n+ Instead of copying the whole part from original to the override with just 'name' changed,\n+ you will write this:\n+\n+ something:\n+ somewhere:\n+ VM:\n+ name: tzui\n+\n+ This digging deeper affects only dictionary values. Lists are unaffected! And so do other\n+ types.\n+\n+ Args:\n+ new_data: Update data.\n+ \"\"\"\n+ for key, value in new_data.iteritems():\n+ if isinstance(value, type(self)) and key in self:\n+ type(self).update(self[key], value)\n+ else:\n+ self[key] = new_data[key]\n+\n+\n def load_yaml(filename=None):\n # Find the requested yaml in the config dir, relative to this file's location\n # (aiming for cfme_tests/config)\n@@ -70,7 +109,7 @@\n \n if path.check():\n with path.open() as config_fh:\n- return yaml.load(config_fh, Loader=OrderedYamlLoader)\n+ return yaml.load(config_fh, Loader=YamlConfigLoader)\n else:\n msg = 'Unable to load configuration file at %s' % path\n raise ConfigNotFoundException(msg)\n", "issue": "Better YAML overriding\nNow it does not take just the root element into the account, but it crawls throught the dictionary and only updates the values that are present in the new dictionary. It converts all dicts to Configs, other values than specified in override dict are not touched.\n\nIt also improves the `__getattribute__` behaviour - now it propagates the interface to the child nodes by converting all `dict` to `Config` before returning the value, so the dot operator can be used everywhere.\n\n", "before_files": [{"content": "import os\nfrom collections import OrderedDict\n\nimport py.path\nimport yaml\nfrom yaml.loader import Loader\n\n\nclass OrderedYamlLoader(Loader):\n def construct_yaml_map(self, node):\n data = OrderedDict()\n yield data\n value = self.construct_mapping(node)\n data.update(value)\n\n\nclass ConfigNotFoundException(Exception):\n pass\n\n\nclass Config(dict):\n \"\"\"A dict subclass with knowledge of conf yamls and how to load them\n\n Also supports descriptor access, e.g. conf.configfile\n (compared to the normal dict access, conf['configfile'])\n \"\"\"\n # Stash the exception on the class for convenience, e.g.\n # try:\n # conf[does_not_exist]\n # except conf.NotFoundException\n # ...\n NotFoundException = ConfigNotFoundException\n\n # Support for descriptor access, e.g. 
instance.attrname\n # Note that this is only on the get side, for support of nefarious things\n # like setting and deleting, use the normal dict interface.\n def __getattribute__(self, attr):\n # Attempt normal object attr lookup; delegate to the dict interface if that fails\n try:\n return super(Config, self).__getattribute__(attr)\n except AttributeError:\n return self[attr]\n\n def __getitem__(self, key):\n # Attempt a normal dict lookup to pull a cached conf\n try:\n return super(Config, self).__getitem__(key)\n except KeyError:\n # Cache miss, load the requested yaml\n yaml_dict = load_yaml(key)\n\n # Graft in local yaml updates if they're available\n try:\n local_yaml = '%s.local' % key\n local_yaml_dict = load_yaml(local_yaml)\n yaml_dict.update(local_yaml_dict)\n except ConfigNotFoundException:\n pass\n\n # Returning self[key] instead of yaml_dict as a small sanity check\n self[key] = yaml_dict\n return self[key]\n\n\ndef load_yaml(filename=None):\n # Find the requested yaml in the config dir, relative to this file's location\n # (aiming for cfme_tests/config)\n this_file = os.path.abspath(__file__)\n path = py.path.local(this_file).new(basename='../conf/%s.yaml' % filename)\n\n if path.check():\n with path.open() as config_fh:\n return yaml.load(config_fh, Loader=OrderedYamlLoader)\n else:\n msg = 'Unable to load configuration file at %s' % path\n raise ConfigNotFoundException(msg)\n", "path": "utils/conf_loader.py"}], "after_files": [{"content": "import os\n\nimport py.path\nimport yaml\nfrom yaml.loader import Loader\n\n\nclass YamlConfigLoader(Loader):\n # Override the root yaml node to be a RecursiveUpdateDict\n def construct_yaml_map(self, node):\n data = RecursiveUpdateDict()\n yield data\n value = self.construct_mapping(node)\n data.update(value)\n# Do the same for child nodes of the yaml mapping type\nYamlConfigLoader.add_constructor('tag:yaml.org,2002:map', YamlConfigLoader.construct_yaml_map)\n\n\nclass ConfigNotFoundException(Exception):\n pass\n\n\nclass Config(dict):\n \"\"\"A dict subclass with knowledge of conf yamls and how to load them\n\n Also supports descriptor access, e.g. conf.configfile\n (compared to the normal dict access, conf['configfile'])\n \"\"\"\n # Stash the exception on the class for convenience, e.g.\n # try:\n # conf[does_not_exist]\n # except conf.NotFoundException\n # ...\n NotFoundException = ConfigNotFoundException\n\n # Support for descriptor access, e.g. instance.attrname\n # Note that this is only on the get side, for support of nefarious things\n # like setting and deleting, use the normal dict interface.\n def __getattribute__(self, attr):\n # Attempt normal object attr lookup; delegate to the dict interface if that fails\n try:\n return super(Config, self).__getattribute__(attr)\n except AttributeError:\n return self[attr]\n\n def __getitem__(self, key):\n # Attempt a normal dict lookup to pull a cached conf\n try:\n return super(Config, self).__getitem__(key)\n except KeyError:\n # Cache miss, load the requested yaml\n yaml_dict = load_yaml(key)\n\n # Graft in local yaml updates if they're available\n try:\n local_yaml = '%s.local' % key\n local_yaml_dict = load_yaml(local_yaml)\n yaml_dict.update(local_yaml_dict)\n except ConfigNotFoundException:\n pass\n\n # Returning self[key] instead of yaml_dict as a small sanity check\n self[key] = yaml_dict\n return self[key]\n\n\nclass RecursiveUpdateDict(dict):\n def update(self, new_data):\n \"\"\" More intelligent dictionary update.\n\n This method changes just data that have been changed. 
How does it work?\n Imagine you want to change just VM name, other things should stay the same.\n\n Original config:\n something:\n somewhere:\n VM:\n a: 1\n b: 2\n name: qwer\n c: 3\n\n Instead of copying the whole part from original to the override with just 'name' changed,\n you will write this:\n\n something:\n somewhere:\n VM:\n name: tzui\n\n This digging deeper affects only dictionary values. Lists are unaffected! And so do other\n types.\n\n Args:\n new_data: Update data.\n \"\"\"\n for key, value in new_data.iteritems():\n if isinstance(value, type(self)) and key in self:\n type(self).update(self[key], value)\n else:\n self[key] = new_data[key]\n\n\ndef load_yaml(filename=None):\n # Find the requested yaml in the config dir, relative to this file's location\n # (aiming for cfme_tests/config)\n this_file = os.path.abspath(__file__)\n path = py.path.local(this_file).new(basename='../conf/%s.yaml' % filename)\n\n if path.check():\n with path.open() as config_fh:\n return yaml.load(config_fh, Loader=YamlConfigLoader)\n else:\n msg = 'Unable to load configuration file at %s' % path\n raise ConfigNotFoundException(msg)\n", "path": "utils/conf_loader.py"}]}
| 1,042 | 563 |
gh_patches_debug_20213
|
rasdani/github-patches
|
git_diff
|
ray-project__ray-1523
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[rllib] [docs] Document multi-agent support
We should document the new multi-agent support in rllib and have some examples in readthedocs. It would be good to cover the supported cases and which ones are not yet supported (or provide workarounds).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/ray/rllib/examples/multiagent_pendulum_env.py`
Content:
```
1 from gym.spaces import Box, Tuple
2 from gym.utils import seeding
3 from gym.envs.classic_control.pendulum import PendulumEnv
4 import numpy as np
5
6 """
7 Multiagent pendulum that sums its torques to generate an action
8 """
9
10
11 class MultiAgentPendulumEnv(PendulumEnv):
12 metadata = {
13 'render.modes': ['human', 'rgb_array'],
14 'video.frames_per_second': 30
15 }
16
17 def __init__(self):
18 self.max_speed = 8
19 self.max_torque = 2.
20 self.dt = .05
21 self.viewer = None
22
23 high = np.array([1., 1., self.max_speed])
24 self.action_space = [Box(low=-self.max_torque / 2,
25 high=self.max_torque / 2, shape=(1,))
26 for _ in range(2)]
27 self.observation_space = Tuple(tuple(Box(low=-high, high=high)
28 for _ in range(2)))
29
30 self._seed()
31
32 def _seed(self, seed=None):
33 self.np_random, seed = seeding.np_random(seed)
34 return [seed]
35
36 def _step(self, u):
37 th, thdot = self.state # th := theta
38
39 summed_u = np.sum(u)
40 g = 10.
41 m = 1.
42 length = 1.
43 dt = self.dt
44
45 summed_u = np.clip(summed_u, -self.max_torque, self.max_torque)
46 self.last_u = summed_u # for rendering
47 costs = self.angle_normalize(th) ** 2 + .1 * thdot ** 2 + \
48 .001 * (summed_u ** 2)
49
50 newthdot = thdot + (-3 * g / (2 * length) * np.sin(th + np.pi) +
51 3. / (m * length ** 2) * summed_u) * dt
52 newth = th + newthdot * dt
53 newthdot = np.clip(newthdot, -self.max_speed, self.max_speed)
54
55 self.state = np.array([newth, newthdot])
56 return self._get_obs(), -costs, False, {}
57
58 def _reset(self):
59 high = np.array([np.pi, 1])
60 self.state = self.np_random.uniform(low=-high, high=high)
61 self.last_u = None
62 return self._get_obs()
63
64 def _get_obs(self):
65 theta, thetadot = self.state
66 return [np.array([np.cos(theta), np.sin(theta), thetadot])
67 for _ in range(2)]
68
69 def angle_normalize(self, x):
70 return (((x + np.pi) % (2 * np.pi)) - np.pi)
71
```
Path: `python/ray/rllib/examples/multiagent_mountaincar_env.py`
Content:
```
1 import math
2 from gym.spaces import Box, Tuple, Discrete
3 import numpy as np
4 from gym.envs.classic_control.mountain_car import MountainCarEnv
5
6 """
7 Multiagent mountain car that sums and then
8 averages its actions to produce the velocity
9 """
10
11
12 class MultiAgentMountainCarEnv(MountainCarEnv):
13 def __init__(self):
14 self.min_position = -1.2
15 self.max_position = 0.6
16 self.max_speed = 0.07
17 self.goal_position = 0.5
18
19 self.low = np.array([self.min_position, -self.max_speed])
20 self.high = np.array([self.max_position, self.max_speed])
21
22 self.viewer = None
23
24 self.action_space = [Discrete(3) for _ in range(2)]
25 self.observation_space = Tuple(tuple(Box(self.low, self.high)
26 for _ in range(2)))
27
28 self._seed()
29 self.reset()
30
31 def _step(self, action):
32 summed_act = 0.5 * np.sum(action)
33
34 position, velocity = self.state
35 velocity += (summed_act - 1) * 0.001
36 velocity += math.cos(3 * position) * (-0.0025)
37 velocity = np.clip(velocity, -self.max_speed, self.max_speed)
38 position += velocity
39 position = np.clip(position, self.min_position, self.max_position)
40 if (position == self.min_position and velocity < 0):
41 velocity = 0
42
43 done = bool(position >= self.goal_position)
44
45 reward = position
46
47 self.state = (position, velocity)
48 return [np.array(self.state) for _ in range(2)], reward, done, {}
49
50 def _reset(self):
51 self.state = np.array([self.np_random.uniform(low=-0.6, high=-0.4), 0])
52 return [np.array(self.state) for _ in range(2)]
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/ray/rllib/examples/multiagent_mountaincar_env.py b/python/ray/rllib/examples/multiagent_mountaincar_env.py
--- a/python/ray/rllib/examples/multiagent_mountaincar_env.py
+++ b/python/ray/rllib/examples/multiagent_mountaincar_env.py
@@ -22,8 +22,8 @@
self.viewer = None
self.action_space = [Discrete(3) for _ in range(2)]
- self.observation_space = Tuple(tuple(Box(self.low, self.high)
- for _ in range(2)))
+ self.observation_space = Tuple([
+ Box(self.low, self.high) for _ in range(2)])
self._seed()
self.reset()
diff --git a/python/ray/rllib/examples/multiagent_pendulum_env.py b/python/ray/rllib/examples/multiagent_pendulum_env.py
--- a/python/ray/rllib/examples/multiagent_pendulum_env.py
+++ b/python/ray/rllib/examples/multiagent_pendulum_env.py
@@ -24,8 +24,8 @@
self.action_space = [Box(low=-self.max_torque / 2,
high=self.max_torque / 2, shape=(1,))
for _ in range(2)]
- self.observation_space = Tuple(tuple(Box(low=-high, high=high)
- for _ in range(2)))
+ self.observation_space = Tuple([
+ Box(low=-high, high=high) for _ in range(2)])
self._seed()
|
{"golden_diff": "diff --git a/python/ray/rllib/examples/multiagent_mountaincar_env.py b/python/ray/rllib/examples/multiagent_mountaincar_env.py\n--- a/python/ray/rllib/examples/multiagent_mountaincar_env.py\n+++ b/python/ray/rllib/examples/multiagent_mountaincar_env.py\n@@ -22,8 +22,8 @@\n self.viewer = None\n \n self.action_space = [Discrete(3) for _ in range(2)]\n- self.observation_space = Tuple(tuple(Box(self.low, self.high)\n- for _ in range(2)))\n+ self.observation_space = Tuple([\n+ Box(self.low, self.high) for _ in range(2)])\n \n self._seed()\n self.reset()\ndiff --git a/python/ray/rllib/examples/multiagent_pendulum_env.py b/python/ray/rllib/examples/multiagent_pendulum_env.py\n--- a/python/ray/rllib/examples/multiagent_pendulum_env.py\n+++ b/python/ray/rllib/examples/multiagent_pendulum_env.py\n@@ -24,8 +24,8 @@\n self.action_space = [Box(low=-self.max_torque / 2,\n high=self.max_torque / 2, shape=(1,))\n for _ in range(2)]\n- self.observation_space = Tuple(tuple(Box(low=-high, high=high)\n- for _ in range(2)))\n+ self.observation_space = Tuple([\n+ Box(low=-high, high=high) for _ in range(2)])\n \n self._seed()\n", "issue": "[rllib] [docs] Document multi-agent support\nWe should document the new multi-agent support in rllib and have some examples in readthedocs. It would be good to cover the supported cases and which ones are not yet supported (or provide workarounds).\n", "before_files": [{"content": "from gym.spaces import Box, Tuple\nfrom gym.utils import seeding\nfrom gym.envs.classic_control.pendulum import PendulumEnv\nimport numpy as np\n\n\"\"\"\n Multiagent pendulum that sums its torques to generate an action\n\"\"\"\n\n\nclass MultiAgentPendulumEnv(PendulumEnv):\n metadata = {\n 'render.modes': ['human', 'rgb_array'],\n 'video.frames_per_second': 30\n }\n\n def __init__(self):\n self.max_speed = 8\n self.max_torque = 2.\n self.dt = .05\n self.viewer = None\n\n high = np.array([1., 1., self.max_speed])\n self.action_space = [Box(low=-self.max_torque / 2,\n high=self.max_torque / 2, shape=(1,))\n for _ in range(2)]\n self.observation_space = Tuple(tuple(Box(low=-high, high=high)\n for _ in range(2)))\n\n self._seed()\n\n def _seed(self, seed=None):\n self.np_random, seed = seeding.np_random(seed)\n return [seed]\n\n def _step(self, u):\n th, thdot = self.state # th := theta\n\n summed_u = np.sum(u)\n g = 10.\n m = 1.\n length = 1.\n dt = self.dt\n\n summed_u = np.clip(summed_u, -self.max_torque, self.max_torque)\n self.last_u = summed_u # for rendering\n costs = self.angle_normalize(th) ** 2 + .1 * thdot ** 2 + \\\n .001 * (summed_u ** 2)\n\n newthdot = thdot + (-3 * g / (2 * length) * np.sin(th + np.pi) +\n 3. 
/ (m * length ** 2) * summed_u) * dt\n newth = th + newthdot * dt\n newthdot = np.clip(newthdot, -self.max_speed, self.max_speed)\n\n self.state = np.array([newth, newthdot])\n return self._get_obs(), -costs, False, {}\n\n def _reset(self):\n high = np.array([np.pi, 1])\n self.state = self.np_random.uniform(low=-high, high=high)\n self.last_u = None\n return self._get_obs()\n\n def _get_obs(self):\n theta, thetadot = self.state\n return [np.array([np.cos(theta), np.sin(theta), thetadot])\n for _ in range(2)]\n\n def angle_normalize(self, x):\n return (((x + np.pi) % (2 * np.pi)) - np.pi)\n", "path": "python/ray/rllib/examples/multiagent_pendulum_env.py"}, {"content": "import math\nfrom gym.spaces import Box, Tuple, Discrete\nimport numpy as np\nfrom gym.envs.classic_control.mountain_car import MountainCarEnv\n\n\"\"\"\nMultiagent mountain car that sums and then\naverages its actions to produce the velocity\n\"\"\"\n\n\nclass MultiAgentMountainCarEnv(MountainCarEnv):\n def __init__(self):\n self.min_position = -1.2\n self.max_position = 0.6\n self.max_speed = 0.07\n self.goal_position = 0.5\n\n self.low = np.array([self.min_position, -self.max_speed])\n self.high = np.array([self.max_position, self.max_speed])\n\n self.viewer = None\n\n self.action_space = [Discrete(3) for _ in range(2)]\n self.observation_space = Tuple(tuple(Box(self.low, self.high)\n for _ in range(2)))\n\n self._seed()\n self.reset()\n\n def _step(self, action):\n summed_act = 0.5 * np.sum(action)\n\n position, velocity = self.state\n velocity += (summed_act - 1) * 0.001\n velocity += math.cos(3 * position) * (-0.0025)\n velocity = np.clip(velocity, -self.max_speed, self.max_speed)\n position += velocity\n position = np.clip(position, self.min_position, self.max_position)\n if (position == self.min_position and velocity < 0):\n velocity = 0\n\n done = bool(position >= self.goal_position)\n\n reward = position\n\n self.state = (position, velocity)\n return [np.array(self.state) for _ in range(2)], reward, done, {}\n\n def _reset(self):\n self.state = np.array([self.np_random.uniform(low=-0.6, high=-0.4), 0])\n return [np.array(self.state) for _ in range(2)]\n", "path": "python/ray/rllib/examples/multiagent_mountaincar_env.py"}], "after_files": [{"content": "from gym.spaces import Box, Tuple\nfrom gym.utils import seeding\nfrom gym.envs.classic_control.pendulum import PendulumEnv\nimport numpy as np\n\n\"\"\"\n Multiagent pendulum that sums its torques to generate an action\n\"\"\"\n\n\nclass MultiAgentPendulumEnv(PendulumEnv):\n metadata = {\n 'render.modes': ['human', 'rgb_array'],\n 'video.frames_per_second': 30\n }\n\n def __init__(self):\n self.max_speed = 8\n self.max_torque = 2.\n self.dt = .05\n self.viewer = None\n\n high = np.array([1., 1., self.max_speed])\n self.action_space = [Box(low=-self.max_torque / 2,\n high=self.max_torque / 2, shape=(1,))\n for _ in range(2)]\n self.observation_space = Tuple([\n Box(low=-high, high=high) for _ in range(2)])\n\n self._seed()\n\n def _seed(self, seed=None):\n self.np_random, seed = seeding.np_random(seed)\n return [seed]\n\n def _step(self, u):\n th, thdot = self.state # th := theta\n\n summed_u = np.sum(u)\n g = 10.\n m = 1.\n length = 1.\n dt = self.dt\n\n summed_u = np.clip(summed_u, -self.max_torque, self.max_torque)\n self.last_u = summed_u # for rendering\n costs = self.angle_normalize(th) ** 2 + .1 * thdot ** 2 + \\\n .001 * (summed_u ** 2)\n\n newthdot = thdot + (-3 * g / (2 * length) * np.sin(th + np.pi) +\n 3. 
/ (m * length ** 2) * summed_u) * dt\n newth = th + newthdot * dt\n newthdot = np.clip(newthdot, -self.max_speed, self.max_speed)\n\n self.state = np.array([newth, newthdot])\n return self._get_obs(), -costs, False, {}\n\n def _reset(self):\n high = np.array([np.pi, 1])\n self.state = self.np_random.uniform(low=-high, high=high)\n self.last_u = None\n return self._get_obs()\n\n def _get_obs(self):\n theta, thetadot = self.state\n return [np.array([np.cos(theta), np.sin(theta), thetadot])\n for _ in range(2)]\n\n def angle_normalize(self, x):\n return (((x + np.pi) % (2 * np.pi)) - np.pi)\n", "path": "python/ray/rllib/examples/multiagent_pendulum_env.py"}, {"content": "import math\nfrom gym.spaces import Box, Tuple, Discrete\nimport numpy as np\nfrom gym.envs.classic_control.mountain_car import MountainCarEnv\n\n\"\"\"\nMultiagent mountain car that sums and then\naverages its actions to produce the velocity\n\"\"\"\n\n\nclass MultiAgentMountainCarEnv(MountainCarEnv):\n def __init__(self):\n self.min_position = -1.2\n self.max_position = 0.6\n self.max_speed = 0.07\n self.goal_position = 0.5\n\n self.low = np.array([self.min_position, -self.max_speed])\n self.high = np.array([self.max_position, self.max_speed])\n\n self.viewer = None\n\n self.action_space = [Discrete(3) for _ in range(2)]\n self.observation_space = Tuple([\n Box(self.low, self.high) for _ in range(2)])\n\n self._seed()\n self.reset()\n\n def _step(self, action):\n summed_act = 0.5 * np.sum(action)\n\n position, velocity = self.state\n velocity += (summed_act - 1) * 0.001\n velocity += math.cos(3 * position) * (-0.0025)\n velocity = np.clip(velocity, -self.max_speed, self.max_speed)\n position += velocity\n position = np.clip(position, self.min_position, self.max_position)\n if (position == self.min_position and velocity < 0):\n velocity = 0\n\n done = bool(position >= self.goal_position)\n\n reward = position\n\n self.state = (position, velocity)\n return [np.array(self.state) for _ in range(2)], reward, done, {}\n\n def _reset(self):\n self.state = np.array([self.np_random.uniform(low=-0.6, high=-0.4), 0])\n return [np.array(self.state) for _ in range(2)]\n", "path": "python/ray/rllib/examples/multiagent_mountaincar_env.py"}]}
| 1,617 | 343 |
gh_patches_debug_10894
|
rasdani/github-patches
|
git_diff
|
obspy__obspy-2310
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
incorrect bitmask in libgcf
libgcf.py, line 107, masks the bottom 4 bits for the compression code:
` compression = compress & 0b00001111 # get compression code`
This should mask off only the bottom 3 bits (the 4th is allocated for something else in the near future):
` compression = compress & 0b00000111 # get compression code`
--- END ISSUE ---
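As a quick standalone illustration (the byte value is made up), the two masks diverge as soon as the reserved bit (bit 3) is set, while the time-offset nibble is untouched either way; `COMPRESSION_D` in the file below only has keys 1, 2 and 4:

```python
# Hypothetical compression byte: bits 4-7 = time-offset code, bit 3 = reserved
# (set here), bits 0-2 = compression code, per the layout described above.
compress = 0b10101100               # reserved bit set, compression code = 4

code_3bit = compress & 0b00000111   # -> 4, a valid COMPRESSION_D key ('>i1')
code_4bit = compress & 0b00001111   # -> 12, not a valid compression code
t_offset = compress >> 4            # -> 10, unaffected by either mask

print(code_3bit, code_4bit, t_offset)   # 4 12 10
```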
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `obspy/io/gcf/libgcf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # reads Guralp Compressed Format (GCF) Files
3 # By Ran Novitsky Nof @ BSL, 2016
4 # [email protected]
5 # Based on Guralp's GCF reference (GCF-RFC-GCFR, Issue C, 2011-01-05)
6 # more details available from: http://www.guralp.com/apps/ok?doc=GCF_Intro
7 # last access: June, 2016
8 from __future__ import (absolute_import, division, print_function,
9 unicode_literals)
10 from future.builtins import * # NOQA
11
12 import numpy as np
13
14 from obspy import UTCDateTime
15
16 SPS_D = { # Table 3.1: special sample rates
17 157: 0.1,
18 161: 0.125,
19 162: 0.2,
20 164: 0.25,
21 167: 0.5,
22 171: 400,
23 174: 500,
24 176: 1000,
25 179: 2000,
26 181: 4000}
27 TIME_OFFSETS_D = { # Table 3.1: Time fractional offset denominator
28 171: 8.,
29 174: 2.,
30 176: 4.,
31 179: 8.,
32 181: 16.}
33 COMPRESSION_D = { # Table 3.2: format field to data type
34 1: '>i4',
35 2: '>i2',
36 4: '>i1'}
37
38
39 def is_gcf(f):
40 """
41 Test if file is GCF by reading at least 1 data block
42 """
43 header, data = read_data_block(f)
44
45
46 def decode36(data):
47 """
48 Converts an integer into a base36 string.
49 """
50 # http://geophysics.eas.gatech.edu/GTEQ/Scream4.4/Decoding_Base_36_numbers_C.htm
51 s = ''
52 while data:
53 imed = data % 36
54 if imed > 9:
55 c = chr(imed - 10 + ord('A'))
56 else:
57 c = chr(imed + ord('0'))
58 s = c + s
59 data = data // 36
60 return s
61
62
63 def decode_date_time(data):
64 """
65 Decode date and time field.
66
67 The date code is a 32 bit value specifying the start time of the block.
68 Bits 0-16 contain the number of seconds since midnight,
69 and bits 17-31 the number of days since 17th November 1989.
70 """
71 # prevent numpy array
72 days = int(data >> 17)
73 secs = int(data & 0x1FFFF)
74 starttime = UTCDateTime('1989-11-17') + days * 86400 + secs
75 return starttime
76
77
78 def read_data_block(f, headonly=False, channel_prefix="HH", **kwargs):
79 """
80 Read one data block from GCF file.
81
82 more details can be found here:
83 http://geophysics.eas.gatech.edu/GTEQ/Scream4.4/GCF_Specification.htm
84 f - file object to read from
85 if skipData is True, Only header is returned.
86 if not a data block (SPS=0) - returns None.
87 """
88 # get ID
89 sysid = f.read(4)
90 if not sysid:
91 raise EOFError # got to EOF
92 sysid = np.frombuffer(sysid, count=1, dtype='>u4')
93 if sysid >> 31 & 0b1 > 0:
94 sysid = (sysid << 6) >> 6
95 sysid = decode36(sysid)
96 # get Stream ID
97 stid = np.frombuffer(f.read(4), count=1, dtype='>u4')
98 stid = decode36(stid)
99 # get Date & Time
100 data = np.frombuffer(f.read(4), count=1, dtype='>u4')
101 starttime = decode_date_time(data)
102 # get data format
103 # get reserved, SPS, data type compression,
104 # number of 32bit records (num_records)
105 reserved, sps, compress, num_records = np.frombuffer(f.read(4), count=4,
106 dtype='>u1')
107 compression = compress & 0b00001111 # get compression code
108 t_offset = compress >> 4 # get time offset
109 if t_offset > 0:
110 starttime = starttime + t_offset / TIME_OFFSETS_D[sps]
111 if sps in SPS_D:
112 sps = SPS_D[sps] # get special SPS value if needed
113 if not sps:
114 f.seek(num_records * 4, 1) # skip if not a data block
115 if 1008 - num_records * 4 > 0:
116 # keep skipping to get 1008 record
117 f.seek(1008 - num_records * 4, 1)
118 return None
119 npts = num_records * compression # number of samples
120 header = {}
121 header['starttime'] = starttime
122 header['station'] = stid[:4]
123 header['channel'] = (channel_prefix[:2] + stid[4]).upper()
124 header['sampling_rate'] = float(sps)
125 header['npts'] = npts
126 if headonly:
127 f.seek(4 * (num_records + 2), 1) # skip data part (inc. FIC and RIC)
128 # skip to end of block if only partly filled with data
129 if 1000 - num_records * 4 > 0:
130 f.seek(1000 - num_records * 4, 1)
131 return header
132 else:
133 # get FIC
134 fic = np.frombuffer(f.read(4), count=1, dtype='>i4')
135 # get incremental data
136 data = np.frombuffer(f.read(4 * num_records), count=npts,
137 dtype=COMPRESSION_D[compression])
138 # construct time series
139 data = (fic + np.cumsum(data)).astype('i4')
140 # get RIC
141 ric = np.frombuffer(f.read(4), count=1, dtype='>i4')
142 # skip to end of block if only partly filled with data
143 if 1000 - num_records * 4 > 0:
144 f.seek(1000 - num_records * 4, 1)
145 # verify last data sample matches RIC
146 if not data[-1] == ric:
147 raise ValueError("Last sample mismatch with RIC")
148 return header, data
149
150
151 def read_header(f, **kwargs):
152 """
153 Reads header only from GCF file.
154 """
155 return read_data_block(f, headonly=True, **kwargs)
156
157
158 def read(f, **kwargs):
159 """
160 Reads header and data from GCF file.
161 """
162 return read_data_block(f, headonly=False, **kwargs)
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/obspy/io/gcf/libgcf.py b/obspy/io/gcf/libgcf.py
--- a/obspy/io/gcf/libgcf.py
+++ b/obspy/io/gcf/libgcf.py
@@ -104,7 +104,7 @@
# number of 32bit records (num_records)
reserved, sps, compress, num_records = np.frombuffer(f.read(4), count=4,
dtype='>u1')
- compression = compress & 0b00001111 # get compression code
+ compression = compress & 0b00000111 # get compression code
t_offset = compress >> 4 # get time offset
if t_offset > 0:
starttime = starttime + t_offset / TIME_OFFSETS_D[sps]
|
{"golden_diff": "diff --git a/obspy/io/gcf/libgcf.py b/obspy/io/gcf/libgcf.py\n--- a/obspy/io/gcf/libgcf.py\n+++ b/obspy/io/gcf/libgcf.py\n@@ -104,7 +104,7 @@\n # number of 32bit records (num_records)\n reserved, sps, compress, num_records = np.frombuffer(f.read(4), count=4,\n dtype='>u1')\n- compression = compress & 0b00001111 # get compression code\n+ compression = compress & 0b00000111 # get compression code\n t_offset = compress >> 4 # get time offset\n if t_offset > 0:\n starttime = starttime + t_offset / TIME_OFFSETS_D[sps]\n", "issue": "incorrect bitmask in libgcf\nlibgcf.py, line 107, masks the bottom 4 bits for the compresssion code:\r\n` compression = compress & 0b00001111 # get compression code`\r\n\r\nThis should mask off only the bottom 3 bits (the 4th is allocated for something else in the near future):\r\n` compression = compress & 0b00000111 # get compression code`\r\n\nincorrect bitmask in libgcf\nlibgcf.py, line 107, masks the bottom 4 bits for the compresssion code:\r\n` compression = compress & 0b00001111 # get compression code`\r\n\r\nThis should mask off only the bottom 3 bits (the 4th is allocated for something else in the near future):\r\n` compression = compress & 0b00000111 # get compression code`\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# reads Guralp Compressed Format (GCF) Files\n# By Ran Novitsky Nof @ BSL, 2016\n# [email protected]\n# Based on Guralp's GCF reference (GCF-RFC-GCFR, Issue C, 2011-01-05)\n# more details available from: http://www.guralp.com/apps/ok?doc=GCF_Intro\n# last access: June, 2016\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\nfrom future.builtins import * # NOQA\n\nimport numpy as np\n\nfrom obspy import UTCDateTime\n\nSPS_D = { # Table 3.1: special sample rates\n 157: 0.1,\n 161: 0.125,\n 162: 0.2,\n 164: 0.25,\n 167: 0.5,\n 171: 400,\n 174: 500,\n 176: 1000,\n 179: 2000,\n 181: 4000}\nTIME_OFFSETS_D = { # Table 3.1: Time fractional offset denominator\n 171: 8.,\n 174: 2.,\n 176: 4.,\n 179: 8.,\n 181: 16.}\nCOMPRESSION_D = { # Table 3.2: format field to data type\n 1: '>i4',\n 2: '>i2',\n 4: '>i1'}\n\n\ndef is_gcf(f):\n \"\"\"\n Test if file is GCF by reading at least 1 data block\n \"\"\"\n header, data = read_data_block(f)\n\n\ndef decode36(data):\n \"\"\"\n Converts an integer into a base36 string.\n \"\"\"\n # http://geophysics.eas.gatech.edu/GTEQ/Scream4.4/Decoding_Base_36_numbers_C.htm\n s = ''\n while data:\n imed = data % 36\n if imed > 9:\n c = chr(imed - 10 + ord('A'))\n else:\n c = chr(imed + ord('0'))\n s = c + s\n data = data // 36\n return s\n\n\ndef decode_date_time(data):\n \"\"\"\n Decode date and time field.\n\n The date code is a 32 bit value specifying the start time of the block.\n Bits 0-16 contain the number of seconds since midnight,\n and bits 17-31 the number of days since 17th November 1989.\n \"\"\"\n # prevent numpy array\n days = int(data >> 17)\n secs = int(data & 0x1FFFF)\n starttime = UTCDateTime('1989-11-17') + days * 86400 + secs\n return starttime\n\n\ndef read_data_block(f, headonly=False, channel_prefix=\"HH\", **kwargs):\n \"\"\"\n Read one data block from GCF file.\n\n more details can be found here:\n http://geophysics.eas.gatech.edu/GTEQ/Scream4.4/GCF_Specification.htm\n f - file object to read from\n if skipData is True, Only header is returned.\n if not a data block (SPS=0) - returns None.\n \"\"\"\n # get ID\n sysid = f.read(4)\n if not sysid:\n raise EOFError # got to EOF\n sysid = np.frombuffer(sysid, count=1, dtype='>u4')\n 
if sysid >> 31 & 0b1 > 0:\n sysid = (sysid << 6) >> 6\n sysid = decode36(sysid)\n # get Stream ID\n stid = np.frombuffer(f.read(4), count=1, dtype='>u4')\n stid = decode36(stid)\n # get Date & Time\n data = np.frombuffer(f.read(4), count=1, dtype='>u4')\n starttime = decode_date_time(data)\n # get data format\n # get reserved, SPS, data type compression,\n # number of 32bit records (num_records)\n reserved, sps, compress, num_records = np.frombuffer(f.read(4), count=4,\n dtype='>u1')\n compression = compress & 0b00001111 # get compression code\n t_offset = compress >> 4 # get time offset\n if t_offset > 0:\n starttime = starttime + t_offset / TIME_OFFSETS_D[sps]\n if sps in SPS_D:\n sps = SPS_D[sps] # get special SPS value if needed\n if not sps:\n f.seek(num_records * 4, 1) # skip if not a data block\n if 1008 - num_records * 4 > 0:\n # keep skipping to get 1008 record\n f.seek(1008 - num_records * 4, 1)\n return None\n npts = num_records * compression # number of samples\n header = {}\n header['starttime'] = starttime\n header['station'] = stid[:4]\n header['channel'] = (channel_prefix[:2] + stid[4]).upper()\n header['sampling_rate'] = float(sps)\n header['npts'] = npts\n if headonly:\n f.seek(4 * (num_records + 2), 1) # skip data part (inc. FIC and RIC)\n # skip to end of block if only partly filled with data\n if 1000 - num_records * 4 > 0:\n f.seek(1000 - num_records * 4, 1)\n return header\n else:\n # get FIC\n fic = np.frombuffer(f.read(4), count=1, dtype='>i4')\n # get incremental data\n data = np.frombuffer(f.read(4 * num_records), count=npts,\n dtype=COMPRESSION_D[compression])\n # construct time series\n data = (fic + np.cumsum(data)).astype('i4')\n # get RIC\n ric = np.frombuffer(f.read(4), count=1, dtype='>i4')\n # skip to end of block if only partly filled with data\n if 1000 - num_records * 4 > 0:\n f.seek(1000 - num_records * 4, 1)\n # verify last data sample matches RIC\n if not data[-1] == ric:\n raise ValueError(\"Last sample mismatch with RIC\")\n return header, data\n\n\ndef read_header(f, **kwargs):\n \"\"\"\n Reads header only from GCF file.\n \"\"\"\n return read_data_block(f, headonly=True, **kwargs)\n\n\ndef read(f, **kwargs):\n \"\"\"\n Reads header and data from GCF file.\n \"\"\"\n return read_data_block(f, headonly=False, **kwargs)\n", "path": "obspy/io/gcf/libgcf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# reads Guralp Compressed Format (GCF) Files\n# By Ran Novitsky Nof @ BSL, 2016\n# [email protected]\n# Based on Guralp's GCF reference (GCF-RFC-GCFR, Issue C, 2011-01-05)\n# more details available from: http://www.guralp.com/apps/ok?doc=GCF_Intro\n# last access: June, 2016\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\nfrom future.builtins import * # NOQA\n\nimport numpy as np\n\nfrom obspy import UTCDateTime\n\nSPS_D = { # Table 3.1: special sample rates\n 157: 0.1,\n 161: 0.125,\n 162: 0.2,\n 164: 0.25,\n 167: 0.5,\n 171: 400,\n 174: 500,\n 176: 1000,\n 179: 2000,\n 181: 4000}\nTIME_OFFSETS_D = { # Table 3.1: Time fractional offset denominator\n 171: 8.,\n 174: 2.,\n 176: 4.,\n 179: 8.,\n 181: 16.}\nCOMPRESSION_D = { # Table 3.2: format field to data type\n 1: '>i4',\n 2: '>i2',\n 4: '>i1'}\n\n\ndef is_gcf(f):\n \"\"\"\n Test if file is GCF by reading at least 1 data block\n \"\"\"\n header, data = read_data_block(f)\n\n\ndef decode36(data):\n \"\"\"\n Converts an integer into a base36 string.\n \"\"\"\n # http://geophysics.eas.gatech.edu/GTEQ/Scream4.4/Decoding_Base_36_numbers_C.htm\n s = ''\n 
while data:\n imed = data % 36\n if imed > 9:\n c = chr(imed - 10 + ord('A'))\n else:\n c = chr(imed + ord('0'))\n s = c + s\n data = data // 36\n return s\n\n\ndef decode_date_time(data):\n \"\"\"\n Decode date and time field.\n\n The date code is a 32 bit value specifying the start time of the block.\n Bits 0-16 contain the number of seconds since midnight,\n and bits 17-31 the number of days since 17th November 1989.\n \"\"\"\n # prevent numpy array\n days = int(data >> 17)\n secs = int(data & 0x1FFFF)\n starttime = UTCDateTime('1989-11-17') + days * 86400 + secs\n return starttime\n\n\ndef read_data_block(f, headonly=False, channel_prefix=\"HH\", **kwargs):\n \"\"\"\n Read one data block from GCF file.\n\n more details can be found here:\n http://geophysics.eas.gatech.edu/GTEQ/Scream4.4/GCF_Specification.htm\n f - file object to read from\n if skipData is True, Only header is returned.\n if not a data block (SPS=0) - returns None.\n \"\"\"\n # get ID\n sysid = f.read(4)\n if not sysid:\n raise EOFError # got to EOF\n sysid = np.frombuffer(sysid, count=1, dtype='>u4')\n if sysid >> 31 & 0b1 > 0:\n sysid = (sysid << 6) >> 6\n sysid = decode36(sysid)\n # get Stream ID\n stid = np.frombuffer(f.read(4), count=1, dtype='>u4')\n stid = decode36(stid)\n # get Date & Time\n data = np.frombuffer(f.read(4), count=1, dtype='>u4')\n starttime = decode_date_time(data)\n # get data format\n # get reserved, SPS, data type compression,\n # number of 32bit records (num_records)\n reserved, sps, compress, num_records = np.frombuffer(f.read(4), count=4,\n dtype='>u1')\n compression = compress & 0b00000111 # get compression code\n t_offset = compress >> 4 # get time offset\n if t_offset > 0:\n starttime = starttime + t_offset / TIME_OFFSETS_D[sps]\n if sps in SPS_D:\n sps = SPS_D[sps] # get special SPS value if needed\n if not sps:\n f.seek(num_records * 4, 1) # skip if not a data block\n if 1008 - num_records * 4 > 0:\n # keep skipping to get 1008 record\n f.seek(1008 - num_records * 4, 1)\n return None\n npts = num_records * compression # number of samples\n header = {}\n header['starttime'] = starttime\n header['station'] = stid[:4]\n header['channel'] = (channel_prefix[:2] + stid[4]).upper()\n header['sampling_rate'] = float(sps)\n header['npts'] = npts\n if headonly:\n f.seek(4 * (num_records + 2), 1) # skip data part (inc. FIC and RIC)\n # skip to end of block if only partly filled with data\n if 1000 - num_records * 4 > 0:\n f.seek(1000 - num_records * 4, 1)\n return header\n else:\n # get FIC\n fic = np.frombuffer(f.read(4), count=1, dtype='>i4')\n # get incremental data\n data = np.frombuffer(f.read(4 * num_records), count=npts,\n dtype=COMPRESSION_D[compression])\n # construct time series\n data = (fic + np.cumsum(data)).astype('i4')\n # get RIC\n ric = np.frombuffer(f.read(4), count=1, dtype='>i4')\n # skip to end of block if only partly filled with data\n if 1000 - num_records * 4 > 0:\n f.seek(1000 - num_records * 4, 1)\n # verify last data sample matches RIC\n if not data[-1] == ric:\n raise ValueError(\"Last sample mismatch with RIC\")\n return header, data\n\n\ndef read_header(f, **kwargs):\n \"\"\"\n Reads header only from GCF file.\n \"\"\"\n return read_data_block(f, headonly=True, **kwargs)\n\n\ndef read(f, **kwargs):\n \"\"\"\n Reads header and data from GCF file.\n \"\"\"\n return read_data_block(f, headonly=False, **kwargs)\n", "path": "obspy/io/gcf/libgcf.py"}]}
| 2,503 | 192 |
gh_patches_debug_2319
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmdetection-6104
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Shape of ONNX export dynamic batch
Thanks for your error report and we appreciate it a lot.
https://github.com/open-mmlab/mmdetection/blob/master/tools/deployment/pytorch2onnx.py#L71
This line should be
```
if dynamic_export:
dynamic_axes = {
input_name: {
0: 'batch',
2: 'height',
3: 'width'
},
'dets': {
0: 'batch',
1: 'num_dets',
},
'labels': {
0: 'batch',
1: 'num_dets',
},
}
```
Based on the way the line below is implemented, the dynamic axes should be `bs x c x h x w`.
https://github.com/open-mmlab/mmdetection/blob/master/tools/deployment/pytorch2onnx.py#L139
I can create a quick PR for this if needed.
--- END ISSUE ---
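For reference, a toy export (a bare `Conv2d`, not the repository's detector) showing how `torch.onnx.export`'s `dynamic_axes` maps NCHW axis indices to names; the `{0, 2, 3}` indices are the ones the issue argues `pytorch2onnx.py` should use for the input:

```python
# Minimal sketch under the assumption of a plain NCHW input; the file name and
# model are placeholders chosen only for illustration.
import torch

model = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
dummy = torch.randn(1, 3, 224, 224)        # N x C x H x W

torch.onnx.export(
    model,
    dummy,
    'conv_dynamic.onnx',
    input_names=['input'],
    output_names=['output'],
    dynamic_axes={
        'input': {0: 'batch', 2: 'height', 3: 'width'},
        'output': {0: 'batch', 2: 'height', 3: 'width'},
    },
    opset_version=11)
```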
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/deployment/pytorch2onnx.py`
Content:
```
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import argparse
3 import os.path as osp
4 import warnings
5 from functools import partial
6
7 import numpy as np
8 import onnx
9 import torch
10 from mmcv import Config, DictAction
11
12 from mmdet.core.export import build_model_from_cfg, preprocess_example_input
13 from mmdet.core.export.model_wrappers import ONNXRuntimeDetector
14
15
16 def pytorch2onnx(model,
17 input_img,
18 input_shape,
19 normalize_cfg,
20 opset_version=11,
21 show=False,
22 output_file='tmp.onnx',
23 verify=False,
24 test_img=None,
25 do_simplify=False,
26 dynamic_export=None,
27 skip_postprocess=False):
28
29 input_config = {
30 'input_shape': input_shape,
31 'input_path': input_img,
32 'normalize_cfg': normalize_cfg
33 }
34 # prepare input
35 one_img, one_meta = preprocess_example_input(input_config)
36 img_list, img_meta_list = [one_img], [[one_meta]]
37
38 if skip_postprocess:
39 warnings.warn('Not all models support export onnx without post '
40 'process, especially two stage detectors!')
41 model.forward = model.forward_dummy
42 torch.onnx.export(
43 model,
44 one_img,
45 output_file,
46 input_names=['input'],
47 export_params=True,
48 keep_initializers_as_inputs=True,
49 do_constant_folding=True,
50 verbose=show,
51 opset_version=opset_version)
52
53 print(f'Successfully exported ONNX model without '
54 f'post process: {output_file}')
55 return
56
57 # replace original forward function
58 origin_forward = model.forward
59 model.forward = partial(
60 model.forward,
61 img_metas=img_meta_list,
62 return_loss=False,
63 rescale=False)
64
65 output_names = ['dets', 'labels']
66 if model.with_mask:
67 output_names.append('masks')
68 input_name = 'input'
69 dynamic_axes = None
70 if dynamic_export:
71 dynamic_axes = {
72 input_name: {
73 0: 'batch',
74 2: 'width',
75 3: 'height'
76 },
77 'dets': {
78 0: 'batch',
79 1: 'num_dets',
80 },
81 'labels': {
82 0: 'batch',
83 1: 'num_dets',
84 },
85 }
86 if model.with_mask:
87 dynamic_axes['masks'] = {0: 'batch', 1: 'num_dets'}
88
89 torch.onnx.export(
90 model,
91 img_list,
92 output_file,
93 input_names=[input_name],
94 output_names=output_names,
95 export_params=True,
96 keep_initializers_as_inputs=True,
97 do_constant_folding=True,
98 verbose=show,
99 opset_version=opset_version,
100 dynamic_axes=dynamic_axes)
101
102 model.forward = origin_forward
103
104 # get the custom op path
105 ort_custom_op_path = ''
106 try:
107 from mmcv.ops import get_onnxruntime_op_path
108 ort_custom_op_path = get_onnxruntime_op_path()
109 except (ImportError, ModuleNotFoundError):
110 warnings.warn('If input model has custom op from mmcv, \
111 you may have to build mmcv with ONNXRuntime from source.')
112
113 if do_simplify:
114 import onnxsim
115
116 from mmdet import digit_version
117
118 min_required_version = '0.3.0'
119 assert digit_version(onnxsim.__version__) >= digit_version(
120 min_required_version
121 ), f'Requires to install onnx-simplify>={min_required_version}'
122
123 input_dic = {'input': img_list[0].detach().cpu().numpy()}
124 onnxsim.simplify(
125 output_file, input_data=input_dic, custom_lib=ort_custom_op_path)
126 print(f'Successfully exported ONNX model: {output_file}')
127
128 if verify:
129 # check by onnx
130 onnx_model = onnx.load(output_file)
131 onnx.checker.check_model(onnx_model)
132
133 # wrap onnx model
134 onnx_model = ONNXRuntimeDetector(output_file, model.CLASSES, 0)
135 if dynamic_export:
136 # scale up to test dynamic shape
137 h, w = [int((_ * 1.5) // 32 * 32) for _ in input_shape[2:]]
138 h, w = min(1344, h), min(1344, w)
139 input_config['input_shape'] = (1, 3, h, w)
140
141 if test_img is None:
142 input_config['input_path'] = input_img
143
144 # prepare input once again
145 one_img, one_meta = preprocess_example_input(input_config)
146 img_list, img_meta_list = [one_img], [[one_meta]]
147
148 # get pytorch output
149 with torch.no_grad():
150 pytorch_results = model(
151 img_list,
152 img_metas=img_meta_list,
153 return_loss=False,
154 rescale=True)[0]
155
156 img_list = [_.cuda().contiguous() for _ in img_list]
157 if dynamic_export:
158 img_list = img_list + [_.flip(-1).contiguous() for _ in img_list]
159 img_meta_list = img_meta_list * 2
160 # get onnx output
161 onnx_results = onnx_model(
162 img_list, img_metas=img_meta_list, return_loss=False)[0]
163 # visualize predictions
164 score_thr = 0.3
165 if show:
166 out_file_ort, out_file_pt = None, None
167 else:
168 out_file_ort, out_file_pt = 'show-ort.png', 'show-pt.png'
169
170 show_img = one_meta['show_img']
171 model.show_result(
172 show_img,
173 pytorch_results,
174 score_thr=score_thr,
175 show=True,
176 win_name='PyTorch',
177 out_file=out_file_pt)
178 onnx_model.show_result(
179 show_img,
180 onnx_results,
181 score_thr=score_thr,
182 show=True,
183 win_name='ONNXRuntime',
184 out_file=out_file_ort)
185
186 # compare a part of result
187 if model.with_mask:
188 compare_pairs = list(zip(onnx_results, pytorch_results))
189 else:
190 compare_pairs = [(onnx_results, pytorch_results)]
191 err_msg = 'The numerical values are different between Pytorch' + \
192 ' and ONNX, but it does not necessarily mean the' + \
193 ' exported ONNX model is problematic.'
194 # check the numerical value
195 for onnx_res, pytorch_res in compare_pairs:
196 for o_res, p_res in zip(onnx_res, pytorch_res):
197 np.testing.assert_allclose(
198 o_res, p_res, rtol=1e-03, atol=1e-05, err_msg=err_msg)
199 print('The numerical values are the same between Pytorch and ONNX')
200
201
202 def parse_normalize_cfg(test_pipeline):
203 transforms = None
204 for pipeline in test_pipeline:
205 if 'transforms' in pipeline:
206 transforms = pipeline['transforms']
207 break
208 assert transforms is not None, 'Failed to find `transforms`'
209 norm_config_li = [_ for _ in transforms if _['type'] == 'Normalize']
210 assert len(norm_config_li) == 1, '`norm_config` should only have one'
211 norm_config = norm_config_li[0]
212 return norm_config
213
214
215 def parse_args():
216 parser = argparse.ArgumentParser(
217 description='Convert MMDetection models to ONNX')
218 parser.add_argument('config', help='test config file path')
219 parser.add_argument('checkpoint', help='checkpoint file')
220 parser.add_argument('--input-img', type=str, help='Images for input')
221 parser.add_argument(
222 '--show',
223 action='store_true',
224 help='Show onnx graph and detection outputs')
225 parser.add_argument('--output-file', type=str, default='tmp.onnx')
226 parser.add_argument('--opset-version', type=int, default=11)
227 parser.add_argument(
228 '--test-img', type=str, default=None, help='Images for test')
229 parser.add_argument(
230 '--dataset',
231 type=str,
232 default='coco',
233 help='Dataset name. This argument is deprecated and will be removed \
234 in future releases.')
235 parser.add_argument(
236 '--verify',
237 action='store_true',
238 help='verify the onnx model output against pytorch output')
239 parser.add_argument(
240 '--simplify',
241 action='store_true',
242 help='Whether to simplify onnx model.')
243 parser.add_argument(
244 '--shape',
245 type=int,
246 nargs='+',
247 default=[800, 1216],
248 help='input image size')
249 parser.add_argument(
250 '--mean',
251 type=float,
252 nargs='+',
253 default=[123.675, 116.28, 103.53],
254 help='mean value used for preprocess input data.This argument \
255 is deprecated and will be removed in future releases.')
256 parser.add_argument(
257 '--std',
258 type=float,
259 nargs='+',
260 default=[58.395, 57.12, 57.375],
261 help='variance value used for preprocess input data. '
262 'This argument is deprecated and will be removed in future releases.')
263 parser.add_argument(
264 '--cfg-options',
265 nargs='+',
266 action=DictAction,
267 help='Override some settings in the used config, the key-value pair '
268 'in xxx=yyy format will be merged into config file. If the value to '
269 'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
270 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
271 'Note that the quotation marks are necessary and that no white space '
272 'is allowed.')
273 parser.add_argument(
274 '--dynamic-export',
275 action='store_true',
276 help='Whether to export onnx with dynamic axis.')
277 parser.add_argument(
278 '--skip-postprocess',
279 action='store_true',
280 help='Whether to export model without post process. Experimental '
281 'option. We do not guarantee the correctness of the exported '
282 'model.')
283 args = parser.parse_args()
284 return args
285
286
287 if __name__ == '__main__':
288 args = parse_args()
289 warnings.warn('Arguments like `--mean`, `--std`, `--dataset` would be \
290 parsed directly from config file and are deprecated and \
291 will be removed in future releases.')
292
293 assert args.opset_version == 11, 'MMDet only support opset 11 now'
294
295 try:
296 from mmcv.onnx.symbolic import register_extra_symbolics
297 except ModuleNotFoundError:
298 raise NotImplementedError('please update mmcv to version>=v1.0.4')
299 register_extra_symbolics(args.opset_version)
300
301 cfg = Config.fromfile(args.config)
302 if args.cfg_options is not None:
303 cfg.merge_from_dict(args.cfg_options)
304
305 if args.shape is None:
306 img_scale = cfg.test_pipeline[1]['img_scale']
307 input_shape = (1, 3, img_scale[1], img_scale[0])
308 elif len(args.shape) == 1:
309 input_shape = (1, 3, args.shape[0], args.shape[0])
310 elif len(args.shape) == 2:
311 input_shape = (1, 3) + tuple(args.shape)
312 else:
313 raise ValueError('invalid input shape')
314
315 # build the model and load checkpoint
316 model = build_model_from_cfg(args.config, args.checkpoint,
317 args.cfg_options)
318
319 if not args.input_img:
320 args.input_img = osp.join(osp.dirname(__file__), '../../demo/demo.jpg')
321
322 normalize_cfg = parse_normalize_cfg(cfg.test_pipeline)
323
324 # convert model to onnx file
325 pytorch2onnx(
326 model,
327 args.input_img,
328 input_shape,
329 normalize_cfg,
330 opset_version=args.opset_version,
331 show=args.show,
332 output_file=args.output_file,
333 verify=args.verify,
334 test_img=args.test_img,
335 do_simplify=args.simplify,
336 dynamic_export=args.dynamic_export,
337 skip_postprocess=args.skip_postprocess)
338
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tools/deployment/pytorch2onnx.py b/tools/deployment/pytorch2onnx.py
--- a/tools/deployment/pytorch2onnx.py
+++ b/tools/deployment/pytorch2onnx.py
@@ -71,8 +71,8 @@
dynamic_axes = {
input_name: {
0: 'batch',
- 2: 'width',
- 3: 'height'
+ 2: 'height',
+ 3: 'width'
},
'dets': {
0: 'batch',
|
{"golden_diff": "diff --git a/tools/deployment/pytorch2onnx.py b/tools/deployment/pytorch2onnx.py\n--- a/tools/deployment/pytorch2onnx.py\n+++ b/tools/deployment/pytorch2onnx.py\n@@ -71,8 +71,8 @@\n dynamic_axes = {\n input_name: {\n 0: 'batch',\n- 2: 'width',\n- 3: 'height'\n+ 2: 'height',\n+ 3: 'width'\n },\n 'dets': {\n 0: 'batch',\n", "issue": "Shape of ONNX export dynamic batch\nThanks for your error report and we appreciate it a lot.\r\n\r\nhttps://github.com/open-mmlab/mmdetection/blob/master/tools/deployment/pytorch2onnx.py#L71\r\n\r\nThis line should be \r\n```\r\n if dynamic_export:\r\n dynamic_axes = {\r\n input_name: {\r\n 0: 'batch',\r\n 2: 'height',\r\n 3: 'width'\r\n },\r\n 'dets': {\r\n 0: 'batch',\r\n 1: 'num_dets',\r\n },\r\n 'labels': {\r\n 0: 'batch',\r\n 1: 'num_dets',\r\n },\r\n }\r\n```\r\n\r\nBased on the way the line below is implemented the dynamic axis should be `bs x c x h x w`. \r\nhttps://github.com/open-mmlab/mmdetection/blob/master/tools/deployment/pytorch2onnx.py#L139\r\n\r\n\r\nI can create a quick PR for this if needed. \r\n\r\n\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport argparse\nimport os.path as osp\nimport warnings\nfrom functools import partial\n\nimport numpy as np\nimport onnx\nimport torch\nfrom mmcv import Config, DictAction\n\nfrom mmdet.core.export import build_model_from_cfg, preprocess_example_input\nfrom mmdet.core.export.model_wrappers import ONNXRuntimeDetector\n\n\ndef pytorch2onnx(model,\n input_img,\n input_shape,\n normalize_cfg,\n opset_version=11,\n show=False,\n output_file='tmp.onnx',\n verify=False,\n test_img=None,\n do_simplify=False,\n dynamic_export=None,\n skip_postprocess=False):\n\n input_config = {\n 'input_shape': input_shape,\n 'input_path': input_img,\n 'normalize_cfg': normalize_cfg\n }\n # prepare input\n one_img, one_meta = preprocess_example_input(input_config)\n img_list, img_meta_list = [one_img], [[one_meta]]\n\n if skip_postprocess:\n warnings.warn('Not all models support export onnx without post '\n 'process, especially two stage detectors!')\n model.forward = model.forward_dummy\n torch.onnx.export(\n model,\n one_img,\n output_file,\n input_names=['input'],\n export_params=True,\n keep_initializers_as_inputs=True,\n do_constant_folding=True,\n verbose=show,\n opset_version=opset_version)\n\n print(f'Successfully exported ONNX model without '\n f'post process: {output_file}')\n return\n\n # replace original forward function\n origin_forward = model.forward\n model.forward = partial(\n model.forward,\n img_metas=img_meta_list,\n return_loss=False,\n rescale=False)\n\n output_names = ['dets', 'labels']\n if model.with_mask:\n output_names.append('masks')\n input_name = 'input'\n dynamic_axes = None\n if dynamic_export:\n dynamic_axes = {\n input_name: {\n 0: 'batch',\n 2: 'width',\n 3: 'height'\n },\n 'dets': {\n 0: 'batch',\n 1: 'num_dets',\n },\n 'labels': {\n 0: 'batch',\n 1: 'num_dets',\n },\n }\n if model.with_mask:\n dynamic_axes['masks'] = {0: 'batch', 1: 'num_dets'}\n\n torch.onnx.export(\n model,\n img_list,\n output_file,\n input_names=[input_name],\n output_names=output_names,\n export_params=True,\n keep_initializers_as_inputs=True,\n do_constant_folding=True,\n verbose=show,\n opset_version=opset_version,\n dynamic_axes=dynamic_axes)\n\n model.forward = origin_forward\n\n # get the custom op path\n ort_custom_op_path = ''\n try:\n from mmcv.ops import get_onnxruntime_op_path\n ort_custom_op_path = get_onnxruntime_op_path()\n except (ImportError, 
ModuleNotFoundError):\n warnings.warn('If input model has custom op from mmcv, \\\n you may have to build mmcv with ONNXRuntime from source.')\n\n if do_simplify:\n import onnxsim\n\n from mmdet import digit_version\n\n min_required_version = '0.3.0'\n assert digit_version(onnxsim.__version__) >= digit_version(\n min_required_version\n ), f'Requires to install onnx-simplify>={min_required_version}'\n\n input_dic = {'input': img_list[0].detach().cpu().numpy()}\n onnxsim.simplify(\n output_file, input_data=input_dic, custom_lib=ort_custom_op_path)\n print(f'Successfully exported ONNX model: {output_file}')\n\n if verify:\n # check by onnx\n onnx_model = onnx.load(output_file)\n onnx.checker.check_model(onnx_model)\n\n # wrap onnx model\n onnx_model = ONNXRuntimeDetector(output_file, model.CLASSES, 0)\n if dynamic_export:\n # scale up to test dynamic shape\n h, w = [int((_ * 1.5) // 32 * 32) for _ in input_shape[2:]]\n h, w = min(1344, h), min(1344, w)\n input_config['input_shape'] = (1, 3, h, w)\n\n if test_img is None:\n input_config['input_path'] = input_img\n\n # prepare input once again\n one_img, one_meta = preprocess_example_input(input_config)\n img_list, img_meta_list = [one_img], [[one_meta]]\n\n # get pytorch output\n with torch.no_grad():\n pytorch_results = model(\n img_list,\n img_metas=img_meta_list,\n return_loss=False,\n rescale=True)[0]\n\n img_list = [_.cuda().contiguous() for _ in img_list]\n if dynamic_export:\n img_list = img_list + [_.flip(-1).contiguous() for _ in img_list]\n img_meta_list = img_meta_list * 2\n # get onnx output\n onnx_results = onnx_model(\n img_list, img_metas=img_meta_list, return_loss=False)[0]\n # visualize predictions\n score_thr = 0.3\n if show:\n out_file_ort, out_file_pt = None, None\n else:\n out_file_ort, out_file_pt = 'show-ort.png', 'show-pt.png'\n\n show_img = one_meta['show_img']\n model.show_result(\n show_img,\n pytorch_results,\n score_thr=score_thr,\n show=True,\n win_name='PyTorch',\n out_file=out_file_pt)\n onnx_model.show_result(\n show_img,\n onnx_results,\n score_thr=score_thr,\n show=True,\n win_name='ONNXRuntime',\n out_file=out_file_ort)\n\n # compare a part of result\n if model.with_mask:\n compare_pairs = list(zip(onnx_results, pytorch_results))\n else:\n compare_pairs = [(onnx_results, pytorch_results)]\n err_msg = 'The numerical values are different between Pytorch' + \\\n ' and ONNX, but it does not necessarily mean the' + \\\n ' exported ONNX model is problematic.'\n # check the numerical value\n for onnx_res, pytorch_res in compare_pairs:\n for o_res, p_res in zip(onnx_res, pytorch_res):\n np.testing.assert_allclose(\n o_res, p_res, rtol=1e-03, atol=1e-05, err_msg=err_msg)\n print('The numerical values are the same between Pytorch and ONNX')\n\n\ndef parse_normalize_cfg(test_pipeline):\n transforms = None\n for pipeline in test_pipeline:\n if 'transforms' in pipeline:\n transforms = pipeline['transforms']\n break\n assert transforms is not None, 'Failed to find `transforms`'\n norm_config_li = [_ for _ in transforms if _['type'] == 'Normalize']\n assert len(norm_config_li) == 1, '`norm_config` should only have one'\n norm_config = norm_config_li[0]\n return norm_config\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(\n description='Convert MMDetection models to ONNX')\n parser.add_argument('config', help='test config file path')\n parser.add_argument('checkpoint', help='checkpoint file')\n parser.add_argument('--input-img', type=str, help='Images for input')\n parser.add_argument(\n '--show',\n 
action='store_true',\n help='Show onnx graph and detection outputs')\n parser.add_argument('--output-file', type=str, default='tmp.onnx')\n parser.add_argument('--opset-version', type=int, default=11)\n parser.add_argument(\n '--test-img', type=str, default=None, help='Images for test')\n parser.add_argument(\n '--dataset',\n type=str,\n default='coco',\n help='Dataset name. This argument is deprecated and will be removed \\\n in future releases.')\n parser.add_argument(\n '--verify',\n action='store_true',\n help='verify the onnx model output against pytorch output')\n parser.add_argument(\n '--simplify',\n action='store_true',\n help='Whether to simplify onnx model.')\n parser.add_argument(\n '--shape',\n type=int,\n nargs='+',\n default=[800, 1216],\n help='input image size')\n parser.add_argument(\n '--mean',\n type=float,\n nargs='+',\n default=[123.675, 116.28, 103.53],\n help='mean value used for preprocess input data.This argument \\\n is deprecated and will be removed in future releases.')\n parser.add_argument(\n '--std',\n type=float,\n nargs='+',\n default=[58.395, 57.12, 57.375],\n help='variance value used for preprocess input data. '\n 'This argument is deprecated and will be removed in future releases.')\n parser.add_argument(\n '--cfg-options',\n nargs='+',\n action=DictAction,\n help='Override some settings in the used config, the key-value pair '\n 'in xxx=yyy format will be merged into config file. If the value to '\n 'be overwritten is a list, it should be like key=\"[a,b]\" or key=a,b '\n 'It also allows nested list/tuple values, e.g. key=\"[(a,b),(c,d)]\" '\n 'Note that the quotation marks are necessary and that no white space '\n 'is allowed.')\n parser.add_argument(\n '--dynamic-export',\n action='store_true',\n help='Whether to export onnx with dynamic axis.')\n parser.add_argument(\n '--skip-postprocess',\n action='store_true',\n help='Whether to export model without post process. Experimental '\n 'option. 
We do not guarantee the correctness of the exported '\n 'model.')\n args = parser.parse_args()\n return args\n\n\nif __name__ == '__main__':\n args = parse_args()\n warnings.warn('Arguments like `--mean`, `--std`, `--dataset` would be \\\n parsed directly from config file and are deprecated and \\\n will be removed in future releases.')\n\n assert args.opset_version == 11, 'MMDet only support opset 11 now'\n\n try:\n from mmcv.onnx.symbolic import register_extra_symbolics\n except ModuleNotFoundError:\n raise NotImplementedError('please update mmcv to version>=v1.0.4')\n register_extra_symbolics(args.opset_version)\n\n cfg = Config.fromfile(args.config)\n if args.cfg_options is not None:\n cfg.merge_from_dict(args.cfg_options)\n\n if args.shape is None:\n img_scale = cfg.test_pipeline[1]['img_scale']\n input_shape = (1, 3, img_scale[1], img_scale[0])\n elif len(args.shape) == 1:\n input_shape = (1, 3, args.shape[0], args.shape[0])\n elif len(args.shape) == 2:\n input_shape = (1, 3) + tuple(args.shape)\n else:\n raise ValueError('invalid input shape')\n\n # build the model and load checkpoint\n model = build_model_from_cfg(args.config, args.checkpoint,\n args.cfg_options)\n\n if not args.input_img:\n args.input_img = osp.join(osp.dirname(__file__), '../../demo/demo.jpg')\n\n normalize_cfg = parse_normalize_cfg(cfg.test_pipeline)\n\n # convert model to onnx file\n pytorch2onnx(\n model,\n args.input_img,\n input_shape,\n normalize_cfg,\n opset_version=args.opset_version,\n show=args.show,\n output_file=args.output_file,\n verify=args.verify,\n test_img=args.test_img,\n do_simplify=args.simplify,\n dynamic_export=args.dynamic_export,\n skip_postprocess=args.skip_postprocess)\n", "path": "tools/deployment/pytorch2onnx.py"}], "after_files": [{"content": "# Copyright (c) OpenMMLab. 
All rights reserved.\nimport argparse\nimport os.path as osp\nimport warnings\nfrom functools import partial\n\nimport numpy as np\nimport onnx\nimport torch\nfrom mmcv import Config, DictAction\n\nfrom mmdet.core.export import build_model_from_cfg, preprocess_example_input\nfrom mmdet.core.export.model_wrappers import ONNXRuntimeDetector\n\n\ndef pytorch2onnx(model,\n input_img,\n input_shape,\n normalize_cfg,\n opset_version=11,\n show=False,\n output_file='tmp.onnx',\n verify=False,\n test_img=None,\n do_simplify=False,\n dynamic_export=None,\n skip_postprocess=False):\n\n input_config = {\n 'input_shape': input_shape,\n 'input_path': input_img,\n 'normalize_cfg': normalize_cfg\n }\n # prepare input\n one_img, one_meta = preprocess_example_input(input_config)\n img_list, img_meta_list = [one_img], [[one_meta]]\n\n if skip_postprocess:\n warnings.warn('Not all models support export onnx without post '\n 'process, especially two stage detectors!')\n model.forward = model.forward_dummy\n torch.onnx.export(\n model,\n one_img,\n output_file,\n input_names=['input'],\n export_params=True,\n keep_initializers_as_inputs=True,\n do_constant_folding=True,\n verbose=show,\n opset_version=opset_version)\n\n print(f'Successfully exported ONNX model without '\n f'post process: {output_file}')\n return\n\n # replace original forward function\n origin_forward = model.forward\n model.forward = partial(\n model.forward,\n img_metas=img_meta_list,\n return_loss=False,\n rescale=False)\n\n output_names = ['dets', 'labels']\n if model.with_mask:\n output_names.append('masks')\n input_name = 'input'\n dynamic_axes = None\n if dynamic_export:\n dynamic_axes = {\n input_name: {\n 0: 'batch',\n 2: 'height',\n 3: 'width'\n },\n 'dets': {\n 0: 'batch',\n 1: 'num_dets',\n },\n 'labels': {\n 0: 'batch',\n 1: 'num_dets',\n },\n }\n if model.with_mask:\n dynamic_axes['masks'] = {0: 'batch', 1: 'num_dets'}\n\n torch.onnx.export(\n model,\n img_list,\n output_file,\n input_names=[input_name],\n output_names=output_names,\n export_params=True,\n keep_initializers_as_inputs=True,\n do_constant_folding=True,\n verbose=show,\n opset_version=opset_version,\n dynamic_axes=dynamic_axes)\n\n model.forward = origin_forward\n\n # get the custom op path\n ort_custom_op_path = ''\n try:\n from mmcv.ops import get_onnxruntime_op_path\n ort_custom_op_path = get_onnxruntime_op_path()\n except (ImportError, ModuleNotFoundError):\n warnings.warn('If input model has custom op from mmcv, \\\n you may have to build mmcv with ONNXRuntime from source.')\n\n if do_simplify:\n import onnxsim\n\n from mmdet import digit_version\n\n min_required_version = '0.3.0'\n assert digit_version(onnxsim.__version__) >= digit_version(\n min_required_version\n ), f'Requires to install onnx-simplify>={min_required_version}'\n\n input_dic = {'input': img_list[0].detach().cpu().numpy()}\n onnxsim.simplify(\n output_file, input_data=input_dic, custom_lib=ort_custom_op_path)\n print(f'Successfully exported ONNX model: {output_file}')\n\n if verify:\n # check by onnx\n onnx_model = onnx.load(output_file)\n onnx.checker.check_model(onnx_model)\n\n # wrap onnx model\n onnx_model = ONNXRuntimeDetector(output_file, model.CLASSES, 0)\n if dynamic_export:\n # scale up to test dynamic shape\n h, w = [int((_ * 1.5) // 32 * 32) for _ in input_shape[2:]]\n h, w = min(1344, h), min(1344, w)\n input_config['input_shape'] = (1, 3, h, w)\n\n if test_img is None:\n input_config['input_path'] = input_img\n\n # prepare input once again\n one_img, one_meta = 
preprocess_example_input(input_config)\n img_list, img_meta_list = [one_img], [[one_meta]]\n\n # get pytorch output\n with torch.no_grad():\n pytorch_results = model(\n img_list,\n img_metas=img_meta_list,\n return_loss=False,\n rescale=True)[0]\n\n img_list = [_.cuda().contiguous() for _ in img_list]\n if dynamic_export:\n img_list = img_list + [_.flip(-1).contiguous() for _ in img_list]\n img_meta_list = img_meta_list * 2\n # get onnx output\n onnx_results = onnx_model(\n img_list, img_metas=img_meta_list, return_loss=False)[0]\n # visualize predictions\n score_thr = 0.3\n if show:\n out_file_ort, out_file_pt = None, None\n else:\n out_file_ort, out_file_pt = 'show-ort.png', 'show-pt.png'\n\n show_img = one_meta['show_img']\n model.show_result(\n show_img,\n pytorch_results,\n score_thr=score_thr,\n show=True,\n win_name='PyTorch',\n out_file=out_file_pt)\n onnx_model.show_result(\n show_img,\n onnx_results,\n score_thr=score_thr,\n show=True,\n win_name='ONNXRuntime',\n out_file=out_file_ort)\n\n # compare a part of result\n if model.with_mask:\n compare_pairs = list(zip(onnx_results, pytorch_results))\n else:\n compare_pairs = [(onnx_results, pytorch_results)]\n err_msg = 'The numerical values are different between Pytorch' + \\\n ' and ONNX, but it does not necessarily mean the' + \\\n ' exported ONNX model is problematic.'\n # check the numerical value\n for onnx_res, pytorch_res in compare_pairs:\n for o_res, p_res in zip(onnx_res, pytorch_res):\n np.testing.assert_allclose(\n o_res, p_res, rtol=1e-03, atol=1e-05, err_msg=err_msg)\n print('The numerical values are the same between Pytorch and ONNX')\n\n\ndef parse_normalize_cfg(test_pipeline):\n transforms = None\n for pipeline in test_pipeline:\n if 'transforms' in pipeline:\n transforms = pipeline['transforms']\n break\n assert transforms is not None, 'Failed to find `transforms`'\n norm_config_li = [_ for _ in transforms if _['type'] == 'Normalize']\n assert len(norm_config_li) == 1, '`norm_config` should only have one'\n norm_config = norm_config_li[0]\n return norm_config\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(\n description='Convert MMDetection models to ONNX')\n parser.add_argument('config', help='test config file path')\n parser.add_argument('checkpoint', help='checkpoint file')\n parser.add_argument('--input-img', type=str, help='Images for input')\n parser.add_argument(\n '--show',\n action='store_true',\n help='Show onnx graph and detection outputs')\n parser.add_argument('--output-file', type=str, default='tmp.onnx')\n parser.add_argument('--opset-version', type=int, default=11)\n parser.add_argument(\n '--test-img', type=str, default=None, help='Images for test')\n parser.add_argument(\n '--dataset',\n type=str,\n default='coco',\n help='Dataset name. 
This argument is deprecated and will be removed \\\n in future releases.')\n parser.add_argument(\n '--verify',\n action='store_true',\n help='verify the onnx model output against pytorch output')\n parser.add_argument(\n '--simplify',\n action='store_true',\n help='Whether to simplify onnx model.')\n parser.add_argument(\n '--shape',\n type=int,\n nargs='+',\n default=[800, 1216],\n help='input image size')\n parser.add_argument(\n '--mean',\n type=float,\n nargs='+',\n default=[123.675, 116.28, 103.53],\n help='mean value used for preprocess input data.This argument \\\n is deprecated and will be removed in future releases.')\n parser.add_argument(\n '--std',\n type=float,\n nargs='+',\n default=[58.395, 57.12, 57.375],\n help='variance value used for preprocess input data. '\n 'This argument is deprecated and will be removed in future releases.')\n parser.add_argument(\n '--cfg-options',\n nargs='+',\n action=DictAction,\n help='Override some settings in the used config, the key-value pair '\n 'in xxx=yyy format will be merged into config file. If the value to '\n 'be overwritten is a list, it should be like key=\"[a,b]\" or key=a,b '\n 'It also allows nested list/tuple values, e.g. key=\"[(a,b),(c,d)]\" '\n 'Note that the quotation marks are necessary and that no white space '\n 'is allowed.')\n parser.add_argument(\n '--dynamic-export',\n action='store_true',\n help='Whether to export onnx with dynamic axis.')\n parser.add_argument(\n '--skip-postprocess',\n action='store_true',\n help='Whether to export model without post process. Experimental '\n 'option. We do not guarantee the correctness of the exported '\n 'model.')\n args = parser.parse_args()\n return args\n\n\nif __name__ == '__main__':\n args = parse_args()\n warnings.warn('Arguments like `--mean`, `--std`, `--dataset` would be \\\n parsed directly from config file and are deprecated and \\\n will be removed in future releases.')\n\n assert args.opset_version == 11, 'MMDet only support opset 11 now'\n\n try:\n from mmcv.onnx.symbolic import register_extra_symbolics\n except ModuleNotFoundError:\n raise NotImplementedError('please update mmcv to version>=v1.0.4')\n register_extra_symbolics(args.opset_version)\n\n cfg = Config.fromfile(args.config)\n if args.cfg_options is not None:\n cfg.merge_from_dict(args.cfg_options)\n\n if args.shape is None:\n img_scale = cfg.test_pipeline[1]['img_scale']\n input_shape = (1, 3, img_scale[1], img_scale[0])\n elif len(args.shape) == 1:\n input_shape = (1, 3, args.shape[0], args.shape[0])\n elif len(args.shape) == 2:\n input_shape = (1, 3) + tuple(args.shape)\n else:\n raise ValueError('invalid input shape')\n\n # build the model and load checkpoint\n model = build_model_from_cfg(args.config, args.checkpoint,\n args.cfg_options)\n\n if not args.input_img:\n args.input_img = osp.join(osp.dirname(__file__), '../../demo/demo.jpg')\n\n normalize_cfg = parse_normalize_cfg(cfg.test_pipeline)\n\n # convert model to onnx file\n pytorch2onnx(\n model,\n args.input_img,\n input_shape,\n normalize_cfg,\n opset_version=args.opset_version,\n show=args.show,\n output_file=args.output_file,\n verify=args.verify,\n test_img=args.test_img,\n do_simplify=args.simplify,\n dynamic_export=args.dynamic_export,\n skip_postprocess=args.skip_postprocess)\n", "path": "tools/deployment/pytorch2onnx.py"}]}
| 4,031 | 124 |
gh_patches_debug_2086
|
rasdani/github-patches
|
git_diff
|
google__timesketch-90
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Importing of JSON timelines creates duplicate timelines with same name.
Steps to reproduce
1) command line:
echo '[
{
"datetime": "2012-04-12T17:24:38-08:00",
"timestamp_desc": "Test",
"timestamp": 1334251478000000,
"message": "Test message"
}
]' > test_dupe.json
tsctl json2ts --name test_dupe --file test_dupe.json
tsctl json2ts --name test_dupe --file test_dupe.json
2) Create new sketch
3) Notice duplicate "test_dupe" timelines on list to select from.
4) Add both
5) Explore, using "*" as filter.
6) notice duplicate results.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wsgi.py`
Content:
```
1 #!/usr/bin/env python
2 # Copyright 2015 Google Inc. All rights reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """This module is for creating the app for a WSGI server.
16
17 Example with Gunicorn:
18 $ gunicorn -b 127.0.0.1:4000 --log-file - wsgi:application
19
20 Example configuration for Apache with mod_wsgi (a2enmod mod_wsgi):
21 <VirtualHost *:443>
22 ServerAdmin root@localhost
23 SSLEngine On
24 SSLCertificateFile /etc/apache2/cert.crt
25 SSLCertificateKeyFile /etc/apache2/cert.key
26 WSGIScriptAlias / /path/to/this/file/wsgi.py
27 </VirtualHost>
28 """
29
30 # If you installed Timesketch in a virtualenv you need to activate it.
31 # This needs to be before any imports in order to import from the virtualenv.
32 #activate_virtualenv = '/path/to/your/virtualenv/bin/activate_this.py'
33 #execfile(activate_virtualenv, dict(__file__=activate_virtualenv))
34
35 from timesketch import create_app
36 from timesketch.models import db_session
37
38 application = create_app()
39
40 # Remove the session after every request or app shutdown.
41 @application.teardown_appcontext
42 def shutdown_session(exception=None):
43 db_session.remove()
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wsgi.py b/wsgi.py
--- a/wsgi.py
+++ b/wsgi.py
@@ -37,7 +37,8 @@
application = create_app()
-# Remove the session after every request or app shutdown.
+# pylint: disable=unused-argument
@application.teardown_appcontext
def shutdown_session(exception=None):
+ """Remove the database session after every request or app shutdown."""
db_session.remove()
|
{"golden_diff": "diff --git a/wsgi.py b/wsgi.py\n--- a/wsgi.py\n+++ b/wsgi.py\n@@ -37,7 +37,8 @@\n \n application = create_app()\n \n-# Remove the session after every request or app shutdown.\n+# pylint: disable=unused-argument\n @application.teardown_appcontext\n def shutdown_session(exception=None):\n+ \"\"\"Remove the database session after every request or app shutdown.\"\"\"\n db_session.remove()\n", "issue": "Importing of JSON timelines creates duplicate timelines with same name.\nSteps to reproduce\n1) command line:\necho '[\n {\n \"datetime\": \"2012-04-12T17:24:38-08:00\",\n \"timestamp_desc\": \"Test\",\n \"timestamp\": 1334251478000000,\n \"message\": \"Test message\"\n }\n]' > test_dupe.json \ntsctl json2ts --name test_dupe --file test_dupe.json\ntsctl json2ts --name test_dupe --file test_dupe.json\n\n2) Create new sketch\n3) Notice duplicate \"test_dupe\" timelines on list to select from.\n4) Add both\n5) Explore, using \"*\" as filter.\n6) notice duplicate results.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"This module is for creating the app for a WSGI server.\n\nExample with Gunicorn:\n$ gunicorn -b 127.0.0.1:4000 --log-file - wsgi:application\n\nExample configuration for Apache with mod_wsgi (a2enmod mod_wsgi):\n<VirtualHost *:443>\n ServerAdmin root@localhost\n SSLEngine On\n SSLCertificateFile /etc/apache2/cert.crt\n SSLCertificateKeyFile /etc/apache2/cert.key\n WSGIScriptAlias / /path/to/this/file/wsgi.py\n</VirtualHost>\n\"\"\"\n\n# If you installed Timesketch in a virtualenv you need to activate it.\n# This needs to be before any imports in order to import from the virtualenv.\n#activate_virtualenv = '/path/to/your/virtualenv/bin/activate_this.py'\n#execfile(activate_virtualenv, dict(__file__=activate_virtualenv))\n\nfrom timesketch import create_app\nfrom timesketch.models import db_session\n\napplication = create_app()\n\n# Remove the session after every request or app shutdown.\[email protected]_appcontext\ndef shutdown_session(exception=None):\n db_session.remove()\n", "path": "wsgi.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# Copyright 2015 Google Inc. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"This module is for creating the app for a WSGI server.\n\nExample with Gunicorn:\n$ gunicorn -b 127.0.0.1:4000 --log-file - wsgi:application\n\nExample configuration for Apache with mod_wsgi (a2enmod mod_wsgi):\n<VirtualHost *:443>\n ServerAdmin root@localhost\n SSLEngine On\n SSLCertificateFile /etc/apache2/cert.crt\n SSLCertificateKeyFile /etc/apache2/cert.key\n WSGIScriptAlias / /path/to/this/file/wsgi.py\n</VirtualHost>\n\"\"\"\n\n# If you installed Timesketch in a virtualenv you need to activate it.\n# This needs to be before any imports in order to import from the virtualenv.\n#activate_virtualenv = '/path/to/your/virtualenv/bin/activate_this.py'\n#execfile(activate_virtualenv, dict(__file__=activate_virtualenv))\n\nfrom timesketch import create_app\nfrom timesketch.models import db_session\n\napplication = create_app()\n\n# pylint: disable=unused-argument\[email protected]_appcontext\ndef shutdown_session(exception=None):\n \"\"\"Remove the database session after every request or app shutdown.\"\"\"\n db_session.remove()\n", "path": "wsgi.py"}]}
| 920 | 96 |
gh_patches_debug_57409
|
rasdani/github-patches
|
git_diff
|
kornia__kornia-1316
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Importing kornia causes `logging` to print to stderr?
### Describe the bug
I pip-installed the master version of kornia to access my latest PR and now my training scripts started to print all kinds of debug info. Could it be because importing kornia imports in turn `kornia.x.trainer` which has [this](https://github.com/kornia/kornia/blob/ed4eb7ab77218b021914f77cad426528a59bd780/kornia/x/trainer.py#L18) line? If so, how can I disable `x` when installing via `pip install git+https://github.com/kornia/kornia.git`?
### Reproduction steps
```bash
Import `kornia` in any script which uses `logging`.
```
### Expected behavior
Merely importing `kornia` should not toggle global settings of `logging`.
### Environment
```shell
PyTorch version: 1.9.0
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 4 2021, 15:09:15) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.11.0-1018-gcp-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
GPU 4: Tesla V100-SXM2-16GB
GPU 5: Tesla V100-SXM2-16GB
GPU 6: Tesla V100-SXM2-16GB
GPU 7: Tesla V100-SXM2-16GB
Nvidia driver version: 470.57.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.19.5
[pip3] pytorch-lightning==1.4.7
[pip3] torch==1.9.0
[pip3] torch-dimcheck==0.0.1
[pip3] torchaudio==0.9.0a0+33b2469
[pip3] torchmetrics==0.4.1
[pip3] torchvision==0.10.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.3.0 h06a4308_520
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.0 py38h42c9631_2
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] mypy-extensions 0.4.3 pypi_0 pypi
[conda] numpy 1.19.5 pypi_0 pypi
[conda] pytorch 1.9.0 py3.8_cuda11.1_cudnn8.0.5_0 pytorch
[conda] pytorch-lightning 1.4.7 pypi_0 pypi
[conda] torchaudio 0.9.0 py38 pytorch
[conda] torchmetrics 0.4.1 pypi_0 pypi
[conda] torchvision 0.10.0 py38_cu111 pytorch
```
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/x/trainer.py`
Content:
```
1 import logging
2 from typing import Callable, Dict
3
4 import torch
5 import torch.nn as nn
6 from torch.utils.data import DataLoader
7
8 # the accelerator library is a requirement for the Trainer
9 # but it is optional for grousnd base user of kornia.
10 try:
11 from accelerate import Accelerator
12 except ImportError:
13 Accelerator = None
14
15 from .metrics import AverageMeter
16 from .utils import Configuration, TrainerState
17
18 logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)
19
20
21 callbacks_whitelist = [
22 "preprocess", "augmentations", "evaluate", "fit", "checkpoint", "terminate"
23 ]
24
25
26 class Trainer:
27 """Base class to train the different models in kornia.
28
29 .. warning::
30 The API is experimental and subject to be modified based on the needs of kornia models.
31
32 Args:
33 model: the nn.Module to be optimized.
34 train_dataloader: the data loader used in the training loop.
35 valid_dataloader: the data loader used in the validation loop.
36 criterion: the nn.Module with the function that computes the loss.
37 optimizer: the torch optimizer object to be used during the optimization.
38 scheduler: the torch scheduler object with defiing the scheduling strategy.
39 accelerator: the Accelerator object to distribute the training.
40 config: a TrainerConfiguration structure containing the experiment hyper parameters.
41 callbacks: a dictionary containing the pointers to the functions to overrides. The
42 main supported hooks are ``evaluate``, ``preprocess``, ``augmentations`` and ``fit``.
43
44 .. important::
45 The API heavily relies on `accelerate <https://github.com/huggingface/accelerate/>`_.
46 In order to use it, you must: ``pip install kornia[x]``
47
48 .. seealso::
49 Learn how to use the API in our documentation
50 `here <https://kornia.readthedocs.io/en/latest/get-started/training.html>`_.
51 """
52 def __init__(
53 self,
54 model: nn.Module,
55 train_dataloader: DataLoader,
56 valid_dataloader: DataLoader,
57 criterion: nn.Module,
58 optimizer: torch.optim.Optimizer,
59 scheduler: torch.optim.lr_scheduler.CosineAnnealingLR,
60 config: Configuration,
61 callbacks: Dict[str, Callable] = {},
62 ) -> None:
63 # setup the accelerator
64 if Accelerator is None:
65 raise ModuleNotFoundError(
66 "accelerate library is not installed: pip install kornia[x]")
67 self.accelerator = Accelerator()
68
69 # setup the data related objects
70 self.model = self.accelerator.prepare(model)
71 self.train_dataloader = self.accelerator.prepare(train_dataloader)
72 self.valid_dataloader = self.accelerator.prepare(valid_dataloader)
73 self.criterion = criterion.to(self.device)
74 self.optimizer = self.accelerator.prepare(optimizer)
75 self.scheduler = scheduler
76 self.config = config
77
78 # configure callbacks
79 for fn_name, fn in callbacks.items():
80 if fn_name not in callbacks_whitelist:
81 raise ValueError(f"Not supported: {fn_name}.")
82 setattr(self, fn_name, fn)
83
84 # hyper-params
85 self.num_epochs = config.num_epochs
86
87 self._logger = logging.getLogger('train')
88
89 @property
90 def device(self) -> torch.device:
91 return self.accelerator.device
92
93 def backward(self, loss: torch.Tensor) -> None:
94 self.accelerator.backward(loss)
95
96 def fit_epoch(self, epoch: int) -> None:
97 # train loop
98 self.model.train()
99 losses = AverageMeter()
100 for sample_id, sample in enumerate(self.train_dataloader):
101 source, target = sample # this might change with new pytorch dataset structure
102 self.optimizer.zero_grad()
103
104 # perform the preprocess and augmentations in batch
105 img = self.preprocess(source)
106 img = self.augmentations(img)
107 # make the actual inference
108 output = self.model(img)
109 loss = self.criterion(output, target)
110 self.backward(loss)
111 self.optimizer.step()
112
113 losses.update(loss.item(), img.shape[0])
114
115 if sample_id % 50 == 0:
116 self._logger.info(
117 f"Train: {epoch + 1}/{self.num_epochs} "
118 f"Sample: {sample_id + 1}/{len(self.train_dataloader)} "
119 f"Loss: {losses.val:.3f} {losses.avg:.3f}"
120 )
121
122 def fit(self,) -> None:
123 # execute the main loop
124 # NOTE: Do not change and keep this structure clear for readability.
125 for epoch in range(self.num_epochs):
126 # call internally the training loop
127 # NOTE: override to customize your evaluation routine
128 self.fit_epoch(epoch)
129
130 # call internally the evaluation loop
131 # NOTE: override to customize your evaluation routine
132 valid_stats = self.evaluate()
133
134 self.checkpoint(self.model, epoch, valid_stats)
135
136 state = self.terminate(self.model, epoch, valid_stats)
137 if state == TrainerState.TERMINATE:
138 break
139
140 # END OF THE EPOCH
141 self.scheduler.step()
142
143 ...
144
145 def evaluate(self):
146 ...
147
148 def preprocess(self, x):
149 return x
150
151 def augmentations(self, x):
152 return x
153
154 def checkpoint(self, *args, **kwargs):
155 ...
156
157 def terminate(self, *args, **kwargs):
158 ...
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kornia/x/trainer.py b/kornia/x/trainer.py
--- a/kornia/x/trainer.py
+++ b/kornia/x/trainer.py
@@ -15,9 +15,6 @@
from .metrics import AverageMeter
from .utils import Configuration, TrainerState
-logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)
-
-
callbacks_whitelist = [
"preprocess", "augmentations", "evaluate", "fit", "checkpoint", "terminate"
]
|
{"golden_diff": "diff --git a/kornia/x/trainer.py b/kornia/x/trainer.py\n--- a/kornia/x/trainer.py\n+++ b/kornia/x/trainer.py\n@@ -15,9 +15,6 @@\n from .metrics import AverageMeter\n from .utils import Configuration, TrainerState\n \n-logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)\n-\n-\n callbacks_whitelist = [\n \"preprocess\", \"augmentations\", \"evaluate\", \"fit\", \"checkpoint\", \"terminate\"\n ]\n", "issue": "Importing kornia causes `logging` to print to stderr?\n### Describe the bug\n\nI pip-installed the master version of kornia to access my latest PR and now my training scripts started to print all kinds of debug info. Could it be because importing kornia imports in turn `kornia.x.trainer` which has [this](https://github.com/kornia/kornia/blob/ed4eb7ab77218b021914f77cad426528a59bd780/kornia/x/trainer.py#L18) line? If so, how can I disable `x` when installing via `pip install git+https://github.com/kornia/kornia.git`?\n\n### Reproduction steps\n\n```bash\nImport `kornia` in any script which uses `logging`.\n```\n\n\n### Expected behavior\n\nMerely importing `kornia` should not toggle global settings of `logging`.\n\n### Environment\n\n```shell\nPyTorch version: 1.9.0\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.3 LTS (x86_64)\r\nGCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.8.10 (default, Jun 4 2021, 15:09:15) [GCC 7.5.0] (64-bit runtime)\r\nPython platform: Linux-5.11.0-1018-gcp-x86_64-with-glibc2.17\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nGPU models and configuration: \r\nGPU 0: Tesla V100-SXM2-16GB\r\nGPU 1: Tesla V100-SXM2-16GB\r\nGPU 2: Tesla V100-SXM2-16GB\r\nGPU 3: Tesla V100-SXM2-16GB\r\nGPU 4: Tesla V100-SXM2-16GB\r\nGPU 5: Tesla V100-SXM2-16GB\r\nGPU 6: Tesla V100-SXM2-16GB\r\nGPU 7: Tesla V100-SXM2-16GB\r\n\r\nNvidia driver version: 470.57.02\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\n\r\nVersions of relevant libraries:\r\n[pip3] mypy-extensions==0.4.3\r\n[pip3] numpy==1.19.5\r\n[pip3] pytorch-lightning==1.4.7\r\n[pip3] torch==1.9.0\r\n[pip3] torch-dimcheck==0.0.1\r\n[pip3] torchaudio==0.9.0a0+33b2469\r\n[pip3] torchmetrics==0.4.1\r\n[pip3] torchvision==0.10.0\r\n[conda] blas 1.0 mkl \r\n[conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia\r\n[conda] ffmpeg 4.3 hf484d3e_0 pytorch\r\n[conda] mkl 2021.3.0 h06a4308_520 \r\n[conda] mkl-service 2.4.0 py38h7f8727e_0 \r\n[conda] mkl_fft 1.3.0 py38h42c9631_2 \r\n[conda] mkl_random 1.2.2 py38h51133e4_0 \r\n[conda] mypy-extensions 0.4.3 pypi_0 pypi\r\n[conda] numpy 1.19.5 pypi_0 pypi\r\n[conda] pytorch 1.9.0 py3.8_cuda11.1_cudnn8.0.5_0 pytorch\r\n[conda] pytorch-lightning 1.4.7 pypi_0 pypi\r\n[conda] torchaudio 0.9.0 py38 pytorch\r\n[conda] torchmetrics 0.4.1 pypi_0 pypi\r\n[conda] torchvision 0.10.0 py38_cu111 pytorch\n```\n\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "import logging\nfrom typing import Callable, Dict\n\nimport torch\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader\n\n# the accelerator library is a requirement for the Trainer\n# but it is optional for grousnd base user of kornia.\ntry:\n from accelerate import Accelerator\nexcept ImportError:\n Accelerator = None\n\nfrom .metrics import AverageMeter\nfrom .utils import Configuration, 
TrainerState\n\nlogging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)\n\n\ncallbacks_whitelist = [\n \"preprocess\", \"augmentations\", \"evaluate\", \"fit\", \"checkpoint\", \"terminate\"\n]\n\n\nclass Trainer:\n \"\"\"Base class to train the different models in kornia.\n\n .. warning::\n The API is experimental and subject to be modified based on the needs of kornia models.\n\n Args:\n model: the nn.Module to be optimized.\n train_dataloader: the data loader used in the training loop.\n valid_dataloader: the data loader used in the validation loop.\n criterion: the nn.Module with the function that computes the loss.\n optimizer: the torch optimizer object to be used during the optimization.\n scheduler: the torch scheduler object with defiing the scheduling strategy.\n accelerator: the Accelerator object to distribute the training.\n config: a TrainerConfiguration structure containing the experiment hyper parameters.\n callbacks: a dictionary containing the pointers to the functions to overrides. The\n main supported hooks are ``evaluate``, ``preprocess``, ``augmentations`` and ``fit``.\n\n .. important::\n The API heavily relies on `accelerate <https://github.com/huggingface/accelerate/>`_.\n In order to use it, you must: ``pip install kornia[x]``\n\n .. seealso::\n Learn how to use the API in our documentation\n `here <https://kornia.readthedocs.io/en/latest/get-started/training.html>`_.\n \"\"\"\n def __init__(\n self,\n model: nn.Module,\n train_dataloader: DataLoader,\n valid_dataloader: DataLoader,\n criterion: nn.Module,\n optimizer: torch.optim.Optimizer,\n scheduler: torch.optim.lr_scheduler.CosineAnnealingLR,\n config: Configuration,\n callbacks: Dict[str, Callable] = {},\n ) -> None:\n # setup the accelerator\n if Accelerator is None:\n raise ModuleNotFoundError(\n \"accelerate library is not installed: pip install kornia[x]\")\n self.accelerator = Accelerator()\n\n # setup the data related objects\n self.model = self.accelerator.prepare(model)\n self.train_dataloader = self.accelerator.prepare(train_dataloader)\n self.valid_dataloader = self.accelerator.prepare(valid_dataloader)\n self.criterion = criterion.to(self.device)\n self.optimizer = self.accelerator.prepare(optimizer)\n self.scheduler = scheduler\n self.config = config\n\n # configure callbacks\n for fn_name, fn in callbacks.items():\n if fn_name not in callbacks_whitelist:\n raise ValueError(f\"Not supported: {fn_name}.\")\n setattr(self, fn_name, fn)\n\n # hyper-params\n self.num_epochs = config.num_epochs\n\n self._logger = logging.getLogger('train')\n\n @property\n def device(self) -> torch.device:\n return self.accelerator.device\n\n def backward(self, loss: torch.Tensor) -> None:\n self.accelerator.backward(loss)\n\n def fit_epoch(self, epoch: int) -> None:\n # train loop\n self.model.train()\n losses = AverageMeter()\n for sample_id, sample in enumerate(self.train_dataloader):\n source, target = sample # this might change with new pytorch dataset structure\n self.optimizer.zero_grad()\n\n # perform the preprocess and augmentations in batch\n img = self.preprocess(source)\n img = self.augmentations(img)\n # make the actual inference\n output = self.model(img)\n loss = self.criterion(output, target)\n self.backward(loss)\n self.optimizer.step()\n\n losses.update(loss.item(), img.shape[0])\n\n if sample_id % 50 == 0:\n self._logger.info(\n f\"Train: {epoch + 1}/{self.num_epochs} \"\n f\"Sample: {sample_id + 1}/{len(self.train_dataloader)} \"\n f\"Loss: {losses.val:.3f} {losses.avg:.3f}\"\n 
)\n\n def fit(self,) -> None:\n # execute the main loop\n # NOTE: Do not change and keep this structure clear for readability.\n for epoch in range(self.num_epochs):\n # call internally the training loop\n # NOTE: override to customize your evaluation routine\n self.fit_epoch(epoch)\n\n # call internally the evaluation loop\n # NOTE: override to customize your evaluation routine\n valid_stats = self.evaluate()\n\n self.checkpoint(self.model, epoch, valid_stats)\n\n state = self.terminate(self.model, epoch, valid_stats)\n if state == TrainerState.TERMINATE:\n break\n\n # END OF THE EPOCH\n self.scheduler.step()\n\n ...\n\n def evaluate(self):\n ...\n\n def preprocess(self, x):\n return x\n\n def augmentations(self, x):\n return x\n\n def checkpoint(self, *args, **kwargs):\n ...\n\n def terminate(self, *args, **kwargs):\n ...\n", "path": "kornia/x/trainer.py"}], "after_files": [{"content": "import logging\nfrom typing import Callable, Dict\n\nimport torch\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader\n\n# the accelerator library is a requirement for the Trainer\n# but it is optional for grousnd base user of kornia.\ntry:\n from accelerate import Accelerator\nexcept ImportError:\n Accelerator = None\n\nfrom .metrics import AverageMeter\nfrom .utils import Configuration, TrainerState\n\ncallbacks_whitelist = [\n \"preprocess\", \"augmentations\", \"evaluate\", \"fit\", \"checkpoint\", \"terminate\"\n]\n\n\nclass Trainer:\n \"\"\"Base class to train the different models in kornia.\n\n .. warning::\n The API is experimental and subject to be modified based on the needs of kornia models.\n\n Args:\n model: the nn.Module to be optimized.\n train_dataloader: the data loader used in the training loop.\n valid_dataloader: the data loader used in the validation loop.\n criterion: the nn.Module with the function that computes the loss.\n optimizer: the torch optimizer object to be used during the optimization.\n scheduler: the torch scheduler object with defiing the scheduling strategy.\n accelerator: the Accelerator object to distribute the training.\n config: a TrainerConfiguration structure containing the experiment hyper parameters.\n callbacks: a dictionary containing the pointers to the functions to overrides. The\n main supported hooks are ``evaluate``, ``preprocess``, ``augmentations`` and ``fit``.\n\n .. important::\n The API heavily relies on `accelerate <https://github.com/huggingface/accelerate/>`_.\n In order to use it, you must: ``pip install kornia[x]``\n\n .. 
seealso::\n Learn how to use the API in our documentation\n `here <https://kornia.readthedocs.io/en/latest/get-started/training.html>`_.\n \"\"\"\n def __init__(\n self,\n model: nn.Module,\n train_dataloader: DataLoader,\n valid_dataloader: DataLoader,\n criterion: nn.Module,\n optimizer: torch.optim.Optimizer,\n scheduler: torch.optim.lr_scheduler.CosineAnnealingLR,\n config: Configuration,\n callbacks: Dict[str, Callable] = {},\n ) -> None:\n # setup the accelerator\n if Accelerator is None:\n raise ModuleNotFoundError(\n \"accelerate library is not installed: pip install kornia[x]\")\n self.accelerator = Accelerator()\n\n # setup the data related objects\n self.model = self.accelerator.prepare(model)\n self.train_dataloader = self.accelerator.prepare(train_dataloader)\n self.valid_dataloader = self.accelerator.prepare(valid_dataloader)\n self.criterion = criterion.to(self.device)\n self.optimizer = self.accelerator.prepare(optimizer)\n self.scheduler = scheduler\n self.config = config\n\n # configure callbacks\n for fn_name, fn in callbacks.items():\n if fn_name not in callbacks_whitelist:\n raise ValueError(f\"Not supported: {fn_name}.\")\n setattr(self, fn_name, fn)\n\n # hyper-params\n self.num_epochs = config.num_epochs\n\n self._logger = logging.getLogger('train')\n\n @property\n def device(self) -> torch.device:\n return self.accelerator.device\n\n def backward(self, loss: torch.Tensor) -> None:\n self.accelerator.backward(loss)\n\n def fit_epoch(self, epoch: int) -> None:\n # train loop\n self.model.train()\n losses = AverageMeter()\n for sample_id, sample in enumerate(self.train_dataloader):\n source, target = sample # this might change with new pytorch dataset structure\n self.optimizer.zero_grad()\n\n # perform the preprocess and augmentations in batch\n img = self.preprocess(source)\n img = self.augmentations(img)\n # make the actual inference\n output = self.model(img)\n loss = self.criterion(output, target)\n self.backward(loss)\n self.optimizer.step()\n\n losses.update(loss.item(), img.shape[0])\n\n if sample_id % 50 == 0:\n self._logger.info(\n f\"Train: {epoch + 1}/{self.num_epochs} \"\n f\"Sample: {sample_id + 1}/{len(self.train_dataloader)} \"\n f\"Loss: {losses.val:.3f} {losses.avg:.3f}\"\n )\n\n def fit(self,) -> None:\n # execute the main loop\n # NOTE: Do not change and keep this structure clear for readability.\n for epoch in range(self.num_epochs):\n # call internally the training loop\n # NOTE: override to customize your evaluation routine\n self.fit_epoch(epoch)\n\n # call internally the evaluation loop\n # NOTE: override to customize your evaluation routine\n valid_stats = self.evaluate()\n\n self.checkpoint(self.model, epoch, valid_stats)\n\n state = self.terminate(self.model, epoch, valid_stats)\n if state == TrainerState.TERMINATE:\n break\n\n # END OF THE EPOCH\n self.scheduler.step()\n\n ...\n\n def evaluate(self):\n ...\n\n def preprocess(self, x):\n return x\n\n def augmentations(self, x):\n return x\n\n def checkpoint(self, *args, **kwargs):\n ...\n\n def terminate(self, *args, **kwargs):\n ...\n", "path": "kornia/x/trainer.py"}]}
| 2,892 | 115 |
gh_patches_debug_39744
|
rasdani/github-patches
|
git_diff
|
Pycord-Development__pycord-1250
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BridgeOption not raising BadArgument when conversion fails
### Summary
BridgeOption does not raise the correct error when conversion fails for a built-in type.
### Reproduction Steps
Create a bridge command with an Option parameter of type `int` (other built-ins also work here).
Then use the prefixed command with an invalid value.
This will raise ValueError (in the case of int), when BadArgument should be raised instead.
### Minimal Reproducible Code
```python
@bot.bridge_command()
async def test(ctx, value: Option(int, name='value')):
await ctx.respond(str(value))
```
### Expected Results
BadArgument to be raised when using `-test a`
### Actual Results
ValueError is raised
```
Traceback (most recent call last):
File "site-packages\discord\ext\commands\converter.py", line 1071, in _actual_conversion
return await converter.convert(ctx, argument)
File "site-packages\discord\ext\bridge\core.py", line 161, in convert
converted = converter(argument)
ValueError: invalid literal for int() with base 10: 'a'
```
### Intents
default + message_content
### System Information
- Python v3.10.1-final
- py-cord v2.0.0-beta
- py-cord pkg_resources: v2.0.0b7
- aiohttp v3.8.1
- system info: Windows 10 10.0.19042
### Checklist
- [X] I have searched the open issues for duplicates.
- [X] I have shown the entire traceback, if possible.
- [X] I have removed my token from display, if visible.
### Additional Context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `discord/ext/bridge/core.py`
Content:
```
1 """
2 The MIT License (MIT)
3
4 Copyright (c) 2015-2021 Rapptz
5 Copyright (c) 2021-present Pycord Development
6
7 Permission is hereby granted, free of charge, to any person obtaining a
8 copy of this software and associated documentation files (the "Software"),
9 to deal in the Software without restriction, including without limitation
10 the rights to use, copy, modify, merge, publish, distribute, sublicense,
11 and/or sell copies of the Software, and to permit persons to whom the
12 Software is furnished to do so, subject to the following conditions:
13
14 The above copyright notice and this permission notice shall be included in
15 all copies or substantial portions of the Software.
16
17 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
18 OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
19 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
20 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
21 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
22 FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
23 DEALINGS IN THE SOFTWARE.
24 """
25 from typing import Union, Any
26
27 import discord.commands.options
28 from discord.commands import Option, SlashCommand
29 from discord.enums import SlashCommandOptionType
30
31 from ..commands import AutoShardedBot as ExtAutoShardedBot
32 from ..commands import BadArgument
33 from ..commands import Bot as ExtBot
34 from ..commands import (
35 Command,
36 Converter,
37 GuildChannelConverter,
38 RoleConverter,
39 UserConverter,
40 )
41
42 __all__ = ("BridgeCommand", "bridge_command", "BridgeExtCommand", "BridgeSlashCommand")
43
44 from ...utils import get
45
46
47 class BridgeSlashCommand(SlashCommand):
48 """
49 A subclass of :class:`.SlashCommand` that is used to implement bridge commands.
50 """
51 ...
52
53
54 class BridgeExtCommand(Command):
55 """
56 A subclass of :class:`.ext.commands.Command` that is used to implement bridge commands.
57 """
58 ...
59
60
61 class BridgeCommand:
62 def __init__(self, callback, **kwargs):
63 """
64 This is the base class for commands that are compatible with both traditional (prefix-based) commands and slash
65 commands.
66
67 Parameters
68 ----------
69 callback: Callable[[BridgeContext, ...], Awaitable[Any]]
70 The callback to invoke when the command is executed. The first argument will be a :class:`BridgeContext`,
71 and any additional arguments will be passed to the callback. This callback must be a coroutine.
72 kwargs: Optional[Dict[str, Any]]
73 Keyword arguments that are directly passed to the respective command constructors.
74 """
75 self.callback = callback
76 self.kwargs = kwargs
77
78 def get_ext_command(self):
79 """A method to get the ext.commands version of this command.
80
81 Returns
82 -------
83 :class:`BridgeExtCommand`
84 The respective traditional (prefix-based) version of the command.
85 """
86 command = BridgeExtCommand(self.callback, **self.kwargs)
87 return command
88
89 def get_application_command(self):
90 """A method to get the discord.commands version of this command.
91
92 Returns
93 -------
94 :class:`BridgeSlashCommand`
95 The respective slash command version of the command.
96 """
97 command = BridgeSlashCommand(self.callback, **self.kwargs)
98 return command
99
100 def add_to(self, bot: Union[ExtBot, ExtAutoShardedBot]) -> None:
101 """Adds the command to a bot.
102
103 Parameters
104 ----------
105 bot: Union[:class:`ExtBot`, :class:`ExtAutoShardedBot`]
106 The bot to add the command to.
107 """
108 bot.add_command(self.get_ext_command())
109 bot.add_application_command(self.get_application_command())
110
111
112 def bridge_command(**kwargs):
113 """A decorator that is used to wrap a function as a command.
114
115 Parameters
116 ----------
117 kwargs: Optional[Dict[str, Any]]
118 Keyword arguments that are directly passed to the respective command constructors.
119 """
120
121 def decorator(callback):
122 return BridgeCommand(callback, **kwargs)
123
124 return decorator
125
126
127 class MentionableConverter(Converter):
128 """A converter that can convert a mention to a user or a role."""
129
130 async def convert(self, ctx, argument):
131 try:
132 return await RoleConverter().convert(ctx, argument)
133 except BadArgument:
134 return await UserConverter().convert(ctx, argument)
135
136
137 def attachment_callback(*args): # pylint: disable=unused-argument
138 raise ValueError("Attachments are not supported for compatibility commands.")
139
140
141 class BridgeOption(Option, Converter):
142 async def convert(self, ctx, argument) -> Any:
143 if self.converter is not None:
144 converted = await self.converter.convert(ctx, argument)
145 else:
146 mapping = {
147 SlashCommandOptionType.string: str,
148 SlashCommandOptionType.integer: int,
149 SlashCommandOptionType.boolean: bool,
150 SlashCommandOptionType.user: UserConverter,
151 SlashCommandOptionType.channel: GuildChannelConverter,
152 SlashCommandOptionType.role: RoleConverter,
153 SlashCommandOptionType.mentionable: MentionableConverter,
154 SlashCommandOptionType.number: float,
155 SlashCommandOptionType.attachment: attachment_callback,
156 }
157 converter = mapping[self.input_type]
158 if issubclass(converter, Converter):
159 converted = await converter().convert(ctx, argument)
160 else:
161 converted = converter(argument)
162 if self.choices:
163 choices_names = [choice.name for choice in self.choices]
164 if converted in choices_names:
165 converted = get(self.choices, name=converted).value
166 else:
167 choices = [choice.value for choice in self.choices]
168 if converted not in choices:
169 print(self.choices)
170 raise ValueError(
171 f"{argument} is not a valid choice. Valid choices: {list(set(choices_names + choices))}"
172 )
173
174 return converted
175
176
177 discord.commands.options.Option = BridgeOption
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/discord/ext/bridge/core.py b/discord/ext/bridge/core.py
--- a/discord/ext/bridge/core.py
+++ b/discord/ext/bridge/core.py
@@ -41,6 +41,8 @@
__all__ = ("BridgeCommand", "bridge_command", "BridgeExtCommand", "BridgeSlashCommand")
+from ..commands.converter import _convert_to_bool
+
from ...utils import get
@@ -140,38 +142,40 @@
class BridgeOption(Option, Converter):
async def convert(self, ctx, argument) -> Any:
- if self.converter is not None:
- converted = await self.converter.convert(ctx, argument)
- else:
- mapping = {
- SlashCommandOptionType.string: str,
- SlashCommandOptionType.integer: int,
- SlashCommandOptionType.boolean: bool,
- SlashCommandOptionType.user: UserConverter,
- SlashCommandOptionType.channel: GuildChannelConverter,
- SlashCommandOptionType.role: RoleConverter,
- SlashCommandOptionType.mentionable: MentionableConverter,
- SlashCommandOptionType.number: float,
- SlashCommandOptionType.attachment: attachment_callback,
- }
- converter = mapping[self.input_type]
- if issubclass(converter, Converter):
- converted = await converter().convert(ctx, argument)
- else:
- converted = converter(argument)
- if self.choices:
- choices_names = [choice.name for choice in self.choices]
- if converted in choices_names:
- converted = get(self.choices, name=converted).value
+ try:
+ if self.converter is not None:
+ converted = await self.converter.convert(ctx, argument)
else:
- choices = [choice.value for choice in self.choices]
- if converted not in choices:
- print(self.choices)
- raise ValueError(
- f"{argument} is not a valid choice. Valid choices: {list(set(choices_names + choices))}"
- )
-
- return converted
+ mapping = {
+ SlashCommandOptionType.string: str,
+ SlashCommandOptionType.integer: int,
+ SlashCommandOptionType.boolean: lambda val: _convert_to_bool(str(val)),
+ SlashCommandOptionType.user: UserConverter,
+ SlashCommandOptionType.channel: GuildChannelConverter,
+ SlashCommandOptionType.role: RoleConverter,
+ SlashCommandOptionType.mentionable: MentionableConverter,
+ SlashCommandOptionType.number: float,
+ SlashCommandOptionType.attachment: attachment_callback,
+ }
+ converter = mapping[self.input_type]
+ if issubclass(converter, Converter):
+ converted = await converter().convert(ctx, argument)
+ else:
+ converted = converter(argument)
+ if self.choices:
+ choices_names = [choice.name for choice in self.choices]
+ if converted in choices_names:
+ converted = get(self.choices, name=converted).value
+ else:
+ choices = [choice.value for choice in self.choices]
+ if converted not in choices:
+ raise ValueError(
+ f"{argument} is not a valid choice. Valid choices: {list(set(choices_names + choices))}"
+ )
+
+ return converted
+ except ValueError as exc:
+ raise BadArgument() from exc
discord.commands.options.Option = BridgeOption
|
{"golden_diff": "diff --git a/discord/ext/bridge/core.py b/discord/ext/bridge/core.py\n--- a/discord/ext/bridge/core.py\n+++ b/discord/ext/bridge/core.py\n@@ -41,6 +41,8 @@\n \n __all__ = (\"BridgeCommand\", \"bridge_command\", \"BridgeExtCommand\", \"BridgeSlashCommand\")\n \n+from ..commands.converter import _convert_to_bool\n+\n from ...utils import get\n \n \n@@ -140,38 +142,40 @@\n \n class BridgeOption(Option, Converter):\n async def convert(self, ctx, argument) -> Any:\n- if self.converter is not None:\n- converted = await self.converter.convert(ctx, argument)\n- else:\n- mapping = {\n- SlashCommandOptionType.string: str,\n- SlashCommandOptionType.integer: int,\n- SlashCommandOptionType.boolean: bool,\n- SlashCommandOptionType.user: UserConverter,\n- SlashCommandOptionType.channel: GuildChannelConverter,\n- SlashCommandOptionType.role: RoleConverter,\n- SlashCommandOptionType.mentionable: MentionableConverter,\n- SlashCommandOptionType.number: float,\n- SlashCommandOptionType.attachment: attachment_callback,\n- }\n- converter = mapping[self.input_type]\n- if issubclass(converter, Converter):\n- converted = await converter().convert(ctx, argument)\n- else:\n- converted = converter(argument)\n- if self.choices:\n- choices_names = [choice.name for choice in self.choices]\n- if converted in choices_names:\n- converted = get(self.choices, name=converted).value\n+ try:\n+ if self.converter is not None:\n+ converted = await self.converter.convert(ctx, argument)\n else:\n- choices = [choice.value for choice in self.choices]\n- if converted not in choices:\n- print(self.choices)\n- raise ValueError(\n- f\"{argument} is not a valid choice. Valid choices: {list(set(choices_names + choices))}\"\n- )\n-\n- return converted\n+ mapping = {\n+ SlashCommandOptionType.string: str,\n+ SlashCommandOptionType.integer: int,\n+ SlashCommandOptionType.boolean: lambda val: _convert_to_bool(str(val)),\n+ SlashCommandOptionType.user: UserConverter,\n+ SlashCommandOptionType.channel: GuildChannelConverter,\n+ SlashCommandOptionType.role: RoleConverter,\n+ SlashCommandOptionType.mentionable: MentionableConverter,\n+ SlashCommandOptionType.number: float,\n+ SlashCommandOptionType.attachment: attachment_callback,\n+ }\n+ converter = mapping[self.input_type]\n+ if issubclass(converter, Converter):\n+ converted = await converter().convert(ctx, argument)\n+ else:\n+ converted = converter(argument)\n+ if self.choices:\n+ choices_names = [choice.name for choice in self.choices]\n+ if converted in choices_names:\n+ converted = get(self.choices, name=converted).value\n+ else:\n+ choices = [choice.value for choice in self.choices]\n+ if converted not in choices:\n+ raise ValueError(\n+ f\"{argument} is not a valid choice. 
Valid choices: {list(set(choices_names + choices))}\"\n+ )\n+\n+ return converted\n+ except ValueError as exc:\n+ raise BadArgument() from exc\n \n \n discord.commands.options.Option = BridgeOption\n", "issue": "BridgeOption not raising BadArgument when conversion fails \n### Summary\n\nBridgeOption does not raise the correct error when conversion fails for a built-in type.\n\n### Reproduction Steps\n\nCreate a bridge command with an Option parameter of type `int` (other built-ins also work here).\r\nThen use the prefixed command with an invalid value.\r\nThis will raise ValueError (in the case of int), when BadArgument should be raised instead.\n\n### Minimal Reproducible Code\n\n```python\[email protected]_command()\r\nasync def test(ctx, value: Option(int, name='value')):\r\n await ctx.respond(str(value))\n```\n\n\n### Expected Results\n\nBadArgument to be raised when using `-test a`\n\n### Actual Results\n\nValueError is raised\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"site-packages\\discord\\ext\\commands\\converter.py\", line 1071, in _actual_conversion\r\n return await converter.convert(ctx, argument)\r\n File \"site-packages\\discord\\ext\\bridge\\core.py\", line 161, in convert\r\n converted = converter(argument)\r\nValueError: invalid literal for int() with base 10: 'a'\r\n```\n\n### Intents\n\ndefault + message_content\n\n### System Information\n\n- Python v3.10.1-final\r\n- py-cord v2.0.0-beta\r\n - py-cord pkg_resources: v2.0.0b7\r\n- aiohttp v3.8.1\r\n- system info: Windows 10 10.0.19042\n\n### Checklist\n\n- [X] I have searched the open issues for duplicates.\n- [X] I have shown the entire traceback, if possible.\n- [X] I have removed my token from display, if visible.\n\n### Additional Context\n\n_No response_\n", "before_files": [{"content": "\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2021 Rapptz\nCopyright (c) 2021-present Pycord Development\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\nfrom typing import Union, Any\n\nimport discord.commands.options\nfrom discord.commands import Option, SlashCommand\nfrom discord.enums import SlashCommandOptionType\n\nfrom ..commands import AutoShardedBot as ExtAutoShardedBot\nfrom ..commands import BadArgument\nfrom ..commands import Bot as ExtBot\nfrom ..commands import (\n Command,\n Converter,\n GuildChannelConverter,\n RoleConverter,\n UserConverter,\n)\n\n__all__ = (\"BridgeCommand\", \"bridge_command\", \"BridgeExtCommand\", \"BridgeSlashCommand\")\n\nfrom ...utils import get\n\n\nclass BridgeSlashCommand(SlashCommand):\n \"\"\"\n A subclass of :class:`.SlashCommand` that is used to implement bridge commands.\n \"\"\"\n ...\n\n\nclass BridgeExtCommand(Command):\n \"\"\"\n A subclass of :class:`.ext.commands.Command` that is used to implement bridge commands.\n \"\"\"\n ...\n\n\nclass BridgeCommand:\n def __init__(self, callback, **kwargs):\n \"\"\"\n This is the base class for commands that are compatible with both traditional (prefix-based) commands and slash\n commands.\n\n Parameters\n ----------\n callback: Callable[[BridgeContext, ...], Awaitable[Any]]\n The callback to invoke when the command is executed. The first argument will be a :class:`BridgeContext`,\n and any additional arguments will be passed to the callback. This callback must be a coroutine.\n kwargs: Optional[Dict[str, Any]]\n Keyword arguments that are directly passed to the respective command constructors.\n \"\"\"\n self.callback = callback\n self.kwargs = kwargs\n\n def get_ext_command(self):\n \"\"\"A method to get the ext.commands version of this command.\n\n Returns\n -------\n :class:`BridgeExtCommand`\n The respective traditional (prefix-based) version of the command.\n \"\"\"\n command = BridgeExtCommand(self.callback, **self.kwargs)\n return command\n\n def get_application_command(self):\n \"\"\"A method to get the discord.commands version of this command.\n\n Returns\n -------\n :class:`BridgeSlashCommand`\n The respective slash command version of the command.\n \"\"\"\n command = BridgeSlashCommand(self.callback, **self.kwargs)\n return command\n\n def add_to(self, bot: Union[ExtBot, ExtAutoShardedBot]) -> None:\n \"\"\"Adds the command to a bot.\n\n Parameters\n ----------\n bot: Union[:class:`ExtBot`, :class:`ExtAutoShardedBot`]\n The bot to add the command to.\n \"\"\"\n bot.add_command(self.get_ext_command())\n bot.add_application_command(self.get_application_command())\n\n\ndef bridge_command(**kwargs):\n \"\"\"A decorator that is used to wrap a function as a command.\n\n Parameters\n ----------\n kwargs: Optional[Dict[str, Any]]\n Keyword arguments that are directly passed to the respective command constructors.\n \"\"\"\n\n def decorator(callback):\n return BridgeCommand(callback, **kwargs)\n\n return decorator\n\n\nclass MentionableConverter(Converter):\n \"\"\"A converter that can convert a mention to a user or a role.\"\"\"\n\n async def convert(self, ctx, argument):\n try:\n return await RoleConverter().convert(ctx, argument)\n except BadArgument:\n return await UserConverter().convert(ctx, argument)\n\n\ndef attachment_callback(*args): # pylint: disable=unused-argument\n raise ValueError(\"Attachments are not supported for compatibility commands.\")\n\n\nclass 
BridgeOption(Option, Converter):\n async def convert(self, ctx, argument) -> Any:\n if self.converter is not None:\n converted = await self.converter.convert(ctx, argument)\n else:\n mapping = {\n SlashCommandOptionType.string: str,\n SlashCommandOptionType.integer: int,\n SlashCommandOptionType.boolean: bool,\n SlashCommandOptionType.user: UserConverter,\n SlashCommandOptionType.channel: GuildChannelConverter,\n SlashCommandOptionType.role: RoleConverter,\n SlashCommandOptionType.mentionable: MentionableConverter,\n SlashCommandOptionType.number: float,\n SlashCommandOptionType.attachment: attachment_callback,\n }\n converter = mapping[self.input_type]\n if issubclass(converter, Converter):\n converted = await converter().convert(ctx, argument)\n else:\n converted = converter(argument)\n if self.choices:\n choices_names = [choice.name for choice in self.choices]\n if converted in choices_names:\n converted = get(self.choices, name=converted).value\n else:\n choices = [choice.value for choice in self.choices]\n if converted not in choices:\n print(self.choices)\n raise ValueError(\n f\"{argument} is not a valid choice. Valid choices: {list(set(choices_names + choices))}\"\n )\n\n return converted\n\n\ndiscord.commands.options.Option = BridgeOption\n", "path": "discord/ext/bridge/core.py"}], "after_files": [{"content": "\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2021 Rapptz\nCopyright (c) 2021-present Pycord Development\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\nfrom typing import Union, Any\n\nimport discord.commands.options\nfrom discord.commands import Option, SlashCommand\nfrom discord.enums import SlashCommandOptionType\n\nfrom ..commands import AutoShardedBot as ExtAutoShardedBot\nfrom ..commands import BadArgument\nfrom ..commands import Bot as ExtBot\nfrom ..commands import (\n Command,\n Converter,\n GuildChannelConverter,\n RoleConverter,\n UserConverter,\n)\n\n__all__ = (\"BridgeCommand\", \"bridge_command\", \"BridgeExtCommand\", \"BridgeSlashCommand\")\n\nfrom ..commands.converter import _convert_to_bool\n\nfrom ...utils import get\n\n\nclass BridgeSlashCommand(SlashCommand):\n \"\"\"\n A subclass of :class:`.SlashCommand` that is used to implement bridge commands.\n \"\"\"\n ...\n\n\nclass BridgeExtCommand(Command):\n \"\"\"\n A subclass of :class:`.ext.commands.Command` that is used to implement bridge commands.\n \"\"\"\n ...\n\n\nclass BridgeCommand:\n def __init__(self, callback, **kwargs):\n \"\"\"\n This is the base class for commands that are compatible with both traditional (prefix-based) commands and slash\n commands.\n\n Parameters\n ----------\n callback: Callable[[BridgeContext, ...], Awaitable[Any]]\n The callback to invoke when the command is executed. The first argument will be a :class:`BridgeContext`,\n and any additional arguments will be passed to the callback. This callback must be a coroutine.\n kwargs: Optional[Dict[str, Any]]\n Keyword arguments that are directly passed to the respective command constructors.\n \"\"\"\n self.callback = callback\n self.kwargs = kwargs\n\n def get_ext_command(self):\n \"\"\"A method to get the ext.commands version of this command.\n\n Returns\n -------\n :class:`BridgeExtCommand`\n The respective traditional (prefix-based) version of the command.\n \"\"\"\n command = BridgeExtCommand(self.callback, **self.kwargs)\n return command\n\n def get_application_command(self):\n \"\"\"A method to get the discord.commands version of this command.\n\n Returns\n -------\n :class:`BridgeSlashCommand`\n The respective slash command version of the command.\n \"\"\"\n command = BridgeSlashCommand(self.callback, **self.kwargs)\n return command\n\n def add_to(self, bot: Union[ExtBot, ExtAutoShardedBot]) -> None:\n \"\"\"Adds the command to a bot.\n\n Parameters\n ----------\n bot: Union[:class:`ExtBot`, :class:`ExtAutoShardedBot`]\n The bot to add the command to.\n \"\"\"\n bot.add_command(self.get_ext_command())\n bot.add_application_command(self.get_application_command())\n\n\ndef bridge_command(**kwargs):\n \"\"\"A decorator that is used to wrap a function as a command.\n\n Parameters\n ----------\n kwargs: Optional[Dict[str, Any]]\n Keyword arguments that are directly passed to the respective command constructors.\n \"\"\"\n\n def decorator(callback):\n return BridgeCommand(callback, **kwargs)\n\n return decorator\n\n\nclass MentionableConverter(Converter):\n \"\"\"A converter that can convert a mention to a user or a role.\"\"\"\n\n async def convert(self, ctx, argument):\n try:\n return await RoleConverter().convert(ctx, argument)\n except BadArgument:\n return await UserConverter().convert(ctx, argument)\n\n\ndef attachment_callback(*args): # pylint: disable=unused-argument\n raise ValueError(\"Attachments are not supported 
for compatibility commands.\")\n\n\nclass BridgeOption(Option, Converter):\n async def convert(self, ctx, argument) -> Any:\n try:\n if self.converter is not None:\n converted = await self.converter.convert(ctx, argument)\n else:\n mapping = {\n SlashCommandOptionType.string: str,\n SlashCommandOptionType.integer: int,\n SlashCommandOptionType.boolean: lambda val: _convert_to_bool(str(val)),\n SlashCommandOptionType.user: UserConverter,\n SlashCommandOptionType.channel: GuildChannelConverter,\n SlashCommandOptionType.role: RoleConverter,\n SlashCommandOptionType.mentionable: MentionableConverter,\n SlashCommandOptionType.number: float,\n SlashCommandOptionType.attachment: attachment_callback,\n }\n converter = mapping[self.input_type]\n if issubclass(converter, Converter):\n converted = await converter().convert(ctx, argument)\n else:\n converted = converter(argument)\n if self.choices:\n choices_names = [choice.name for choice in self.choices]\n if converted in choices_names:\n converted = get(self.choices, name=converted).value\n else:\n choices = [choice.value for choice in self.choices]\n if converted not in choices:\n raise ValueError(\n f\"{argument} is not a valid choice. Valid choices: {list(set(choices_names + choices))}\"\n )\n\n return converted\n except ValueError as exc:\n raise BadArgument() from exc\n\n\ndiscord.commands.options.Option = BridgeOption\n", "path": "discord/ext/bridge/core.py"}]}
| 2,319 | 729 |
gh_patches_debug_35392
|
rasdani/github-patches
|
git_diff
|
ipython__ipython-3947
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The PDF option for `--post` should work with lowercase
Right now to get a PDF out of nbconvert you have to run
```
ipython nbconvert --to latex --post PDF foo.ipynb
```
Many users will try `pdf` instead and the error message is very confusing. We should just make it work with lowercase.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/nbconvert/nbconvertapp.py`
Content:
```
1 #!/usr/bin/env python
2 """NBConvert is a utility for conversion of .ipynb files.
3
4 Command-line interface for the NbConvert conversion utility.
5 """
6 #-----------------------------------------------------------------------------
7 #Copyright (c) 2013, the IPython Development Team.
8 #
9 #Distributed under the terms of the Modified BSD License.
10 #
11 #The full license is in the file COPYING.txt, distributed with this software.
12 #-----------------------------------------------------------------------------
13
14 #-----------------------------------------------------------------------------
15 #Imports
16 #-----------------------------------------------------------------------------
17
18 # Stdlib imports
19 from __future__ import print_function
20
21 import logging
22 import sys
23 import os
24 import glob
25
26 # From IPython
27 from IPython.core.application import BaseIPythonApplication, base_aliases, base_flags
28 from IPython.config import catch_config_error, Configurable
29 from IPython.utils.traitlets import (
30 Unicode, List, Instance, DottedObjectName, Type, CaselessStrEnum,
31 )
32 from IPython.utils.importstring import import_item
33 from IPython.utils.text import dedent
34
35 from .exporters.export import get_export_names, exporter_map
36 from IPython.nbconvert import exporters, transformers, writers, post_processors
37 from .utils.base import NbConvertBase
38 from .utils.exceptions import ConversionException
39
40 #-----------------------------------------------------------------------------
41 #Classes and functions
42 #-----------------------------------------------------------------------------
43
44 class DottedOrNone(DottedObjectName):
45 """
46 A string holding a valid dotted object name in Python, such as A.b3._c
47 Also allows for None type."""
48
49 default_value = u''
50
51 def validate(self, obj, value):
52 if value is not None and len(value) > 0:
53 return super(DottedOrNone, self).validate(obj, value)
54 else:
55 return value
56
57 nbconvert_aliases = {}
58 nbconvert_aliases.update(base_aliases)
59 nbconvert_aliases.update({
60 'to' : 'NbConvertApp.export_format',
61 'template' : 'Exporter.template_file',
62 'notebooks' : 'NbConvertApp.notebooks',
63 'writer' : 'NbConvertApp.writer_class',
64 'post': 'NbConvertApp.post_processor_class',
65 'output': 'NbConvertApp.output_base'
66 })
67
68 nbconvert_flags = {}
69 nbconvert_flags.update(base_flags)
70 nbconvert_flags.update({
71 'stdout' : (
72 {'NbConvertApp' : {'writer_class' : "StdoutWriter"}},
73 "Write notebook output to stdout instead of files."
74 )
75 })
76
77
78 class NbConvertApp(BaseIPythonApplication):
79 """Application used to convert to and from notebook file type (*.ipynb)"""
80
81 name = 'ipython-nbconvert'
82 aliases = nbconvert_aliases
83 flags = nbconvert_flags
84
85 def _log_level_default(self):
86 return logging.INFO
87
88 def _classes_default(self):
89 classes = [NbConvertBase]
90 for pkg in (exporters, transformers, writers):
91 for name in dir(pkg):
92 cls = getattr(pkg, name)
93 if isinstance(cls, type) and issubclass(cls, Configurable):
94 classes.append(cls)
95 return classes
96
97 description = Unicode(
98 u"""This application is used to convert notebook files (*.ipynb)
99 to various other formats.
100
101 WARNING: THE COMMANDLINE INTERFACE MAY CHANGE IN FUTURE RELEASES.""")
102
103 output_base = Unicode('', config=True, help='''overwrite base name use for output files.
104 can only be use when converting one notebook at a time.
105 ''')
106
107 examples = Unicode(u"""
108 The simplest way to use nbconvert is
109
110 > ipython nbconvert mynotebook.ipynb
111
112 which will convert mynotebook.ipynb to the default format (probably HTML).
113
114 You can specify the export format with `--to`.
115 Options include {0}
116
117 > ipython nbconvert --to latex mynotebook.ipnynb
118
119 Both HTML and LaTeX support multiple output templates. LaTeX includes
120 'basic', 'book', and 'article'. HTML includes 'basic' and 'full'. You
121 can specify the flavor of the format used.
122
123 > ipython nbconvert --to html --template basic mynotebook.ipynb
124
125 You can also pipe the output to stdout, rather than a file
126
127 > ipython nbconvert mynotebook.ipynb --stdout
128
129 A post-processor can be used to compile a PDF
130
131 > ipython nbconvert mynotebook.ipynb --to latex --post PDF
132
133 You can get (and serve) a Reveal.js-powered slideshow
134
135 > ipython nbconvert myslides.ipynb --to slides --post serve
136
137 Multiple notebooks can be given at the command line in a couple of
138 different ways:
139
140 > ipython nbconvert notebook*.ipynb
141 > ipython nbconvert notebook1.ipynb notebook2.ipynb
142
143 or you can specify the notebooks list in a config file, containing::
144
145 c.NbConvertApp.notebooks = ["my_notebook.ipynb"]
146
147 > ipython nbconvert --config mycfg.py
148 """.format(get_export_names()))
149
150 # Writer specific variables
151 writer = Instance('IPython.nbconvert.writers.base.WriterBase',
152 help="""Instance of the writer class used to write the
153 results of the conversion.""")
154 writer_class = DottedObjectName('FilesWriter', config=True,
155 help="""Writer class used to write the
156 results of the conversion""")
157 writer_aliases = {'FilesWriter': 'IPython.nbconvert.writers.files.FilesWriter',
158 'DebugWriter': 'IPython.nbconvert.writers.debug.DebugWriter',
159 'StdoutWriter': 'IPython.nbconvert.writers.stdout.StdoutWriter'}
160 writer_factory = Type()
161
162 def _writer_class_changed(self, name, old, new):
163 if new in self.writer_aliases:
164 new = self.writer_aliases[new]
165 self.writer_factory = import_item(new)
166
167 # Post-processor specific variables
168 post_processor = Instance('IPython.nbconvert.post_processors.base.PostProcessorBase',
169 help="""Instance of the PostProcessor class used to write the
170 results of the conversion.""")
171
172 post_processor_class = DottedOrNone(config=True,
173 help="""PostProcessor class used to write the
174 results of the conversion""")
175 post_processor_aliases = {'PDF': 'IPython.nbconvert.post_processors.pdf.PDFPostProcessor',
176 'serve': 'IPython.nbconvert.post_processors.serve.ServePostProcessor'}
177 post_processor_factory = Type()
178
179 def _post_processor_class_changed(self, name, old, new):
180 if new in self.post_processor_aliases:
181 new = self.post_processor_aliases[new]
182 if new:
183 self.post_processor_factory = import_item(new)
184
185
186 # Other configurable variables
187 export_format = CaselessStrEnum(get_export_names(),
188 default_value="html",
189 config=True,
190 help="""The export format to be used."""
191 )
192
193 notebooks = List([], config=True, help="""List of notebooks to convert.
194 Wildcards are supported.
195 Filenames passed positionally will be added to the list.
196 """)
197
198 @catch_config_error
199 def initialize(self, argv=None):
200 super(NbConvertApp, self).initialize(argv)
201 self.init_syspath()
202 self.init_notebooks()
203 self.init_writer()
204 self.init_post_processor()
205
206
207
208 def init_syspath(self):
209 """
210 Add the cwd to the sys.path ($PYTHONPATH)
211 """
212 sys.path.insert(0, os.getcwd())
213
214
215 def init_notebooks(self):
216 """Construct the list of notebooks.
217 If notebooks are passed on the command-line,
218 they override notebooks specified in config files.
219 Glob each notebook to replace notebook patterns with filenames.
220 """
221
222 # Specifying notebooks on the command-line overrides (rather than adds)
223 # the notebook list
224 if self.extra_args:
225 patterns = self.extra_args
226 else:
227 patterns = self.notebooks
228
229 # Use glob to replace all the notebook patterns with filenames.
230 filenames = []
231 for pattern in patterns:
232
233 # Use glob to find matching filenames. Allow the user to convert
234 # notebooks without having to type the extension.
235 globbed_files = glob.glob(pattern)
236 globbed_files.extend(glob.glob(pattern + '.ipynb'))
237 if not globbed_files:
238 self.log.warn("pattern %r matched no files", pattern)
239
240 for filename in globbed_files:
241 if not filename in filenames:
242 filenames.append(filename)
243 self.notebooks = filenames
244
245 def init_writer(self):
246 """
247 Initialize the writer (which is stateless)
248 """
249 self._writer_class_changed(None, self.writer_class, self.writer_class)
250 self.writer = self.writer_factory(parent=self)
251
252 def init_post_processor(self):
253 """
254 Initialize the post_processor (which is stateless)
255 """
256 self._post_processor_class_changed(None, self.post_processor_class,
257 self.post_processor_class)
258 if self.post_processor_factory:
259 self.post_processor = self.post_processor_factory(parent=self)
260
261 def start(self):
262 """
263 Ran after initialization completed
264 """
265 super(NbConvertApp, self).start()
266 self.convert_notebooks()
267
268 def convert_notebooks(self):
269 """
270 Convert the notebooks in the self.notebook traitlet
271 """
272 # Export each notebook
273 conversion_success = 0
274
275 if self.output_base != '' and len(self.notebooks) > 1:
276 self.log.error(
277 """UsageError: --output flag or `NbConvertApp.output_base` config option
278 cannot be used when converting multiple notebooks.
279 """)
280 self.exit(1)
281
282 exporter = exporter_map[self.export_format](config=self.config)
283
284 for notebook_filename in self.notebooks:
285 self.log.info("Converting notebook %s to %s", notebook_filename, self.export_format)
286
287 # Get a unique key for the notebook and set it in the resources object.
288 basename = os.path.basename(notebook_filename)
289 notebook_name = basename[:basename.rfind('.')]
290 if self.output_base:
291 notebook_name = self.output_base
292 resources = {}
293 resources['unique_key'] = notebook_name
294 resources['output_files_dir'] = '%s_files' % notebook_name
295 self.log.info("Support files will be in %s", os.path.join(resources['output_files_dir'], ''))
296
297 # Try to export
298 try:
299 output, resources = exporter.from_filename(notebook_filename, resources=resources)
300 except ConversionException as e:
301 self.log.error("Error while converting '%s'", notebook_filename,
302 exc_info=True)
303 self.exit(1)
304 else:
305 write_resultes = self.writer.write(output, resources, notebook_name=notebook_name)
306
307 #Post-process if post processor has been defined.
308 if hasattr(self, 'post_processor') and self.post_processor:
309 self.post_processor(write_resultes)
310 conversion_success += 1
311
312 # If nothing was converted successfully, help the user.
313 if conversion_success == 0:
314 self.print_help()
315 sys.exit(-1)
316
317 #-----------------------------------------------------------------------------
318 # Main entry point
319 #-----------------------------------------------------------------------------
320
321 launch_new_instance = NbConvertApp.launch_instance
322
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/IPython/nbconvert/nbconvertapp.py b/IPython/nbconvert/nbconvertapp.py
--- a/IPython/nbconvert/nbconvertapp.py
+++ b/IPython/nbconvert/nbconvertapp.py
@@ -154,14 +154,14 @@
writer_class = DottedObjectName('FilesWriter', config=True,
help="""Writer class used to write the
results of the conversion""")
- writer_aliases = {'FilesWriter': 'IPython.nbconvert.writers.files.FilesWriter',
- 'DebugWriter': 'IPython.nbconvert.writers.debug.DebugWriter',
- 'StdoutWriter': 'IPython.nbconvert.writers.stdout.StdoutWriter'}
+ writer_aliases = {'fileswriter': 'IPython.nbconvert.writers.files.FilesWriter',
+ 'debugwriter': 'IPython.nbconvert.writers.debug.DebugWriter',
+ 'stdoutwriter': 'IPython.nbconvert.writers.stdout.StdoutWriter'}
writer_factory = Type()
def _writer_class_changed(self, name, old, new):
- if new in self.writer_aliases:
- new = self.writer_aliases[new]
+ if new.lower() in self.writer_aliases:
+ new = self.writer_aliases[new.lower()]
self.writer_factory = import_item(new)
# Post-processor specific variables
@@ -172,13 +172,13 @@
post_processor_class = DottedOrNone(config=True,
help="""PostProcessor class used to write the
results of the conversion""")
- post_processor_aliases = {'PDF': 'IPython.nbconvert.post_processors.pdf.PDFPostProcessor',
+ post_processor_aliases = {'pdf': 'IPython.nbconvert.post_processors.pdf.PDFPostProcessor',
'serve': 'IPython.nbconvert.post_processors.serve.ServePostProcessor'}
post_processor_factory = Type()
def _post_processor_class_changed(self, name, old, new):
- if new in self.post_processor_aliases:
- new = self.post_processor_aliases[new]
+ if new.lower() in self.post_processor_aliases:
+ new = self.post_processor_aliases[new.lower()]
if new:
self.post_processor_factory = import_item(new)
|
{"golden_diff": "diff --git a/IPython/nbconvert/nbconvertapp.py b/IPython/nbconvert/nbconvertapp.py\n--- a/IPython/nbconvert/nbconvertapp.py\n+++ b/IPython/nbconvert/nbconvertapp.py\n@@ -154,14 +154,14 @@\n writer_class = DottedObjectName('FilesWriter', config=True, \n help=\"\"\"Writer class used to write the \n results of the conversion\"\"\")\n- writer_aliases = {'FilesWriter': 'IPython.nbconvert.writers.files.FilesWriter',\n- 'DebugWriter': 'IPython.nbconvert.writers.debug.DebugWriter',\n- 'StdoutWriter': 'IPython.nbconvert.writers.stdout.StdoutWriter'}\n+ writer_aliases = {'fileswriter': 'IPython.nbconvert.writers.files.FilesWriter',\n+ 'debugwriter': 'IPython.nbconvert.writers.debug.DebugWriter',\n+ 'stdoutwriter': 'IPython.nbconvert.writers.stdout.StdoutWriter'}\n writer_factory = Type()\n \n def _writer_class_changed(self, name, old, new):\n- if new in self.writer_aliases:\n- new = self.writer_aliases[new]\n+ if new.lower() in self.writer_aliases:\n+ new = self.writer_aliases[new.lower()]\n self.writer_factory = import_item(new)\n \n # Post-processor specific variables\n@@ -172,13 +172,13 @@\n post_processor_class = DottedOrNone(config=True, \n help=\"\"\"PostProcessor class used to write the \n results of the conversion\"\"\")\n- post_processor_aliases = {'PDF': 'IPython.nbconvert.post_processors.pdf.PDFPostProcessor',\n+ post_processor_aliases = {'pdf': 'IPython.nbconvert.post_processors.pdf.PDFPostProcessor',\n 'serve': 'IPython.nbconvert.post_processors.serve.ServePostProcessor'}\n post_processor_factory = Type()\n \n def _post_processor_class_changed(self, name, old, new):\n- if new in self.post_processor_aliases:\n- new = self.post_processor_aliases[new]\n+ if new.lower() in self.post_processor_aliases:\n+ new = self.post_processor_aliases[new.lower()]\n if new:\n self.post_processor_factory = import_item(new)\n", "issue": "The PDF option for `--post` should work with lowercase \nRight now to get a PDF out of nbconvert you have to to\n\n```\nipython nbconvert --to latex --post PDF foo.ipynb\n```\n\nMany users will try `pdf` instead and the error message is very confusing. 
We should just make it work with lowercase.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"NBConvert is a utility for conversion of .ipynb files.\n\nCommand-line interface for the NbConvert conversion utility.\n\"\"\"\n#-----------------------------------------------------------------------------\n#Copyright (c) 2013, the IPython Development Team.\n#\n#Distributed under the terms of the Modified BSD License.\n#\n#The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n#Imports\n#-----------------------------------------------------------------------------\n\n# Stdlib imports\nfrom __future__ import print_function\n\nimport logging\nimport sys\nimport os\nimport glob\n\n# From IPython\nfrom IPython.core.application import BaseIPythonApplication, base_aliases, base_flags\nfrom IPython.config import catch_config_error, Configurable\nfrom IPython.utils.traitlets import (\n Unicode, List, Instance, DottedObjectName, Type, CaselessStrEnum,\n)\nfrom IPython.utils.importstring import import_item\nfrom IPython.utils.text import dedent\n\nfrom .exporters.export import get_export_names, exporter_map\nfrom IPython.nbconvert import exporters, transformers, writers, post_processors\nfrom .utils.base import NbConvertBase\nfrom .utils.exceptions import ConversionException\n\n#-----------------------------------------------------------------------------\n#Classes and functions\n#-----------------------------------------------------------------------------\n\nclass DottedOrNone(DottedObjectName):\n \"\"\"\n A string holding a valid dotted object name in Python, such as A.b3._c\n Also allows for None type.\"\"\"\n \n default_value = u''\n\n def validate(self, obj, value):\n if value is not None and len(value) > 0:\n return super(DottedOrNone, self).validate(obj, value)\n else:\n return value\n \nnbconvert_aliases = {}\nnbconvert_aliases.update(base_aliases)\nnbconvert_aliases.update({\n 'to' : 'NbConvertApp.export_format',\n 'template' : 'Exporter.template_file',\n 'notebooks' : 'NbConvertApp.notebooks',\n 'writer' : 'NbConvertApp.writer_class',\n 'post': 'NbConvertApp.post_processor_class',\n 'output': 'NbConvertApp.output_base'\n})\n\nnbconvert_flags = {}\nnbconvert_flags.update(base_flags)\nnbconvert_flags.update({\n 'stdout' : (\n {'NbConvertApp' : {'writer_class' : \"StdoutWriter\"}},\n \"Write notebook output to stdout instead of files.\"\n )\n})\n\n\nclass NbConvertApp(BaseIPythonApplication):\n \"\"\"Application used to convert to and from notebook file type (*.ipynb)\"\"\"\n\n name = 'ipython-nbconvert'\n aliases = nbconvert_aliases\n flags = nbconvert_flags\n \n def _log_level_default(self):\n return logging.INFO\n \n def _classes_default(self):\n classes = [NbConvertBase]\n for pkg in (exporters, transformers, writers):\n for name in dir(pkg):\n cls = getattr(pkg, name)\n if isinstance(cls, type) and issubclass(cls, Configurable):\n classes.append(cls)\n return classes\n\n description = Unicode(\n u\"\"\"This application is used to convert notebook files (*.ipynb)\n to various other formats.\n\n WARNING: THE COMMANDLINE INTERFACE MAY CHANGE IN FUTURE RELEASES.\"\"\")\n\n output_base = Unicode('', config=True, help='''overwrite base name use for output files.\n can only be use when converting one notebook at a time.\n ''')\n\n examples = Unicode(u\"\"\"\n The simplest way to use nbconvert is\n \n > 
ipython nbconvert mynotebook.ipynb\n \n which will convert mynotebook.ipynb to the default format (probably HTML).\n \n You can specify the export format with `--to`.\n Options include {0}\n \n > ipython nbconvert --to latex mynotebook.ipnynb\n\n Both HTML and LaTeX support multiple output templates. LaTeX includes\n 'basic', 'book', and 'article'. HTML includes 'basic' and 'full'. You \n can specify the flavor of the format used.\n\n > ipython nbconvert --to html --template basic mynotebook.ipynb\n \n You can also pipe the output to stdout, rather than a file\n \n > ipython nbconvert mynotebook.ipynb --stdout\n\n A post-processor can be used to compile a PDF\n\n > ipython nbconvert mynotebook.ipynb --to latex --post PDF\n \n You can get (and serve) a Reveal.js-powered slideshow\n \n > ipython nbconvert myslides.ipynb --to slides --post serve\n \n Multiple notebooks can be given at the command line in a couple of \n different ways:\n \n > ipython nbconvert notebook*.ipynb\n > ipython nbconvert notebook1.ipynb notebook2.ipynb\n \n or you can specify the notebooks list in a config file, containing::\n \n c.NbConvertApp.notebooks = [\"my_notebook.ipynb\"]\n \n > ipython nbconvert --config mycfg.py\n \"\"\".format(get_export_names()))\n\n # Writer specific variables\n writer = Instance('IPython.nbconvert.writers.base.WriterBase', \n help=\"\"\"Instance of the writer class used to write the \n results of the conversion.\"\"\")\n writer_class = DottedObjectName('FilesWriter', config=True, \n help=\"\"\"Writer class used to write the \n results of the conversion\"\"\")\n writer_aliases = {'FilesWriter': 'IPython.nbconvert.writers.files.FilesWriter',\n 'DebugWriter': 'IPython.nbconvert.writers.debug.DebugWriter',\n 'StdoutWriter': 'IPython.nbconvert.writers.stdout.StdoutWriter'}\n writer_factory = Type()\n\n def _writer_class_changed(self, name, old, new):\n if new in self.writer_aliases:\n new = self.writer_aliases[new]\n self.writer_factory = import_item(new)\n\n # Post-processor specific variables\n post_processor = Instance('IPython.nbconvert.post_processors.base.PostProcessorBase', \n help=\"\"\"Instance of the PostProcessor class used to write the \n results of the conversion.\"\"\")\n\n post_processor_class = DottedOrNone(config=True, \n help=\"\"\"PostProcessor class used to write the \n results of the conversion\"\"\")\n post_processor_aliases = {'PDF': 'IPython.nbconvert.post_processors.pdf.PDFPostProcessor',\n 'serve': 'IPython.nbconvert.post_processors.serve.ServePostProcessor'}\n post_processor_factory = Type()\n\n def _post_processor_class_changed(self, name, old, new):\n if new in self.post_processor_aliases:\n new = self.post_processor_aliases[new]\n if new:\n self.post_processor_factory = import_item(new)\n\n\n # Other configurable variables\n export_format = CaselessStrEnum(get_export_names(),\n default_value=\"html\",\n config=True,\n help=\"\"\"The export format to be used.\"\"\"\n )\n\n notebooks = List([], config=True, help=\"\"\"List of notebooks to convert.\n Wildcards are supported.\n Filenames passed positionally will be added to the list.\n \"\"\")\n\n @catch_config_error\n def initialize(self, argv=None):\n super(NbConvertApp, self).initialize(argv)\n self.init_syspath()\n self.init_notebooks()\n self.init_writer()\n self.init_post_processor()\n\n\n\n def init_syspath(self):\n \"\"\"\n Add the cwd to the sys.path ($PYTHONPATH)\n \"\"\"\n sys.path.insert(0, os.getcwd())\n \n\n def init_notebooks(self):\n \"\"\"Construct the list of notebooks.\n If notebooks are passed on 
the command-line,\n they override notebooks specified in config files.\n Glob each notebook to replace notebook patterns with filenames.\n \"\"\"\n\n # Specifying notebooks on the command-line overrides (rather than adds)\n # the notebook list\n if self.extra_args:\n patterns = self.extra_args\n else:\n patterns = self.notebooks\n\n # Use glob to replace all the notebook patterns with filenames.\n filenames = []\n for pattern in patterns:\n \n # Use glob to find matching filenames. Allow the user to convert \n # notebooks without having to type the extension.\n globbed_files = glob.glob(pattern)\n globbed_files.extend(glob.glob(pattern + '.ipynb'))\n if not globbed_files:\n self.log.warn(\"pattern %r matched no files\", pattern)\n\n for filename in globbed_files:\n if not filename in filenames:\n filenames.append(filename)\n self.notebooks = filenames\n\n def init_writer(self):\n \"\"\"\n Initialize the writer (which is stateless)\n \"\"\"\n self._writer_class_changed(None, self.writer_class, self.writer_class)\n self.writer = self.writer_factory(parent=self)\n\n def init_post_processor(self):\n \"\"\"\n Initialize the post_processor (which is stateless)\n \"\"\"\n self._post_processor_class_changed(None, self.post_processor_class, \n self.post_processor_class)\n if self.post_processor_factory:\n self.post_processor = self.post_processor_factory(parent=self)\n\n def start(self):\n \"\"\"\n Ran after initialization completed\n \"\"\"\n super(NbConvertApp, self).start()\n self.convert_notebooks()\n\n def convert_notebooks(self):\n \"\"\"\n Convert the notebooks in the self.notebook traitlet\n \"\"\"\n # Export each notebook\n conversion_success = 0\n\n if self.output_base != '' and len(self.notebooks) > 1:\n self.log.error(\n \"\"\"UsageError: --output flag or `NbConvertApp.output_base` config option\n cannot be used when converting multiple notebooks.\n \"\"\")\n self.exit(1)\n \n exporter = exporter_map[self.export_format](config=self.config)\n\n for notebook_filename in self.notebooks:\n self.log.info(\"Converting notebook %s to %s\", notebook_filename, self.export_format)\n\n # Get a unique key for the notebook and set it in the resources object.\n basename = os.path.basename(notebook_filename)\n notebook_name = basename[:basename.rfind('.')]\n if self.output_base:\n notebook_name = self.output_base\n resources = {}\n resources['unique_key'] = notebook_name\n resources['output_files_dir'] = '%s_files' % notebook_name\n self.log.info(\"Support files will be in %s\", os.path.join(resources['output_files_dir'], ''))\n\n # Try to export\n try:\n output, resources = exporter.from_filename(notebook_filename, resources=resources)\n except ConversionException as e:\n self.log.error(\"Error while converting '%s'\", notebook_filename,\n exc_info=True)\n self.exit(1)\n else:\n write_resultes = self.writer.write(output, resources, notebook_name=notebook_name)\n\n #Post-process if post processor has been defined.\n if hasattr(self, 'post_processor') and self.post_processor:\n self.post_processor(write_resultes)\n conversion_success += 1\n\n # If nothing was converted successfully, help the user.\n if conversion_success == 0:\n self.print_help()\n sys.exit(-1)\n \n#-----------------------------------------------------------------------------\n# Main entry point\n#-----------------------------------------------------------------------------\n\nlaunch_new_instance = NbConvertApp.launch_instance\n", "path": "IPython/nbconvert/nbconvertapp.py"}], "after_files": [{"content": "#!/usr/bin/env 
python\n\"\"\"NBConvert is a utility for conversion of .ipynb files.\n\nCommand-line interface for the NbConvert conversion utility.\n\"\"\"\n#-----------------------------------------------------------------------------\n#Copyright (c) 2013, the IPython Development Team.\n#\n#Distributed under the terms of the Modified BSD License.\n#\n#The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n#Imports\n#-----------------------------------------------------------------------------\n\n# Stdlib imports\nfrom __future__ import print_function\n\nimport logging\nimport sys\nimport os\nimport glob\n\n# From IPython\nfrom IPython.core.application import BaseIPythonApplication, base_aliases, base_flags\nfrom IPython.config import catch_config_error, Configurable\nfrom IPython.utils.traitlets import (\n Unicode, List, Instance, DottedObjectName, Type, CaselessStrEnum,\n)\nfrom IPython.utils.importstring import import_item\nfrom IPython.utils.text import dedent\n\nfrom .exporters.export import get_export_names, exporter_map\nfrom IPython.nbconvert import exporters, transformers, writers, post_processors\nfrom .utils.base import NbConvertBase\nfrom .utils.exceptions import ConversionException\n\n#-----------------------------------------------------------------------------\n#Classes and functions\n#-----------------------------------------------------------------------------\n\nclass DottedOrNone(DottedObjectName):\n \"\"\"\n A string holding a valid dotted object name in Python, such as A.b3._c\n Also allows for None type.\"\"\"\n \n default_value = u''\n\n def validate(self, obj, value):\n if value is not None and len(value) > 0:\n return super(DottedOrNone, self).validate(obj, value)\n else:\n return value\n \nnbconvert_aliases = {}\nnbconvert_aliases.update(base_aliases)\nnbconvert_aliases.update({\n 'to' : 'NbConvertApp.export_format',\n 'template' : 'Exporter.template_file',\n 'notebooks' : 'NbConvertApp.notebooks',\n 'writer' : 'NbConvertApp.writer_class',\n 'post': 'NbConvertApp.post_processor_class',\n 'output': 'NbConvertApp.output_base'\n})\n\nnbconvert_flags = {}\nnbconvert_flags.update(base_flags)\nnbconvert_flags.update({\n 'stdout' : (\n {'NbConvertApp' : {'writer_class' : \"StdoutWriter\"}},\n \"Write notebook output to stdout instead of files.\"\n )\n})\n\n\nclass NbConvertApp(BaseIPythonApplication):\n \"\"\"Application used to convert to and from notebook file type (*.ipynb)\"\"\"\n\n name = 'ipython-nbconvert'\n aliases = nbconvert_aliases\n flags = nbconvert_flags\n \n def _log_level_default(self):\n return logging.INFO\n \n def _classes_default(self):\n classes = [NbConvertBase]\n for pkg in (exporters, transformers, writers):\n for name in dir(pkg):\n cls = getattr(pkg, name)\n if isinstance(cls, type) and issubclass(cls, Configurable):\n classes.append(cls)\n return classes\n\n description = Unicode(\n u\"\"\"This application is used to convert notebook files (*.ipynb)\n to various other formats.\n\n WARNING: THE COMMANDLINE INTERFACE MAY CHANGE IN FUTURE RELEASES.\"\"\")\n\n output_base = Unicode('', config=True, help='''overwrite base name use for output files.\n can only be use when converting one notebook at a time.\n ''')\n\n examples = Unicode(u\"\"\"\n The simplest way to use nbconvert is\n \n > ipython nbconvert mynotebook.ipynb\n \n which will convert mynotebook.ipynb to the default 
format (probably HTML).\n \n You can specify the export format with `--to`.\n Options include {0}\n \n > ipython nbconvert --to latex mynotebook.ipnynb\n\n Both HTML and LaTeX support multiple output templates. LaTeX includes\n 'basic', 'book', and 'article'. HTML includes 'basic' and 'full'. You \n can specify the flavor of the format used.\n\n > ipython nbconvert --to html --template basic mynotebook.ipynb\n \n You can also pipe the output to stdout, rather than a file\n \n > ipython nbconvert mynotebook.ipynb --stdout\n\n A post-processor can be used to compile a PDF\n\n > ipython nbconvert mynotebook.ipynb --to latex --post PDF\n \n You can get (and serve) a Reveal.js-powered slideshow\n \n > ipython nbconvert myslides.ipynb --to slides --post serve\n \n Multiple notebooks can be given at the command line in a couple of \n different ways:\n \n > ipython nbconvert notebook*.ipynb\n > ipython nbconvert notebook1.ipynb notebook2.ipynb\n \n or you can specify the notebooks list in a config file, containing::\n \n c.NbConvertApp.notebooks = [\"my_notebook.ipynb\"]\n \n > ipython nbconvert --config mycfg.py\n \"\"\".format(get_export_names()))\n\n # Writer specific variables\n writer = Instance('IPython.nbconvert.writers.base.WriterBase', \n help=\"\"\"Instance of the writer class used to write the \n results of the conversion.\"\"\")\n writer_class = DottedObjectName('FilesWriter', config=True, \n help=\"\"\"Writer class used to write the \n results of the conversion\"\"\")\n writer_aliases = {'fileswriter': 'IPython.nbconvert.writers.files.FilesWriter',\n 'debugwriter': 'IPython.nbconvert.writers.debug.DebugWriter',\n 'stdoutwriter': 'IPython.nbconvert.writers.stdout.StdoutWriter'}\n writer_factory = Type()\n\n def _writer_class_changed(self, name, old, new):\n if new.lower() in self.writer_aliases:\n new = self.writer_aliases[new.lower()]\n self.writer_factory = import_item(new)\n\n # Post-processor specific variables\n post_processor = Instance('IPython.nbconvert.post_processors.base.PostProcessorBase', \n help=\"\"\"Instance of the PostProcessor class used to write the \n results of the conversion.\"\"\")\n\n post_processor_class = DottedOrNone(config=True, \n help=\"\"\"PostProcessor class used to write the \n results of the conversion\"\"\")\n post_processor_aliases = {'pdf': 'IPython.nbconvert.post_processors.pdf.PDFPostProcessor',\n 'serve': 'IPython.nbconvert.post_processors.serve.ServePostProcessor'}\n post_processor_factory = Type()\n\n def _post_processor_class_changed(self, name, old, new):\n if new.lower() in self.post_processor_aliases:\n new = self.post_processor_aliases[new.lower()]\n if new:\n self.post_processor_factory = import_item(new)\n\n\n # Other configurable variables\n export_format = CaselessStrEnum(get_export_names(),\n default_value=\"html\",\n config=True,\n help=\"\"\"The export format to be used.\"\"\"\n )\n\n notebooks = List([], config=True, help=\"\"\"List of notebooks to convert.\n Wildcards are supported.\n Filenames passed positionally will be added to the list.\n \"\"\")\n\n @catch_config_error\n def initialize(self, argv=None):\n super(NbConvertApp, self).initialize(argv)\n self.init_syspath()\n self.init_notebooks()\n self.init_writer()\n self.init_post_processor()\n\n\n\n def init_syspath(self):\n \"\"\"\n Add the cwd to the sys.path ($PYTHONPATH)\n \"\"\"\n sys.path.insert(0, os.getcwd())\n \n\n def init_notebooks(self):\n \"\"\"Construct the list of notebooks.\n If notebooks are passed on the command-line,\n they override notebooks specified in 
config files.\n Glob each notebook to replace notebook patterns with filenames.\n \"\"\"\n\n # Specifying notebooks on the command-line overrides (rather than adds)\n # the notebook list\n if self.extra_args:\n patterns = self.extra_args\n else:\n patterns = self.notebooks\n\n # Use glob to replace all the notebook patterns with filenames.\n filenames = []\n for pattern in patterns:\n \n # Use glob to find matching filenames. Allow the user to convert \n # notebooks without having to type the extension.\n globbed_files = glob.glob(pattern)\n globbed_files.extend(glob.glob(pattern + '.ipynb'))\n if not globbed_files:\n self.log.warn(\"pattern %r matched no files\", pattern)\n\n for filename in globbed_files:\n if not filename in filenames:\n filenames.append(filename)\n self.notebooks = filenames\n\n def init_writer(self):\n \"\"\"\n Initialize the writer (which is stateless)\n \"\"\"\n self._writer_class_changed(None, self.writer_class, self.writer_class)\n self.writer = self.writer_factory(parent=self)\n\n def init_post_processor(self):\n \"\"\"\n Initialize the post_processor (which is stateless)\n \"\"\"\n self._post_processor_class_changed(None, self.post_processor_class, \n self.post_processor_class)\n if self.post_processor_factory:\n self.post_processor = self.post_processor_factory(parent=self)\n\n def start(self):\n \"\"\"\n Ran after initialization completed\n \"\"\"\n super(NbConvertApp, self).start()\n self.convert_notebooks()\n\n def convert_notebooks(self):\n \"\"\"\n Convert the notebooks in the self.notebook traitlet\n \"\"\"\n # Export each notebook\n conversion_success = 0\n\n if self.output_base != '' and len(self.notebooks) > 1:\n self.log.error(\n \"\"\"UsageError: --output flag or `NbConvertApp.output_base` config option\n cannot be used when converting multiple notebooks.\n \"\"\")\n self.exit(1)\n \n exporter = exporter_map[self.export_format](config=self.config)\n\n for notebook_filename in self.notebooks:\n self.log.info(\"Converting notebook %s to %s\", notebook_filename, self.export_format)\n\n # Get a unique key for the notebook and set it in the resources object.\n basename = os.path.basename(notebook_filename)\n notebook_name = basename[:basename.rfind('.')]\n if self.output_base:\n notebook_name = self.output_base\n resources = {}\n resources['unique_key'] = notebook_name\n resources['output_files_dir'] = '%s_files' % notebook_name\n self.log.info(\"Support files will be in %s\", os.path.join(resources['output_files_dir'], ''))\n\n # Try to export\n try:\n output, resources = exporter.from_filename(notebook_filename, resources=resources)\n except ConversionException as e:\n self.log.error(\"Error while converting '%s'\", notebook_filename,\n exc_info=True)\n self.exit(1)\n else:\n write_resultes = self.writer.write(output, resources, notebook_name=notebook_name)\n\n #Post-process if post processor has been defined.\n if hasattr(self, 'post_processor') and self.post_processor:\n self.post_processor(write_resultes)\n conversion_success += 1\n\n # If nothing was converted successfully, help the user.\n if conversion_success == 0:\n self.print_help()\n sys.exit(-1)\n \n#-----------------------------------------------------------------------------\n# Main entry point\n#-----------------------------------------------------------------------------\n\nlaunch_new_instance = NbConvertApp.launch_instance\n", "path": "IPython/nbconvert/nbconvertapp.py"}]}
| 3,572 | 474 |
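The patch visible in the record above lowercases the writer and post-processor alias keys and matches on `new.lower()`, which is what makes `--writer FilesWriter` and `--writer fileswriter` resolve to the same class. A minimal standalone sketch of that lookup pattern (illustrative dictionary and function names, not the actual IPython.nbconvert code):

```python
# Case-insensitive alias resolution, mirroring the lowercased-key lookup in the patch.
ALIASES = {
    "fileswriter": "IPython.nbconvert.writers.files.FilesWriter",
    "debugwriter": "IPython.nbconvert.writers.debug.DebugWriter",
    "stdoutwriter": "IPython.nbconvert.writers.stdout.StdoutWriter",
}

def resolve_writer(name: str) -> str:
    # Unknown names pass through unchanged so fully dotted paths still work.
    return ALIASES.get(name.lower(), name)

assert resolve_writer("FilesWriter") == "IPython.nbconvert.writers.files.FilesWriter"
assert resolve_writer("my.pkg.CustomWriter") == "my.pkg.CustomWriter"
```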
gh_patches_debug_16870 | rasdani/github-patches | git_diff | sunpy__sunpy-4088 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OSError: Failed to load return from the HEKClient.
When trying to search for the flares on 2014/10/24 to /25, I get a "Failed to Load Return" error.
Sunpy Version: 1.1.3
Here's a minimal reproducible example:
```
from sunpy.net import hek
client = hek.HEKClient()
tstart = '2014/10/24 20:50'
tend = '2014/10/25 00:14'
event_type = 'FL'
client.search(hek.attrs.Time(tstart,tend),hek.attrs.EventType(event_type))
```
```python
Traceback (most recent call last):
File "/home/user/anaconda3/envs/pytorch/lib/python3.8/site-packages/sunpy/net/hek/hek.py", line 69, in _download
result = json.load(fd)
File "/home/user/anaconda3/envs/pytorch/lib/python3.8/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/home/user/anaconda3/envs/pytorch/lib/python3.8/json/__init__.py", line 343, in loads
s = s.decode(detect_encoding(s), 'surrogatepass')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc5 in position 33279: invalid continuation byte
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/user/anaconda3/envs/pytorch/lib/python3.8/site-packages/sunpy/net/hek/hek.py", line 99, in search
return self._download(ndata[0])
File "/home/user/anaconda3/envs/pytorch/lib/python3.8/site-packages/sunpy/net/hek/hek.py", line 71, in _download
raise IOError("Failed to load return from the HEKClient.") from e
OSError: Failed to load return from the HEKClient.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/net/hek/hek.py`
Content:
```
1 """
2 Facilities to interface with the Heliophysics Events Knowledgebase.
3 """
4
5 import json
6
7 import urllib
8 from itertools import chain
9
10 from astropy.table import Table, Row, Column
11 from astropy.time import Time
12
13 from sunpy.net import attr
14 from sunpy.util import dict_keys_same, unique
15 from sunpy.net.hek import attrs
16 import sunpy.net._attrs as core_attrs
17 from sunpy.util.xml import xml_to_dict
18
19
20 __all__ = ['HEKClient']
21
22 DEFAULT_URL = 'https://www.lmsal.com/hek/her?'
23
24
25 def _freeze(obj):
26 """ Create hashable representation of result dict. """
27 if isinstance(obj, dict):
28 return tuple((k, _freeze(v)) for k, v in obj.items())
29 if isinstance(obj, list):
30 return tuple(_freeze(elem) for elem in obj)
31 return obj
32
33
34 class HEKClient:
35 """ Client to interact with the Heliophysics Event Knowledgebase (HEK).
36 The HEK stores solar feature and event data generated by algorithms and
37 human observers."""
38 # FIXME: Expose fields in .attrs with the right types
39 # that is, not all StringParamWrapper!
40
41 default = {
42 'cosec': '2',
43 'cmd': 'search',
44 'type': 'column',
45 'event_type': '**',
46 }
47 # Default to full disk.
48 attrs.walker.apply(attrs.SpatialRegion(), {}, default)
49
50 def __init__(self, url=DEFAULT_URL):
51 self.url = url
52
53 def _download(self, data):
54 """ Download all data, even if paginated. """
55 page = 1
56 results = []
57
58 while True:
59 data['page'] = page
60 fd = urllib.request.urlopen(self.url+urllib.parse.urlencode(data))
61 try:
62 result = json.load(fd)
63 except Exception as e:
64 raise IOError("Failed to load return from the HEKClient.") from e
65 finally:
66 fd.close()
67 results.extend(result['result'])
68
69 if not result['overmax']:
70 if len(results) > 0:
71 return HEKTable(dict_keys_same(results))
72 else:
73 return HEKTable()
74
75 page += 1
76
77 def search(self, *query):
78 """ Retrieves information about HEK records matching the criteria
79 given in the query expression. If multiple arguments are passed,
80 they are connected with AND. The result of a query is a list of
81 unique HEK Response objects that fulfill the criteria."""
82 query = attr.and_(*query)
83
84 data = attrs.walker.create(query, {})
85 ndata = []
86 for elem in data:
87 new = self.default.copy()
88 new.update(elem)
89 ndata.append(new)
90
91 if len(ndata) == 1:
92 return self._download(ndata[0])
93 else:
94 return self._merge(self._download(data) for data in ndata)
95
96 def _merge(self, responses):
97 """ Merge responses, removing duplicates. """
98 return list(unique(chain.from_iterable(responses), _freeze))
99
100
101 class HEKTable(Table):
102 def __getitem__(self, item):
103 table_item = super().__getitem__(item)
104
105 if table_item.__class__ == Column:
106 table_item.__class__ = HEKColumn
107 elif table_item.__class__ == Row:
108 table_item.__class__ = HEKRow
109
110 return table_item
111
112
113 class HEKColumn(Column):
114 pass
115
116
117 class HEKRow(Row):
118 """
119 Handles the response from the HEK. Each HEKRow object is a subclass
120 of `astropy.Table.row`. The column-row key-value pairs correspond to the
121 HEK feature/event properties and their values, for that record from the
122 HEK. Each HEKRow object also has extra properties that relate HEK
123 concepts to VSO concepts.
124 """
125 @property
126 def vso_time(self):
127 return core_attrs.Time(
128 Time.strptime(self['event_starttime'], "%Y-%m-%dT%H:%M:%S"),
129 Time.strptime(self['event_endtime'], "%Y-%m-%dT%H:%M:%S")
130 )
131
132 @property
133 def vso_instrument(self):
134 if self['obs_instrument'] == 'HEK':
135 raise ValueError("No instrument contained.")
136 return core_attrs.Instrument(self['obs_instrument'])
137
138 @property
139 def vso_all(self):
140 return attr.and_(self.vso_time, self.vso_instrument)
141
142 def get_voevent(self, as_dict=True,
143 base_url="http://www.lmsal.com/hek/her?"):
144 """Retrieves the VOEvent object associated with a given event and
145 returns it as either a Python dictionary or an XML string."""
146
147 # Build URL
148 params = {
149 "cmd": "export-voevent",
150 "cosec": 1,
151 "ivorn": self['kb_archivid']
152 }
153 url = base_url + urllib.parse.urlencode(params)
154
155 # Query and read response
156 response = urllib.request.urlopen(url).read()
157
158 # Return a string or dict
159 if as_dict:
160 return xml_to_dict(response)
161 else:
162 return response
163
164 def get(self, key, default=None):
165 try:
166 return self[key]
167 except KeyError:
168 return default
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sunpy/net/hek/hek.py b/sunpy/net/hek/hek.py
--- a/sunpy/net/hek/hek.py
+++ b/sunpy/net/hek/hek.py
@@ -1,8 +1,8 @@
"""
Facilities to interface with the Heliophysics Events Knowledgebase.
"""
-
import json
+import codecs
import urllib
from itertools import chain
@@ -59,7 +59,8 @@
data['page'] = page
fd = urllib.request.urlopen(self.url+urllib.parse.urlencode(data))
try:
- result = json.load(fd)
+ result = codecs.decode(fd.read(), encoding='utf-8', errors='replace')
+ result = json.loads(result)
except Exception as e:
raise IOError("Failed to load return from the HEKClient.") from e
finally:
|
{"golden_diff": "diff --git a/sunpy/net/hek/hek.py b/sunpy/net/hek/hek.py\n--- a/sunpy/net/hek/hek.py\n+++ b/sunpy/net/hek/hek.py\n@@ -1,8 +1,8 @@\n \"\"\"\n Facilities to interface with the Heliophysics Events Knowledgebase.\n \"\"\"\n-\n import json\n+import codecs\n \n import urllib\n from itertools import chain\n@@ -59,7 +59,8 @@\n data['page'] = page\n fd = urllib.request.urlopen(self.url+urllib.parse.urlencode(data))\n try:\n- result = json.load(fd)\n+ result = codecs.decode(fd.read(), encoding='utf-8', errors='replace')\n+ result = json.loads(result)\n except Exception as e:\n raise IOError(\"Failed to load return from the HEKClient.\") from e\n finally:\n", "issue": "OSError: Failed to load return from the HEKClient.\nWhen trying to search for the flares on 2014/10/24 to /25, I get a \"Failed to Load Return\" error. \r\nSunpy Version: 1.1.3\r\n\r\nHere's a minimal reproducible example:\r\n\r\n```\r\nfrom sunpy.net import hek\r\nclient = hek.HEKClient()\r\ntstart = '2014/10/24 20:50'\r\ntend = '2014/10/25 00:14'\r\nevent_type = 'FL'\r\nclient.search(hek.attrs.Time(tstart,tend),hek.attrs.EventType(event_type))\r\n```\r\n\r\n\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"/home/user/anaconda3/envs/pytorch/lib/python3.8/site-packages/sunpy/net/hek/hek.py\", line 69, in _download\r\n result = json.load(fd)\r\n File \"/home/user/anaconda3/envs/pytorch/lib/python3.8/json/__init__.py\", line 293, in load\r\n return loads(fp.read(),\r\n File \"/home/user/anaconda3/envs/pytorch/lib/python3.8/json/__init__.py\", line 343, in loads\r\n s = s.decode(detect_encoding(s), 'surrogatepass')\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xc5 in position 33279: invalid continuation byte\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<input>\", line 1, in <module>\r\n File \"/home/user/anaconda3/envs/pytorch/lib/python3.8/site-packages/sunpy/net/hek/hek.py\", line 99, in search\r\n return self._download(ndata[0])\r\n File \"/home/user/anaconda3/envs/pytorch/lib/python3.8/site-packages/sunpy/net/hek/hek.py\", line 71, in _download\r\n raise IOError(\"Failed to load return from the HEKClient.\") from e\r\nOSError: Failed to load return from the HEKClient.\r\n```\n", "before_files": [{"content": "\"\"\"\nFacilities to interface with the Heliophysics Events Knowledgebase.\n\"\"\"\n\nimport json\n\nimport urllib\nfrom itertools import chain\n\nfrom astropy.table import Table, Row, Column\nfrom astropy.time import Time\n\nfrom sunpy.net import attr\nfrom sunpy.util import dict_keys_same, unique\nfrom sunpy.net.hek import attrs\nimport sunpy.net._attrs as core_attrs\nfrom sunpy.util.xml import xml_to_dict\n\n\n__all__ = ['HEKClient']\n\nDEFAULT_URL = 'https://www.lmsal.com/hek/her?'\n\n\ndef _freeze(obj):\n \"\"\" Create hashable representation of result dict. 
\"\"\"\n if isinstance(obj, dict):\n return tuple((k, _freeze(v)) for k, v in obj.items())\n if isinstance(obj, list):\n return tuple(_freeze(elem) for elem in obj)\n return obj\n\n\nclass HEKClient:\n \"\"\" Client to interact with the Heliophysics Event Knowledgebase (HEK).\n The HEK stores solar feature and event data generated by algorithms and\n human observers.\"\"\"\n # FIXME: Expose fields in .attrs with the right types\n # that is, not all StringParamWrapper!\n\n default = {\n 'cosec': '2',\n 'cmd': 'search',\n 'type': 'column',\n 'event_type': '**',\n }\n # Default to full disk.\n attrs.walker.apply(attrs.SpatialRegion(), {}, default)\n\n def __init__(self, url=DEFAULT_URL):\n self.url = url\n\n def _download(self, data):\n \"\"\" Download all data, even if paginated. \"\"\"\n page = 1\n results = []\n\n while True:\n data['page'] = page\n fd = urllib.request.urlopen(self.url+urllib.parse.urlencode(data))\n try:\n result = json.load(fd)\n except Exception as e:\n raise IOError(\"Failed to load return from the HEKClient.\") from e\n finally:\n fd.close()\n results.extend(result['result'])\n\n if not result['overmax']:\n if len(results) > 0:\n return HEKTable(dict_keys_same(results))\n else:\n return HEKTable()\n\n page += 1\n\n def search(self, *query):\n \"\"\" Retrieves information about HEK records matching the criteria\n given in the query expression. If multiple arguments are passed,\n they are connected with AND. The result of a query is a list of\n unique HEK Response objects that fulfill the criteria.\"\"\"\n query = attr.and_(*query)\n\n data = attrs.walker.create(query, {})\n ndata = []\n for elem in data:\n new = self.default.copy()\n new.update(elem)\n ndata.append(new)\n\n if len(ndata) == 1:\n return self._download(ndata[0])\n else:\n return self._merge(self._download(data) for data in ndata)\n\n def _merge(self, responses):\n \"\"\" Merge responses, removing duplicates. \"\"\"\n return list(unique(chain.from_iterable(responses), _freeze))\n\n\nclass HEKTable(Table):\n def __getitem__(self, item):\n table_item = super().__getitem__(item)\n\n if table_item.__class__ == Column:\n table_item.__class__ = HEKColumn\n elif table_item.__class__ == Row:\n table_item.__class__ = HEKRow\n\n return table_item\n\n\nclass HEKColumn(Column):\n pass\n\n\nclass HEKRow(Row):\n \"\"\"\n Handles the response from the HEK. Each HEKRow object is a subclass\n of `astropy.Table.row`. The column-row key-value pairs correspond to the\n HEK feature/event properties and their values, for that record from the\n HEK. 
Each HEKRow object also has extra properties that relate HEK\n concepts to VSO concepts.\n \"\"\"\n @property\n def vso_time(self):\n return core_attrs.Time(\n Time.strptime(self['event_starttime'], \"%Y-%m-%dT%H:%M:%S\"),\n Time.strptime(self['event_endtime'], \"%Y-%m-%dT%H:%M:%S\")\n )\n\n @property\n def vso_instrument(self):\n if self['obs_instrument'] == 'HEK':\n raise ValueError(\"No instrument contained.\")\n return core_attrs.Instrument(self['obs_instrument'])\n\n @property\n def vso_all(self):\n return attr.and_(self.vso_time, self.vso_instrument)\n\n def get_voevent(self, as_dict=True,\n base_url=\"http://www.lmsal.com/hek/her?\"):\n \"\"\"Retrieves the VOEvent object associated with a given event and\n returns it as either a Python dictionary or an XML string.\"\"\"\n\n # Build URL\n params = {\n \"cmd\": \"export-voevent\",\n \"cosec\": 1,\n \"ivorn\": self['kb_archivid']\n }\n url = base_url + urllib.parse.urlencode(params)\n\n # Query and read response\n response = urllib.request.urlopen(url).read()\n\n # Return a string or dict\n if as_dict:\n return xml_to_dict(response)\n else:\n return response\n\n def get(self, key, default=None):\n try:\n return self[key]\n except KeyError:\n return default\n", "path": "sunpy/net/hek/hek.py"}], "after_files": [{"content": "\"\"\"\nFacilities to interface with the Heliophysics Events Knowledgebase.\n\"\"\"\nimport json\nimport codecs\n\nimport urllib\nfrom itertools import chain\n\nfrom astropy.table import Table, Row, Column\nfrom astropy.time import Time\n\nfrom sunpy.net import attr\nfrom sunpy.util import dict_keys_same, unique\nfrom sunpy.net.hek import attrs\nimport sunpy.net._attrs as core_attrs\nfrom sunpy.util.xml import xml_to_dict\n\n\n__all__ = ['HEKClient']\n\nDEFAULT_URL = 'https://www.lmsal.com/hek/her?'\n\n\ndef _freeze(obj):\n \"\"\" Create hashable representation of result dict. \"\"\"\n if isinstance(obj, dict):\n return tuple((k, _freeze(v)) for k, v in obj.items())\n if isinstance(obj, list):\n return tuple(_freeze(elem) for elem in obj)\n return obj\n\n\nclass HEKClient:\n \"\"\" Client to interact with the Heliophysics Event Knowledgebase (HEK).\n The HEK stores solar feature and event data generated by algorithms and\n human observers.\"\"\"\n # FIXME: Expose fields in .attrs with the right types\n # that is, not all StringParamWrapper!\n\n default = {\n 'cosec': '2',\n 'cmd': 'search',\n 'type': 'column',\n 'event_type': '**',\n }\n # Default to full disk.\n attrs.walker.apply(attrs.SpatialRegion(), {}, default)\n\n def __init__(self, url=DEFAULT_URL):\n self.url = url\n\n def _download(self, data):\n \"\"\" Download all data, even if paginated. \"\"\"\n page = 1\n results = []\n\n while True:\n data['page'] = page\n fd = urllib.request.urlopen(self.url+urllib.parse.urlencode(data))\n try:\n result = codecs.decode(fd.read(), encoding='utf-8', errors='replace')\n result = json.loads(result)\n except Exception as e:\n raise IOError(\"Failed to load return from the HEKClient.\") from e\n finally:\n fd.close()\n results.extend(result['result'])\n\n if not result['overmax']:\n if len(results) > 0:\n return HEKTable(dict_keys_same(results))\n else:\n return HEKTable()\n\n page += 1\n\n def search(self, *query):\n \"\"\" Retrieves information about HEK records matching the criteria\n given in the query expression. If multiple arguments are passed,\n they are connected with AND. 
The result of a query is a list of\n unique HEK Response objects that fulfill the criteria.\"\"\"\n query = attr.and_(*query)\n\n data = attrs.walker.create(query, {})\n ndata = []\n for elem in data:\n new = self.default.copy()\n new.update(elem)\n ndata.append(new)\n\n if len(ndata) == 1:\n return self._download(ndata[0])\n else:\n return self._merge(self._download(data) for data in ndata)\n\n def _merge(self, responses):\n \"\"\" Merge responses, removing duplicates. \"\"\"\n return list(unique(chain.from_iterable(responses), _freeze))\n\n\nclass HEKTable(Table):\n def __getitem__(self, item):\n table_item = super().__getitem__(item)\n\n if table_item.__class__ == Column:\n table_item.__class__ = HEKColumn\n elif table_item.__class__ == Row:\n table_item.__class__ = HEKRow\n\n return table_item\n\n\nclass HEKColumn(Column):\n pass\n\n\nclass HEKRow(Row):\n \"\"\"\n Handles the response from the HEK. Each HEKRow object is a subclass\n of `astropy.Table.row`. The column-row key-value pairs correspond to the\n HEK feature/event properties and their values, for that record from the\n HEK. Each HEKRow object also has extra properties that relate HEK\n concepts to VSO concepts.\n \"\"\"\n @property\n def vso_time(self):\n return core_attrs.Time(\n Time.strptime(self['event_starttime'], \"%Y-%m-%dT%H:%M:%S\"),\n Time.strptime(self['event_endtime'], \"%Y-%m-%dT%H:%M:%S\")\n )\n\n @property\n def vso_instrument(self):\n if self['obs_instrument'] == 'HEK':\n raise ValueError(\"No instrument contained.\")\n return core_attrs.Instrument(self['obs_instrument'])\n\n @property\n def vso_all(self):\n return attr.and_(self.vso_time, self.vso_instrument)\n\n def get_voevent(self, as_dict=True,\n base_url=\"http://www.lmsal.com/hek/her?\"):\n \"\"\"Retrieves the VOEvent object associated with a given event and\n returns it as either a Python dictionary or an XML string.\"\"\"\n\n # Build URL\n params = {\n \"cmd\": \"export-voevent\",\n \"cosec\": 1,\n \"ivorn\": self['kb_archivid']\n }\n url = base_url + urllib.parse.urlencode(params)\n\n # Query and read response\n response = urllib.request.urlopen(url).read()\n\n # Return a string or dict\n if as_dict:\n return xml_to_dict(response)\n else:\n return response\n\n def get(self, key, default=None):\n try:\n return self[key]\n except KeyError:\n return default\n", "path": "sunpy/net/hek/hek.py"}]}
| 2,312 | 189 |
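The failure in this record comes from `json.load(fd)` decoding the raw HEK response as strict UTF-8, so a single stray byte such as 0xc5 without a valid continuation byte aborts the whole query. The golden diff decodes with `errors='replace'` before parsing. A small self-contained sketch of the difference (the payload below is illustrative, not an actual HEK response):

```python
import codecs
import json

raw = b'{"result": [{"obs_observatory": "Kanzelh\xc5he Observatory"}], "overmax": false}'

try:
    json.loads(raw)  # same strict decode path as json.load(fd) in the old code
except UnicodeDecodeError as exc:
    print("strict decode fails:", exc)

text = codecs.decode(raw, encoding="utf-8", errors="replace")  # lossy but safe
print(json.loads(text)["overmax"])  # parses; the bad byte becomes U+FFFD
```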
gh_patches_debug_11147 | rasdani/github-patches | git_diff | dask__dask-10113 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Removal of dask.compatibility.entry_points has broken dask-kubernetes
It looks like `dask.compatibility.entry_points` was removed in #10070 without warning. This was being used in `dask-kubernetes` so CI is now failing.
https://github.com/dask/dask-kubernetes/actions/runs/4499027159/jobs/7916366189?pr=683
cc @graingert @jrbourbeau
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dask/compatibility.py`
Content:
```
1 import sys
2
3 from packaging.version import parse as parse_version
4
5 _PY_VERSION = parse_version(".".join(map(str, sys.version_info[:3])))
6
7 _EMSCRIPTEN = sys.platform == "emscripten"
8
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dask/compatibility.py b/dask/compatibility.py
--- a/dask/compatibility.py
+++ b/dask/compatibility.py
@@ -1,7 +1,19 @@
import sys
+import warnings
+from importlib_metadata import entry_points as _entry_points
from packaging.version import parse as parse_version
_PY_VERSION = parse_version(".".join(map(str, sys.version_info[:3])))
_EMSCRIPTEN = sys.platform == "emscripten"
+
+
+def entry_points(group=None):
+ warnings.warn(
+ "`dask.compatibility.entry_points` has been replaced by `importlib_metadata.entry_points` and will be removed "
+ "in a future version. Please use `importlib_metadata.entry_points` instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return _entry_points(group=group)
|
{"golden_diff": "diff --git a/dask/compatibility.py b/dask/compatibility.py\n--- a/dask/compatibility.py\n+++ b/dask/compatibility.py\n@@ -1,7 +1,19 @@\n import sys\n+import warnings\n \n+from importlib_metadata import entry_points as _entry_points\n from packaging.version import parse as parse_version\n \n _PY_VERSION = parse_version(\".\".join(map(str, sys.version_info[:3])))\n \n _EMSCRIPTEN = sys.platform == \"emscripten\"\n+\n+\n+def entry_points(group=None):\n+ warnings.warn(\n+ \"`dask.compatibility.entry_points` has been replaced by `importlib_metadata.entry_points` and will be removed \"\n+ \"in a future version. Please use `importlib_metadata.entry_points` instead.\",\n+ DeprecationWarning,\n+ stacklevel=2,\n+ )\n+ return _entry_points(group=group)\n", "issue": "Removal of dask.compatibility.entry_points has broken dask-kubernetes\nIt looks like `dask.compatibility.entry_points` was removed in #10070 without warning. This was being used in `dask-kubernetes` so CI is now failing.\r\n\r\nhttps://github.com/dask/dask-kubernetes/actions/runs/4499027159/jobs/7916366189?pr=683\r\n\r\ncc @graingert @jrbourbeau \n", "before_files": [{"content": "import sys\n\nfrom packaging.version import parse as parse_version\n\n_PY_VERSION = parse_version(\".\".join(map(str, sys.version_info[:3])))\n\n_EMSCRIPTEN = sys.platform == \"emscripten\"\n", "path": "dask/compatibility.py"}], "after_files": [{"content": "import sys\nimport warnings\n\nfrom importlib_metadata import entry_points as _entry_points\nfrom packaging.version import parse as parse_version\n\n_PY_VERSION = parse_version(\".\".join(map(str, sys.version_info[:3])))\n\n_EMSCRIPTEN = sys.platform == \"emscripten\"\n\n\ndef entry_points(group=None):\n warnings.warn(\n \"`dask.compatibility.entry_points` has been replaced by `importlib_metadata.entry_points` and will be removed \"\n \"in a future version. Please use `importlib_metadata.entry_points` instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n return _entry_points(group=group)\n", "path": "dask/compatibility.py"}]}
| 426 | 194 |
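The patch restores `dask.compatibility.entry_points` as a thin deprecation shim over `importlib_metadata.entry_points`, so downstream imports (the dask-kubernetes case in the issue) keep working while emitting a warning. A rough sketch of how a caller would observe that, assuming a dask build that includes the shim (the group name below is only an example):

```python
import warnings

from dask.compatibility import entry_points  # old import path still resolves

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    eps = entry_points(group="dask.cli")  # delegates to importlib_metadata
    assert any(issubclass(w.category, DeprecationWarning) for w in caught)

print(f"{len(list(eps))} entry points found; deprecation warning emitted")
```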
gh_patches_debug_2563 | rasdani/github-patches | git_diff | microsoft__ptvsd-297 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to launch the debugger
Getting the following error in master when debugging in VSC:
```
Could not connect to None: 60857
Traceback (most recent call last):
File "/Users/donjayamanne/Desktop/Development/vscode/ptvsd/ptvsd/pydevd/pydevd.py", line 1620, in main
debugger.connect(host, port)
File "/Users/donjayamanne/Desktop/Development/vscode/ptvsd/ptvsd/pydevd/pydevd.py", line 326, in connect
s = start_server(port)
File "/Users/donjayamanne/Desktop/Development/vscode/ptvsd/ptvsd/wrapper.py", line 1766, in start_server
server = _create_server(port)
File "/Users/donjayamanne/Desktop/Development/vscode/ptvsd/ptvsd/wrapper.py", line 1701, in _create_server
server.bind(('127.0.0.1', port))
OSError: [Errno 48] Address already in use
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ptvsd/debugger.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 from ptvsd.__main__ import run_module, run_file
6
7
8 __author__ = "Microsoft Corporation <[email protected]>"
9 __version__ = "4.0.0a5"
10
11 # TODO: not needed?
12 DONT_DEBUG = []
13
14
15 def debug(filename, port_num, debug_id, debug_options, run_as, **kwargs):
16 # TODO: docstring
17 address = (None, port_num)
18 if run_as == 'module':
19 run_module(address, filename, **kwargs)
20 else:
21 run_file(address, filename, **kwargs)
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py
--- a/ptvsd/debugger.py
+++ b/ptvsd/debugger.py
@@ -14,7 +14,7 @@
def debug(filename, port_num, debug_id, debug_options, run_as, **kwargs):
# TODO: docstring
- address = (None, port_num)
+ address = ('localhost', port_num)
if run_as == 'module':
run_module(address, filename, **kwargs)
else:
|
{"golden_diff": "diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py\n--- a/ptvsd/debugger.py\n+++ b/ptvsd/debugger.py\n@@ -14,7 +14,7 @@\n \n def debug(filename, port_num, debug_id, debug_options, run_as, **kwargs):\n # TODO: docstring\n- address = (None, port_num)\n+ address = ('localhost', port_num)\n if run_as == 'module':\n run_module(address, filename, **kwargs)\n else:\n", "issue": "Unable to launch the debugger\nGetting the following error in master when debugging in VSC:\r\n```\r\nCould not connect to None: 60857\r\nTraceback (most recent call last):\r\n File \"/Users/donjayamanne/Desktop/Development/vscode/ptvsd/ptvsd/pydevd/pydevd.py\", line 1620, in main\r\n debugger.connect(host, port)\r\n File \"/Users/donjayamanne/Desktop/Development/vscode/ptvsd/ptvsd/pydevd/pydevd.py\", line 326, in connect\r\n s = start_server(port)\r\n File \"/Users/donjayamanne/Desktop/Development/vscode/ptvsd/ptvsd/wrapper.py\", line 1766, in start_server\r\n server = _create_server(port)\r\n File \"/Users/donjayamanne/Desktop/Development/vscode/ptvsd/ptvsd/wrapper.py\", line 1701, in _create_server\r\n server.bind(('127.0.0.1', port))\r\nOSError: [Errno 48] Address already in u\r\n```\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nfrom ptvsd.__main__ import run_module, run_file\n\n\n__author__ = \"Microsoft Corporation <[email protected]>\"\n__version__ = \"4.0.0a5\"\n\n# TODO: not needed?\nDONT_DEBUG = []\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as, **kwargs):\n # TODO: docstring\n address = (None, port_num)\n if run_as == 'module':\n run_module(address, filename, **kwargs)\n else:\n run_file(address, filename, **kwargs)\n", "path": "ptvsd/debugger.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nfrom ptvsd.__main__ import run_module, run_file\n\n\n__author__ = \"Microsoft Corporation <[email protected]>\"\n__version__ = \"4.0.0a5\"\n\n# TODO: not needed?\nDONT_DEBUG = []\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as, **kwargs):\n # TODO: docstring\n address = ('localhost', port_num)\n if run_as == 'module':\n run_module(address, filename, **kwargs)\n else:\n run_file(address, filename, **kwargs)\n", "path": "ptvsd/debugger.py"}]}
| 706 | 120 |
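The one-line fix swaps `None` for `'localhost'` in the address tuple, since pydevd needs a concrete hostname to bind and connect ("Could not connect to None: 60857" in the log is that `None` leaking through as a string). A trivial hedged sketch of the before/after shape, not the real ptvsd internals:

```python
def make_address(port_num, host="localhost"):
    return (host, port_num)

old = (None, 60857)        # what debug() used to build
new = make_address(60857)  # ('localhost', 60857), matching the patch

print(f"Could not connect to {old[0]}: {old[1]}")  # reproduces the log message
print(f"Connecting to {new[0]}: {new[1]}")
```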
gh_patches_debug_31566 | rasdani/github-patches | git_diff | getsentry__sentry-python-141 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Log more extra data for Celery
The old integration in celery used to log arguments to the task and more. Add that to our celery integration
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/celery.py`
Content:
```
1 from __future__ import absolute_import
2
3 import sys
4
5 from celery.signals import task_failure, task_prerun, task_postrun
6 from celery.exceptions import SoftTimeLimitExceeded
7
8 from sentry_sdk.hub import Hub
9 from sentry_sdk.utils import capture_internal_exceptions, event_from_exception
10 from sentry_sdk.integrations import Integration
11 from sentry_sdk.integrations.logging import ignore_logger
12
13
14 class CeleryIntegration(Integration):
15 identifier = "celery"
16
17 @staticmethod
18 def setup_once():
19 task_prerun.connect(_handle_task_prerun, weak=False)
20 task_postrun.connect(_handle_task_postrun, weak=False)
21 task_failure.connect(_process_failure_signal, weak=False)
22
23 # This logger logs every status of every task that ran on the worker.
24 # Meaning that every task's breadcrumbs are full of stuff like "Task
25 # <foo> raised unexpected <bar>".
26 ignore_logger("celery.worker.job")
27
28
29 def _process_failure_signal(sender, task_id, einfo, **kw):
30 # einfo from celery is not reliable
31 exc_info = sys.exc_info()
32
33 hub = Hub.current
34 integration = hub.get_integration(CeleryIntegration)
35 if integration is None:
36 return
37
38 if hasattr(sender, "throws") and isinstance(einfo.exception, sender.throws):
39 return
40
41 if isinstance(einfo.exception, SoftTimeLimitExceeded):
42 # TODO: Move this into event processor
43 with hub.push_scope() as scope:
44 scope.fingerprint = [
45 "celery",
46 "SoftTimeLimitExceeded",
47 getattr(sender, "name", sender),
48 ]
49 _capture_event(hub, exc_info)
50 else:
51 _capture_event(hub, exc_info)
52
53
54 def _handle_task_prerun(sender, task, **kw):
55 hub = Hub.current
56 if hub.get_integration(CeleryIntegration) is not None:
57 scope = hub.push_scope().__enter__()
58 with capture_internal_exceptions():
59 scope.transaction = task.name
60
61
62 def _handle_task_postrun(sender, task_id, task, **kw):
63 hub = Hub.current
64 if hub.get_integration(CeleryIntegration) is not None:
65 hub.pop_scope_unsafe()
66
67
68 def _capture_event(hub, exc_info):
69 event, hint = event_from_exception(
70 exc_info,
71 client_options=hub.client.options,
72 mechanism={"type": "celery", "handled": False},
73 )
74 hub.capture_event(event, hint=hint)
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sentry_sdk/integrations/celery.py b/sentry_sdk/integrations/celery.py
--- a/sentry_sdk/integrations/celery.py
+++ b/sentry_sdk/integrations/celery.py
@@ -35,28 +35,48 @@
if integration is None:
return
- if hasattr(sender, "throws") and isinstance(einfo.exception, sender.throws):
- return
-
- if isinstance(einfo.exception, SoftTimeLimitExceeded):
- # TODO: Move this into event processor
- with hub.push_scope() as scope:
- scope.fingerprint = [
- "celery",
- "SoftTimeLimitExceeded",
- getattr(sender, "name", sender),
- ]
- _capture_event(hub, exc_info)
- else:
- _capture_event(hub, exc_info)
+ _capture_event(hub, exc_info)
-def _handle_task_prerun(sender, task, **kw):
+def _handle_task_prerun(sender, task, args, kwargs, **_):
hub = Hub.current
if hub.get_integration(CeleryIntegration) is not None:
scope = hub.push_scope().__enter__()
+ scope.add_event_processor(_make_event_processor(args, kwargs, task))
+
+
+def _make_event_processor(args, kwargs, task):
+ def event_processor(event, hint):
+ with capture_internal_exceptions():
+ if "transaction" not in event:
+ event["transaction"] = task.name
+
with capture_internal_exceptions():
- scope.transaction = task.name
+ extra = event.setdefault("extra", {})
+ extra["celery-job"] = {
+ "task_name": task.name,
+ "args": args,
+ "kwargs": kwargs,
+ }
+
+ if "exc_info" in hint:
+ with capture_internal_exceptions():
+ if issubclass(hint["exc_info"][0], SoftTimeLimitExceeded):
+ event["fingerprint"] = [
+ "celery",
+ "SoftTimeLimitExceeded",
+ getattr(task, "name", task),
+ ]
+
+ with capture_internal_exceptions():
+ if hasattr(task, "throws") and isinstance(
+ hint["exc_info"][1], task.throws
+ ):
+ return None
+
+ return event
+
+ return event_processor
def _handle_task_postrun(sender, task_id, task, **kw):
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/celery.py b/sentry_sdk/integrations/celery.py\n--- a/sentry_sdk/integrations/celery.py\n+++ b/sentry_sdk/integrations/celery.py\n@@ -35,28 +35,48 @@\n if integration is None:\n return\n \n- if hasattr(sender, \"throws\") and isinstance(einfo.exception, sender.throws):\n- return\n-\n- if isinstance(einfo.exception, SoftTimeLimitExceeded):\n- # TODO: Move this into event processor\n- with hub.push_scope() as scope:\n- scope.fingerprint = [\n- \"celery\",\n- \"SoftTimeLimitExceeded\",\n- getattr(sender, \"name\", sender),\n- ]\n- _capture_event(hub, exc_info)\n- else:\n- _capture_event(hub, exc_info)\n+ _capture_event(hub, exc_info)\n \n \n-def _handle_task_prerun(sender, task, **kw):\n+def _handle_task_prerun(sender, task, args, kwargs, **_):\n hub = Hub.current\n if hub.get_integration(CeleryIntegration) is not None:\n scope = hub.push_scope().__enter__()\n+ scope.add_event_processor(_make_event_processor(args, kwargs, task))\n+\n+\n+def _make_event_processor(args, kwargs, task):\n+ def event_processor(event, hint):\n+ with capture_internal_exceptions():\n+ if \"transaction\" not in event:\n+ event[\"transaction\"] = task.name\n+\n with capture_internal_exceptions():\n- scope.transaction = task.name\n+ extra = event.setdefault(\"extra\", {})\n+ extra[\"celery-job\"] = {\n+ \"task_name\": task.name,\n+ \"args\": args,\n+ \"kwargs\": kwargs,\n+ }\n+\n+ if \"exc_info\" in hint:\n+ with capture_internal_exceptions():\n+ if issubclass(hint[\"exc_info\"][0], SoftTimeLimitExceeded):\n+ event[\"fingerprint\"] = [\n+ \"celery\",\n+ \"SoftTimeLimitExceeded\",\n+ getattr(task, \"name\", task),\n+ ]\n+\n+ with capture_internal_exceptions():\n+ if hasattr(task, \"throws\") and isinstance(\n+ hint[\"exc_info\"][1], task.throws\n+ ):\n+ return None\n+\n+ return event\n+\n+ return event_processor\n \n \n def _handle_task_postrun(sender, task_id, task, **kw):\n", "issue": "Log more extra data for Celery\nThe old integration in celery used to log arguments to the task and more. 
Add that to our celery integration\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport sys\n\nfrom celery.signals import task_failure, task_prerun, task_postrun\nfrom celery.exceptions import SoftTimeLimitExceeded\n\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.utils import capture_internal_exceptions, event_from_exception\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations.logging import ignore_logger\n\n\nclass CeleryIntegration(Integration):\n identifier = \"celery\"\n\n @staticmethod\n def setup_once():\n task_prerun.connect(_handle_task_prerun, weak=False)\n task_postrun.connect(_handle_task_postrun, weak=False)\n task_failure.connect(_process_failure_signal, weak=False)\n\n # This logger logs every status of every task that ran on the worker.\n # Meaning that every task's breadcrumbs are full of stuff like \"Task\n # <foo> raised unexpected <bar>\".\n ignore_logger(\"celery.worker.job\")\n\n\ndef _process_failure_signal(sender, task_id, einfo, **kw):\n # einfo from celery is not reliable\n exc_info = sys.exc_info()\n\n hub = Hub.current\n integration = hub.get_integration(CeleryIntegration)\n if integration is None:\n return\n\n if hasattr(sender, \"throws\") and isinstance(einfo.exception, sender.throws):\n return\n\n if isinstance(einfo.exception, SoftTimeLimitExceeded):\n # TODO: Move this into event processor\n with hub.push_scope() as scope:\n scope.fingerprint = [\n \"celery\",\n \"SoftTimeLimitExceeded\",\n getattr(sender, \"name\", sender),\n ]\n _capture_event(hub, exc_info)\n else:\n _capture_event(hub, exc_info)\n\n\ndef _handle_task_prerun(sender, task, **kw):\n hub = Hub.current\n if hub.get_integration(CeleryIntegration) is not None:\n scope = hub.push_scope().__enter__()\n with capture_internal_exceptions():\n scope.transaction = task.name\n\n\ndef _handle_task_postrun(sender, task_id, task, **kw):\n hub = Hub.current\n if hub.get_integration(CeleryIntegration) is not None:\n hub.pop_scope_unsafe()\n\n\ndef _capture_event(hub, exc_info):\n event, hint = event_from_exception(\n exc_info,\n client_options=hub.client.options,\n mechanism={\"type\": \"celery\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n", "path": "sentry_sdk/integrations/celery.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport sys\n\nfrom celery.signals import task_failure, task_prerun, task_postrun\nfrom celery.exceptions import SoftTimeLimitExceeded\n\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.utils import capture_internal_exceptions, event_from_exception\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations.logging import ignore_logger\n\n\nclass CeleryIntegration(Integration):\n identifier = \"celery\"\n\n @staticmethod\n def setup_once():\n task_prerun.connect(_handle_task_prerun, weak=False)\n task_postrun.connect(_handle_task_postrun, weak=False)\n task_failure.connect(_process_failure_signal, weak=False)\n\n # This logger logs every status of every task that ran on the worker.\n # Meaning that every task's breadcrumbs are full of stuff like \"Task\n # <foo> raised unexpected <bar>\".\n ignore_logger(\"celery.worker.job\")\n\n\ndef _process_failure_signal(sender, task_id, einfo, **kw):\n # einfo from celery is not reliable\n exc_info = sys.exc_info()\n\n hub = Hub.current\n integration = hub.get_integration(CeleryIntegration)\n if integration is None:\n return\n\n _capture_event(hub, exc_info)\n\n\ndef _handle_task_prerun(sender, task, args, kwargs, 
**_):\n hub = Hub.current\n if hub.get_integration(CeleryIntegration) is not None:\n scope = hub.push_scope().__enter__()\n scope.add_event_processor(_make_event_processor(args, kwargs, task))\n\n\ndef _make_event_processor(args, kwargs, task):\n def event_processor(event, hint):\n with capture_internal_exceptions():\n if \"transaction\" not in event:\n event[\"transaction\"] = task.name\n\n with capture_internal_exceptions():\n extra = event.setdefault(\"extra\", {})\n extra[\"celery-job\"] = {\n \"task_name\": task.name,\n \"args\": args,\n \"kwargs\": kwargs,\n }\n\n if \"exc_info\" in hint:\n with capture_internal_exceptions():\n if issubclass(hint[\"exc_info\"][0], SoftTimeLimitExceeded):\n event[\"fingerprint\"] = [\n \"celery\",\n \"SoftTimeLimitExceeded\",\n getattr(task, \"name\", task),\n ]\n\n with capture_internal_exceptions():\n if hasattr(task, \"throws\") and isinstance(\n hint[\"exc_info\"][1], task.throws\n ):\n return None\n\n return event\n\n return event_processor\n\n\ndef _handle_task_postrun(sender, task_id, task, **kw):\n hub = Hub.current\n if hub.get_integration(CeleryIntegration) is not None:\n hub.pop_scope_unsafe()\n\n\ndef _capture_event(hub, exc_info):\n event, hint = event_from_exception(\n exc_info,\n client_options=hub.client.options,\n mechanism={\"type\": \"celery\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n", "path": "sentry_sdk/integrations/celery.py"}]}
| 978 | 542 |
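The patch moves the task metadata into a per-task event processor, so every event captured while the task runs carries the task name and call arguments, and the exception filtering (`task.throws`, `SoftTimeLimitExceeded` fingerprinting) happens at processing time. A condensed sketch of that processor pattern using the public SDK API (hypothetical task values; not the integration module itself):

```python
import sentry_sdk


def attach_job_context(task_name, args, kwargs):
    def processor(event, hint):
        # Mirror the patch: stash the call context under extra["celery-job"].
        event.setdefault("extra", {})["celery-job"] = {
            "task_name": task_name,
            "args": args,
            "kwargs": kwargs,
        }
        return event
    return processor


with sentry_sdk.push_scope() as scope:
    scope.add_event_processor(attach_job_context("tasks.add", (2, 3), {}))
    sentry_sdk.capture_message("task failed")  # event now includes the celery-job extra
```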
gh_patches_debug_44216 | rasdani/github-patches | git_diff | nautobot__nautobot-3522 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pre-Release Github Action Workflow Doesn't Publish Python Version Specific Builds
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Nautobot version (Docker tag too if applicable): 2.0.0-alpha.1
* Python version: 3.7-3.10
* Database platform, version: NA
* Middleware(s): NA
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
### Steps to Reproduce
1. Create Pre-Release Release named `2.0.0-alpha.1`
2. Wait for CI to Run
<!-- What did you expect to happen? -->
### Expected Behavior
Docker containers built following our normal release convention `2.0.0-alpha.1` as well as `2.0.0-alpha.1-py3.10`, `2.0.0-alpha.1-py3.9`, etc.
<!-- What happened instead? -->
### Observed Behavior
Whichever container is published last "wins" and becomes the `2.0.0-alpha.1` build: https://hub.docker.com/layers/networktocode/nautobot/2.0.0-alpha.1/images/sha256-6f41c067abac092cbed73b5b6176f771e8432e6cbd380b8d5f9013ba521d87a6?context=explore
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nautobot/ipam/lookups.py`
Content:
```
1 import netaddr
2 from django.db import NotSupportedError
3 from django.db import connection as _connection
4 from django.db.models import Lookup, lookups
5
6
7 def _mysql_varbin_to_broadcast():
8 return "HEX(broadcast)"
9
10
11 def _mysql_varbin_to_hex(lhs):
12 return f"HEX({lhs})"
13
14
15 def _mysql_varbin_to_network():
16 return "HEX(network)"
17
18
19 def _postgresql_varbin_to_broadcast(length):
20 return f"right(broadcast::text, -1)::varbit::bit({length})"
21
22
23 def _postgresql_varbin_to_integer(lhs, length):
24 return f"right({lhs}::text, -1)::varbit::bit({length})"
25
26
27 def _postgresql_varbin_to_network(lhs, length):
28 # convert to bitstring, 0 out everything larger than prefix_length
29 return f"lpad(right({lhs}::text, -1)::varbit::text, prefix_length, '0')::bit({length})"
30
31
32 def py_to_hex(ip, length):
33 return str(hex(int(ip)))[2:].zfill(int(length / 4))
34
35
36 def get_ip_info(field_name, ip_str):
37 """Function to set all details about an IP, that may be needed."""
38 ip_details = IPDetails()
39 ip = netaddr.IPNetwork(ip_str)
40 if field_name == "network":
41 ip_details.addr = ip.network
42 elif field_name == "host":
43 ip_details.addr = ip.ip
44 ip_details.ip = ip
45 ip_details.prefix = ip.prefixlen
46 ip_details.length = ip_details.to_len[ip.version]
47
48 if _connection.vendor == "mysql":
49 ip_details.rhs = py_to_hex(ip.ip, ip_details.length)
50 ip_details.net_addr = f"'{py_to_hex(ip.network, ip_details.length)}'"
51 ip_details.bcast_addr = f"'{py_to_hex(ip[-1], ip_details.length)}'"
52 ip_details.q_net = _mysql_varbin_to_network()
53 ip_details.q_bcast = _mysql_varbin_to_broadcast()
54 ip_details.q_ip = _mysql_varbin_to_hex(field_name)
55
56 elif _connection.vendor == "postgresql":
57 ip_details.rhs = bin(int(ip_details.addr))[2:].zfill(ip_details.length)
58 ip_details.addr_str = f"B'{bin(int(ip_details.addr))[2:].zfill(ip_details.length)}'"
59 ip_details.net_addr = f"B'{bin(int(ip.network))[2:].zfill(ip_details.length)}'"
60 ip_details.bcast_addr = f"B'{bin(int(ip[-1]))[2:].zfill(ip_details.length)}'"
61 ip_details.q_net = _postgresql_varbin_to_network(field_name, ip_details.length)
62 ip_details.q_bcast = _postgresql_varbin_to_broadcast(ip_details.length)
63 ip_details.q_ip = _postgresql_varbin_to_integer(field_name, ip_details.length)
64
65 return ip_details
66
67
68 class IPDetails:
69 """Class for setting up all details about an IP they may be needed"""
70
71 net = None
72 addr = None
73 ip = None
74 prefix = None
75 length = None
76 addr_str = None
77 rhs = None
78 net_addr = None
79 bcast_addr = None
80 q_net = None
81 q_bcast = None
82 q_ip = None
83 to_len = {4: 32, 6: 128}
84
85
86 class StringMatchMixin:
87 def process_lhs(self, qn, connection, lhs=None):
88 lhs = lhs or self.lhs
89 lhs_string, lhs_params = qn.compile(lhs)
90 if connection.vendor == "postgresql":
91 raise NotSupportedError("Lookup not supported on postgresql.")
92 return f"INET6_NTOA({lhs_string})", lhs_params
93
94
95 class Exact(StringMatchMixin, lookups.Exact):
96 pass
97
98
99 class IExact(StringMatchMixin, lookups.IExact):
100 pass
101
102
103 class EndsWith(StringMatchMixin, lookups.EndsWith):
104 pass
105
106
107 class IEndsWith(StringMatchMixin, lookups.IEndsWith):
108 pass
109
110
111 class StartsWith(StringMatchMixin, lookups.StartsWith):
112 pass
113
114
115 class IStartsWith(StringMatchMixin, lookups.IStartsWith):
116 pass
117
118
119 class Regex(StringMatchMixin, lookups.Regex):
120 pass
121
122
123 class IRegex(StringMatchMixin, lookups.IRegex):
124 pass
125
126
127 class NetworkFieldMixin:
128 def get_prep_lookup(self):
129 field_name = self.lhs.field.name
130 if field_name not in ["host", "network"]:
131 raise NotSupportedError(f"Lookup only provided on the host and network fields, not {field_name}.")
132 if field_name == "network" and self.lookup_name in ["net_host", "net_host_contained", "net_in"]:
133 raise NotSupportedError(f"Lookup for network field does not include the {self.lookup_name} lookup.")
134 if field_name == "host" and self.lookup_name not in ["net_host", "net_host_contained", "net_in"]:
135 raise NotSupportedError(f"Lookup for host field does not include the {self.lookup_name} lookup.")
136 self.ip = get_ip_info(field_name, self.rhs)
137 return str(self.ip.ip)
138
139 def process_rhs(self, qn, connection):
140 sql, params = super().process_rhs(qn, connection)
141 params[0] = self.ip.rhs
142 return sql, params
143
144
145 class NetEquals(NetworkFieldMixin, Lookup):
146 lookup_name = "net_equals"
147
148 def as_sql(self, qn, connection):
149 _, lhs_params = self.process_lhs(qn, connection)
150 rhs, rhs_params = self.process_rhs(qn, connection)
151 query = f"prefix_length = {self.ip.prefix} AND {rhs} = {self.ip.q_ip}"
152 return query, lhs_params + rhs_params
153
154
155 class NetContainsOrEquals(NetworkFieldMixin, Lookup):
156 lookup_name = "net_contains_or_equals"
157
158 def as_sql(self, qn, connection):
159 _, lhs_params = self.process_lhs(qn, connection)
160 rhs, rhs_params = self.process_rhs(qn, connection)
161 query = f"prefix_length <= {self.ip.prefix} AND {rhs} BETWEEN {self.ip.q_net} AND {self.ip.q_bcast}"
162 return query, lhs_params + rhs_params
163
164
165 class NetContains(NetworkFieldMixin, Lookup):
166 lookup_name = "net_contains"
167
168 def as_sql(self, qn, connection):
169 _, lhs_params = self.process_lhs(qn, connection)
170 rhs, rhs_params = self.process_rhs(qn, connection)
171 query = f"prefix_length < {self.ip.prefix} AND {rhs} BETWEEN {self.ip.q_net} AND {self.ip.q_bcast}"
172 return query, lhs_params + rhs_params
173
174
175 class NetContainedOrEqual(NetworkFieldMixin, Lookup):
176 lookup_name = "net_contained_or_equal"
177
178 def as_sql(self, qn, connection):
179 _, lhs_params = self.process_lhs(qn, connection)
180 rhs, rhs_params = self.process_rhs(qn, connection)
181 query = f"prefix_length >= {self.ip.prefix} AND {self.ip.q_net} BETWEEN {rhs} AND {self.ip.bcast_addr}"
182 return query, lhs_params + rhs_params
183
184
185 class NetContained(NetworkFieldMixin, Lookup):
186 lookup_name = "net_contained"
187
188 def as_sql(self, qn, connection):
189 _, lhs_params = self.process_lhs(qn, connection)
190 rhs, rhs_params = self.process_rhs(qn, connection)
191 query = f"prefix_length > {self.ip.prefix} AND {self.ip.q_net} BETWEEN {rhs} AND {self.ip.bcast_addr}"
192 return query, lhs_params + rhs_params
193
194
195 class NetHost(Lookup):
196 lookup_name = "net_host"
197
198 def get_prep_lookup(self):
199 field_name = self.lhs.field.name
200 if field_name != "host":
201 raise NotSupportedError(f"Lookup only provided on the host fields, not {field_name}.")
202 self.ip = get_ip_info(field_name, self.rhs)
203 return str(self.ip.ip)
204
205 def process_rhs(self, qn, connection):
206 sql, params = super().process_rhs(qn, connection)
207 params[0] = self.ip.rhs
208 return sql, params
209
210 def process_lhs(self, qn, connection, lhs=None):
211 lhs = lhs or self.lhs
212 _, lhs_params = qn.compile(lhs)
213 return self.ip.q_ip, lhs_params
214
215 def as_sql(self, qn, connection):
216 lhs, lhs_params = self.process_lhs(qn, connection)
217 rhs, rhs_params = self.process_rhs(qn, connection)
218 return f"{lhs} = {rhs}", lhs_params + rhs_params
219
220
221 class NetIn(Lookup):
222 lookup_name = "net_in"
223
224 def get_prep_lookup(self):
225 field_name = self.lhs.field.name
226 if field_name != "host":
227 raise NotSupportedError(f"Lookup only provided on the host field, not {field_name}.")
228 self.ips = []
229 for _ip in self.rhs:
230 ip = get_ip_info(field_name, _ip)
231 self.ips.append(ip)
232 # This is to satisfy an issue with django cacheops, specifically this line:
233 # https://github.com/Suor/django-cacheops/blob/a5ed1ac28c7259f5ad005e596cc045d1d61e2c51/cacheops/query.py#L175
234 # Without 1, and one 1 value as %s, will result in stacktrace. A non-impacting condition is added to the query
235 if _connection.vendor == "mysql":
236 self.query_starter = "'1' NOT IN %s AND "
237 elif _connection.vendor == "postgresql":
238 self.query_starter = "'1' != ANY(%s) AND "
239 return self.rhs
240
241 def as_sql(self, qn, connection):
242 _, lhs_params = self.process_lhs(qn, connection)
243 _, rhs_params = self.process_rhs(qn, connection)
244 query = self.query_starter
245 query += "OR ".join(f"{ip.q_ip} BETWEEN {ip.net_addr} AND {ip.bcast_addr} " for ip in self.ips)
246 return query, lhs_params + rhs_params
247
248
249 class NetHostContained(NetworkFieldMixin, Lookup):
250 lookup_name = "net_host_contained"
251
252 def as_sql(self, qn, connection):
253 _, lhs_params = self.process_lhs(qn, connection)
254 rhs, rhs_params = self.process_rhs(qn, connection)
255 query = f"{self.ip.q_ip} BETWEEN {rhs} AND {self.ip.bcast_addr}"
256 return query, lhs_params + rhs_params
257
258
259 class NetFamily(Lookup):
260 lookup_name = "family"
261
262 def get_prep_lookup(self):
263 if self.rhs not in [4, 6]:
264 raise NotSupportedError("Family must be either integer of value 4 or 6")
265 if self.rhs == 6:
266 self.rhs = 16
267 return self.rhs
268
269 def process_lhs(self, qn, connection, lhs=None):
270 lhs = lhs or self.lhs
271 lhs_string, lhs_params = qn.compile(lhs)
272 return f"LENGTH({lhs_string})", lhs_params
273
274 def as_sql(self, qn, connection):
275 lhs, lhs_params = self.process_lhs(qn, connection)
276 rhs, rhs_params = self.process_rhs(qn, connection)
277 return f"{lhs} = {rhs}", lhs_params + rhs_params
278
```
--- END FILES ---
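For context, the lookup classes in the file above only take effect once they are registered on the model field that stores the packed address. A minimal sketch of that wiring is shown below; it assumes a configured Django/Nautobot environment, and `VarbinaryIPField` is only a stand-in name for whatever field class the project actually uses:
```python
# Sketch only: registering the custom lookups so queryset filters can use them.
# Assumes Django settings are already configured; the field class name is an assumption.
from django.db import models

from nautobot.ipam.lookups import NetHost, NetHostContained


class VarbinaryIPField(models.BinaryField):
    """Stand-in for the IPAM field that stores addresses as packed bytes."""


VarbinaryIPField.register_lookup(NetHost)            # enables host__net_host=...
VarbinaryIPField.register_lookup(NetHostContained)   # enables host__net_host_contained=...
```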
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nautobot/ipam/lookups.py b/nautobot/ipam/lookups.py
--- a/nautobot/ipam/lookups.py
+++ b/nautobot/ipam/lookups.py
@@ -8,7 +8,9 @@
return "HEX(broadcast)"
-def _mysql_varbin_to_hex(lhs):
+def _mysql_varbin_to_hex(lhs, alias=None):
+ if alias:
+ return f"HEX({alias}.{lhs})"
return f"HEX({lhs})"
@@ -20,7 +22,9 @@
return f"right(broadcast::text, -1)::varbit::bit({length})"
-def _postgresql_varbin_to_integer(lhs, length):
+def _postgresql_varbin_to_integer(lhs, length, alias=None):
+ if alias:
+ return f"right({alias}.{lhs}::text, -1)::varbit::bit({length})"
return f"right({lhs}::text, -1)::varbit::bit({length})"
@@ -33,7 +37,7 @@
return str(hex(int(ip)))[2:].zfill(int(length / 4))
-def get_ip_info(field_name, ip_str):
+def get_ip_info(field_name, ip_str, alias=None):
"""Function to set all details about an IP, that may be needed."""
ip_details = IPDetails()
ip = netaddr.IPNetwork(ip_str)
@@ -51,7 +55,7 @@
ip_details.bcast_addr = f"'{py_to_hex(ip[-1], ip_details.length)}'"
ip_details.q_net = _mysql_varbin_to_network()
ip_details.q_bcast = _mysql_varbin_to_broadcast()
- ip_details.q_ip = _mysql_varbin_to_hex(field_name)
+ ip_details.q_ip = _mysql_varbin_to_hex(field_name, alias=alias)
elif _connection.vendor == "postgresql":
ip_details.rhs = bin(int(ip_details.addr))[2:].zfill(ip_details.length)
@@ -60,7 +64,7 @@
ip_details.bcast_addr = f"B'{bin(int(ip[-1]))[2:].zfill(ip_details.length)}'"
ip_details.q_net = _postgresql_varbin_to_network(field_name, ip_details.length)
ip_details.q_bcast = _postgresql_varbin_to_broadcast(ip_details.length)
- ip_details.q_ip = _postgresql_varbin_to_integer(field_name, ip_details.length)
+ ip_details.q_ip = _postgresql_varbin_to_integer(field_name, ip_details.length, alias=alias)
return ip_details
@@ -133,7 +137,7 @@
raise NotSupportedError(f"Lookup for network field does not include the {self.lookup_name} lookup.")
if field_name == "host" and self.lookup_name not in ["net_host", "net_host_contained", "net_in"]:
raise NotSupportedError(f"Lookup for host field does not include the {self.lookup_name} lookup.")
- self.ip = get_ip_info(field_name, self.rhs)
+ self.ip = get_ip_info(field_name, self.rhs, alias=self.lhs.alias)
return str(self.ip.ip)
def process_rhs(self, qn, connection):
@@ -199,7 +203,7 @@
field_name = self.lhs.field.name
if field_name != "host":
raise NotSupportedError(f"Lookup only provided on the host fields, not {field_name}.")
- self.ip = get_ip_info(field_name, self.rhs)
+ self.ip = get_ip_info(field_name, self.rhs, alias=self.lhs.alias)
return str(self.ip.ip)
def process_rhs(self, qn, connection):
@@ -227,7 +231,7 @@
raise NotSupportedError(f"Lookup only provided on the host field, not {field_name}.")
self.ips = []
for _ip in self.rhs:
- ip = get_ip_info(field_name, _ip)
+ ip = get_ip_info(field_name, _ip, alias=self.lhs.alias)
self.ips.append(ip)
# This is to satisfy an issue with django cacheops, specifically this line:
# https://github.com/Suor/django-cacheops/blob/a5ed1ac28c7259f5ad005e596cc045d1d61e2c51/cacheops/query.py#L175
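The patch above threads an optional table alias through the generated SQL fragments, so the raw column references stay valid when the field is reached through related-model filtering and Django compiles the column under a table alias. A hedged sketch of that kind of query follows; the model and relation names are assumptions, and it would be run inside a configured Nautobot shell:
```python
# Sketch: a joined queryset whose compiled SQL may refer to "host" under a table alias.
# IPAddress and its "vrf" relation are assumed names; adjust to the actual schema.
from nautobot.ipam.models import IPAddress

qs = IPAddress.objects.filter(
    vrf__name="prod",                        # related filter, so a JOIN is compiled
    host__net_host_contained="10.0.0.0/8",   # custom lookup from the patched module
)
print(qs.query)  # inspect the generated SQL and the alias-qualified column
```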
|
{"golden_diff": "diff --git a/nautobot/ipam/lookups.py b/nautobot/ipam/lookups.py\n--- a/nautobot/ipam/lookups.py\n+++ b/nautobot/ipam/lookups.py\n@@ -8,7 +8,9 @@\n return \"HEX(broadcast)\"\n \n \n-def _mysql_varbin_to_hex(lhs):\n+def _mysql_varbin_to_hex(lhs, alias=None):\n+ if alias:\n+ return f\"HEX({alias}.{lhs})\"\n return f\"HEX({lhs})\"\n \n \n@@ -20,7 +22,9 @@\n return f\"right(broadcast::text, -1)::varbit::bit({length})\"\n \n \n-def _postgresql_varbin_to_integer(lhs, length):\n+def _postgresql_varbin_to_integer(lhs, length, alias=None):\n+ if alias:\n+ return f\"right({alias}.{lhs}::text, -1)::varbit::bit({length})\"\n return f\"right({lhs}::text, -1)::varbit::bit({length})\"\n \n \n@@ -33,7 +37,7 @@\n return str(hex(int(ip)))[2:].zfill(int(length / 4))\n \n \n-def get_ip_info(field_name, ip_str):\n+def get_ip_info(field_name, ip_str, alias=None):\n \"\"\"Function to set all details about an IP, that may be needed.\"\"\"\n ip_details = IPDetails()\n ip = netaddr.IPNetwork(ip_str)\n@@ -51,7 +55,7 @@\n ip_details.bcast_addr = f\"'{py_to_hex(ip[-1], ip_details.length)}'\"\n ip_details.q_net = _mysql_varbin_to_network()\n ip_details.q_bcast = _mysql_varbin_to_broadcast()\n- ip_details.q_ip = _mysql_varbin_to_hex(field_name)\n+ ip_details.q_ip = _mysql_varbin_to_hex(field_name, alias=alias)\n \n elif _connection.vendor == \"postgresql\":\n ip_details.rhs = bin(int(ip_details.addr))[2:].zfill(ip_details.length)\n@@ -60,7 +64,7 @@\n ip_details.bcast_addr = f\"B'{bin(int(ip[-1]))[2:].zfill(ip_details.length)}'\"\n ip_details.q_net = _postgresql_varbin_to_network(field_name, ip_details.length)\n ip_details.q_bcast = _postgresql_varbin_to_broadcast(ip_details.length)\n- ip_details.q_ip = _postgresql_varbin_to_integer(field_name, ip_details.length)\n+ ip_details.q_ip = _postgresql_varbin_to_integer(field_name, ip_details.length, alias=alias)\n \n return ip_details\n \n@@ -133,7 +137,7 @@\n raise NotSupportedError(f\"Lookup for network field does not include the {self.lookup_name} lookup.\")\n if field_name == \"host\" and self.lookup_name not in [\"net_host\", \"net_host_contained\", \"net_in\"]:\n raise NotSupportedError(f\"Lookup for host field does not include the {self.lookup_name} lookup.\")\n- self.ip = get_ip_info(field_name, self.rhs)\n+ self.ip = get_ip_info(field_name, self.rhs, alias=self.lhs.alias)\n return str(self.ip.ip)\n \n def process_rhs(self, qn, connection):\n@@ -199,7 +203,7 @@\n field_name = self.lhs.field.name\n if field_name != \"host\":\n raise NotSupportedError(f\"Lookup only provided on the host fields, not {field_name}.\")\n- self.ip = get_ip_info(field_name, self.rhs)\n+ self.ip = get_ip_info(field_name, self.rhs, alias=self.lhs.alias)\n return str(self.ip.ip)\n \n def process_rhs(self, qn, connection):\n@@ -227,7 +231,7 @@\n raise NotSupportedError(f\"Lookup only provided on the host field, not {field_name}.\")\n self.ips = []\n for _ip in self.rhs:\n- ip = get_ip_info(field_name, _ip)\n+ ip = get_ip_info(field_name, _ip, alias=self.lhs.alias)\n self.ips.append(ip)\n # This is to satisfy an issue with django cacheops, specifically this line:\n # https://github.com/Suor/django-cacheops/blob/a5ed1ac28c7259f5ad005e596cc045d1d61e2c51/cacheops/query.py#L175\n", "issue": "Pre-Release Github Action Workflow Doesn't Publish Python Version Specific Builds\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. 
If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Nautobot version (Docker tag too if applicable): 2.0.0-alpha.1\r\n* Python version: 3.7-3.10\r\n* Database platform, version: NA\r\n* Middleware(s): NA\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n### Steps to Reproduce\r\n1. Create Pre-Release Release named `2.0.0-alpha.1`\r\n2. Wait for CI to Run\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\n\r\nDocker containers built following or normal release convention `2.0.0-alpha.1` as well as `2.0.0-alpha.1-py3.10`, `2.0.0-alpha.1-py3.9`, etc.\r\n\r\n\r\n\r\n<!-- What happened instead? -->\r\n### Observed Behavior\r\n\r\nWhichever container is published last \"wins\" and becomes the `2.0.0-alpha.1` build: https://hub.docker.com/layers/networktocode/nautobot/2.0.0-alpha.1/images/sha256-6f41c067abac092cbed73b5b6176f771e8432e6cbd380b8d5f9013ba521d87a6?context=explore\n", "before_files": [{"content": "import netaddr\nfrom django.db import NotSupportedError\nfrom django.db import connection as _connection\nfrom django.db.models import Lookup, lookups\n\n\ndef _mysql_varbin_to_broadcast():\n return \"HEX(broadcast)\"\n\n\ndef _mysql_varbin_to_hex(lhs):\n return f\"HEX({lhs})\"\n\n\ndef _mysql_varbin_to_network():\n return \"HEX(network)\"\n\n\ndef _postgresql_varbin_to_broadcast(length):\n return f\"right(broadcast::text, -1)::varbit::bit({length})\"\n\n\ndef _postgresql_varbin_to_integer(lhs, length):\n return f\"right({lhs}::text, -1)::varbit::bit({length})\"\n\n\ndef _postgresql_varbin_to_network(lhs, length):\n # convert to bitstring, 0 out everything larger than prefix_length\n return f\"lpad(right({lhs}::text, -1)::varbit::text, prefix_length, '0')::bit({length})\"\n\n\ndef py_to_hex(ip, length):\n return str(hex(int(ip)))[2:].zfill(int(length / 4))\n\n\ndef get_ip_info(field_name, ip_str):\n \"\"\"Function to set all details about an IP, that may be needed.\"\"\"\n ip_details = IPDetails()\n ip = netaddr.IPNetwork(ip_str)\n if field_name == \"network\":\n ip_details.addr = ip.network\n elif field_name == \"host\":\n ip_details.addr = ip.ip\n ip_details.ip = ip\n ip_details.prefix = ip.prefixlen\n ip_details.length = ip_details.to_len[ip.version]\n\n if _connection.vendor == \"mysql\":\n ip_details.rhs = py_to_hex(ip.ip, ip_details.length)\n ip_details.net_addr = f\"'{py_to_hex(ip.network, ip_details.length)}'\"\n ip_details.bcast_addr = f\"'{py_to_hex(ip[-1], ip_details.length)}'\"\n ip_details.q_net = _mysql_varbin_to_network()\n ip_details.q_bcast = _mysql_varbin_to_broadcast()\n ip_details.q_ip = _mysql_varbin_to_hex(field_name)\n\n elif _connection.vendor == \"postgresql\":\n ip_details.rhs = bin(int(ip_details.addr))[2:].zfill(ip_details.length)\n ip_details.addr_str = 
f\"B'{bin(int(ip_details.addr))[2:].zfill(ip_details.length)}'\"\n ip_details.net_addr = f\"B'{bin(int(ip.network))[2:].zfill(ip_details.length)}'\"\n ip_details.bcast_addr = f\"B'{bin(int(ip[-1]))[2:].zfill(ip_details.length)}'\"\n ip_details.q_net = _postgresql_varbin_to_network(field_name, ip_details.length)\n ip_details.q_bcast = _postgresql_varbin_to_broadcast(ip_details.length)\n ip_details.q_ip = _postgresql_varbin_to_integer(field_name, ip_details.length)\n\n return ip_details\n\n\nclass IPDetails:\n \"\"\"Class for setting up all details about an IP they may be needed\"\"\"\n\n net = None\n addr = None\n ip = None\n prefix = None\n length = None\n addr_str = None\n rhs = None\n net_addr = None\n bcast_addr = None\n q_net = None\n q_bcast = None\n q_ip = None\n to_len = {4: 32, 6: 128}\n\n\nclass StringMatchMixin:\n def process_lhs(self, qn, connection, lhs=None):\n lhs = lhs or self.lhs\n lhs_string, lhs_params = qn.compile(lhs)\n if connection.vendor == \"postgresql\":\n raise NotSupportedError(\"Lookup not supported on postgresql.\")\n return f\"INET6_NTOA({lhs_string})\", lhs_params\n\n\nclass Exact(StringMatchMixin, lookups.Exact):\n pass\n\n\nclass IExact(StringMatchMixin, lookups.IExact):\n pass\n\n\nclass EndsWith(StringMatchMixin, lookups.EndsWith):\n pass\n\n\nclass IEndsWith(StringMatchMixin, lookups.IEndsWith):\n pass\n\n\nclass StartsWith(StringMatchMixin, lookups.StartsWith):\n pass\n\n\nclass IStartsWith(StringMatchMixin, lookups.IStartsWith):\n pass\n\n\nclass Regex(StringMatchMixin, lookups.Regex):\n pass\n\n\nclass IRegex(StringMatchMixin, lookups.IRegex):\n pass\n\n\nclass NetworkFieldMixin:\n def get_prep_lookup(self):\n field_name = self.lhs.field.name\n if field_name not in [\"host\", \"network\"]:\n raise NotSupportedError(f\"Lookup only provided on the host and network fields, not {field_name}.\")\n if field_name == \"network\" and self.lookup_name in [\"net_host\", \"net_host_contained\", \"net_in\"]:\n raise NotSupportedError(f\"Lookup for network field does not include the {self.lookup_name} lookup.\")\n if field_name == \"host\" and self.lookup_name not in [\"net_host\", \"net_host_contained\", \"net_in\"]:\n raise NotSupportedError(f\"Lookup for host field does not include the {self.lookup_name} lookup.\")\n self.ip = get_ip_info(field_name, self.rhs)\n return str(self.ip.ip)\n\n def process_rhs(self, qn, connection):\n sql, params = super().process_rhs(qn, connection)\n params[0] = self.ip.rhs\n return sql, params\n\n\nclass NetEquals(NetworkFieldMixin, Lookup):\n lookup_name = \"net_equals\"\n\n def as_sql(self, qn, connection):\n _, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n query = f\"prefix_length = {self.ip.prefix} AND {rhs} = {self.ip.q_ip}\"\n return query, lhs_params + rhs_params\n\n\nclass NetContainsOrEquals(NetworkFieldMixin, Lookup):\n lookup_name = \"net_contains_or_equals\"\n\n def as_sql(self, qn, connection):\n _, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n query = f\"prefix_length <= {self.ip.prefix} AND {rhs} BETWEEN {self.ip.q_net} AND {self.ip.q_bcast}\"\n return query, lhs_params + rhs_params\n\n\nclass NetContains(NetworkFieldMixin, Lookup):\n lookup_name = \"net_contains\"\n\n def as_sql(self, qn, connection):\n _, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n query = f\"prefix_length < {self.ip.prefix} AND {rhs} BETWEEN {self.ip.q_net} AND {self.ip.q_bcast}\"\n 
return query, lhs_params + rhs_params\n\n\nclass NetContainedOrEqual(NetworkFieldMixin, Lookup):\n lookup_name = \"net_contained_or_equal\"\n\n def as_sql(self, qn, connection):\n _, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n query = f\"prefix_length >= {self.ip.prefix} AND {self.ip.q_net} BETWEEN {rhs} AND {self.ip.bcast_addr}\"\n return query, lhs_params + rhs_params\n\n\nclass NetContained(NetworkFieldMixin, Lookup):\n lookup_name = \"net_contained\"\n\n def as_sql(self, qn, connection):\n _, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n query = f\"prefix_length > {self.ip.prefix} AND {self.ip.q_net} BETWEEN {rhs} AND {self.ip.bcast_addr}\"\n return query, lhs_params + rhs_params\n\n\nclass NetHost(Lookup):\n lookup_name = \"net_host\"\n\n def get_prep_lookup(self):\n field_name = self.lhs.field.name\n if field_name != \"host\":\n raise NotSupportedError(f\"Lookup only provided on the host fields, not {field_name}.\")\n self.ip = get_ip_info(field_name, self.rhs)\n return str(self.ip.ip)\n\n def process_rhs(self, qn, connection):\n sql, params = super().process_rhs(qn, connection)\n params[0] = self.ip.rhs\n return sql, params\n\n def process_lhs(self, qn, connection, lhs=None):\n lhs = lhs or self.lhs\n _, lhs_params = qn.compile(lhs)\n return self.ip.q_ip, lhs_params\n\n def as_sql(self, qn, connection):\n lhs, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n return f\"{lhs} = {rhs}\", lhs_params + rhs_params\n\n\nclass NetIn(Lookup):\n lookup_name = \"net_in\"\n\n def get_prep_lookup(self):\n field_name = self.lhs.field.name\n if field_name != \"host\":\n raise NotSupportedError(f\"Lookup only provided on the host field, not {field_name}.\")\n self.ips = []\n for _ip in self.rhs:\n ip = get_ip_info(field_name, _ip)\n self.ips.append(ip)\n # This is to satisfy an issue with django cacheops, specifically this line:\n # https://github.com/Suor/django-cacheops/blob/a5ed1ac28c7259f5ad005e596cc045d1d61e2c51/cacheops/query.py#L175\n # Without 1, and one 1 value as %s, will result in stacktrace. 
A non-impacting condition is added to the query\n if _connection.vendor == \"mysql\":\n self.query_starter = \"'1' NOT IN %s AND \"\n elif _connection.vendor == \"postgresql\":\n self.query_starter = \"'1' != ANY(%s) AND \"\n return self.rhs\n\n def as_sql(self, qn, connection):\n _, lhs_params = self.process_lhs(qn, connection)\n _, rhs_params = self.process_rhs(qn, connection)\n query = self.query_starter\n query += \"OR \".join(f\"{ip.q_ip} BETWEEN {ip.net_addr} AND {ip.bcast_addr} \" for ip in self.ips)\n return query, lhs_params + rhs_params\n\n\nclass NetHostContained(NetworkFieldMixin, Lookup):\n lookup_name = \"net_host_contained\"\n\n def as_sql(self, qn, connection):\n _, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n query = f\"{self.ip.q_ip} BETWEEN {rhs} AND {self.ip.bcast_addr}\"\n return query, lhs_params + rhs_params\n\n\nclass NetFamily(Lookup):\n lookup_name = \"family\"\n\n def get_prep_lookup(self):\n if self.rhs not in [4, 6]:\n raise NotSupportedError(\"Family must be either integer of value 4 or 6\")\n if self.rhs == 6:\n self.rhs = 16\n return self.rhs\n\n def process_lhs(self, qn, connection, lhs=None):\n lhs = lhs or self.lhs\n lhs_string, lhs_params = qn.compile(lhs)\n return f\"LENGTH({lhs_string})\", lhs_params\n\n def as_sql(self, qn, connection):\n lhs, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n return f\"{lhs} = {rhs}\", lhs_params + rhs_params\n", "path": "nautobot/ipam/lookups.py"}], "after_files": [{"content": "import netaddr\nfrom django.db import NotSupportedError\nfrom django.db import connection as _connection\nfrom django.db.models import Lookup, lookups\n\n\ndef _mysql_varbin_to_broadcast():\n return \"HEX(broadcast)\"\n\n\ndef _mysql_varbin_to_hex(lhs, alias=None):\n if alias:\n return f\"HEX({alias}.{lhs})\"\n return f\"HEX({lhs})\"\n\n\ndef _mysql_varbin_to_network():\n return \"HEX(network)\"\n\n\ndef _postgresql_varbin_to_broadcast(length):\n return f\"right(broadcast::text, -1)::varbit::bit({length})\"\n\n\ndef _postgresql_varbin_to_integer(lhs, length, alias=None):\n if alias:\n return f\"right({alias}.{lhs}::text, -1)::varbit::bit({length})\"\n return f\"right({lhs}::text, -1)::varbit::bit({length})\"\n\n\ndef _postgresql_varbin_to_network(lhs, length):\n # convert to bitstring, 0 out everything larger than prefix_length\n return f\"lpad(right({lhs}::text, -1)::varbit::text, prefix_length, '0')::bit({length})\"\n\n\ndef py_to_hex(ip, length):\n return str(hex(int(ip)))[2:].zfill(int(length / 4))\n\n\ndef get_ip_info(field_name, ip_str, alias=None):\n \"\"\"Function to set all details about an IP, that may be needed.\"\"\"\n ip_details = IPDetails()\n ip = netaddr.IPNetwork(ip_str)\n if field_name == \"network\":\n ip_details.addr = ip.network\n elif field_name == \"host\":\n ip_details.addr = ip.ip\n ip_details.ip = ip\n ip_details.prefix = ip.prefixlen\n ip_details.length = ip_details.to_len[ip.version]\n\n if _connection.vendor == \"mysql\":\n ip_details.rhs = py_to_hex(ip.ip, ip_details.length)\n ip_details.net_addr = f\"'{py_to_hex(ip.network, ip_details.length)}'\"\n ip_details.bcast_addr = f\"'{py_to_hex(ip[-1], ip_details.length)}'\"\n ip_details.q_net = _mysql_varbin_to_network()\n ip_details.q_bcast = _mysql_varbin_to_broadcast()\n ip_details.q_ip = _mysql_varbin_to_hex(field_name, alias=alias)\n\n elif _connection.vendor == \"postgresql\":\n ip_details.rhs = bin(int(ip_details.addr))[2:].zfill(ip_details.length)\n 
ip_details.addr_str = f\"B'{bin(int(ip_details.addr))[2:].zfill(ip_details.length)}'\"\n ip_details.net_addr = f\"B'{bin(int(ip.network))[2:].zfill(ip_details.length)}'\"\n ip_details.bcast_addr = f\"B'{bin(int(ip[-1]))[2:].zfill(ip_details.length)}'\"\n ip_details.q_net = _postgresql_varbin_to_network(field_name, ip_details.length)\n ip_details.q_bcast = _postgresql_varbin_to_broadcast(ip_details.length)\n ip_details.q_ip = _postgresql_varbin_to_integer(field_name, ip_details.length, alias=alias)\n\n return ip_details\n\n\nclass IPDetails:\n \"\"\"Class for setting up all details about an IP they may be needed\"\"\"\n\n net = None\n addr = None\n ip = None\n prefix = None\n length = None\n addr_str = None\n rhs = None\n net_addr = None\n bcast_addr = None\n q_net = None\n q_bcast = None\n q_ip = None\n to_len = {4: 32, 6: 128}\n\n\nclass StringMatchMixin:\n def process_lhs(self, qn, connection, lhs=None):\n lhs = lhs or self.lhs\n lhs_string, lhs_params = qn.compile(lhs)\n if connection.vendor == \"postgresql\":\n raise NotSupportedError(\"Lookup not supported on postgresql.\")\n return f\"INET6_NTOA({lhs_string})\", lhs_params\n\n\nclass Exact(StringMatchMixin, lookups.Exact):\n pass\n\n\nclass IExact(StringMatchMixin, lookups.IExact):\n pass\n\n\nclass EndsWith(StringMatchMixin, lookups.EndsWith):\n pass\n\n\nclass IEndsWith(StringMatchMixin, lookups.IEndsWith):\n pass\n\n\nclass StartsWith(StringMatchMixin, lookups.StartsWith):\n pass\n\n\nclass IStartsWith(StringMatchMixin, lookups.IStartsWith):\n pass\n\n\nclass Regex(StringMatchMixin, lookups.Regex):\n pass\n\n\nclass IRegex(StringMatchMixin, lookups.IRegex):\n pass\n\n\nclass NetworkFieldMixin:\n def get_prep_lookup(self):\n field_name = self.lhs.field.name\n if field_name not in [\"host\", \"network\"]:\n raise NotSupportedError(f\"Lookup only provided on the host and network fields, not {field_name}.\")\n if field_name == \"network\" and self.lookup_name in [\"net_host\", \"net_host_contained\", \"net_in\"]:\n raise NotSupportedError(f\"Lookup for network field does not include the {self.lookup_name} lookup.\")\n if field_name == \"host\" and self.lookup_name not in [\"net_host\", \"net_host_contained\", \"net_in\"]:\n raise NotSupportedError(f\"Lookup for host field does not include the {self.lookup_name} lookup.\")\n self.ip = get_ip_info(field_name, self.rhs, alias=self.lhs.alias)\n return str(self.ip.ip)\n\n def process_rhs(self, qn, connection):\n sql, params = super().process_rhs(qn, connection)\n params[0] = self.ip.rhs\n return sql, params\n\n\nclass NetEquals(NetworkFieldMixin, Lookup):\n lookup_name = \"net_equals\"\n\n def as_sql(self, qn, connection):\n _, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n query = f\"prefix_length = {self.ip.prefix} AND {rhs} = {self.ip.q_ip}\"\n return query, lhs_params + rhs_params\n\n\nclass NetContainsOrEquals(NetworkFieldMixin, Lookup):\n lookup_name = \"net_contains_or_equals\"\n\n def as_sql(self, qn, connection):\n _, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n query = f\"prefix_length <= {self.ip.prefix} AND {rhs} BETWEEN {self.ip.q_net} AND {self.ip.q_bcast}\"\n return query, lhs_params + rhs_params\n\n\nclass NetContains(NetworkFieldMixin, Lookup):\n lookup_name = \"net_contains\"\n\n def as_sql(self, qn, connection):\n _, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n query = f\"prefix_length < {self.ip.prefix} AND 
{rhs} BETWEEN {self.ip.q_net} AND {self.ip.q_bcast}\"\n return query, lhs_params + rhs_params\n\n\nclass NetContainedOrEqual(NetworkFieldMixin, Lookup):\n lookup_name = \"net_contained_or_equal\"\n\n def as_sql(self, qn, connection):\n _, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n query = f\"prefix_length >= {self.ip.prefix} AND {self.ip.q_net} BETWEEN {rhs} AND {self.ip.bcast_addr}\"\n return query, lhs_params + rhs_params\n\n\nclass NetContained(NetworkFieldMixin, Lookup):\n lookup_name = \"net_contained\"\n\n def as_sql(self, qn, connection):\n _, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n query = f\"prefix_length > {self.ip.prefix} AND {self.ip.q_net} BETWEEN {rhs} AND {self.ip.bcast_addr}\"\n return query, lhs_params + rhs_params\n\n\nclass NetHost(Lookup):\n lookup_name = \"net_host\"\n\n def get_prep_lookup(self):\n field_name = self.lhs.field.name\n if field_name != \"host\":\n raise NotSupportedError(f\"Lookup only provided on the host fields, not {field_name}.\")\n self.ip = get_ip_info(field_name, self.rhs, alias=self.lhs.alias)\n return str(self.ip.ip)\n\n def process_rhs(self, qn, connection):\n sql, params = super().process_rhs(qn, connection)\n params[0] = self.ip.rhs\n return sql, params\n\n def process_lhs(self, qn, connection, lhs=None):\n lhs = lhs or self.lhs\n _, lhs_params = qn.compile(lhs)\n return self.ip.q_ip, lhs_params\n\n def as_sql(self, qn, connection):\n lhs, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n return f\"{lhs} = {rhs}\", lhs_params + rhs_params\n\n\nclass NetIn(Lookup):\n lookup_name = \"net_in\"\n\n def get_prep_lookup(self):\n field_name = self.lhs.field.name\n if field_name != \"host\":\n raise NotSupportedError(f\"Lookup only provided on the host field, not {field_name}.\")\n self.ips = []\n for _ip in self.rhs:\n ip = get_ip_info(field_name, _ip, alias=self.lhs.alias)\n self.ips.append(ip)\n # This is to satisfy an issue with django cacheops, specifically this line:\n # https://github.com/Suor/django-cacheops/blob/a5ed1ac28c7259f5ad005e596cc045d1d61e2c51/cacheops/query.py#L175\n # Without 1, and one 1 value as %s, will result in stacktrace. 
A non-impacting condition is added to the query\n if _connection.vendor == \"mysql\":\n self.query_starter = \"'1' NOT IN %s AND \"\n elif _connection.vendor == \"postgresql\":\n self.query_starter = \"'1' != ANY(%s) AND \"\n return self.rhs\n\n def as_sql(self, qn, connection):\n _, lhs_params = self.process_lhs(qn, connection)\n _, rhs_params = self.process_rhs(qn, connection)\n query = self.query_starter\n query += \"OR \".join(f\"{ip.q_ip} BETWEEN {ip.net_addr} AND {ip.bcast_addr} \" for ip in self.ips)\n return query, lhs_params + rhs_params\n\n\nclass NetHostContained(NetworkFieldMixin, Lookup):\n lookup_name = \"net_host_contained\"\n\n def as_sql(self, qn, connection):\n _, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n query = f\"{self.ip.q_ip} BETWEEN {rhs} AND {self.ip.bcast_addr}\"\n return query, lhs_params + rhs_params\n\n\nclass NetFamily(Lookup):\n lookup_name = \"family\"\n\n def get_prep_lookup(self):\n if self.rhs not in [4, 6]:\n raise NotSupportedError(\"Family must be either integer of value 4 or 6\")\n if self.rhs == 6:\n self.rhs = 16\n return self.rhs\n\n def process_lhs(self, qn, connection, lhs=None):\n lhs = lhs or self.lhs\n lhs_string, lhs_params = qn.compile(lhs)\n return f\"LENGTH({lhs_string})\", lhs_params\n\n def as_sql(self, qn, connection):\n lhs, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n return f\"{lhs} = {rhs}\", lhs_params + rhs_params\n", "path": "nautobot/ipam/lookups.py"}]}
| 4,023 | 980 |
gh_patches_debug_5102
|
rasdani/github-patches
|
git_diff
|
encode__starlette-623
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GraphQL response should not include error key if no error occurred
The [GraphQL Spec](https://graphql.github.io/graphql-spec/June2018/#sec-Errors) states that:
> If no errors were encountered during the requested operation, the errors entry should not be present in the result.
Currently, if no errors are encountered, starlette will return `{"data": {...}, "errors": null}`.
This is only a small thing, but enough to break some clients.
I have a PR for this incoming.
--- END ISSUE ---
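For reference, the behaviour the spec asks for can be sketched in a few lines of plain Python; this is illustrative only, and `build_payload` is not a function from the codebase:
```python
# Illustrative sketch of spec-compliant payload construction; not library code.
def build_payload(data, errors=None):
    payload = {"data": data}
    if errors:  # omit the key entirely when there is nothing to report
        payload["errors"] = errors
    return payload

build_payload({"hello": "world"})                  # {'data': {'hello': 'world'}}
build_payload(None, errors=[{"message": "boom"}])  # "errors" key present only here
```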
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/graphql.py`
Content:
```
1 import json
2 import typing
3
4 from starlette import status
5 from starlette.background import BackgroundTasks
6 from starlette.concurrency import run_in_threadpool
7 from starlette.requests import Request
8 from starlette.responses import HTMLResponse, JSONResponse, PlainTextResponse, Response
9 from starlette.types import Receive, Scope, Send
10
11 try:
12 import graphene
13 from graphql.execution.executors.asyncio import AsyncioExecutor
14 from graphql.error import format_error as format_graphql_error
15 from graphql.error import GraphQLError
16 except ImportError: # pragma: nocover
17 graphene = None # type: ignore
18 AsyncioExecutor = None # type: ignore
19 format_graphql_error = None # type: ignore
20 GraphQLError = None # type: ignore
21
22
23 class GraphQLApp:
24 def __init__(
25 self,
26 schema: "graphene.Schema",
27 executor: typing.Any = None,
28 executor_class: type = None,
29 graphiql: bool = True,
30 ) -> None:
31 self.schema = schema
32 self.graphiql = graphiql
33 if executor is None:
34 # New style in 0.10.0. Use 'executor_class'.
35 # See issue https://github.com/encode/starlette/issues/242
36 self.executor = executor
37 self.executor_class = executor_class
38 self.is_async = executor_class is not None and issubclass(
39 executor_class, AsyncioExecutor
40 )
41 else:
42 # Old style. Use 'executor'.
43 # We should remove this in the next median/major version bump.
44 self.executor = executor
45 self.executor_class = None
46 self.is_async = isinstance(executor, AsyncioExecutor)
47
48 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
49 if self.executor is None and self.executor_class is not None:
50 self.executor = self.executor_class()
51
52 request = Request(scope, receive=receive)
53 response = await self.handle_graphql(request)
54 await response(scope, receive, send)
55
56 async def handle_graphql(self, request: Request) -> Response:
57 if request.method in ("GET", "HEAD"):
58 if "text/html" in request.headers.get("Accept", ""):
59 if not self.graphiql:
60 return PlainTextResponse(
61 "Not Found", status_code=status.HTTP_404_NOT_FOUND
62 )
63 return await self.handle_graphiql(request)
64
65 data = request.query_params # type: typing.Mapping[str, typing.Any]
66
67 elif request.method == "POST":
68 content_type = request.headers.get("Content-Type", "")
69
70 if "application/json" in content_type:
71 data = await request.json()
72 elif "application/graphql" in content_type:
73 body = await request.body()
74 text = body.decode()
75 data = {"query": text}
76 elif "query" in request.query_params:
77 data = request.query_params
78 else:
79 return PlainTextResponse(
80 "Unsupported Media Type",
81 status_code=status.HTTP_415_UNSUPPORTED_MEDIA_TYPE,
82 )
83
84 else:
85 return PlainTextResponse(
86 "Method Not Allowed", status_code=status.HTTP_405_METHOD_NOT_ALLOWED
87 )
88
89 try:
90 query = data["query"]
91 variables = data.get("variables")
92 operation_name = data.get("operationName")
93 except KeyError:
94 return PlainTextResponse(
95 "No GraphQL query found in the request",
96 status_code=status.HTTP_400_BAD_REQUEST,
97 )
98
99 background = BackgroundTasks()
100 context = {"request": request, "background": background}
101
102 result = await self.execute(
103 query, variables=variables, context=context, operation_name=operation_name
104 )
105 error_data = (
106 [format_graphql_error(err) for err in result.errors]
107 if result.errors
108 else None
109 )
110 response_data = {"data": result.data, "errors": error_data}
111 status_code = (
112 status.HTTP_400_BAD_REQUEST if result.errors else status.HTTP_200_OK
113 )
114
115 return JSONResponse(
116 response_data, status_code=status_code, background=background
117 )
118
119 async def execute( # type: ignore
120 self, query, variables=None, context=None, operation_name=None
121 ):
122 if self.is_async:
123 return await self.schema.execute(
124 query,
125 variables=variables,
126 operation_name=operation_name,
127 executor=self.executor,
128 return_promise=True,
129 context=context,
130 )
131 else:
132 return await run_in_threadpool(
133 self.schema.execute,
134 query,
135 variables=variables,
136 operation_name=operation_name,
137 context=context,
138 )
139
140 async def handle_graphiql(self, request: Request) -> Response:
141 text = GRAPHIQL.replace("{{REQUEST_PATH}}", json.dumps(request.url.path))
142 return HTMLResponse(text)
143
144
145 GRAPHIQL = """
146 <!--
147 * Copyright (c) Facebook, Inc.
148 * All rights reserved.
149 *
150 * This source code is licensed under the license found in the
151 * LICENSE file in the root directory of this source tree.
152 -->
153 <!DOCTYPE html>
154 <html>
155 <head>
156 <style>
157 body {
158 height: 100%;
159 margin: 0;
160 width: 100%;
161 overflow: hidden;
162 }
163 #graphiql {
164 height: 100vh;
165 }
166 </style>
167 <!--
168 This GraphiQL example depends on Promise and fetch, which are available in
169 modern browsers, but can be "polyfilled" for older browsers.
170 GraphiQL itself depends on React DOM.
171 If you do not want to rely on a CDN, you can host these files locally or
172 include them directly in your favored resource bunder.
173 -->
174 <link href="//cdn.jsdelivr.net/npm/[email protected]/graphiql.css" rel="stylesheet"/>
175 <script src="//cdn.jsdelivr.net/npm/[email protected]/fetch.min.js"></script>
176 <script src="//cdn.jsdelivr.net/npm/[email protected]/umd/react.production.min.js"></script>
177 <script src="//cdn.jsdelivr.net/npm/[email protected]/umd/react-dom.production.min.js"></script>
178 <script src="//cdn.jsdelivr.net/npm/[email protected]/graphiql.min.js"></script>
179 </head>
180 <body>
181 <div id="graphiql">Loading...</div>
182 <script>
183 /**
184 * This GraphiQL example illustrates how to use some of GraphiQL's props
185 * in order to enable reading and updating the URL parameters, making
186 * link sharing of queries a little bit easier.
187 *
188 * This is only one example of this kind of feature, GraphiQL exposes
189 * various React params to enable interesting integrations.
190 */
191 // Parse the search string to get url parameters.
192 var search = window.location.search;
193 var parameters = {};
194 search.substr(1).split('&').forEach(function (entry) {
195 var eq = entry.indexOf('=');
196 if (eq >= 0) {
197 parameters[decodeURIComponent(entry.slice(0, eq))] =
198 decodeURIComponent(entry.slice(eq + 1));
199 }
200 });
201 // if variables was provided, try to format it.
202 if (parameters.variables) {
203 try {
204 parameters.variables =
205 JSON.stringify(JSON.parse(parameters.variables), null, 2);
206 } catch (e) {
207 // Do nothing, we want to display the invalid JSON as a string, rather
208 // than present an error.
209 }
210 }
211 // When the query and variables string is edited, update the URL bar so
212 // that it can be easily shared
213 function onEditQuery(newQuery) {
214 parameters.query = newQuery;
215 updateURL();
216 }
217 function onEditVariables(newVariables) {
218 parameters.variables = newVariables;
219 updateURL();
220 }
221 function onEditOperationName(newOperationName) {
222 parameters.operationName = newOperationName;
223 updateURL();
224 }
225 function updateURL() {
226 var newSearch = '?' + Object.keys(parameters).filter(function (key) {
227 return Boolean(parameters[key]);
228 }).map(function (key) {
229 return encodeURIComponent(key) + '=' +
230 encodeURIComponent(parameters[key]);
231 }).join('&');
232 history.replaceState(null, null, newSearch);
233 }
234 // Defines a GraphQL fetcher using the fetch API. You're not required to
235 // use fetch, and could instead implement graphQLFetcher however you like,
236 // as long as it returns a Promise or Observable.
237 function graphQLFetcher(graphQLParams) {
238 // This example expects a GraphQL server at the path /graphql.
239 // Change this to point wherever you host your GraphQL server.
240 return fetch({{REQUEST_PATH}}, {
241 method: 'post',
242 headers: {
243 'Accept': 'application/json',
244 'Content-Type': 'application/json',
245 },
246 body: JSON.stringify(graphQLParams),
247 credentials: 'include',
248 }).then(function (response) {
249 return response.text();
250 }).then(function (responseBody) {
251 try {
252 return JSON.parse(responseBody);
253 } catch (error) {
254 return responseBody;
255 }
256 });
257 }
258 // Render <GraphiQL /> into the body.
259 // See the README in the top level of this module to learn more about
260 // how you can customize GraphiQL by providing different values or
261 // additional child elements.
262 ReactDOM.render(
263 React.createElement(GraphiQL, {
264 fetcher: graphQLFetcher,
265 query: parameters.query,
266 variables: parameters.variables,
267 operationName: parameters.operationName,
268 onEditQuery: onEditQuery,
269 onEditVariables: onEditVariables,
270 onEditOperationName: onEditOperationName
271 }),
272 document.getElementById('graphiql')
273 );
274 </script>
275 </body>
276 </html>
277 """
278
```
--- END FILES ---
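For context, the application defined in the file above is typically mounted as shown in this sketch; it assumes graphene 2.x and graphql-core 2.x, which match the imports in the file, and it is not part of the code being patched:
```python
# Minimal mounting sketch for GraphQLApp; assumes graphene 2.x and graphql-core 2.x.
import graphene

from starlette.applications import Starlette
from starlette.graphql import GraphQLApp


class Query(graphene.ObjectType):
    hello = graphene.String(name=graphene.String(default_value="world"))

    def resolve_hello(self, info, name):
        return f"Hello {name}"


app = Starlette()
app.add_route("/graphql", GraphQLApp(schema=graphene.Schema(query=Query)))
```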
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/starlette/graphql.py b/starlette/graphql.py
--- a/starlette/graphql.py
+++ b/starlette/graphql.py
@@ -107,7 +107,9 @@
if result.errors
else None
)
- response_data = {"data": result.data, "errors": error_data}
+ response_data = {"data": result.data}
+ if error_data:
+ response_data["errors"] = error_data
status_code = (
status.HTTP_400_BAD_REQUEST if result.errors else status.HTTP_200_OK
)
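One way to exercise the patched behaviour is with Starlette's test client. This sketch reuses the `app` from the mounting example earlier and assumes the diff above has been applied:
```python
# Sketch: successful queries should carry no "errors" key after the patch.
from starlette.testclient import TestClient

client = TestClient(app)  # "app" from the mounting sketch above
response = client.post("/graphql", json={"query": "{ hello }"})

assert response.status_code == 200
payload = response.json()
assert payload["data"] == {"hello": "Hello world"}
assert "errors" not in payload  # key omitted on success, per the GraphQL spec
```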
|
{"golden_diff": "diff --git a/starlette/graphql.py b/starlette/graphql.py\n--- a/starlette/graphql.py\n+++ b/starlette/graphql.py\n@@ -107,7 +107,9 @@\n if result.errors\n else None\n )\n- response_data = {\"data\": result.data, \"errors\": error_data}\n+ response_data = {\"data\": result.data}\n+ if error_data:\n+ response_data[\"errors\"] = error_data\n status_code = (\n status.HTTP_400_BAD_REQUEST if result.errors else status.HTTP_200_OK\n )\n", "issue": "GraphQL response should not include error key if no error occured\nThe [GraphQL Spec](https://graphql.github.io/graphql-spec/June2018/#sec-Errors) states that:\r\n> If no errors were encountered during the requested operation, the errors entry should not be present in the result.\r\n\r\nCurrently, if no errors are encountered, starlette will return `{\"data\": {...}, \"errors\": null}`. \r\nThis is only a small thing, but enough to break some clients.\r\n\r\nI have a PR for this incoming.\n", "before_files": [{"content": "import json\nimport typing\n\nfrom starlette import status\nfrom starlette.background import BackgroundTasks\nfrom starlette.concurrency import run_in_threadpool\nfrom starlette.requests import Request\nfrom starlette.responses import HTMLResponse, JSONResponse, PlainTextResponse, Response\nfrom starlette.types import Receive, Scope, Send\n\ntry:\n import graphene\n from graphql.execution.executors.asyncio import AsyncioExecutor\n from graphql.error import format_error as format_graphql_error\n from graphql.error import GraphQLError\nexcept ImportError: # pragma: nocover\n graphene = None # type: ignore\n AsyncioExecutor = None # type: ignore\n format_graphql_error = None # type: ignore\n GraphQLError = None # type: ignore\n\n\nclass GraphQLApp:\n def __init__(\n self,\n schema: \"graphene.Schema\",\n executor: typing.Any = None,\n executor_class: type = None,\n graphiql: bool = True,\n ) -> None:\n self.schema = schema\n self.graphiql = graphiql\n if executor is None:\n # New style in 0.10.0. Use 'executor_class'.\n # See issue https://github.com/encode/starlette/issues/242\n self.executor = executor\n self.executor_class = executor_class\n self.is_async = executor_class is not None and issubclass(\n executor_class, AsyncioExecutor\n )\n else:\n # Old style. 
Use 'executor'.\n # We should remove this in the next median/major version bump.\n self.executor = executor\n self.executor_class = None\n self.is_async = isinstance(executor, AsyncioExecutor)\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if self.executor is None and self.executor_class is not None:\n self.executor = self.executor_class()\n\n request = Request(scope, receive=receive)\n response = await self.handle_graphql(request)\n await response(scope, receive, send)\n\n async def handle_graphql(self, request: Request) -> Response:\n if request.method in (\"GET\", \"HEAD\"):\n if \"text/html\" in request.headers.get(\"Accept\", \"\"):\n if not self.graphiql:\n return PlainTextResponse(\n \"Not Found\", status_code=status.HTTP_404_NOT_FOUND\n )\n return await self.handle_graphiql(request)\n\n data = request.query_params # type: typing.Mapping[str, typing.Any]\n\n elif request.method == \"POST\":\n content_type = request.headers.get(\"Content-Type\", \"\")\n\n if \"application/json\" in content_type:\n data = await request.json()\n elif \"application/graphql\" in content_type:\n body = await request.body()\n text = body.decode()\n data = {\"query\": text}\n elif \"query\" in request.query_params:\n data = request.query_params\n else:\n return PlainTextResponse(\n \"Unsupported Media Type\",\n status_code=status.HTTP_415_UNSUPPORTED_MEDIA_TYPE,\n )\n\n else:\n return PlainTextResponse(\n \"Method Not Allowed\", status_code=status.HTTP_405_METHOD_NOT_ALLOWED\n )\n\n try:\n query = data[\"query\"]\n variables = data.get(\"variables\")\n operation_name = data.get(\"operationName\")\n except KeyError:\n return PlainTextResponse(\n \"No GraphQL query found in the request\",\n status_code=status.HTTP_400_BAD_REQUEST,\n )\n\n background = BackgroundTasks()\n context = {\"request\": request, \"background\": background}\n\n result = await self.execute(\n query, variables=variables, context=context, operation_name=operation_name\n )\n error_data = (\n [format_graphql_error(err) for err in result.errors]\n if result.errors\n else None\n )\n response_data = {\"data\": result.data, \"errors\": error_data}\n status_code = (\n status.HTTP_400_BAD_REQUEST if result.errors else status.HTTP_200_OK\n )\n\n return JSONResponse(\n response_data, status_code=status_code, background=background\n )\n\n async def execute( # type: ignore\n self, query, variables=None, context=None, operation_name=None\n ):\n if self.is_async:\n return await self.schema.execute(\n query,\n variables=variables,\n operation_name=operation_name,\n executor=self.executor,\n return_promise=True,\n context=context,\n )\n else:\n return await run_in_threadpool(\n self.schema.execute,\n query,\n variables=variables,\n operation_name=operation_name,\n context=context,\n )\n\n async def handle_graphiql(self, request: Request) -> Response:\n text = GRAPHIQL.replace(\"{{REQUEST_PATH}}\", json.dumps(request.url.path))\n return HTMLResponse(text)\n\n\nGRAPHIQL = \"\"\"\n<!--\n * Copyright (c) Facebook, Inc.\n * All rights reserved.\n *\n * This source code is licensed under the license found in the\n * LICENSE file in the root directory of this source tree.\n-->\n<!DOCTYPE html>\n<html>\n <head>\n <style>\n body {\n height: 100%;\n margin: 0;\n width: 100%;\n overflow: hidden;\n }\n #graphiql {\n height: 100vh;\n }\n </style>\n <!--\n This GraphiQL example depends on Promise and fetch, which are available in\n modern browsers, but can be \"polyfilled\" for older browsers.\n GraphiQL itself depends on React 
DOM.\n If you do not want to rely on a CDN, you can host these files locally or\n include them directly in your favored resource bunder.\n -->\n <link href=\"//cdn.jsdelivr.net/npm/[email protected]/graphiql.css\" rel=\"stylesheet\"/>\n <script src=\"//cdn.jsdelivr.net/npm/[email protected]/fetch.min.js\"></script>\n <script src=\"//cdn.jsdelivr.net/npm/[email protected]/umd/react.production.min.js\"></script>\n <script src=\"//cdn.jsdelivr.net/npm/[email protected]/umd/react-dom.production.min.js\"></script>\n <script src=\"//cdn.jsdelivr.net/npm/[email protected]/graphiql.min.js\"></script>\n </head>\n <body>\n <div id=\"graphiql\">Loading...</div>\n <script>\n /**\n * This GraphiQL example illustrates how to use some of GraphiQL's props\n * in order to enable reading and updating the URL parameters, making\n * link sharing of queries a little bit easier.\n *\n * This is only one example of this kind of feature, GraphiQL exposes\n * various React params to enable interesting integrations.\n */\n // Parse the search string to get url parameters.\n var search = window.location.search;\n var parameters = {};\n search.substr(1).split('&').forEach(function (entry) {\n var eq = entry.indexOf('=');\n if (eq >= 0) {\n parameters[decodeURIComponent(entry.slice(0, eq))] =\n decodeURIComponent(entry.slice(eq + 1));\n }\n });\n // if variables was provided, try to format it.\n if (parameters.variables) {\n try {\n parameters.variables =\n JSON.stringify(JSON.parse(parameters.variables), null, 2);\n } catch (e) {\n // Do nothing, we want to display the invalid JSON as a string, rather\n // than present an error.\n }\n }\n // When the query and variables string is edited, update the URL bar so\n // that it can be easily shared\n function onEditQuery(newQuery) {\n parameters.query = newQuery;\n updateURL();\n }\n function onEditVariables(newVariables) {\n parameters.variables = newVariables;\n updateURL();\n }\n function onEditOperationName(newOperationName) {\n parameters.operationName = newOperationName;\n updateURL();\n }\n function updateURL() {\n var newSearch = '?' + Object.keys(parameters).filter(function (key) {\n return Boolean(parameters[key]);\n }).map(function (key) {\n return encodeURIComponent(key) + '=' +\n encodeURIComponent(parameters[key]);\n }).join('&');\n history.replaceState(null, null, newSearch);\n }\n // Defines a GraphQL fetcher using the fetch API. 
You're not required to\n // use fetch, and could instead implement graphQLFetcher however you like,\n // as long as it returns a Promise or Observable.\n function graphQLFetcher(graphQLParams) {\n // This example expects a GraphQL server at the path /graphql.\n // Change this to point wherever you host your GraphQL server.\n return fetch({{REQUEST_PATH}}, {\n method: 'post',\n headers: {\n 'Accept': 'application/json',\n 'Content-Type': 'application/json',\n },\n body: JSON.stringify(graphQLParams),\n credentials: 'include',\n }).then(function (response) {\n return response.text();\n }).then(function (responseBody) {\n try {\n return JSON.parse(responseBody);\n } catch (error) {\n return responseBody;\n }\n });\n }\n // Render <GraphiQL /> into the body.\n // See the README in the top level of this module to learn more about\n // how you can customize GraphiQL by providing different values or\n // additional child elements.\n ReactDOM.render(\n React.createElement(GraphiQL, {\n fetcher: graphQLFetcher,\n query: parameters.query,\n variables: parameters.variables,\n operationName: parameters.operationName,\n onEditQuery: onEditQuery,\n onEditVariables: onEditVariables,\n onEditOperationName: onEditOperationName\n }),\n document.getElementById('graphiql')\n );\n </script>\n </body>\n</html>\n\"\"\"\n", "path": "starlette/graphql.py"}], "after_files": [{"content": "import json\nimport typing\n\nfrom starlette import status\nfrom starlette.background import BackgroundTasks\nfrom starlette.concurrency import run_in_threadpool\nfrom starlette.requests import Request\nfrom starlette.responses import HTMLResponse, JSONResponse, PlainTextResponse, Response\nfrom starlette.types import Receive, Scope, Send\n\ntry:\n import graphene\n from graphql.execution.executors.asyncio import AsyncioExecutor\n from graphql.error import format_error as format_graphql_error\n from graphql.error import GraphQLError\nexcept ImportError: # pragma: nocover\n graphene = None # type: ignore\n AsyncioExecutor = None # type: ignore\n format_graphql_error = None # type: ignore\n GraphQLError = None # type: ignore\n\n\nclass GraphQLApp:\n def __init__(\n self,\n schema: \"graphene.Schema\",\n executor: typing.Any = None,\n executor_class: type = None,\n graphiql: bool = True,\n ) -> None:\n self.schema = schema\n self.graphiql = graphiql\n if executor is None:\n # New style in 0.10.0. Use 'executor_class'.\n # See issue https://github.com/encode/starlette/issues/242\n self.executor = executor\n self.executor_class = executor_class\n self.is_async = executor_class is not None and issubclass(\n executor_class, AsyncioExecutor\n )\n else:\n # Old style. 
Use 'executor'.\n # We should remove this in the next median/major version bump.\n self.executor = executor\n self.executor_class = None\n self.is_async = isinstance(executor, AsyncioExecutor)\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if self.executor is None and self.executor_class is not None:\n self.executor = self.executor_class()\n\n request = Request(scope, receive=receive)\n response = await self.handle_graphql(request)\n await response(scope, receive, send)\n\n async def handle_graphql(self, request: Request) -> Response:\n if request.method in (\"GET\", \"HEAD\"):\n if \"text/html\" in request.headers.get(\"Accept\", \"\"):\n if not self.graphiql:\n return PlainTextResponse(\n \"Not Found\", status_code=status.HTTP_404_NOT_FOUND\n )\n return await self.handle_graphiql(request)\n\n data = request.query_params # type: typing.Mapping[str, typing.Any]\n\n elif request.method == \"POST\":\n content_type = request.headers.get(\"Content-Type\", \"\")\n\n if \"application/json\" in content_type:\n data = await request.json()\n elif \"application/graphql\" in content_type:\n body = await request.body()\n text = body.decode()\n data = {\"query\": text}\n elif \"query\" in request.query_params:\n data = request.query_params\n else:\n return PlainTextResponse(\n \"Unsupported Media Type\",\n status_code=status.HTTP_415_UNSUPPORTED_MEDIA_TYPE,\n )\n\n else:\n return PlainTextResponse(\n \"Method Not Allowed\", status_code=status.HTTP_405_METHOD_NOT_ALLOWED\n )\n\n try:\n query = data[\"query\"]\n variables = data.get(\"variables\")\n operation_name = data.get(\"operationName\")\n except KeyError:\n return PlainTextResponse(\n \"No GraphQL query found in the request\",\n status_code=status.HTTP_400_BAD_REQUEST,\n )\n\n background = BackgroundTasks()\n context = {\"request\": request, \"background\": background}\n\n result = await self.execute(\n query, variables=variables, context=context, operation_name=operation_name\n )\n error_data = (\n [format_graphql_error(err) for err in result.errors]\n if result.errors\n else None\n )\n response_data = {\"data\": result.data}\n if error_data:\n response_data[\"errors\"] = error_data\n status_code = (\n status.HTTP_400_BAD_REQUEST if result.errors else status.HTTP_200_OK\n )\n\n return JSONResponse(\n response_data, status_code=status_code, background=background\n )\n\n async def execute( # type: ignore\n self, query, variables=None, context=None, operation_name=None\n ):\n if self.is_async:\n return await self.schema.execute(\n query,\n variables=variables,\n operation_name=operation_name,\n executor=self.executor,\n return_promise=True,\n context=context,\n )\n else:\n return await run_in_threadpool(\n self.schema.execute,\n query,\n variables=variables,\n operation_name=operation_name,\n context=context,\n )\n\n async def handle_graphiql(self, request: Request) -> Response:\n text = GRAPHIQL.replace(\"{{REQUEST_PATH}}\", json.dumps(request.url.path))\n return HTMLResponse(text)\n\n\nGRAPHIQL = \"\"\"\n<!--\n * Copyright (c) Facebook, Inc.\n * All rights reserved.\n *\n * This source code is licensed under the license found in the\n * LICENSE file in the root directory of this source tree.\n-->\n<!DOCTYPE html>\n<html>\n <head>\n <style>\n body {\n height: 100%;\n margin: 0;\n width: 100%;\n overflow: hidden;\n }\n #graphiql {\n height: 100vh;\n }\n </style>\n <!--\n This GraphiQL example depends on Promise and fetch, which are available in\n modern browsers, but can be \"polyfilled\" for older browsers.\n 
GraphiQL itself depends on React DOM.\n If you do not want to rely on a CDN, you can host these files locally or\n include them directly in your favored resource bunder.\n -->\n <link href=\"//cdn.jsdelivr.net/npm/[email protected]/graphiql.css\" rel=\"stylesheet\"/>\n <script src=\"//cdn.jsdelivr.net/npm/[email protected]/fetch.min.js\"></script>\n <script src=\"//cdn.jsdelivr.net/npm/[email protected]/umd/react.production.min.js\"></script>\n <script src=\"//cdn.jsdelivr.net/npm/[email protected]/umd/react-dom.production.min.js\"></script>\n <script src=\"//cdn.jsdelivr.net/npm/[email protected]/graphiql.min.js\"></script>\n </head>\n <body>\n <div id=\"graphiql\">Loading...</div>\n <script>\n /**\n * This GraphiQL example illustrates how to use some of GraphiQL's props\n * in order to enable reading and updating the URL parameters, making\n * link sharing of queries a little bit easier.\n *\n * This is only one example of this kind of feature, GraphiQL exposes\n * various React params to enable interesting integrations.\n */\n // Parse the search string to get url parameters.\n var search = window.location.search;\n var parameters = {};\n search.substr(1).split('&').forEach(function (entry) {\n var eq = entry.indexOf('=');\n if (eq >= 0) {\n parameters[decodeURIComponent(entry.slice(0, eq))] =\n decodeURIComponent(entry.slice(eq + 1));\n }\n });\n // if variables was provided, try to format it.\n if (parameters.variables) {\n try {\n parameters.variables =\n JSON.stringify(JSON.parse(parameters.variables), null, 2);\n } catch (e) {\n // Do nothing, we want to display the invalid JSON as a string, rather\n // than present an error.\n }\n }\n // When the query and variables string is edited, update the URL bar so\n // that it can be easily shared\n function onEditQuery(newQuery) {\n parameters.query = newQuery;\n updateURL();\n }\n function onEditVariables(newVariables) {\n parameters.variables = newVariables;\n updateURL();\n }\n function onEditOperationName(newOperationName) {\n parameters.operationName = newOperationName;\n updateURL();\n }\n function updateURL() {\n var newSearch = '?' + Object.keys(parameters).filter(function (key) {\n return Boolean(parameters[key]);\n }).map(function (key) {\n return encodeURIComponent(key) + '=' +\n encodeURIComponent(parameters[key]);\n }).join('&');\n history.replaceState(null, null, newSearch);\n }\n // Defines a GraphQL fetcher using the fetch API. 
You're not required to\n // use fetch, and could instead implement graphQLFetcher however you like,\n // as long as it returns a Promise or Observable.\n function graphQLFetcher(graphQLParams) {\n // This example expects a GraphQL server at the path /graphql.\n // Change this to point wherever you host your GraphQL server.\n return fetch({{REQUEST_PATH}}, {\n method: 'post',\n headers: {\n 'Accept': 'application/json',\n 'Content-Type': 'application/json',\n },\n body: JSON.stringify(graphQLParams),\n credentials: 'include',\n }).then(function (response) {\n return response.text();\n }).then(function (responseBody) {\n try {\n return JSON.parse(responseBody);\n } catch (error) {\n return responseBody;\n }\n });\n }\n // Render <GraphiQL /> into the body.\n // See the README in the top level of this module to learn more about\n // how you can customize GraphiQL by providing different values or\n // additional child elements.\n ReactDOM.render(\n React.createElement(GraphiQL, {\n fetcher: graphQLFetcher,\n query: parameters.query,\n variables: parameters.variables,\n operationName: parameters.operationName,\n onEditQuery: onEditQuery,\n onEditVariables: onEditVariables,\n onEditOperationName: onEditOperationName\n }),\n document.getElementById('graphiql')\n );\n </script>\n </body>\n</html>\n\"\"\"\n", "path": "starlette/graphql.py"}]}
| 3,233 | 126 |
gh_patches_debug_34657
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-14125
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ResolveError: Directory '{mydir}' does not contain any BUILD files (when Dockerizing packages)
**Describe the bug**
Created a repo at https://github.com/sureshjoshi/pantsbuild-14031 to help illustrate this problem.
Essentially, I use custom output paths for my .pex files, and while testing out the `docker_image` target, I noticed some of my components fail with the error
> ResolveError: Directory 'backend' does not contain any BUILD files
After a lot of debugging, I only ran into this problem when my output folders were common to multiple `pex_binary` targets.
For example, in the repo above, I have 3 identical projects (A, B, C) - where they only differ by the `pex_binary` `output_path` (and this location updated in the associated Dockerfile), and one of the projects refuses to compile.
As per the README in the repo:
```bash
# Should create a pex at dist/backend/projecta/projecta.pex
# Docker image created successfully as projecta-container:latest
./pants package backend/projecta::
# Should create a pex at dist/backend.projectc/projectc.pex
# Docker image created successfully as projectc-container:latest
./pants package backend/projectc::
```
```bash
# Should create a pex at dist/backend/projectb.pex
./pants package backend/projectb:projectb
# FAILS: With ResolveError
./pants package backend/projectb:projectb-container
```
So, the difference above is that Project C uses no `output_path` and uses the dot-syntax for the dist folder. ProjectA places the pex file under a `backend/projecta` directory. The failing ProjectB places the pex file directly under `backend`.
This isn't a big issue, and easily worked around, and I'm guessing it has to do with namespacing or module/package semantics, but it's just a weird problem that is difficult to debug based on the error message.
**Pants version**
- 2.8.0
- 2.9.0rc1
**OS**
macOS 12.1
Untested on Linux
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/backend/docker/util_rules/dependencies.py`
Content:
```
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from pants.backend.docker.subsystems.dockerfile_parser import DockerfileInfo, DockerfileInfoRequest
5 from pants.backend.docker.target_types import DockerDependenciesField
6 from pants.core.goals.package import PackageFieldSet
7 from pants.engine.addresses import Addresses, UnparsedAddressInputs
8 from pants.engine.rules import Get, collect_rules, rule
9 from pants.engine.target import (
10 FieldSetsPerTarget,
11 FieldSetsPerTargetRequest,
12 InjectDependenciesRequest,
13 InjectedDependencies,
14 Targets,
15 )
16 from pants.engine.unions import UnionRule
17
18
19 class InjectDockerDependencies(InjectDependenciesRequest):
20 inject_for = DockerDependenciesField
21
22
23 @rule
24 async def inject_docker_dependencies(request: InjectDockerDependencies) -> InjectedDependencies:
25 """Inspects COPY instructions in the Dockerfile for references to known targets."""
26 dockerfile_info = await Get(
27 DockerfileInfo, DockerfileInfoRequest(request.dependencies_field.address)
28 )
29
30 targets = await Get(
31 Targets,
32 UnparsedAddressInputs(
33 dockerfile_info.putative_target_addresses,
34 owning_address=dockerfile_info.address,
35 ),
36 )
37 package = await Get(FieldSetsPerTarget, FieldSetsPerTargetRequest(PackageFieldSet, targets))
38 referenced_targets = (
39 field_sets[0].address for field_sets in package.collection if len(field_sets) > 0
40 )
41 return InjectedDependencies(Addresses(referenced_targets))
42
43
44 def rules():
45 return [
46 *collect_rules(),
47 UnionRule(InjectDependenciesRequest, InjectDockerDependencies),
48 ]
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/python/pants/backend/docker/util_rules/dependencies.py b/src/python/pants/backend/docker/util_rules/dependencies.py
--- a/src/python/pants/backend/docker/util_rules/dependencies.py
+++ b/src/python/pants/backend/docker/util_rules/dependencies.py
@@ -3,6 +3,7 @@
from pants.backend.docker.subsystems.dockerfile_parser import DockerfileInfo, DockerfileInfoRequest
from pants.backend.docker.target_types import DockerDependenciesField
+from pants.base.specs import AddressSpecs, MaybeEmptySiblingAddresses
from pants.core.goals.package import PackageFieldSet
from pants.engine.addresses import Addresses, UnparsedAddressInputs
from pants.engine.rules import Get, collect_rules, rule
@@ -22,18 +23,28 @@
@rule
async def inject_docker_dependencies(request: InjectDockerDependencies) -> InjectedDependencies:
- """Inspects COPY instructions in the Dockerfile for references to known targets."""
+ """Inspects COPY instructions in the Dockerfile for references to known packagable targets."""
dockerfile_info = await Get(
DockerfileInfo, DockerfileInfoRequest(request.dependencies_field.address)
)
- targets = await Get(
- Targets,
+ # Parse all putative target addresses.
+ putative_addresses = await Get(
+ Addresses,
UnparsedAddressInputs(
dockerfile_info.putative_target_addresses,
owning_address=dockerfile_info.address,
),
)
+
+ # Get the target for those addresses that are known.
+ directories = {address.spec_path for address in putative_addresses}
+ all_addresses = await Get(Addresses, AddressSpecs(map(MaybeEmptySiblingAddresses, directories)))
+ targets = await Get(
+ Targets, Addresses((address for address in putative_addresses if address in all_addresses))
+ )
+
+ # Only keep those targets that we can "package".
package = await Get(FieldSetsPerTarget, FieldSetsPerTargetRequest(PackageFieldSet, targets))
referenced_targets = (
field_sets[0].address for field_sets in package.collection if len(field_sets) > 0
|
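The guard in the patch above works by resolving only those putative Dockerfile addresses whose directories are known to the engine, so a shared `output_path` directory without BUILD files no longer triggers `ResolveError`. A plain-Python sketch of that filtering step (illustrative only; `putative_addresses` and `known_addresses` stand in for the `Addresses`/`AddressSpecs` engine queries and are not the Pants rule API):

```python
# Hypothetical helper mirroring the patch's intent: addresses that cannot be
# resolved to a known target are dropped instead of raising ResolveError.
def keep_resolvable(putative_addresses, known_addresses):
    known = set(known_addresses)
    return [addr for addr in putative_addresses if addr in known]
```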
{"golden_diff": "diff --git a/src/python/pants/backend/docker/util_rules/dependencies.py b/src/python/pants/backend/docker/util_rules/dependencies.py\n--- a/src/python/pants/backend/docker/util_rules/dependencies.py\n+++ b/src/python/pants/backend/docker/util_rules/dependencies.py\n@@ -3,6 +3,7 @@\n \n from pants.backend.docker.subsystems.dockerfile_parser import DockerfileInfo, DockerfileInfoRequest\n from pants.backend.docker.target_types import DockerDependenciesField\n+from pants.base.specs import AddressSpecs, MaybeEmptySiblingAddresses\n from pants.core.goals.package import PackageFieldSet\n from pants.engine.addresses import Addresses, UnparsedAddressInputs\n from pants.engine.rules import Get, collect_rules, rule\n@@ -22,18 +23,28 @@\n \n @rule\n async def inject_docker_dependencies(request: InjectDockerDependencies) -> InjectedDependencies:\n- \"\"\"Inspects COPY instructions in the Dockerfile for references to known targets.\"\"\"\n+ \"\"\"Inspects COPY instructions in the Dockerfile for references to known packagable targets.\"\"\"\n dockerfile_info = await Get(\n DockerfileInfo, DockerfileInfoRequest(request.dependencies_field.address)\n )\n \n- targets = await Get(\n- Targets,\n+ # Parse all putative target addresses.\n+ putative_addresses = await Get(\n+ Addresses,\n UnparsedAddressInputs(\n dockerfile_info.putative_target_addresses,\n owning_address=dockerfile_info.address,\n ),\n )\n+\n+ # Get the target for those addresses that are known.\n+ directories = {address.spec_path for address in putative_addresses}\n+ all_addresses = await Get(Addresses, AddressSpecs(map(MaybeEmptySiblingAddresses, directories)))\n+ targets = await Get(\n+ Targets, Addresses((address for address in putative_addresses if address in all_addresses))\n+ )\n+\n+ # Only keep those targets that we can \"package\".\n package = await Get(FieldSetsPerTarget, FieldSetsPerTargetRequest(PackageFieldSet, targets))\n referenced_targets = (\n field_sets[0].address for field_sets in package.collection if len(field_sets) > 0\n", "issue": "ResolveError: Directory '{mydir}' does not contain any BUILD files (when Dockerizing packages)\n**Describe the bug**\r\n\r\nCreated a repo at https://github.com/sureshjoshi/pantsbuild-14031 to help illustrate this problem. \r\n\r\nEssentially, I use custom output paths for my .pex files, and while testing out the `docker_image` target, I noticed some of my components fail with the error \r\n\r\n> ResolveError: Directory 'backend' does not contain any BUILD files\r\n\r\nAfter a lot of debugging, I only ran into this problem when my output folders were common to multiple `pex_binary` targets. 
\r\n\r\nFor example, in the repo above, I have 3 identical projects (A, B, C) - where they only differ by the `pex_binary` `output_path` (and this location updated in the associated Dockerfile), and one of the projects refuses to compile.\r\n\r\nAs per the README in the repo:\r\n\r\n```bash\r\n# Should create a pex at dist/backend/projecta/projecta.pex\r\n# Docker image created successfully as projecta-container:latest\r\n./pants package backend/projecta::\r\n\r\n# Should create a pex at dist/backend.projectc/projectc.pex\r\n# Docker image created successfully as projectc-container:latest\r\n./pants package backend/projectc::\r\n```\r\n\r\n```bash\r\n# Should create a pex at dist/backend/projectb.pex\r\n./pants package backend/projectb:projectb\r\n\r\n# FAILS: With ResolveError\r\n./pants package backend/projectb:projectb-container \r\n```\r\n\r\nSo, the difference above is that Project C uses no `output_path` and uses the dot-syntax for the dist folder. ProjectA places the pex file under a `backend/projecta` directory. The failing ProjectB places the pex file directly under `backend`.\r\n\r\nThis isn't a big issue, and easily worked around, and I'm guessing it has to do with namespacing or module/package semantics, but it's just a weird problem that is difficult to debug based on the error message.\r\n\r\n**Pants version**\r\n\r\n- 2.8.0\r\n- 2.9.0rc1\r\n\r\n**OS**\r\n\r\nmacOS 12.1\r\nUntested on Linux\r\n\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom pants.backend.docker.subsystems.dockerfile_parser import DockerfileInfo, DockerfileInfoRequest\nfrom pants.backend.docker.target_types import DockerDependenciesField\nfrom pants.core.goals.package import PackageFieldSet\nfrom pants.engine.addresses import Addresses, UnparsedAddressInputs\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.engine.target import (\n FieldSetsPerTarget,\n FieldSetsPerTargetRequest,\n InjectDependenciesRequest,\n InjectedDependencies,\n Targets,\n)\nfrom pants.engine.unions import UnionRule\n\n\nclass InjectDockerDependencies(InjectDependenciesRequest):\n inject_for = DockerDependenciesField\n\n\n@rule\nasync def inject_docker_dependencies(request: InjectDockerDependencies) -> InjectedDependencies:\n \"\"\"Inspects COPY instructions in the Dockerfile for references to known targets.\"\"\"\n dockerfile_info = await Get(\n DockerfileInfo, DockerfileInfoRequest(request.dependencies_field.address)\n )\n\n targets = await Get(\n Targets,\n UnparsedAddressInputs(\n dockerfile_info.putative_target_addresses,\n owning_address=dockerfile_info.address,\n ),\n )\n package = await Get(FieldSetsPerTarget, FieldSetsPerTargetRequest(PackageFieldSet, targets))\n referenced_targets = (\n field_sets[0].address for field_sets in package.collection if len(field_sets) > 0\n )\n return InjectedDependencies(Addresses(referenced_targets))\n\n\ndef rules():\n return [\n *collect_rules(),\n UnionRule(InjectDependenciesRequest, InjectDockerDependencies),\n ]\n", "path": "src/python/pants/backend/docker/util_rules/dependencies.py"}], "after_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom pants.backend.docker.subsystems.dockerfile_parser import DockerfileInfo, DockerfileInfoRequest\nfrom pants.backend.docker.target_types import DockerDependenciesField\nfrom pants.base.specs import 
AddressSpecs, MaybeEmptySiblingAddresses\nfrom pants.core.goals.package import PackageFieldSet\nfrom pants.engine.addresses import Addresses, UnparsedAddressInputs\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.engine.target import (\n FieldSetsPerTarget,\n FieldSetsPerTargetRequest,\n InjectDependenciesRequest,\n InjectedDependencies,\n Targets,\n)\nfrom pants.engine.unions import UnionRule\n\n\nclass InjectDockerDependencies(InjectDependenciesRequest):\n inject_for = DockerDependenciesField\n\n\n@rule\nasync def inject_docker_dependencies(request: InjectDockerDependencies) -> InjectedDependencies:\n \"\"\"Inspects COPY instructions in the Dockerfile for references to known packagable targets.\"\"\"\n dockerfile_info = await Get(\n DockerfileInfo, DockerfileInfoRequest(request.dependencies_field.address)\n )\n\n # Parse all putative target addresses.\n putative_addresses = await Get(\n Addresses,\n UnparsedAddressInputs(\n dockerfile_info.putative_target_addresses,\n owning_address=dockerfile_info.address,\n ),\n )\n\n # Get the target for those addresses that are known.\n directories = {address.spec_path for address in putative_addresses}\n all_addresses = await Get(Addresses, AddressSpecs(map(MaybeEmptySiblingAddresses, directories)))\n targets = await Get(\n Targets, Addresses((address for address in putative_addresses if address in all_addresses))\n )\n\n # Only keep those targets that we can \"package\".\n package = await Get(FieldSetsPerTarget, FieldSetsPerTargetRequest(PackageFieldSet, targets))\n referenced_targets = (\n field_sets[0].address for field_sets in package.collection if len(field_sets) > 0\n )\n return InjectedDependencies(Addresses(referenced_targets))\n\n\ndef rules():\n return [\n *collect_rules(),\n UnionRule(InjectDependenciesRequest, InjectDockerDependencies),\n ]\n", "path": "src/python/pants/backend/docker/util_rules/dependencies.py"}]}
| 1,176 | 452 |
gh_patches_debug_39570
|
rasdani/github-patches
|
git_diff
|
ibis-project__ibis-3117
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
perf: add fast path for simple selections in dask backend
For simple selections we don't need to bother with the `dd.concat` here: https://github.com/ibis-project/ibis/blob/master/ibis/backends/dask/execution/selection.py#L154 and should probably select on the data directly
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ibis/backends/dask/execution/selection.py`
Content:
```
1 """Dispatching code for Selection operations.
2 """
3
4
5 import functools
6 import operator
7 from typing import Optional
8
9 import dask.dataframe as dd
10 import pandas
11 from toolz import concatv
12
13 import ibis.expr.operations as ops
14 import ibis.expr.types as ir
15 from ibis.backends.pandas.execution.selection import (
16 compute_projection,
17 compute_projection_table_expr,
18 map_new_column_names_to_data,
19 remap_overlapping_column_names,
20 )
21 from ibis.expr.scope import Scope
22 from ibis.expr.typing import TimeContext
23
24 from ..core import execute
25 from ..dispatch import execute_node
26 from ..execution import constants
27 from ..execution.util import (
28 add_partitioned_sorted_column,
29 coerce_to_output,
30 compute_sorted_frame,
31 )
32
33
34 @compute_projection.register(ir.ScalarExpr, ops.Selection, dd.DataFrame)
35 def compute_projection_scalar_expr(
36 expr,
37 parent,
38 data,
39 scope: Scope,
40 timecontext: Optional[TimeContext] = None,
41 **kwargs,
42 ):
43 name = expr._name
44 assert name is not None, 'Scalar selection name is None'
45
46 op = expr.op()
47 parent_table_op = parent.table.op()
48
49 data_columns = frozenset(data.columns)
50
51 scope = scope.merge_scopes(
52 Scope(
53 {
54 t: map_new_column_names_to_data(
55 remap_overlapping_column_names(
56 parent_table_op, t, data_columns
57 ),
58 data,
59 )
60 },
61 timecontext,
62 )
63 for t in op.root_tables()
64 )
65 scalar = execute(expr, scope=scope, **kwargs)
66 return data.assign(**{name: scalar})[name]
67
68
69 @compute_projection.register(ir.ColumnExpr, ops.Selection, dd.DataFrame)
70 def compute_projection_column_expr(
71 expr,
72 parent,
73 data,
74 scope: Scope,
75 timecontext: Optional[TimeContext],
76 **kwargs,
77 ):
78 result_name = getattr(expr, '_name', None)
79 op = expr.op()
80 parent_table_op = parent.table.op()
81
82 if isinstance(op, ops.TableColumn):
83 # slightly faster path for simple column selection
84 name = op.name
85
86 if name in data:
87 return data[name].rename(result_name or name)
88
89 if not isinstance(parent_table_op, ops.Join):
90 raise KeyError(name)
91 (root_table,) = op.root_tables()
92 left_root, right_root = ops.distinct_roots(
93 parent_table_op.left, parent_table_op.right
94 )
95 suffixes = {
96 left_root: constants.LEFT_JOIN_SUFFIX,
97 right_root: constants.RIGHT_JOIN_SUFFIX,
98 }
99 return data.loc[:, name + suffixes[root_table]].rename(
100 result_name or name
101 )
102
103 data_columns = frozenset(data.columns)
104
105 scope = scope.merge_scopes(
106 Scope(
107 {
108 t: map_new_column_names_to_data(
109 remap_overlapping_column_names(
110 parent_table_op, t, data_columns
111 ),
112 data,
113 )
114 },
115 timecontext,
116 )
117 for t in op.root_tables()
118 )
119
120 result = execute(expr, scope=scope, timecontext=timecontext, **kwargs)
121 result = coerce_to_output(result, expr, data.index)
122 assert result_name is not None, 'Column selection name is None'
123
124 return result
125
126
127 compute_projection.register(ir.TableExpr, ops.Selection, dd.DataFrame)(
128 compute_projection_table_expr
129 )
130
131
132 @execute_node.register(ops.Selection, dd.DataFrame)
133 def execute_selection_dataframe(
134 op, data, scope: Scope, timecontext: Optional[TimeContext], **kwargs
135 ):
136 selections = op.selections
137 predicates = op.predicates
138 sort_keys = op.sort_keys
139 result = data
140
141 # Build up the individual dask structures from column expressions
142 if selections:
143 # Create a unique row identifier and set it as the index. This is used
144 # in dd.concat to merge the pieces back together.
145 data = add_partitioned_sorted_column(data)
146 data_pieces = []
147 for selection in selections:
148 dask_object = compute_projection(
149 selection,
150 op,
151 data,
152 scope=scope,
153 timecontext=timecontext,
154 **kwargs,
155 )
156 data_pieces.append(dask_object)
157
158 result = dd.concat(data_pieces, axis=1)
159 result.reset_index(drop=True)
160
161 if predicates:
162 predicates = _compute_predicates(
163 op.table.op(), predicates, data, scope, timecontext, **kwargs
164 )
165 predicate = functools.reduce(operator.and_, predicates)
166 result = result.loc[predicate]
167
168 if sort_keys:
169 if len(sort_keys) > 1:
170 raise NotImplementedError(
171 """
172 Multi-key sorting is not implemented for the Dask backend
173 """
174 )
175 sort_key = sort_keys[0]
176 ascending = getattr(sort_key.op(), 'ascending', True)
177 if not ascending:
178 raise NotImplementedError(
179 "Descending sort is not supported for the Dask backend"
180 )
181 result = compute_sorted_frame(
182 result,
183 order_by=sort_key,
184 scope=scope,
185 timecontext=timecontext,
186 **kwargs,
187 )
188
189 return result
190 else:
191 grouping_keys = ordering_keys = ()
192
193 # return early if we do not have any temporary grouping or ordering columns
194 assert not grouping_keys, 'group by should never show up in Selection'
195 if not ordering_keys:
196 return result
197
198 # create a sequence of columns that we need to drop
199 temporary_columns = pandas.Index(
200 concatv(grouping_keys, ordering_keys)
201 ).difference(data.columns)
202
203 # no reason to call drop if we don't need to
204 if temporary_columns.empty:
205 return result
206
207 # drop every temporary column we created for ordering or grouping
208 return result.drop(temporary_columns, axis=1)
209
210
211 def _compute_predicates(
212 table_op,
213 predicates,
214 data,
215 scope: Scope,
216 timecontext: Optional[TimeContext],
217 **kwargs,
218 ):
219 """Compute the predicates for a table operation.
220
221 Parameters
222 ----------
223 table_op : TableNode
224 predicates : List[ir.ColumnExpr]
225 data : pd.DataFrame
226 scope : Scope
227 timecontext: Optional[TimeContext]
228 kwargs : dict
229
230 Returns
231 -------
232 computed_predicate : pd.Series[bool]
233
234 Notes
235 -----
236 This handles the cases where the predicates are computed columns, in
237 addition to the simple case of named columns coming directly from the input
238 table.
239 """
240 for predicate in predicates:
241 # Map each root table of the predicate to the data so that we compute
242 # predicates on the result instead of any left or right tables if the
243 # Selection is on a Join. Project data to only inlude columns from
244 # the root table.
245 root_tables = predicate.op().root_tables()
246
247 # handle suffixes
248 data_columns = frozenset(data.columns)
249
250 additional_scope = Scope()
251 for root_table in root_tables:
252 mapping = remap_overlapping_column_names(
253 table_op, root_table, data_columns
254 )
255 if mapping is not None:
256 new_data = data.loc[:, mapping.keys()].rename(columns=mapping)
257 else:
258 new_data = data
259 additional_scope = additional_scope.merge_scope(
260 Scope({root_table: new_data}, timecontext)
261 )
262
263 scope = scope.merge_scope(additional_scope)
264 yield execute(predicate, scope=scope, **kwargs)
265
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ibis/backends/dask/execution/selection.py b/ibis/backends/dask/execution/selection.py
--- a/ibis/backends/dask/execution/selection.py
+++ b/ibis/backends/dask/execution/selection.py
@@ -4,7 +4,7 @@
import functools
import operator
-from typing import Optional
+from typing import List, Optional
import dask.dataframe as dd
import pandas
@@ -47,7 +47,6 @@
parent_table_op = parent.table.op()
data_columns = frozenset(data.columns)
-
scope = scope.merge_scopes(
Scope(
{
@@ -129,6 +128,42 @@
)
+def build_df_from_selection(
+ selections: List[ir.ColumnExpr], data: dd.DataFrame
+) -> dd.DataFrame:
+ """Build up a df by doing direct selections, renaming if necessary."""
+ cols = [
+ (s.op().name, getattr(s, "_name", s.op().name)) for s in selections
+ ]
+ renamed_cols = {
+ col: renamed_col for col, renamed_col in cols if col != renamed_col
+ }
+
+ result = data[[col for col, _ in cols]]
+ if renamed_cols:
+ result = result.rename(columns=renamed_cols)
+
+ return result
+
+
+def build_df_from_projection(
+ selections: List[ir.Expr], op: ops.Selection, data: dd.DataFrame, **kwargs
+) -> dd.DataFrame:
+ """
+ Build up a df from individual pieces by dispatching to `compute_projection`
+ for each expression.
+ """
+
+ # Create a unique row identifier and set it as the index. This is
+ # used in dd.concat to merge the pieces back together.
+ data = add_partitioned_sorted_column(data)
+ data_pieces = [
+ compute_projection(s, op, data, **kwargs) for s in selections
+ ]
+
+ return dd.concat(data_pieces, axis=1).reset_index(drop=True)
+
+
@execute_node.register(ops.Selection, dd.DataFrame)
def execute_selection_dataframe(
op, data, scope: Scope, timecontext: Optional[TimeContext], **kwargs
@@ -138,25 +173,22 @@
sort_keys = op.sort_keys
result = data
- # Build up the individual dask structures from column expressions
if selections:
- # Create a unique row identifier and set it as the index. This is used
- # in dd.concat to merge the pieces back together.
- data = add_partitioned_sorted_column(data)
- data_pieces = []
- for selection in selections:
- dask_object = compute_projection(
- selection,
+ # if we are just performing select operations and all columns are in
+ # the table we can do a direct selection
+ if all(isinstance(s.op(), ops.TableColumn) for s in selections) and {
+ s.op().name for s in selections
+ }.issubset(set(result.columns)):
+ result = build_df_from_selection(selections, data)
+ else:
+ result = build_df_from_projection(
+ selections,
op,
data,
scope=scope,
timecontext=timecontext,
**kwargs,
)
- data_pieces.append(dask_object)
-
- result = dd.concat(data_pieces, axis=1)
- result.reset_index(drop=True)
if predicates:
predicates = _compute_predicates(
|
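The fast path introduced above applies only when every selection is a bare `TableColumn` that already exists on the dask DataFrame; anything computed still goes through `compute_projection` and `dd.concat`. A standalone sketch of that eligibility check (the helper name is illustrative and not part of the ibis API):

```python
import ibis.expr.operations as ops


def is_simple_selection(selections, data):
    # Mirrors the condition in the patch: all plain column references, and
    # every referenced column is already present on the dask DataFrame.
    return all(isinstance(s.op(), ops.TableColumn) for s in selections) and {
        s.op().name for s in selections
    }.issubset(set(data.columns))
```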
{"golden_diff": "diff --git a/ibis/backends/dask/execution/selection.py b/ibis/backends/dask/execution/selection.py\n--- a/ibis/backends/dask/execution/selection.py\n+++ b/ibis/backends/dask/execution/selection.py\n@@ -4,7 +4,7 @@\n \n import functools\n import operator\n-from typing import Optional\n+from typing import List, Optional\n \n import dask.dataframe as dd\n import pandas\n@@ -47,7 +47,6 @@\n parent_table_op = parent.table.op()\n \n data_columns = frozenset(data.columns)\n-\n scope = scope.merge_scopes(\n Scope(\n {\n@@ -129,6 +128,42 @@\n )\n \n \n+def build_df_from_selection(\n+ selections: List[ir.ColumnExpr], data: dd.DataFrame\n+) -> dd.DataFrame:\n+ \"\"\"Build up a df by doing direct selections, renaming if necessary.\"\"\"\n+ cols = [\n+ (s.op().name, getattr(s, \"_name\", s.op().name)) for s in selections\n+ ]\n+ renamed_cols = {\n+ col: renamed_col for col, renamed_col in cols if col != renamed_col\n+ }\n+\n+ result = data[[col for col, _ in cols]]\n+ if renamed_cols:\n+ result = result.rename(columns=renamed_cols)\n+\n+ return result\n+\n+\n+def build_df_from_projection(\n+ selections: List[ir.Expr], op: ops.Selection, data: dd.DataFrame, **kwargs\n+) -> dd.DataFrame:\n+ \"\"\"\n+ Build up a df from individual pieces by dispatching to `compute_projection`\n+ for each expression.\n+ \"\"\"\n+\n+ # Create a unique row identifier and set it as the index. This is\n+ # used in dd.concat to merge the pieces back together.\n+ data = add_partitioned_sorted_column(data)\n+ data_pieces = [\n+ compute_projection(s, op, data, **kwargs) for s in selections\n+ ]\n+\n+ return dd.concat(data_pieces, axis=1).reset_index(drop=True)\n+\n+\n @execute_node.register(ops.Selection, dd.DataFrame)\n def execute_selection_dataframe(\n op, data, scope: Scope, timecontext: Optional[TimeContext], **kwargs\n@@ -138,25 +173,22 @@\n sort_keys = op.sort_keys\n result = data\n \n- # Build up the individual dask structures from column expressions\n if selections:\n- # Create a unique row identifier and set it as the index. 
This is used\n- # in dd.concat to merge the pieces back together.\n- data = add_partitioned_sorted_column(data)\n- data_pieces = []\n- for selection in selections:\n- dask_object = compute_projection(\n- selection,\n+ # if we are just performing select operations and all columns are in\n+ # the table we can do a direct selection\n+ if all(isinstance(s.op(), ops.TableColumn) for s in selections) and {\n+ s.op().name for s in selections\n+ }.issubset(set(result.columns)):\n+ result = build_df_from_selection(selections, data)\n+ else:\n+ result = build_df_from_projection(\n+ selections,\n op,\n data,\n scope=scope,\n timecontext=timecontext,\n **kwargs,\n )\n- data_pieces.append(dask_object)\n-\n- result = dd.concat(data_pieces, axis=1)\n- result.reset_index(drop=True)\n \n if predicates:\n predicates = _compute_predicates(\n", "issue": "perf: add fast path for simple selections in dask backend\nFor simple selections we don't need to bother with the `dd.concat` here: https://github.com/ibis-project/ibis/blob/master/ibis/backends/dask/execution/selection.py#L154 and should probably select on the data directly \n", "before_files": [{"content": "\"\"\"Dispatching code for Selection operations.\n\"\"\"\n\n\nimport functools\nimport operator\nfrom typing import Optional\n\nimport dask.dataframe as dd\nimport pandas\nfrom toolz import concatv\n\nimport ibis.expr.operations as ops\nimport ibis.expr.types as ir\nfrom ibis.backends.pandas.execution.selection import (\n compute_projection,\n compute_projection_table_expr,\n map_new_column_names_to_data,\n remap_overlapping_column_names,\n)\nfrom ibis.expr.scope import Scope\nfrom ibis.expr.typing import TimeContext\n\nfrom ..core import execute\nfrom ..dispatch import execute_node\nfrom ..execution import constants\nfrom ..execution.util import (\n add_partitioned_sorted_column,\n coerce_to_output,\n compute_sorted_frame,\n)\n\n\n@compute_projection.register(ir.ScalarExpr, ops.Selection, dd.DataFrame)\ndef compute_projection_scalar_expr(\n expr,\n parent,\n data,\n scope: Scope,\n timecontext: Optional[TimeContext] = None,\n **kwargs,\n):\n name = expr._name\n assert name is not None, 'Scalar selection name is None'\n\n op = expr.op()\n parent_table_op = parent.table.op()\n\n data_columns = frozenset(data.columns)\n\n scope = scope.merge_scopes(\n Scope(\n {\n t: map_new_column_names_to_data(\n remap_overlapping_column_names(\n parent_table_op, t, data_columns\n ),\n data,\n )\n },\n timecontext,\n )\n for t in op.root_tables()\n )\n scalar = execute(expr, scope=scope, **kwargs)\n return data.assign(**{name: scalar})[name]\n\n\n@compute_projection.register(ir.ColumnExpr, ops.Selection, dd.DataFrame)\ndef compute_projection_column_expr(\n expr,\n parent,\n data,\n scope: Scope,\n timecontext: Optional[TimeContext],\n **kwargs,\n):\n result_name = getattr(expr, '_name', None)\n op = expr.op()\n parent_table_op = parent.table.op()\n\n if isinstance(op, ops.TableColumn):\n # slightly faster path for simple column selection\n name = op.name\n\n if name in data:\n return data[name].rename(result_name or name)\n\n if not isinstance(parent_table_op, ops.Join):\n raise KeyError(name)\n (root_table,) = op.root_tables()\n left_root, right_root = ops.distinct_roots(\n parent_table_op.left, parent_table_op.right\n )\n suffixes = {\n left_root: constants.LEFT_JOIN_SUFFIX,\n right_root: constants.RIGHT_JOIN_SUFFIX,\n }\n return data.loc[:, name + suffixes[root_table]].rename(\n result_name or name\n )\n\n data_columns = frozenset(data.columns)\n\n scope = 
scope.merge_scopes(\n Scope(\n {\n t: map_new_column_names_to_data(\n remap_overlapping_column_names(\n parent_table_op, t, data_columns\n ),\n data,\n )\n },\n timecontext,\n )\n for t in op.root_tables()\n )\n\n result = execute(expr, scope=scope, timecontext=timecontext, **kwargs)\n result = coerce_to_output(result, expr, data.index)\n assert result_name is not None, 'Column selection name is None'\n\n return result\n\n\ncompute_projection.register(ir.TableExpr, ops.Selection, dd.DataFrame)(\n compute_projection_table_expr\n)\n\n\n@execute_node.register(ops.Selection, dd.DataFrame)\ndef execute_selection_dataframe(\n op, data, scope: Scope, timecontext: Optional[TimeContext], **kwargs\n):\n selections = op.selections\n predicates = op.predicates\n sort_keys = op.sort_keys\n result = data\n\n # Build up the individual dask structures from column expressions\n if selections:\n # Create a unique row identifier and set it as the index. This is used\n # in dd.concat to merge the pieces back together.\n data = add_partitioned_sorted_column(data)\n data_pieces = []\n for selection in selections:\n dask_object = compute_projection(\n selection,\n op,\n data,\n scope=scope,\n timecontext=timecontext,\n **kwargs,\n )\n data_pieces.append(dask_object)\n\n result = dd.concat(data_pieces, axis=1)\n result.reset_index(drop=True)\n\n if predicates:\n predicates = _compute_predicates(\n op.table.op(), predicates, data, scope, timecontext, **kwargs\n )\n predicate = functools.reduce(operator.and_, predicates)\n result = result.loc[predicate]\n\n if sort_keys:\n if len(sort_keys) > 1:\n raise NotImplementedError(\n \"\"\"\n Multi-key sorting is not implemented for the Dask backend\n \"\"\"\n )\n sort_key = sort_keys[0]\n ascending = getattr(sort_key.op(), 'ascending', True)\n if not ascending:\n raise NotImplementedError(\n \"Descending sort is not supported for the Dask backend\"\n )\n result = compute_sorted_frame(\n result,\n order_by=sort_key,\n scope=scope,\n timecontext=timecontext,\n **kwargs,\n )\n\n return result\n else:\n grouping_keys = ordering_keys = ()\n\n # return early if we do not have any temporary grouping or ordering columns\n assert not grouping_keys, 'group by should never show up in Selection'\n if not ordering_keys:\n return result\n\n # create a sequence of columns that we need to drop\n temporary_columns = pandas.Index(\n concatv(grouping_keys, ordering_keys)\n ).difference(data.columns)\n\n # no reason to call drop if we don't need to\n if temporary_columns.empty:\n return result\n\n # drop every temporary column we created for ordering or grouping\n return result.drop(temporary_columns, axis=1)\n\n\ndef _compute_predicates(\n table_op,\n predicates,\n data,\n scope: Scope,\n timecontext: Optional[TimeContext],\n **kwargs,\n):\n \"\"\"Compute the predicates for a table operation.\n\n Parameters\n ----------\n table_op : TableNode\n predicates : List[ir.ColumnExpr]\n data : pd.DataFrame\n scope : Scope\n timecontext: Optional[TimeContext]\n kwargs : dict\n\n Returns\n -------\n computed_predicate : pd.Series[bool]\n\n Notes\n -----\n This handles the cases where the predicates are computed columns, in\n addition to the simple case of named columns coming directly from the input\n table.\n \"\"\"\n for predicate in predicates:\n # Map each root table of the predicate to the data so that we compute\n # predicates on the result instead of any left or right tables if the\n # Selection is on a Join. 
Project data to only inlude columns from\n # the root table.\n root_tables = predicate.op().root_tables()\n\n # handle suffixes\n data_columns = frozenset(data.columns)\n\n additional_scope = Scope()\n for root_table in root_tables:\n mapping = remap_overlapping_column_names(\n table_op, root_table, data_columns\n )\n if mapping is not None:\n new_data = data.loc[:, mapping.keys()].rename(columns=mapping)\n else:\n new_data = data\n additional_scope = additional_scope.merge_scope(\n Scope({root_table: new_data}, timecontext)\n )\n\n scope = scope.merge_scope(additional_scope)\n yield execute(predicate, scope=scope, **kwargs)\n", "path": "ibis/backends/dask/execution/selection.py"}], "after_files": [{"content": "\"\"\"Dispatching code for Selection operations.\n\"\"\"\n\n\nimport functools\nimport operator\nfrom typing import List, Optional\n\nimport dask.dataframe as dd\nimport pandas\nfrom toolz import concatv\n\nimport ibis.expr.operations as ops\nimport ibis.expr.types as ir\nfrom ibis.backends.pandas.execution.selection import (\n compute_projection,\n compute_projection_table_expr,\n map_new_column_names_to_data,\n remap_overlapping_column_names,\n)\nfrom ibis.expr.scope import Scope\nfrom ibis.expr.typing import TimeContext\n\nfrom ..core import execute\nfrom ..dispatch import execute_node\nfrom ..execution import constants\nfrom ..execution.util import (\n add_partitioned_sorted_column,\n coerce_to_output,\n compute_sorted_frame,\n)\n\n\n@compute_projection.register(ir.ScalarExpr, ops.Selection, dd.DataFrame)\ndef compute_projection_scalar_expr(\n expr,\n parent,\n data,\n scope: Scope,\n timecontext: Optional[TimeContext] = None,\n **kwargs,\n):\n name = expr._name\n assert name is not None, 'Scalar selection name is None'\n\n op = expr.op()\n parent_table_op = parent.table.op()\n\n data_columns = frozenset(data.columns)\n scope = scope.merge_scopes(\n Scope(\n {\n t: map_new_column_names_to_data(\n remap_overlapping_column_names(\n parent_table_op, t, data_columns\n ),\n data,\n )\n },\n timecontext,\n )\n for t in op.root_tables()\n )\n scalar = execute(expr, scope=scope, **kwargs)\n return data.assign(**{name: scalar})[name]\n\n\n@compute_projection.register(ir.ColumnExpr, ops.Selection, dd.DataFrame)\ndef compute_projection_column_expr(\n expr,\n parent,\n data,\n scope: Scope,\n timecontext: Optional[TimeContext],\n **kwargs,\n):\n result_name = getattr(expr, '_name', None)\n op = expr.op()\n parent_table_op = parent.table.op()\n\n if isinstance(op, ops.TableColumn):\n # slightly faster path for simple column selection\n name = op.name\n\n if name in data:\n return data[name].rename(result_name or name)\n\n if not isinstance(parent_table_op, ops.Join):\n raise KeyError(name)\n (root_table,) = op.root_tables()\n left_root, right_root = ops.distinct_roots(\n parent_table_op.left, parent_table_op.right\n )\n suffixes = {\n left_root: constants.LEFT_JOIN_SUFFIX,\n right_root: constants.RIGHT_JOIN_SUFFIX,\n }\n return data.loc[:, name + suffixes[root_table]].rename(\n result_name or name\n )\n\n data_columns = frozenset(data.columns)\n\n scope = scope.merge_scopes(\n Scope(\n {\n t: map_new_column_names_to_data(\n remap_overlapping_column_names(\n parent_table_op, t, data_columns\n ),\n data,\n )\n },\n timecontext,\n )\n for t in op.root_tables()\n )\n\n result = execute(expr, scope=scope, timecontext=timecontext, **kwargs)\n result = coerce_to_output(result, expr, data.index)\n assert result_name is not None, 'Column selection name is None'\n\n return 
result\n\n\ncompute_projection.register(ir.TableExpr, ops.Selection, dd.DataFrame)(\n compute_projection_table_expr\n)\n\n\ndef build_df_from_selection(\n selections: List[ir.ColumnExpr], data: dd.DataFrame\n) -> dd.DataFrame:\n \"\"\"Build up a df by doing direct selections, renaming if necessary.\"\"\"\n cols = [\n (s.op().name, getattr(s, \"_name\", s.op().name)) for s in selections\n ]\n renamed_cols = {\n col: renamed_col for col, renamed_col in cols if col != renamed_col\n }\n\n result = data[[col for col, _ in cols]]\n if renamed_cols:\n result = result.rename(columns=renamed_cols)\n\n return result\n\n\ndef build_df_from_projection(\n selections: List[ir.Expr], op: ops.Selection, data: dd.DataFrame, **kwargs\n) -> dd.DataFrame:\n \"\"\"\n Build up a df from individual pieces by dispatching to `compute_projection`\n for each expression.\n \"\"\"\n\n # Create a unique row identifier and set it as the index. This is\n # used in dd.concat to merge the pieces back together.\n data = add_partitioned_sorted_column(data)\n data_pieces = [\n compute_projection(s, op, data, **kwargs) for s in selections\n ]\n\n return dd.concat(data_pieces, axis=1).reset_index(drop=True)\n\n\n@execute_node.register(ops.Selection, dd.DataFrame)\ndef execute_selection_dataframe(\n op, data, scope: Scope, timecontext: Optional[TimeContext], **kwargs\n):\n selections = op.selections\n predicates = op.predicates\n sort_keys = op.sort_keys\n result = data\n\n if selections:\n # if we are just performing select operations and all columns are in\n # the table we can do a direct selection\n if all(isinstance(s.op(), ops.TableColumn) for s in selections) and {\n s.op().name for s in selections\n }.issubset(set(result.columns)):\n result = build_df_from_selection(selections, data)\n else:\n result = build_df_from_projection(\n selections,\n op,\n data,\n scope=scope,\n timecontext=timecontext,\n **kwargs,\n )\n\n if predicates:\n predicates = _compute_predicates(\n op.table.op(), predicates, data, scope, timecontext, **kwargs\n )\n predicate = functools.reduce(operator.and_, predicates)\n result = result.loc[predicate]\n\n if sort_keys:\n if len(sort_keys) > 1:\n raise NotImplementedError(\n \"\"\"\n Multi-key sorting is not implemented for the Dask backend\n \"\"\"\n )\n sort_key = sort_keys[0]\n ascending = getattr(sort_key.op(), 'ascending', True)\n if not ascending:\n raise NotImplementedError(\n \"Descending sort is not supported for the Dask backend\"\n )\n result = compute_sorted_frame(\n result,\n order_by=sort_key,\n scope=scope,\n timecontext=timecontext,\n **kwargs,\n )\n\n return result\n else:\n grouping_keys = ordering_keys = ()\n\n # return early if we do not have any temporary grouping or ordering columns\n assert not grouping_keys, 'group by should never show up in Selection'\n if not ordering_keys:\n return result\n\n # create a sequence of columns that we need to drop\n temporary_columns = pandas.Index(\n concatv(grouping_keys, ordering_keys)\n ).difference(data.columns)\n\n # no reason to call drop if we don't need to\n if temporary_columns.empty:\n return result\n\n # drop every temporary column we created for ordering or grouping\n return result.drop(temporary_columns, axis=1)\n\n\ndef _compute_predicates(\n table_op,\n predicates,\n data,\n scope: Scope,\n timecontext: Optional[TimeContext],\n **kwargs,\n):\n \"\"\"Compute the predicates for a table operation.\n\n Parameters\n ----------\n table_op : TableNode\n predicates : List[ir.ColumnExpr]\n data : pd.DataFrame\n scope : Scope\n timecontext: 
Optional[TimeContext]\n kwargs : dict\n\n Returns\n -------\n computed_predicate : pd.Series[bool]\n\n Notes\n -----\n This handles the cases where the predicates are computed columns, in\n addition to the simple case of named columns coming directly from the input\n table.\n \"\"\"\n for predicate in predicates:\n # Map each root table of the predicate to the data so that we compute\n # predicates on the result instead of any left or right tables if the\n # Selection is on a Join. Project data to only inlude columns from\n # the root table.\n root_tables = predicate.op().root_tables()\n\n # handle suffixes\n data_columns = frozenset(data.columns)\n\n additional_scope = Scope()\n for root_table in root_tables:\n mapping = remap_overlapping_column_names(\n table_op, root_table, data_columns\n )\n if mapping is not None:\n new_data = data.loc[:, mapping.keys()].rename(columns=mapping)\n else:\n new_data = data\n additional_scope = additional_scope.merge_scope(\n Scope({root_table: new_data}, timecontext)\n )\n\n scope = scope.merge_scope(additional_scope)\n yield execute(predicate, scope=scope, **kwargs)\n", "path": "ibis/backends/dask/execution/selection.py"}]}
| 2,611 | 779 |
gh_patches_debug_1339
|
rasdani/github-patches
|
git_diff
|
kivy__kivy-7520
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
kivy.uix.Video._on_eos might be called after unload.
**Software Versions**
* Python: 3.7
* OS: linux
* Kivy: 2.0.0
* Kivy installation method: pip
**Describe the bug**
When using ffpyplayer based video implementation, it's possible that ``eos`` gets set from frame fetching thread after the video has been unload, which results in an ``AttributeError``, since ``self._video`` gets set to ``None`` in ``kivy.uix.Video.unload``.
**Proposed fix**
Add an additional check that ``self._video`` is set in ``_on_eos`` (https://github.com/kivy/kivy/blob/master/kivy/uix/video.py#L260)
```python
def _on_eos(self, *largs):
if not self._video or self._video.eos != 'loop':
self.state = 'stop'
self.eos = True
```
Any objections? Otherwise I'd create a PR for this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kivy/uix/video.py`
Content:
```
1 '''
2 Video
3 =====
4
5 The :class:`Video` widget is used to display video files and streams.
6 Depending on your Video core provider, platform, and plugins, you will
7 be able to play different formats. For example, the pygame video
8 provider only supports MPEG1 on Linux and OSX. GStreamer is more
9 versatile, and can read many video containers and codecs such as MKV,
10 OGV, AVI, MOV, FLV (if the correct gstreamer plugins are installed). Our
11 :class:`~kivy.core.video.VideoBase` implementation is used under the
12 hood.
13
14 Video loading is asynchronous - many properties are not available until
15 the video is loaded (when the texture is created)::
16
17 def on_position_change(instance, value):
18 print('The position in the video is', value)
19
20 def on_duration_change(instance, value):
21 print('The duration of the video is', value)
22
23 video = Video(source='PandaSneezes.avi')
24 video.bind(
25 position=on_position_change,
26 duration=on_duration_change
27 )
28
29 One can define a preview image which gets displayed until the video is
30 started/loaded by passing ``preview`` to the constructor::
31
32 video = Video(
33 source='PandaSneezes.avi',
34 preview='PandaSneezes_preview.png'
35 )
36
37 One can display the placeholder image when the video stops by reacting on eos::
38
39 def on_eos_change(self, inst, val):
40 if val and self.preview:
41 self.set_texture_from_resource(self.preview)
42
43 video.bind(eos=on_eos_change)
44 '''
45
46 __all__ = ('Video', )
47
48 from kivy.clock import Clock
49 from kivy.uix.image import Image
50 from kivy.core.video import Video as CoreVideo
51 from kivy.resources import resource_find
52 from kivy.properties import (BooleanProperty, NumericProperty, ObjectProperty,
53 OptionProperty, StringProperty)
54
55
56 class Video(Image):
57 '''Video class. See module documentation for more information.
58 '''
59
60 preview = StringProperty(None, allownone=True)
61 '''Filename / source of a preview image displayed before video starts.
62
63 :attr:`preview` is a :class:`~kivy.properties.StringProperty` and
64 defaults to None.
65
66 If set, it gets displayed until the video is loaded/started.
67
68 .. versionadded:: 2.1.0
69 '''
70
71 state = OptionProperty('stop', options=('play', 'pause', 'stop'))
72 '''String, indicates whether to play, pause, or stop the video::
73
74 # start playing the video at creation
75 video = Video(source='movie.mkv', state='play')
76
77 # create the video, and start later
78 video = Video(source='movie.mkv')
79 # and later
80 video.state = 'play'
81
82 :attr:`state` is an :class:`~kivy.properties.OptionProperty` and defaults
83 to 'stop'.
84 '''
85
86 play = BooleanProperty(False, deprecated=True)
87 '''
88 .. deprecated:: 1.4.0
89 Use :attr:`state` instead.
90
91 Boolean, indicates whether the video is playing or not.
92 You can start/stop the video by setting this property::
93
94 # start playing the video at creation
95 video = Video(source='movie.mkv', play=True)
96
97 # create the video, and start later
98 video = Video(source='movie.mkv')
99 # and later
100 video.play = True
101
102 :attr:`play` is a :class:`~kivy.properties.BooleanProperty` and defaults to
103 False.
104
105 .. deprecated:: 1.4.0
106 Use :attr:`state` instead.
107 '''
108
109 eos = BooleanProperty(False)
110 '''Boolean, indicates whether the video has finished playing or not
111 (reached the end of the stream).
112
113 :attr:`eos` is a :class:`~kivy.properties.BooleanProperty` and defaults to
114 False.
115 '''
116
117 loaded = BooleanProperty(False)
118 '''Boolean, indicates whether the video is loaded and ready for playback
119 or not.
120
121 .. versionadded:: 1.6.0
122
123 :attr:`loaded` is a :class:`~kivy.properties.BooleanProperty` and defaults
124 to False.
125 '''
126
127 position = NumericProperty(-1)
128 '''Position of the video between 0 and :attr:`duration`. The position
129 defaults to -1 and is set to a real position when the video is loaded.
130
131 :attr:`position` is a :class:`~kivy.properties.NumericProperty` and
132 defaults to -1.
133 '''
134
135 duration = NumericProperty(-1)
136 '''Duration of the video. The duration defaults to -1, and is set to a real
137 duration when the video is loaded.
138
139 :attr:`duration` is a :class:`~kivy.properties.NumericProperty` and
140 defaults to -1.
141 '''
142
143 volume = NumericProperty(1.)
144 '''Volume of the video, in the range 0-1. 1 means full volume, 0
145 means mute.
146
147 :attr:`volume` is a :class:`~kivy.properties.NumericProperty` and defaults
148 to 1.
149 '''
150
151 options = ObjectProperty({})
152 '''Options to pass at Video core object creation.
153
154 .. versionadded:: 1.0.4
155
156 :attr:`options` is an :class:`kivy.properties.ObjectProperty` and defaults
157 to {}.
158 '''
159
160 _video_load_event = None
161
162 def __init__(self, **kwargs):
163 self._video = None
164 super(Video, self).__init__(**kwargs)
165 self.fbind('source', self._trigger_video_load)
166
167 if "eos" in kwargs:
168 self.options["eos"] = kwargs["eos"]
169 if self.source:
170 self._trigger_video_load()
171
172 def texture_update(self, *largs):
173 if self.preview:
174 self.set_texture_from_resource(self.preview)
175 else:
176 self.set_texture_from_resource(self.source)
177
178 def seek(self, percent, precise=True):
179 '''Change the position to a percentage (strictly, a proportion)
180 of duration.
181
182 :Parameters:
183 `percent`: float or int
184 Position to seek as a proportion of the total duration,
185 must be between 0-1.
186 `precise`: bool, defaults to True
187 Precise seeking is slower, but seeks to exact requested
188 percent.
189
190 .. warning::
191 Calling seek() before the video is loaded has no effect.
192
193 .. versionadded:: 1.2.0
194
195 .. versionchanged:: 1.10.1
196 The `precise` keyword argument has been added.
197 '''
198 if self._video is None:
199 raise Exception('Video not loaded.')
200 self._video.seek(percent, precise=precise)
201
202 def _trigger_video_load(self, *largs):
203 ev = self._video_load_event
204 if ev is None:
205 ev = self._video_load_event = Clock.schedule_once(
206 self._do_video_load, -1)
207 ev()
208
209 def _do_video_load(self, *largs):
210 if CoreVideo is None:
211 return
212 self.unload()
213 if not self.source:
214 self._video = None
215 self.texture = None
216 else:
217 filename = self.source
218 # Check if filename is not url
219 if '://' not in filename:
220 filename = resource_find(filename)
221 self._video = CoreVideo(filename=filename, **self.options)
222 self._video.volume = self.volume
223 self._video.bind(on_load=self._on_load,
224 on_frame=self._on_video_frame,
225 on_eos=self._on_eos)
226 if self.state == 'play' or self.play:
227 self._video.play()
228 self.duration = 1.
229 self.position = 0.
230
231 def on_play(self, instance, value):
232 value = 'play' if value else 'stop'
233 return self.on_state(instance, value)
234
235 def on_state(self, instance, value):
236 if not self._video:
237 return
238 if value == 'play':
239 if self.eos:
240 self._video.stop()
241 self._video.position = 0.
242 self.eos = False
243 self._video.play()
244 elif value == 'pause':
245 self._video.pause()
246 else:
247 self._video.stop()
248 self._video.position = 0
249
250 def _on_video_frame(self, *largs):
251 video = self._video
252 if not video:
253 return
254 self.duration = video.duration
255 self.position = video.position
256 self.texture = video.texture
257 self.canvas.ask_update()
258
259 def _on_eos(self, *largs):
260 if self._video.eos != 'loop':
261 self.state = 'stop'
262 self.eos = True
263
264 def _on_load(self, *largs):
265 self.loaded = True
266 self._on_video_frame(largs)
267
268 def on_volume(self, instance, value):
269 if self._video:
270 self._video.volume = value
271
272 def unload(self):
273 '''Unload the video. The playback will be stopped.
274
275 .. versionadded:: 1.8.0
276 '''
277 if self._video:
278 self._video.stop()
279 self._video.unload()
280 self._video = None
281 self.loaded = False
282
283
284 if __name__ == '__main__':
285 from kivy.app import App
286 import sys
287
288 if len(sys.argv) != 2:
289 print("usage: %s file" % sys.argv[0])
290 sys.exit(1)
291
292 class VideoApp(App):
293 def build(self):
294 self.v = Video(source=sys.argv[1], state='play')
295 self.v.bind(state=self.replay)
296 return self.v
297
298 def replay(self, *args):
299 if self.v.state == 'stop':
300 self.v.state = 'play'
301
302 VideoApp().run()
303
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kivy/uix/video.py b/kivy/uix/video.py
--- a/kivy/uix/video.py
+++ b/kivy/uix/video.py
@@ -257,7 +257,7 @@
self.canvas.ask_update()
def _on_eos(self, *largs):
- if self._video.eos != 'loop':
+ if not self._video or self._video.eos != 'loop':
self.state = 'stop'
self.eos = True
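
Editor's note: the one-line guard addresses the race described in the report — with the ffpyplayer backend the frame-fetching thread can still deliver an end-of-stream callback after `unload()` has already set `self._video = None`, so the old check dereferences `None`. A stripped-down sketch of that interaction (the class and names are illustrative only, not the real widget):

```python
class VideoSketch:
    def __init__(self, core_video):
        self._video = core_video      # stands in for the CoreVideo backend object
        self.state = 'play'
        self.eos = False

    def unload(self):
        self._video = None            # mirrors what Video.unload() does

    def _on_eos(self, *largs):
        # patched form: short-circuit when the backend is already gone
        if not self._video or self._video.eos != 'loop':
            self.state = 'stop'
            self.eos = True

w = VideoSketch(core_video=object())
w.unload()
w._on_eos()                           # no AttributeError; playback is marked stopped
print(w.state, w.eos)                 # stop True
```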
|
{"golden_diff": "diff --git a/kivy/uix/video.py b/kivy/uix/video.py\n--- a/kivy/uix/video.py\n+++ b/kivy/uix/video.py\n@@ -257,7 +257,7 @@\n self.canvas.ask_update()\n \n def _on_eos(self, *largs):\n- if self._video.eos != 'loop':\n+ if not self._video or self._video.eos != 'loop':\n self.state = 'stop'\n self.eos = True\n", "issue": "kivy.uix.Video._on_eos might be called after unoad.\n**Software Versions**\r\n* Python: 3.7\r\n* OS: linux\r\n* Kivy: 2.0.0\r\n* Kivy installation method: pip\r\n\r\n**Describe the bug**\r\nWhen using ffpyplayer based video implementation, it's possible that ``eos`` gets set from frame fetching thread after the video has been unload, which results in an ``AttributeError``, since ``self._video`` gets set to ``None`` in ``kivy.uix.Video.unload``.\r\n\r\n**Proposed fix**\r\nAdd additional check whether ``self._video`` is set in ``_do_eos`` (https://github.com/kivy/kivy/blob/master/kivy/uix/video.py#L260)\r\n\r\n```python\r\n def _on_eos(self, *largs):\r\n if not self._video or self._video.eos != 'loop':\r\n self.state = 'stop'\r\n self.eos = True\r\n```\r\n\r\nAny objections? Otherwise i'd create a PR for this.\n", "before_files": [{"content": "'''\nVideo\n=====\n\nThe :class:`Video` widget is used to display video files and streams.\nDepending on your Video core provider, platform, and plugins, you will\nbe able to play different formats. For example, the pygame video\nprovider only supports MPEG1 on Linux and OSX. GStreamer is more\nversatile, and can read many video containers and codecs such as MKV,\nOGV, AVI, MOV, FLV (if the correct gstreamer plugins are installed). Our\n:class:`~kivy.core.video.VideoBase` implementation is used under the\nhood.\n\nVideo loading is asynchronous - many properties are not available until\nthe video is loaded (when the texture is created)::\n\n def on_position_change(instance, value):\n print('The position in the video is', value)\n\n def on_duration_change(instance, value):\n print('The duration of the video is', value)\n\n video = Video(source='PandaSneezes.avi')\n video.bind(\n position=on_position_change,\n duration=on_duration_change\n )\n\nOne can define a preview image which gets displayed until the video is\nstarted/loaded by passing ``preview`` to the constructor::\n\n video = Video(\n source='PandaSneezes.avi',\n preview='PandaSneezes_preview.png'\n )\n\nOne can display the placeholder image when the video stops by reacting on eos::\n\n def on_eos_change(self, inst, val):\n if val and self.preview:\n self.set_texture_from_resource(self.preview)\n\n video.bind(eos=on_eos_change)\n'''\n\n__all__ = ('Video', )\n\nfrom kivy.clock import Clock\nfrom kivy.uix.image import Image\nfrom kivy.core.video import Video as CoreVideo\nfrom kivy.resources import resource_find\nfrom kivy.properties import (BooleanProperty, NumericProperty, ObjectProperty,\n OptionProperty, StringProperty)\n\n\nclass Video(Image):\n '''Video class. See module documentation for more information.\n '''\n\n preview = StringProperty(None, allownone=True)\n '''Filename / source of a preview image displayed before video starts.\n\n :attr:`preview` is a :class:`~kivy.properties.StringProperty` and\n defaults to None.\n\n If set, it gets displayed until the video is loaded/started.\n\n .. 
versionadded:: 2.1.0\n '''\n\n state = OptionProperty('stop', options=('play', 'pause', 'stop'))\n '''String, indicates whether to play, pause, or stop the video::\n\n # start playing the video at creation\n video = Video(source='movie.mkv', state='play')\n\n # create the video, and start later\n video = Video(source='movie.mkv')\n # and later\n video.state = 'play'\n\n :attr:`state` is an :class:`~kivy.properties.OptionProperty` and defaults\n to 'stop'.\n '''\n\n play = BooleanProperty(False, deprecated=True)\n '''\n .. deprecated:: 1.4.0\n Use :attr:`state` instead.\n\n Boolean, indicates whether the video is playing or not.\n You can start/stop the video by setting this property::\n\n # start playing the video at creation\n video = Video(source='movie.mkv', play=True)\n\n # create the video, and start later\n video = Video(source='movie.mkv')\n # and later\n video.play = True\n\n :attr:`play` is a :class:`~kivy.properties.BooleanProperty` and defaults to\n False.\n\n .. deprecated:: 1.4.0\n Use :attr:`state` instead.\n '''\n\n eos = BooleanProperty(False)\n '''Boolean, indicates whether the video has finished playing or not\n (reached the end of the stream).\n\n :attr:`eos` is a :class:`~kivy.properties.BooleanProperty` and defaults to\n False.\n '''\n\n loaded = BooleanProperty(False)\n '''Boolean, indicates whether the video is loaded and ready for playback\n or not.\n\n .. versionadded:: 1.6.0\n\n :attr:`loaded` is a :class:`~kivy.properties.BooleanProperty` and defaults\n to False.\n '''\n\n position = NumericProperty(-1)\n '''Position of the video between 0 and :attr:`duration`. The position\n defaults to -1 and is set to a real position when the video is loaded.\n\n :attr:`position` is a :class:`~kivy.properties.NumericProperty` and\n defaults to -1.\n '''\n\n duration = NumericProperty(-1)\n '''Duration of the video. The duration defaults to -1, and is set to a real\n duration when the video is loaded.\n\n :attr:`duration` is a :class:`~kivy.properties.NumericProperty` and\n defaults to -1.\n '''\n\n volume = NumericProperty(1.)\n '''Volume of the video, in the range 0-1. 1 means full volume, 0\n means mute.\n\n :attr:`volume` is a :class:`~kivy.properties.NumericProperty` and defaults\n to 1.\n '''\n\n options = ObjectProperty({})\n '''Options to pass at Video core object creation.\n\n .. versionadded:: 1.0.4\n\n :attr:`options` is an :class:`kivy.properties.ObjectProperty` and defaults\n to {}.\n '''\n\n _video_load_event = None\n\n def __init__(self, **kwargs):\n self._video = None\n super(Video, self).__init__(**kwargs)\n self.fbind('source', self._trigger_video_load)\n\n if \"eos\" in kwargs:\n self.options[\"eos\"] = kwargs[\"eos\"]\n if self.source:\n self._trigger_video_load()\n\n def texture_update(self, *largs):\n if self.preview:\n self.set_texture_from_resource(self.preview)\n else:\n self.set_texture_from_resource(self.source)\n\n def seek(self, percent, precise=True):\n '''Change the position to a percentage (strictly, a proportion)\n of duration.\n\n :Parameters:\n `percent`: float or int\n Position to seek as a proportion of the total duration,\n must be between 0-1.\n `precise`: bool, defaults to True\n Precise seeking is slower, but seeks to exact requested\n percent.\n\n .. warning::\n Calling seek() before the video is loaded has no effect.\n\n .. versionadded:: 1.2.0\n\n .. 
versionchanged:: 1.10.1\n The `precise` keyword argument has been added.\n '''\n if self._video is None:\n raise Exception('Video not loaded.')\n self._video.seek(percent, precise=precise)\n\n def _trigger_video_load(self, *largs):\n ev = self._video_load_event\n if ev is None:\n ev = self._video_load_event = Clock.schedule_once(\n self._do_video_load, -1)\n ev()\n\n def _do_video_load(self, *largs):\n if CoreVideo is None:\n return\n self.unload()\n if not self.source:\n self._video = None\n self.texture = None\n else:\n filename = self.source\n # Check if filename is not url\n if '://' not in filename:\n filename = resource_find(filename)\n self._video = CoreVideo(filename=filename, **self.options)\n self._video.volume = self.volume\n self._video.bind(on_load=self._on_load,\n on_frame=self._on_video_frame,\n on_eos=self._on_eos)\n if self.state == 'play' or self.play:\n self._video.play()\n self.duration = 1.\n self.position = 0.\n\n def on_play(self, instance, value):\n value = 'play' if value else 'stop'\n return self.on_state(instance, value)\n\n def on_state(self, instance, value):\n if not self._video:\n return\n if value == 'play':\n if self.eos:\n self._video.stop()\n self._video.position = 0.\n self.eos = False\n self._video.play()\n elif value == 'pause':\n self._video.pause()\n else:\n self._video.stop()\n self._video.position = 0\n\n def _on_video_frame(self, *largs):\n video = self._video\n if not video:\n return\n self.duration = video.duration\n self.position = video.position\n self.texture = video.texture\n self.canvas.ask_update()\n\n def _on_eos(self, *largs):\n if self._video.eos != 'loop':\n self.state = 'stop'\n self.eos = True\n\n def _on_load(self, *largs):\n self.loaded = True\n self._on_video_frame(largs)\n\n def on_volume(self, instance, value):\n if self._video:\n self._video.volume = value\n\n def unload(self):\n '''Unload the video. The playback will be stopped.\n\n .. versionadded:: 1.8.0\n '''\n if self._video:\n self._video.stop()\n self._video.unload()\n self._video = None\n self.loaded = False\n\n\nif __name__ == '__main__':\n from kivy.app import App\n import sys\n\n if len(sys.argv) != 2:\n print(\"usage: %s file\" % sys.argv[0])\n sys.exit(1)\n\n class VideoApp(App):\n def build(self):\n self.v = Video(source=sys.argv[1], state='play')\n self.v.bind(state=self.replay)\n return self.v\n\n def replay(self, *args):\n if self.v.state == 'stop':\n self.v.state = 'play'\n\n VideoApp().run()\n", "path": "kivy/uix/video.py"}], "after_files": [{"content": "'''\nVideo\n=====\n\nThe :class:`Video` widget is used to display video files and streams.\nDepending on your Video core provider, platform, and plugins, you will\nbe able to play different formats. For example, the pygame video\nprovider only supports MPEG1 on Linux and OSX. GStreamer is more\nversatile, and can read many video containers and codecs such as MKV,\nOGV, AVI, MOV, FLV (if the correct gstreamer plugins are installed). 
Our\n:class:`~kivy.core.video.VideoBase` implementation is used under the\nhood.\n\nVideo loading is asynchronous - many properties are not available until\nthe video is loaded (when the texture is created)::\n\n def on_position_change(instance, value):\n print('The position in the video is', value)\n\n def on_duration_change(instance, value):\n print('The duration of the video is', value)\n\n video = Video(source='PandaSneezes.avi')\n video.bind(\n position=on_position_change,\n duration=on_duration_change\n )\n\nOne can define a preview image which gets displayed until the video is\nstarted/loaded by passing ``preview`` to the constructor::\n\n video = Video(\n source='PandaSneezes.avi',\n preview='PandaSneezes_preview.png'\n )\n\nOne can display the placeholder image when the video stops by reacting on eos::\n\n def on_eos_change(self, inst, val):\n if val and self.preview:\n self.set_texture_from_resource(self.preview)\n\n video.bind(eos=on_eos_change)\n'''\n\n__all__ = ('Video', )\n\nfrom kivy.clock import Clock\nfrom kivy.uix.image import Image\nfrom kivy.core.video import Video as CoreVideo\nfrom kivy.resources import resource_find\nfrom kivy.properties import (BooleanProperty, NumericProperty, ObjectProperty,\n OptionProperty, StringProperty)\n\n\nclass Video(Image):\n '''Video class. See module documentation for more information.\n '''\n\n preview = StringProperty(None, allownone=True)\n '''Filename / source of a preview image displayed before video starts.\n\n :attr:`preview` is a :class:`~kivy.properties.StringProperty` and\n defaults to None.\n\n If set, it gets displayed until the video is loaded/started.\n\n .. versionadded:: 2.1.0\n '''\n\n state = OptionProperty('stop', options=('play', 'pause', 'stop'))\n '''String, indicates whether to play, pause, or stop the video::\n\n # start playing the video at creation\n video = Video(source='movie.mkv', state='play')\n\n # create the video, and start later\n video = Video(source='movie.mkv')\n # and later\n video.state = 'play'\n\n :attr:`state` is an :class:`~kivy.properties.OptionProperty` and defaults\n to 'stop'.\n '''\n\n play = BooleanProperty(False, deprecated=True)\n '''\n .. deprecated:: 1.4.0\n Use :attr:`state` instead.\n\n Boolean, indicates whether the video is playing or not.\n You can start/stop the video by setting this property::\n\n # start playing the video at creation\n video = Video(source='movie.mkv', play=True)\n\n # create the video, and start later\n video = Video(source='movie.mkv')\n # and later\n video.play = True\n\n :attr:`play` is a :class:`~kivy.properties.BooleanProperty` and defaults to\n False.\n\n .. deprecated:: 1.4.0\n Use :attr:`state` instead.\n '''\n\n eos = BooleanProperty(False)\n '''Boolean, indicates whether the video has finished playing or not\n (reached the end of the stream).\n\n :attr:`eos` is a :class:`~kivy.properties.BooleanProperty` and defaults to\n False.\n '''\n\n loaded = BooleanProperty(False)\n '''Boolean, indicates whether the video is loaded and ready for playback\n or not.\n\n .. versionadded:: 1.6.0\n\n :attr:`loaded` is a :class:`~kivy.properties.BooleanProperty` and defaults\n to False.\n '''\n\n position = NumericProperty(-1)\n '''Position of the video between 0 and :attr:`duration`. The position\n defaults to -1 and is set to a real position when the video is loaded.\n\n :attr:`position` is a :class:`~kivy.properties.NumericProperty` and\n defaults to -1.\n '''\n\n duration = NumericProperty(-1)\n '''Duration of the video. 
The duration defaults to -1, and is set to a real\n duration when the video is loaded.\n\n :attr:`duration` is a :class:`~kivy.properties.NumericProperty` and\n defaults to -1.\n '''\n\n volume = NumericProperty(1.)\n '''Volume of the video, in the range 0-1. 1 means full volume, 0\n means mute.\n\n :attr:`volume` is a :class:`~kivy.properties.NumericProperty` and defaults\n to 1.\n '''\n\n options = ObjectProperty({})\n '''Options to pass at Video core object creation.\n\n .. versionadded:: 1.0.4\n\n :attr:`options` is an :class:`kivy.properties.ObjectProperty` and defaults\n to {}.\n '''\n\n _video_load_event = None\n\n def __init__(self, **kwargs):\n self._video = None\n super(Video, self).__init__(**kwargs)\n self.fbind('source', self._trigger_video_load)\n\n if \"eos\" in kwargs:\n self.options[\"eos\"] = kwargs[\"eos\"]\n if self.source:\n self._trigger_video_load()\n\n def texture_update(self, *largs):\n if self.preview:\n self.set_texture_from_resource(self.preview)\n else:\n self.set_texture_from_resource(self.source)\n\n def seek(self, percent, precise=True):\n '''Change the position to a percentage (strictly, a proportion)\n of duration.\n\n :Parameters:\n `percent`: float or int\n Position to seek as a proportion of the total duration,\n must be between 0-1.\n `precise`: bool, defaults to True\n Precise seeking is slower, but seeks to exact requested\n percent.\n\n .. warning::\n Calling seek() before the video is loaded has no effect.\n\n .. versionadded:: 1.2.0\n\n .. versionchanged:: 1.10.1\n The `precise` keyword argument has been added.\n '''\n if self._video is None:\n raise Exception('Video not loaded.')\n self._video.seek(percent, precise=precise)\n\n def _trigger_video_load(self, *largs):\n ev = self._video_load_event\n if ev is None:\n ev = self._video_load_event = Clock.schedule_once(\n self._do_video_load, -1)\n ev()\n\n def _do_video_load(self, *largs):\n if CoreVideo is None:\n return\n self.unload()\n if not self.source:\n self._video = None\n self.texture = None\n else:\n filename = self.source\n # Check if filename is not url\n if '://' not in filename:\n filename = resource_find(filename)\n self._video = CoreVideo(filename=filename, **self.options)\n self._video.volume = self.volume\n self._video.bind(on_load=self._on_load,\n on_frame=self._on_video_frame,\n on_eos=self._on_eos)\n if self.state == 'play' or self.play:\n self._video.play()\n self.duration = 1.\n self.position = 0.\n\n def on_play(self, instance, value):\n value = 'play' if value else 'stop'\n return self.on_state(instance, value)\n\n def on_state(self, instance, value):\n if not self._video:\n return\n if value == 'play':\n if self.eos:\n self._video.stop()\n self._video.position = 0.\n self.eos = False\n self._video.play()\n elif value == 'pause':\n self._video.pause()\n else:\n self._video.stop()\n self._video.position = 0\n\n def _on_video_frame(self, *largs):\n video = self._video\n if not video:\n return\n self.duration = video.duration\n self.position = video.position\n self.texture = video.texture\n self.canvas.ask_update()\n\n def _on_eos(self, *largs):\n if not self._video or self._video.eos != 'loop':\n self.state = 'stop'\n self.eos = True\n\n def _on_load(self, *largs):\n self.loaded = True\n self._on_video_frame(largs)\n\n def on_volume(self, instance, value):\n if self._video:\n self._video.volume = value\n\n def unload(self):\n '''Unload the video. The playback will be stopped.\n\n .. 
versionadded:: 1.8.0\n '''\n if self._video:\n self._video.stop()\n self._video.unload()\n self._video = None\n self.loaded = False\n\n\nif __name__ == '__main__':\n from kivy.app import App\n import sys\n\n if len(sys.argv) != 2:\n print(\"usage: %s file\" % sys.argv[0])\n sys.exit(1)\n\n class VideoApp(App):\n def build(self):\n self.v = Video(source=sys.argv[1], state='play')\n self.v.bind(state=self.replay)\n return self.v\n\n def replay(self, *args):\n if self.v.state == 'stop':\n self.v.state = 'play'\n\n VideoApp().run()\n", "path": "kivy/uix/video.py"}]}
| 3,451 | 112 |
gh_patches_debug_12409
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-7262
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug] Unable to `python_requires_extend` a package with a '.' in the name
### Environment Details (include every applicable attribute)
* Operating System+version: Windows 10
* Conan version: 1.25.1
* Python version: 3.8.1
### Steps to reproduce (Include if Applicable)
- Create a package with a '.' in the name (E.g. my.package/1.2.3)
- Create a recipe for another package with
```
python_requires = 'my.package/1.2.3'
python_requires_extend = 'my.package.MyPackage'
```
- Receive the error ERROR: too many values to unpack (expected 2)
--- END ISSUE ---
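
Editor's note: the "too many values to unpack" error can be reproduced with plain string handling, independent of Boost or the recipe machinery — the extend string `'my.package.MyPackage'` contains two dots, so a two-target unpack of `split('.')` must fail. A minimal illustration (values taken from the report):

```python
p = "my.package.MyPackage"              # the package name itself contains a '.'
try:
    pkg_name, base_class_name = p.split(".")
except ValueError as exc:
    print(exc)                          # too many values to unpack (expected 2)
```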
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/graph/python_requires.py`
Content:
```
1 import os
2 from collections import namedtuple
3 from contextlib import contextmanager
4
5 from conans.client.loader import parse_conanfile
6 from conans.client.recorder.action_recorder import ActionRecorder
7 from conans.errors import ConanException, NotFoundException
8 from conans.model.ref import ConanFileReference
9 from conans.model.requires import Requirement
10 from conans.util.conan_v2_mode import CONAN_V2_MODE_ENVVAR
11 from conans.util.conan_v2_mode import conan_v2_behavior
12
13 PythonRequire = namedtuple("PythonRequire", ["ref", "module", "conanfile",
14 "exports_folder", "exports_sources_folder"])
15
16
17 class PyRequire(object):
18 def __init__(self, module, conanfile, ref, path):
19 self.module = module
20 self.conanfile = conanfile
21 self.ref = ref
22 self.path = path
23
24
25 class PyRequires(object):
26 """ this is the object that replaces the declared conanfile.py_requires"""
27 def __init__(self):
28 self._pyrequires = {} # {pkg-name: PythonRequire}
29 self._transitive = {}
30
31 def update_transitive(self, conanfile):
32 transitive = getattr(conanfile, "python_requires", None)
33 if not transitive:
34 return
35 for name, transitive_py_require in transitive.all_items():
36 existing = self._pyrequires.get(name)
37 if existing and existing.ref != transitive_py_require.ref:
38 raise ConanException("Conflict in py_requires %s - %s"
39 % (existing.ref, transitive_py_require.ref))
40 self._transitive[name] = transitive_py_require
41
42 def all_items(self):
43 new_dict = self._pyrequires.copy()
44 new_dict.update(self._transitive)
45 return new_dict.items()
46
47 def all_refs(self):
48 return ([r.ref for r in self._pyrequires.values()] +
49 [r.ref for r in self._transitive.values()])
50
51 def items(self):
52 return self._pyrequires.items()
53
54 def __getitem__(self, item):
55 try:
56 return self._pyrequires[item]
57 except KeyError:
58 raise ConanException("'%s' is not a python_require" % item)
59
60 def __setitem__(self, key, value):
61 # single item assignment, direct
62 existing = self._pyrequires.get(key)
63 if existing:
64 raise ConanException("The python_require '%s' already exists" % key)
65 self._pyrequires[key] = value
66
67
68 class PyRequireLoader(object):
69 def __init__(self, proxy, range_resolver):
70 self._proxy = proxy
71 self._range_resolver = range_resolver
72 self._cached_py_requires = {}
73
74 def enable_remotes(self, check_updates=False, update=False, remotes=None):
75 self._check_updates = check_updates
76 self._update = update
77 self._remotes = remotes
78
79 @contextmanager
80 def capture_requires(self):
81 # DO nothing, just to stay compatible with the interface of python_requires
82 yield []
83
84 def load_py_requires(self, conanfile, lock_python_requires, loader):
85 if not hasattr(conanfile, "python_requires") or isinstance(conanfile.python_requires, dict):
86 return
87 py_requires_refs = conanfile.python_requires
88 if isinstance(py_requires_refs, str):
89 py_requires_refs = [py_requires_refs, ]
90
91 py_requires = self._resolve_py_requires(py_requires_refs, lock_python_requires, loader)
92 if hasattr(conanfile, "python_requires_extend"):
93 py_requires_extend = conanfile.python_requires_extend
94 if isinstance(py_requires_extend, str):
95 py_requires_extend = [py_requires_extend, ]
96 for p in py_requires_extend:
97 pkg_name, base_class_name = p.split(".")
98 base_class = getattr(py_requires[pkg_name].module, base_class_name)
99 conanfile.__bases__ = (base_class,) + conanfile.__bases__
100 conanfile.python_requires = py_requires
101
102 def _resolve_py_requires(self, py_requires_refs, lock_python_requires, loader):
103 result = PyRequires()
104 for py_requires_ref in py_requires_refs:
105 py_requires_ref = self._resolve_ref(py_requires_ref, lock_python_requires)
106 try:
107 py_require = self._cached_py_requires[py_requires_ref]
108 except KeyError:
109 conanfile, module, new_ref, path = self._load_pyreq_conanfile(loader,
110 lock_python_requires,
111 py_requires_ref)
112 py_require = PyRequire(module, conanfile, new_ref, path)
113 self._cached_py_requires[py_requires_ref] = py_require
114 result[py_require.ref.name] = py_require
115 # Update transitive and check conflicts
116 result.update_transitive(py_require.conanfile)
117 return result
118
119 def _resolve_ref(self, py_requires_ref, lock_python_requires):
120 ref = ConanFileReference.loads(py_requires_ref)
121 if lock_python_requires:
122 locked = {r.name: r for r in lock_python_requires}[ref.name]
123 ref = locked
124 else:
125 requirement = Requirement(ref)
126 self._range_resolver.resolve(requirement, "py_require", update=self._update,
127 remotes=self._remotes)
128 ref = requirement.ref
129 return ref
130
131 def _load_pyreq_conanfile(self, loader, lock_python_requires, ref):
132 recipe = self._proxy.get_recipe(ref, self._check_updates, self._update,
133 remotes=self._remotes, recorder=ActionRecorder())
134 path, _, _, new_ref = recipe
135 conanfile, module = loader.load_basic_module(path, lock_python_requires, user=new_ref.user,
136 channel=new_ref.channel)
137 conanfile.name = new_ref.name
138 conanfile.version = str(new_ref.version) \
139 if os.environ.get(CONAN_V2_MODE_ENVVAR, False) else new_ref.version
140
141 if getattr(conanfile, "alias", None):
142 ref = ConanFileReference.loads(conanfile.alias)
143 conanfile, module, new_ref, path = self._load_pyreq_conanfile(loader,
144 lock_python_requires,
145 ref)
146 return conanfile, module, new_ref, os.path.dirname(path)
147
148
149 class ConanPythonRequire(object):
150 def __init__(self, proxy, range_resolver):
151 self._cached_requires = {} # {reference: PythonRequire}
152 self._proxy = proxy
153 self._range_resolver = range_resolver
154 self._requires = None
155 self.valid = True
156 self._check_updates = False
157 self._update = False
158 self._remote_name = None
159 self.locked_versions = None
160
161 def enable_remotes(self, check_updates=False, update=False, remotes=None):
162 self._check_updates = check_updates
163 self._update = update
164 self._remotes = remotes
165
166 @contextmanager
167 def capture_requires(self):
168 old_requires = self._requires
169 self._requires = []
170 yield self._requires
171 self._requires = old_requires
172
173 def _look_for_require(self, reference):
174 ref = ConanFileReference.loads(reference)
175 ref = self.locked_versions[ref.name] if self.locked_versions is not None else ref
176 try:
177 python_require = self._cached_requires[ref]
178 except KeyError:
179 requirement = Requirement(ref)
180 self._range_resolver.resolve(requirement, "python_require", update=self._update,
181 remotes=self._remotes)
182 ref = requirement.ref
183 result = self._proxy.get_recipe(ref, self._check_updates, self._update,
184 remotes=self._remotes,
185 recorder=ActionRecorder())
186 path, _, _, new_ref = result
187 module, conanfile = parse_conanfile(conanfile_path=path, python_requires=self)
188
189 # Check for alias
190 if getattr(conanfile, "alias", None):
191 # Will register also the aliased
192 python_require = self._look_for_require(conanfile.alias)
193 else:
194 package_layout = self._proxy._cache.package_layout(new_ref, conanfile.short_paths)
195 exports_sources_folder = package_layout.export_sources()
196 exports_folder = package_layout.export()
197 python_require = PythonRequire(new_ref, module, conanfile,
198 exports_folder, exports_sources_folder)
199 self._cached_requires[ref] = python_require
200
201 return python_require
202
203 def __call__(self, reference):
204 conan_v2_behavior("Old syntax for python_requires is deprecated")
205 if not self.valid:
206 raise ConanException("Invalid use of python_requires(%s)" % reference)
207 try:
208 python_req = self._look_for_require(reference)
209 self._requires.append(python_req)
210 return python_req.module
211 except NotFoundException:
212 raise ConanException('Unable to find python_requires("{}") in remotes'.format(reference))
213
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conans/client/graph/python_requires.py b/conans/client/graph/python_requires.py
--- a/conans/client/graph/python_requires.py
+++ b/conans/client/graph/python_requires.py
@@ -94,7 +94,7 @@
if isinstance(py_requires_extend, str):
py_requires_extend = [py_requires_extend, ]
for p in py_requires_extend:
- pkg_name, base_class_name = p.split(".")
+ pkg_name, base_class_name = p.rsplit(".", 1)
base_class = getattr(py_requires[pkg_name].module, base_class_name)
conanfile.__bases__ = (base_class,) + conanfile.__bases__
conanfile.python_requires = py_requires
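
A short illustration of why limiting the split to the last dot is enough (a sketch, not part of the patch itself):

```python
p = "my.package.MyPackage"
print(p.split("."))       # ['my', 'package', 'MyPackage']  -> three pieces, unpack fails
print(p.rsplit(".", 1))   # ['my.package', 'MyPackage']     -> dotted package name stays intact
pkg_name, base_class_name = p.rsplit(".", 1)
```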
|
{"golden_diff": "diff --git a/conans/client/graph/python_requires.py b/conans/client/graph/python_requires.py\n--- a/conans/client/graph/python_requires.py\n+++ b/conans/client/graph/python_requires.py\n@@ -94,7 +94,7 @@\n if isinstance(py_requires_extend, str):\n py_requires_extend = [py_requires_extend, ]\n for p in py_requires_extend:\n- pkg_name, base_class_name = p.split(\".\")\n+ pkg_name, base_class_name = p.rsplit(\".\", 1)\n base_class = getattr(py_requires[pkg_name].module, base_class_name)\n conanfile.__bases__ = (base_class,) + conanfile.__bases__\n conanfile.python_requires = py_requires\n", "issue": "[bug] Unable to `python_requires_extend` a package with a '.' in the name\n### Environment Details (include every applicable attribute)\r\n * Operating System+version: Windows 10\r\n * Conan version: 1.25.1\r\n * Python version: 3.8.1\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n- Create a package with a '.' in the name (E.g. my.package/1.2.3)\r\n- Create a recipe for another package with \r\n```\r\npython_requires = 'my.package/1.2.3'\r\npython_requires_extend = 'my.package.MyPackage'\r\n```\r\n\r\n- Receive the error ERROR: too many values to unpack (expected 2)\r\n\n", "before_files": [{"content": "import os\nfrom collections import namedtuple\nfrom contextlib import contextmanager\n\nfrom conans.client.loader import parse_conanfile\nfrom conans.client.recorder.action_recorder import ActionRecorder\nfrom conans.errors import ConanException, NotFoundException\nfrom conans.model.ref import ConanFileReference\nfrom conans.model.requires import Requirement\nfrom conans.util.conan_v2_mode import CONAN_V2_MODE_ENVVAR\nfrom conans.util.conan_v2_mode import conan_v2_behavior\n\nPythonRequire = namedtuple(\"PythonRequire\", [\"ref\", \"module\", \"conanfile\",\n \"exports_folder\", \"exports_sources_folder\"])\n\n\nclass PyRequire(object):\n def __init__(self, module, conanfile, ref, path):\n self.module = module\n self.conanfile = conanfile\n self.ref = ref\n self.path = path\n\n\nclass PyRequires(object):\n \"\"\" this is the object that replaces the declared conanfile.py_requires\"\"\"\n def __init__(self):\n self._pyrequires = {} # {pkg-name: PythonRequire}\n self._transitive = {}\n\n def update_transitive(self, conanfile):\n transitive = getattr(conanfile, \"python_requires\", None)\n if not transitive:\n return\n for name, transitive_py_require in transitive.all_items():\n existing = self._pyrequires.get(name)\n if existing and existing.ref != transitive_py_require.ref:\n raise ConanException(\"Conflict in py_requires %s - %s\"\n % (existing.ref, transitive_py_require.ref))\n self._transitive[name] = transitive_py_require\n\n def all_items(self):\n new_dict = self._pyrequires.copy()\n new_dict.update(self._transitive)\n return new_dict.items()\n\n def all_refs(self):\n return ([r.ref for r in self._pyrequires.values()] +\n [r.ref for r in self._transitive.values()])\n\n def items(self):\n return self._pyrequires.items()\n\n def __getitem__(self, item):\n try:\n return self._pyrequires[item]\n except KeyError:\n raise ConanException(\"'%s' is not a python_require\" % item)\n\n def __setitem__(self, key, value):\n # single item assignment, direct\n existing = self._pyrequires.get(key)\n if existing:\n raise ConanException(\"The python_require '%s' already exists\" % key)\n self._pyrequires[key] = value\n\n\nclass PyRequireLoader(object):\n def __init__(self, proxy, range_resolver):\n self._proxy = proxy\n self._range_resolver = range_resolver\n self._cached_py_requires 
= {}\n\n def enable_remotes(self, check_updates=False, update=False, remotes=None):\n self._check_updates = check_updates\n self._update = update\n self._remotes = remotes\n\n @contextmanager\n def capture_requires(self):\n # DO nothing, just to stay compatible with the interface of python_requires\n yield []\n\n def load_py_requires(self, conanfile, lock_python_requires, loader):\n if not hasattr(conanfile, \"python_requires\") or isinstance(conanfile.python_requires, dict):\n return\n py_requires_refs = conanfile.python_requires\n if isinstance(py_requires_refs, str):\n py_requires_refs = [py_requires_refs, ]\n\n py_requires = self._resolve_py_requires(py_requires_refs, lock_python_requires, loader)\n if hasattr(conanfile, \"python_requires_extend\"):\n py_requires_extend = conanfile.python_requires_extend\n if isinstance(py_requires_extend, str):\n py_requires_extend = [py_requires_extend, ]\n for p in py_requires_extend:\n pkg_name, base_class_name = p.split(\".\")\n base_class = getattr(py_requires[pkg_name].module, base_class_name)\n conanfile.__bases__ = (base_class,) + conanfile.__bases__\n conanfile.python_requires = py_requires\n\n def _resolve_py_requires(self, py_requires_refs, lock_python_requires, loader):\n result = PyRequires()\n for py_requires_ref in py_requires_refs:\n py_requires_ref = self._resolve_ref(py_requires_ref, lock_python_requires)\n try:\n py_require = self._cached_py_requires[py_requires_ref]\n except KeyError:\n conanfile, module, new_ref, path = self._load_pyreq_conanfile(loader,\n lock_python_requires,\n py_requires_ref)\n py_require = PyRequire(module, conanfile, new_ref, path)\n self._cached_py_requires[py_requires_ref] = py_require\n result[py_require.ref.name] = py_require\n # Update transitive and check conflicts\n result.update_transitive(py_require.conanfile)\n return result\n\n def _resolve_ref(self, py_requires_ref, lock_python_requires):\n ref = ConanFileReference.loads(py_requires_ref)\n if lock_python_requires:\n locked = {r.name: r for r in lock_python_requires}[ref.name]\n ref = locked\n else:\n requirement = Requirement(ref)\n self._range_resolver.resolve(requirement, \"py_require\", update=self._update,\n remotes=self._remotes)\n ref = requirement.ref\n return ref\n\n def _load_pyreq_conanfile(self, loader, lock_python_requires, ref):\n recipe = self._proxy.get_recipe(ref, self._check_updates, self._update,\n remotes=self._remotes, recorder=ActionRecorder())\n path, _, _, new_ref = recipe\n conanfile, module = loader.load_basic_module(path, lock_python_requires, user=new_ref.user,\n channel=new_ref.channel)\n conanfile.name = new_ref.name\n conanfile.version = str(new_ref.version) \\\n if os.environ.get(CONAN_V2_MODE_ENVVAR, False) else new_ref.version\n\n if getattr(conanfile, \"alias\", None):\n ref = ConanFileReference.loads(conanfile.alias)\n conanfile, module, new_ref, path = self._load_pyreq_conanfile(loader,\n lock_python_requires,\n ref)\n return conanfile, module, new_ref, os.path.dirname(path)\n\n\nclass ConanPythonRequire(object):\n def __init__(self, proxy, range_resolver):\n self._cached_requires = {} # {reference: PythonRequire}\n self._proxy = proxy\n self._range_resolver = range_resolver\n self._requires = None\n self.valid = True\n self._check_updates = False\n self._update = False\n self._remote_name = None\n self.locked_versions = None\n\n def enable_remotes(self, check_updates=False, update=False, remotes=None):\n self._check_updates = check_updates\n self._update = update\n self._remotes = remotes\n\n @contextmanager\n 
def capture_requires(self):\n old_requires = self._requires\n self._requires = []\n yield self._requires\n self._requires = old_requires\n\n def _look_for_require(self, reference):\n ref = ConanFileReference.loads(reference)\n ref = self.locked_versions[ref.name] if self.locked_versions is not None else ref\n try:\n python_require = self._cached_requires[ref]\n except KeyError:\n requirement = Requirement(ref)\n self._range_resolver.resolve(requirement, \"python_require\", update=self._update,\n remotes=self._remotes)\n ref = requirement.ref\n result = self._proxy.get_recipe(ref, self._check_updates, self._update,\n remotes=self._remotes,\n recorder=ActionRecorder())\n path, _, _, new_ref = result\n module, conanfile = parse_conanfile(conanfile_path=path, python_requires=self)\n\n # Check for alias\n if getattr(conanfile, \"alias\", None):\n # Will register also the aliased\n python_require = self._look_for_require(conanfile.alias)\n else:\n package_layout = self._proxy._cache.package_layout(new_ref, conanfile.short_paths)\n exports_sources_folder = package_layout.export_sources()\n exports_folder = package_layout.export()\n python_require = PythonRequire(new_ref, module, conanfile,\n exports_folder, exports_sources_folder)\n self._cached_requires[ref] = python_require\n\n return python_require\n\n def __call__(self, reference):\n conan_v2_behavior(\"Old syntax for python_requires is deprecated\")\n if not self.valid:\n raise ConanException(\"Invalid use of python_requires(%s)\" % reference)\n try:\n python_req = self._look_for_require(reference)\n self._requires.append(python_req)\n return python_req.module\n except NotFoundException:\n raise ConanException('Unable to find python_requires(\"{}\") in remotes'.format(reference))\n", "path": "conans/client/graph/python_requires.py"}], "after_files": [{"content": "import os\nfrom collections import namedtuple\nfrom contextlib import contextmanager\n\nfrom conans.client.loader import parse_conanfile\nfrom conans.client.recorder.action_recorder import ActionRecorder\nfrom conans.errors import ConanException, NotFoundException\nfrom conans.model.ref import ConanFileReference\nfrom conans.model.requires import Requirement\nfrom conans.util.conan_v2_mode import CONAN_V2_MODE_ENVVAR\nfrom conans.util.conan_v2_mode import conan_v2_behavior\n\nPythonRequire = namedtuple(\"PythonRequire\", [\"ref\", \"module\", \"conanfile\",\n \"exports_folder\", \"exports_sources_folder\"])\n\n\nclass PyRequire(object):\n def __init__(self, module, conanfile, ref, path):\n self.module = module\n self.conanfile = conanfile\n self.ref = ref\n self.path = path\n\n\nclass PyRequires(object):\n \"\"\" this is the object that replaces the declared conanfile.py_requires\"\"\"\n def __init__(self):\n self._pyrequires = {} # {pkg-name: PythonRequire}\n self._transitive = {}\n\n def update_transitive(self, conanfile):\n transitive = getattr(conanfile, \"python_requires\", None)\n if not transitive:\n return\n for name, transitive_py_require in transitive.all_items():\n existing = self._pyrequires.get(name)\n if existing and existing.ref != transitive_py_require.ref:\n raise ConanException(\"Conflict in py_requires %s - %s\"\n % (existing.ref, transitive_py_require.ref))\n self._transitive[name] = transitive_py_require\n\n def all_items(self):\n new_dict = self._pyrequires.copy()\n new_dict.update(self._transitive)\n return new_dict.items()\n\n def all_refs(self):\n return ([r.ref for r in self._pyrequires.values()] +\n [r.ref for r in self._transitive.values()])\n\n def 
items(self):\n return self._pyrequires.items()\n\n def __getitem__(self, item):\n try:\n return self._pyrequires[item]\n except KeyError:\n raise ConanException(\"'%s' is not a python_require\" % item)\n\n def __setitem__(self, key, value):\n # single item assignment, direct\n existing = self._pyrequires.get(key)\n if existing:\n raise ConanException(\"The python_require '%s' already exists\" % key)\n self._pyrequires[key] = value\n\n\nclass PyRequireLoader(object):\n def __init__(self, proxy, range_resolver):\n self._proxy = proxy\n self._range_resolver = range_resolver\n self._cached_py_requires = {}\n\n def enable_remotes(self, check_updates=False, update=False, remotes=None):\n self._check_updates = check_updates\n self._update = update\n self._remotes = remotes\n\n @contextmanager\n def capture_requires(self):\n # DO nothing, just to stay compatible with the interface of python_requires\n yield []\n\n def load_py_requires(self, conanfile, lock_python_requires, loader):\n if not hasattr(conanfile, \"python_requires\") or isinstance(conanfile.python_requires, dict):\n return\n py_requires_refs = conanfile.python_requires\n if isinstance(py_requires_refs, str):\n py_requires_refs = [py_requires_refs, ]\n\n py_requires = self._resolve_py_requires(py_requires_refs, lock_python_requires, loader)\n if hasattr(conanfile, \"python_requires_extend\"):\n py_requires_extend = conanfile.python_requires_extend\n if isinstance(py_requires_extend, str):\n py_requires_extend = [py_requires_extend, ]\n for p in py_requires_extend:\n pkg_name, base_class_name = p.rsplit(\".\", 1)\n base_class = getattr(py_requires[pkg_name].module, base_class_name)\n conanfile.__bases__ = (base_class,) + conanfile.__bases__\n conanfile.python_requires = py_requires\n\n def _resolve_py_requires(self, py_requires_refs, lock_python_requires, loader):\n result = PyRequires()\n for py_requires_ref in py_requires_refs:\n py_requires_ref = self._resolve_ref(py_requires_ref, lock_python_requires)\n try:\n py_require = self._cached_py_requires[py_requires_ref]\n except KeyError:\n conanfile, module, new_ref, path = self._load_pyreq_conanfile(loader,\n lock_python_requires,\n py_requires_ref)\n py_require = PyRequire(module, conanfile, new_ref, path)\n self._cached_py_requires[py_requires_ref] = py_require\n result[py_require.ref.name] = py_require\n # Update transitive and check conflicts\n result.update_transitive(py_require.conanfile)\n return result\n\n def _resolve_ref(self, py_requires_ref, lock_python_requires):\n ref = ConanFileReference.loads(py_requires_ref)\n if lock_python_requires:\n locked = {r.name: r for r in lock_python_requires}[ref.name]\n ref = locked\n else:\n requirement = Requirement(ref)\n self._range_resolver.resolve(requirement, \"py_require\", update=self._update,\n remotes=self._remotes)\n ref = requirement.ref\n return ref\n\n def _load_pyreq_conanfile(self, loader, lock_python_requires, ref):\n recipe = self._proxy.get_recipe(ref, self._check_updates, self._update,\n remotes=self._remotes, recorder=ActionRecorder())\n path, _, _, new_ref = recipe\n conanfile, module = loader.load_basic_module(path, lock_python_requires, user=new_ref.user,\n channel=new_ref.channel)\n conanfile.name = new_ref.name\n conanfile.version = str(new_ref.version) \\\n if os.environ.get(CONAN_V2_MODE_ENVVAR, False) else new_ref.version\n\n if getattr(conanfile, \"alias\", None):\n ref = ConanFileReference.loads(conanfile.alias)\n conanfile, module, new_ref, path = self._load_pyreq_conanfile(loader,\n lock_python_requires,\n 
ref)\n return conanfile, module, new_ref, os.path.dirname(path)\n\n\nclass ConanPythonRequire(object):\n def __init__(self, proxy, range_resolver):\n self._cached_requires = {} # {reference: PythonRequire}\n self._proxy = proxy\n self._range_resolver = range_resolver\n self._requires = None\n self.valid = True\n self._check_updates = False\n self._update = False\n self._remote_name = None\n self.locked_versions = None\n\n def enable_remotes(self, check_updates=False, update=False, remotes=None):\n self._check_updates = check_updates\n self._update = update\n self._remotes = remotes\n\n @contextmanager\n def capture_requires(self):\n old_requires = self._requires\n self._requires = []\n yield self._requires\n self._requires = old_requires\n\n def _look_for_require(self, reference):\n ref = ConanFileReference.loads(reference)\n ref = self.locked_versions[ref.name] if self.locked_versions is not None else ref\n try:\n python_require = self._cached_requires[ref]\n except KeyError:\n requirement = Requirement(ref)\n self._range_resolver.resolve(requirement, \"python_require\", update=self._update,\n remotes=self._remotes)\n ref = requirement.ref\n result = self._proxy.get_recipe(ref, self._check_updates, self._update,\n remotes=self._remotes,\n recorder=ActionRecorder())\n path, _, _, new_ref = result\n module, conanfile = parse_conanfile(conanfile_path=path, python_requires=self)\n\n # Check for alias\n if getattr(conanfile, \"alias\", None):\n # Will register also the aliased\n python_require = self._look_for_require(conanfile.alias)\n else:\n package_layout = self._proxy._cache.package_layout(new_ref, conanfile.short_paths)\n exports_sources_folder = package_layout.export_sources()\n exports_folder = package_layout.export()\n python_require = PythonRequire(new_ref, module, conanfile,\n exports_folder, exports_sources_folder)\n self._cached_requires[ref] = python_require\n\n return python_require\n\n def __call__(self, reference):\n conan_v2_behavior(\"Old syntax for python_requires is deprecated\")\n if not self.valid:\n raise ConanException(\"Invalid use of python_requires(%s)\" % reference)\n try:\n python_req = self._look_for_require(reference)\n self._requires.append(python_req)\n return python_req.module\n except NotFoundException:\n raise ConanException('Unable to find python_requires(\"{}\") in remotes'.format(reference))\n", "path": "conans/client/graph/python_requires.py"}]}
| 2,843 | 155 |
gh_patches_debug_21630
|
rasdani/github-patches
|
git_diff
|
ipython__ipython-4599
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Qtconsole docstring pop-up fails on method containing defaulted enum argument
[We've found](http://trac.mantidproject.org/mantid/ticket/8422) that an error is generated in the (admittedly rare) situation where a C++ enum is exposed to Python via boost python and included in a python function as a default to an argument. This is seen in IPython 1.1 and the current tip of master.
Here's the simplest example I could come up with, though it does still require compiling C++ and linking to boost python. In a C++ file:
``` c++
#include <boost/python.hpp>
enum MyEnum
{
Red,
Yellow,
Blue
};
BOOST_PYTHON_MODULE(enum_test)
{
using namespace boost::python;
enum_<MyEnum>("MyEnum")
.value("Red", Red)
.value("Yellow", Yellow)
.value("Blue", Blue);
}
```
This should be compiled to a shared library with something like `gcc -fPIC -I /usr/include/python2.6 -shared -o enum_test.so enum_test.cpp -lboost_python`
Then, in the IPython qtconsole enter:
```
In [1]: import enum_test
In [2]: def MyFunc(color = enum_test.MyEnum.Red):
...: pass
...:
In [3]: MyFunc(
```
On typing the opening parenthesis a stack trace will appear that culminates in:
```
File "/usr/lib/python2.6/site-packages/IPython/kernel/zmq/session.py", line 83, in <lambda>
json_unpacker = lambda s: extract_dates(jsonapi.loads(s))
File "/usr/lib64/python2.6/site-packages/zmq/utils/jsonapi.py", line 81, in loads
return jsonmod.loads(s, **kwargs)
File "/usr/lib64/python2.6/site-packages/simplejson/__init__.py", line 307, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.6/site-packages/simplejson/decoder.py", line 335, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.6/site-packages/simplejson/decoder.py", line 353, in raw_decode
raise ValueError("No JSON object could be decoded")
```
The string that's going into the raw_decode function of decoder.py and leads to the exception is:
```
{"base_class":"<type 'function'>","init_definition":null,"type_name":"function","name":"MyFunc","isclass":null,"namespace":"Interactive","isalias":false,"init_docstring":null,"argspec":{"args":["color"],"varkw":null,"defaults":[Red],"varargs":null},"source":null,"length":null,"call_def":null,"call_docstring":null,"file":"/home/enumproblem/<ipython-input-2-b6e10dea3e06>","string_form":"<function MyFunc at 0x16a8de8>","found":true,"class_docstring":null,"definition":"\u001b[0mMyFunc\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mcolor\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0menum_test\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mMyEnum\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mRed\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n","docstring":"<no docstring>","ismagic":false}
```
--- END ISSUE ---
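
Editor's note: the root cause can be mimicked without compiling anything. A Boost.Python enum value behaves like an `int` subclass whose textual form is its name, and `json_clean` treated every `int` instance as already JSON-safe, so an encoder that stringifies integers with `str()` ends up writing a bare `Red` token into the message. The class below is a hypothetical stand-in used only to illustrate that combination:

```python
class FakeBoostEnum(int):
    """Stand-in for a Boost.Python enum value: an int whose str()/repr() is its name."""
    def __repr__(self):
        return "Red"
    __str__ = __repr__

red = FakeBoostEnum(0)
print(isinstance(red, int))   # True -> passes json_clean's "atomic" int check untouched
print(str(red))               # Red  -> a str()-based encoder emits this unquoted,
                              #         which no JSON decoder will accept
```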
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/utils/jsonutil.py`
Content:
```
1 """Utilities to manipulate JSON objects.
2 """
3 #-----------------------------------------------------------------------------
4 # Copyright (C) 2010-2011 The IPython Development Team
5 #
6 # Distributed under the terms of the BSD License. The full license is in
7 # the file COPYING.txt, distributed as part of this software.
8 #-----------------------------------------------------------------------------
9
10 #-----------------------------------------------------------------------------
11 # Imports
12 #-----------------------------------------------------------------------------
13 # stdlib
14 import math
15 import re
16 import types
17 from datetime import datetime
18
19 try:
20 # base64.encodestring is deprecated in Python 3.x
21 from base64 import encodebytes
22 except ImportError:
23 # Python 2.x
24 from base64 import encodestring as encodebytes
25
26 from IPython.utils import py3compat
27 from IPython.utils.py3compat import string_types, unicode_type, iteritems
28 from IPython.utils.encoding import DEFAULT_ENCODING
29 next_attr_name = '__next__' if py3compat.PY3 else 'next'
30
31 #-----------------------------------------------------------------------------
32 # Globals and constants
33 #-----------------------------------------------------------------------------
34
35 # timestamp formats
36 ISO8601 = "%Y-%m-%dT%H:%M:%S.%f"
37 ISO8601_PAT=re.compile(r"^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{1,6})Z?([\+\-]\d{2}:?\d{2})?$")
38
39 #-----------------------------------------------------------------------------
40 # Classes and functions
41 #-----------------------------------------------------------------------------
42
43 def rekey(dikt):
44 """Rekey a dict that has been forced to use str keys where there should be
45 ints by json."""
46 for k in dikt:
47 if isinstance(k, string_types):
48 ik=fk=None
49 try:
50 ik = int(k)
51 except ValueError:
52 try:
53 fk = float(k)
54 except ValueError:
55 continue
56 if ik is not None:
57 nk = ik
58 else:
59 nk = fk
60 if nk in dikt:
61 raise KeyError("already have key %r"%nk)
62 dikt[nk] = dikt.pop(k)
63 return dikt
64
65 def parse_date(s):
66 """parse an ISO8601 date string
67
68 If it is None or not a valid ISO8601 timestamp,
69 it will be returned unmodified.
70 Otherwise, it will return a datetime object.
71 """
72 if s is None:
73 return s
74 m = ISO8601_PAT.match(s)
75 if m:
76 # FIXME: add actual timezone support
77 # this just drops the timezone info
78 notz = m.groups()[0]
79 return datetime.strptime(notz, ISO8601)
80 return s
81
82 def extract_dates(obj):
83 """extract ISO8601 dates from unpacked JSON"""
84 if isinstance(obj, dict):
85 new_obj = {} # don't clobber
86 for k,v in iteritems(obj):
87 new_obj[k] = extract_dates(v)
88 obj = new_obj
89 elif isinstance(obj, (list, tuple)):
90 obj = [ extract_dates(o) for o in obj ]
91 elif isinstance(obj, string_types):
92 obj = parse_date(obj)
93 return obj
94
95 def squash_dates(obj):
96 """squash datetime objects into ISO8601 strings"""
97 if isinstance(obj, dict):
98 obj = dict(obj) # don't clobber
99 for k,v in iteritems(obj):
100 obj[k] = squash_dates(v)
101 elif isinstance(obj, (list, tuple)):
102 obj = [ squash_dates(o) for o in obj ]
103 elif isinstance(obj, datetime):
104 obj = obj.isoformat()
105 return obj
106
107 def date_default(obj):
108 """default function for packing datetime objects in JSON."""
109 if isinstance(obj, datetime):
110 return obj.isoformat()
111 else:
112 raise TypeError("%r is not JSON serializable"%obj)
113
114
115 # constants for identifying png/jpeg data
116 PNG = b'\x89PNG\r\n\x1a\n'
117 # front of PNG base64-encoded
118 PNG64 = b'iVBORw0KG'
119 JPEG = b'\xff\xd8'
120 # front of JPEG base64-encoded
121 JPEG64 = b'/9'
122
123 def encode_images(format_dict):
124 """b64-encodes images in a displaypub format dict
125
126 Perhaps this should be handled in json_clean itself?
127
128 Parameters
129 ----------
130
131 format_dict : dict
132 A dictionary of display data keyed by mime-type
133
134 Returns
135 -------
136
137 format_dict : dict
138 A copy of the same dictionary,
139 but binary image data ('image/png' or 'image/jpeg')
140 is base64-encoded.
141
142 """
143 encoded = format_dict.copy()
144
145 pngdata = format_dict.get('image/png')
146 if isinstance(pngdata, bytes):
147 # make sure we don't double-encode
148 if not pngdata.startswith(PNG64):
149 pngdata = encodebytes(pngdata)
150 encoded['image/png'] = pngdata.decode('ascii')
151
152 jpegdata = format_dict.get('image/jpeg')
153 if isinstance(jpegdata, bytes):
154 # make sure we don't double-encode
155 if not jpegdata.startswith(JPEG64):
156 jpegdata = encodebytes(jpegdata)
157 encoded['image/jpeg'] = jpegdata.decode('ascii')
158
159 return encoded
160
161
162 def json_clean(obj):
163 """Clean an object to ensure it's safe to encode in JSON.
164
165 Atomic, immutable objects are returned unmodified. Sets and tuples are
166 converted to lists, lists are copied and dicts are also copied.
167
168 Note: dicts whose keys could cause collisions upon encoding (such as a dict
169 with both the number 1 and the string '1' as keys) will cause a ValueError
170 to be raised.
171
172 Parameters
173 ----------
174 obj : any python object
175
176 Returns
177 -------
178 out : object
179
180 A version of the input which will not cause an encoding error when
181 encoded as JSON. Note that this function does not *encode* its inputs,
182 it simply sanitizes it so that there will be no encoding errors later.
183
184 Examples
185 --------
186 >>> json_clean(4)
187 4
188 >>> json_clean(list(range(10)))
189 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
190 >>> sorted(json_clean(dict(x=1, y=2)).items())
191 [('x', 1), ('y', 2)]
192 >>> sorted(json_clean(dict(x=1, y=2, z=[1,2,3])).items())
193 [('x', 1), ('y', 2), ('z', [1, 2, 3])]
194 >>> json_clean(True)
195 True
196 """
197 # types that are 'atomic' and ok in json as-is. bool doesn't need to be
198 # listed explicitly because bools pass as int instances
199 atomic_ok = (unicode_type, int, type(None))
200
201 # containers that we need to convert into lists
202 container_to_list = (tuple, set, types.GeneratorType)
203
204 if isinstance(obj, float):
205 # cast out-of-range floats to their reprs
206 if math.isnan(obj) or math.isinf(obj):
207 return repr(obj)
208 return obj
209
210 if isinstance(obj, atomic_ok):
211 return obj
212
213 if isinstance(obj, bytes):
214 return obj.decode(DEFAULT_ENCODING, 'replace')
215
216 if isinstance(obj, container_to_list) or (
217 hasattr(obj, '__iter__') and hasattr(obj, next_attr_name)):
218 obj = list(obj)
219
220 if isinstance(obj, list):
221 return [json_clean(x) for x in obj]
222
223 if isinstance(obj, dict):
224 # First, validate that the dict won't lose data in conversion due to
225 # key collisions after stringification. This can happen with keys like
226 # True and 'true' or 1 and '1', which collide in JSON.
227 nkeys = len(obj)
228 nkeys_collapsed = len(set(map(str, obj)))
229 if nkeys != nkeys_collapsed:
230 raise ValueError('dict can not be safely converted to JSON: '
231 'key collision would lead to dropped values')
232 # If all OK, proceed by making the new dict that will be json-safe
233 out = {}
234 for k,v in iteritems(obj):
235 out[str(k)] = json_clean(v)
236 return out
237
238 # If we get here, we don't know how to handle the object, so we just get
239 # its repr and return that. This will catch lambdas, open sockets, class
240 # objects, and any other complicated contraption that json can't encode
241 return repr(obj)
242
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/IPython/utils/jsonutil.py b/IPython/utils/jsonutil.py
--- a/IPython/utils/jsonutil.py
+++ b/IPython/utils/jsonutil.py
@@ -194,9 +194,8 @@
>>> json_clean(True)
True
"""
- # types that are 'atomic' and ok in json as-is. bool doesn't need to be
- # listed explicitly because bools pass as int instances
- atomic_ok = (unicode_type, int, type(None))
+ # types that are 'atomic' and ok in json as-is.
+ atomic_ok = (unicode_type, type(None))
# containers that we need to convert into lists
container_to_list = (tuple, set, types.GeneratorType)
@@ -205,7 +204,14 @@
# cast out-of-range floats to their reprs
if math.isnan(obj) or math.isinf(obj):
return repr(obj)
- return obj
+ return float(obj)
+
+ if isinstance(obj, int):
+ # cast int to int, in case subclasses override __str__ (e.g. boost enum, #4598)
+ if isinstance(obj, bool):
+ # bools are ints, but we don't want to cast them to 0,1
+ return obj
+ return int(obj)
if isinstance(obj, atomic_ok):
return obj
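
The patch handles this by coercing integers to plain `int` before they reach the encoder, while leaving booleans untouched. A self-contained sketch of that behaviour (the `Color` class is hypothetical, for illustration only):

```python
def clean_number(obj):
    # mirrors the number handling added to json_clean (sketch only)
    if isinstance(obj, int):
        if isinstance(obj, bool):
            return obj            # bools are ints, but must stay True/False, not 1/0
        return int(obj)           # strips subclasses (e.g. Boost enums) to a plain int
    return obj

class Color(int):                 # hypothetical int subclass with a name-like repr
    def __repr__(self):
        return "Red"

print(repr(clean_number(Color(0))))   # 0    -> safe for any JSON encoder
print(clean_number(True))             # True -> booleans survive unchanged
```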
```
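For readers following the patch, the second hunk is the heart of the fix. Below is a minimal, self-contained sketch of why the extra `int` branch matters; it is illustrative only, not code from the repository, and `Color` is a hypothetical stand-in for a Boost.Python enum value (an `int` subclass that overrides its string form, as the added comment in the diff notes):

```python
import json


class Color(int):
    """Hypothetical stand-in for a Boost.Python enum value: an int
    subclass whose string form is a bare name rather than a number."""

    def __repr__(self):
        return 'Red'

    __str__ = __repr__


def clean_number(obj):
    # Sketch of the patched branch in json_clean:
    if isinstance(obj, bool):
        # bools are ints, but we don't want to cast them to 0/1
        return obj
    if isinstance(obj, int):
        # cast to a plain int so an overridden __str__/__repr__
        # cannot leak a non-numeric token into the serialized output
        return int(obj)
    return obj


red = Color(0)
print(repr(red))                        # Red  -> the bare name, not a number
print(repr(clean_number(red)))          # 0    -> plain int, safe for any JSON encoder
print(json.dumps([clean_number(red)]))  # [0]
print(clean_number(True))               # True -> bools survive untouched
```

Because `bool` is itself a subclass of `int`, the patch tests for it first so that `True`/`False` are not collapsed to `1`/`0`. The `return float(obj)` change in the first hunk applies the same defensive cast to `float` subclasses, and `int` is dropped from `atomic_ok` since plain integers are now handled by the new branch.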
{"golden_diff": "diff --git a/IPython/utils/jsonutil.py b/IPython/utils/jsonutil.py\n--- a/IPython/utils/jsonutil.py\n+++ b/IPython/utils/jsonutil.py\n@@ -194,9 +194,8 @@\n >>> json_clean(True)\n True\n \"\"\"\n- # types that are 'atomic' and ok in json as-is. bool doesn't need to be\n- # listed explicitly because bools pass as int instances\n- atomic_ok = (unicode_type, int, type(None))\n+ # types that are 'atomic' and ok in json as-is.\n+ atomic_ok = (unicode_type, type(None))\n \n # containers that we need to convert into lists\n container_to_list = (tuple, set, types.GeneratorType)\n@@ -205,7 +204,14 @@\n # cast out-of-range floats to their reprs\n if math.isnan(obj) or math.isinf(obj):\n return repr(obj)\n- return obj\n+ return float(obj)\n+ \n+ if isinstance(obj, int):\n+ # cast int to int, in case subclasses override __str__ (e.g. boost enum, #4598)\n+ if isinstance(obj, bool):\n+ # bools are ints, but we don't want to cast them to 0,1\n+ return obj\n+ return int(obj)\n \n if isinstance(obj, atomic_ok):\n return obj\n", "issue": "Qtconsole docstring pop-up fails on method containing defaulted enum argument\n[We've found](http://trac.mantidproject.org/mantid/ticket/8422) that an error is generated in the (admittedly rare) situation where a C++ enum is exposed to Python via boost python and included in a python function as a default to an argument. This is seen in IPython 1.1 and the current tip of master.\n\nHere's the simplest example I could come up with, though it does still require compiling C++ and linking to boost python. In a C++ file:\n\n``` c++\n#include <boost/python.hpp>\n\nenum MyEnum\n{\n Red,\n Yellow,\n Blue\n};\n\nBOOST_PYTHON_MODULE(enum_test)\n{\n using namespace boost::python;\n\n enum_<MyEnum>(\"MyEnum\")\n .value(\"Red\", Red)\n .value(\"Yellow\", Yellow)\n .value(\"Blue\", Blue);\n}\n```\n\nThis should be compiled to a shared library with something like `gcc -fPIC -I /usr/include/python2.6 -shared -o enum_test.so enum_test.cpp -lboost_python`\n\nThen, in the IPython qtconsole enter:\n\n```\nIn [1]: import enum_test\n\nIn [2]: def MyFunc(color = enum_test.MyEnum.Red):\n ...: pass\n ...: \n\nIn [3]: MyFunc(\n```\n\nOn typing the opening parenthesis a stack trace will appear that culminates in:\n\n```\n File \"/usr/lib/python2.6/site-packages/IPython/kernel/zmq/session.py\", line 83, in <lambda>\n json_unpacker = lambda s: extract_dates(jsonapi.loads(s))\n File \"/usr/lib64/python2.6/site-packages/zmq/utils/jsonapi.py\", line 81, in loads\n return jsonmod.loads(s, **kwargs)\n File \"/usr/lib64/python2.6/site-packages/simplejson/__init__.py\", line 307, in loads\n return _default_decoder.decode(s)\n File \"/usr/lib64/python2.6/site-packages/simplejson/decoder.py\", line 335, in decode\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n File \"/usr/lib64/python2.6/site-packages/simplejson/decoder.py\", line 353, in raw_decode\n raise ValueError(\"No JSON object could be decoded\")\n```\n\nThe string that's going into the raw_decode function of decoder.py and leads to the exception is:\n\n```\n{\"base_class\":\"<type 'function'>\",\"init_definition\":null,\"type_name\":\"function\",\"name\":\"MyFunc\",\"isclass\":null,\"namespace\":\"Interactive\",\"isalias\":false,\"init_docstring\":null,\"argspec\":{\"args\":[\"color\"],\"varkw\":null,\"defaults\":[Red],\"varargs\":null},\"source\":null,\"length\":null,\"call_def\":null,\"call_docstring\":null,\"file\":\"/home/enumproblem/<ipython-input-2-b6e10dea3e06>\",\"string_form\":\"<function MyFunc at 
0x16a8de8>\",\"found\":true,\"class_docstring\":null,\"definition\":\"\\u001b[0mMyFunc\\u001b[0m\\u001b[1;33m(\\u001b[0m\\u001b[0mcolor\\u001b[0m\\u001b[1;33m=\\u001b[0m\\u001b[0menum_test\\u001b[0m\\u001b[1;33m.\\u001b[0m\\u001b[0mMyEnum\\u001b[0m\\u001b[1;33m.\\u001b[0m\\u001b[0mRed\\u001b[0m\\u001b[1;33m)\\u001b[0m\\u001b[1;33m\\u001b[0m\\u001b[0m\\n\",\"docstring\":\"<no docstring>\",\"ismagic\":false}\n```\n\n", "before_files": [{"content": "\"\"\"Utilities to manipulate JSON objects.\n\"\"\"\n#-----------------------------------------------------------------------------\n# Copyright (C) 2010-2011 The IPython Development Team\n#\n# Distributed under the terms of the BSD License. The full license is in\n# the file COPYING.txt, distributed as part of this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n# stdlib\nimport math\nimport re\nimport types\nfrom datetime import datetime\n\ntry:\n # base64.encodestring is deprecated in Python 3.x\n from base64 import encodebytes\nexcept ImportError:\n # Python 2.x\n from base64 import encodestring as encodebytes\n\nfrom IPython.utils import py3compat\nfrom IPython.utils.py3compat import string_types, unicode_type, iteritems\nfrom IPython.utils.encoding import DEFAULT_ENCODING\nnext_attr_name = '__next__' if py3compat.PY3 else 'next'\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n# timestamp formats\nISO8601 = \"%Y-%m-%dT%H:%M:%S.%f\"\nISO8601_PAT=re.compile(r\"^(\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d{1,6})Z?([\\+\\-]\\d{2}:?\\d{2})?$\")\n\n#-----------------------------------------------------------------------------\n# Classes and functions\n#-----------------------------------------------------------------------------\n\ndef rekey(dikt):\n \"\"\"Rekey a dict that has been forced to use str keys where there should be\n ints by json.\"\"\"\n for k in dikt:\n if isinstance(k, string_types):\n ik=fk=None\n try:\n ik = int(k)\n except ValueError:\n try:\n fk = float(k)\n except ValueError:\n continue\n if ik is not None:\n nk = ik\n else:\n nk = fk\n if nk in dikt:\n raise KeyError(\"already have key %r\"%nk)\n dikt[nk] = dikt.pop(k)\n return dikt\n\ndef parse_date(s):\n \"\"\"parse an ISO8601 date string\n \n If it is None or not a valid ISO8601 timestamp,\n it will be returned unmodified.\n Otherwise, it will return a datetime object.\n \"\"\"\n if s is None:\n return s\n m = ISO8601_PAT.match(s)\n if m:\n # FIXME: add actual timezone support\n # this just drops the timezone info\n notz = m.groups()[0]\n return datetime.strptime(notz, ISO8601)\n return s\n\ndef extract_dates(obj):\n \"\"\"extract ISO8601 dates from unpacked JSON\"\"\"\n if isinstance(obj, dict):\n new_obj = {} # don't clobber\n for k,v in iteritems(obj):\n new_obj[k] = extract_dates(v)\n obj = new_obj\n elif isinstance(obj, (list, tuple)):\n obj = [ extract_dates(o) for o in obj ]\n elif isinstance(obj, string_types):\n obj = parse_date(obj)\n return obj\n\ndef squash_dates(obj):\n \"\"\"squash datetime objects into ISO8601 strings\"\"\"\n if isinstance(obj, dict):\n obj = dict(obj) # don't clobber\n for k,v in iteritems(obj):\n obj[k] = squash_dates(v)\n elif isinstance(obj, (list, tuple)):\n obj = [ squash_dates(o) 
for o in obj ]\n elif isinstance(obj, datetime):\n obj = obj.isoformat()\n return obj\n\ndef date_default(obj):\n \"\"\"default function for packing datetime objects in JSON.\"\"\"\n if isinstance(obj, datetime):\n return obj.isoformat()\n else:\n raise TypeError(\"%r is not JSON serializable\"%obj)\n\n\n# constants for identifying png/jpeg data\nPNG = b'\\x89PNG\\r\\n\\x1a\\n'\n# front of PNG base64-encoded\nPNG64 = b'iVBORw0KG'\nJPEG = b'\\xff\\xd8'\n# front of JPEG base64-encoded\nJPEG64 = b'/9'\n\ndef encode_images(format_dict):\n \"\"\"b64-encodes images in a displaypub format dict\n\n Perhaps this should be handled in json_clean itself?\n\n Parameters\n ----------\n\n format_dict : dict\n A dictionary of display data keyed by mime-type\n\n Returns\n -------\n\n format_dict : dict\n A copy of the same dictionary,\n but binary image data ('image/png' or 'image/jpeg')\n is base64-encoded.\n\n \"\"\"\n encoded = format_dict.copy()\n\n pngdata = format_dict.get('image/png')\n if isinstance(pngdata, bytes):\n # make sure we don't double-encode\n if not pngdata.startswith(PNG64):\n pngdata = encodebytes(pngdata)\n encoded['image/png'] = pngdata.decode('ascii')\n\n jpegdata = format_dict.get('image/jpeg')\n if isinstance(jpegdata, bytes):\n # make sure we don't double-encode\n if not jpegdata.startswith(JPEG64):\n jpegdata = encodebytes(jpegdata)\n encoded['image/jpeg'] = jpegdata.decode('ascii')\n\n return encoded\n\n\ndef json_clean(obj):\n \"\"\"Clean an object to ensure it's safe to encode in JSON.\n\n Atomic, immutable objects are returned unmodified. Sets and tuples are\n converted to lists, lists are copied and dicts are also copied.\n\n Note: dicts whose keys could cause collisions upon encoding (such as a dict\n with both the number 1 and the string '1' as keys) will cause a ValueError\n to be raised.\n\n Parameters\n ----------\n obj : any python object\n\n Returns\n -------\n out : object\n\n A version of the input which will not cause an encoding error when\n encoded as JSON. Note that this function does not *encode* its inputs,\n it simply sanitizes it so that there will be no encoding errors later.\n\n Examples\n --------\n >>> json_clean(4)\n 4\n >>> json_clean(list(range(10)))\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n >>> sorted(json_clean(dict(x=1, y=2)).items())\n [('x', 1), ('y', 2)]\n >>> sorted(json_clean(dict(x=1, y=2, z=[1,2,3])).items())\n [('x', 1), ('y', 2), ('z', [1, 2, 3])]\n >>> json_clean(True)\n True\n \"\"\"\n # types that are 'atomic' and ok in json as-is. bool doesn't need to be\n # listed explicitly because bools pass as int instances\n atomic_ok = (unicode_type, int, type(None))\n\n # containers that we need to convert into lists\n container_to_list = (tuple, set, types.GeneratorType)\n\n if isinstance(obj, float):\n # cast out-of-range floats to their reprs\n if math.isnan(obj) or math.isinf(obj):\n return repr(obj)\n return obj\n\n if isinstance(obj, atomic_ok):\n return obj\n\n if isinstance(obj, bytes):\n return obj.decode(DEFAULT_ENCODING, 'replace')\n\n if isinstance(obj, container_to_list) or (\n hasattr(obj, '__iter__') and hasattr(obj, next_attr_name)):\n obj = list(obj)\n\n if isinstance(obj, list):\n return [json_clean(x) for x in obj]\n\n if isinstance(obj, dict):\n # First, validate that the dict won't lose data in conversion due to\n # key collisions after stringification. 
This can happen with keys like\n # True and 'true' or 1 and '1', which collide in JSON.\n nkeys = len(obj)\n nkeys_collapsed = len(set(map(str, obj)))\n if nkeys != nkeys_collapsed:\n raise ValueError('dict can not be safely converted to JSON: '\n 'key collision would lead to dropped values')\n # If all OK, proceed by making the new dict that will be json-safe\n out = {}\n for k,v in iteritems(obj):\n out[str(k)] = json_clean(v)\n return out\n\n # If we get here, we don't know how to handle the object, so we just get\n # its repr and return that. This will catch lambdas, open sockets, class\n # objects, and any other complicated contraption that json can't encode\n return repr(obj)\n", "path": "IPython/utils/jsonutil.py"}], "after_files": [{"content": "\"\"\"Utilities to manipulate JSON objects.\n\"\"\"\n#-----------------------------------------------------------------------------\n# Copyright (C) 2010-2011 The IPython Development Team\n#\n# Distributed under the terms of the BSD License. The full license is in\n# the file COPYING.txt, distributed as part of this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n# stdlib\nimport math\nimport re\nimport types\nfrom datetime import datetime\n\ntry:\n # base64.encodestring is deprecated in Python 3.x\n from base64 import encodebytes\nexcept ImportError:\n # Python 2.x\n from base64 import encodestring as encodebytes\n\nfrom IPython.utils import py3compat\nfrom IPython.utils.py3compat import string_types, unicode_type, iteritems\nfrom IPython.utils.encoding import DEFAULT_ENCODING\nnext_attr_name = '__next__' if py3compat.PY3 else 'next'\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n# timestamp formats\nISO8601 = \"%Y-%m-%dT%H:%M:%S.%f\"\nISO8601_PAT=re.compile(r\"^(\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d{1,6})Z?([\\+\\-]\\d{2}:?\\d{2})?$\")\n\n#-----------------------------------------------------------------------------\n# Classes and functions\n#-----------------------------------------------------------------------------\n\ndef rekey(dikt):\n \"\"\"Rekey a dict that has been forced to use str keys where there should be\n ints by json.\"\"\"\n for k in dikt:\n if isinstance(k, string_types):\n ik=fk=None\n try:\n ik = int(k)\n except ValueError:\n try:\n fk = float(k)\n except ValueError:\n continue\n if ik is not None:\n nk = ik\n else:\n nk = fk\n if nk in dikt:\n raise KeyError(\"already have key %r\"%nk)\n dikt[nk] = dikt.pop(k)\n return dikt\n\ndef parse_date(s):\n \"\"\"parse an ISO8601 date string\n \n If it is None or not a valid ISO8601 timestamp,\n it will be returned unmodified.\n Otherwise, it will return a datetime object.\n \"\"\"\n if s is None:\n return s\n m = ISO8601_PAT.match(s)\n if m:\n # FIXME: add actual timezone support\n # this just drops the timezone info\n notz = m.groups()[0]\n return datetime.strptime(notz, ISO8601)\n return s\n\ndef extract_dates(obj):\n \"\"\"extract ISO8601 dates from unpacked JSON\"\"\"\n if isinstance(obj, dict):\n new_obj = {} # don't clobber\n for k,v in iteritems(obj):\n new_obj[k] = extract_dates(v)\n obj = new_obj\n elif isinstance(obj, (list, tuple)):\n obj = [ extract_dates(o) for o in obj ]\n elif isinstance(obj, 
string_types):\n obj = parse_date(obj)\n return obj\n\ndef squash_dates(obj):\n \"\"\"squash datetime objects into ISO8601 strings\"\"\"\n if isinstance(obj, dict):\n obj = dict(obj) # don't clobber\n for k,v in iteritems(obj):\n obj[k] = squash_dates(v)\n elif isinstance(obj, (list, tuple)):\n obj = [ squash_dates(o) for o in obj ]\n elif isinstance(obj, datetime):\n obj = obj.isoformat()\n return obj\n\ndef date_default(obj):\n \"\"\"default function for packing datetime objects in JSON.\"\"\"\n if isinstance(obj, datetime):\n return obj.isoformat()\n else:\n raise TypeError(\"%r is not JSON serializable\"%obj)\n\n\n# constants for identifying png/jpeg data\nPNG = b'\\x89PNG\\r\\n\\x1a\\n'\n# front of PNG base64-encoded\nPNG64 = b'iVBORw0KG'\nJPEG = b'\\xff\\xd8'\n# front of JPEG base64-encoded\nJPEG64 = b'/9'\n\ndef encode_images(format_dict):\n \"\"\"b64-encodes images in a displaypub format dict\n\n Perhaps this should be handled in json_clean itself?\n\n Parameters\n ----------\n\n format_dict : dict\n A dictionary of display data keyed by mime-type\n\n Returns\n -------\n\n format_dict : dict\n A copy of the same dictionary,\n but binary image data ('image/png' or 'image/jpeg')\n is base64-encoded.\n\n \"\"\"\n encoded = format_dict.copy()\n\n pngdata = format_dict.get('image/png')\n if isinstance(pngdata, bytes):\n # make sure we don't double-encode\n if not pngdata.startswith(PNG64):\n pngdata = encodebytes(pngdata)\n encoded['image/png'] = pngdata.decode('ascii')\n\n jpegdata = format_dict.get('image/jpeg')\n if isinstance(jpegdata, bytes):\n # make sure we don't double-encode\n if not jpegdata.startswith(JPEG64):\n jpegdata = encodebytes(jpegdata)\n encoded['image/jpeg'] = jpegdata.decode('ascii')\n\n return encoded\n\n\ndef json_clean(obj):\n \"\"\"Clean an object to ensure it's safe to encode in JSON.\n\n Atomic, immutable objects are returned unmodified. Sets and tuples are\n converted to lists, lists are copied and dicts are also copied.\n\n Note: dicts whose keys could cause collisions upon encoding (such as a dict\n with both the number 1 and the string '1' as keys) will cause a ValueError\n to be raised.\n\n Parameters\n ----------\n obj : any python object\n\n Returns\n -------\n out : object\n\n A version of the input which will not cause an encoding error when\n encoded as JSON. Note that this function does not *encode* its inputs,\n it simply sanitizes it so that there will be no encoding errors later.\n\n Examples\n --------\n >>> json_clean(4)\n 4\n >>> json_clean(list(range(10)))\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n >>> sorted(json_clean(dict(x=1, y=2)).items())\n [('x', 1), ('y', 2)]\n >>> sorted(json_clean(dict(x=1, y=2, z=[1,2,3])).items())\n [('x', 1), ('y', 2), ('z', [1, 2, 3])]\n >>> json_clean(True)\n True\n \"\"\"\n # types that are 'atomic' and ok in json as-is.\n atomic_ok = (unicode_type, type(None))\n\n # containers that we need to convert into lists\n container_to_list = (tuple, set, types.GeneratorType)\n\n if isinstance(obj, float):\n # cast out-of-range floats to their reprs\n if math.isnan(obj) or math.isinf(obj):\n return repr(obj)\n return float(obj)\n \n if isinstance(obj, int):\n # cast int to int, in case subclasses override __str__ (e.g. 
boost enum, #4598)\n if isinstance(obj, bool):\n # bools are ints, but we don't want to cast them to 0,1\n return obj\n return int(obj)\n\n if isinstance(obj, atomic_ok):\n return obj\n\n if isinstance(obj, bytes):\n return obj.decode(DEFAULT_ENCODING, 'replace')\n\n if isinstance(obj, container_to_list) or (\n hasattr(obj, '__iter__') and hasattr(obj, next_attr_name)):\n obj = list(obj)\n\n if isinstance(obj, list):\n return [json_clean(x) for x in obj]\n\n if isinstance(obj, dict):\n # First, validate that the dict won't lose data in conversion due to\n # key collisions after stringification. This can happen with keys like\n # True and 'true' or 1 and '1', which collide in JSON.\n nkeys = len(obj)\n nkeys_collapsed = len(set(map(str, obj)))\n if nkeys != nkeys_collapsed:\n raise ValueError('dict can not be safely converted to JSON: '\n 'key collision would lead to dropped values')\n # If all OK, proceed by making the new dict that will be json-safe\n out = {}\n for k,v in iteritems(obj):\n out[str(k)] = json_clean(v)\n return out\n\n # If we get here, we don't know how to handle the object, so we just get\n # its repr and return that. This will catch lambdas, open sockets, class\n # objects, and any other complicated contraption that json can't encode\n return repr(obj)\n", "path": "IPython/utils/jsonutil.py"}]}
| 3,685 | 314 |